5.0 - 10.0 years
15 - 30 Lacs
Chennai
Remote
Who We Are
For 20 years, we have been working with organizations large and small to help solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for organizations. Our clients range from the travel and leisure industry to publishing, retail, and banking. The common thread between our clients is their commitment to making data work, as seen through their investment in those efforts. In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based, and marketing technology suites. We have a deep understanding of these solutions, so we can help our clients make the most of their investment efficiently and run a data-driven business. Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!

Why Work at Softcrylic?
Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.

Work Timing: 12:30 pm to 9:30 pm (flexible)

How to approach the interview: all technical interview rounds will be conducted virtually. The final round will be a face-to-face interview with HR in Chennai, which includes a 15-minute in-person technical assessment/discussion. Make sure to prepare for both the virtual and in-person components.

Job Description:
- 5+ years of experience working as a Data Engineer
- Migrate existing datasets from BigQuery to Databricks using Python scripts (a sketch of such a migration follows below)
- Conduct thorough data validation and QA to ensure accuracy, completeness, parity, and consistency in reporting
- Monitor the stability and status of migrated data pipelines, applying fixes as needed
- Migrate data pipelines from Airflow to Airbyte/Dagster based on provided frameworks
- Develop Python scripts to facilitate data migration and pipeline transformation
- Perform rigorous testing on migrated data and pipelines to ensure quality and reliability

Required Skills:
- Strong experience in Python scripting
- Good experience with Databricks and BigQuery
- Familiarity with data pipeline tools such as Airflow, Airbyte, and Dagster
- Strong understanding of data quality principles and validation techniques
- Ability to work collaboratively with cross-functional teams

Contact: Dinesh M, dinesh.m@softcrylic.com, +91 89255 18191
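A minimal sketch of the kind of BigQuery-to-Databricks migration script this listing describes, assuming a Databricks notebook (where `spark` is provided by the runtime) with the spark-bigquery connector installed; the project, dataset, and table names are hypothetical placeholders, not part of the posting:

```python
# Sketch: copy a BigQuery table into a Databricks Delta table and verify
# row-count parity. Assumes the spark-bigquery connector is installed and
# GCP credentials are configured; all names below are hypothetical.

SOURCE_TABLE = "my-gcp-project.analytics.events"   # hypothetical BigQuery table
TARGET_TABLE = "lakehouse.analytics.events"        # hypothetical Databricks table

# Read straight from BigQuery via the connector.
src_df = (
    spark.read.format("bigquery")
    .option("table", SOURCE_TABLE)
    .load()
)

# Land the data as a Delta table on Databricks.
src_df.write.format("delta").mode("overwrite").saveAsTable(TARGET_TABLE)

# Basic QA: row-count parity between source and target.
src_count = src_df.count()
tgt_count = spark.table(TARGET_TABLE).count()
assert src_count == tgt_count, f"Row mismatch: {src_count} vs {tgt_count}"
print(f"Migrated {tgt_count} rows with count parity verified.")
```

In practice the validation step would go beyond counts (checksums, column-level parity), as the QA bullet above implies.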
Posted 2 months ago
4.0 - 7.0 years
10 - 20 Lacs
Hyderabad
Work from Office
We are seeking a skilled Data Engineer with extensive experience in the Cloudera Data Platform (CDP) to join our dynamic team. The ideal candidate will have over four years of experience designing, developing, and managing data pipelines, and will be proficient in big data technologies. This role requires a deep understanding of data engineering best practices and a passion for optimizing data flow and collection across a diverse range of sources.

Required Skills and Qualifications:
- Experience: 4+ years in data engineering, with a strong focus on big data technologies
- Cloudera Expertise: proficient in the Cloudera Data Platform (CDP) and its ecosystem, including Hadoop, Spark, HDFS, Hive, Impala, and other relevant tools
- Programming Languages: strong programming skills in Python, Scala, or Java
- ETL Tools: experience with ETL tools and processes
- Data Warehousing: knowledge of data warehousing concepts and experience with data modeling
- SQL: advanced SQL skills for querying and manipulating large datasets
- Linux/Unix: proficiency in Linux/Unix shell scripting
- Version Control: familiarity with version control systems like Git
- Problem-Solving: strong analytical and problem-solving skills
- Communication: excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders

Preferred Qualifications:
- Cloud Experience: experience with cloud platforms such as AWS, Azure, or Google Cloud
- Data Streaming: experience with real-time data streaming technologies like Kafka
- DevOps: familiarity with DevOps practices and tools such as Docker, Kubernetes, and CI/CD pipelines
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field

Main Skills: Hadoop, Spark, Hive, Impala, Scala, Python, Java, Linux

Roles and Responsibilities:
- Develop and maintain scalable data pipelines using Cloudera Data Platform (CDP) components (a sketch of a typical pipeline step follows below)
- Design and implement ETL processes to extract, transform, and load data from various sources into the data lake or data warehouse
- Optimize and troubleshoot data workflows for performance and efficiency
- Manage and administer Hadoop clusters within the Cloudera environment
- Monitor and ensure the health and performance of the Cloudera platform
- Implement data security best practices, including encryption, data masking, and user access control
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and provide the necessary support
- Collaborate with cross-functional teams to design and deploy big data solutions that meet business needs
- Participate in code reviews, provide feedback, and contribute to team knowledge sharing
- Create and maintain comprehensive documentation of data engineering processes, data architecture, and system configurations
- Provide support for production data pipelines, including troubleshooting and resolving issues as they arise
- Train and mentor junior data engineers, fostering a culture of continuous learning and improvement
- Stay up to date with the latest industry trends and technologies related to data engineering and big data
- Propose and implement improvements to existing data pipelines and architectures
- Explore and integrate new tools and technologies to enhance the capabilities of the data engineering team
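A hedged sketch of the sort of CDP pipeline step the responsibilities describe, assuming Spark with Hive support on a Cloudera-style cluster; the HDFS path and Hive table names are hypothetical:

```python
# Sketch: a small PySpark ETL step on a Cloudera-style cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_etl")
    .enableHiveSupport()          # lets us write managed Hive tables
    .getOrCreate()
)

# Extract: raw CSV landed on HDFS by an upstream process (path is hypothetical).
raw = (
    spark.read.option("header", "true")
    .csv("hdfs:///data/raw/orders/2024-01-01/")
)

# Transform: type the columns and drop obviously bad rows.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Load: append into a partitioned Hive table queryable from Hive/Impala.
(clean.write.mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```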
Posted 2 months ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka, and drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available. (A sketch of a simple Airflow pipeline follows below.)

Required Candidate Profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL, and CDC experience. Must know data lakes, warehousing, and orchestration tools.
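A minimal sketch of a daily batch pipeline using the Airflow 2.x TaskFlow API, the orchestration style this listing calls for; the extract/load logic is stubbed and all names are hypothetical:

```python
# Sketch: a minimal daily batch pipeline in recent Airflow 2.x (TaskFlow API).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_daily():
    @task
    def extract() -> list:
        # In a real pipeline this might pull from an API, Kafka, or S3.
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def load(rows: list) -> None:
        # In a real pipeline this might COPY into Redshift or write to S3.
        print(f"Loaded {len(rows)} rows")

    load(extract())


orders_daily()
```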
Posted 2 months ago
5.0 - 10.0 years
0 - 2 Lacs
Bengaluru
Work from Office
Role: PySpark Developer
Experience: 5+ years
Work Location: Bangalore (5 days, work from office)
Mode of Interview: face-to-face (Bangalore)
Date of Interview: 21st Jun & 22nd Jun (Saturday & Sunday)
Timings: 10:30 AM to 3 PM

Skills Required:
- PySpark: advanced proficiency, including working with RDDs, DataFrames, and optimization techniques (a sketch of one common optimization follows below)
- Cloudera Data Platform: strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase
- Data Warehousing: knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)
- Big Data Technologies: familiarity with Hadoop, Kafka, and other distributed computing tools
- Orchestration and Scheduling: experience with Apache Oozie, Airflow, or similar orchestration frameworks
- Scripting and Automation: strong scripting skills in Linux
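A sketch of one PySpark optimization technique the listing alludes to: broadcasting a small dimension table to avoid a shuffle join. Table names are hypothetical:

```python
# Sketch: broadcast join - ship the small table to every executor so the
# large side joins locally instead of shuffling both sides.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast_join_demo").getOrCreate()

facts = spark.table("analytics.sales_facts")   # large fact table (hypothetical)
dims = spark.table("analytics.store_dims")     # small lookup table (hypothetical)

enriched = facts.join(broadcast(dims), on="store_id", how="left")

enriched.explain()  # the plan should show a BroadcastHashJoin
```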
Posted 2 months ago
11.0 - 21.0 years
50 - 100 Lacs
Bengaluru
Hybrid
Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities: as a Principal Data Engineer, you will be responsible for:
- Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads
- Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions
- Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases (a streaming sketch follows below)
- Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments
- Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation)
- Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows
- Ensuring data security and privacy compliance
- Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams

What We're Looking For (Minimum Qualifications):
- 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform
- Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines
- Strong programming skills in Java for building scalable data applications
- Hands-on experience with ETL tools and orchestration systems
- Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning

What Will Make You Stand Out (Preferred Qualifications):
- Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows
- Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform
- Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA)
- Strong ability to translate business goals into technical architecture and mentor teams through delivery
- Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility
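A hedged sketch of the Kafka-plus-Spark integration pattern named in the responsibilities, written in PySpark for consistency with the rest of this page; the broker, topic, schema, and storage paths are hypothetical:

```python
# Sketch: consume security events from Kafka with Spark Structured
# Streaming and land them for analytics.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("security_events_stream").getOrCreate()

event_schema = StructType([
    StructField("user", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "security-events")             # hypothetical topic
    .load()
    # Kafka delivers bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://bucket/security-events/")                 # hypothetical
    .option("checkpointLocation", "s3a://bucket/checkpoints/events/")  # hypothetical
    .start()
)
query.awaitTermination()
```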
Posted 2 months ago
0.0 - 5.0 years
0 - 108 Lacs
Kolkata
Work from Office
We are an AI company, led by Adithiyaa Tulshan, with 15 years of experience in AI. We are looking for associates who are hungry to adapt to the change driven by AI and deliver value for clients. To apply, fill in this form: https://forms.gle/BUcqTK3gBHARPcxv5

Benefits:
- Flexi working
- Work from home
- Overtime allowance
- Annual bonus
- Sales incentives
- Performance bonus
- Joining bonus
- Retention bonus
- Referral bonus
- Career break/sabbatical
Posted 2 months ago
2.0 - 6.0 years
8 - 13 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Scientists for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, then this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards
- The role is contract based, with project timelines from 2 to 12 months, or freelancing
- Be a part of an elite community of professionals who can solve complex AI challenges
- Work location could be: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore

Responsibilities:
- Lead design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams
- Architect advanced machine learning models (deep learning, RL, ensemble) and apply statistical analysis for business insights
- Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights
- Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring
- Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models (a sketch of a simple significance test follows below)

Required Skills:
- Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling
- Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure)
- Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen

Nice to Have:
- Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink)
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices
- Contributions to open-source projects or published research

What are the next steps?
1. Register on the Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
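A minimal sketch of the A/B-testing validation step mentioned above, using a two-proportion z-test from statsmodels; the counts are made up for illustration:

```python
# Sketch: two-proportion z-test for an A/B experiment.
from statsmodels.stats.proportion import proportions_ztest

# Conversions and sample sizes for control (A) and treatment (B); hypothetical.
conversions = [412, 468]
samples = [5000, 5000]

z_stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data.")
```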
Posted 2 months ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Conduct technical analyses of existing data pipelines, ETL processes, and on-premises/cloud systems; identify technical bottlenecks, evaluate migration complexities, and propose optimizations.

Desired Skills and Experience:
- B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- 7+ years of experience in data and cloud architecture, working with client stakeholders
- Strong experience in Synapse Analytics, Databricks, ADF, Azure SQL (DW/DB), and SSIS
- Strong experience in advanced PowerShell, batch scripting, and C# (.NET 3.0)
- Expertise in orchestration systems, including ActiveBatch and Azure orchestration tools
- Strong understanding of data warehousing, data lakes, and Lakehouse concepts
- Excellent communication skills, both written and verbal
- Extremely strong organizational and analytical skills with strong attention to detail
- Strong track record of excellent results delivered to internal and external clients
- Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts
- Experience delivering projects within an agile environment
- Experience in project management and team management

Key responsibilities include:
- Understand and review PowerShell (PS), SSIS, batch script, and C# (.NET 3.0) codebases for data processes
- Assess the complexity of trigger migration across ActiveBatch (AB), Synapse, ADF, and Azure Databricks (ADB); a trigger-inventory sketch follows below
- Define usage of Azure SQL DW, SQL DB, and Data Lake (DL) for various workloads, proposing transitions where beneficial
- Analyze data patterns for optimization, including direct raw-to-consumption loading and zone elimination (e.g., stg/app zones)
- Understand requirements for external tables (Lakehouse)
- Lead project deliverables, ensuring actionable and strategic outputs
- Evaluate and ensure quality of deliverables within project timelines
- Develop a strong understanding of the equity market domain
- Collaborate with domain experts and business stakeholders to understand business rules/logic
- Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders
- Independently troubleshoot difficult and complex issues on dev, test, UAT, and production environments
- Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries
- Demonstrate high attention to detail and work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT)
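A hedged sketch of how one might inventory ADF triggers when sizing the trigger-migration work described above, using the azure-mgmt-datafactory SDK; the subscription, resource group, and factory names are placeholders:

```python
# Sketch: list every Azure Data Factory trigger and its type/state to
# size a migration assessment per trigger kind.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-data-platform"                        # placeholder
FACTORY_NAME = "adf-analytics"                             # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Trigger types include ScheduleTrigger, TumblingWindowTrigger, etc.
for trigger in client.triggers.list_by_factory(RESOURCE_GROUP, FACTORY_NAME):
    props = trigger.properties
    print(f"{trigger.name}: type={props.type}, state={props.runtime_state}")
```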
Posted 2 months ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As a member of the Data and Technology practice, you will be working on advanced AI/ML engagements tailored for the investment banking sector. This includes developing and maintaining data pipelines, ensuring data quality, and enabling data-driven insights. Your core responsibility will be to build and manage scalable data infrastructure that supports our proof-of-concept initiatives (POCs) and full-scale solutions for our clients. You will work closely with data scientists, DevOps engineers, and clients to understand their data requirements, translate them into technical tasks, and develop robust data solutions.

Your primary duties will encompass:
- Develop, optimize, and maintain scalable and reliable data pipelines using tools such as Python, SQL, and Spark
- Integrate data from various sources including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks
- Implement data quality checks and ensure the accuracy and consistency of data (a sketch of simple checks follows below)
- Manage and optimize data storage solutions, ensuring high performance and availability
- Work closely with data scientists and DevOps engineers to ensure seamless integration of data pipelines and support machine learning model deployment
- Monitor and optimize the performance of data workflows to handle large volumes of data efficiently
- Create detailed documentation of data processes
- Implement security best practices and ensure compliance with industry standards

Experience / Skills:
5+ years of relevant experience, including:
- Experience in a data engineering role, preferably within the financial services industry
- Strong experience with data pipeline tools and frameworks such as Python, SQL, and Spark
- Proficiency in cloud platforms, particularly Azure, Snowflake, and Databricks
- Experience with data integration from various sources including APIs and databases
- Strong understanding of data warehousing concepts and practices
- Excellent problem-solving skills and attention to detail
- Strong communication skills, both written and oral, with business and technical aptitude

Additionally, desired skills:
- Familiarity with big data technologies and frameworks
- Experience with financial datasets and an understanding of investment banking metrics
- Knowledge of visualization tools (e.g., Power BI)

Education:
Bachelor's or Master's in Science or Engineering disciplines such as Computer Science, Engineering, Mathematics, Physics, etc.
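A minimal sketch of the kind of lightweight data quality checks the duties mention, using pandas; the column names and thresholds are hypothetical:

```python
# Sketch: small, composable data quality checks for a pipeline step.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a small report of common data quality metrics."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

trades = pd.DataFrame({
    "trade_id": [1, 2, 2],
    "amount": [100.0, -5.0, None],
})

report = run_quality_checks(trades)
print(report)

# A pipeline might fail fast when a check breaches its threshold.
assert report["duplicate_rows"] == 0, "Duplicate trades found"
```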
Posted 2 months ago
4.0 - 6.0 years
4 - 9 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements, with a foundation in building modern, responsive web applications using Blazor and MudBlazor. In this role, you will be instrumental in shaping the user experience of our applications, working closely with cross-functional teams to deliver high-quality, scalable, and maintainable UI components.

Desired Skills and Experience
Essential skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 4-6 years of experience in data engineering, with a strong background in building and maintaining data pipelines and ETL processes
- Proven experience with Blazor and MudBlazor, or a strong willingness to learn
- Solid understanding of modern JavaScript frameworks (e.g., React, Angular, Vue)
- Strong grasp of HTML, CSS, and responsive design principles
- Experience working in collaborative, agile development environments
- Familiarity with accessibility standards and frontend performance optimization
- Experience with Razor components and .NET backend integration
- Exposure to unit testing and automated UI testing tools

Key Responsibilities:
- Build and maintain responsive, reusable UI components in Blazor and MudBlazor
- Translate UI/UX mockups and business requirements into functional frontend features
- Work closely with backend engineers to ensure smooth API integrations
- Participate in code reviews and collaborate on frontend design patterns and best practices
- Investigate and resolve UI bugs and performance issues
- Contribute to maintaining consistency, accessibility, and scalability of the frontend codebase
- Collaborate with QA and DevOps to support testing and deployment pipelines
- Stay current with frontend trends and technologies, particularly in the .NET and Blazor ecosystems

Our current stack includes C#, .NET 5+, Blazor, and the MudBlazor component library. We welcome developers with strong experience in modern JavaScript frameworks (such as React, Angular, or Vue) and a willingness to quickly learn Blazor and Razor components.

Key Metrics:
- C#, .NET 5+, Blazor
- UI Library: MudBlazor
- Git, CI/CD, Agile/Scrum

Behavioral Competencies:
- Good communication (verbal and written)
- Experience in managing client stakeholders
Posted 2 months ago
2.0 - 4.0 years
2 - 6 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements. You will partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures.

Desired Skills and Experience
Essential skills:
- B.Tech/M.Tech/MCA with 2-4 years of overall experience
- Skilled in Python and SQL
- Experience with data modeling, data warehousing, and building data pipelines
- Experience working with FTP, API, S3, and other distribution channels to source data (a sourcing sketch follows below)
- Experience working with financial and/or alternative data products
- Experience working with cloud-native tools for data processing and distribution
- Experience with Snowflake and Airflow

Key Responsibilities:
- Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets
- Collaborate with the core engineering team to create central capabilities to process, manage, and distribute data assets at scale
- Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of financial datasets
- Engage with technical and non-technical clients as an SME on data asset offerings

Key Metrics:
- Python, SQL
- Snowflake
- Data engineering and pipelines

Behavioral Competencies:
- Good communication (verbal and written)
- Experience in managing client stakeholders
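A hedged sketch of the API-to-S3 sourcing pattern listed above, staging a vendor payload for downstream loading (e.g., into Snowflake); the endpoint, bucket, and key are hypothetical:

```python
# Sketch: pull a vendor dataset from an HTTP API and stage it in S3.
import json

import boto3
import requests

API_URL = "https://vendor.example.com/v1/prices"  # placeholder endpoint
BUCKET = "raw-vendor-data"                        # placeholder bucket

resp = requests.get(API_URL, params={"date": "2024-01-01"}, timeout=30)
resp.raise_for_status()
payload = resp.json()

s3 = boto3.client("s3")
s3.put_object(
    Bucket=BUCKET,
    Key="prices/2024-01-01.json",
    Body=json.dumps(payload).encode("utf-8"),
)
print(f"Staged {len(payload)} records to s3://{BUCKET}/prices/2024-01-01.json")
```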
Posted 2 months ago
6.0 - 8.0 years
12 - 17 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements. You will partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures.

Desired Skills and Experience
Essential skills:
- A bachelor's degree in computer science, engineering, mathematics, or statistics
- 6-8 years of experience in a data engineering role, with a proven track record of delivering insightful, value-add dashboards
- Experience writing advanced SQL queries and Python, and a deep understanding of relational databases
- Experience working within an Azure environment
- Experience with Tableau; Holland Mountain ATLAS is a plus
- Experience with master data management and data governance is a plus
- Ability to prioritize multiple projects simultaneously, problem solve, and think outside the box

Key Responsibilities:
- Develop, test, and release data packages for Tableau dashboards to support all business functions, including investments, investor relations, marketing, and operations
- Support ad hoc requests, including the ability to write queries and extract data from a data warehouse
- Assist with the management and maintenance of an Azure environment
- Maintain a data dictionary, including documentation of database structures, ETL processes, and reporting dependencies

Key Metrics:
- Python, SQL
- Data engineering, Azure, and ATLAS

Behavioral Competencies:
- Good communication (verbal and written)
- Experience in managing client stakeholders
Posted 2 months ago
9.0 - 14.0 years
35 - 55 Lacs
Noida
Hybrid
Looking for a better opportunity? Join us and make things happen with DMI, an Encora company!

Encora is seeking a full-time Lead Data Engineer with logistics domain expertise to support our large-scale manufacturing client in digital transformation. The Lead Data Engineer is responsible for the day-to-day leadership and guidance of the local, India-based data team. This role will be the primary interface with the management team of the client and will work cross-functionally with various IT functions to streamline project delivery.

Minimum Requirements:
- 8+ years of overall experience in IT
- 5+ years of current experience on Azure Cloud as a Data Engineer
- 3+ years of current hands-on experience with Databricks / Azure Databricks
- Proficient in Python/PySpark
- Proficient in SQL/T-SQL
- Proficient in data warehousing concepts (ETL/ELT, Data Vault modelling, dimensional modelling, SCD, CDC); a CDC-style upsert sketch follows below

Primary Skills: Azure Cloud, Databricks, Azure Data Factory, Azure Synapse Analytics, SQL/T-SQL, PySpark, Python, plus logistics domain expertise

Work Location: Noida, India (candidates open to relocating immediately can also apply)

Interested candidates can apply at nidhi.dubey@encora.com with their updated resume, specifying:
1. Total experience
2. Relevant experience in Azure Cloud
3. Relevant experience in Azure Databricks
4. Relevant experience in Azure Synapse
5. Relevant experience in SQL/T-SQL
6. Relevant experience in PySpark
7. Relevant experience in Python
8. Relevant experience in the logistics domain
9. Relevant experience in data warehousing
10. Current CTC
11. Expected CTC
12. Official notice period (if serving, please specify your last working day)
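A hedged sketch of the CDC-style upsert named in the requirements, using the delta-spark MERGE API on Databricks; full SCD Type 2 handling would need extra steps (expiring old versions), and the table names are hypothetical:

```python
# Sketch: apply a CDC change feed to a Delta table with MERGE.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Incoming change records from an upstream CDC feed (hypothetical table).
updates = spark.table("staging.customer_changes")

target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # apply changed attributes
    .whenNotMatchedInsertAll()   # insert brand-new customers
    .execute()
)
```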
Posted 2 months ago
13.0 - 16.0 years
15 - 22 Lacs
Hyderabad, Bangalore Rural, Chennai
Work from Office
Company Name: leading general insurance company (Chennai)
Industry: General Insurance
Role: Data Platform and Ingestion Manager
Contact: mail at manjeet.kaur@mounttalent.com, WhatsApp at 8384077438
Years of Experience: 13-18 years

Purpose:
This role will own the data ingestion and storage processes and functions on the data platform, ensuring timely delivery of data to the organization for modeling, reporting, and analysis. Finally, this role will be required to work closely with the Data Engineering Head to support the establishment of a new DataOps model, working towards a safe, well-controlled platform that will enable further progression of Chola's Data Strategy roadmap.

Key Responsibilities (expected to develop, and including but not restricted to):
- Data Ingestion: create and maintain all data ingestion pipelines from selected source systems to the data platform
- Team Leadership: lead a data engineering support and minor enhancements team through both day-to-day data management (BAU) activities and change projects, defining and orchestrating tasks and deliverables in support of the Project Manager and acting as Scrum Master as appropriate
- Controls & Standards: work in collaboration with Data Engineers and Architects and function as a senior contributor on the design, build, and management of the data platform; take direct ownership of controls and standards to ensure that all new data requirements are met using the most appropriate controls and engineering practices
- Business Stakeholder Engagement: set up the necessary governance forums with program stakeholders (both business and IT) to define and execute against a platform technology roadmap, business change, and data product build plan
- Team Management: take ownership of the change/minor enhancements and support team and play a leading role in team development, while holding the team accountable for its commitments, removing roadblocks to its work, using organizational resources to improve capacity for project work, mentoring and developing team members, and leading recruitment activities when required

Technical and Qualitative Requirements:
- Experience in AWS-based data engineering, with proven experience in orchestrating and governing data pipelines, integration, data modelling, and release management
- Experience of data product development (MI/BI) and continuous improvement, with a proven record of successfully implementing data projects
- Solid first-hand experience with AWS data systems and tools
- Substantial experience of leading DevOps sprint teams using Scrum; translating business requirements into deliverable user stories and defined sprints, with experience acting as a Scrum Master: leading daily stand-ups, coordinating development activity, and monitoring progress
- Solid understanding of software development life cycle models, introducing continuous improvement development activities, and managing the balance of new product and CI sprints
- Balanced business background; a background in the insurance sector is preferred
- Strong people skills, including mentoring, coaching, collaborating, and team building
- Strong analytical, planning, and organizational skills, with an ability to manage competing demands
- Excellent oral and written communication skills and experience interacting with both business and IT individuals at all levels, including the executive level
Posted 2 months ago
6.0 - 11.0 years
25 - 37 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Azure Expertise: proven experience with Azure Cloud services, especially Azure Data Factory, Azure SQL Database, and Azure Databricks. Expert in PySpark data processing and analytics, with a strong background in building and optimizing data pipelines and workflows. (A tuning sketch follows below.)

Required Candidate Profile: solid experience with data modeling, ETL processes, and data warehousing. Performance tuning: ability to optimize data pipelines and jobs to ensure scalability and performance, including troubleshooting and resolving performance issues.
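A minimal sketch of two routine PySpark tuning levers relevant to the performance-tuning requirement: adaptive query execution and right-sizing partitions before an expensive stage. The values are illustrative only:

```python
# Sketch: enable AQE and repartition before a wide aggregation/write.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

# Let Spark coalesce shuffle partitions and pick join strategies at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

df = spark.range(0, 10_000_000)

# Repartition to a sensible count instead of inheriting whatever the
# upstream stage produced.
result = df.repartition(64).groupBy((df.id % 100).alias("bucket")).count()
result.write.mode("overwrite").parquet("/tmp/tuning_demo_output")
```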
Posted 2 months ago
7.0 - 12.0 years
20 - 35 Lacs
Pune
Hybrid
Job Duties and Responsibilities:
We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to:
- Design and implement data engineering solutions that are scalable, reliable, and secure in a cloud environment
- Understand and translate business needs into data engineering solutions
- Build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams
- Partner with cross-functional stakeholders, including product managers, architects, data quality engineers, and application and quantitative science end users, to deliver engineering solutions
- Contribute to defining data governance across the data platform

Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline
- 3+ years of work experience building scalable and robust data engineering solutions
- Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms
- 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques
- 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing (a DLT sketch follows below)
- Experience with Delta Lake and Unity Catalog
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries
- 3+ years of experience building scalable ETL/ELT data pipelines on Databricks and AWS (EMR)
- 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA
- Understanding and experience of AWS services including ADX, EC2, and S3
- 3+ years of experience with data modeling techniques for structured/unstructured datasets
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum)
- Passion for healthcare and improving patient outcomes
- Analytical thinking with strong problem-solving skills
- Stays on top of emerging technologies and possesses a willingness to learn

Bonus Experience (optional):
- Experience with an Agile environment
- Experience operating in a CI/CD environment
- Experience building HTTP/REST APIs using popular frameworks
- Healthcare experience
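A hedged sketch of a Delta Live Tables definition with a data quality expectation, illustrating the incremental-processing requirement; this only runs inside a Databricks DLT pipeline (where `spark` is provided), and the source path and table names are hypothetical:

```python
# Sketch: DLT tables with incremental ingestion and a quality gate.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw claims ingested incrementally from cloud storage.")
def raw_claims():
    # Auto Loader picks up new files incrementally.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://bucket/claims/raw/")   # hypothetical path
    )


@dlt.table(comment="Claims with basic typing and quality gates applied.")
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")
def clean_claims():
    return (
        dlt.read_stream("raw_claims")
        .withColumn("amount", F.col("amount").cast("double"))
    )
```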
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Engineering Manager to lead a team building data pipelines, models, and analytics infrastructure. The role is ideal for experienced engineers who can manage both technical delivery and team growth.

Key Responsibilities:
- Lead development of ETL/ELT pipelines and data platforms
- Manage data engineers and collaborate with analytics/data science teams
- Architect systems for data ingestion, quality, and warehousing
- Define best practices for data architecture, testing, and monitoring

Required Skills & Qualifications:
- Strong experience with big data tools (Spark, Kafka, Airflow)
- Proficiency in SQL, Python, and cloud data services (e.g., Redshift, BigQuery)
- Proven leadership and team management in data engineering contexts
- Bonus: experience with real-time streaming and ML pipeline integration

Note: if interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
Posted 2 months ago
6.0 - 9.0 years
10 - 24 Lacs
Gurugram
Work from Office
Responsibilities:
- Design, develop, and maintain data pipelines using Snowflake and AWS/GCP (a loading sketch follows below)
- Collaborate with cross-functional teams on ETL processes and data modeling
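A hedged sketch of loading staged files into Snowflake with the official Python connector; the account, credentials, stage, and table names are placeholders:

```python
# Sketch: run a COPY INTO from a named external stage via the Snowflake
# Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder
    user="ETL_USER",             # placeholder
    password="***",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files from a named stage (e.g., backed by S3/GCS).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @VENDOR_STAGE/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```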
Posted 2 months ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets.

Responsibilities:
- Develop and optimize large-scale ETL pipelines
- Design schema-aware data flows and dashboard-ready datasets
- Manage data pipelines on AWS (S3, Glue, Redshift); a Glue job sketch follows below
- Work with transactional and retail data for real-time insights
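A hedged sketch of an AWS Glue PySpark job skeleton matching the S3/Glue/Redshift stack above; the database, table, and bucket names are hypothetical, and the awsglue libraries are supplied by the Glue runtime:

```python
# Sketch: Glue job that reads a catalog table and writes curated Parquet to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (populated by a crawler, for example).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="retail", table_name="raw_orders"
)

# Write curated Parquet for Redshift Spectrum / downstream analytics.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```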
Posted 2 months ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead ML Engineer, you'll lead the development of predictive models for demand forecasting, customer segmentation, and retail optimization, from feature engineering through deployment.

Responsibilities:
- Build and deploy models for forecasting and optimization
- Perform time-series analysis, classification, and regression
- Monitor model performance and integrate feedback loops
- Use AWS SageMaker, MLflow, and explainability tools (e.g., SHAP or LIME); a SHAP sketch follows below
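A minimal sketch of the explainability step mentioned above: fitting a gradient-boosted regressor and computing SHAP values. The data is synthetic and the feature meanings are hypothetical:

```python
# Sketch: explain a tree-ensemble forecaster with SHAP.
import numpy as np
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                          # e.g., price, promo, season
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)   # synthetic demand signal

model = XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer gives fast exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature.
print(np.abs(shap_values).mean(axis=0))
```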
Posted 2 months ago
3.0 - 6.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
Experience level: 10 years
- Analyzing raw data
- Developing and maintaining datasets
- Improving data quality and efficiency
- Interpreting trends and patterns
- Conducting complex data analysis and reporting on results
- Preparing data for prescriptive and predictive modeling
- Building algorithms and prototypes
- Combining raw information from different sources
- Exploring ways to enhance data quality and reliability
- Identifying opportunities for data acquisition
- Developing analytical tools and programs
- Collaborating with data scientists and architects on several projects

Technical Skills:
- Implementing data governance with monitoring, alerting, and reporting
- Technical writing capability: documenting standards, templates, and procedures
- Databricks: knowledge of patterns for scaling ETL pipelines effectively
- Orchestrating data analytics workloads: Databricks jobs and workflows
- Integrating Azure DevOps CI/CD practices with data pipeline development
- ETL modernization and data modelling
- Strong exposure to Azure data services, Synapse, data orchestration, and visualization
- Data warehousing and data lakehouse architectures
- Data streaming and real-time analytics
- Python: PySpark library, Pandas
- Azure Data Factory: data orchestration
- Azure SQL: scripting, querying, stored procedures
Posted 2 months ago
7.0 - 12.0 years
20 - 35 Lacs
Noida, Chennai
Hybrid
Responsibilities:
- Deployment, configuration, and maintenance of Databricks clusters and workspaces (a cluster-inventory sketch follows below)
- Security and access control
- Automate administrative tasks using tools like Python, PowerShell, and Terraform
- Integrations with Azure Data Lake and Key Vault, and implementation of CI/CD pipelines

Required Candidate Profile:
- Azure, AWS, or GCP; Azure experience is preferred
- Strong skills in Python, PySpark, PowerShell, and SQL
- Experience with Terraform
- ETL processes, data pipelines, and big data technologies
- Security and compliance
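A minimal sketch of automating one routine Databricks admin task, listing workspace clusters via the REST Clusters API 2.0; the host and token are placeholders and should come from a secret store in practice:

```python
# Sketch: list Databricks clusters and their states via the REST API.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "dapi-..."  # placeholder personal access token

resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    print(f"{cluster['cluster_name']}: state={cluster['state']}")
```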
Posted 2 months ago
3.0 - 5.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position Summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Responsibilities:
- Work with cloud-based data solutions (Azure, AWS, GCP)
- Implement data modeling and warehousing solutions
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes
- Design and optimize data storage solutions, including data warehouses and data lakes
- Ensure data quality and integrity through data validation, cleansing, and error handling
- Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence)
- Implement data security measures and access controls to protect sensitive information
- Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing
- Develop and maintain Power BI dashboards and reports
- Work with DAX and Power Query to manipulate and transform data

Basic Qualifications:
- Bachelor's or Master's degree in computer science or data science
- 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms
- Proficient in SQL, Python, or Scala for data manipulation and processing
- Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, and Microsoft Fabric
- Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub)
- Experience with BI and analytics tools (Tableau, Power BI, Looker)
- Familiarity with data observability tools (Monte Carlo, Great Expectations)
- Contributions to open-source data engineering projects
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune
Hybrid
Data expert with 5+ years of experience in Kafka and Cosmos DB. Strong skills in designing data pipelines, real-time data processing, optimizing Cosmos DB, and working with various data formats, Kafka-based microservices, event-driven architectures, and Azure services. (A Kafka-to-Cosmos DB sketch follows below.)
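A hedged sketch of the event-driven pattern this listing names: a Kafka consumer upserting events into Cosmos DB, using kafka-python and azure-cosmos. The broker, topic, endpoint, and credential are placeholders:

```python
# Sketch: consume Kafka events and upsert them into a Cosmos DB container.
# Runs forever, as consumers typically do; all connection values are placeholders.
import json

from azure.cosmos import CosmosClient
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                    # placeholder topic
    bootstrap_servers="broker:9092",             # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

cosmos = CosmosClient(
    "https://myaccount.documents.azure.com:443/",  # placeholder endpoint
    credential="***",                              # placeholder key
)
container = cosmos.get_database_client("retail").get_container_client("orders")

for message in consumer:
    event = message.value
    # Cosmos DB items need an "id"; derive it from the business key.
    event["id"] = str(event["order_id"])
    container.upsert_item(event)
```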
Posted 2 months ago
13.0 - 18.0 years
44 - 48 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
About KPI Partners:
KPI Partners is a leading provider of data-driven insights and innovative analytics solutions. We strive to empower organizations to harness the full potential of their data, driving informed decision-making and business success. We are seeking an enthusiastic and experienced professional to join our dynamic team as an Associate Director / Director in Data Engineering & Modeling.

We are looking for a highly skilled and motivated Associate Director / Director – Data Engineering & Solution Architecture to support the strategic delivery of modern data platforms and enterprise analytics solutions. This is a hands-on leadership role that bridges technology and business, helping design, develop, and operationalize scalable cloud-based data ecosystems. You will work closely with client stakeholders, internal delivery teams, and practice leadership to drive the architecture, implementation, and best practices across key initiatives.

Key Responsibilities:
- Solution Design & Architecture: collaborate on designing robust, secure, and cost-efficient data architectures using cloud-native platforms such as Databricks, Snowflake, Azure Data Services, AWS, and Incorta
- Data Engineering Leadership: oversee the development of scalable ETL/ELT pipelines using ADF, Airflow, dbt, PySpark, and SQL, with an emphasis on automation, error handling, and auditing
- Data Modeling & Integration: design data models (star, snowflake, canonical), resolve dimensional hierarchies, and implement efficient join strategies
- API-based Data Sourcing: work with REST APIs for data acquisition, managing pagination, throttling, authentication, and schema evolution (a paginated-ingestion sketch follows below)
- Platform Delivery: support the end-to-end project lifecycle, from requirement analysis and PoCs to development, deployment, and handover
- CI/CD & DevOps Enablement: implement and manage CI/CD workflows using Git, Azure DevOps, and related tools to enforce quality and streamline deployments
- Mentoring & Team Leadership: mentor senior engineers and developers, conduct code reviews, and promote best practices across engagements
- Client Engagement: interact with clients to understand needs, propose solutions, resolve delivery issues, and maintain high satisfaction levels

Required Skills & Qualifications:
- 14+ years of experience in Data Engineering, BI, or Solution Architecture roles
- Strong hands-on expertise in at least one cloud stack, such as Azure, Databricks, Snowflake, or AWS (EMR)
- Proficiency in Python, SQL, and PySpark for large-scale data transformation
- Proven skills in developing dynamic and reusable data pipelines (metadata-driven preferred)
- Strong grasp of data modeling principles and modern warehouse design
- Experience with API integrations, including error handling and schema versioning
- Ability to design modular and scalable solutions aligned with business goals
- Solid communication and stakeholder management skills

Preferred Qualifications:
- Exposure to data governance, data quality frameworks, and security best practices
- Certifications in Azure Data Engineering, Databricks, or Snowflake are a plus
- Experience working with Incorta and building materialized views or delta-based architectures
- Experience working with enterprise ERP systems
- Exposure to leading data ingestion from Oracle Fusion ERP and other enterprise systems
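A minimal sketch of the API-based sourcing concern called out above: paginated acquisition with simple throttling handling. The endpoint, auth header, and page parameters are hypothetical:

```python
# Sketch: fetch all pages from a REST endpoint, backing off on HTTP 429.
import time

import requests

BASE_URL = "https://api.example.com/v1/records"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder auth

def fetch_all(page_size: int = 100) -> list:
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        if resp.status_code == 429:               # throttled: back off and retry
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()
        batch = resp.json()
        if not batch:                              # empty page signals the end
            return records
        records.extend(batch)
        page += 1

print(f"Fetched {len(fetch_all())} records")
```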
What We Offer:
- Opportunity to work on cutting-edge data transformation projects for global enterprises
- Mentorship from senior leaders and a clear path to Director-level roles
- Flexible work environment and a culture that values innovation, ownership, and growth
- Competitive compensation and professional development support
Posted 2 months ago