
30 Data Factory Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

0 - 0 Lacs

Pune, Chennai

Hybrid

Naukri logo

Ciklum is looking for a Senior Microsoft Fabric Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
We are seeking a highly skilled and experienced Senior Microsoft Fabric Data Engineer to design, develop, and optimize advanced data solutions on the Microsoft Fabric platform. You will build robust, scalable data pipelines, integrate diverse and large-scale data sources, and enable sophisticated analytics and business intelligence capabilities. This role requires extensive hands-on expertise with Microsoft Fabric, a deep understanding of Azure data services, and mastery of modern data engineering practices.

Responsibilities:
- Lead the design and implementation of highly scalable and efficient data pipelines and data warehouses using Microsoft Fabric and a comprehensive suite of Azure services (Data Factory, Synapse Analytics, Azure SQL, Data Lake)
- Develop, optimize, and oversee complex ETL/ELT processes for data ingestion, transformation, and loading from disparate sources, ensuring high performance with large-scale datasets
- Ensure data integrity, quality, and governance throughout the Fabric environment, establishing best practices for data management
- Collaborate extensively with stakeholders, translating intricate business requirements into actionable, resilient, and optimized data solutions
- Proactively troubleshoot, monitor, and fine-tune data pipelines and workflows for peak performance and efficiency, particularly when handling massive datasets
- Architect and manage workspace architecture, implement robust user access controls, and enforce data security in strict compliance with privacy regulations
- Automate platform tasks and infrastructure management using scripting languages (Python, PowerShell) and Infrastructure as Code (Terraform, Ansible)
- Document technical solutions, enforce code modularity, and champion best practices in version control and documentation across the team
- Stay at the forefront of Microsoft Fabric updates and new features, and contribute to continuous improvement initiatives and the adoption of cutting-edge technologies

Requirements:
- 5+ years of progressive experience in data engineering, with at least 3 years of hands-on, in-depth work on Microsoft Fabric and a wide array of Azure data services
- Exceptional proficiency in SQL, Python, and advanced data transformation tools (e.g., Spark, PySpark notebooks)
- Mastery of data warehousing concepts, dimensional modeling, and advanced ETL best practices
- Extensive experience with complex hybrid cloud and on-premises data integration scenarios
- Profound understanding of data governance, security protocols, and compliance standards
- Excellent problem-solving, analytical, and communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical audiences

Desirable:
- Experience with Power BI, Azure Active Directory, and managing very large-scale data infrastructure
- Strong familiarity with Infrastructure as Code and advanced automation tools
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent extensive experience)

What's in it for you?
- Care: your mental and physical health is our priority. We provide comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Flexibility: hybrid work mode at Chennai or Pune
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
- Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
- Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together. Interested already? We would love to get to know you! Submit your application. Can't wait to see you at Ciklum.

Posted 3 days ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Pune, Gurugram

Hybrid


- Strong experience in Microsoft Azure technologies such as App Service, Microsoft Entra ID, SCIM, Service Bus, Function App, Data Factory, Cosmos DB, etc.
- Working understanding of basic identity and access management concepts and identity provider terminology (IdP, OIDC, OAuth, token flow, delegation/federation, etc.)
- Proficient in C# .NET and Angular web technologies (HTML, CSS, TypeScript, JavaScript, etc.)
- Experience building applications within a service-oriented or microservice architecture
- SQL Server database development and light design

Nice to haves:
- Experience with identity and access management development and operating/utilizing an identity provider
- Knowledge of Duende IdentityServer and Azure B2C
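As a rough illustration of the token-flow concepts this posting names (issuer, audience, expiry), here is a minimal claim-validation sketch in plain Python. The issuer and audience values are invented, and real OIDC validation against an IdP such as Microsoft Entra ID must also verify the token's signature with the provider's published keys, which is omitted here.

```python
import time

def validate_claims(claims, expected_issuer, expected_audience, now=None):
    """Minimal OIDC-style claim checks: issuer, audience, expiry.

    Signature verification against the IdP's keys is deliberately
    omitted; this only shows the standard registered-claim checks.
    """
    now = time.time() if now is None else now
    if claims.get("iss") != expected_issuer:
        return False, "issuer mismatch"
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        return False, "audience mismatch"
    if claims.get("exp", 0) <= now:
        return False, "token expired"
    return True, "ok"

# Hypothetical decoded token payload for illustration only.
claims = {"iss": "https://idp.example.com", "aud": "api://my-app", "exp": 2_000_000_000}
ok, reason = validate_claims(claims, "https://idp.example.com", "api://my-app", now=1_700_000_000)
```

In a delegation/federation setup the same checks run on every hop, with the audience changing as tokens are exchanged downstream.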

Posted 3 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Work from Office


Notice period: Immediate to 15 days
Profile source: Anywhere in India
Timings: 1:00 pm - 10:00 pm
Work mode: WFO (Mon-Fri)

Job Summary:
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage, including Slowly Changing Dimension (SCD) Type 2 handling
- Collaborate with data scientists, analysts, backend and product teams to define data requirements and deliver impactful data solutions
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools
- Build logical and physical data models using any data modeling tool
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging
- Ensure data systems adhere to privacy regulations and organizational policies
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of practical experience in a data engineering or comparable role
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java)
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt)
- Proficiency in cloud data platforms: AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse)
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and the contemporary data stack
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning, with data modeling tools (e.g., Erwin Data Modeler, MySQL Workbench)
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach
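The SCD Type 2 concept mentioned in this posting can be sketched in plain Python: each dimension row carries validity dates and a current flag, changed rows are expired rather than overwritten, and a new version is inserted. In practice this would typically be a MERGE statement in SQL or Spark; the keys and attribute names below are invented for illustration.

```python
def scd2_apply(dimension, incoming, load_date):
    """Apply SCD Type 2: expire changed rows, insert new versions."""
    by_key = {r["key"]: r for r in dimension if r["is_current"]}
    for key, attrs in incoming.items():
        current = by_key.get(key)
        if current is not None and current["attrs"] == attrs:
            continue  # no change: keep the existing current version
        if current is not None:
            current["valid_to"] = load_date   # expire the old version
            current["is_current"] = False
        dimension.append({"key": key, "attrs": attrs,
                          "valid_from": load_date, "valid_to": None,
                          "is_current": True})
    return dimension

dim = []
scd2_apply(dim, {"C1": {"city": "Chennai"}}, "2024-01-01")
scd2_apply(dim, {"C1": {"city": "Pune"}}, "2024-06-01")  # change -> new version
```

After the second load the dimension holds two versions of C1: the Chennai row closed out on 2024-06-01 and a current Pune row, preserving full history for point-in-time queries.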

Posted 3 days ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Chennai

Work from Office


Notice period: Immediate to 15 days
Profile source: Tamil Nadu
Timings: 1:00 pm - 10:00 pm (IST)
Work mode: WFO (Mon-Fri)

About the Role:
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage
- Collaborate with data scientists, analysts, and product teams to define data requirements and deliver impactful data solutions
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging
- Ensure data systems adhere to privacy regulations and organizational policies
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team

Required Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of practical experience in a data engineering or comparable role
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java)
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect)
- Proficiency in cloud data platforms: AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse)
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and the contemporary data stack
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach

Posted 3 days ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office


Type: Contract (36-Month Project)
Location: Remote (Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai)
Availability: Immediate joiners preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).
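To make the cataloging, classification, and lineage concepts in this role concrete, here is a toy in-memory sketch in plain Python. Microsoft Purview itself exposes these capabilities through its portal and APIs rather than through code like this; the asset names, owners, and classification labels below are invented for illustration.

```python
class Catalog:
    """Toy data catalog: registered assets, classifications, lineage edges."""

    def __init__(self):
        self.assets = {}   # qualified name -> metadata
        self.lineage = []  # (source, target) edges

    def register(self, name, owner, classifications=()):
        self.assets[name] = {"owner": owner,
                             "classifications": set(classifications)}

    def add_lineage(self, source, target):
        self.lineage.append((source, target))

    def upstream(self, name):
        """All assets that feed (directly or indirectly) into `name`."""
        found, frontier = set(), {name}
        while frontier:
            frontier = {s for s, t in self.lineage
                        if t in frontier and s not in found}
            found |= frontier
        return found

cat = Catalog()
cat.register("sql://crm/customers", "data-owner-a", ["PII"])
cat.register("adls://lake/raw/customers", "data-eng")
cat.register("synapse://dw/dim_customer", "data-eng")
cat.add_lineage("sql://crm/customers", "adls://lake/raw/customers")
cat.add_lineage("adls://lake/raw/customers", "synapse://dw/dim_customer")
```

Walking the lineage edges upstream from a warehouse table answers the governance question this role supports: "where did this data come from, and does anything in its path carry a sensitive classification?"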

Posted 4 days ago

Apply

2.0 - 4.0 years

4 - 7 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Type: Contract (36-Month Project)
Location: Remote (Delhi / NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad)
Availability: Immediate joiners preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).

Posted 5 days ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Experience: 1-3 Years Preferred
Availability: Immediate joiners preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).

Posted 6 days ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Job Title: Microsoft Purview Specialist (Junior Level)
Type: Contract (36-Month Project)
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Experience: 1-3 Years Preferred
Availability: Immediate joiners preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).

Posted 6 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office


Job Summary:
This position provides input and support for full systems life cycle management activities (e.g., analysis, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she performs tasks within planned durations and established deadlines, collaborates with teams to ensure effective communication and support the achievement of objectives, and provides knowledge, development, maintenance, and support for applications.

Responsibilities:
- Generates application documentation
- Contributes to systems analysis and design
- Designs and develops moderately complex applications
- Contributes to integration builds
- Contributes to maintenance and support
- Monitors emerging technologies and products

Technical Skills:
- Cloud platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
- Data processing: Databricks (PySpark, Spark SQL), Apache Spark
- Programming languages: Python, SQL
- Data engineering tools: Delta Lake, Azure Data Factory, Apache Airflow
- Other: Git, CI/CD

Professional Experience:
- Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights
- Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP)
- Develop SQL stored procedures for data integrity; ensure data accuracy and consistency across all layers
- Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability
- Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications
- Implement Azure Functions to trigger and manage data processing workflows
- Design and implement data pipelines to integrate various data sources and manage Databricks workflows for efficient data processing
- Conduct performance tuning and optimization of data processing workflows
- Provide technical support and troubleshooting for data processing issues
- Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings
- Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis
- Effective oral and written management communication skills

Qualifications:
- Minimum 5 years of relevant experience
- Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field
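The incremental-processing frameworks this posting describes usually boil down to a watermark pattern: remember the highest modification timestamp loaded so far, and on each run pick up only rows beyond it. A minimal plain-Python sketch follows (in the actual stack this would be a Data Factory trigger feeding a Delta Lake MERGE, and the row shapes here are invented):

```python
def incremental_load(source_rows, state):
    """Select rows modified after the stored watermark, then advance it."""
    watermark = state.get("watermark", "")
    batch = [r for r in source_rows if r["modified"] > watermark]
    if batch:
        state["watermark"] = max(r["modified"] for r in batch)
    return batch

state = {}
rows = [{"id": 1, "modified": "2024-01-01"},
        {"id": 2, "modified": "2024-01-03"}]
first = incremental_load(rows, state)    # initial run: everything qualifies
rows.append({"id": 3, "modified": "2024-01-05"})
second = incremental_load(rows, state)   # next run: only the new row
```

Persisting `state` between runs (e.g., in a control table) is what makes the pipeline restartable and keeps reprocessing costs bounded on large sources.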

Posted 6 days ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad, Secunderabad

Work from Office


- Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow
- Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery)
- Strong understanding of data modeling, ETL/ELT, and data lake/warehouse/data mart architectures
- Knowledge of Azure Data Factory or AWS Glue
- Experience in developing reports and dashboards using tools like Power BI, Tableau, or Looker

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Bengaluru

Work from Office


Position: Senior Azure Data Engineer (immediate joiners only)
Location: Bangalore
Mode of work: Work from Office
Experience: 7 years of relevant experience
Job type: Full-time (on roll)

Job Description - Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions
- Interest and passion in Big Data technologies, and appreciates the value an effective data management solution can bring
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives like CoE and CoP

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team
- Experience working with BPC, Planning
- Exposure to working with an external technical ecosystem
- MkDocs documentation

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years' experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location

Posted 1 week ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad

Work from Office


Overview:
The Data Analyst will partner closely with business and S&T teams to prepare final analysis reports for stakeholders, enabling them to make important decisions based on facts and trends, and will lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities. This role interacts with the DG, DPM, EA, DE, EDF, PO, and D&Ai teams on historical data requirements, sourcing data for the Mosaic AI program to scale the solution to new markets.

Responsibilities:
- Lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities
- Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs
- Perform analysis of business data requirements and translate it into a data design that satisfies local, sector, and global requirements
- Use automated tools to extract data from primary and secondary sources
- Use statistical tools to identify, analyse, and interpret patterns and trends in complex data sets to support diagnosis and prediction
- Work with engineers and business teams to identify process improvement opportunities and propose system modifications
- Proactively identify impediments and look for pragmatic and constructive solutions to mitigate risk
- Be a champion for continuous improvement and drive efficiency

Preference will be given to candidates with a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.

Qualifications:
- Bachelor of Technology from a reputed college
- Minimum 8-10 years of relevant work experience in data modelling/analytics
- Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata, or similar database technologies
- Expertise in Azure (Databricks, Data Factory, Data Lake Store Gen2)
- Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical
- Exposure to GenAI
- Good communication and presentation skills are a must for this role

Posted 3 weeks ago

Apply

1.0 - 5.0 years

7 - 10 Lacs

Kolkata

Work from Office


Job Title: SSIS Developer
Number of Positions: 5
Experience: 4-5 Years
Location: Remote (preferred: Ahmedabad, Gurgaon, Mumbai, Pune, Bangalore)
Shift Timing: Evening/Night (start time 6:30 PM IST onwards)

Job Summary:
We are seeking skilled SSIS Developers with 4-5 years of experience in developing and maintaining data integration solutions. The ideal candidate will have strong expertise in SSIS and SQL, a solid understanding of data warehousing concepts, and exposure to Azure data services. This role requires clear communication and the ability to work independently during evening or night hours.

Key Responsibilities:
- Design, develop, and maintain SSIS packages for ETL processes
- Write and optimize complex SQL queries and stored procedures
- Ensure data accuracy, integrity, and performance across DWH systems
- Collaborate with team members to gather and understand requirements
- Work with Azure-based data platforms and services as needed
- Troubleshoot and resolve data integration issues promptly
- Document technical specifications and maintain version control

Required Skills:
- Proficient in Microsoft SSIS (SQL Server Integration Services)
- Strong SQL skills, including performance tuning and debugging
- Good understanding of data warehousing concepts and ETL best practices
- Exposure to Azure (e.g., Data Factory, SQL Database, Blob Storage)
- Strong communication and collaboration skills
- Ability to work independently during US-aligned hours

Preferred Qualifications:
- Experience working in a remote, distributed team environment
- Familiarity with agile methodologies and tools like JIRA and Git
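SSIS packages are built visually in SQL Server Data Tools, so as a stand-in, this sketch runs the same extract-transform-load steps an SSIS data flow would perform, using Python's built-in sqlite3 module; the staging and warehouse table names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER, amount TEXT);
    CREATE TABLE dw_orders (id INTEGER PRIMARY KEY, amount REAL NOT NULL);
    INSERT INTO staging_orders VALUES (1, ' 10.50'), (2, '3.25 '), (3, NULL);
""")

# Extract raw staging rows; transform (trim + cast, drop rows failing
# the NOT NULL rule); load the clean rows into the warehouse table.
rows = conn.execute("SELECT id, amount FROM staging_orders").fetchall()
clean = [(i, float(a.strip())) for i, a in rows if a is not None]
conn.executemany("INSERT INTO dw_orders VALUES (?, ?)", clean)
conn.commit()

total = conn.execute("SELECT COUNT(*), SUM(amount) FROM dw_orders").fetchone()
```

A real SSIS package would add the pieces this sketch omits: error-row redirection to an audit table, logging, and checkpoint restartability.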

Posted 3 weeks ago

Apply

14.0 - 24.0 years

35 - 55 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid


About the role:
We are seeking a Sr. Practice Manager at Insight. You will be involved in different phases of the Software Development Lifecycle, including analysis, design, development, and deployment. We will count on you to be proficient in software design and development, data modelling, data processing, and data visualization. Along the way, you will get to:
- Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics
- Track the performance of our resources and related capabilities
- Mentor and manage other data engineers, ensuring data engineering best practices are being followed
- Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers

Be ambitious: this opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What we're looking for - Sr. Practice Manager with:
- Total of 14+ years of relevant experience, with at least 5-6 years in people management, managing a 20+ member team
- Minimum 12 years of experience in data technology
- Experience in data warehousing and excellent command of SQL, data modeling, and ETL development
- Hands-on experience in SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks)
- Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures (good to have)
- Experience using Power BI, MDX, DAX, MDS, DQS (good to have)
- Experience developing designs related to predictive analytics models
- Ability to handle performance improvement tasks and data archiving
- Proficient in provisioning the relevant Azure resources, forecasting hardware usage, and managing to a budget

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6-15 Yrs
Location: Pan India

Job Description:
- Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
- Should be very proficient in large-scale data operations using Databricks, and overall very comfortable using Python
- Familiarity with AWS compute, storage, and IAM concepts
- Experience working with S3 Data Lake as the storage tier
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
- Cloud warehouse experience (Snowflake, etc.) is a huge plus
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit

Skills:
- Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience in shell scripting
- Exceptionally strong analytical and problem-solving skills
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
- Strong experience with relational databases and data access methods, especially SQL
- Excellent collaboration and cross-functional leadership skills
- Excellent communication skills, both written and verbal
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
- Ability to leverage data assets to respond to complex questions that require timely answers
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform

Interested candidates can share your resume to sankarspstaffings@gmail.com with the below details inline:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6-15 Yrs
Location: Pan India

Job Description:
Candidate must be proficient in Databricks.
- Understands where to obtain the information needed to make appropriate decisions
- Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions
- Identifies the problem versus the symptoms; manages problems that require the involvement of others to solve; reaches sound decisions quickly
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit

Roles & Responsibilities:
- Provides innovative and cost-effective solutions using Databricks
- Optimizes the use of all available resources
- Learns and adapts quickly to new technologies as per the business need
- Develops a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability

Skills:
- 7-10 yrs of experience in Databricks Delta Lake
- Hands-on experience on Azure
- Experience in Python scripting
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
- Strong experience with relational databases and data access methods, especially SQL
- Knowledge of Azure architecture and design

Interested candidates can share your resume to sankarspstaffings@gmail.com with the below details inline:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Azure Data Migration - Con/AM - HYD - J48933. Roles & Responsibilities: Working with functional experts to understand data migration requirements and translate them into data engineering and data analytics functionality. Design and implement data integrations, pipelines, and algorithms to extract and transform data from various sources into a format ready to load into target systems. Design and develop data profiling tools to analyse and understand the structure, quality, and integrity of data prior to migration. Implement reconciliation reports to verify and validate data accuracy post-migration, identifying discrepancies and ensuring consistency between source and target systems. Assist in scoping, estimation, and task planning for assigned projects. Perform testing of ETL processes and data profiling tools, debugging issues and refining operations based on feedback and requirements. Document the ETL processes, data profiling methods, and reconciliation procedures to maintain clear and accessible records for future reference and compliance. Keep up to date with the latest tools, technologies, and best practices in data engineering to continuously improve the quality and efficiency of work. Mandatory skills: Demonstrated experience converting business requirements and use cases into technical solutions. Deep knowledge of how to design and build data pipelines in Data Factory / Azure Synapse. Strong skills in programming languages such as Python, SQL, or Java, which are commonly used for data manipulation and ETL processes. Hands-on experience working on complex data warehouse implementations using Azure SQL Data Warehouse, Azure Data Factory, and Azure SQL Database. Good communication skills to work effectively within a team and interact with clients or other stakeholders to gather requirements and present solutions. Comfortable in an Agile working environment and using Scrum project management. Strong analytical and problem-solving skills to troubleshoot issues during the migration process and optimize data workflows. High attention to detail to accurately implement ETL processes and generate precise reconciliation reports. Desired skills: Experience using ETL tools in the context of data migration projects. Experience building ETL solutions against COTS or SaaS-based applications, such as SAP, Oracle ERP, or Microsoft Dynamics. A proven ability to build resilient, tested data pipelines with data quality monitoring embedded (DataOps). Knowledge of security best practices for data protection and compliance. Azure data engineering certification (DP-203). Required candidate profile: Candidate experience should be 4 to 10 years. Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA.

Posted 1 month ago

Apply

8.0 - 10.0 years

13 - 15 Lacs

Pune

Work from Office

We are seeking a hands-on Lead Data Engineer to drive the design and delivery of scalable, secure data platforms on Google Cloud Platform (GCP). In this role you will own architectural decisions, guide service selection, and embed best practices across data engineering, security, and performance disciplines. You will partner with data modelers, analysts, security teams, and product owners to ensure our pipelines and datasets serve analytical, operational, and AI/ML workloads with reliability and cost efficiency. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP. Key Responsibilities: Lead end-to-end development of high-throughput, low-latency data pipelines and lakehouse solutions on GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Composer, Dataplex, etc.). Define reference architectures and technology standards for data ingestion, transformation, and storage. Drive service-selection trade-offs (cost, performance, scalability, and security) across streaming and batch workloads. Conduct design reviews and performance-tuning sessions; ensure adherence to partitioning, clustering, and query-optimization standards in BigQuery. Contribute to the long-term cloud data strategy, evaluating emerging GCP features and multi-cloud patterns (Azure Synapse, Data Factory, Purview, etc.) for future adoption. Lead code reviews and oversee development activities delegated to data engineers. Implement best practices recommended by Google Cloud. Provide effort estimates for data engineering activities. Participate in discussions to migrate existing Azure workloads to GCP, providing solutions to migrate the workloads for selected data pipelines. Must-Have Skills: 8-10 years in data engineering, with 3+ years leading teams or projects on GCP. Expertise in GCP data services (BigQuery, Dataflow/Apache Beam, Dataproc/Spark, Pub/Sub, Cloud Storage) and orchestration with Cloud Composer or Airflow. Proven track record designing and optimizing large-scale ETL/ELT pipelines (streaming + batch). Strong fluency in SQL and one major programming language (Python, Java, or Scala). Deep understanding of data lake / lakehouse architectures, dimensional and data-vault modeling, and data governance frameworks. Excellent communication and stakeholder-management skills; able to translate complex technical topics to non-technical audiences. Nice-to-Have Skills: Hands-on experience with Microsoft Azure data services (Azure Synapse Analytics, Data Factory, Event Hub, Purview). Experience integrating ML pipelines (Vertex AI, Dataproc ML) or real-time analytics (BigQuery BI Engine, Looker). Familiarity with open-source observability stacks (Prometheus, Grafana) and FinOps tooling for cloud cost optimization. Preferred Certifications: Google Professional Data Engineer (strongly preferred) or Google Professional Cloud Architect; Microsoft Certified: Azure Data Engineer Associate (nice to have). Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Pune

Work from Office

Job Summary: We are seeking an energetic Senior Data Engineer with hands-on expertise in Google Cloud Platform to build, maintain, and migrate data pipelines that power analytics and AI workloads. You will leverage GCP services (BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Storage) while collaborating with data modelers, analysts, and product teams to deliver highly reliable, well-governed datasets. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP. Key Responsibilities: Design, develop, and optimize batch and streaming pipelines on GCP using Dataflow / Apache Beam, BigQuery, Cloud Composer (Airflow), and Pub/Sub. Maintain and enhance existing data workflows: monitoring performance, refactoring code, and automating tests to ensure data quality and reliability. Migrate data assets and ETL/ELT workloads from Azure (Data Factory, Databricks, Synapse, Fabric) to the corresponding GCP services, ensuring functional parity and cost efficiency. Partner with data modelers to implement partitioning, clustering, and materialized-view strategies in BigQuery to meet SLAs for analytics and reporting. Conduct root-cause analysis for pipeline failures, implement guardrails for data quality, and document lineage. Must-Have Skills: 4-6 years of data engineering experience, including 2+ years building pipelines on GCP (BigQuery, Dataflow, Pub/Sub, Cloud Composer). Proficiency in SQL and one programming language (Python, Java, or Scala). Solid understanding of ETL/ELT patterns, data warehouse modeling (star, snowflake, data vault), and performance-tuning techniques. Experience implementing data-quality checks, observability, and cost-optimization practices in cloud environments. Nice-to-Have Skills: Practical exposure to Azure data services: Data Factory, Databricks, Synapse Analytics, or Microsoft Fabric.
Preferred Certifications Google Professional Data Engineer or Associate Cloud Engineer Microsoft Certified: Azure Data Engineer Associate (nice to have) Education Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.

Posted 1 month ago

Apply

2.0 - 7.0 years

20 - 30 Lacs

Pune

Work from Office

Work mode: currently remote, but this is not permanent WFH; once the business asks the candidate to come to the office, they must relocate. Mandatory: Data Engineering, Azure, Synapse, SQL, Python, PySpark, ETL, Fabric. Experience in Python for scripting or data tasks. Required candidate profile: Hands-on experience in SQL and relational databases (SQL Server, PostgreSQL). Data warehousing concepts (ETL). Hands-on experience in Azure data integration tools such as Data Factory, Synapse, Data Lake, and Blob Storage.

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 10 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

Role & Responsibilities: We are seeking a skilled and experienced Microsoft Fabric Engineer to join our data engineering team. The ideal candidate will have a strong background in designing, developing, and maintaining data solutions using Microsoft Fabric, including experience across key workloads such as Data Engineering, Data Factory, Data Science, Real-Time Analytics, and Power BI. The role requires a deep understanding of Synapse Data Warehouse, OneLake, Notebooks, Lakehouse architecture, and Power BI integration within the Microsoft ecosystem. Key Responsibilities: Design and implement scalable and secure data solutions using Microsoft Fabric. Build and maintain data pipelines using Dataflows Gen2 and Data Factory. Work with Lakehouse architecture and manage datasets in OneLake. Develop notebooks (PySpark or T-SQL) for data transformation and processing. Collaborate with data analysts to create interactive dashboards and reports using Power BI (within Fabric). Leverage Synapse Data Warehouse and KQL databases for structured and real-time analytics. Monitor and optimize the performance of data pipelines and queries. Ensure adherence to data quality, security, and governance practices. Stay current with Microsoft Fabric updates and roadmap, recommending enhancements. Required Skills: 3+ years of hands-on experience with Microsoft Fabric or similar tools in the Microsoft data stack. Strong proficiency with: Data Factory (Fabric); Synapse Data Warehouse / SQL analytics endpoints; Power BI integration and DAX; Notebooks (PySpark, T-SQL); Lakehouse and OneLake. Understanding of data modeling, ETL/ELT processes, and real-time data streaming. Experience with KQL (Kusto Query Language) is a plus. Familiarity with Microsoft Purview, Azure Data Lake, or Azure Synapse Analytics is advantageous. Qualifications: Microsoft Fabric, OneLake, Data Factory, Data Lake, Data Mesh.

Posted 1 month ago

Apply

8 - 12 years

25 - 30 Lacs

Noida

Hybrid

Role & responsibilities: Develop and implement data pipelines using Azure Data Factory and Databricks. Work with stakeholders to gather requirements and translate them into technical solutions. Migrate data from Oracle to Azure Data Lake. Optimize data processing workflows for performance and scalability. Ensure data quality and integrity throughout the data lifecycle. Collaborate with data architects and other team members to design and implement data solutions. Preferred candidate profile: Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks. Proficiency in data transformation and ETL processes. Hands-on experience with Oracle to Azure Data Lake migrations is a plus. Strong problem-solving and analytical skills. Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems. Monitor and manage cloud resources to ensure high availability, performance, and scalability. Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions. Excellent communication and teamwork skills. Preferred Qualifications: Azure Data Engineer Associate certification. Databricks certification. Understanding of ODI, ODS, and OAS is a plus.

Posted 1 month ago

Apply

8 - 12 years

25 - 30 Lacs

Gurugram

Hybrid

Role & responsibilities: Develop and implement data pipelines using Azure Data Factory and Databricks. Work with stakeholders to gather requirements and translate them into technical solutions. Migrate data from Oracle to Azure Data Lake. Optimize data processing workflows for performance and scalability. Ensure data quality and integrity throughout the data lifecycle. Collaborate with data architects and other team members to design and implement data solutions. Preferred candidate profile: Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks. Proficiency in data transformation and ETL processes. Hands-on experience with Oracle to Azure Data Lake migrations is a plus. Strong problem-solving and analytical skills. Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems. Monitor and manage cloud resources to ensure high availability, performance, and scalability. Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions. Excellent communication and teamwork skills. Preferred Qualifications: Azure Data Engineer Associate certification. Databricks certification. Understanding of ODI, ODS, and OAS is a plus.

Posted 1 month ago

Apply

5 - 10 years

4 - 8 Lacs

Mysuru

Work from Office

Must have Selenium and Java, with Data Factory and Databricks. Excellent communication skills. Max budget: 20 LPA. Max NP: 15 days. Job Title: Offshore Automation Engineer. Minimum Qualifications and Job Requirements: 5+ years of experience in automating APIs and web services. 3+ years of experience with the Selenium automation tool. 1+ years of experience with Data Factory and Databricks. Experience with BDD implementations using Cucumber. Excellent SQL skills and the ability to write complex queries. Highly skilled in at least one programming language; Java is preferred. Highly skilled in 2 or more automation test tools; experience in ReadyAPI is preferred. 2+ years of experience with Jenkins. 2+ years of experience delivering automation solutions using Agile methodology. Experience with Eclipse or similar IDEs. Experience with source control tools such as Git. Ability to work on multiple projects concurrently and meet deadlines. Ability to work in a fast-paced team environment. Expectations include a high level of initiative and a strong commitment to job knowledge, productivity, and attention to detail. Strong verbal and written communication skills. Solid software engineering skills; has participated in full-lifecycle development on large projects.

Posted 1 month ago

Apply

5 - 10 years

9 - 18 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Power BI Expert with over 5 years of experience in business intelligence and data analytics. The ideal candidate will have expertise in Azure, Data Factory, Microsoft Fabric, and Data Warehousing. Required candidate profile: Experience with Power BI, Azure, Data Warehousing, and related technologies. Proficiency in DAX, Power Query, SQL, and data visualization best practices. Degree in Computer Science or Data Analytics.

Posted 1 month ago

Apply