
25 ADLS Gen2 Jobs


6.0 - 11.0 years

5 - 9 Lacs

Hyderabad

Work from Office

6+ years of experience in data engineering projects using Cosmos DB and Azure Databricks (minimum 3-5 projects)
- Strong expertise in building data engineering solutions using Azure Databricks and Cosmos DB (a connector sketch follows this posting)
- Strong T-SQL programming skills, or skills in any other flavor of SQL
- Experience working with high-volume data, large objects, and complex data transformations
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines
- Good understanding of data modelling for data warehouses and data marts
- Strong verbal and written communication skills
- Ability to learn, contribute, and grow in a fast-paced environment

Expertise in Microsoft Azure is mandatory, including components such as Azure Data Factory, ADLS Gen2, and Azure Event Hubs.

Nice to have:
- Experience using Jira and ServiceNow in project environments
- Experience implementing data warehouse and ETL solutions
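For illustration, a minimal sketch of the kind of Cosmos DB read into Databricks this role involves, assuming the Azure Cosmos DB Spark 3 (OLTP) connector is attached to the cluster; the endpoint, database, container, and lake path are placeholders:

    # Sketch: read a Cosmos DB container into a DataFrame with the Azure
    # Cosmos DB Spark 3 (OLTP) connector, then land it as a Delta table.
    # Account, database, container, and path names are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cosmos-ingest").getOrCreate()

    cosmos_cfg = {
        "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
        "spark.cosmos.accountKey": "<key-from-secret-store>",  # never hard-code in real jobs
        "spark.cosmos.database": "sales",
        "spark.cosmos.container": "orders",
    }

    orders = spark.read.format("cosmos.oltp").options(**cosmos_cfg).load()

    # Land the raw documents in the lake for downstream transformation.
    orders.write.format("delta").mode("overwrite").save("/mnt/raw/cosmos/orders")

In practice the account key would come from a Key Vault-backed secret scope rather than being set in configuration.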

Posted 5 days ago

Apply

9.0 - 12.0 years

15 - 20 Lacs

Chennai

Work from Office

Job Title: Data Engineer Lead / Architect (ADF)
Experience: 9-12 years
Location: Remote / Hybrid

Role and Responsibilities:
- Talk to client stakeholders and understand the requirements for building their data warehouse / data lake / data lakehouse
- Design, develop, and maintain data pipelines in Azure Data Factory (ADF) for ETL from on-premise and cloud-based sources
- Design, develop, and maintain data warehouses and data lakes in Azure
- Run large data platform and other related programs to provide business intelligence support
- Design and develop data models to support business intelligence solutions
- Implement best practices in data modelling and data warehousing
- Troubleshoot and resolve issues related to ETL and data connections

Skills Required:
- Excellent written and verbal communication skills
- Excellent knowledge of and experience in ADF
- Well versed with ADLS Gen2
- Knowledge of SQL for data extraction and transformation
- Ability to work with various data sources (Excel, SQL databases, APIs, etc.)
- Knowledge of SAS would be an added advantage
- Knowledge of Power BI would be an added advantage

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Kerala

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will work on the latest Microsoft BI technologies and collaborate with other teams within Consulting services.

The opportunity
We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory, and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of our service offering.

Your key responsibilities
- Manage multiple client engagements.
- Understand and analyse business requirements by working with various stakeholders, and create the appropriate information architecture, taxonomy, and solution approach.
- Work independently to gather requirements, and to cleanse, extract, and load data.
- Translate business and analyst requirements into technical code.
- Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations.
- Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, staging, and data warehousing.
- Design and develop solutions in Databricks, Scala, Spark, and SQL to process and analyze large datasets, perform data transformations, and build data models.
- Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills and attributes for success
- Collaborate with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments, and other documents/templates.
- Able to manage senior stakeholders.
- Experience leading teams to execute high-quality deliverables within stipulated timelines.
- Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modelling, DAX, Power Query, and Microsoft Fabric.
- Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations.
- Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing.
- Good understanding of the Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, and Azure SQL Database/Synapse Analytics.
- Strong SQL skills and experience with one of the following: Oracle, SQL, Azure SQL.
- Good to have: experience in SSAS or Azure SSAS, and Agile project management.
- Basic knowledge of Azure Machine Learning services.
- Excellent written and verbal communication skills and the ability to deliver technical demonstrations.
- Quick learner with a can-do attitude.
- Strong project management skills, inspiring teamwork and responsibility among engagement team members.

To qualify for the role, you must have
- A bachelor's or master's degree.
- A minimum of 4-7 years of experience, preferably with a background in a professional services firm.
- Excellent communication skills; consulting experience preferred.

Ideally, you'll also have
- The analytical ability to manage multiple projects and prioritize tasks into manageable work products.
- The ability to operate independently or with minimum supervision.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society, and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

10.0 - 16.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blobs or ADLS (see the sketch after this posting).
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.

Desired Candidate Profile
- 8+ years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, and Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.

Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.
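For illustration, a minimal PySpark sketch of such a data-quality check over a dataset in ADLS Gen2; the storage account, container, paths, and column names are placeholders, and cluster authentication to ADLS is assumed to be configured already:

    # Sketch: basic data-quality assertions on a dataset in ADLS Gen2.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()
    df = spark.read.parquet("abfss://curated@<storage>.dfs.core.windows.net/orders/")

    metrics = df.agg(
        F.count("*").alias("row_count"),
        F.sum(F.col("order_id").isNull().cast("int")).alias("null_order_ids"),
        F.countDistinct("order_id").alias("distinct_order_ids"),
    ).first()

    # Fail the pipeline early if key invariants are violated.
    assert metrics["null_order_ids"] == 0, "order_id must not be null"
    assert metrics["row_count"] == metrics["distinct_order_ids"], "order_id must be unique"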

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blobs or ADLS.
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.

Desired Candidate Profile
- 5-10 years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, and Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.

Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 6 Lacs

Bengaluru

Work from Office

We are looking for a skilled SQL/PySpark professional with 3 to 8 years of experience to join our team. The ideal candidate will have expertise in developing data pipelines and transforming data using Databricks, Synapse notebooks, and Azure Data Factory.

Roles and Responsibility
- Collaborate with technical architects and cloud solutions teams to design data pipelines, marts, and reporting solutions.
- Code, test, and optimize Databricks jobs for efficient data processing and report generation.
- Set up scalable data pipelines integrating with various data sources and cloud platforms using Databricks.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Participate in code and design reviews to maintain high development standards.
- Optimize data querying layers to enhance performance and support analytical requirements.
- Collaborate with data scientists and analysts to support machine learning workflows and analytic needs.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Job Requirements
- Proficiency in PySpark or Scala and SQL for data processing tasks.
- Hands-on experience with Azure Databricks, Delta Lake, Delta Live Tables, Auto Loader, and Databricks SQL (an ingestion sketch follows this posting).
- Expertise with Azure Data Lake Storage (ADLS) Gen2 for optimized data storage and retrieval.
- Strong knowledge of data modeling, ETL processes, and data warehousing concepts.
- Experience with Power BI for dashboarding and reporting is a plus.
- Familiarity with Azure Synapse for analytics and integration tasks is desirable.
- Knowledge of Spark Streaming for real-time data stream processing is an advantage.
- MLOps knowledge for integrating machine learning into production workflows is beneficial.
- Familiarity with Azure Resource Manager (ARM) templates for infrastructure-as-code (IaC) practices is preferred.
- Demonstrated expertise (4-5 years) in developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience using Auto Loader and Delta Live Tables efficiently for seamless data ingestion and transformation.
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.
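For illustration, a minimal Auto Loader sketch for incremental ingestion from ADLS Gen2 into a bronze Delta table; it assumes a Databricks runtime (where spark is predefined), and the container, storage account, and paths are placeholders:

    # Sketch: incremental file ingestion with Databricks Auto Loader (cloudFiles).
    stream = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation",
                "abfss://meta@<storage>.dfs.core.windows.net/schemas/events")
        .load("abfss://landing@<storage>.dfs.core.windows.net/events/")
    )

    (
        stream.writeStream.format("delta")
        .option("checkpointLocation",
                "abfss://meta@<storage>.dfs.core.windows.net/checkpoints/events")
        .trigger(availableNow=True)  # process all pending files, then stop (Spark 3.3+/recent DBR)
        .start("abfss://bronze@<storage>.dfs.core.windows.net/events/")
    )

The checkpoint and schema locations let Auto Loader track which files have already been processed, so reruns pick up only new arrivals.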

Posted 3 weeks ago

Apply

8.0 - 12.0 years

20 - 32 Lacs

Hyderabad, Ahmedabad

Hybrid

We're Hiring: Senior Data Engineer - Azure & Snowflake Expert
Location: Hyderabad / Ahmedabad
Experience: 8-12 years (immediate joiners preferred)

Are you passionate about designing scalable data pipelines and building high-performing data platforms in the cloud? We are looking for a Senior Data Engineer with strong hands-on expertise in Snowflake and Azure Data Factory to join our growing team.

Key Responsibilities:
- Design and optimize scalable data pipelines for large datasets.
- Develop and orchestrate ETL/ELT workflows using Azure Data Factory (ADF).
- Manage data storage with Azure Blob Storage and ADLS Gen2.
- Implement event-driven automations using Azure Logic Apps.
- Write robust SQL queries and stored procedures, and build data models.
- Ensure data quality, security, and governance practices are enforced.
- Troubleshoot and optimize existing pipelines and infrastructure.

Must-Have Skills:
- Expert-level Snowflake knowledge: design, development, and optimization (a loading sketch follows this posting).
- Proficiency in the Azure data ecosystem: ADF, Blob Storage, ADLS Gen2, Logic Apps.
- Strong SQL expertise for complex data manipulation.
- Familiarity with Git and version control.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience with dbt (data build tool).
- Knowledge of Python and DevOps/CI-CD practices for data engineering.
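For illustration, a minimal sketch of driving a Snowflake bulk load from Python with the official snowflake-connector-python package; the account, credentials, external stage name (@adls_stage), and table are assumptions for illustration only:

    # Sketch: load staged Parquet files from ADLS Gen2 into a Snowflake raw table.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account_identifier>",
        user="<user>",
        password="<secret>",  # prefer Key Vault / environment variables in real pipelines
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        cur.execute("""
            COPY INTO RAW.ORDERS
            FROM @adls_stage/orders/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
    finally:
        conn.close()

In an ADF-orchestrated design, the same COPY INTO statement is often issued from a pipeline activity instead, with the Python variant reserved for ad hoc or custom orchestration.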

Posted 3 weeks ago

Apply


10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 329815

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a .NET Full Stack + Azure Developer to join our team in Bangalore, Karnataka (IN-KA), India.

Experience: 10 to 12 years

Responsibilities:
- Design and architect web applications using ASP.NET Core 6 and C# for backend development.
- Develop single-page applications (SPAs) using React and Redux for frontend development.
- Implement unit testing frameworks such as xUnit for backend code and Jest for frontend code to ensure high-quality software.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and correct bottlenecks and fix bugs.
- Maintain code quality, organization, and automation.
- Provide technical leadership and mentoring to junior and senior developers.
- Stay updated with the latest industry trends and technologies.

Required Skills:
- Proficiency in ASP.NET Core 6, C#, React, Redux, and Python.
- Strong experience with RESTful API design and implementation.
- Expertise in SQL Server and database management.
- Hands-on experience with Azure services including Azure Cognitive Search, Azure Web App, Azure App Service, Azure Function App, Azure Application Insights, Azure Logic App, Azure Data Factory, Azure Search Service, Azure SQL, ADLS Gen2, Azure Storage Account, Azure Key Vault, API Connection, Alert Rules, and Azure AI.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at .

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click .

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Senior Data Engineer (Remote, Contract, 6 Months): Databricks, ADF, and PySpark

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults (a secret-scope sketch follows this posting).
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic-building and competitive programming background

Contract Details
Role: Senior Data Engineer | Mode: Remote | Duration: 6 months
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
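For illustration, a minimal sketch of reading ADLS Gen2 from Databricks with credentials served through a Key Vault-backed secret scope; the scope, secret, and storage-account names are placeholders, and dbutils is available only inside Databricks notebooks and jobs:

    # Sketch: authenticate to ADLS Gen2 using a Key Vault-backed secret scope.
    storage_account = "<storage>"
    sa_key = dbutils.secrets.get(scope="kv-scope", key="adls-account-key")

    # Hand the key to the ABFS driver for this storage account.
    spark.conf.set(
        f"fs.azure.account.key.{storage_account}.dfs.core.windows.net", sa_key
    )

    df = spark.read.parquet(
        f"abfss://raw@{storage_account}.dfs.core.windows.net/sales/"
    )
    df.show(5)

Keeping the key in Key Vault means rotation happens in one place, and the secret never appears in notebook source or job logs.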

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions using Azure technologies.
- Manage and orchestrate workflows with Azure Data Factory (ADF), Databricks, ADLS Gen2, and Azure Key Vault.
- Design, maintain, and optimize secure and efficient data lake architectures.
- Collaborate with stakeholders to gather requirements and translate them into detailed technical specifications.
- Implement CI/CD pipelines to enable automated, seamless data deployment leveraging Azure DevOps.
- Monitor and troubleshoot data quality, performance bottlenecks, and scalability issues in production pipelines.
- Write clean, modular, and reusable PySpark code adhering to Agile development methodologies.
- Maintain thorough documentation of data pipelines, architecture designs, and best practices for team reuse.

Must-Have Skills:
- 6+ years of experience in data engineering roles.
- Strong expertise with SQL, Python, PySpark, and Apache Spark.
- Hands-on experience with Azure Databricks, Azure Data Factory (ADF), ADLS Gen2, Azure DevOps, and Azure Key Vault.
- Deep knowledge of data warehousing concepts, ETL development, data modeling, and governance.
- Familiarity with the Agile software development lifecycle (SDLC) and containerization tools like Docker.
- Commitment to clean coding practices and maintaining high-quality codebases.

Good-to-Have Skills:
- Experience with Azure Event Hubs and Logic Apps.
- Exposure to Power BI for data visualization.
- Strong problem-solving skills with a background in logic building and competitive programming.

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 16 Lacs

Pune

Work from Office

Roles & Responsibilities:
Design and develop end-to-end data pipelines using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities (a streaming sketch follows this posting).

Requirements:
- Hands-on experience with Microsoft Fabric, including Lakehouse, Data Factory, and Synapse.
- Strong expertise in PySpark and Python for large-scale data processing and transformation.
- Deep knowledge of Azure data services (ADLS Gen2, Azure Databricks, Synapse, ADF, Azure SQL, etc.).
- Experience designing, implementing, and optimizing end-to-end data pipelines on Azure.
- Understanding of Azure infrastructure setup (networking, security, and access management) is good to have.
- Healthcare domain knowledge is a plus but not mandatory.
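For illustration, a minimal Spark Structured Streaming sketch that consumes a Kafka topic and writes to a Delta table; the broker, topic, schema, and paths are assumptions, and the spark-sql-kafka package must be available on the cluster:

    # Sketch: Kafka -> Delta with Spark Structured Streaming.
    from pyspark.sql import functions as F, types as T

    schema = T.StructType([
        T.StructField("event_id", T.StringType()),
        T.StructField("amount", T.DoubleType()),
        T.StructField("ts", T.TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "claims-events")
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka values arrive as bytes; decode and parse the JSON payload.
    parsed = events.select(
        F.from_json(F.col("value").cast("string"), schema).alias("e")
    ).select("e.*")

    (
        parsed.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/claims-events")
        .start("/mnt/bronze/claims_events")
    )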

Posted 1 month ago

Apply

5.0 - 10.0 years

18 - 30 Lacs

Noida

Remote

Role Title: Sr. Azure Data Platform Engineer
Location: India (1 remote role and 5 work-from-office roles at the Noida location; candidates need L2 and L3 support experience with the services below)

We are seeking an Azure Data Platform Engineer with a strong focus on administration and hands-on experience with Azure platform engineering services. Ideal candidates should have expertise in administering services such as:
- Azure Key Vault
- Function App & Logic App
- Event Hub
- App Services
- Azure Data Factory (administration)
- Azure Monitor & Log Analytics
- Azure Databricks (administration)
- ETL processes
- Cosmos DB (administration)
- Azure DevOps & CI/CD pipelines
- Azure Synapse Analytics (administration)
- Python / shell scripting
- Azure Data Lake Storage (ADLS)
- Azure Kubernetes Service (AKS)

Additional knowledge of Tableau and Power BI would be a plus. Candidates should also have hands-on experience managing and ensuring the stability, security, and performance of these platforms, with a focus on automation, monitoring, and incident management (a monitoring sketch follows this posting).

- Proficient in distributed system architectures and Azure data engineering services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services.
- Implement and manage infrastructure using IaC tools such as Azure Resource Manager (ARM) templates and Terraform.
- Manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS).
- Set up and manage monitoring, logging, and alerting systems using Azure Monitor, Log Analytics, and Application Insights.
- Implement disaster recovery (DR) strategies, backups, and failover mechanisms for critical workloads.
- Automate infrastructure provisioning, scaling, and management for high availability and efficiency.
- Experienced in managing and maintaining clusters across development, test, preproduction, and production environments on Azure.
- Skilled in defining, scheduling, and monitoring job flows, with proactive alert setup.
- Adept at troubleshooting failed jobs in Azure tools like Databricks and Data Factory, performing root cause analysis, and applying corrective measures.
- Hands-on experience with distributed streaming tools like Event Hub.
- Expertise in designing and managing backup and disaster recovery solutions using infrastructure as code (IaC) with Terraform.
- Strong experience automating processes using Python and shell scripting, and working with Jenkins and Azure DevOps.
- Proficient in designing and maintaining Azure CI/CD pipelines for seamless code integration, testing, and deployment.
- Experienced in monitoring and troubleshooting VM resources such as memory, CPU, OS, storage, and network.
- Skilled at monitoring applications and advising developers on improving job and workflow performance.
- Capable of reviewing and resolving log file issues for system and application components.
- Adaptable to evolving technologies, with a strong sense of responsibility and accomplishment.
- Knowledgeable in agile methodologies for software delivery.
- 5-15 years of experience with Azure and cloud platforms, leveraging cloud-native tools to build, manage, and optimize secure, scalable solutions.
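For illustration, a minimal Python monitoring sketch using the azure-monitor-query SDK to pull failed ADF pipeline runs from Log Analytics; the workspace ID is a placeholder, and the ADFPipelineRun table is an assumption that holds only when Data Factory diagnostics are routed to the workspace:

    # Sketch: list the pipelines with the most failures over the last 24 hours.
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    kql = """
    ADFPipelineRun
    | where Status == 'Failed'
    | summarize failures = count() by PipelineName
    | order by failures desc
    """

    response = client.query_workspace(
        workspace_id="<workspace-id>",
        query=kql,
        timespan=timedelta(hours=24),
    )

    for table in response.tables:
        for row in table.rows:
            print(row)

A script like this can feed an alerting or ticketing workflow, which is the kind of automation-first incident management the role describes.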

Posted 1 month ago

Apply

3.0 - 5.0 years

7 - 10 Lacs

Pune

Work from Office

Job Title: Data Engineer
Location: Pune, India (on-site)
Experience: 3-5 years
Employment Type: Full-time

Job Summary
We are looking for a hands-on Data Engineer who can design and build modern Lakehouse solutions on Microsoft Azure. You will own data ingestion from source-system APIs through Azure Data Factory into OneLake, curate bronze/silver/gold layers on Delta Lake, and deliver dimensional models that power analytics at scale.

Key Responsibilities
- Build secure, scalable Azure Data Factory pipelines that ingest data from APIs, files, and databases into OneLake.
- Curate raw data into Delta Lake tables on ADLS Gen2 using the Medallion (bronze/silver/gold) architecture, ensuring ACID compliance and optimal performance (see the sketch after this posting).
- Develop and optimize SQL/Spark SQL transformations in Azure Fabric Warehouse / Lakehouse environments.
- Apply dimensional-modelling best practices (star/snowflake, surrogate keys, SCDs) to create analytics-ready datasets.
- Implement monitoring, alerting, lineage, and CI/CD (Git/Azure DevOps) for all pipelines and artifacts.
- Document data flows, data dictionaries, and operational runbooks.

Must-Have Technical Skills
- Azure Fabric & Lakehouse experience
- Azure Fabric Warehouse / Azure Synapse experience
- Data Factory: building, parameterizing, and orchestrating API-driven ingestion pipelines
- ADLS Gen2 + Delta Lake
- Strong SQL: advanced querying, tuning, and procedural extensions (T-SQL / Spark SQL)
- Data warehousing & dimensional modelling concepts

Good-to-Have Skills
- Python (PySpark, automation, data-quality checks)
- Unix/Linux shell scripting
- DevOps (Git, Azure DevOps)

Education & Certifications
- BE / B.Tech in computer science, information systems, or a related field
- Preferred: Microsoft DP-203 Azure Data Engineer Associate

Soft Skills
- Analytical, detail-oriented, proactive problem solver
- Clear written and verbal communication; ability to simplify complex topics
- Collaborative and adaptable within agile, cross-functional teams
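For illustration, a minimal bronze-to-silver-to-gold flow on Delta Lake over ADLS Gen2; the storage account, paths, and column names are placeholders:

    # Sketch: a compact Medallion (bronze/silver/gold) pass in PySpark.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion").getOrCreate()
    base = "abfss://lake@<storage>.dfs.core.windows.net"

    # Bronze: raw ingested records, kept as-is.
    bronze = spark.read.format("delta").load(f"{base}/bronze/orders")

    # Silver: typed, de-duplicated, validated records.
    silver = (
        bronze.dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .filter(F.col("order_id").isNotNull())
    )
    silver.write.format("delta").mode("overwrite").save(f"{base}/silver/orders")

    # Gold: analytics-ready aggregate feeding the dimensional model.
    gold = silver.groupBy("customer_id").agg(
        F.sum("amount").alias("lifetime_value"),
        F.count("*").alias("order_count"),
    )
    gold.write.format("delta").mode("overwrite").save(f"{base}/gold/customer_value")

Each layer writes to Delta, so every hop keeps ACID guarantees and time travel, which is what makes reprocessing a single layer safe.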

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Senior Data Engineer (Remote, Contract, 6 Months): Databricks, ADF, and PySpark

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic-building and competitive programming background

Location: Remote; Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic-building and competitive programming background

Contract Details
Role: Senior Data Engineer | Duration: 6 months
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
Email to apply: navaneeta@suzva.com | Contact: 9032956160

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic-building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
- 5-14 years of experience in Big Data and data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience in Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools, such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs (a tuning sketch follows this posting)
- Experience with native cloud data services (AWS/Azure/GCP)
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology

WE OFFER
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, and learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored tech talks & hackathons
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
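For illustration, a minimal sketch of two common Spark tuning moves: right-sizing shuffle partitions and broadcasting a small dimension table to avoid a shuffle join. The table paths and partition count are illustrative:

    # Sketch: shuffle-partition sizing and a broadcast join in PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    # Match shuffle parallelism to data volume and cluster cores (default is 200).
    spark.conf.set("spark.sql.shuffle.partitions", "400")

    facts = spark.read.parquet("/data/fact_sales")  # large table
    dims = spark.read.parquet("/data/dim_store")    # small lookup table

    # Broadcast the small side so each executor joins locally, skipping the shuffle.
    joined = facts.join(broadcast(dims), on="store_id", how="left")
    joined.write.mode("overwrite").parquet("/data/sales_enriched")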

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Hybrid

The Operations Engineer will work in collaboration with and under the direction of the Manager of Data Engineering, Advanced Analytics, to provide operational services, governance, and incident management solutions for the Analytics team. This includes modifying existing data ingestion workflows, managing releases to QA and Prod, working closely with cross-functional teams, and providing production support for daily issues.

Essential Job Functions:
- Takes ownership of reported customer issues and sees problems through to resolution
- Researches, diagnoses, troubleshoots, and identifies solutions to resolve customer issues
- Follows standard procedures for proper escalation of unresolved issues to the appropriate internal teams
- Provides prompt and accurate feedback to customers
- Ensures proper recording and closure of all issues
- Prepares accurate and timely reports
- Documents knowledge in the form of knowledge-base tech notes and articles

Other Responsibilities:
- Be part of the on-call rotation
- Support QA and production releases, off-hours if needed
- Work with developers to troubleshoot issues
- Attend daily standups
- Create and maintain support documentation (Jira/Confluence)

Minimum Qualifications and Job Requirements:
- Proven working experience in enterprise technical support
- Basic knowledge of systems, utilities, and scripting
- Strong problem-solving skills
- Excellent client-facing skills
- Excellent written and verbal communication skills
- Experience with Microsoft Azure, including Azure Data Factory (ADF), Databricks, and ADLS (Gen2)
- Experience with system administration and SFTP
- Experience leveraging analytics team tools such as Alteryx or other ETL tools
- Experience with data visualization software (e.g., Domo, Datorama)
- Experience with SQL programming
- Experience automating routine data tasks using various software tools (e.g., Jenkins, Nexus, SonarQube, Rundeck, Task Scheduler)

Posted 1 month ago

Apply


3.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Lead the migration of ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake (an upsert-migration sketch follows this posting)
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
- Implement DevOps practices and CI/CD pipelines using GitHub Actions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 6+ years of experience as a Cloud Data Engineer
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
- Solid experience in ETL development using on-premises databases and ETL technologies
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies
- Proficiency in DevOps and CI/CD practices using GitHub Actions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills
- Proven solid analytical skills and attention to detail
- Proven ability to adapt to new technologies and learn quickly

Preferred Qualifications
- Certification in Azure or Databricks
- Experience with data modeling and database design
- Experience with development in Snowflake for data engineering and analytics workloads
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
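For illustration, one common pattern when migrating SSIS/T-SQL ETLs to Databricks is re-expressing a T-SQL MERGE upsert with the delta-spark API. A minimal sketch, with paths and key columns as placeholders:

    # Sketch: T-SQL MERGE-style upsert re-implemented on Delta Lake.
    from delta.tables import DeltaTable

    # Incremental batch staged by the ingestion pipeline.
    updates = spark.read.parquet("/mnt/staging/customers_delta")

    target = DeltaTable.forPath(spark, "/mnt/silver/customers")

    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()      # UPDATE branch of the original MERGE
        .whenNotMatchedInsertAll()   # INSERT branch of the original MERGE
        .execute()
    )

The semantics map closely enough that many MERGE-based SSIS packages can be ported mechanically once the staging extract is in the lake.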

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop reporting, analytics, and data science solutions.

Primary Responsibilities
- Manage and lead the migration of the on-premises SQL Server Enterprise Data Warehouse to Azure Cloud and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake
- Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Implement and oversee DevOps practices and CI/CD pipelines using GitHub Actions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Provide technical leadership and mentorship to the engineering team
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies and project management tools
- Solid experience developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Proficiency in DevOps and CI/CD practices using GitHub Actions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills
- Solid analytical skills and attention to detail
- Proven track record of successful project delivery in a cloud environment

Preferred Qualifications
- Certification in Azure or Snowflake
- Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.)
- Experience with data modeling and database design
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Gurugram

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Senior Cloud Data Engineer to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for migrating our on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools, Delta Lake, and Snowflake.

Primary Responsibilities
- Lead the migration of ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
- Implement DevOps practices and CI/CD pipelines using GitHub Actions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- 6+ years of experience as a Cloud Data Engineer
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
- Solid experience in ETL development using on-premises databases and ETL technologies
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies
- Proficiency in DevOps and CI/CD practices using GitHub Actions
- Proven excellent problem-solving skills and ability to work independently
- Solid communication and collaboration skills
- Solid analytical skills and attention to detail
- Ability to adapt to new technologies and learn quickly

Preferred Qualifications
- Certification in Azure or Databricks
- Experience with data modeling and database design
- Experience with development in Snowflake for data engineering and analytics workloads
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Test Planning & Automation Lead - Cloud Data Modernization

Position Overview:
We are seeking a highly skilled and experienced Test Planning & Automation Lead to join our team for a Cloud Data Modernization project. This role involves leading the data validation testing effort for the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a target cloud tech stack comprising Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage, etc.) and Snowflake. The primary goal is to ensure data consistency between the on-premises and cloud environments (a validation sketch follows this posting).

Primary Responsibilities
- Lead data validation testing: oversee and manage the data validation testing process to ensure data consistency between the on-premises SQL Server and the target cloud environment
- Tool identification and automation: identify and implement appropriate tools to automate the testing process, reducing reliance on manual methods such as Excel or manual file comparisons
- Testing plan development: define and develop a comprehensive testing plan that addresses validations for all data within the data warehouse
- Collaboration: work closely with data engineers, cloud architects, and other stakeholders to ensure seamless integration and validation of data
- Quality assurance: establish and maintain quality assurance standards and best practices for data validation and testing
- Reporting: generate detailed reports on testing outcomes, data inconsistencies, and corrective actions
- Continuous improvement: continuously evaluate and improve testing processes and tools to enhance efficiency and effectiveness
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree or above
- Leadership experience: 6+ years as a testing lead on data warehousing or cloud data migration projects
- Automation tools: experience with data validation through custom-built Python frameworks and testing automation tools
- Testing methodologies: proficiency in defining and implementing testing methodologies and frameworks for data validation
- Technical expertise: solid knowledge of Python, SQL Server, Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
- Analytical skills: proven excellent analytical and problem-solving skills to identify and resolve data inconsistencies
- Communication: proven solid communication skills to collaborate effectively with cross-functional teams
- Project management: demonstrated ability to manage multiple tasks and projects simultaneously, ensuring timely delivery of testing outcomes

Preferred Qualifications
- Experience leading data validation testing efforts in cloud migration projects
- Familiarity with Agile methodologies and project management tools

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
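For illustration, one way such a custom Python validation framework can compare the on-premises source with the migrated copy: row counts plus order-independent per-column checksums. The JDBC URL, credentials, table, and key column are placeholders, and the SQL Server JDBC driver is assumed to be on the cluster classpath:

    # Sketch: source/target consistency check (row counts + CRC32 checksums).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("migration-validation").getOrCreate()

    def profile(df, key_cols):
        """Row count plus an order-independent checksum per key column."""
        aggs = [F.count("*").alias("rows")]
        aggs += [
            F.sum(F.crc32(F.col(c).cast("string"))).alias(f"crc_{c}")
            for c in key_cols
        ]
        return df.agg(*aggs).first().asDict()

    source = spark.read.jdbc(
        url="jdbc:sqlserver://<onprem-host>;databaseName=EDW",
        table="dbo.Orders",
        properties={"user": "<user>", "password": "<secret>"},
    )
    target = spark.read.format("delta").load("/mnt/silver/orders")

    src, tgt = profile(source, ["order_id"]), profile(target, ["order_id"])
    mismatches = {k: (src[k], tgt[k]) for k in src if src[k] != tgt[k]}
    print("PASS" if not mismatches else f"FAIL: {mismatches}")

Summed checksums catch value-level drift that raw row counts miss, while staying cheap enough to run on every migrated table.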

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic-building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply