
1586 ADF Jobs - Page 24

JobPe aggregates listings so they are easy to find and compare, but you apply directly on the original job portal.

3.0 years

20 - 23 Lacs

Gurugram, Haryana, India

On-site

We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks.

Key Responsibilities
- Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems.
- Validate Data Warehouse (DWH) objects, including fact and dimension tables.
- Design and execute test cases and test plans for data extraction, transformation, and loading processes.
- Conduct regression testing to validate enhancements and ensure no breakage of existing data flows.
- Write complex SQL queries for data verification and backend testing.
- Test data processing workflows in Azure Data Factory and Databricks environments.
- Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively.
- Perform root cause analysis for data-related issues and suggest improvements.
- Create clear and concise test documentation, logs, and reports.

Required Technical Skills
- Strong knowledge of ETL testing methodologies and tools
- Excellent SQL skills (joins, aggregation, subqueries, performance tuning)
- Hands-on experience with Data Warehousing and data models (Star/Snowflake)
- Experience in test case creation, execution, defect logging, and closure
- Proficiency in regression testing, data validation, and data reconciliation
- Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks
- Experience with test management tools such as JIRA, TestRail, or HP ALM

Nice to Have
- Exposure to automation testing for data pipelines
- Scripting knowledge in Python or PySpark
- Understanding of CI/CD in data testing
- Experience with data masking, data governance, and privacy rules

Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field
- 3+ years of hands-on experience in ETL/Data Warehouse testing
- Excellent analytical and problem-solving skills
- Strong attention to detail and communication skills

Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks
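To make the reconciliation and backend-testing work described above concrete, here is a minimal PySpark sketch of a source-to-target validation check of the kind such a role typically automates. The table and column names (staging.orders_raw, dwh.fact_orders, order_id, order_amount) are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-reconciliation-check").getOrCreate()

# Hypothetical source (staging) and target (warehouse) tables.
source = spark.table("staging.orders_raw")
target = spark.table("dwh.fact_orders")

# 1. Row-count reconciliation.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row-count mismatch: source={src_count}, target={tgt_count}"

# 2. Column-level checksum on a key measure to catch silent truncation or rounding.
src_sum = source.agg(F.sum("order_amount").alias("s")).collect()[0]["s"]
tgt_sum = target.agg(F.sum("order_amount").alias("s")).collect()[0]["s"]
assert src_sum == tgt_sum, f"Amount checksum mismatch: source={src_sum}, target={tgt_sum}"

# 3. Keys present in the source but missing from the target (completeness check).
missing = source.select("order_id").subtract(target.select("order_id"))
print(f"Missing keys in target: {missing.count()}")
```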

Posted 1 month ago

Apply

3.0 years

20 - 23 Lacs

Pune, Maharashtra, India

On-site

We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks.

Key Responsibilities
- Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems.
- Validate Data Warehouse (DWH) objects, including fact and dimension tables.
- Design and execute test cases and test plans for data extraction, transformation, and loading processes.
- Conduct regression testing to validate enhancements and ensure no breakage of existing data flows.
- Write complex SQL queries for data verification and backend testing.
- Test data processing workflows in Azure Data Factory and Databricks environments.
- Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively.
- Perform root cause analysis for data-related issues and suggest improvements.
- Create clear and concise test documentation, logs, and reports.

Required Technical Skills
- Strong knowledge of ETL testing methodologies and tools
- Excellent SQL skills (joins, aggregation, subqueries, performance tuning)
- Hands-on experience with Data Warehousing and data models (Star/Snowflake)
- Experience in test case creation, execution, defect logging, and closure
- Proficiency in regression testing, data validation, and data reconciliation
- Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks
- Experience with test management tools such as JIRA, TestRail, or HP ALM

Nice to Have
- Exposure to automation testing for data pipelines
- Scripting knowledge in Python or PySpark
- Understanding of CI/CD in data testing
- Experience with data masking, data governance, and privacy rules

Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field
- 3+ years of hands-on experience in ETL/Data Warehouse testing
- Excellent analytical and problem-solving skills
- Strong attention to detail and communication skills

Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a Java & Oracle ADF (Application Development Framework) Developer with 5+ years of experience to design, develop, and maintain enterprise applications using Java, Oracle's ADF technology stack, and related technologies.

Location - Ramanujan IT City, Chennai (Onsite)
Contract Duration - 3+ months (Extendable)
Immediate Joiner

Role and Responsibilities
- Design and develop enterprise applications using Java, the Oracle ADF framework, and related technologies
- Create and maintain ADF Business Components (Entity Objects, View Objects, Application Modules)
- Develop user interfaces using ADF Faces components and ADF Task Flows
- Implement business logic and data validation rules using ADF BC
- Design and develop reports using Jasper Reports
- Configure and maintain application servers (Tomcat, JBoss)
- Integrate applications with MySQL databases and web services
- Handle build and deployment processes
- Perform code reviews and ensure adherence to coding standards
- Debug and resolve production issues
- Collaborate with cross-functional teams including business analysts, QA, and other developers
- Provide technical documentation and maintain project documentation

Core Technical Skills
- Strong expertise in the Oracle ADF framework (5 years of hands-on experience)
- Proficient in Java/J2EE technologies
- Advanced knowledge of ADF Business Components (Entity Objects, View Objects, Application Modules)
- Strong experience with ADF Faces Rich Client components
- Expertise in ADF Task Flows (bounded and unbounded)
- Proficient in MySQL database design, optimization, and query writing
- Strong experience with Jasper Reports for report development and customization

Application Server & Build Experience
- Experience in deploying and maintaining applications on Tomcat Server
- Experience with JBoss/WildFly application server configuration and deployment
- Expertise in build tools (Maven/Ant) and build automation
- Experience with continuous integration and deployment processes
- Knowledge of application server clustering and load balancing

Database Skills
- Strong knowledge of MySQL database administration
- Experience in writing complex SQL queries and stored procedures
- Understanding of database optimization and performance tuning
- Knowledge of database backup and recovery procedures

Reporting Skills
- Expertise in Jasper Reports design and development
- Experience in creating complex reports with sub-reports
- Knowledge of JasperReports Server administration
- Ability to integrate reports with web applications

Posted 1 month ago

Apply

5.0 years

20 - 24 Lacs

Chennai, Tamil Nadu, India

On-site

Exp: 5 - 12 Yrs
Work Mode: Hybrid
Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
- Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
- Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
- SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
- Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
- Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
- Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
- Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
- Expertise in Snowflake for data warehousing and ELT processes.
- Strong proficiency in SQL for relational databases and writing complex queries.
- Experience with Informatica PowerCenter for data integration and ETL development.
- Experience using Power BI for data visualization and business intelligence reporting.
- Experience with Fivetran for automated ELT pipelines.
- Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
- Strong data analysis, requirement gathering, and mapping skills.
- Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP.
- Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling,business intelligence,python,dbt,performance tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql
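The posting asks for SCD Type-2 modeling with DBT; purely as an illustration of the underlying pattern (not the DBT implementation itself), here is a hedged sketch that runs an expire-then-insert SCD Type-2 load through the Snowflake Python connector. All connection parameters, table, and column names are hypothetical.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="DIM",
)

# Step 1: expire the current version of any customer whose attributes changed.
expire_sql = """
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND d.row_hash <> s.row_hash THEN UPDATE SET
  is_current = FALSE,
  valid_to   = CURRENT_TIMESTAMP()
"""

# Step 2: insert a fresh current row for new and changed customers.
insert_sql = """
INSERT INTO dim_customer (customer_id, name, segment, row_hash, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.segment, s.row_hash, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

cur = conn.cursor()
cur.execute(expire_sql)
cur.execute(insert_sql)
cur.close()
conn.close()
```

In a DBT project the same pattern would normally be expressed with snapshots or incremental models rather than hand-written MERGE statements; the sketch only shows the expire-and-insert mechanics.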

Posted 1 month ago

Apply

5.0 years

0 Lacs

Delhi, Delhi

Remote

Full time | Work From Office
This Position is Currently Open
Department / Category: ENGINEER
Listed on Jun 30, 2025
Work Location: NEW DELHI, BANGALORE, HYDERABAD

Job Description of Databricks Engineer (6 to 8 Years Relevant Experience)

We are looking for an experienced Databricks Engineer to join our data engineering team and contribute to designing and implementing scalable data solutions on the Azure platform. This role involves working closely with cross-functional teams to build high-performance data pipelines and maintain a modern Lakehouse architecture.

Key Responsibilities:
- Design and develop scalable data pipelines using Spark-SQL and PySpark in Azure Databricks.
- Build and maintain a Lakehouse architecture using Azure Data Lake Storage (ADLS) and Databricks.
- Perform comprehensive data preparation tasks, including data cleaning and normalization, deduplication, and type conversions.
- Collaborate with the DevOps team to deploy and manage solutions in production environments.
- Partner with Data Science and Business Intelligence teams to share insights, align on best practices, and drive innovation.
- Support change management through training, communication, and documentation during upgrades, data migrations, and system changes.

Required Qualifications:
- 5+ years of IT experience with strong exposure to cloud technologies, particularly Microsoft Azure.
- Hands-on experience with Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS).
- Programming with PySpark, Python, and SQL.
- Solid understanding of data engineering concepts, data modeling, and data processing frameworks.
- Ability to work effectively in distributed, remote teams.
- Excellent communication skills in English (both written and verbal).

Preferred Skills:
- Strong working knowledge of distributed computing frameworks, especially Apache Spark and Databricks.
- Experience with Delta Lake and Lakehouse architecture principles.
- Familiarity with data tools and libraries such as Pandas, Spark-SQL, and PySpark.
- Exposure to on-premise databases such as SQL Server, Oracle, etc.
- Experience with version control tools (e.g., Git) and DevOps practices, including CI/CD pipelines.

Required Skills for the Databricks Engineer Job: Spark-SQL and PySpark in Azure Databricks, Python, SQL

Our Hiring Process: Screening (HR Round), Technical Round 1, Technical Round 2, Final HR Round
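As an illustration of the Bronze-to-Silver data preparation steps listed above (cleaning, normalization, deduplication, type conversion), here is a minimal PySpark sketch for Azure Databricks. The ADLS paths, column names, and business key are hypothetical.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Hypothetical ADLS paths for the Bronze and Silver layers.
bronze_path = "abfss://lake@mystorageaccount.dfs.core.windows.net/bronze/customers"
silver_path = "abfss://lake@mystorageaccount.dfs.core.windows.net/silver/customers"

bronze = spark.read.format("delta").load(bronze_path)

# Keep only the latest record per business key after cleaning and type conversion.
latest = Window.partitionBy("customer_id").orderBy(F.col("ingested_at").desc())

silver = (
    bronze
    .withColumn("email", F.lower(F.trim("email")))                 # normalization
    .withColumn("signup_date", F.to_date("signup_date", "yyyy-MM-dd"))  # type conversion
    .dropDuplicates()                                               # exact-duplicate removal
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)                                       # deduplicate by key
    .drop("rn")
)

silver.write.format("delta").mode("overwrite").save(silver_path)
```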

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra

On-site

Who are we? Fulcrum Digital is an agile, next-generation digital accelerating company providing digital transformation and technology services, from ideation to implementation. These services have applicability across a variety of industries, including banking & financial services, insurance, retail, higher education, food, healthcare, and manufacturing.

Position Overview
We are seeking a highly skilled Senior Data Engineer to lead the migration, optimization, and governance of data pipelines in Azure Data Factory (ADF) and SQL. The ideal candidate will have extensive experience in Change Data Capture (CDC), performance tuning, data security, and compliance within a cloud-based architecture.

Key Responsibilities
- Architect and optimize CDC pipelines in ADF, Data Sync Services, and SQL, ensuring efficient data ingestion.
- Implement performance tuning strategies such as parallel processing, indexing, partitioning, and data archiving.
- Enforce security and compliance by implementing RBAC, data encryption, audit logging, and access control best practices.
- Define and implement data governance models, including roles, metadata management, and compliance policies.
- Develop robust error handling and monitoring with automated alerts, logging strategies, and retry mechanisms for CDC failures.
- Provide technical documentation and best practices for CDC implementation, security, and performance optimization.

Qualifications & Skills
- 5+ years of experience in data engineering with expertise in Azure (ADF, Synapse), SQL, and CDC implementation.
- Strong knowledge of performance tuning techniques, including parallel processing and indexing strategies.
- Hands-on experience with data security, governance, and compliance best practices in a cloud environment.
- Experience with error logging, monitoring, and notification setup for data pipelines.
- Ability to troubleshoot performance bottlenecks and implement scalable solutions.
- Excellent communication and documentation skills.

Job Opening ID: RRF_5456
Job Type: Permanent
Industry: IT Services
Date Opened: 01/07/2025
City: Pune City
Province: Maharashtra
Country: India
Postal Code: 411057
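For illustration of the incremental (CDC-style) ingestion pattern this role centers on, here is a hedged Python sketch of a high-watermark load using pyodbc. In practice ADF would orchestrate this; the connection strings, schema, and table names are hypothetical.

```python
import pyodbc

# Hypothetical connection strings (credentials would come from a key vault).
src = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=srcsql;DATABASE=sales;Trusted_Connection=yes")
tgt = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=dwhsql;DATABASE=edw;Trusted_Connection=yes")

# 1. Read the last successful watermark from the target.
wm_cur = tgt.cursor()
wm_cur.execute("SELECT last_loaded_at FROM etl.watermark WHERE table_name = 'orders'")
last_loaded_at = wm_cur.fetchone()[0]

# 2. Pull only rows changed since that watermark.
src_cur = src.cursor()
src_cur.execute(
    "SELECT order_id, status, amount, modified_at FROM dbo.orders WHERE modified_at > ?",
    last_loaded_at,
)
rows = src_cur.fetchall()

# 3. Land the changes in a staging table, then advance the watermark in the same transaction.
ins_cur = tgt.cursor()
ins_cur.executemany(
    "INSERT INTO stg.orders_delta (order_id, status, amount, modified_at) VALUES (?, ?, ?, ?)",
    [tuple(r) for r in rows],
)
ins_cur.execute("UPDATE etl.watermark SET last_loaded_at = SYSUTCDATETIME() WHERE table_name = 'orders'")
tgt.commit()
```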

Posted 1 month ago

Apply

5.0 years

20 - 24 Lacs

Pune, Maharashtra, India

On-site

Exp: 5 - 12 Yrs
Work Mode: Hybrid
Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
- Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
- Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
- SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
- Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
- Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
- Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
- Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
- Expertise in Snowflake for data warehousing and ELT processes.
- Strong proficiency in SQL for relational databases and writing complex queries.
- Experience with Informatica PowerCenter for data integration and ETL development.
- Experience using Power BI for data visualization and business intelligence reporting.
- Experience with Fivetran for automated ELT pipelines.
- Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
- Strong data analysis, requirement gathering, and mapping skills.
- Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP.
- Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling,business intelligence,python,dbt,performance tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql

Posted 1 month ago

Apply

5.0 years

20 - 24 Lacs

Gurugram, Haryana, India

On-site

Exp: 5 - 12 Yrs
Work Mode: Hybrid
Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
- Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
- Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
- SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
- Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
- Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
- Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
- Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
- Expertise in Snowflake for data warehousing and ELT processes.
- Strong proficiency in SQL for relational databases and writing complex queries.
- Experience with Informatica PowerCenter for data integration and ETL development.
- Experience using Power BI for data visualization and business intelligence reporting.
- Experience with Fivetran for automated ELT pipelines.
- Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
- Strong data analysis, requirement gathering, and mapping skills.
- Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP.
- Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling,business intelligence,python,dbt,performance tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql

Posted 1 month ago

Apply

4.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Description
Experience: 4+ years of experience

Key Responsibilities
- Help define/improvise actionable, decision-driving management information (MI)
- Ensure streamlining, consistency, and standardization of MI within the handled domain
- Build and operate flexible processes/reports that meet changing business needs
- Prepare detailed documentation of schemas
- Any other duties commensurate with the position or level of responsibility

Desired Profile
- Prior experience in insurance companies/the insurance sector would be an added advantage
- Experience with Azure technologies (including SSAS, SQL Server, Azure Data Lake, Synapse)
- Hands-on experience with SQL and Power BI
- Excellent understanding of developing stored procedures, functions, views, and T-SQL programs
- Developed and maintained ETL (data extraction, transformation, and loading) mappings using ADF to extract data from multiple source systems
- Analyze existing SQL queries for performance improvements
- Excellent written and verbal communication skills
- Ability to create and design MI for the team
- Expected to handle multiple projects with stringent timelines
- Good interpersonal skills
- Actively influences strategy through creative and unique ideas
- Exposure to documentation activities

Key Competencies
- Technical Learning: can learn new skills and knowledge as the business requires
- Action Oriented: enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others
- Decision Quality: makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment
- Detail Orientation: high attention to detail, especially in data quality, documentation, and reporting
- Communication & Collaboration: strong interpersonal skills, effective in team discussions and stakeholder interactions

Qualifications
B.Com/BE/BTech/MCA from a reputed college/institute

Posted 1 month ago

Apply

7.0 - 10.0 years

0 Lacs

Greater Madurai Area

On-site

Job Requirements Why work for us? Alkegen brings together two of the world’s leading specialty materials companies to create one new, innovation-driven leader focused on battery technologies, filtration media, and specialty insulation and sealing materials. Through global reach and breakthrough inventions, we are delivering products that enable the world to breathe easier, live greener, and go further than ever before. With over 60 manufacturing facilities with a global workforce of over 9,000 of the industry’s most experienced talent, including insulation and filtration experts, Alkegen is uniquely positioned to help customers impact the environment in meaningful ways. Alkegen offers a range of dynamic career opportunities with a global reach. From production operators to engineers, technicians to specialists, sales to leadership, we are always looking for top talent ready to bring their best. Come grow with us! Key Responsibilities Lead and manage the Data Operations team, including BI developers and ETL developers, to deliver high-quality data solutions. Oversee the design, development, and maintenance of data models, data transformation processes, and ETL pipelines. Collaborate with business stakeholders to understand their data needs and translate them into actionable data insights solutions. Ensure the efficient and reliable operation of data pipelines and data integration processes. Develop and implement best practices for data management, data quality, and data governance. Utilize SQL, Python, and Microsoft SQL Server to perform data analysis, data manipulation, and data transformation tasks. Build and deploy data insights solutions using tools such as PowerBI, Tableau, and other BI platforms. Design, create, and maintain data warehouse environments using Microsoft SQL Server and the data vault design pattern. Design, create, and maintain ETL packages using Microsoft SQL Server and SSIS. Work closely with cross-functional teams in a matrix organization to ensure alignment with business objectives and priorities. Lead and mentor team members, providing guidance and support to help them achieve their professional goals. Proactively identify opportunities for process improvements and implement solutions to enhance data operations. Communicate effectively with stakeholders at all levels, presenting data insights and recommendations in a clear and compelling manner. Implement and manage CI/CD pipelines to automate the testing, integration, and deployment of data solutions. Apply Agile methodologies and Scrum practices to ensure efficient and timely delivery of projects. Skills & Qualifications Master's or Bachelor’s degree in computer science, Data Science, Information Technology, or a related field. 7 to 10 years of experience in data modelling, data transformation, and building and managing ETL processes. Strong proficiency in SQL, Python, and Microsoft SQL Server for data manipulation and analysis. Extensive experience in building and deploying data insights solutions using BI tools such as PowerBI and Tableau. At least 2 years of experience leading BI developers or ETL developers. Experience working in a matrix organization and collaborating with cross-functional teams. Proficiency in cloud platforms such as Azure, AWS, and GCP. Familiarity with data engineering tools such as ADF, Databricks, Power Apps, Power Automate, and SSIS. Strong stakeholder management skills with the ability to communicate complex data concepts to non-technical audiences. 
Proactive and results-oriented, with a focus on delivering value aligned with business objectives. Knowledge of CI/CD pipelines and experience implementing them for data solutions. Experience with Agile methodologies and Scrum practices. Relevant certifications in Data Analytics, Data Architecture, Data Warehousing, and ETL are highly desirable. At Alkegen, we strive every day to help people – ALL PEOPLE – breathe easier, live greener and go further than ever before. We believe that diversity and inclusion is central to this mission and to our impact. Our diverse and inclusive culture drives our growth & innovation and we nurture it by actively embracing our differences and using our varied perspectives to solve the complex challenges facing our changing and diverse world. Employment selection and related decisions are made without regard to sex, race, ethnicity, nation of origin, religion, color, gender identity and expression, age, disability, education, opinions, culture, languages spoken, veteran’s status, or any other protected class.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Description
Experience: 4+ years of experience

Key Responsibilities
- Help define/improvise actionable, decision-driving management information (MI)
- Ensure streamlining, consistency, and standardization of MI within the handled domain
- Build and operate flexible processes/reports that meet changing business needs
- Prepare detailed documentation of schemas
- Any other duties commensurate with the position or level of responsibility

Desired Profile
- Prior experience in insurance companies/the insurance sector would be an added advantage
- Experience with Azure technologies (including SSAS, SQL Server, Azure Data Lake, Synapse)
- Hands-on experience with SQL and Power BI
- Excellent understanding of developing stored procedures, functions, views, and T-SQL programs
- Developed and maintained ETL (data extraction, transformation, and loading) mappings using ADF to extract data from multiple source systems
- Analyze existing SQL queries for performance improvements
- Excellent written and verbal communication skills
- Ability to create and design MI for the team
- Expected to handle multiple projects with stringent timelines
- Good interpersonal skills
- Actively influences strategy through creative and unique ideas
- Exposure to documentation activities

Key Competencies
- Technical Learning: can learn new skills and knowledge as the business requires
- Action Oriented: enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others
- Decision Quality: makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment
- Detail Orientation: high attention to detail, especially in data quality, documentation, and reporting
- Communication & Collaboration: strong interpersonal skills, effective in team discussions and stakeholder interactions

Qualifications
B.Com/BE/BTech/MCA from a reputed college/institute

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

Role & responsibilities
- 8-12 years of professional work experience in a relevant field
- Proficient in Azure Databricks, ADF, Delta Lake, SQL Data Warehouse, Unity Catalog, MongoDB, Python
- Experience/prior knowledge of semi-structured data and Structured Streaming, Azure Synapse Analytics, data lake, data warehouse
- Proficient in creating Azure Data Factory pipelines for ETL/ELT processing: copy activity, custom Azure development, etc.
- Lead a technical team of 4-6 resources
- Prior knowledge of Azure DevOps and CI/CD processes, including GitHub
- Good knowledge of SQL and Python for data manipulation, transformation, and analysis; knowledge of Power BI would be beneficial
- Understand business requirements to set functional specifications for reporting applications
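As a sketch of the Structured Streaming and Delta Lake work mentioned above, the following PySpark snippet incrementally reads new commits from a Bronze Delta table and appends typed, filtered rows to a Silver table. The three-part table names (assuming Unity Catalog) and the checkpoint path are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver-stream").getOrCreate()

# Incrementally read new commits from the Bronze Delta table (hypothetical name).
bronze = (
    spark.readStream
    .table("lake.bronze.orders")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))   # type conversion
    .filter(F.col("order_id").isNotNull())                          # basic quality filter
)

# Continuously append to the Silver table; the checkpoint tracks progress between runs.
query = (
    bronze.writeStream
    .outputMode("append")
    .option("checkpointLocation",
            "abfss://lake@mystorageaccount.dfs.core.windows.net/_checkpoints/orders_silver")
    .toTable("lake.silver.orders")
)
```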

Posted 1 month ago

Apply

6.0 - 8.0 years

3 - 8 Lacs

Bengaluru

On-site

Country/Region: IN
Requisition ID: 26961
Work Model:
Position Type:
Salary Range:
Location: INDIA - BENGALURU - HP
Title: Technical Lead-Data Engg

Description: Area(s) of responsibility
Azure Data Lead - 5A (HP Role – Senior Data Engineer)
Experience: 6 to 8 Years

- Azure Lead with experience in Azure ADF, ADLS Gen2, Databricks, PySpark, and Advanced SQL
- Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions and estimation on Azure Cloud
- 4 years of experience in Azure Databricks and PySpark
- Experience in performance tuning
- Experience with integration of different data sources with Data Warehouse and Data Lake is required
- Experience in creating data warehouses and data lakes
- Understanding of data modelling and data architecture concepts
- Able to clearly articulate the pros and cons of various technologies and platforms
- Experience with supporting tools such as GitHub, Jira, Teams, and Confluence is needed
- Collaborate with clients to understand their business requirements and translate them into technical solutions that leverage AWS and Azure cloud platforms

Mandatory Skillset: Azure Databricks, PySpark and Advanced SQL

Posted 1 month ago

Apply

12.0 - 14.0 years

9 - 10 Lacs

Bengaluru

On-site

Country/Region: IN
Requisition ID: 26981
Work Model:
Position Type:
Salary Range:
Location: INDIA - BENGALURU - AUTOMOTIVE
Title: Project Manager

Description: Area(s) of responsibility
Job description - Azure Tech Project Manager
Experience Required: 12-14 years

- The Project Manager will be responsible for driving project management activities in Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2)
- Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization
- Experience in Azure DevOps
- Experience in Data Warehouse, Data Lake, and Visualizations
- Project management skills, including time and risk management, resource prioritization, and project structuring
- Responsible for end-to-end project execution and delivery across multiple clients
- Understand ITIL processes related to incident management, problem management, application life cycle management, and operational health management
- Strong in Agile and the Jira tool
- Strong customer service, problem solving, organizational, and conflict management skills
- Should be able to prepare weekly/monthly reports for both internal and client management
- Should be able to help team members with technical issues
- Should be a good learner and open to learning new functionalities

Posted 1 month ago

Apply

0 years

1 - 9 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

This role is crucial for us on the Cloud Data Engineering team (Data Exchange) for all cloud development, migration, and support work related to WellMed Data Services. The team and role maintain and support the EDW and IS cloud modernization at WellMed, which involves cloud development for data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation.

Primary Responsibility:
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Hands-on experience in Cloud Data Engineering
- Contributed to cloud development, migration, and support work
- Proven ability to independently maintain and support EDW, cloud modernization, SQL development, Azure cloud development, and ETL using Azure Data Factory
- Proven success implementing data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

12.0 - 14.0 years

0 Lacs

Greater Bengaluru Area

On-site

Area(s) of responsibility
Job Description - Azure Tech Project Manager
Experience Required: 12-14 years

- The Project Manager will be responsible for driving project management activities in Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2)
- Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization
- Experience in Azure DevOps
- Experience in Data Warehouse, Data Lake, and Visualizations
- Project management skills, including time and risk management, resource prioritization, and project structuring
- Responsible for end-to-end project execution and delivery across multiple clients
- Understand ITIL processes related to incident management, problem management, application life cycle management, and operational health management
- Strong in Agile and the Jira tool
- Strong customer service, problem solving, organizational, and conflict management skills
- Should be able to prepare weekly/monthly reports for both internal and client management
- Should be able to help team members with technical issues
- Should be a good learner and open to learning new functionalities

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities
- Build and optimize ETL/ELT pipelines using Databricks and ADF, ingesting data from diverse sources including APIs, flat files, and operational databases.
- Develop and maintain scalable PySpark jobs for batch and incremental data processing across Bronze, Silver, and Gold layers.
- Write clean, production-ready Python code for data processing, orchestration, and integration tasks.
- Contribute to the medallion architecture design and help implement data governance patterns across data layers.
- Collaborate with analytics, data science, and business teams to design pipelines that meet performance and data quality expectations.
- Monitor, troubleshoot, and continuously improve pipeline performance and reliability.
- Support CI/CD for data workflows using Git, Databricks Repos, and optionally Terraform for infrastructure-as-code.
- Document pipeline logic, data sources, schema transformations, and operational playbooks.

Required Qualifications
- 3-5 years of experience in data engineering roles with increasing scope and complexity.
- Strong hands-on experience with Databricks, including Spark, Delta Lake, and SQL-based transformations.
- Proficiency in PySpark and Python for large-scale data manipulation and pipeline development.
- Hands-on experience with Azure Data Factory for orchestrating data workflows and integrating with Azure services.
- Solid understanding of data modeling concepts and modern warehousing principles (e.g., star schema, slowly changing dimensions).
- Comfortable with Git-based development workflows and collaborative coding practices.

Preferred / Bonus Qualifications
- Experience with Terraform to manage infrastructure such as Databricks workspaces, ADF pipelines, or storage resources.
- Familiarity with Unity Catalog, Databricks Asset Bundles (DAB), or Delta Live Tables (DLT).
- Experience with Azure DevOps or GitHub Actions for CI/CD in a data environment.
- Knowledge of data governance, role-based access control, or data quality frameworks.
- Exposure to real-time ingestion using tools like Event Hubs, Azure Functions, or Autoloader.
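Since the bonus qualifications mention Autoloader and real-time ingestion, here is a minimal, hedged sketch of a Databricks Auto Loader (cloudFiles) stream landing raw JSON into a Bronze table. The storage paths and table name are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS landing zone, schema-tracking path, and Bronze table name.
landing = "abfss://lake@mystorageaccount.dfs.core.windows.net/landing/events/"

bronze_stream = (
    spark.readStream.format("cloudFiles")              # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://lake@mystorageaccount.dfs.core.windows.net/_schemas/events")
    .load(landing)
    .withColumn("ingested_at", F.current_timestamp())  # audit column for incremental processing
)

(
    bronze_stream.writeStream
    .option("checkpointLocation",
            "abfss://lake@mystorageaccount.dfs.core.windows.net/_checkpoints/events_bronze")
    .trigger(availableNow=True)                         # process the backlog incrementally, then stop
    .toTable("lake.bronze.events")
)
```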

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

1. Strategy, Framework, and Governance Operating Model
- Develop and maintain enterprise-wide data governance strategies, standards, and policies.
- Align governance practices with business goals like regulatory compliance and analytics readiness.
- Define roles and responsibilities within the governance operating model.
- Drive governance maturity assessments and lead change management initiatives.

2. Stakeholder Alignment & Organizational Enablement
- Collaborate across IT, legal, business, and compliance teams to align governance priorities.
- Define stewardship models and create enablement, training, and communication programs.
- Conduct onboarding sessions and workshops to promote governance awareness.

3. Architecture Design for Data Governance Platforms
- Design scalable and modular data governance architecture.
- Evaluate tools like Microsoft Purview, Collibra, Alation, BigID, Informatica.
- Ensure integration with metadata, privacy, quality, and policy systems.

4. Microsoft Purview Solution Architecture
- Lead end-to-end implementation and management of Microsoft Purview.
- Configure RBAC, collections, metadata scanning, business glossary, and classification rules.
- Implement sensitivity labels, insider risk controls, retention, data map, and audit dashboards.

5. Metadata, Lineage & Glossary Management
- Architect metadata repositories and ingestion workflows.
- Ensure end-to-end lineage (ADF → Synapse → Power BI).
- Define governance over business glossary and approval workflows.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

This is a full-time remote contract position (so no freelancing or moonlighting is possible). You may need to provide a few hours of overlapping time with the US timezone. You may need to go through a background verification process in which your claimed experience, education certificates, and references will be verified, so please do not apply if you are not comfortable with this process. This is a client-facing role, hence excellent communication in English is a MUST.

Minimum Experience: 5+ years

About the role: Our client is about to start an ERP replacement. They plan to move away from the AWS platform and move to an Azure data lake feeding Snowflake. We need a resource who can be a Snowflake thought leader and who has Microsoft Azure data engineering expertise.

Key Responsibilities:
- Data Ingestion & Orchestration (Transformation & Cleansing):
  - Design and maintain Azure Data Factory (ADF) pipelines: extract data from sources like ERPs (SAP, Oracle), UKG, SharePoint, and REST APIs.
  - Configure scheduled/event-driven loads: set up ADF for automated data ingestion.
  - Transform and cleanse data: develop logic in ADF for Bronze-to-Silver layer transformations.
  - Implement data quality checks: ensure accuracy and consistency.
- Snowflake Data Warehousing:
  - Design, develop, and optimize data models within Snowflake, including creating tables, views, and stored procedures for both Silver and Gold layers.
  - Implement ETL/ELT processes within Snowflake to transform curated Silver data into highly optimized analytical Gold structures.
  - Performance tuning: optimize queries and data loads.
- Data Lake Management:
  - Implement Azure Data Lake Gen2 solutions following the medallion architecture (Bronze, Silver).
  - Manage partitioning, security, and governance to ensure efficient and secure data storage.
- Collaboration & Documentation: Partner with stakeholders to convert data needs into technical solutions, document pipelines and models, and uphold best practices through code reviews.
- Monitoring & Support: Track pipeline performance, resolve issues, and deploy alerting/logging for proactive data integrity and issue detection.
- Data visualization: proficient with tools like Power BI, DAX, and Power Query for creating insightful reports; skilled in Python for data processing and analysis to support data engineering tasks.

Required Skills & Qualifications:
- 5+ years of experience in data engineering, data warehousing, or ETL development.
- Microsoft Azure proficiency:
  - Azure Data Factory (ADF): experience designing, developing, and deploying complex data pipelines.
  - Azure Data Lake Storage Gen2: hands-on experience with data ingestion, storage, and organization.
- Expertise in Snowflake Data Warehouse and ETL/ELT: understanding of Snowflake architecture; SQL proficiency for manipulation and querying; experience with Snowpipe, tasks, streams, and stored procedures; strong understanding of data warehousing concepts and ETL/ELT principles.
- Data formats & integration: experience with various data formats (e.g., Parquet, CSV, JSON) and data integration patterns.
- Data visualization: experience with Power BI, DAX, and Power Query.
- Scripting: Python for data processing and analysis.
- Soft skills: problem-solving, attention to detail, communication, and collaboration.

Nice-to-Have Skills: version control (e.g., Git), Agile/Scrum methodologies, and data governance and security best practices.
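As an illustration of the Snowpipe/streams/tasks experience this role asks for, here is a hedged sketch that sets up a stream plus a scheduled task to merge landed changes into a curated table, executed through the Snowflake Python connector. Object names, the schedule, and the credentials are hypothetical.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a key vault or environment variables.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="SALES",
)

statements = [
    # Stream captures row-level changes on the landed table.
    "CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders",
    # Task periodically merges captured changes into the curated table.
    """
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = LOAD_WH
      SCHEDULE  = '15 MINUTE'
    AS
      MERGE INTO curated_orders t
      USING orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET status = s.status, amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, status, amount)
        VALUES (s.order_id, s.status, s.amount)
    """,
    "ALTER TASK merge_orders_task RESUME",   # tasks are created suspended by default
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```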

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Years of experience: 5+ years

JD: High-Level Responsibilities
- Design and develop scalable data pipelines using Azure Data Factory, incorporating SSIS packages where applicable.
- Write and optimize T-SQL queries for data transformation, validation, and loading.
- Collaborate with the customer's data architects to understand and modernize legacy data integration patterns.
- Perform relational database design and schema optimization for Azure SQL or Synapse targets.
- Support migration of on-premise or legacy ETL jobs into cloud-native Azure Integration Services.
- Conduct unit testing and troubleshoot data pipeline issues during sprint cycles.
- Provide support during UK overlap hours (up to 8 PM IST) to align with the customer team's collaboration windows.

Mapped Skills
- Azure Data Factory development (SSIS helpful)
- T-SQL development
- Relational database design
- SQL Server Management Studio
- Azure Data Studio
- Azure Portal
- Visual Studio 2022
- Experience migrating existing integrations to AIS

Recommended Skills
- Azure Synapse Analytics (often paired with ADF in modern pipelines)
- Data flow transformations in ADF
- Data lake concepts and Azure Data Lake Gen2
- Monitoring and debugging ADF pipelines
- Integration Runtime setup and optimization
- Azure Key Vault integration in ADF
- Performance tuning in SQL Server and Azure SQL DB
- Knowledge of the Delta Lake format if modern analytics is a goal

Posted 1 month ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Snowflake Developer/Data Engineer
Location: Chennai (Hybrid)
Experience: 6+ Years

About the Role
We are looking for a Snowflake Developer with 6+ years of hands-on experience in Snowflake, SnowSQL, Cortex, DBT, and data warehousing. The ideal candidate should have strong expertise in data modeling, transformation, and optimization, along with excellent communication skills to collaborate with business and technical teams.

Key Responsibilities
- Develop and optimize Snowflake data models, schemas, and performance-tuned queries.
- Write and execute SnowSQL scripts for data transformation and automation.
- Utilize Snowflake Cortex to integrate AI-driven analytics and insights.
- Implement DBT (Data Build Tool) for data transformation, testing, and orchestration.
- Design and maintain ADF data pipelines and ETL/ELT workflows.
- Collaborate with cross-functional teams to understand data needs and provide solutions.
- Ensure data security, governance, and best practices in Snowflake.
- Troubleshoot performance issues and implement tuning strategies.

Required Skills & Qualifications
- 6+ years of hands-on experience with Snowflake and cloud data warehousing.
- Strong expertise in SnowSQL and DBT; expertise in Cortex is a plus.
- Experience in data modeling, performance tuning, and query optimization.
- Hands-on experience with ETL/ELT processes and data pipelines.
- Strong understanding of SQL, data warehousing concepts, and cloud architecture.
- Experience integrating Snowflake with other BI/analytics tools.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills to interact with business and technical stakeholders.
- Knowledge of or hands-on experience with Power BI and Fabric is a plus.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job description
Job Name: Senior Data Engineer - DBT & Snowflake
Years of Experience: 5

Job Description: We are looking for a skilled and experienced DBT-Snowflake Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: DBT, Snowflake
Secondary Skills: ADF, Databricks, Python, Airflow, Fivetran, Glue

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform, being in charge of or involved in architecting, building, and managing data flows/pipelines, and constructing data storage (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tools will be an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirements:
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake; ensure the effective transformation and loading of data from diverse sources into the data warehouse or data lake.
- Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs.
- Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
- Establish DBT best practices to improve performance, scalability, and reliability.
- Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures.
- Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP).
- Migrate legacy transformation code into modular DBT data models.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

We are seeking highly motivated and skilled DevOps Support Engineers to join our team. The ideal candidates will have a strong background in modern DevOps tools and practices, with expertise in Kubernetes, Snowflake, Python, Azure, Azure Data Factory (ADF), and other relevant technologies. This role requires a blend of technical expertise, problem-solving skills, and a customer-focused mindset to ensure the smooth operation and scalability of our infrastructure.

Location: Off-Shore (India)
Positions: 4

Key Responsibilities:
1. Platform Support and Maintenance
- Provide day-to-day operational support for our systems, ensuring high availability, performance, and reliability.
- Monitor, troubleshoot, and resolve issues related to Kubernetes clusters, Snowflake data pipelines, and Azure infrastructure.
- Collaborate with cross-functional teams to address incidents and implement robust solutions.
2. Infrastructure Automation and Optimization
- Develop and maintain automation scripts and tools using Python to streamline deployment, monitoring, and scaling processes.
- Optimize Kubernetes cluster configurations, including resource allocation and scaling strategies.
- Implement best practices for cloud resource utilization on Azure to reduce costs and improve efficiency.
3. Data Pipeline Management
- Support and enhance data pipelines built on Snowflake and Azure Data Factory (ADF).
- Monitor data flow, troubleshoot pipeline failures, and ensure data integrity and availability.
- Collaborate with data engineering teams to implement new data workflows and improve existing pipelines.
4. Security and Compliance
- Ensure the platform adheres to security standards and compliance requirements.
- Perform regular audits of infrastructure and implement security patches as needed.
- Manage role-based access control (RBAC) and permissions in Kubernetes, Snowflake, and Azure environments.
5. Collaboration and Communication
- Work closely with development, QA, and product teams to ensure seamless integration and deployment of new features.
- Participate in on-call rotations to provide 24/7 support for critical issues.
- Document processes, configurations, and troubleshooting guides to improve knowledge sharing across the team.

Required Skills and Qualifications:
1. Technical Expertise
- Proficient in managing Kubernetes clusters, including deployment, scaling, and monitoring.
- Hands-on experience with Snowflake, including data modeling, query optimization, and pipeline management.
- Strong programming skills in Python for automation and scripting.
- Solid understanding of Azure cloud services, including compute, storage, networking, and identity management.
- Familiarity with Azure Data Factory (ADF) for building and managing ETL/ELT pipelines.
2. DevOps Practices
- Experience with CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions, Azure DevOps).
- Knowledge of infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Proficiency in monitoring tools like Prometheus, Grafana, or Azure Monitor.
3. Soft Skills
- Excellent problem-solving and analytical skills, with a proactive mindset.
- Strong communication skills to work effectively with cross-functional teams.
- Ability to prioritize tasks and manage multiple responsibilities in a fast-paced environment.
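To illustrate the Python automation side of the Kubernetes support work described above, here is a minimal sketch using the official Kubernetes Python client to flag pods that are not healthy in a given namespace. The namespace name is hypothetical, and a local kubeconfig is assumed (inside a cluster you would call config.load_incluster_config() instead).

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumption: running outside the cluster).
config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "data-platform"  # hypothetical namespace

# Flag pods that are not Running/Succeeded so on-call can triage them.
unhealthy = []
for pod in v1.list_namespaced_pod(namespace=NAMESPACE).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.name, phase))

for name, phase in unhealthy:
    print(f"ALERT: pod {name} is in phase {phase}")
```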

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role: We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
- Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran.
- Translate complex business and data requirements into scalable, efficient data engineering solutions.
- Architect multi-cluster Snowflake setups with an eye on performance and cost.
- Design and implement robust CI/CD pipelines for data workflows (Git-based).
- Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals.
- Mentor and review the work of onshore/offshore data engineers.
- Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices.
- Handle real-time data processing scenarios where applicable.
- Own end-to-end delivery and documentation for data engineering projects.

Must-Have Skills:
- Fivetran: proven experience integrating and managing Fivetran connectors and sync strategies.
- Snowflake expertise: warehouse management, cost optimization, and query tuning; internal vs. external stages and loading/unloading strategies; schema design, security model, and user access.
- Python (advanced): modular, production-ready code for ETL/ELT, APIs, and orchestration.
- DBT: strong command of DBT for transformation workflows and modular pipelines.
- Azure: Azure Data Factory (ADF) and Databricks, with integration to Snowflake and other services.
- SQL: expert-level SQL for transformations, validations, and optimizations.
- Version control: Git, branching, pull requests, and peer code reviews.
- CI/CD: DevOps/DataOps workflows for data pipelines.
- Data modeling: star schema, Data Vault, normalization/denormalization techniques.
- Strong documentation using Confluence, Word, Excel, etc.
- Excellent communication skills, verbal and written.

Good to Have:
- Experience with real-time data streaming tools (Event Hub, Kafka).
- Exposure to monitoring/data observability tools.
- Experience with cost management strategies for cloud data platforms.
- Exposure to Agile/Scrum-based environments.

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

This role is crucial for us on the Cloud Data Engineering team (Data Exchange) for all cloud development, migration, and support work related to WellMed Data Services. The team and role maintain and support the EDW and IS cloud modernization at WellMed, which involves cloud development for data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation.

Primary Responsibility
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Hands-on experience in Cloud Data Engineering
- Contributed to cloud development, migration, and support work
- Proven ability to independently maintain and support EDW, cloud modernization, SQL development, Azure cloud development, and ETL using Azure Data Factory
- Proven success implementing data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply