
1444 ADF Jobs - Page 25


7.0 years

1 - 9 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Functions may include database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, data provisioning and other similar roles. Responsibilities may include Platform-as-a-Service and cloud solutions with a focus on data stores and associated ecosystems. Duties may include management of design services, providing sizing and configuration assistance, ensuring strict data quality, and performing needs assessments. The role analyzes current business practices, processes and procedures, and identifies future business opportunities for leveraging data storage and retrieval system capabilities. It manages relationships with software and hardware vendors to understand the potential architectural impact of different vendor strategies and data acquisition, may design schemas, write SQL or other data markup scripting, and helps support development of analytics and applications that build on top of the data. Selects, develops and evaluates personnel to ensure the efficient operation of the function.

Primary Responsibilities:
- Participate in the scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team
- Understand the product architecture and features being built, and come up with product improvement ideas and POCs
- Analyze and investigate issues; provide explanations and interpretations within the area of expertise
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 7+ years of implementation experience on time-critical production projects following key software development practices
- 5+ years of programming experience in Python or another programming language
- Tools/technologies - programming languages: Python, PySpark; cloud technologies: Azure (ADF, Databricks, WebApp, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, DevOps); DevOps; implementation of big data, Apache Spark and Azure cloud
- Deep experience in data analysis, including source data analysis, data profiling and mapping
- Good experience building data pipelines using ADF/Azure Databricks
- Proven hands-on experience with a large-scale data warehouse
- Hands-on data migration experience from legacy systems to new solutions, such as from on-premises clusters to the cloud
- Hands-on programming experience in Spark using Scala/Python
- Large-scale data processing using PySpark on the Azure ecosystem
- Implementation of a self-service analytics platform and ETL framework using PySpark on Azure
- Expert skills in Azure data processing tools (Azure Data Factory, Azure Databricks)
- Solid proficiency in SQL and complex queries
- Ability to learn and adapt to new data technologies
- Proven problem-solving and communication skills

Preferred Qualifications:
- Knowledge of/experience with Azure Synapse and Power BI
- Knowledge of the US healthcare industry/pharmacy data

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
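For context on the day-to-day work this posting describes (ADF-orchestrated Databricks jobs in PySpark), here is a minimal, hypothetical sketch of such a batch job: read raw data from a lake path, apply simple cleansing, and write a curated output. The paths, column names and storage layout are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark sketch of an ADF-orchestrated Databricks batch job.
# All paths and column names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_daily_load").getOrCreate()

# Read raw source data landed in the data lake (hypothetical path/schema).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/claims/")

# Basic cleansing/profiling-style transformation: drop duplicates,
# standardize a date column, and filter out obviously bad rows.
cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("service_date", F.to_date("service_date"))
       .filter(F.col("claim_amount") > 0)
)

# Write the curated output, partitioned by date for downstream consumers.
(cleaned.write
        .mode("overwrite")
        .partitionBy("service_date")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/claims/"))
```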

Posted 1 month ago

Apply

2.0 years

0 Lacs

India

On-site

Job Description: Full Stack Power BI / Data Analytics Trainer

KSR Datavizon Pvt Ltd is hiring a Full Stack Power BI Trainer to lead our Data Analytics training program and mentor future business intelligence professionals.

🔹 Responsibilities:
- Deliver online or offline training sessions on Power BI, SQL, ADF (Azure Data Factory), and Microsoft Fabric
- Teach both visualization and data engineering workflows relevant to end-to-end BI project delivery
- Help students master data modeling, DAX, ETL concepts, and dashboard design best practices
- Design assessments, case studies, and real-time use cases for practice
- Keep training modules aligned with current BI industry needs

🔹 Required Skills:
- Strong knowledge of Power BI Desktop and Service, DAX, and Power Query
- Proficiency in SQL (joins, CTEs, functions)
- Hands-on experience with Azure Data Factory (ADF) and Microsoft Fabric (Lakehouse, Pipelines)
- Experience in building real-time dashboards and publishing Power BI reports
- Excellent presentation and communication skills
- Previous training or mentorship experience is a plus

📍 Location: Hyderabad (On-site)
🕒 Type: Part-time (Flexible hours)
🧑‍💼 Experience: 2+ years in Data Analytics / BI or Training

Join us to shape the careers of the next-gen Power BI professionals!
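As a small illustration of the SQL topics this trainer role covers (joins and CTEs), here is a self-contained sketch using Python's built-in sqlite3 module; the table, rows and query are invented for the example.

```python
# Self-contained sketch of SQL concepts named in the posting (joins, CTEs).
# Uses Python's stdlib sqlite3; the table and rows are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 100), ('North', 250), ('South', 75);
""")

# A CTE aggregates per region, then a join keeps regions above the average.
query = """
WITH region_totals AS (
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT r.region, r.total
FROM region_totals r
JOIN (SELECT AVG(total) AS avg_total FROM region_totals) a
  ON r.total > a.avg_total;
"""
for row in conn.execute(query):
    print(row)  # -> ('North', 350.0)
```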

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Governance Power BI Specialist
Experience: 4 - 6 Years
Location: Bangalore, Gurgaon, Pune
Notice Period: Immediate to 15 Days

Job Purpose: Evaluate the data governance framework and Power BI environment, provide recommendations for enhancing data quality and discoverability, and optimize Power BI performance.

Key Responsibilities:
- Review PowerShell (PS), SSIS, Batch Script, and C# (.NET 3.0) codebases for data processes
- Assess the complexity of trigger migration across Active Batch (AB), Synapse, ADF, and Azure Databricks (ADB)
- Define and propose transitions in the use of Azure SQL DW, SQL DB, and Data Lake (DL)
- Analyze data patterns for optimization, including raw-to-consumption loading and elimination of intermediate zones (e.g., staging/application zones)
- Understand and implement requirements for external tables (Lakehouse)
- Ensure the quality of deliverables within project timelines
- Develop an understanding of the equity market domain
- Collaborate with domain experts and stakeholders to define business rules and logic
- Maintain continuous communication with global stakeholders
- Troubleshoot complex issues across development, test, UAT, and production environments
- Coordinate end-to-end project delivery and manage client queries
- Ensure adherence to SLA/TAT and perform quality checks
- Work independently as well as collaboratively in cross-functional teams

Required Skills and Experience:
- B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- 7+ years of experience in data and cloud architecture working with client stakeholders
- Strong knowledge of Power BI, data governance, Azure Data Factory, Azure Data Lake, and Databricks
- Experience reviewing PowerShell, SSIS, Batch Script, and .NET-based codebases
- Familiarity with data optimization patterns and architecture transitions in Azure
- Project management and team leadership experience within agile environments
- Strong organizational, analytical, and communication skills
- Ability to deliver high-quality results to internal and external stakeholders

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
We are looking for a technical resource for an Oracle Apps R12 financial modules-based application. The main responsibilities will be to:
- Carry out development activity for the Oracle R12.2 release
- Interact with business users and BAs/SAs to understand the requirements
- Prepare the technical specification documents
- Develop new interfaces, conversions and reports
- Develop/customize/personalize new and existing Oracle Forms and OAF pages
- Perform impact analysis on possible code changes
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree in Computer Science / Engineering
- 3+ years of Oracle EBS (technical) experience with the R12 release
- Development experience in the EBS environment with Reports, Interfaces, Conversions, Extensions, Workflow (RICEW) and Forms deliverables
- Experience in P2P, Oracle General Ledger (GL), Accounts Payable (AP), Receivables (AR), Cash Management (CM), Sub-ledger Accounting (SLA), and System Administrator modules
- Experience with end-user interaction for requirements gathering, understanding customer needs, and working with multiple groups to coordinate and carry out technical activities, including new development, maintenance and production support
- Good knowledge of the R12 financial table structure
- Good knowledge of Agile methodologies
- Good hands-on knowledge of SQL, PL/SQL, Oracle Reports, Oracle Forms, OAF/ADF, BI Publisher reports, shell scripting and web services (Integrated SOA Gateway)
- Oracle APEX knowledge
- Knowledge of web services using Integrated SOA Gateway
- Proven analytical, performance tuning and debugging skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

Remote

We are seeking a highly experienced Data Architect with a strong background in designing scalable data solutions and leading data engineering teams. The ideal candidate will have deep expertise in Microsoft Azure, ETL processes, and modern data architecture principles. This role involves close collaboration with stakeholders, engineering teams, and business units to design and implement robust data pipelines and architectures.

Responsibilities:
- Assess existing data components, perform POCs, and consult with stakeholders
- Propose end-to-end solutions to an enterprise's data-specific business problems, covering data collection, extraction, integration, cleansing, enrichment and data visualization
- Design large data platforms that enable data engineers, analysts & scientists
- Bring strong exposure to different data architectures, data lakes & data warehouses
- Design and implement end-to-end data architecture solutions on the Azure cloud platform
- Lead the design and development of scalable ETL/ELT pipelines using tools such as Azure Data Factory (ADF)
- Architect data lakes using Azure Data Lake Storage (ADLS) and integrate with Azure Synapse Analytics for enterprise-scale analytics
- Collaborate with business analysts, data scientists, and engineers to understand data needs and deliver high-performing solutions
- Define data models, metadata standards, data quality rules, and security protocols
- Define tools & technologies to develop automated data pipelines, write ETL processes, develop dashboards & reports and create insights
- Continually reassess the current state for alignment with architecture goals, best practices and business needs
- Handle DB modeling, decide on the best data storage, create data flow diagrams, and maintain related documentation
- Take care of performance, reliability, reusability, resilience, scalability, security, privacy & data governance while designing a data architecture
- Apply or recommend best practices in architecture, coding, API integration, and CI/CD pipelines
- Coordinate with data scientists, analysts, and other stakeholders for data-related needs
- Help the Data Science & Analytics Practice grow by mentoring junior Practice members, leading initiatives, and leading Data Practice offerings
- Provide thought leadership by representing the Practice / Organization on internal and external platforms

Qualifications:
- 8+ years of experience in data architecture, data engineering, or related roles
- Ability to translate business requirements into data requests, reports and dashboards
- Strong database & modeling concepts with exposure to SQL & NoSQL databases
- Expertise in designing and writing ETL processes in Python/PySpark
- Strong grasp of data architecture patterns & principles; ability to design secure & scalable data lakes, data warehouses, data hubs, and other event-driven architectures
- Proven expertise in Microsoft Azure data services, especially ADF, ADLS, and Synapse Analytics
- Strong hands-on experience in designing and building ETL/ELT pipelines
- Proficiency in data modeling, SQL, and performance tuning
- Demonstrated leadership experience, with the ability to manage and mentor technical teams
- Excellent communication and stakeholder management skills
- Proficiency in data visualization tools like Tableau, Power BI or similar to create meaningful insights
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Good to have:
- Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect)
- Experience with modern data platforms, data governance frameworks, and real-time data processing tools

Benefits: Imagine a flexible work environment - whether it's the office, your home, or a blend of both. From interviews to onboarding, we embody a remote-first approach. You will be part of a global team, learning from top talent around the world and across cultures, speaking English every day. Our global workforce enables our team to leverage global resources to accomplish our work in efficient and effective teams. We're big on your well-being - as a company, we spend a whole trimester in our annual cycle focused on well-being. Whether it is taking advantage of fitness offerings, mental health plans (country-dependent), or simply leveraging generous time off, we want all of our team members operating at their best. Our professional services model enables us to accelerate career growth and development opportunities - across projects, offerings, and industries. We are an equal opportunity employer. It goes without saying that we live by values like Intrinsic Dignity and Open Collaboration to create cutting-edge technology AND reinforce our commitment to diversity - globally and locally. Join us and be a part of a global tech community! 🌍💼 Check out our LinkedIn site and Careers page to learn more about what it's like to be part of our #oneteam!
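Since the role calls for designing ETL/ELT pipelines in PySpark on ADLS, here is a minimal, hypothetical sketch of one common pattern such pipelines use: an idempotent upsert (MERGE) into a Delta table. The table paths, business key and the use of Delta Lake are illustrative assumptions, not details taken from the posting.

```python
# Hypothetical upsert (MERGE) pattern for an ELT pipeline on a data lake.
# Assumes Delta Lake (delta-spark) is configured; names/paths are invented.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer_upsert").getOrCreate()

# New batch of records arriving from a source system (illustrative path).
updates = spark.read.parquet(
    "abfss://staging@examplelake.dfs.core.windows.net/customers/"
)

target = DeltaTable.forPath(
    spark, "abfss://curated@examplelake.dfs.core.windows.net/customers/"
)

# Merge on the business key: update existing rows, insert new ones.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```

Re-running the same batch leaves the target unchanged, which is why merge-based loads are preferred over blind appends for restartable pipelines.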

Posted 1 month ago

Apply

4.0 - 8.0 years

27 - 32 Lacs

Bengaluru

Work from Office

Skill Set Required:
- Strong expertise in DAX modeling (minimum of 6 years of relevant experience)
- Hands-on experience with Power BI (reporting and modeling)
- Data engineering exposure
- Proficiency in SQL and ETL processes
- Experience with data warehousing (working on terabyte-scale data)
- Familiarity with Azure and related data management tools

Interested candidates can share their updated resume to rolly.martin@thompsonshr.com

Posted 1 month ago

Apply

75.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

About Advance Auto Parts
Founded in Roanoke, VA in 1932, Advance Auto Parts is a leading automotive aftermarket retail parts provider that serves both professional installer and do-it-yourself customers. As of July 13, 2019, Advance operated 4,912 stores and 150 Worldpac branches in the United States, Canada, Puerto Rico, and the U.S. Virgin Islands. The Company also serves 1,250 independently owned CARQUEST-branded stores across these locations, in addition to Mexico, the Bahamas, Turks and Caicos, and the British Virgin Islands. The company has a workforce of over 70,000 knowledgeable and experienced Team Members who are proud to provide outstanding service to their Customers, Communities, and each other every day.

About AAP Global Capability Centre
We are continually innovating and seeking to elevate the Customer experience at each of our stores. For an organization of our size and reach, it has become more critical than ever to identify synergies and build shared capabilities. The AAP Global Capability Center, located in Hyderabad, is a step in this strategic direction that enables us to access a larger talent pool, unlock operational efficiencies and increase levels of collaboration.

About Information Technology
At Advance Auto Parts, the IT organization is embracing the digitization of retail and working to transform our organization into a leader in the modern age of retail. We are leading the way with DevOps, thinking cloud-first, and adopting modern approaches to infrastructure management. We realize Agile is more than a manifesto, and that applications need to be portable, event- and service-oriented, and support a data analytics and data-first culture of the modern business. We are taking action to transform a 75-year-old company into an industry leader in building a best-in-class omnichannel experience for its customers.

Software Developer, Level 9

Job Summary
As a seasoned Software Developer at Level 9, you will combine expertise in Azure services, API services, and database management with a sophisticated blend of data analytics skills, domain knowledge, and technical prowess to translate data into actionable business insights. You will design, develop, and optimize cloud-based solutions, ensuring performance, scalability, and security, with an in-depth understanding of technical design and frameworks. In this capacity, you possess the technical skills required to build functionality, troubleshoot issues, and act as the primary point of contact for interactions with business stakeholders, external partners, and internal collaborators.

Essential Duties and Responsibilities

Azure Cloud Engineering:
- Manage and administer Microsoft Azure services, including provisioning, performance monitoring, security, and governance
- Design and implement data pipelines for ingesting, transforming, and integrating data from various sources (MS SQL, DB2, APIs, Kafka, external vendor files, etc.)
- Ensure data integrity, identify inconsistencies, and oversee successful data releases in cloud environments
- Develop strategies to optimize Azure cloud architecture, ensuring efficiency, security, and cost-effectiveness

API & Integration:
- Build, consume, and maintain RESTful APIs and services using Postman and related tools
- Work on microservices architectures, ensuring seamless data flow across integrated applications
- Utilize Azure Linked Servers and other cloud-native database services
- Develop and optimize MS SQL Server databases, including complex queries, stored procedures, data modelling, and tuning
- Implement data warehousing principles (e.g., Slowly Changing Dimensions, facts vs. dimensions)
- Maintain applications with a focus on scalability, operational efficiency, and troubleshooting production issues

Collaboration & Process Improvement:
- Work closely with stakeholders, project managers, and cross-functional teams to understand business needs and deliver solutions
- Identify and implement process improvements for data integration, governance, and cloud operations
- Provide mentorship and technical guidance to junior developers
- Investigate and resolve system issues across multiple platforms
- Participate in an on-call rotation to support production systems as needed
- Adapt to shifting priorities in a dynamic work environment

Required Qualifications

Technical Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in Azure cloud engineering, data integration, and pipeline development
- Strong expertise in Azure Data Factory (ADF), Databricks, Azure Pipelines, and related cloud services
- Hands-on experience with REST APIs, Postman, and JSON-based integrations
- Proficiency in MS SQL Server, database modeling, and performance optimization
- Familiarity with CI/CD tools (Azure DevOps, Jenkins, Git, etc.)
- Familiarity with Power BI, VS Code, and SQL Server permissions for ETL/reporting
- Strong background in Agile/Scrum methodologies for project execution
- Demonstrated knowledge of building, debugging, and maintaining enterprise cloud applications

Soft Skills:
- Excellent problem-solving and analytical skills with attention to detail
- Strong collaboration, communication, and stakeholder management abilities
- Ability to work independently and lead cross-functional teams
- Proven track record of meeting deadlines and adapting to dynamic priorities

California residents, click below for the privacy notice: https://jobs.advanceautoparts.com/us/en/disclosures

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, basis of disability, or any other federal, state or local protected class.
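The posting mentions data-warehousing principles such as Slowly Changing Dimensions. As a rough, self-contained illustration of the Type 2 pattern, here is a sketch using Python's stdlib sqlite3; the table, columns and close/insert logic are invented for the example, and a production version would typically live in T-SQL or an ETL tool.

```python
# Illustrative Slowly Changing Dimension (Type 2) update via stdlib sqlite3.
# Schema and rows are invented; production versions would be T-SQL/ETL jobs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_store (
        store_id INTEGER, city TEXT,
        valid_from TEXT, valid_to TEXT, is_current INTEGER
    );
    INSERT INTO dim_store VALUES (1, 'Roanoke', '2020-01-01', NULL, 1);
""")

def apply_scd2(store_id, new_city, change_date):
    """Close the current row and insert a new current version."""
    cur = conn.execute(
        "SELECT city FROM dim_store WHERE store_id=? AND is_current=1",
        (store_id,),
    ).fetchone()
    if cur and cur[0] != new_city:
        conn.execute(
            "UPDATE dim_store SET valid_to=?, is_current=0 "
            "WHERE store_id=? AND is_current=1",
            (change_date, store_id),
        )
        conn.execute(
            "INSERT INTO dim_store VALUES (?, ?, ?, NULL, 1)",
            (store_id, new_city, change_date),
        )

apply_scd2(1, 'Hyderabad', '2024-06-01')
print(conn.execute("SELECT * FROM dim_store ORDER BY valid_from").fetchall())
# Old row is closed with valid_to='2024-06-01'; a new current row is added,
# preserving history rather than overwriting it (the point of Type 2).
```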

Posted 1 month ago

Apply

12.0 - 22.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Role & responsibilities:
- Lead (hands-on) a team of Data Engineers
- Good communication and strong technical design decision-making
- Strong experience in ETL / data warehousing
- Strong experience in Databricks, Unity Catalog, Medallion Architecture, data lineage & PySpark, CTEs
- Good experience in data analysis
- Strong experience in SQL queries, stored procedures, views, functions, and UDFs (user-defined functions)
- Experience in Azure Cloud: ADF, Storage & Containers, Azure DB, Azure Data Lake
- Experience in SQL Server
- Experience in data migration & production support
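For readers unfamiliar with the Medallion Architecture this role names, here is a compressed, hypothetical PySpark sketch of bronze/silver/gold layering on Databricks; the paths, schema and business rules are invented for illustration.

```python
# Hypothetical bronze -> silver -> gold medallion flow on Databricks.
# Paths, columns and rules are invented; real jobs would be parameterized.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw ingestion, stored as-is with load metadata.
bronze = (spark.read.json("/mnt/landing/orders/")
               .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleaned and conformed - typed columns, de-duplicated keys.
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
               .dropDuplicates(["order_id"])
               .withColumn("order_ts", F.to_timestamp("order_ts")))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: business-level aggregate ready for BI consumption.
gold = (silver.groupBy("customer_id")
              .agg(F.sum("amount").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_ltv")
```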

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Data Engineer
Location: Hyderabad, Chennai & Bangalore
Experience: 5+ Years

Job Summary
We are looking for a highly skilled and experienced Senior Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and optimizing data pipelines and data architecture, as well as experience with big data technologies and Azure cloud services. You will work closely with cross-functional teams to ensure data is accessible, reliable, and ready for analytics and business insights.

Mandatory Skills
- Advanced SQL
- Python or Scala for data engineering
- ETL pipeline development
- Cloud platforms (AWS/GCP/Azure)
- Azure first-party services (ADF, Azure Databricks, Synapse, etc.)
- Big data tools (Spark, Hadoop)
- Data warehousing (Redshift, Snowflake, BigQuery)
- Workflow orchestration tools (Airflow, Prefect, or similar; see the sketch after this listing)

Key Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines
- Demonstrate experience and leadership across two full project cycles using Azure Data Factory, Azure Databricks, and PySpark
- Collaborate with data analysts, scientists, and software engineers to understand data needs
- Implement data quality checks and monitoring systems
- Optimize data delivery and processing across a wide range of sources and formats
- Ensure security and governance policies are followed in all data handling processes
- Evaluate and recommend tools and technologies to improve data engineering capabilities
- Lead and mentor junior data engineers as needed
- Work with cross-functional teams in a dynamic and fast-paced environment

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Certification in cloud platforms or big data technologies (preferred)
- 5+ years of data engineering experience

Technical Skills
- Programming: Python, Scala, SQL
- Big data: Spark, Hadoop, Hive
- Stream processing: Storm, Spark Streaming
- Data warehousing: Snowflake, BigQuery, Redshift
- Cloud: AWS (S3, Lambda, Glue), GCP, Azure (ADF, Azure Databricks)
- Orchestration: Apache Airflow, Prefect, Luigi
- Databases: PostgreSQL, MySQL, NoSQL (MongoDB, Cassandra)
- Tools: Git, Docker, Kubernetes (basic), CI/CD

Soft Skills
- Strong problem-solving and analytical thinking
- Excellent verbal and written communication
- Ability to manage multiple tasks and deadlines
- Collaborative mindset with a proactive attitude
- Strong analytical skills related to working with unstructured datasets

Good to Have
- Experience with real-time data processing (Kafka, Flink)
- Knowledge of data governance and privacy regulations (GDPR, HIPAA)
- Familiarity with ML model data pipeline integration

Work Experience
- Minimum 5 years of relevant experience in data engineering roles
- Experience with Azure first-party services across at least two full project lifecycles

Compensation & Benefits
- Competitive salary and annual performance-based bonuses
- Comprehensive health and optional parental insurance
- Retirement savings plans and tax savings plans
- Work-life balance: flexible work hours

Key Result Areas (KRAs)
- Timely development and delivery of high-quality data pipelines
- Implementation of scalable data architectures
- Collaboration with cross-functional teams for data initiatives
- Compliance with data security and governance standards

Key Performance Indicators (KPIs)
- Uptime and performance of data pipelines
- Reduction in data processing time
- Number of critical bugs post-deployment
- Stakeholder satisfaction scores
- Successful data integrations and migrations

Contact: hr@bigtappanalytics.com
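Because the role lists workflow orchestration tools such as Airflow, here is a minimal, hypothetical Airflow DAG sketch (recent Airflow 2.x API) showing how an extract task might feed a load task; the task bodies, IDs and daily schedule are invented for illustration.

```python
# Minimal hypothetical Airflow DAG: two dependent tasks on a daily schedule.
# Task bodies are stubs; IDs, schedule and logic are invented examples.
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("extract source data for", context["ds"])

def load(**context):
    print("load curated data for", context["ds"])

with DAG(
    dag_id="example_daily_pipeline",
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```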

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Architect
Location: Hyderabad, Chennai & Bangalore
Experience: 10+ Years

Job Summary
We are seeking a highly experienced and strategic Data Architect to lead the design and implementation of robust data architecture solutions. The ideal candidate will have a deep understanding of data modeling, governance, integration, and analytics platforms. As a Data Architect, you will play a crucial role in shaping the data landscape across the organization, ensuring data availability, consistency, security, and quality.

Mandatory Skills
- Enterprise data architecture and modeling
- Cloud data platforms (Azure, AWS, GCP)
- Data warehousing and lakehouse architecture
- Data governance and compliance frameworks
- ETL/ELT design and orchestration
- Master Data Management (MDM)
- Databricks architecture and implementation

Key Responsibilities
- Lead, define, and implement end-to-end modern data platforms on the public cloud using Databricks
- Design and manage scalable data models and storage solutions
- Collaborate with enterprise architects, data architects, ETL developers & engineers, data scientists, and information designers to define required data structures, formats, pipelines, metadata, and workload orchestration capabilities
- Establish data standards, governance policies, and best practices
- Oversee the integration of new data technologies and tools
- Lead the development of data pipelines, marts, and lakes
- Ensure data solutions are compliant with security and regulatory standards
- Address aspects such as data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations
- Mentor data engineers and developers on best practices

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Relevant certifications in cloud platforms, data architecture, or governance

Technical Skills
- Data modeling: conceptual, logical, and physical modeling (ERwin, PowerDesigner, etc.)
- Cloud: Azure (ADF, Synapse, Databricks), AWS (Redshift, Glue), GCP (BigQuery)
- Databases: SQL Server, Oracle, PostgreSQL, NoSQL (MongoDB, Cassandra)
- Data integration: Informatica, Talend, Apache NiFi
- Big data: Hadoop, Spark, Kafka
- Governance tools: Collibra, Alation, Azure Purview
- Scripting: Python, SQL, Shell
- DevOps/DataOps practices and CI/CD tools

Soft Skills
- Strong leadership and stakeholder management
- Excellent communication and documentation skills
- Strategic thinking with problem-solving ability
- Collaborative and adaptive in cross-functional teams

Good to Have
- Experience in AI/ML data lifecycle support
- Exposure to industry data standards and frameworks (TOGAF, DAMA-DMBOK)
- Experience with real-time analytics and streaming data solutions

Work Experience
- Minimum 10 years in data engineering, architecture, or related roles
- At least 5 years of hands-on experience in designing data platforms on Azure
- Demonstrated knowledge of two full project cycles using Databricks as an architect
- Experience supporting and working with cross-functional teams in a dynamic environment
- Advanced working SQL knowledge and experience working with relational databases and unstructured datasets
- Experience with stream-processing systems such as Storm and Spark Streaming

Compensation & Benefits
- Competitive salary and annual performance-based bonuses
- Comprehensive health and optional parental insurance
- Retirement savings plans and tax savings plans
- Work-life balance: flexible work hours

Key Result Areas (KRAs)
- Effective implementation of scalable and secure data architecture
- Governance and compliance adherence
- Standardization and optimization of data assets
- Enablement of self-service analytics and data democratization

Key Performance Indicators (KPIs)
- Architecture scalability and reusability metrics
- Time-to-delivery for data initiatives
- Data quality and integrity benchmarks
- Compliance audit outcomes
- Satisfaction ratings from business stakeholders

Contact: hr@bigtappanalytics.com
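One concrete slice of the governance and data-quality responsibilities above: a small, hypothetical PySpark gate that validates a dataset against simple quality rules before publishing it. The columns, rules and paths are invented for illustration; a real framework would log metrics to a governance catalog rather than raising immediately.

```python
# Hypothetical data-quality gate in PySpark: validate before publishing.
# Columns, rules and paths are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/mnt/silver/customers")

total = df.count()
checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_customer_id": total - df.dropDuplicates(["customer_id"]).count(),
    "bad_email": df.filter(~F.col("email").contains("@")).count(),
}

# Fail the run if any rule is violated, so bad data never reaches consumers.
failures = {name: n for name, n in checks.items() if n > 0}
if failures:
    raise ValueError(f"Data quality checks failed: {failures}")

df.write.mode("overwrite").parquet("/mnt/gold/customers")
```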

Posted 1 month ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Analyze, design, develop, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

Responsibilities
Preferred Qualifications: Oracle Applications Lab (OAL) has a central role within Oracle. Its role is to work with Product Development and Oracle's internal business to deliver Oracle products for Oracle to use internally. OAL implements Oracle applications, databases and middleware, supports Oracle applications for Oracle internally, and configures Oracle applications to meet the specific needs of Oracle. OAL also provides a showcase for Oracle's products.

The role will involve:
- Working as part of a global team to implement and support new business applications for HR and Payroll
- Debugging and solving sophisticated problems and working closely with Oracle Product Development and other groups to implement solutions
- Developing and implementing product extensions and customizations
- Testing new releases
- Providing critical production support

Your skills should include:
- Experience in designing and supporting Oracle E-Business Suite and Fusion applications, preferably Oracle HRMS/Fusion HCM
- Strong Oracle technical skills: SQL, PL/SQL, Java, XML, ADF, SOA, etc.
- Communicating confidently with peers and management within technical and business teams

Detailed Description and Job Requirements: Work with Oracle's world-class technology to develop, implement, and support Oracle's global infrastructure. As a member of the IT organization, help analyze existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards. Job duties are varied and complex, requiring independent judgment. May have a project lead role. BS or equivalent experience in programming on enterprise or department servers or systems.

Qualifications: Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Description
Deevita is seeking a highly skilled and experienced Senior Data Engineer / Data Platform Specialist to join our team. This role involves building and maintaining robust data marts, developing scalable data models, and designing enterprise-grade data platform solutions. The ideal candidate will collaborate closely with cross-functional teams and play a key role in ensuring data accuracy, performance, and alignment with business goals. We are a team of talented engineers, fun to work with, looking to make a difference in healthcare. Join us as we work together to revolutionize the future of pharma with the latest technologies and a collaborative, fast-paced work environment.

Roles & Responsibilities:
- Design, build, and maintain data marts to support various business functions and carrier partners
- Work closely with cross-functional stakeholders to define data modeling roadmaps
- Collaborate with application/product teams to understand business logic and translate it into efficient data models
- Optimize SQL queries and data structures for performance and scalability
- Develop and manage ETL pipelines using Azure Logic Apps, Power Platform, and Azure Data Factory
- Ensure data accuracy, integrity, and consistency through rigorous validation and cleansing processes
- Establish and enforce data modeling standards and best practices across teams
- Maintain thorough and up-to-date data model documentation
- Design and develop advanced Power BI visualizations, dashboards, and paginated reports
- Create and maintain Power Platform applications (Power Apps, Power Automate) for internal use cases
- Provide technical support and troubleshooting for Power BI dashboards and SQL-based solutions
- Ensure data privacy, security, and compliance in collaboration with the IT team
- Analyze business requirements and propose best-fit data architecture solutions
- Work with the engineering team to ensure delivery and performance of solutions

Required Qualifications:
- 7-10 years of experience in designing and developing enterprise-grade data platform solutions involving SQL Server, Azure SQL Database, and Power BI
- Master's or Bachelor's degree in Computer Science or an Engineering field (BE/B.Tech/M.E/M.Tech, MCA)
- At least 7+ years of software development experience with SQL Server / Azure SQL Database
- At least 3+ years of software development experience building data visualizations for business/enterprise customers using Power BI (Power BI Desktop, Power BI Report Builder, Power BI Service)
- 1+ year of experience building applications using Power Platform (Power Apps, Power Automate)
- At least one Microsoft certification in Power BI
- Hands-on and deeply technical, with exposure to the latest features within SQL Server; able to both code and guide a group of junior/mid-level database engineers
- Excellent communication skills (English) to interface with US clients directly, with proficiency in all modes of communication - listening, reading, writing and speaking
- Strong experience in performance tuning / optimization initiatives

Preferred Qualifications:
- Healthcare/Pharma/Life science industry experience will be an added advantage
- Azure development using ADF and data pipelines

Benefits
- Industry-competitive compensation package
- Exposure to advanced technologies and excellent career growth opportunities, both technical and organizational
- Paid time off (EL, SL, CL) and health insurance coverage
- Hybrid/remote work from home

About us
DeeVita is a dynamic and growing organization that has been providing advanced technology services and solutions to enterprise customers in the USA over the last decade. Deevita specializes in data, analytics, AI, and cloud solutions and product development services, from startups to enterprise customers.

Posted 1 month ago

Apply

7.0 years

5 - 9 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on the cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities:
- Ingest data from multiple on-prem and cloud data sources using various tools & capabilities in Azure
- Design and develop Azure Databricks processes using PySpark/Spark-SQL
- Design and develop orchestration jobs using ADF and Databricks Workflows
- Analyze data engineering processes being developed and act as an SME to troubleshoot performance issues and suggest solutions for improvement
- Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
- Build a test framework for the Databricks notebook jobs for automated testing before code deployment
- Design and build POCs to validate new ideas, tools, and architectures in Azure
- Continuously explore new Azure services and capabilities; assess their applicability to business needs
- Create detailed documentation for cloud processes, architecture, and implementation patterns
- Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud
- Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
- Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
- Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
- Ensure solutions adhere to security, compliance, and governance standards
- Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
- Identify solutions to non-standard requests and problems
- Support and maintain the self-service BI warehouse
- Mentor and support existing on-prem developers in the cloud environment
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 7+ years of overall experience in data & analytics engineering
- 5+ years of experience working with Azure, Databricks, ADF, and Data Lake
- 5+ years of experience working with a data platform or product using PySpark and Spark-SQL
- Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
- In-depth understanding of Azure architecture and the ability to come up with efficient designs & solutions
- Highly proficient in Python and SQL
- Proven excellent communication skills

Preferred Qualifications:
- Snowflake and Airflow experience
- Power BI development experience
- Experience with or knowledge of healthcare concepts - E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
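The posting asks for a test framework around Databricks notebook jobs. As a rough illustration of what automated pre-deployment testing can look like, here is a hypothetical pytest unit test for a small PySpark transformation function, runnable against a local SparkSession; the function name, schema and sample data are invented.

```python
# Hypothetical pytest unit test for a PySpark transformation, run locally
# before deployment. The transform and its schema are invented examples.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def dedupe_claims(df):
    """Transformation under test: drop duplicate claim_ids, keep positives."""
    return df.dropDuplicates(["claim_id"]).filter(F.col("amount") > 0)

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_claims(spark):
    df = spark.createDataFrame(
        [(1, 100.0), (1, 100.0), (2, -5.0)], ["claim_id", "amount"]
    )
    result = dedupe_claims(df)
    assert result.count() == 1               # duplicate and bad rows removed
    assert result.first()["claim_id"] == 1
```

Wiring a test like this into Jenkins or GitHub Actions gives the automated gate before code deployment that the responsibilities above describe.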

Posted 1 month ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Profile: Sr. DW BI Developer
Location: Sector 64, Noida (Work from Office)

Position Overview: Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will be developing the ERP system, providing comprehensive day-to-day support and training, and developing the current ERP system for the future.

Key Responsibilities:
- As a Sr. DW BI Developer, participate in the design/development/customization and maintenance of software applications
- Analyse the different applications/products, and design and implement the DW using best practices
- Bring rich data governance experience: data security, data quality, provenance/lineage
- Maintain a close working relationship with the other application stakeholders
- Develop secured and high-performance web applications
- Apply knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile, etc.
- Design and architect future releases of the platform
- Participate in troubleshooting application issues
- Work jointly with other teams and partners handling different aspects of the platform creation
- Track advancements in software development technologies and apply them judiciously in the solution roadmap
- Ensure all quality controls and processes are adhered to
- Plan the major and minor releases of the solution
- Ensure robust configuration management
- Work closely with the Engineering Manager on different aspects of product lifecycle management
- Independently work in a fast-paced environment requiring multitasking and efficient time management

Required Skills and Qualifications:
- End-to-end lifecycle experience with data warehousing, data lakes and reporting
- Experience maintaining/managing data warehouses
- Responsibility for the design and development of large, scaled-out, real-time, high-performing Data Lake / Data Warehouse systems (including big data and cloud)
- Strong SQL and analytical skills
- Experience in Power BI, Tableau, QlikView, Qlik Sense, etc.
- Experience in Microsoft Azure services
- Experience in developing and supporting ADF pipelines
- Experience in Azure SQL Server / Databricks / Azure Analysis Services
- Experience in developing tabular models
- Experience working with APIs
- Minimum 2 years of experience in a similar role
- Experience with data warehousing and data modelling
- Strong experience in SQL
- 2-6 years of total experience in building DW/BI systems
- Experience with ETL and working with large-scale datasets
- Proficiency in writing and debugging complex SQL
- Prior experience working with global clients
- Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala and Sqoop
- Storage such as HDFS, object storage (S3, etc.), RDBMS, MPP and NoSQL DBs
- Experience with distributed data management and data failover, including databases (relational, NoSQL, big data), data analysis, data processing, data transformation, high availability, and scalability
- Experience in end-to-end project implementation in the cloud (Azure / AWS / GCP) as a DW BI Developer
- Rich data governance experience (data security, data quality, provenance/lineage) and an understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML

To know our Privacy Policy, please click on the link below or copy-paste the URL into your browser: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf
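Given the hands-on Kafka and Spark expectations listed above, here is a minimal, hypothetical Spark Structured Streaming sketch that reads a Kafka topic and appends to a data lake path. The broker address, topic and paths are invented, and running it requires the spark-sql-kafka connector package on the cluster.

```python
# Hypothetical Spark Structured Streaming job: Kafka topic -> data lake files.
# Broker, topic and paths are invented; a real job would parse a schema
# out of the payload instead of keeping it as a raw string.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "finance-events")
               .load()
               .select(F.col("value").cast("string").alias("payload")))

query = (events.writeStream
               .format("parquet")
               .option("path", "/mnt/lake/raw/finance_events")
               .option("checkpointLocation", "/mnt/lake/_checkpoints/finance_events")
               .outputMode("append")
               .start())

query.awaitTermination()
```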

Posted 1 month ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Role: Senior Dot Net Developer
Experience: 8+ years
Notice period: Immediate
Location: Trivandrum / Kochi

Introduction
We are looking for candidates with 8+ years of experience in the IT industry and strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role, so strong communication skills are required. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours are 8 hours, with 4 hours of overlap during the EST time zone (12 PM - 9 PM). This overlap is mandatory, as meetings happen during these hours.

Responsibilities include:
- Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
- Integrate and support third-party APIs and external services
- Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
- Participate in Agile/Scrum ceremonies and manage tasks using Jira
- Understand technical priorities, architectural dependencies, risks, and implementation challenges
- Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or cloud technologies

Primary Skills:
- 8+ years of hands-on development experience with C#, .NET Core 6/8+, Entity Framework / EF Core, JavaScript, jQuery, and REST APIs
- Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
- Skilled in unit testing with XUnit and MSTest
- Strong in software design patterns, system architecture, and scalable solution design
- Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
- Strong problem-solving and debugging capabilities
- Ability to write reusable, testable, and efficient code
- Experience developing and maintaining frameworks and shared libraries to support large-scale applications
- Excellent technical documentation, communication, and leadership skills
- Microservices and Service-Oriented Architecture (SOA)
- Experience in API integrations
- 2+ years of hands-on experience with Azure Cloud Services, including Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights, and Azure Monitoring

Secondary Skills:
- Familiarity with AngularJS, ReactJS, and other front-end frameworks
- Experience with Azure API Management (APIM)
- Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
- Experience with Azure Data Factory (ADF) and Logic Apps
- Exposure to application support and operational monitoring
- Azure DevOps - CI/CD pipelines (Classic / YAML)

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: Data Engineer
Experience: 6+ yrs
Job Location: Pune / Mumbai

Job Profile Summary:
- Azure Databricks and hands-on PySpark, with tuning
- Azure Data Factory pipelines for loading various data into ADB, with performance tuning
- Azure Synapse
- Azure Monitoring and Log Analytics (error handling in ADF pipelines and ADB)
- Logic Apps and Functions
- Performance tuning of Databricks, Data Factory and Synapse
- Databricks data loading (layers) and export (choosing connection options and the best approach for fast report access)
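Since the summary emphasizes PySpark performance tuning, here is a small, hypothetical illustration of two common tuning levers: a broadcast join hint for a small dimension table, and repartitioning by the write key before a partitioned write. Table paths, columns and relative sizes are invented for the example.

```python
# Hypothetical PySpark tuning sketch: broadcast join + repartition.
# Table paths and column names are invented examples.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

facts = spark.read.parquet("/mnt/silver/transactions")  # large table
dims = spark.read.parquet("/mnt/silver/stores")         # small lookup table

# Broadcasting the small table avoids shuffling the large one.
enriched = facts.join(broadcast(dims), "store_id")

# Repartition by the write key so output files align with downstream reads.
(enriched.repartition("txn_date")
         .write.mode("overwrite")
         .partitionBy("txn_date")
         .parquet("/mnt/gold/transactions_enriched"))
```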

Posted 1 month ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Acuity Knowledge Partners
Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com

Position Title: Associate Director
Experience Level: 10+ years
Department: IT
Location: Gurgaon

Job Summary
We are seeking a highly skilled Oracle Fusion Techno-Functional Consultant with deep expertise in both technical development and functional configuration across key Oracle Fusion modules (Finance, SCM, HCM, or Projects). The ideal candidate will be responsible for the end-to-end delivery of Oracle Fusion solutions, including system analysis, technical development, functional configuration, integration, testing, and user support. This role requires a strong understanding of Oracle Cloud architecture and hands-on experience in implementing and supporting Oracle Fusion applications in a global enterprise environment.

Key Responsibilities
Collaborate with business stakeholders to gather and analyse requirements across modules such as Financials (AP, AR, GL, FA), SCM, HCM, or Projects.
Configure Oracle Fusion modules to align with business needs, leveraging best practices.
Lead and support Oracle Cloud module implementations, rollouts, and upgrades.
Prepare functional design documents (FDDs) and provide input into solution design.
Conduct functional testing, UAT support, and issue resolution.
Facilitate knowledge transfer and user training sessions for key users and superusers.

Technical Responsibilities:
Develop technical solutions including custom reports (BI Publisher, OTBI), integrations (OIC, REST/SOAP APIs), and extensions (using PaaS and VBCS) (see the sketch below).
Write and review technical design documents (MD50, MD70) and conduct peer code reviews.
Build and manage integrations between Oracle Fusion and third-party systems using Oracle Integration Cloud (OIC), BIP, FBDI, and HDL.
Monitor and troubleshoot technical issues including performance tuning and bug fixing.
Ensure compliance with data governance, security, and system performance standards.

Project and Support Responsibilities:
Participate in ERP enhancement projects, change requests, and day-to-day support activities.
Serve as a subject matter expert and act as a liaison between IT and business units.
Manage and document change control processes, and contribute to the creation of SOPs and support materials.
Engage in continuous improvement initiatives to optimise system usage and performance.
Required Qualifications and Experience:
Bachelor’s degree in Computer Science, Information Systems, Finance, or a related discipline.
10+ years of techno-functional experience with Oracle Fusion Applications (Cloud ERP).
Strong domain knowledge in at least one of the following: Finance, SCM, HCM, or Projects.
Proven experience with configuration of Oracle Cloud modules and business process setup.
Technical expertise in BI Publisher, OTBI, HDL, FBDI, Oracle Integration Cloud (OIC), REST/SOAP APIs, and SQL/PLSQL.
Experience with Oracle Security, Role-Based Access Control (RBAC), and workflow configuration.
Strong understanding of data migration strategies and tools.
Excellent communication and stakeholder management skills.
Oracle certifications in Cloud ERP modules (preferred).

Preferred Skills:
Experience with Agile/Scrum methodologies.
Exposure to Oracle Cloud quarterly patch impact assessments.
Familiarity with tools like JIRA, ServiceNow, or equivalent for ticket management.
Knowledge of VBCS, ADF, or other Oracle PaaS development frameworks is a plus.
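The REST-integration side of this role is language-agnostic; as a rough sketch in Python, a call against a Fusion-style REST resource. The host, resource path, query parameters, and credentials below are all placeholders for illustration, not a documented endpoint:

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder host and resource path; real Fusion REST paths vary by pod and version.
BASE_URL = "https://example-fusion-host.oraclecloud.com"
RESOURCE = "/fscmRestApi/resources/latest/invoices"  # assumption for illustration

def fetch_invoices(limit: int = 25) -> list:
    """Fetch one page of invoices from a hypothetical Fusion REST resource."""
    resp = requests.get(
        BASE_URL + RESOURCE,
        params={"limit": limit},
        auth=HTTPBasicAuth("integration_user", "change-me"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()  # surface HTTP errors instead of silently continuing
    return resp.json().get("items", [])

if __name__ == "__main__":
    for inv in fetch_invoices():
        print(inv)
```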

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Country: India
Working Schedule: Full-Time
Work Arrangement: Hybrid
Relocation Assistance Available: No
Posted Date: 17-Jun-2025
Job ID: 9856

Description and Requirements
We are seeking a skilled and experienced Azure Data Factory/Synapse Engineer with expertise in SQL and pipelines to join our dynamic team. As an Azure Data Engineer, you will be responsible for developing and implementing dynamic pipelines for the data integration platform.
Interact with stakeholders and the Data Engineering Manager to understand ad-hoc and strategic data and project requirements and provide logical, long-term technical solutions.
Work independently on basic to intermediate level data extraction, validation, and manipulation assignments using SQL, Python, and ADF/Synapse.
Maintain and support the day-to-day operations revolving around DW management and cleanups on the Azure Cloud platform.
Write SQL scripts to update and verify data loads and perform data validations (see the sketch below).
Use Git and GitHub to log development work and manage deployments.
Effectively manage evolving priorities and maintain clear communication with the stakeholders and/or the Data Engineering Manager.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
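To make "write SQL scripts to verify the data loads" concrete, a minimal Python validation sketch using pyodbc; the connection string, driver, and table names are placeholders for illustration, not MetLife specifics:

```python
import pyodbc

# Placeholder connection string for an Azure SQL / Synapse SQL endpoint.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=example-server.database.windows.net;"
    "Database=example_dw;Uid=loader;Pwd=change-me;Encrypt=yes;"
)

def validate_row_counts(staging: str, target: str) -> bool:
    """Compare row counts between a staging and a target table (names assumed)."""
    # Table names are hard-coded by the caller here; never interpolate untrusted input.
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {staging}")
        staged = cur.fetchone()[0]
        cur.execute(f"SELECT COUNT(*) FROM {target}")
        loaded = cur.fetchone()[0]
    if staged != loaded:
        print(f"Mismatch: {staging}={staged}, {target}={loaded}")
        return False
    return True

if __name__ == "__main__":
    validate_row_counts("stg.orders", "dw.orders")  # hypothetical tables
```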

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company
JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base. We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors, and have in-depth experience in private equity, pharmaceuticals, government departments and high-street chains. Our team is as cutting edge as our work. We pride ourselves on being great to work with – no jargon or corporate-speak, flexible to change and receptive to feedback. We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

About the Role
7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities
Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
Lead a team of over 40 Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
Ensure documentation is up to date for architecture, SOPs, and common issues.
Contribute to service reviews, retrospectives, and continuous improvement planning.
Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers and internal cluster leads.
Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications
ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two – Mandatory).
Data Warehousing: Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake (any one – Mandatory).
Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
Cloud: Azure (Mandatory), AWS or GCP (Good to have).
SQL and Scripting: Ability to read/debug SQL and Python scripts.
Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills
Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations (see the SLA sketch below).
Hands-on understanding of SQL and scripting languages (Python preferred) for debugging and troubleshooting.
Proficient with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
Strong stakeholder communication, documentation, and presentation skills.
Experience working with global teams and collaborating across time zones.
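As a rough illustration of the SLA reporting referenced above, a small pandas sketch that flags breaches in a ticket export; in practice the data would come from Jira/ServiceNow/Freshdesk, and the column names and SLA targets here are invented for the example:

```python
import pandas as pd

# Hypothetical ticket export; real data would come from a ticketing tool's API or CSV export.
tickets = pd.DataFrame({
    "ticket_id": ["T-101", "T-102", "T-103"],
    "priority": ["P1", "P2", "P1"],
    "opened": pd.to_datetime(["2025-06-01 09:00", "2025-06-01 10:00", "2025-06-02 08:00"]),
    "resolved": pd.to_datetime(["2025-06-01 12:30", "2025-06-02 10:00", "2025-06-02 09:00"]),
})

# Assumed SLA targets in hours, keyed by priority.
sla_hours = {"P1": 4, "P2": 24}

tickets["hours_to_resolve"] = (
    (tickets["resolved"] - tickets["opened"]).dt.total_seconds() / 3600
)
tickets["sla_breached"] = tickets.apply(
    lambda r: r["hours_to_resolve"] > sla_hours[r["priority"]], axis=1
)

print(tickets[["ticket_id", "priority", "hours_to_resolve", "sla_breached"]])
```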

Posted 1 month ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role: Azure Data Engineer
Experience: 8+ years
Location: Remote (need candidates from South India)
Notice: Immediate
Interested candidates, share your resume to sunilkumar@xpetize.com

Key skills:
Design and develop warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services.
Proficient in SSIS, SQL, and query optimization.
Experience working in an onshore-offshore model, managing challenging scenarios.
Expertise in working with large amounts of data (structured and unstructured), building data pipelines for ETL workloads, and generating insights using data science and analytics.
Expertise in Azure and AWS cloud services, and DevOps/CI/CD frameworks.
Ability to work with ambiguity and vague requirements and transform them into deliverables.
Good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented, with the ability to work independently.
Drive automation efforts across the data analytics team using Infrastructure as Code (Terraform), configuration management, and Continuous Integration / Continuous Delivery (CI/CD) tools such as Jenkins.
Help build and define architecture frameworks, best practices and processes; collaborate on data warehouse architecture and technical design discussions.
Expertise in Azure Data Factory, with familiarity building pipelines for ETL projects.
Expert SQL knowledge and experience working with relational databases.
Expertise in Python and ETL projects (see the sketch below).
Experience in Databricks is an added advantage.
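A minimal sketch of the plain-Python ETL experience this posting asks for, using pandas; the file names and columns are assumptions, and writing Parquet also assumes pyarrow or fastparquet is installed:

```python
import pandas as pd

def run_etl(src: str = "sales_raw.csv", dest: str = "sales_clean.parquet") -> None:
    """Tiny extract-transform-load sketch over hypothetical sales data."""
    df = pd.read_csv(src)                          # extract
    df = df.dropna(subset=["customer_id"])         # transform: drop incomplete rows
    df["amount"] = df["amount"].astype(float)      # enforce a numeric type
    df["load_ts"] = pd.Timestamp.now(tz="UTC")     # audit column for lineage
    df.to_parquet(dest, index=False)               # load

if __name__ == "__main__":
    run_etl()
```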

Posted 1 month ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice under the RMI - Optum Advisory umbrella. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities
Design and implement secure, scalable, and cost-effective cloud data architectures using cloud services such as Azure Data Factory (ADF), Azure Databricks, Azure Storage, Key Vault, Snowflake, Synapse Analytics, MS Fabric/Power BI, etc.
Define and lead data & cloud strategy, including migration plans, modernization of legacy systems, and adoption of new cloud capabilities
Collaborate with clients to understand business requirements and translate them into optimal cloud architecture solutions, balancing performance, security, and cost
Evaluate and compare cloud services (e.g., Databricks, Snowflake, Synapse Analytics) and recommend the best-fit solutions based on project needs and organizational goals
Lead the full lifecycle of data platform and product implementations, from planning and design to deployment and support
Drive cloud migration initiatives, ensuring smooth transition from on-premise systems while engaging and upskilling existing teams
Lead and mentor a team of cloud and data engineers, fostering a culture of continuous learning and technical excellence
Plan and guide the team in building Proof of Concepts (POCs), exploring new cloud capabilities, and validating emerging technologies
Establish and maintain comprehensive documentation for cloud setup processes, architecture decisions, and operational procedures
Work closely with internal and external stakeholders to gather requirements, present solutions, and ensure alignment with business objectives
Ensure all cloud solutions adhere to security best practices, compliance standards, and governance policies
Prepare case studies and share learnings from implementations to build organizational knowledge and improve future projects
Build and analyze data engineering processes, acting as an SME to troubleshoot performance issues and suggest solutions for improvement
Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, Maven, etc.
Build a test framework for Databricks notebook jobs for automated testing before code deployment (see the pytest sketch below)
Continuously explore new Azure services and capabilities and assess their applicability to business needs
Create detailed documentation for cloud processes, architecture, and implementation patterns
Contribute to full lifecycle project implementations, from design and development to deployment and monitoring
Identify solutions to non-standard requests and problems
Mentor and support existing on-prem developers in the cloud environment
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Undergraduate degree or equivalent experience
12+ years of overall experience in Data & Analytics engineering
10+ years of solid experience working as an Architect designing data platforms using Azure, Databricks, Snowflake, ADF, Data Lake, Synapse Analytics, Power BI, etc.
10+ years of experience working with data platforms or products using PySpark and Spark-SQL
In-depth experience designing complex Azure architectures for various business needs, with the ability to come up with efficient designs and solutions
Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
Experience in team leadership and people management
Highly proficient, with hands-on experience in Azure services and Databricks/Snowflake development
Excellent communication and stakeholder management skills

Preferred Qualifications
Snowflake, Airflow experience
Power BI development experience
Experience or knowledge of health care concepts - E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #NIC
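One plausible reading of "test framework for Databricks notebook jobs" is unit-testing notebook transformation functions with pytest before deployment. A minimal sketch, with a hypothetical transformation and schema:

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def add_claim_flag(df):
    """Example transformation under test; the logic is illustrative only."""
    return df.withColumn("high_value", F.col("claim_amount") > 10000)

@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests outside Databricks.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_claim_flag(spark):
    df = spark.createDataFrame(
        [("c1", 15000.0), ("c2", 500.0)], ["claim_id", "claim_amount"]
    )
    result = {r["claim_id"]: r["high_value"] for r in add_claim_flag(df).collect()}
    assert result == {"c1": True, "c2": False}
```

Keeping the transformation in an importable function, rather than inline notebook cells, is what makes this style of pre-deployment testing possible.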

Posted 1 month ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Purpose
This role involves designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements and optimal inputs, and improve process efficiency. The role will collaborate with product owners and business owners.

Key Responsibilities
Lead a team of junior and experienced data scientists
Lead and participate in end-to-end ML project deployments requiring feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions
Push the state of the art in applying data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification (see the sketch below), recommender systems, product life-cycle modelling, design optimization, and product cost and weight optimization
Leverage and enhance applications utilizing NLP, LLMs, OCR, image-based models, and deep learning neural networks for use cases including text mining, speech and object recognition
Identify future development needs, advance new and emerging ML and AI technology, and set the strategy for the data science team
Cultivate a product-centric, results-driven data science organization
Write production-ready code, deploy real-time ML models, and expose ML outputs through APIs
Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation
Provide leadership to establish world-class ML lifecycle management processes

Qualifications
MTech / BE / BTech / MSc in CS, Stats, or Maths

Experience
Over 10 years of applied machine learning experience across machine learning, statistical modelling, predictive modelling, text mining, natural language processing (NLP), LLMs, OCR, image-based models, deep learning, and optimization
Expert Python programmer: SQL, C#, extremely proficient with the SciPy stack (e.g. numpy, pandas, scikit-learn, matplotlib)
Proficiency with open-source deep learning platforms such as TensorFlow, Keras, and PyTorch
Knowledge of the big data ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce)
Proficient in cloud technologies and services (Azure Databricks, ADF, Databricks MLflow)

Functional Competencies
Demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers
Proficient in communicating technical findings to non-technical stakeholders
Conduct routine peer code reviews of ML work done by the team
Experience leading and/or collaborating with small to mid-sized teams
Experienced in building scalable, highly available distributed systems in production
Experienced in ML lifecycle management and MLOps tools and frameworks
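For a flavour of the lead-classification work described above, a compact scikit-learn sketch on synthetic data; real pipelines would draw on NLP/OCR-derived features, and everything here is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for lead-scoring features and a converted/not-converted label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A simple baseline classifier; production systems would tune and compare models.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```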

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

Title: Data Engineer
Location: Remote
Employment type: Full Time with BayOne

We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack
Azure | Databricks | PySpark | SQL

What We’re Looking For
3+ years of experience in data engineering or analytics engineering
Hands-on experience with cloud data platforms and large-scale data processing
Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
Minimum 3 years of experience with modern data engineering, data warehousing, and data lake technologies on cloud platforms such as Azure, AWS, GCP, and Databricks; Azure experience is preferred over other cloud platforms
5 years of proven experience with SQL, schema design, and dimensional data modelling (see the Spark SQL sketch below)
Solid knowledge of data warehouse best practices, development standards, and methodologies
Experience with ETL/ELT tools like ADF, Informatica, and Talend, and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, and Google BigQuery
Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
An independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced, dynamic environment
Excellent communication and teamwork abilities

Nice-to-Have Skills:
Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB
SAP ECC/S/4 and HANA knowledge
Intermediate knowledge of Power BI
Azure DevOps and CI/CD deployments, cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or disability, or on the basis of any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
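As a small illustration of the dimensional-modelling and Spark SQL skills listed, a star-schema aggregation sketch; the fact and dimension tables are hypothetical and assumed to be registered in the metastore:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-join-sketch").getOrCreate()

# Classic star-schema query: aggregate a fact table through a dimension.
# fact_sales and dim_store are assumptions for the example.
spark.sql("""
    SELECT d.region,
           SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_store d
      ON f.store_key = d.store_key
    GROUP BY d.region
    ORDER BY total_sales DESC
""").show()
```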

Posted 1 month ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Our software engineers at Fiserv bring an open and creative mindset to a global team developing mobile applications, user interfaces and much more to deliver industry-leading financial services technologies to our clients. Our talented technology team members solve challenging problems quickly and with quality. We're seeking individuals who can create frameworks, leverage developer tools, and mentor and guide other members of the team. Collaboration is key, and whether you are an expert in a legacy software system or are fluent in a variety of coding languages, you're sure to find an opportunity as a software engineer that will challenge you to perform exceptionally and deliver excellence for our clients.

Employment type: Full-time
Level: Entry, Mid, Senior
Travel: Yes (occasional), Minimal (if any)

Requisition ID: R-10363786
Date posted: 06/20/2025
End Date: 06/26/2025
City: Chennai
State/Region: Tamil Nadu
Country: India
Location Type: Onsite

Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Professional, Software Development Engineering

What does a great Software Development Engineer do?
As a Software Development Engineer, your focus will be on applying the principles of engineering to software development. The role focuses on the complex and large software systems that make up the core systems for the organization. You will be responsible for development, unit testing, and integration tasks within this highly visible, client-focused web services application. Development efforts will also include feature enhancements, client implementations, and bug fixes, as well as support of the production environment.

What you will do:
Collaborate within a team environment in the development, testing, and support of software development project lifecycles.
Develop web interfaces and underlying business logic.
Prepare any necessary technical documentation.
Track and report daily and weekly activities.
Participate in code reviews and code remediation.
Perform and develop proper unit tests and automation.
Participate in a 24-hour on-call rotation to support previous releases of the product.
Research problems discovered by QA or product support and develop solutions to the problems.
Perform additional duties as determined by business needs and as directed by management.

What you will need to have:
Bachelor’s degree in Computer Science, Engineering or Information Technology, or equivalent experience.
3-5 years of experience in developing scalable and secured J2EE applications.
Excellent knowledge of Java-based technologies (Core Java, JSP, AJAX, JSF, EJB, and the Spring Framework), Oracle SQL/PLSQL, and app servers such as WebLogic and JBoss.
Excellent knowledge of SOAP and REST web service implementations.
Knowledge of the UNIX environment is preferred.
Experience with JSF UI component technologies (Oracle ADF and RichFaces) is preferred.
Good analytical, organizational, and problem-solving abilities.
Good at prioritizing tasks and committed to completing them.
Strong team player / customer service orientation.
Demonstrated ability to work with both end users and technical staff.
Ability to track progress against assigned tasks, report status, and proactively identify issues.
Demonstrated ability to present information effectively in communications with peers and the project management team.
Highly organized; works well in a fast-paced, fluid, and dynamic environment.

What would be great to have:
Experience working in a Scrum development team
Banking and financial services experience
Java certifications

Thank you for considering employment with Fiserv. Please:
Apply using your legal name
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 1 month ago

Apply