10.0 - 15.0 years
55 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Position Overview
We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.
Key Responsibilities
Data Architecture & Modeling: Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management. Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment). Create and maintain data lineage documentation and data dictionaries for healthcare datasets. Establish data modeling standards and best practices across the organization.
Technical Leadership: Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica. Architect scalable data solutions that handle large volumes of healthcare transactional data. Collaborate with data engineers to optimize data pipelines and ensure data quality.
Healthcare Domain Expertise: Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI). Design data models that support analytical, reporting, and AI/ML needs. Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations. Partner with business stakeholders to translate healthcare business requirements into technical data solutions.
Data Governance & Quality: Implement data governance frameworks specific to healthcare data privacy and security requirements. Establish data quality monitoring and validation processes for critical health plan metrics. Lead efforts to standardize healthcare data definitions across multiple systems and data sources.
Required Qualifications
Technical Skills: 10+ years of experience in data modeling with at least 4 years focused on healthcare/health plan data. Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches. Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing. Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks). Proficiency with data modeling tools (Hackolade, ERwin, or similar).
Healthcare Industry Knowledge: Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data. Experience with healthcare data standards and medical coding systems. Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment). Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI).
Leadership & Communication: Proven track record of leading data modeling projects in complex healthcare environments. Strong analytical and problem-solving skills with the ability to work with ambiguous requirements. Excellent communication skills with the ability to explain technical concepts to business stakeholders. Experience mentoring team members and establishing technical standards.
Preferred Qualifications
Experience with Medicare Advantage, Medicaid, or Commercial health plan operations. Cloud platform certifications (AWS, Azure, or GCP). Experience with real-time data streaming and modern data lake architectures. Knowledge of machine learning applications in healthcare analytics. Previous experience in a lead or architect role within healthcare organizations.
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
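For readers sizing up this role, a minimal sketch of the kind of star-schema claims model the posting describes, expressed as Spark SQL DDL on Databricks/Delta (all table, column, and surrogate-key names are illustrative assumptions, not the employer's actual model):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims_star_schema").getOrCreate()

# Slowly changing member dimension (SCD Type 2 via effective/end dates)
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_member (
        member_sk      BIGINT,
        member_id      STRING,
        plan_code      STRING,
        effective_date DATE,
        end_date       DATE,
        is_current     BOOLEAN
    ) USING DELTA
""")

# Claims fact table keyed to conformed dimensions, partitioned for reporting
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_claim (
        claim_sk      BIGINT,
        member_sk     BIGINT,
        provider_sk   BIGINT,
        service_date  DATE,
        icd10_code    STRING,
        cpt_code      STRING,
        billed_amount DECIMAL(12,2),
        paid_amount   DECIMAL(12,2)
    ) USING DELTA
    PARTITIONED BY (service_date)
""")
```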
Posted 8 hours ago
3.0 - 8.0 years
11 - 21 Lacs
Hyderabad
Work from Office
About Position: We are conducting an in-person hiring drive on 28th June 2025 for Azure Data Engineers in Hyderabad.
In-Person Drive Location: Persistent Systems (6th Floor), Gate 11, SALARPURIA SATTVA ARGUS, SALARPURIA SATTVA KNOWLEDGE CITY, beside T-Hub, Shilpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, Telangana 500081
We are hiring Azure Data Engineers with skills in Azure Databricks, Azure Data Factory, PySpark, and SQL.
Role: Azure Data Engineer. Location: Hyderabad. Experience: 3-8 Years. Job Type: Full-Time Employment.
What You'll Do: Design and implement robust ETL/ELT pipelines using PySpark on Databricks. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements. Optimize data workflows for performance and scalability. Manage and monitor data pipelines in production environments. Ensure data quality, integrity, and security across all stages of data processing. Integrate data from various sources, including APIs, databases, and cloud storage. Develop reusable components and frameworks for data processing. Document technical solutions and maintain code repositories.
Expertise You'll Bring: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 2+ years of experience in data engineering or software development. Strong proficiency in PySpark and Apache Spark. Hands-on experience with the Databricks platform. Proficiency in SQL and working with relational databases. Experience with cloud platforms (Azure, AWS, or GCP). Familiarity with Delta Lake, MLflow, and other Databricks ecosystem tools. Strong problem-solving and communication skills.
Benefits: Competitive salary and benefits package. Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.
Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to: accelerate growth, both professionally and personally; impact the world in powerful, positive ways, using the latest technologies; enjoy collaborative innovation, with diversity and work-life wellbeing at the core; and unlock global opportunities to work and learn with the industry's best.
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
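As a rough illustration of the day-to-day work described above, a minimal PySpark ETL sketch for Databricks, assuming an ADLS Gen2 landing path and a Delta output table (the storage account, paths, and column names are placeholders):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV landed in ADLS Gen2 (path is illustrative)
raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@mystorageacct.dfs.core.windows.net/orders/"))

# Transform: deduplicate, type-cast, and filter bad records
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Load: write a curated Delta table for downstream consumers
(clean.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("curated.orders"))
```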
Posted 9 hours ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad
Work from Office
We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.
Key Responsibilities: Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks. Develop and manage data models and data warehouse architecture within Snowflake. Create and maintain DBT models for transformation, lineage tracking, and documentation. Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation. Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements. Ensure data integrity, consistency, and governance across all stages of the data lifecycle. Monitor pipeline performance and implement optimization strategies for queries and storage. Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration.
Required Skills and Qualifications: 8+ years of experience in Data Engineering or related roles. Deep expertise in Snowflake: schema design, performance tuning, security, and access controls. Proficiency in Python, particularly for scripting, data transformation, and workflow automation. Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization). Proven experience with DBT for building modular, tested, and documented data pipelines. Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect. Advanced SQL skills with experience handling large and complex data sets. Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.
Preferred Qualifications: Experience implementing data quality checks and governance frameworks. Understanding of the modern data stack and CI/CD pipelines for data workflows. Contributions to data engineering best practices, open-source projects, or thought leadership.
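For context on the Snowflake-plus-Python side of this role, a minimal ingestion sketch using the Snowflake Python connector, assuming a pre-created stage and staging table (account, stage, and table names are placeholders, and credentials would come from a secrets manager in practice):

```python
import snowflake.connector

# Connection parameters are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
# Bulk-load staged files, then sanity-check row counts before DBT models run
cur.execute(
    "COPY INTO staging.orders FROM @orders_stage "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)
cur.execute("SELECT COUNT(*) FROM staging.orders")
print(cur.fetchone()[0])
cur.close()
conn.close()
```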
Posted 9 hours ago
8.0 - 12.0 years
12 - 22 Lacs
Pune
Work from Office
Role & responsibilities: 8+ years with lead experience. 2 open positions - Data Engineer. Relevant experience for the Data Engineer role should be 5+ years in data engineering. Required skill set: SQL, Python, ETL, PySpark, Databricks. Excellent communication skills (no compromise); the focus is SQL - 90% strong exposure. Location: Pune. Notice Period: 0 to 15 days joiners only.
Thanks & Regards, Sushma Patil, HR Coordinator, sushma.patil@in.experis.com
Posted 13 hours ago
3.0 - 8.0 years
5 - 11 Lacs
Pune, Mumbai (All Areas)
Hybrid
Overview: TresVista is looking to hire an Associate in its Data Intelligence Group team, who will be primarily responsible for managing clients as well as monitoring and executing projects both for clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers & Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.
Roles and Responsibilities: Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs. Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements. Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information and tasks to the various team members. Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams. Creating ad-hoc reports for multiple internal requests across departments. Automating processes using data transformation tools.
Prerequisites: Strong analytical, problem-solving, interpersonal, and communication skills. Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL. Working experience in cloud technologies (GCP/AWS/Azure/Snowflake). Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc. Proficiency in Python for building ETL/ELT processes and data modeling. Proficiency in reporting and dashboard creation using Power BI/Tableau. Knowledge of building ML models and leveraging Gen AI for modern architectures. Experience working with version control platforms like GitHub. Familiarity with IaC tools like Terraform and Ansible is good to have. Stakeholder management and client communication experience would be preferred. Experience in the Financial Services domain will be an added plus. Experience in Machine Learning tools and techniques will be good to have.
Experience: 3-7 years. Education: BTech/MTech/BE/ME/MBA in Analytics. Compensation: The compensation structure will be as per industry standards.
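Since the posting calls out orchestration tools such as Apache Airflow, here is a minimal two-task DAG sketch; the DAG id, schedule, and task bodies are illustrative assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull data from a source system (placeholder logic)
    print("extracting...")

def load():
    # Load transformed data into the warehouse (placeholder logic)
    print("loading...")

with DAG(
    dag_id="daily_client_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
```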
Posted 1 day ago
8.0 - 13.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Role: Senior Data Engineer. Location: Bangalore - Hybrid. Experience: 10+ Years.
Job Requirements:
ETL & Data Pipelines: Experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch.
Programming & Data Processing: Strong Python development experience with proficiency in Spark or PySpark. Experience in using APIs.
Database Management: Strong skills in writing SQL queries and performance tuning in AWS Redshift. Proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL.
AWS Services: Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models.
Interested candidates can share their resume at Neesha1@damcogroup.com
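For a sense of what an AWS Glue ETL job like those mentioned above typically looks like, a minimal PySpark-based Glue job skeleton, assuming a Glue Data Catalog table and an S3 output path (database, table, and bucket names are placeholders):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_ctx = GlueContext(SparkContext())
spark = glue_ctx.spark_session
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are illustrative)
dyf = glue_ctx.create_dynamic_frame.from_catalog(
    database="sales", table_name="raw_orders"
)

# Basic cleanup in Spark, then write partitioned Parquet to S3
df = dyf.toDF().filter("amount > 0")
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-bucket/curated/orders/"
)

job.commit()
```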
Posted 1 day ago
5.0 - 8.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Description: Skill/Tech Stack: Data Engineer. Location: Bangalore. Experience: 5 to 8 years. Work from the office in a hybrid mode (thrice a week).
Job Overview: The ideal candidate will: work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components; develop new application features and enhance existing ones; develop relevant documentation and diagrams; work with other teams on deployment, testing, training, and production support; integrate with Data Engineering teams; ensure that development, coding, privacy, and security standards are adhered to; write clean, quality code; be ready to work on new technologies as business demands; and bring strong communication skills and work ethics.
Core/Must-have skills: Out of the total years of experience, a minimum of 5+ years of professional experience in Python development, with a focus on data-intensive applications. Proven experience with Apache Spark and PySpark for large-scale data processing. Solid understanding of SQL and experience working with relational databases (e.g., Oracle) and Spark SQL, including query optimization. Experience in the SDLC, particularly in applying software development best practices and methodologies. Experience in creating and maintaining unit tests, integration tests, and performance testing for data pipelines and systems. Experience with the Databricks big data platform. Experience in building data-intensive applications and data products, and a good understanding of data pipelines (feature data engineering, data transformation, data lineage, data quality). Experience with cloud platforms such as AWS for data infrastructure and services is preferred. This is a hands-on developer position within a small, elite development team that moves very fast. The role will evolve into tech leadership for the Data Initiative.
Good-to-have skills: Knowledge of the FX business / capital markets domain is a plus. Knowledge of data formats like AVRO and Parquet, and working with complex data types. Experience with Apache Kafka for real-time data streaming and Kafka Streams for processing data streams. Experience with Airflow for orchestrating complex data workflows and pipelines. Expertise or interest in Linux. Exposure to data governance and security best practices in data management.
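Given the posting's emphasis on unit tests for data pipelines, a minimal pytest sketch for a PySpark transformation, using a local Spark session (the dedupe_orders helper and the sample FX symbols are hypothetical):

```python
import pytest
from pyspark.sql import SparkSession

def dedupe_orders(df):
    """Transformation under test: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])

@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run without a cluster
    return (SparkSession.builder
            .master("local[1]")
            .appName("unit-tests")
            .getOrCreate())

def test_dedupe_orders(spark):
    df = spark.createDataFrame(
        [(1, "EURUSD"), (1, "EURUSD"), (2, "GBPUSD")],
        ["order_id", "symbol"],
    )
    assert dedupe_orders(df).count() == 2
```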
Posted 1 day ago
7.0 - 10.0 years
14 - 24 Lacs
Chennai
Hybrid
Key Skills: Database, Data Engineer, MS SQL Server, MySQL, Database Design.
Roles and Responsibilities:
Database Design & Architecture: Design, develop, and maintain complex SQL Server databases. Define and implement efficient database models, schemas, and indexing strategies based on business requirements.
Performance Tuning & Optimization: Analyze and optimize SQL queries, stored procedures, and indexing strategies. Monitor database performance using tools like SQL Profiler, Extended Events, and DMVs. Implement performance enhancements such as partitioning, caching, and execution plan optimization.
Database Deployment and Integration: Build and deploy database systems for new applications. Ensure seamless integration with front-end and back-end systems. Collaborate with developers to implement APIs for database access.
Database Maintenance and Monitoring: Monitor database systems for performance, uptime, and availability. Address incidents and alerts related to database health. Perform routine maintenance tasks, including backups and recovery testing.
Collaboration & Support: Work closely with application developers to optimize database interactions. Provide production support, troubleshoot database issues, and participate in on-call rotations. Document database processes, architecture, and troubleshooting guidelines.
Database Administration & Maintenance: Install, configure, and upgrade SQL Server instances in both on-premises and cloud environments. Manage database security, user access controls, and compliance policies, including RBAC, encryption, and auditing. Develop and enforce database backup, recovery, and retention policies.
Experience Requirement: 7+ years of experience as a Database Engineer or Administrator specializing in SQL Server. Expertise in SQL Server 2016/2019/2022, including T-SQL and advanced query optimization. Strong knowledge of indexing, partitioning, and performance tuning techniques. Experience with PowerShell, T-SQL scripting, or other automation tools. Familiarity with CI/CD pipelines for database deployments (e.g., Redgate). Hands-on experience with high-availability (HA) and disaster recovery (DR) solutions. Strong analytical and problem-solving skills. Excellent communication and documentation abilities. Experience with cloud-based SQL Server solutions (AWS RDS) is preferred. Familiarity with NoSQL databases such as MongoDB and Redis is a plus.
Education: B.Tech/M.Tech (Dual), B.E., B.Tech.
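As an example of the performance-tuning work described above, a short sketch that pulls the top CPU-consuming queries from SQL Server's plan-cache DMVs via pyodbc (the connection string is a placeholder; the DMV query itself is standard T-SQL):

```python
import pyodbc

# Connection string is a placeholder; substitute your server and auth method.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)

# Top queries by average CPU, from the plan-cache DMVs
sql = """
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""

for row in conn.cursor().execute(sql):
    print(row.avg_cpu_us, row.execution_count, row.query_text)
```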
Posted 1 day ago
3.0 - 8.0 years
12 - 19 Lacs
Pune
Hybrid
This role is only for Pune-local candidates (not for relocation candidates).
Role: Data Engineer (C2H). Experience: 3-8 yrs. Location: Kharadi, Pune. Excellent communication skills required. Notice Period: immediate joiner to 1 month.
Primary Skills: Python, document intelligence, NLP, unstructured data extraction (OpenAI and prompt engineering desirable).
Secondary Skills: Azure infrastructure experience and Databricks.
Mandatory Skills:
1. Data Infrastructure & Engineering: Designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products. Experience with data modeling, pipeline idempotency, and operational observability.
2. Programming Languages: Proficiency in one or more object-oriented programming languages such as Python, Scala, Java, or C#.
3. Database Technology: Strong experience with SQL and NoSQL databases, query structures and design best practices, and scalability, readability, and reliability in database design.
4. Distributed Systems: Experience implementing large-scale distributed systems in collaboration with senior team members.
5. Software Engineering Best Practices: Technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance.
6. Security & Compliance: Deploying secure and well-tested software and data assets; meeting privacy and compliance requirements.
7. Site Reliability Engineering: Service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments.
Job Description: Able to enrich data by data transformation and joining with other datasets. Able to analyze data and derive statistical insights. Able to convey a story through data visualization. Ability to build data pipelines for diverse interfaces. Good understanding of API workflows.
Technical Skills: AWS Data Lake, AWS Data Hub, and the AWS cloud platform.
Interested candidates may share their resume at dipti.bhaisare@in.experis.com
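For the unstructured-extraction portion of this role, a heavily simplified prompt-engineering sketch using the OpenAI Python SDK, assuming an OPENAI_API_KEY in the environment (the model name, prompt, and sample document are illustrative assumptions, not part of the posting):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for text pulled from a scanned or unstructured document
document_text = "Invoice INV-1042 from Acme Supplies, total due INR 45,000."

prompt = (
    "Extract invoice_number, vendor, and total from the text below. "
    "Respond with JSON only.\n\n" + document_text
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is illustrative
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output suits extraction tasks
)
print(resp.choices[0].message.content)
```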
Posted 2 days ago
3.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer II (Python, SQL). Experience: 3 to 6 years. Location: Bangalore, Karnataka (work from office, 5 days a week).
Role: Data Engineer II (Python, SQL). As a Data Engineer II, you will work on designing, building, and maintaining scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.
Key Responsibilities: Design, develop, and maintain end-to-end data pipelines (ETL/ELT). Ingest, clean, transform, and curate data for analytics and ML usage. Work with orchestration tools like Airflow to schedule and manage workflows. Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect). Build data models and enable real-time and batch processing using Spark and AWS services. Collaborate with DevOps and architects for system scalability and performance. Optimize Redshift-based data solutions for performance and reliability.
Must-Have Skills & Experience: 3+ years in Data Engineering or Data Science with strong ETL and pipeline experience. Expertise in Python and SQL. Strong experience in data warehousing, data lakes, data modeling, and ingestion. Working knowledge of Airflow or similar orchestration tools. Hands-on experience with data extraction techniques such as CDC and batch-based extraction using Debezium, Kafka Connect, and AWS DMS. Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc. Knowledge of Spark or similar distributed systems. Experience with queuing/messaging systems like SQS, Kinesis, and RabbitMQ.
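To illustrate the CDC extraction mentioned above, a minimal consumer sketch that reads Debezium change events from Kafka using the kafka-python client (the topic name, broker address, and flat payload handling are simplifying assumptions):

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Topic follows the typical Debezium convention <server>.<schema>.<table>;
# all names here are illustrative.
consumer = KafkaConsumer(
    "dbserver1.public.orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    payload = event.get("payload", {})
    op = payload.get("op")  # c=create, u=update, d=delete
    if op in ("c", "u"):
        print("upsert:", payload.get("after"))
    elif op == "d":
        print("delete:", payload.get("before"))
```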
Posted 2 days ago
7.0 - 12.0 years
25 - 40 Lacs
Gurugram
Remote
Job Title: Senior Data Engineer. Location: Remote. Job Type: Full-time. YoE: 7 to 10 years of relevant experience. Shift: 6.30pm to 2.30am IST.
Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.
Duties & Responsibilities: Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions. Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS). Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning. Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures. Enforce data quality and governance standards, including HIPAA compliance for PHI handling. Conduct code reviews, share best practices, and mentor junior data engineers. Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms. Document data workflows, schemas, and operational runbooks to support team knowledge transfer.
Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 5+ years of hands-on experience building and operating production-grade data pipelines. Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools such as Microsoft SSIS, AWS Glue, or Airflow. Strong SQL skills on PostgreSQL for data modeling and query optimization, or on similar technologies (Microsoft SQL Server, Oracle, AWS RDS). Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch. Basic proficiency in Python and Python data frameworks (Pandas, PySpark). Experience with version control (GitHub) and CI/CD for data projects. Familiarity with healthcare data standards and HIPAA compliance. Excellent problem-solving skills, attention to detail, and ability to work independently. Strong communication skills, with experience mentoring or leading small technical efforts.
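As a flavor of the Dagster orchestration this role centers on, a minimal two-asset pipeline sketch (the S3 path, column names, and pandas-based processing are illustrative assumptions; a production pipeline would add IO managers, quality checks, and HIPAA-appropriate handling):

```python
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_denials() -> pd.DataFrame:
    # Source path is illustrative; reading S3 parquet requires s3fs.
    return pd.read_parquet("s3://my-bucket/denials/raw/")

@asset
def clean_denials(raw_denials: pd.DataFrame) -> pd.DataFrame:
    # Drop rows without a claim ID and deduplicate before modeling
    return (raw_denials
            .dropna(subset=["claim_id"])
            .drop_duplicates(subset=["claim_id"]))

# Dagster wires clean_denials to raw_denials via the parameter name
defs = Definitions(assets=[raw_denials, clean_denials])
```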
Posted 2 days ago
5.0 - 10.0 years
18 - 30 Lacs
Pune, Bengaluru
Hybrid
Job role & responsibilities: Understanding operational needs by collaborating with specialized teams. Supporting business operations; this involves architecting, designing, building, and deploying data systems, pipelines, etc. Designing and implementing agile, scalable, and cost-efficient solutions on cloud data services. Building ETL and data movement solutions. Migrating data from traditional database systems to the cloud environment.
Technical skills, experience & qualifications required: Experience in Cloud Data Engineering. Proficient in using Informatica PowerExchange for data integration and real-time data capture from various sources, including databases, applications, and cloud environments. Hands-on experience in Informatica PowerExchange. Bachelor's Degree in Computer Science or a related field. Proficient in Azure cloud services. Strong hands-on experience working with streaming datasets. Ability to integrate data from heterogeneous sources such as relational databases, NoSQL databases, and flat files using PowerExchange. Knowledge of CDC (Change Data Capture) methodologies and implementation using PowerExchange for real-time data updates. Enthusiasm for staying updated with the latest trends in data engineering and Informatica technologies. Willingness to participate in training and certification programs related to Informatica and data engineering. Familiarity with tools such as Jira and GitHub. Experience leading agile scrum, sprint planning, and review sessions. Immediate joiners will be preferred.
Posted 4 days ago
4.0 - 7.0 years
12 - 17 Lacs
Chennai, Bengaluru
Work from Office
Key Skills: PySpark, Cloudera Data Platform, Big Data (Hadoop), Hive, Kafka.
Responsibilities:
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
Candidate's Profile & Technical Competency: BE/B.Tech or MCA/BCA with 4+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
Note: This is a 6-month contract role based in Chennai; candidates who can join within 15 days are preferred.
Posted 5 days ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Hybrid
Senior Data Engineer - Aveva PI Specialist. Location: Hyderabad. Experience: 8+ years. Job Type: Full-Time. Industry: Pharmaceuticals / Biotech / Manufacturing. Work Mode: Hybrid.
Job Summary: We are looking for a Senior Data Engineer with deep expertise in Aveva PI solutions to join our organization. This critical role involves leading the implementation and governance of Aveva PI across multiple manufacturing sites while driving its strategic adoption within our Center of Excellence (CoE). The ideal candidate will bring a balance of strong technical skills, industrial domain knowledge, and experience in data governance to optimize real-time data solutions.
Key Responsibilities: Lead end-to-end implementation of Aveva PI across manufacturing plants. Install, configure, and validate PI Servers and data connectivity (e.g., OPC, PI Cloud Connect, PI Web API, RDBMS, UFL). Design and build AF structures, Event Frames (EF), PI Analytics, and Notifications, and develop data architecture for collection, aggregation, and visualization. Drive the strategic vision and adoption of Aveva PI through the CoE. Establish and maintain governance frameworks and compliance standards for Aveva PI usage. Collaborate with cross-functional teams to gather requirements and implement robust system architectures. Develop and maintain technical documentation, best practices, SOPs, and training resources. Ensure high availability and performance of Aveva PI systems through proactive monitoring and support. Lead cross-functional forums to promote knowledge sharing, innovation, and continuous improvement.
Required Skills & Experience: 8+ years of hands-on experience with Aveva PI, including full-cycle implementations. In-depth knowledge of PI System components: PI Server, PI AF, PI Vision, PI Interfaces, PI Analytics. Solid understanding of industrial automation, process data integration, and ERP/MES system interactions. Experience with GMP environments, including creation of qualification and compliance documentation. Strong scripting and data skills: SQL, Python, or similar. Familiarity with cloud technologies and data lake integration with PI data. Proficiency in data governance and OT systems best practices. Excellent communication and leadership skills to guide stakeholders and lead forums. Experience in Agile delivery environments and in working in or establishing Centers of Excellence (CoE).
Preferred Industry Experience: Pharmaceuticals, Biotech, Chemicals / Manufacturing.
Why Join Us? Be a key part of a strategic digital transformation initiative. Work with cutting-edge PI and cloud technologies. Lead innovation in real-time industrial data systems. Opportunity to shape and grow a Center of Excellence.
Apply now at minal_mohurle@persolapac.com to drive operational excellence through data!
CONFIDENTIAL NOTE: By submitting your resume or personal data, you acknowledge reading and agreeing to our Privacy Policy. You hereby provide voluntary consent to the collection, use, processing, and disclosure of your data by us and our affiliates, in line with the Privacy Policy and applicable laws. If you wish to withdraw your consent or have any concerns, you may submit a request to our designated consent manager, as outlined in our Privacy Policy. We prioritize your privacy.
SECURITY NOTE: We at PERSOLKELLY India, and our representatives, do not ask job seekers for fees, personal banking information, or payments through unofficial channels. Official communications will only come from @persolkelly.com.
Report any suspicious activity to Contactus_in@persolkelly.com.
Posted 5 days ago
3.0 - 8.0 years
25 - 30 Lacs
Gurugram
Hybrid
Key Skills: Python, Data Engineer, SQL.
Roles & Responsibilities: Design, develop, test, deploy, maintain, and improve software. Develop flowcharts, layouts, and documentation to identify requirements and solutions. Write well-designed, high-quality, testable code. Produce specifications and determine operational feasibility. Integrate software components into a fully functional platform. Proactively apply and perform hands-on design and implementation of best-practice CI/CD. Coach and mentor other Service Team members. Develop and contribute to software verification plans and quality assurance procedures. Document and maintain software functionality. Troubleshoot, debug, and upgrade existing systems, including participating in DR tests. Deploy programs and evaluate customer feedback. Contribute to team estimation for delivery and expectation management for scope. Comply with industry standards and regulatory requirements. Ensure, throughout the project life cycle, that the data component of the solution, i.e. data management (data generation, standardisation, integration, transformation, migration, delivery, etc.), is aligned with the Data Architecture patterns, principles, and policies, working closely with the solution designers. Identify and escalate any potential gaps in the target data architecture, including the likely impact of gaps. Ensure the data design is fit for business purpose and delivers expected benefits with minimal risk/data impact to other business units across the enterprise who are users of the specific data. In addition, ensure the design caters for potential future usage of data by customers across the enterprise.
Experience Requirements: 3-8 years of technical experience (within the financial services industry preferred). Technical domain experience (subject matter expertise in technology or tools). Solid experience, knowledge, and skills in Data Engineering and BI/software development, such as ELT/ETL and data extraction and manipulation in Data Lake/Data Warehouse/Lakehouse environments. Hands-on programming experience writing Python, SQL, and Unix shell scripts in a complex enterprise environment. Experience in configuration management using Ansible/Jenkins/Git. Hands-on experience working with relational databases - Oracle, MS SQL, or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures). Knowledge of data modelling techniques and practices used for a Data Warehouse/Data Mart application. Quality engineering development experience (CI/CD - Jenkins). Experience with source control tools - GitHub or Bitbucket. Skilled in querying data from a range of data sources that store structured and unstructured data.
Desired Experience: Hands-on expertise in SQL and PL/SQL in Oracle, preferably 19c. Hands-on exposure to Bash/shell scripting and Control-M for job scheduling. Experience with source code control - GitHub, VSTS, etc. Experience with UNIX command-line tools for complex file operations. Understanding and ability to translate/physicalise data models (Star Schema, Data Vault 2.0, etc.). Data integration experience in complex multi-asset/multi-platform/distributed technology environments involving data migration strategy, decommission plans, etc. Understanding of data management concepts, Data Architecture principles, data controls, data policies and standards, and possible causes of data risks/impacts in the organisation.
Good to Have: Exposure to Big Data (HiveQL/Databricks/Redshift SQL). Exposure to Python scripting. Exposure to Cloud/AWS services, including S3, Glue, Athena, Data Pipeline, etc. Knowledge or understanding of Power BI.
Essential Capabilities: Enthusiasm for technology, keeping up with the latest trends. Ability to articulate complex technical issues and the desired outcomes of system enhancements. Proven analytical skills and evidence-based decision making. Excellent problem-solving, troubleshooting, and documentation skills. Strong written and verbal communication skills. Excellent collaboration and interpersonal skills. Strong delivery focus with an active approach to quality and auditability. Ability to work under pressure and excel within a fast-paced environment. Ability to self-manage tasks. Agile software development practices.
Qualification: UG/PG in Computer Science.
Posted 5 days ago
5.0 - 10.0 years
0 - 3 Lacs
Noida
Work from Office
• Act as a data domain expert for Snowflake in a collaborative environment, providing a demonstrated understanding of data management best practices and patterns. • Design and implement robust data architectures to meet and support business requirements, leveraging Snowflake platform capabilities. • Develop and enforce data modelling standards and best practices for Snowflake environments. • Develop, optimize, and maintain Snowflake data warehouses. • Leverage Snowflake features such as clustering, materialized views, and semi-structured data processing to enhance data solutions. • Ensure data architecture solutions meet performance, security, and scalability requirements. • Stay current with the latest developments and features in Snowflake and related technologies, continually enhancing our data capabilities. • Collaborate with cross-functional teams to gather business requirements, translate them into effective data solutions in Snowflake, and provide data-driven insights. • Stay updated with the latest trends and advancements in data architecture and Snowflake technologies. • Provide mentorship and guidance to junior data engineers and architects. • Troubleshoot and resolve data architecture-related issues effectively.
Skills Requirement: • 5+ years of proven experience as a Data Engineer, with 3+ years as a Data Architect. • Proficiency in Snowflake, with hands-on experience with features such as clustering, materialized views, and semi-structured data processing. • Experience in designing and building manual or auto-ingestion data pipelines using Snowpipe. • Ability to design and develop automated monitoring processes on Snowflake using a combination of Python, PySpark, and Bash with SnowSQL. • SnowSQL experience in developing stored procedures and writing queries to analyse and transform data. • Working experience with ETL tools like Fivetran, dbt Labs, and MuleSoft. • Expertise in Snowflake concepts like setting up resource monitors, RBAC controls, scalable virtual warehouses, SQL performance tuning, zero-copy clone, and time travel, and automating them. • Excellent problem-solving skills and attention to detail. • Effective communication and collaboration abilities. • Relevant certifications (e.g., SnowPro Core / Advanced) are a must-have. • Must have expertise in the AWS, Azure, and Salesforce Platform-as-a-Service (PaaS) models and their integration with Snowflake to load/unload data. • Strong communication skills and an exceptional team player with effective problem-solving skills.
Educational Qualification Required: • Master's degree in Business Management (MBA/PGDM) or Bachelor's degree in Computer Science, Information Technology, or a related field.
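For illustration, a short sketch of the zero-copy clone, time travel, and resource monitor features called out above, issued through the Snowflake Python connector (connection parameters, table names, and quota values are placeholders):

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="admin_user", password="***",
    warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: instant dev copy with no duplicated storage
cur.execute("CREATE TABLE orders_dev CLONE orders")

# Time travel: query the table as it looked one hour ago
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone()[0])

# Resource monitor to flag runaway warehouse credit usage
cur.execute("""
    CREATE OR REPLACE RESOURCE MONITOR dev_monitor
    WITH CREDIT_QUOTA = 100
    TRIGGERS ON 90 PERCENT DO NOTIFY
""")
```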
Posted 5 days ago
5.0 - 10.0 years
11 - 21 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer - ADF | Snowflake | DBT | Databricks. Experience: 5 to 8 Years. Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid). Job Type: Full Time, Permanent.
Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF), Snowflake, DBT, and Azure Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting.
Key Responsibilities: Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks. Create parameterized, reusable components within ADF pipelines. Perform data transformations and modeling in Snowflake using DBT. Use Databricks for data processing using PySpark/SQL. Collaborate with stakeholders to define and implement data solutions. Optimize data workflows for performance, scalability, and cost-efficiency. Ensure data quality, governance, and documentation standards.
Mandatory Skills: Azure Data Factory (ADF), Snowflake, DBT (Data Build Tool), Azure Databricks, and strong SQL and data modeling experience.
Good-to-Have Skills: Azure Data Lake, Azure Synapse, Blob Storage. CI/CD using Azure DevOps or GitHub. Python scripting, PySpark. Power BI/Tableau integration. Experience with metadata/data governance tools.
Role Requirements: Education: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. Certifications: Azure or Snowflake certification is a plus. Strong problem-solving and communication skills.
Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD
Posted 6 days ago
5.0 - 10.0 years
15 - 20 Lacs
Pune
Work from Office
AZURE DATA ENGINEER. Skills: strong technical experience in Azure, SQL, Azure Data Factory, ETL, and Databricks. Graduation is a must. Experience: 5-10 years. CTC: up to 14-20 LPA. 21st June - F2F interview only (Pune). Contact: 7742324144
Posted 1 week ago
2.0 - 5.0 years
6 - 8 Lacs
Chennai
Work from Office
Design, develop, and maintain scalable data pipelines and ETL processes. Build and optimize data architecture to ensure data quality and consistency. Integrate data from diverse internal and external sources. Collaborate with cross-functional teams. Required Candidate profile 2 years of professional experience in data engineering or software development. Solid understanding of SQL and proficiency in at least one programming language, such as Python, Java, or Scala.
Posted 1 week ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Job Description: We are seeking a talented and experienced Data Scientist to join our dynamic team. The ideal candidate will have a strong background in data analysis, machine learning, statistical modeling, and artificial intelligence. Experience with Natural Language Processing (NLP) is desirable; experience delivering products that incorporate AI/ML and familiarity with cloud services such as AWS are highly desirable.
Key Responsibilities: Clean, prepare, and explore data to find trends and patterns. Build, validate, and implement AI/ML models. Extensively document all aspects of the work, including data analysis, model development, and results. Collaborate with other team members to incorporate AI/ML models into software applications. Stay updated with the latest advancements in the AI/ML domain and incorporate them into day-to-day work.
Required Skills/Qualifications: 3-5 years of experience in AI/ML-related work. Extensive experience in Python. Familiarity with statistical models such as linear/logistic regression, Bayesian models, classification/clustering models, and time series analysis. Experience with deep learning models such as CNNs, RNNs, LSTMs, and Transformers. Experience with machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, and Keras. Experience with GenAI, LLMs, and RAG architecture would be a plus. Familiarity with cloud services such as AWS and Azure. Familiarity with version control systems (e.g., Git), JIRA, and Confluence. Familiarity with MLOps concepts and AI/ML pipeline tooling such as Kedro. Knowledge of CI/CD pipelines and DevOps practices. Experience delivering customer-facing AI solutions delivered as SaaS would be a plus. Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience. Strong problem-solving skills and attention to detail. Excellent verbal and written communication and teamwork skills.
Benefits: Competitive salary and benefits package. Opportunity to work on cutting-edge technologies and innovative projects. Collaborative and inclusive work environment. Professional development and growth opportunities.
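As a minimal example of the build-validate-document loop described above, a scikit-learn baseline sketch on synthetic data (make_classification stands in for a real, cleaned feature matrix):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real, prepared feature matrix
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Hold out a test split for honest validation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Report results alongside the model, as the posting emphasizes documentation
print(classification_report(y_test, model.predict(X_test)))
```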
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Hybrid
• Strong experience as an AWS/Azure/GCP Data Engineer; must have AWS/Azure/GCP Databricks experience. • Expert proficiency in Spark (Scala/Python), ADF, and SQL. • Design and develop applications on Databricks.
Notice Period: Immediate. Email: sachin@assertivebs.com
Posted 1 week ago
8.0 - 13.0 years
27 - 42 Lacs
Kolkata, Hyderabad, Pune
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations.
Job Title: Senior GCP Data Engineer. Experience: 8 to 13 years.
Key Responsibilities: Design, build, and maintain scalable and reliable data pipelines on Google Cloud Platform (GCP). Develop ETL/ELT workflows using Cloud Dataflow, Apache Beam, Dataproc, BigQuery, and Cloud Composer (Airflow). Optimize performance of data processing and storage solutions (e.g., BigQuery, Cloud Storage). Collaborate with data analysts, data scientists, and business stakeholders to deliver data-driven insights. Design and implement data lake and data warehouse solutions following best practices. Ensure data quality, security, and governance across GCP environments. Implement CI/CD pipelines for data engineering workflows using tools like Cloud Build, GitLab CI, or Jenkins. Monitor and troubleshoot data jobs, ensuring reliability and timeliness of data delivery. Mentor junior engineers and participate in architectural design discussions.
Technical Skills: Strong experience in Google Cloud Platform (GCP) data services: BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage, Cloud Functions. Proficiency in Python and/or Java for data processing. Strong knowledge of SQL and performance tuning in large-scale environments. Hands-on experience with Apache Beam, Apache Spark, and Airflow. Solid understanding of data modeling, data warehousing, and streaming/batch processing. Experience with CI/CD, Git, and modern DevOps practices for data workflows. Familiarity with data security and compliance in cloud environments.
Notice Period: Only immediate and 15-day joiners. Location: Pune, Chennai, Hyderabad, Kolkata. Mode of Work: WFO (Work From Office).
Thanks & Regards, SWETHA. Black and White Business Solutions Pvt. Ltd., Bangalore, Karnataka, INDIA. Contact Number: 8067432433. rathy@blackwhite.in | www.blackwhite.in
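To ground the BigQuery work described above, a minimal query sketch using the google-cloud-bigquery client, assuming application-default credentials and an existing date-partitioned table (project, dataset, and table names are placeholders):

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-gcp-project")  # project is illustrative

# Filtering on the partition column keeps scanned bytes (and cost) down
sql = """
    SELECT order_date, SUM(amount) AS daily_total
    FROM `my-gcp-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

for row in client.query(sql).result():
    print(row.order_date, row.daily_total)
```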
Posted 1 week ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities: 6+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
Technical Skills: Deep understanding of ETL concepts and best practices. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with data warehousing and big data technologies, specifically within AWS.
Additional Skills: Experience with AWS Lambda for serverless data processing and orchestration. Understanding of AWS Redshift for data warehousing and analytics. Familiarity with data lakes, Amazon EMR, and Kinesis for streaming data processing. Knowledge of data governance practices, including data lineage and auditing. Familiarity with CI/CD pipelines and Git for version control. Experience with Docker and containerization for building and deploying applications.
Responsibilities: Design and build data pipelines: design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes. ETL development: develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets. Data workflow automation: build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs. Data integration: work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms. Optimization and scaling: optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
Posted 1 week ago
4.0 - 9.0 years
7 - 17 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Role & responsibilities Strong, hands-on proficiency with Snowflake: In-depth knowledge of Snowflake architecture, features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning). Experience in designing and implementing Snowflake data models (schemas, tables, views). Expertise in writing and optimizing complex SQL queries in Snowflake. Experience with data loading and unloading techniques in Snowflake. Solid experience with AWS Cloud services: Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake. Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL - if applicable). Strong experience in designing and building ETL/ELT data pipelines.
Posted 1 week ago
3.0 - 8.0 years
3 - 5 Lacs
Hyderabad
Work from Office
Key Skills: Data Engineer, Python. Roles and Responsibilities: Develop and maintain scalable data pipelines using Python and PySpark. Design and implement data lake and data warehouse solutions to support business intelligence and analytics needs. Work extensively on the Databricks platform for data processing and transformation. Write complex SQL queries and build efficient data models to support analytics and reporting. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Ensure data quality, consistency, and reliability across various sources and destinations. Troubleshoot and resolve issues in data ingestion, transformation, and delivery processes. Lead and mentor junior data engineers, ensuring adherence to best practices and coding standards. Experience Requirement: 3-8 years of experience with data warehousing and data lake architectures. Extensive hands-on experience with the Databricks platform. Proven expertise in SQL and data modeling. Strong proficiency in Python and PySpark. Excellent problem-solving and analytical skills. Demonstrated experience in leading and mentoring teams. Education: Any Graduation.
Posted 1 week ago
The data engineer job market in India is rapidly growing as organizations across various industries are increasingly relying on data-driven insights to make informed decisions. Data engineers play a crucial role in designing, building, and maintaining data pipelines to ensure that data is accessible, reliable, and secure for analysis.
The average salary range for data engineer professionals in India varies based on experience and location. Entry-level data engineers can expect to earn anywhere between INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
The typical career progression for a data engineer in India may include roles such as Junior Data Engineer, Data Engineer, Senior Data Engineer, Lead Data Engineer, and eventually Chief Data Engineer. As professionals gain more experience and expertise in handling complex data infrastructure, they may move into management roles such as Data Engineering Manager.
In addition to strong technical skills in data engineering, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and Java. Familiarity with cloud platforms like AWS, GCP, or Azure, as well as proficiency in data warehousing technologies, is also beneficial for data engineers.
As you explore data engineer jobs in India, remember to showcase your technical skills, problem-solving abilities, and experience in handling large-scale data projects during interviews. Stay updated with the latest trends in data engineering and continuously upskill to stand out in this competitive job market. Prepare thoroughly, apply confidently, and seize the opportunities that come your way!