12 - 16 years
35 - 37 Lacs
Hyderabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (a sketch follows below).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
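As a rough illustration of the core Glue/EMR responsibility above, here is a minimal PySpark ETL sketch: read raw data from S3, apply basic validation, and write partitioned Parquet back to S3. The bucket names, paths, and columns are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal ETL sketch: raw JSON in S3 -> validated, partitioned Parquet.
# All bucket names and columns below are hypothetical.
spark = SparkSession.builder.appName("sales-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/sales/")

clean = (
    raw.filter(F.col("order_id").isNotNull())         # basic data validation
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")                      # partitioning for efficient scans
      .parquet("s3://example-clean-bucket/sales/"))
```

The same transformation logic could run as an AWS Glue job or on EMR; only the cluster setup differs.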
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Nagpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Jaipur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Lucknow
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Kanpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Pune
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Ahmedabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
12 - 16 years
35 - 37 Lacs
Surat
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.
Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Apache Iceberg, Delta Lake, or Apache Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.
Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 3 months ago
5 - 8 years
22 - 25 Lacs
Hyderabad
Hybrid
Role & responsibilities Bachelors degree in computer science, engineering, or a related field. Master’s degree preferred. Data: 5+ years of experience with data analytics and data warehousing. Sound knowledge of data warehousing concepts. SQL: 5+ years of hands-on experience on SQL and query optimization for data pipelines. ELT/ETL: 5+ years of experience in Informatica/ 3+ years of experience in IICS/IDMC Migration Experience: Experience Informatica on prem to IICS/IDMC migration Cloud: 5+ years’ experience working in AWS cloud environment Python: 5+ years of hands-on experience of development with Python Workflow: 4+ years of experience in orchestration and scheduling tools (e.g. Apache Airflow) Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues Communication: Excellent communication, problem-solving and organizational and analytical skills Able to work independently and to provide leadership to small teams of developers. Reporting: Experience with data reporting (e.g. MicroStrategy, Tableau, Looker) and data cataloging tools (e.g. Alation) Experience in Design and Implementation of ETL solutions with effective design and optimized performance, ETL Development with industry standard recommendations for jobs recovery, fail over, logging, alerting mechanisms. Preferred candidate profile
Posted 3 months ago
8 - 12 years
30 - 37 Lacs
Noida, Hyderabad, Chennai
Work from Office
About the Role
We are seeking an experienced and certified Senior Full Stack Engineer with deep expertise across backend, frontend, DevOps, and cloud technologies. The ideal candidate will be a hands-on developer who can design, build, and maintain high-performance, scalable solutions while adhering to best practices in cloud architecture and automation.
Key Responsibilities
- Design, develop, and maintain scalable applications using Node.js, TypeScript, and Python.
- Build and enhance modern web interfaces using React and Angular.
- Implement DevOps pipelines using GitHub, Jenkins, and Terraform.
- Deploy and manage AWS infrastructure (CloudFormation, EC2, DynamoDB, Step Functions, S3, Redshift, Athena).
- Work with Snowflake and other data warehouse solutions for advanced analytics.
- Ensure adherence to coding standards, security best practices, and performance benchmarks.
- Collaborate with cross-functional teams to deliver high-quality solutions within deadlines.
Mandatory Skills & Technologies
- Backend: Node.js, TypeScript, Python
- Frontend: React, Angular
- DevOps Tools: GitHub, Jenkins, Terraform
- AWS Services: CloudFormation, DynamoDB, EC2, Step Functions, Redshift, Athena, S3
- Data Warehousing: Snowflake
- Certifications: Relevant AWS or cloud certifications (mandatory)
Qualifications
- 8-12 years of hands-on software development experience.
- Proven expertise in both frontend and backend technologies.
- Strong experience with CI/CD, infrastructure as code, and AWS architecture.
- Relevant cloud certifications (AWS Solutions Architect, Developer Associate, or equivalent).
- Excellent problem-solving, communication, and collaboration skills.
Posted Date not available
8 - 12 years
30 - 37 Lacs
Pune, Gurugram, Bengaluru
Work from Office
About the Role
We are seeking an experienced and certified Senior Full Stack Engineer with deep expertise across backend, frontend, DevOps, and cloud technologies. The ideal candidate will be a hands-on developer who can design, build, and maintain high-performance, scalable solutions while adhering to best practices in cloud architecture and automation.
Key Responsibilities
- Design, develop, and maintain scalable applications using Node.js, TypeScript, and Python.
- Build and enhance modern web interfaces using React and Angular.
- Implement DevOps pipelines using GitHub, Jenkins, and Terraform.
- Deploy and manage AWS infrastructure (CloudFormation, EC2, DynamoDB, Step Functions, S3, Redshift, Athena).
- Work with Snowflake and other data warehouse solutions for advanced analytics.
- Ensure adherence to coding standards, security best practices, and performance benchmarks.
- Collaborate with cross-functional teams to deliver high-quality solutions within deadlines.
Mandatory Skills & Technologies
- Backend: Node.js, TypeScript, Python
- Frontend: React, Angular
- DevOps Tools: GitHub, Jenkins, Terraform
- AWS Services: CloudFormation, DynamoDB, EC2, Step Functions, Redshift, Athena, S3
- Data Warehousing: Snowflake
- Certifications: Relevant AWS or cloud certifications (mandatory)
Qualifications
- 8-12 years of hands-on software development experience.
- Proven expertise in both frontend and backend technologies.
- Strong experience with CI/CD, infrastructure as code, and AWS architecture.
- Relevant cloud certifications (AWS Solutions Architect, Developer Associate, or equivalent).
- Excellent problem-solving, communication, and collaboration skills.
Posted Date not available
9 - 12 years
15 - 30 Lacs
Gurugram
Remote
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines, with a strong focus on time series forecasting, upsert-ready architectures, and enterprise-grade data governance. This role demands end-to-end ownership of the data lifecycle, from ingestion through partitioning, versioning, QA, lineage tracking, and BI delivery. The ideal candidate will be highly proficient in AWS data services, PySpark, and versioned storage formats such as Apache Hudi or Iceberg. A strong understanding of data quality, observability, governance, and metadata management in large-scale analytical systems is critical.
Roles & Responsibilities
- Design and implement data lake zoning (Raw, Clean, and Modeled layers) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-ready ETL pipelines using Apache Hudi or Iceberg.
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling (see the sketch after this list).
- Optimize Athena datasets with partitioning, CTAS queries, and S3 metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging for performance and compliance.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate data quality frameworks such as Great Expectations, custom logs, and AWS CloudWatch for field-level validation and anomaly detection.
- Apply data governance practices using tools like OpenMetadata or Atlan, enabling lineage tracking, data cataloging, and impact analysis.
- Establish QA automation frameworks for pipeline validation, data regression testing, and UAT handoff.
- Collaborate with BI, QA, and business teams to finalize schema design and deliverables for dashboard consumption.
- Ensure compliance with enterprise data governance policies and enable discovery and collaboration through metadata platforms.
Preferred Candidate Profile
- 9-12 years of experience in data engineering.
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and advanced partition strategies.
- Proven experience with versioned ingestion using Apache Hudi, Iceberg, or Delta Lake.
- Experience in data lineage, metadata tagging, and governance tooling using OpenMetadata, Atlan, or similar platforms.
- Proficiency in feature engineering for time series forecasting (lags, rolling windows, trends).
- Expertise in Git-based workflows, CI/CD, and deployment automation (Bitbucket or similar).
- Strong understanding of time series KPIs: revenue forecasts, occupancy trends, demand volatility, etc.
- Knowledge of statistical forecasting frameworks (e.g., Prophet, GluonTS, scikit-learn).
- Experience with Superset or Streamlit for QA visualization and UAT testing.
- Experience building data QA frameworks and embedding data validation checks at each stage of the ETL lifecycle.
- An independent thinker capable of designing systems that scale with evolving business logic and compliance requirements.
- Excellent communication skills for collaboration with BI, QA, data governance, and business stakeholders.
- High attention to detail, especially around data accuracy, documentation, traceability, and auditability.
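To make the forecast-ready feature engineering concrete, here is a minimal PySpark sketch of lagged and rolling features using window functions. The table path, store_id/date/revenue columns, and window sizes are hypothetical illustrations, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Sketch: build lag and rolling-mean features per store for forecasting.
# Paths and column names below are hypothetical.
spark = SparkSession.builder.appName("ts-features").getOrCreate()
sales = spark.read.parquet("s3://example-modeled-bucket/daily_sales/")

w = Window.partitionBy("store_id").orderBy("date")

features = (
    sales
    .withColumn("revenue_lag_7", F.lag("revenue", 7).over(w))    # one-week lag
    .withColumn(
        "revenue_roll_28",
        F.avg("revenue").over(w.rowsBetween(-27, 0)),            # 28-day rolling mean
    )
)
features.write.mode("overwrite").parquet("s3://example-modeled-bucket/sales_features/")
```

Lag and rolling-window columns like these feed directly into frameworks such as Prophet or GluonTS mentioned in the profile.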
Posted Date not available
6 - 11 years
20 - 27 Lacs
Kochi, Madurai, Thiruvananthapuram
Hybrid
We are seeking a seasoned Lead Data Engineer with 7+ years of experience, primarily focused on Python and writing complex SQL queries in PostgreSQL. The ideal candidate will have a strong background in Python scripting, with additional knowledge of AWS services such as Lambda, Step Functions, and other data engineering tools. Experience in integrating data into Salesforce is a plus. The candidate should have:
- Deep expertise in writing and optimizing complex SQL queries in PostgreSQL.
- Proficiency in Python scripting for data manipulation and automation (a sketch follows below).
- Familiarity with AWS services like Lambda and Step Functions.
- Knowledge of building semantic data layers between applications and backend databases.
- Experience in integrating data into Salesforce; an understanding of its data architecture is a valuable asset.
- Strong troubleshooting and debugging skills.
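As a small sketch of the Python-plus-PostgreSQL work described above, the snippet below runs a window-function query with psycopg2 and fetches the results. The connection details, table, and columns are hypothetical placeholders.

```python
import psycopg2

# Sketch: execute a complex (window-function) query against PostgreSQL.
# Connection parameters and the transactions table are hypothetical.
conn = psycopg2.connect(
    host="localhost", dbname="analytics", user="etl", password="..."
)

QUERY = """
    SELECT account_id,
           created_at,
           amount,
           SUM(amount) OVER (
               PARTITION BY account_id ORDER BY created_at
           ) AS running_total
    FROM transactions
    WHERE created_at >= %s
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY, ("2024-01-01",))   # parameterized to avoid SQL injection
    for row in cur.fetchall():
        print(row)
conn.close()
```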
Posted Date not available
5 - 10 years
17 - 25 Lacs
Noida, Hyderabad, Chennai
Hybrid
Strong experience using AWS services: AppFlow, S3, Athena, Lambda, RDS, EventBridge, Lake Formation, Apache, SNS, CloudFormation, Secrets Manager, and Glue (including Glue PySpark jobs). Proficiency with SQL, Python, and Snowflake. Knowledge of data warehousing concepts is essential, and prior experience with Informatica PowerCenter and Oracle Exadata will prove useful. The ability to clearly summarize the methodology and key points of a program/report in technical documentation/specifications is also required.
Posted Date not available
6 - 11 years
20 - 35 Lacs
Hyderabad
Work from Office
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
- Contributes to software engineering communities of practice and events that explore new and emerging technologies.
- Adds to a team culture of diversity, opportunity, inclusion, and respect.
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- Hands-on experience writing high-quality Python and Terraform code, with in-depth knowledge of AWS services like ALB, Lambda, EventBridge, Step Functions, DynamoDB, and Lake Formation (a Lambda/DynamoDB sketch follows below).
- Develop, code, test, and deploy software using company-wide frameworks and best practices.
- Improve and adhere to agile methodologies for continuous enhancement of team processes.
- Collaborate with Technology, Product, and Business teams to deliver high-quality products to our users.
- Provide guidance to the team in overcoming technical issues and challenges.
Preferred qualifications, capabilities, and skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies
- Snowflake knowledge is preferred
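To illustrate the Lambda/EventBridge/DynamoDB combination named above, here is a minimal Python Lambda handler sketch that persists an EventBridge event to DynamoDB via boto3. The table name, environment variable, and event fields are hypothetical.

```python
import json
import os

import boto3

# Sketch: Lambda handler that stores an EventBridge event in DynamoDB.
# TABLE_NAME and the event's "detail" fields are hypothetical.
TABLE_NAME = os.environ.get("TABLE_NAME", "example-events")
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    detail = event.get("detail", {})           # EventBridge payload body
    table.put_item(
        Item={
            "pk": detail.get("order_id", "unknown"),  # partition key
            "payload": json.dumps(detail),
        }
    )
    return {"statusCode": 200}
```

In practice the function, its IAM role, and the EventBridge rule would be provisioned with Terraform, matching the posting's infrastructure-as-code emphasis.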
Posted Date not available