
244 AWS Glue Jobs - Page 5

JobPe aggregates results for easy application access, but you apply directly on the original job portal.

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office


Roles and Responsibilities:
- Experience in AWS Glue
- Experience with one or more of the following: Spark, Scala, Python, and/or R
- Experience in API development with NodeJS
- Experience with AWS (S3, EC2) or another cloud provider
- Experience with data virtualization tools like Dremio and Athena is a plus
- Technically proficient in Big Data concepts
- Technically proficient in Hadoop and NoSQL (MongoDB)
- Good communication and documentation skills
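As a hedged illustration of the Glue skill this posting leads with, here is a minimal sketch of an AWS Glue PySpark job that reads a cataloged table and writes Parquet to S3. The database, table, and bucket names are placeholders, not anything from the posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (names are hypothetical).
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_table"
)

# Write the result to S3 as Parquet (path is a placeholder).
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/output/"},
    format="parquet",
)
job.commit()
```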

Posted 2 weeks ago

Apply

8.0 - 13.0 years

1 - 4 Lacs

Pune

Work from Office


Roles & Responsibilities:
- Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically Python for Lambda.
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients).
- Develops code that reuses objects, is well-structured, includes sufficient comments, and is easy to maintain.
- Provides follow-up production support when needed; submits change control requests and documents.
- Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping), and in other meetings such as those for use case creation and analysis.
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied; assists in integration, systems acceptance, and other related testing as needed.
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL.
- Deep experience developing data processing and manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- Experience deploying and operationalizing code using CI/CD tools (Bitbucket and Bamboo).
- Strong AWS cloud computing experience; extensive experience with Lambda, S3, EMR, and Redshift.
- Should have worked on data warehouse/database technologies for at least 8 years.
- Any AWS certification is an added advantage.
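Since this role centers on Python for Lambda, a minimal sketch of a handler that reacts to an S3 put event might look like the following; the processing step is a placeholder and the event shape is the standard S3 notification format.

```python
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # An S3 trigger delivers one or more records describing the new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()
        # Placeholder processing step: report the object size.
        print(f"Received {key} from {bucket}: {len(body)} bytes")
    return {"statusCode": 200, "body": json.dumps("processed")}
```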

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office


As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; experience with AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
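The ingest-transform-load pattern this posting describes can be pictured with a minimal PySpark batch sketch; the S3 paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-transform-load").getOrCreate()

# Ingest: read raw CSV files (path is a placeholder).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: type the amount column, drop bad rows, add a load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet to the curated zone.
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```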

Posted 2 weeks ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office


As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions, working on data solutions built with the Spark framework in Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines and work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using technologies such as Apache Spark and Kafka.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers such as Kafka.
- Experience with AWS EMR, AWS Glue, or Databricks, plus AWS Redshift and DynamoDB; good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
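This posting adds Step Functions orchestration to the mix; a hedged sketch of kicking off an ETL state machine run from Python with boto3 follows. The state machine ARN and input payload are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Start one execution of an ETL state machine (ARN is hypothetical).
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-flow",
    input=json.dumps({"run_date": "2025-01-01"}),
)
print(response["executionArn"])
```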

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Hyderabad, Bengaluru

Hybrid


Cloud and AWS Expertise:
- In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena.
- Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
- Expertise in ETL/ELT processes, data modeling, and data warehousing.
- Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
- Proficiency in handling structured and unstructured data.

Programming and Scripting:
- Proficiency in Python, PySpark, and SQL for data manipulation and pipeline development.
- Expertise in working with data warehousing solutions like Redshift.
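Athena appears in the service list above; as a small, hedged sketch, submitting a query through boto3 looks like this. The database name, table, and results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Submit a query; Athena writes results to the given S3 location (placeholders).
run = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM events GROUP BY status",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(run["QueryExecutionId"])  # poll get_query_execution with this id
```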

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram

Work from Office


To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
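This role calls out Delta Lake on S3; a minimal sketch, assuming a Spark session where Delta Lake is already available (e.g., Databricks, or pyspark with the delta-spark package configured), with placeholder paths:

```python
from pyspark.sql import SparkSession

# Assumes a session with Delta Lake support (Databricks, or delta-spark installed
# and its SQL extensions configured on the cluster).
spark = SparkSession.builder.appName("delta-demo").getOrCreate()

df = spark.read.parquet("s3://example-bucket/raw/customers/")  # placeholder path

# Write as a Delta table, then read it back.
df.write.format("delta").mode("overwrite").save("s3://example-bucket/delta/customers/")
spark.read.format("delta").load("s3://example-bucket/delta/customers/").show(5)
```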

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office


To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 2 weeks ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid


Job Title: Sr. QA Automation - Glue

- 5+ years of QA experience with a strong focus on test automation.
- Good understanding of core Java/JavaScript and Python for writing scalable and maintainable test scripts, along with AWS Glue.
- Good experience in SQL for data verification and validation (optional).
- Good experience in performance testing (JMeter/Locust/Gatling).
- Experience with API testing tools (e.g., Postman, REST Assured) and automation frameworks (e.g., Selenium WebDriver, Playwright, Pytest).
- Familiarity with CI/CD pipelines and tools like Jenkins, GitLab, or GitHub Actions, and GitHub Copilot.
- Good understanding of software QA methodologies, tools, and processes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Good to have: Hands-on experience with content management system based applications.

Note: We are looking to hire candidates with a notice period of one month or less.

About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an awesome tribe.

Why work for Material: In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is the right fit for you. Here's a bit about who we are and highlights of what we offer.

Who We Are & What We Care About: Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance, and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world. We develop capabilities, craft, and leading-edge market offerings across seven global practices including strategy and insights, design, data & analytics, technology, and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness, and we amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding combined with a science and systems approach uniquely equips us to bring a rich frame of reference to our work. We are a community focused on learning and making an impact: an outcomes-focused company that creates experiences that matter, creates new value, and makes a difference in people's lives.

What We Offer:
- Professional development and mentorship.
- Hybrid work mode with a remote-friendly workplace (certified Great Place To Work six times in a row).
- Health and family insurance.
- 40+ leaves per year along with maternity and paternity leaves.
- Wellness, meditation, and counselling sessions.
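The API-testing stack above spans Postman/REST Assured and Pytest; a hedged sketch of the same idea in Python using requests with pytest follows (the endpoint and response contract are hypothetical, and REST Assured itself is a Java library, so this is an equivalent, not the named tool).

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder endpoint

def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Validate the contract: required fields are present with sensible types.
    assert {"id", "name", "email"} <= body.keys()
    assert isinstance(body["id"], int)

def test_unknown_user_returns_404():
    resp = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=10)
    assert resp.status_code == 404
```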

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune, and Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
- 5-14 years of experience in Big Data and related data technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of big data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native AWS cloud data services
- Ability to lead a team efficiently
- Experience designing and implementing big data solutions
- Practitioner of Agile methodology

We Offer:
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored tech talks and hackathons
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
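The stream-processing requirement above (Spark Streaming consuming from Kafka) can be pictured with a minimal Structured Streaming sketch; the broker address and topic name are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Read a Kafka topic as a stream (broker and topic are hypothetical).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; decode and count events per minute
# using the record timestamp the Kafka source provides.
counts = (
    stream.select(F.col("value").cast("string").alias("event"),
                  F.col("timestamp"))
    .groupBy(F.window(F.col("timestamp"), "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```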

Posted 2 weeks ago

Apply

3.0 - 5.0 years

4 - 6 Lacs

Bengaluru

Work from Office


Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role is to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams toward developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
Primary skills: Technology -> Cloud Platform -> AWS Data Analytics -> AWS Glue DataBrew
Preferred skills: Technology -> Cloud Platform -> AWS Data Analytics -> AWS Glue DataBrew

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements
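The primary skill here is AWS Glue DataBrew; a minimal, hedged sketch of starting an existing DataBrew job run with boto3 follows (the job name is hypothetical, and the job itself is assumed to be already defined in DataBrew).

```python
import boto3

databrew = boto3.client("databrew")

# Kick off an existing DataBrew recipe or profile job (name is a placeholder).
run = databrew.start_job_run(Name="example-profile-job")
print(run["RunId"])  # use this id with list_job_runs to check status
```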

Posted 2 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office


Role: Data Engineer
Experience: 4-6 years
Location: Chennai, Bangalore, Pune, Hyderabad, Kochi, Bhubaneshwar

Required skill set:
=> Should have experience in PySpark
=> Should have experience in AWS Glue

Interested candidates can send a resume to jegadheeswari.m@spstaffing.in or reach me at 9566720836.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

16 - 20 Lacs

Gurugram, Sector 20

Work from Office


Conduct API testing using REST Assured. Perform automation testing using Selenium WebDriver. Carry out performance and load testing with JMeter. Ensure the quality and reliability of applications integrating through pub/sub mechanisms, AWS API Gateway, and REST APIs. Work with publisher/subscriber event-based integrations, AWS Glue, AWS EventBridge, and Lambda functions. Collaborate with cross-functional teams to identify and resolve integration and testing challenges.

Requirements: Proven experience in API testing, automation testing (Selenium WebDriver), and performance testing (JMeter). Strong understanding of integration patterns such as pub/sub and REST-based messaging. Hands-on experience with AWS technologies including AWS Glue, AWS EventBridge, and Lambda functions. Ability to work effectively in a hybrid setting and commute to the office as required.

Skills: Automation Testing, Java, Selenium WebDriver, BDD Cucumber, API Testing, AWS (minimum 6 months)
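For the EventBridge integrations being tested here, a hedged sketch of how a test harness could publish a synthetic event from Python with boto3 and assert acceptance; the bus name, source, and payload are placeholders, and the downstream Lambda's effect would be asserted separately.

```python
import json
import boto3

events = boto3.client("events")

# Publish a synthetic event to a custom bus (names are hypothetical).
response = events.put_events(
    Entries=[
        {
            "Source": "qa.test-harness",
            "DetailType": "OrderCreated",
            "Detail": json.dumps({"orderId": "TEST-123"}),
            "EventBusName": "example-bus",
        }
    ]
)
# EventBridge reports per-entry failures; zero means the event was accepted.
assert response["FailedEntryCount"] == 0
```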

Posted 2 weeks ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office


About the Role: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities:
- Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation.
- Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification.
- Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines.
- Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection.
- Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations.
- Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data.
- Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
- Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices.
- Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption.
- Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture.
- Experience building or maintaining test data generators; contributions to internal quality dashboards or data observability systems; awareness of metadata-driven testing approaches and lineage-based validations.
- Experience working with agile testing methodologies such as Scaled Agile; familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.

Must-Have Skills:
- 6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation.
- Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries and validation scripts.
- Practical experience with manual validation of data pipelines and source-to-target testing.
- Experience validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance.
- Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery.
- Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation.
- Experience working in Agile/Scrum environments with standard QA processes.
- Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
- Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
- Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation.
- Proven experience in manual and automated testing of batch and real-time data pipelines.
- Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift).
- Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr.
- Ability to troubleshoot data issues independently and collaborate with engineering on root cause analysis.
- Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions).
- Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro.
- Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation (see the PySpark sketch below).
- Hands-on experience with API testing using Postman, pytest, or custom automation scripts.

Good-to-Have Skills:
- Experience with data governance tools such as Apache Atlas, Collibra, or Alation.
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.

Education and Professional Certifications: Bachelor's/Master's degree in computer science and engineering preferred.

Soft Skills: Excellent analytical and troubleshooting skills; strong verbal and written communication; ability to work effectively with global, virtual teams; high degree of initiative and self-motivation; ability to manage multiple priorities successfully; team-oriented with a focus on achieving team goals; strong presentation and public speaking skills.
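As a minimal sketch of the data-quality checks this posting names (schema, nulls, duplicates), written in PySpark with placeholder paths and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/orders/")  # placeholder

# Schema check: the expected columns must be present.
expected = {"order_id", "customer_id", "amount"}
missing = expected - set(df.columns)
assert not missing, f"missing columns: {missing}"

# Null check on the primary key.
null_keys = df.filter(F.col("order_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows with null order_id"

# Duplicate check on the primary key.
dupes = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
assert dupes == 0, f"{dupes} duplicated order_id values"
```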

Posted 3 weeks ago

Apply

3.0 - 7.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office


Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]

Skills: Python, PySpark, SQL, IAM, CloudFormation, Step Functions, and Redshift.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office


Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]

Skills: Python, PySpark, SQL, AWS services - AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch; CI/CD automation of AWS CloudFormation stacks.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office


Job Information:
- Job Opening ID: ZR_1634_JOB
- Date Opened: 12/12/2022
- Industry: Technology
- Work Experience: 5-8 years
- Job Title: AWS-BIGDATA-DEVELOPER
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4

Roles and Responsibilities:
- Experience in AWS Glue
- Experience with one or more of the following: Spark, Scala, Python, and/or R
- Experience in API development with NodeJS
- Experience with AWS (S3, EC2) or another cloud provider
- Experience with data virtualization tools like Dremio and Athena is a plus
- Technically proficient in Big Data concepts
- Technically proficient in Hadoop and NoSQL (MongoDB)
- Good communication and documentation skills

Posted 3 weeks ago

Apply

6.0 - 10.0 years

1 - 4 Lacs

Pune

Work from Office


Job Information:
- Job Opening ID: ZR_1594_JOB
- Date Opened: 29/11/2022
- Industry: Technology
- Work Experience: 6-10 years
- Job Title: AWS Glue Engineer
- City: Pune
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4

Roles & Responsibilities:
- Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically Python for Lambda.
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients).
- Develops code that reuses objects, is well-structured, includes sufficient comments, and is easy to maintain.
- Provides follow-up production support when needed; submits change control requests and documents.
- Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping), and in other meetings such as those for use case creation and analysis.
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied; assists in integration, systems acceptance, and other related testing as needed.
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL.
- Deep experience developing data processing and manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- Experience deploying and operationalizing code using CI/CD tools (Bitbucket and Bamboo).
- Strong AWS cloud computing experience; extensive experience with Lambda, S3, EMR, and Redshift.
- Should have worked on data warehouse/database technologies for at least 8 years.
- Any AWS certification is an added advantage.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

15 - 22 Lacs

Gurugram

Work from Office


Data Engineer
Experience: 4 years, with a minimum of 3+ years of relevant experience in data engineering
Location: Gurgaon

Role Summary: The Data Engineer will develop and maintain AWS-based data pipelines, ensuring optimal ingestion, transformation, and storage of clinical trial data. The role requires expertise in ETL, AWS Glue, Lambda functions, and Redshift optimization.

Must have:
- AWS (Glue, Lambda, Redshift, Step Functions)
- Python, SQL, API-based ingestion
- PySpark
- Redshift, SQL/PostgreSQL, Snowflake (optional)
- Redshift query optimization, indexing
- IAM, encryption, row-level security
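Given the Redshift-plus-Python stack listed above, a hedged sketch of running a statement through the Redshift Data API from boto3, which avoids managing JDBC connections; the cluster identifier, database, user, and SQL are placeholders.

```python
import boto3

rsd = boto3.client("redshift-data")

# Submit a statement asynchronously (identifiers are hypothetical).
stmt = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT trial_id, COUNT(*) FROM visits GROUP BY trial_id",
)
# Poll describe_statement / fetch get_statement_result with this id.
print(stmt["Id"])
```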

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office


Role Overview: An AWS SME with a data science background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role combines cloud computing expertise and data science skills to optimize and innovate business processes.

Key Responsibilities:
- Data Analysis and Modelling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services like SageMaker, Bedrock, and Textract.
- Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows.
- Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services.
- Security and Compliance: Ensuring data security and compliance with industry standards and best practices.
- Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps, and business stakeholders, to deliver data-driven solutions.
- Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure.
- Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders.

Skills & Qualifications:
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker).
- Strong programming skills in Python.
- Experience with the AI/ML project life cycle.
- Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn).
- Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow).
- Excellent communication and collaboration abilities.
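As a small illustration of the AWS AI services this role names (SageMaker, Bedrock, Textract), a hedged boto3 sketch that extracts text from a document stored in S3; the bucket and key are placeholders.

```python
import boto3

textract = boto3.client("textract")

# Synchronous text detection on a single-page document in S3 (placeholders).
result = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-bucket", "Name": "scans/form.png"}}
)

# Collect the detected lines of text from the returned blocks.
lines = [b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE"]
print("\n".join(lines))
```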

Posted 3 weeks ago

Apply

12.0 - 17.0 years

10 - 14 Lacs

Gurugram

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication among team members.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Lead the effort to design, build, and configure applications
- Act as the primary point of contact
- Ensure seamless communication among team members
- Provide guidance and support to team members

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue
- Strong understanding of cloud computing concepts
- Experience in designing and implementing scalable applications
- Knowledge of data integration and ETL processes
- Hands-on experience with AWS services such as S3, Lambda, and Redshift

Additional Information:
- The candidate should have a minimum of 12 years of experience in AWS Glue
- This position is based at our Gurugram office
- 15 years of full-time education is required

Posted 3 weeks ago

Apply

12.0 - 17.0 years

10 - 14 Lacs

Gurugram

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Analytics
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Expected to provide solutions to problems that apply across multiple teams
- Lead the application development process
- Ensure effective communication within the team and with stakeholders
- Provide guidance and support to team members

Professional & Technical Skills:
- Must-have: Proficiency in AWS Analytics
- Strong understanding of cloud computing principles
- Experience in designing and implementing scalable applications
- Knowledge of data analytics and visualization tools
- Hands-on experience with AWS services
- Familiarity with DevOps practices

Additional Information:
- The candidate should have a minimum of 12 years of experience in AWS Analytics
- This position is based at our Gurugram office
- 15 years of full-time education is required

Posted 3 weeks ago

Apply

2.0 - 6.0 years

10 - 14 Lacs

Hyderabad

Work from Office


Job: Digital Solutions
Schedule: Full-time
Employment Type: Agency Contractor
Job Level: Experienced
Job Posting: May 28, 2025
Unposting Date: Jun 27, 2025
Reporting Manager Title: Director - Digital Customer Solutions

We deliver the world's most complex projects. Work as part of a collaborative and inclusive team and enjoy a varied and challenging role. Building on our past, ready for the future.

Worley is a global professional services company of energy, chemicals, and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role:
- Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform.
- Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale.
- Optimize and troubleshoot AWS Glue jobs for performance and reliability.
- Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process.
- Collaborate with data architects to design and implement data models that support business requirements.
- Create and maintain ETL processes using Airflow, Python, and PySpark to move and transform data between different systems (see the Airflow sketch below).
- Implement monitoring solutions to track data pipeline performance and proactively identify and address issues.
- Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK, and others.
- Proficiency in event-driven integrations, and batch-based and API-led data integrations.
- Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines, or GitHub Actions.

About You: To be considered for this role, it is envisaged you will possess the following attributes:
- Strong hands-on experience implementing microservice-based solutions.
- Expertise in Azure services like Azure Kubernetes Service (AKS), Azure Functions, Azure Blob Storage, Azure SQL, Azure APIM, and Application Gateways.
- Good understanding of OWASP vulnerabilities and their remediation.
- Strong understanding of microservices architecture principles, containerization (e.g., Docker, Kubernetes), and API design.
- Strong understanding of data architecture, data modelling, data management, and data integration patterns and challenges.
- Experience with scalable data platforms and solutions integrating and standardising data from different enterprise applications.
- One or more certifications as a Solution Architect on any of the leading cloud platforms.
- Experience with EPC (Engineering, Procurement and Construction) customers.

Moving forward together: We're committed to building a diverse, inclusive, and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis protected by law. We want our people to be energized and empowered to drive sustainable impact, so our focus is on a values-inspired culture that unlocks brilliance through belonging, connection, and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low-carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here, and there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
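The pipeline work described above combines Airflow, Python, and Glue; a minimal sketch of an Airflow DAG that triggers a Glue job via boto3 follows. The DAG id, schedule, and Glue job name are hypothetical, and the `schedule` parameter assumes Airflow 2.4 or later.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_glue_job():
    # Start an existing Glue job and return its run id (job name is a placeholder).
    glue = boto3.client("glue")
    run = glue.start_job_run(JobName="example-etl-job")
    return run["JobRunId"]

with DAG(
    dag_id="glue_ingest_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # assumes Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    trigger_glue = PythonOperator(
        task_id="trigger_glue_job",
        python_callable=run_glue_job,
    )
```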

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Mumbai

Hybrid


PF detection is mandatory.

Job Description:
- Minimum 5 years of experience in database development and ETL tools.
- Strong expertise in SQL and database platforms (e.g., SQL Server, Oracle, PostgreSQL).
- Proficiency in ETL tools (e.g., Informatica, SSIS, Talend, DataStage) and scripting languages (e.g., Python, shell).
- Experience with data modeling and schema design.
- Familiarity with cloud databases and ETL tools (e.g., AWS Glue, Azure Data Factory, Snowflake).
- Understanding of data warehousing concepts and best practices.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Work from Office


- PySpark, Spark SQL, SQL, and Glue; AWS cloud experience
- Good understanding of dimensional modelling
- Good understanding of DevOps, CloudOps, DataOps, CI/CD, and an SRE mindset
- Understanding of Lakehouse and DW architecture
- Strong analysis and analytical skills
- Understanding of version control systems, specifically Git
- Strong in software engineering: APIs, microservices, etc.
- Soft skills: written and oral communication, and the ability to translate business needs to systems
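Since this role pairs PySpark/Spark SQL with dimensional modelling, a hedged sketch of a classic star-schema aggregation in Spark SQL; the table paths, names, and columns are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-join").getOrCreate()

# Register placeholder fact and dimension tables read from the lakehouse.
spark.read.parquet("s3://example-bucket/fact_sales/").createOrReplaceTempView("fact_sales")
spark.read.parquet("s3://example-bucket/dim_product/").createOrReplaceTempView("dim_product")

# Classic star-schema query: fact table joined to its dimension, then aggregated.
result = spark.sql("""
    SELECT p.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY p.category
""")
result.show()
```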

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 10 Lacs

Pune, Chennai, Bengaluru

Work from Office


Candidates must have 2-3 years of relevant experience in AWS Glue, data integration, Lambda, Athena, and Redshift.

Posted 3 weeks ago

Apply