
228 AWS Glue Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

5 - 11 Lacs

Pune, Mumbai (All Areas)

Hybrid


Overview: TresVista is looking to hire an Associate for its Data Intelligence Group, who will be primarily responsible for managing clients and for monitoring and executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers and Analysts across multiple data engineering efforts for our clients, spanning varied technologies. They will join a current team of 70+ members, a mix of Data Engineers, Data Visualization Experts, and Data Scientists.

Roles and Responsibilities:
- Interacting with clients (internal or external) to understand their problems and working on solutions that address their needs
- Driving projects and working closely with a team to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team: strategizing how to approach tasks, creating timelines and goals, and distributing information and tasks to team members
- Conducting meetings, documenting findings, and communicating them effectively to clients, management, and cross-functional teams
- Creating ad-hoc reports for internal requests across departments
- Automating processes using data transformation tools

Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience with cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in creating reports and dashboards using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience preferred
- Experience in the Financial Services domain is a plus
- Experience with Machine Learning tools and techniques is good to have

Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards
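Several listings on this page, including the one above, ask for ETL/ELT pipelines orchestrated with Apache Airflow. As a rough illustration only (not any employer's actual stack), here is a minimal Airflow DAG sketch; the DAG id, task names, and extract/transform logic are all hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Hypothetical extract step: pull raw records from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(**context):
    # Read the upstream task's return value from XCom and apply a
    # trivial transformation before a downstream load step would run.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**r, "amount_usd": r["amount"]} for r in rows]


with DAG(
    dag_id="example_etl_pipeline",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ style schedule argument
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task
```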

Posted 3 hours ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Bengaluru

Work from Office


Role: Senior Data Engineer
Location: Bangalore - Hybrid
Experience: 10+ Years

Job Requirements:
- ETL & Data Pipelines: Experience building and maintaining ETL pipelines over large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
- Programming & Data Processing: Strong Python development experience with proficiency in Spark or PySpark; experience using APIs
- Database Management: Strong skills in writing SQL queries and performance tuning in AWS Redshift; proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
- AWS Services: Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models

Interested candidates can share their resume at Neesha1@damcogroup.com
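For context on the day-to-day Glue work a role like this involves, below is a minimal AWS Glue (PySpark) job sketch: it reads a table from the Glue Data Catalog, applies a column mapping, and writes Parquet to S3. The database, table, and bucket names are placeholders, not from any listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Rename/cast columns; each mapping is (source, source_type, target, target_type).
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write the result to S3 as Parquet (placeholder bucket).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```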

Posted 4 hours ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Python (Programming Language)
Good to have skills: AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Mentor junior team members to enhance their skills and knowledge
- Continuously evaluate and improve application performance and user experience

Professional & Technical Skills:
- Must Have Skills: Proficiency in Python (Programming Language), AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
- Good To Have Skills: Experience with AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
- Strong understanding of software development life cycle methodologies
- Experience with version control systems such as Git
- Familiarity with RESTful APIs and web services

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language)
- This position is based at our Bengaluru office
- 15 years of full time education is required

Qualification: 15 years full time education

Posted 6 hours ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office


We are looking for energetic, self-motivated, and exceptional Data Engineers to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL
- Experience writing and interpreting SQL queries and designing data models and data standards
- Experience with SQL Server, Oracle, and/or cloud databases
- Experience with data warehousing and data marts, Star and Snowflake models
- Experience loading data into databases from databases and files
- Experience analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships between systems and applications
- Must be comfortable conversing with end-users
- Must be able to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders to build data solutions that address analytical and reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance

Posted 6 hours ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office


We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL
- Experience writing and interpreting SQL queries and designing data models and data standards
- Experience with SQL Server, Oracle, and/or cloud databases
- Experience with data warehousing and data marts, Star and Snowflake models
- Experience loading data into databases from databases and files
- Experience analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships between systems and applications
- Must be comfortable conversing with end-users
- Must be able to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders to build data solutions that address analytical and reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance

Immediate joiners.

Posted 8 hours ago

Apply

7.0 - 9.0 years

15 - 30 Lacs

Thiruvananthapuram

Work from Office


Job Title: Senior Data Associate - Cloud Data Engineering
Experience: 7+ Years
Employment Type: Full-Time
Industry: Information Technology / Data Engineering / Cloud Platforms

Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both the AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory
- Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions
- Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets
- Implement data quality checks, monitoring, and governance practices
- Optimize data workflows for performance, scalability, and cost-efficiency
- Support data migration and modernization initiatives across cloud platforms
- Document data flows, architecture, and technical specifications

Required Skills & Qualifications:
- 7+ years of experience in data engineering, data integration, or related roles
- Strong hands-on experience with AWS Redshift, Athena, Glue, and S3; Azure Data Lake, Synapse Analytics, and Databricks; PySpark for distributed data processing; and MongoDB and Oracle databases
- Proficiency in SQL, Python, and data modeling
- Experience with ETL/ELT design and implementation
- Familiarity with data governance, security, and compliance standards
- Strong problem-solving and communication skills

Preferred Qualifications:
- Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate)
- Experience with CI/CD pipelines and DevOps for data workflows
- Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview)
- Exposure to real-time data processing and streaming technologies

Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake

Posted 8 hours ago

Apply

12.0 - 15.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office


Senior Data Architect - Big Data & Cloud Solutions
Experience: 10+ Years
Industry: Information Technology / Data Engineering / Cloud Computing

Job Summary: We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of scalable, high-performance data solutions. The ideal candidate will have deep expertise in Apache Kafka, Apache Spark, AWS Glue, PySpark, and cloud-native architectures, with a strong background in solution architecture and enterprise data strategy.

Key Responsibilities:
- Design and implement end-to-end data architecture solutions on AWS using Glue, S3, Redshift, and other services
- Architect and optimize real-time data pipelines using Apache Kafka and Spark Streaming
- Lead the development of ETL/ELT workflows using PySpark and AWS Glue
- Collaborate with stakeholders to define data strategies, governance, and best practices
- Ensure data quality, security, and compliance across all data platforms
- Provide technical leadership and mentorship to data engineers and developers
- Evaluate and recommend new tools and technologies to improve data infrastructure
- Translate business requirements into scalable and maintainable data solutions

Required Skills & Qualifications:
- 10+ years of experience in data engineering, architecture, or related roles
- Strong hands-on experience with Apache Kafka (event streaming, topic design, schema registry), Apache Spark (batch and streaming), AWS Glue, S3, Redshift, Lambda, CloudFormation/Terraform, and PySpark for large-scale data processing
- Proven experience in solution architecture and designing cloud-native data platforms
- Deep understanding of data modeling, data lakes, and data warehousing concepts
- Strong programming skills in Python and SQL
- Experience with CI/CD pipelines and DevOps practices for data workflows
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- AWS Certified Solutions Architect or Big Data Specialty certification
- Experience with data governance tools and frameworks
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Step Functions)
- Exposure to machine learning pipelines and MLOps is a plus

Required Skills: Apache Kafka, PySpark, AWS Cloud

Posted 8 hours ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office


Azure Infrastructure Consultant - Cloud & Data Integration
Experience: 8+ Years
Employment Type: Full-Time
Industry: Information Technology / Cloud Infrastructure / Data Engineering

Job Summary: We are looking for a seasoned Azure Infrastructure Consultant with a strong foundation in cloud infrastructure, data integration, and real-time data processing. The ideal candidate will have hands-on experience across Azure and AWS platforms, with deep knowledge of Apache NiFi, Kafka, AWS Glue, and PySpark. This role involves designing and implementing secure, scalable, and high-performance cloud infrastructure and data pipelines.

Key Responsibilities:
- Design and implement Azure-based infrastructure solutions, ensuring scalability, security, and performance
- Lead hybrid cloud integration projects involving Azure and AWS services
- Develop and manage ETL/ELT pipelines using AWS Glue, Apache NiFi, and PySpark
- Architect and support real-time data streaming solutions using Apache Kafka
- Collaborate with cross-functional teams to gather requirements and deliver infrastructure and data solutions
- Implement infrastructure automation using tools like Terraform, ARM templates, or Bicep
- Monitor and optimize cloud infrastructure and data workflows for cost and performance
- Ensure compliance with security and governance standards across cloud environments

Required Skills & Qualifications:
- 8+ years of experience in IT infrastructure and cloud consulting
- Strong hands-on experience with Azure IaaS/PaaS (VMs, VNets, Azure AD, App Services, etc.), AWS services including Glue, S3, and Lambda, Apache NiFi for data ingestion and flow management, Apache Kafka for real-time data streaming, and PySpark for distributed data processing
- Proficiency in scripting (PowerShell, Python) and Infrastructure as Code (IaC)
- Solid understanding of networking, security, and identity management in cloud environments
- Strong communication and client-facing skills

Preferred Qualifications:
- Azure or AWS certifications (e.g., Azure Solutions Architect, AWS Data Analytics Specialty)
- Experience with CI/CD pipelines and DevOps practices
- Familiarity with containerization (Docker, Kubernetes) and orchestration
- Exposure to data governance tools and frameworks

Required Skills: Azure, Microsoft Azure, Azure PaaS, AWS Glue

Posted 8 hours ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid


The resource should have a strong background in working with cloud platforms, APIs, and data processing. Experience with tools like AWS Glue, Athena, and Databricks will be highly beneficial.

- AWS Glue Jobs: The QE should be familiar with AWS Glue jobs for ETL processes. We expect them to validate the successful execution of the Glue jobs in scope, ensuring that transformations and data ingestion tasks run smoothly without errors.
- Athena Querying: Experience querying data with AWS Athena is a must, as the QE will be required to validate queries across multiple datasets. We expect the resource to run and validate Athena queries for data accuracy and integrity (see the sketch after this listing).
- Databricks Testing: The candidate should also have experience with Databricks, particularly in validating data pipelines and transformations within the Databricks environment. The QE will need to test Databricks notebooks or jobs, ensuring data accuracy in the Bronze, Silver, and Gold layers.
- Boomi integrations
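As a rough illustration of the Athena validation work described above, here is a minimal boto3 sketch that runs a query and polls for its result. The database, result bucket, region, and query are placeholders, and error handling is kept to a minimum.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Placeholder database, query, and result bucket.
response = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM curated.orders",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

assert state == "SUCCEEDED", f"Athena query ended in state {state}"

# Fetch the first page of results; row 0 is the header row.
rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
count = rows[1]["Data"][0]["VarCharValue"]
print(f"curated.orders row count: {count}")
```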

Posted 1 day ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Ahmedabad

Work from Office


Travel Designer Group

Founded in 1999, Travel Designer Group has consistently achieved remarkable milestones in a relatively short span of time. While we embody the agility, growth mindset, and entrepreneurial energy typical of start-ups, we bring with us over 24 years of deep-rooted expertise in the travel trade industry. As a leading global travel wholesaler, we serve as a vital bridge connecting hotels, travel service providers, and an expansive network of travel agents worldwide. Our core strength lies in sourcing, curating, and distributing high-quality travel inventory through our award-winning B2B reservation platform, RezLive.com. This enables travel trade professionals to access real-time availability and competitive pricing to meet the diverse needs of travelers globally. Our expanding portfolio includes innovative products such as:
- Rez.Tez
- Affiliate.Travel
- Designer Voyages
- Designer Indya
- RezRewards
- RezVault

With a presence in 32+ countries and a growing team of 300+ professionals, we continue to redefine travel distribution through technology, innovation, and a partner-first approach.

Website: https://www.traveldesignergroup.com/

Profile: ETL Developer
- ETL Tools (any 1): Talend / Apache NiFi / Pentaho / AWS Glue / Azure Data Factory / Google Dataflow
- Workflow & Orchestration (any 1, good to have, not mandatory): Apache Airflow / dbt (Data Build Tool) / Luigi / Dagster / Prefect / Control-M
- Programming & Scripting: SQL (advanced); Python (mandatory); Bash/Shell (mandatory); Java or Scala (optional, for Spark)
- Databases & Data Warehousing: MySQL / PostgreSQL / SQL Server / Oracle (mandatory); Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse Analytics, and MongoDB / Cassandra (good to have)
- Cloud & Data Storage (any 1-2): AWS S3 / Azure Blob Storage / Google Cloud Storage (mandatory); Kafka / Kinesis / Pub/Sub

Interested candidates can share their resume at shivani.p@rezlive.com

Posted 1 day ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: As a Data Engineer II, you will design, build, and maintain scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines (ETL/ELT)
- Ingest, clean, transform, and curate data for analytics and ML usage
- Work with orchestration tools like Airflow to schedule and manage workflows
- Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect)
- Build data models and enable real-time and batch processing using Spark and AWS services
- Collaborate with DevOps and architects on system scalability and performance
- Optimize Redshift-based data solutions for performance and reliability

Must-Have Skills & Experience:
- 3+ years in Data Engineering or Data Science with strong ETL and pipeline experience
- Expertise in Python and SQL
- Strong experience in data warehousing, data lakes, data modeling, and ingestion
- Working knowledge of Airflow or similar orchestration tools
- Hands-on data extraction experience (CDC and batch-based) using Debezium, Kafka Connect, and AWS DMS
- Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
- Knowledge of Spark or similar distributed systems
- Experience with queuing/messaging systems like SQS, Kinesis, and RabbitMQ

Posted 1 day ago

Apply

5.0 - 10.0 years

22 - 37 Lacs

Pune, Gurugram, Bengaluru

Hybrid


Experience: 5-8 years (Lead, 23 LPA), 8-10 years (Senior Lead, 35 LPA), 10+ years (Architect, 42 LPA) maximum
Location: Bangalore as first preference; Hyderabad, Chennai, Pune, and Gurgaon are also possible
Notice: Immediate to a maximum of 15 days
Mode of Work: Hybrid

Job Description: Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions
- Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control
- Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration (see the sketch after this listing)
- Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning
- Build data lakes and data warehouses using S3, Aurora, and Athena
- Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM
- Develop and maintain metadata, lineage, and data cataloging capabilities
- Participate in data modeling exercises for both OLTP and OLAP environments
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
- Monitor, debug, and optimize data pipelines for reliability and performance

Required Skills & Experience:
- Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront
- Proficiency in PySpark, Python, SQL (basic and advanced), and PL/SQL
- Solid understanding of ETL/ELT processes and data warehousing concepts
- Familiarity with modern data platform fundamentals and distributed data processing
- Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases
- Experience with orchestration and workflow management tools within AWS
- Strong debugging and performance tuning skills across the data stack
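As a small illustration of the pipeline-orchestration work described above, here is a boto3 sketch that starts a Glue job (for example, from a Lambda handler or a scheduler) and polls its run state. The job name, argument, and region are hypothetical placeholders, not from the listing.

```python
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Start a run of a placeholder Glue job, passing one job argument.
run = glue.start_job_run(
    JobName="curate-orders",                  # hypothetical job name
    Arguments={"--run_date": "2024-01-01"},
)
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    status = glue.get_job_run(JobName="curate-orders", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
        break
    time.sleep(30)

print(f"Glue job run {run_id} finished with state {state}")
```

In practice a Step Functions state machine can replace this polling loop entirely (its Glue integration supports synchronous job runs), but the boto3 form above keeps the example self-contained.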

Posted 1 day ago

Apply

3.0 - 5.0 years

22 - 25 Lacs

Bengaluru

Work from Office


Job Description: We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL
- Experience writing and interpreting SQL queries and designing data models and data standards
- Experience with SQL Server, Oracle, and/or cloud databases
- Experience with data warehousing and data marts, Star and Snowflake models
- Experience loading data into databases from databases and files
- Experience analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships between systems and applications
- Must be comfortable conversing with end-users
- Must be able to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders to build data solutions that address analytical and reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance

Posted 1 day ago

Apply

5.0 - 10.0 years

22 - 37 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


We are looking for "AWS Data Engineer (With GCP, BigQuery)" with Minimum 5 years experience Contact- Yashra (95001 81847) Required Candidate profile Athena,Step Functions, Spark - Pyspark, ETL Fundamentals, SQL(Basic + Advanced), Glue, Python, Lambda, Data Warehousing, EBS /EFS, AWS EC2,Lake Formation, Aurora, S3, Modern Data Platform Fundamentals

Posted 1 day ago

Apply

5.0 - 15.0 years

0 - 28 Lacs

Bengaluru

Work from Office


Key Skills: Python, PySpark, AWS Glue, Redshift, and Spark Streaming

Job Description:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS
- Proficiency in PySpark for distributed data processing and transformation
- Solid experience with AWS Glue for ETL jobs and managing data workflows
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2
- Broad knowledge of Python, PySpark, Glue jobs, Lambda, Step Functions, and SQL

Client expectations for the Data Engineering requirement (a streaming sketch follows this listing):
1. Process these events and save data in Trusted and Refined bucket schemas
2. Bring six tables of historical data into the Raw bucket; populate historical data in Trusted and Refined bucket schemas
3. Publish Raw, Trusted, and Refined bucket data from #2 and #3 to the corresponding buckets in the CCB data lake; develop an analytics pipeline to publish data to Snowflake
4. Integrate TDQ/BDQ in the Glue pipeline
5. Develop observability dashboards for these jobs
6. Implement reliability wherever needed to prevent data loss
7. Configure data archival policies and periodic cleanup
8. Perform end-to-end testing of the implementation
9. Implement all of the above in production
10. Reconcile data across SORs, the Auth data lake, and the CCB data lake
11. Success criteria: all 50 Kafka events are ingested into the CCB data lake, and the existing 16 Tableau dashboards are populated using this data
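Since this listing centres on ingesting Kafka events into S3-backed lake buckets, below is a minimal Spark Structured Streaming sketch of that pattern (plain PySpark rather than the client's actual Glue pipeline). The broker, topic, and bucket names are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-s3-sketch").getOrCreate()

# Read a stream of events from a placeholder Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string here.
# A real pipeline would parse JSON/Avro against a registered schema instead.
parsed = events.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

# Continuously append micro-batches to a placeholder "raw" bucket as Parquet;
# the checkpoint directory is what makes the stream restartable without loss.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-raw-bucket/orders/")
    .option("checkpointLocation", "s3a://example-raw-bucket/_checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```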

Posted 1 day ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Hybrid


Job Title: Lead Data Engineer

Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and work on project teams to analyse, design, develop, and deploy business intelligence / data integration solutions that support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement, and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, while leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations. It effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should demonstrate qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities:
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional and technical documentation, e.g., ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
- Take a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs
- Perform data analysis to validate data models and confirm the ability to meet business needs
- May serve as project or DI lead, overseeing multiple consultants from various competencies
- Stay current with emerging and changing technologies to recommend and implement beneficial technologies and approaches for data integration
- Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes
- Coach team members to ensure understanding of projects and tasks, provide effective feedback (critical and positive), and promote growth opportunities when appropriate
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level
- Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards; toolsets include, but are not limited to, SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik
- Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices

Required Qualifications:
- 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc.
- 5-8 years of management experience required
- 5-8 years of consulting experience preferred
- Minimum of 5 years of data architecture, data modelling, or similar experience
- Bachelor's degree or equivalent experience; Master's degree preferred
- Strong data warehousing, OLTP systems, data integration, and SDLC background
- Strong experience in orchestration, with working experience in cloud-native / third-party ETL data load orchestration (e.g., Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar)
- Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.)
- Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data
- Understanding of on-premises and cloud infrastructure architectures (e.g., Azure, AWS, GCP)
- Strong experience with Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA, or similar, with experience in CI/CD using one or more code management platforms
- Strong Databricks experience; required to create notebooks in PySpark
- Experience using major data modelling tools (e.g., ERwin, ER/Studio, PowerDesigner, etc.)
- Experience with major database platforms (e.g., SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.)
- 3-5 years of development experience in decision support / business intelligence environments using tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience:
- Knowledge of and working experience with data integration processes, such as data warehousing, EAI, etc.
- Experience providing estimates for data integration projects, including testing, documentation, and implementation
- Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate, and recommend alternative solutions
- Ability to provide technical direction to other team members, including contractors and employees
- Ability to contribute to conceptual data modelling sessions to accurately define business processes, independently of data structures, and then combine the two together
- Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results
- Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM
- Can create documentation and presentations that stand on their own
- Can advise sales on the evaluation of data integration efforts for new or existing client work, and can contribute to internal/external data integration proofs of concept
- Demonstrates the ability to create new and innovative solutions to problems that have not been encountered previously
- Ability to work independently on projects as well as collaborate effectively across teams
- Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success
- Strong team building, interpersonal, analytical, and problem identification and resolution skills
- Experience working with multi-level business communities
- Can effectively utilise SQL and/or the available BI tool to validate and elaborate business rules
- Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems and issues
- Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data
- Demonstrates a complete understanding of, and utilises, DSC methodology documents to efficiently complete assigned roles and associated tasks
- Deals effectively with all team members and builds strong working relationships and rapport with them
- Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution
- Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint

Posted 1 day ago

Apply

5.0 - 8.0 years

15 - 22 Lacs

Ahmedabad

Work from Office


- Strong proficiency in SQL; database experience, Snowflake preferred
- Expertise with Python, especially pandas, is a must
- Experience with Tableau and similar BI tools (Power BI, etc.) is a must

Required Candidate Profile:
- Must have 4+ years' experience with Tableau, SQL, AWS, and Python
- Must be from Ahmedabad or open to relocating to Ahmedabad
- Experience with data modelling
- Experience in AWS environments

Posted 3 days ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office


About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: AWS Solution Architect
Qualification: Any Graduate or Above
Relevant Experience: 10-15 years
Required Technical Skill Set: Data lakes, data warehouses, AWS Glue, Aurora with Postgres, MySQL, and DynamoDB
Location: Bangalore
CTC Range: 25-40 LPA
Notice Period: Any
Shift Timing: N/A
Mode of Interview: Virtual
Mode of Work: WFO (Work From Office)

Pooja Singh KS
IT Staffing Analyst
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
pooja.singh@blackwhite.in | www.blackwhite.in

Posted 3 days ago

Apply

12.0 - 15.0 years

16 - 18 Lacs

Bengaluru

Hybrid


iSource Services is hiring for one of its clients for an AWS-focused role. Candidates need AWS experience (not Azure or GCP), 12-15 years of overall experience, and hands-on expertise in design and implementation. The role involves designing and developing data solutions, including efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Candidates should possess exceptional communication skills to engage effectively with US clients. The ideal candidate must be hands-on, with significant practical experience. Availability to work overlapping US hours is essential. The contract duration is 6 months.

Required Skills: AWS experience, communication skills

Posted 3 days ago

Apply

8.0 - 12.0 years

16 - 27 Lacs

Chennai, Bengaluru

Work from Office


Role & responsibilities Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet) Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL Implement data validation, quality checks, and ensure schema evolution across data sources Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch Collaborate with product owners, architects, and data scientists to deliver robust data workflows Tune job performance, manage partitioning strategies, and reduce job latency/cost Contribute to version control, CI/CD processes, and production support Preferred candidate profile Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization Strong experience in building ETL workflows for large-scale data processing Solid understanding of AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, Athena Proficiency in Python, SQL, and shell scripting Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC) Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest) Experience with Redshift, Snowflake, or other DW platforms Exposure to data governance, cataloging, or DQ frameworks Terraform or infrastructure-as-code experience Understanding of Spark internals, DAGs, and caching strategies

Posted 4 days ago

Apply

6.0 - 9.0 years

25 - 30 Lacs

Gurugram

Work from Office


Experience: 6 to 9 years
Notice Period: Immediate to 15 days
Location: Gurugram (GGN) only
WFO: 4 days a week
Working Shift: 1 PM to 10 PM
Band: 4A

Posted 4 days ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Gurugram

Work from Office


Experienced with AWS, with a strong understanding of cloud services and infrastructure. Knowledgeable in Big Data concepts and experienced with AWS Glue, including setting up jobs, data cataloging, and managing crawlers. Proficient in using and maintaining Apache Airflow for workflow management and Terraform for infrastructure automation. Skilled in Python for scripting and automation tasks. Independent and proactive in solving problems and troubleshooting issues.

Posted 4 days ago

Apply

3.0 - 5.0 years

22 - 25 Lacs

Bengaluru

Work from Office


We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack
- 2+ years of experience building data pipelines with Databricks/PySpark/SQL
- Experience writing and interpreting SQL queries and designing data models and data standards
- Experience with SQL Server, Oracle, and/or cloud databases
- Experience with data warehousing and data marts, Star and Snowflake models
- Experience loading data into databases from databases and files
- Experience analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships between systems and applications
- Must be comfortable conversing with end-users
- Must be able to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders to build data solutions that address analytical and reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance

Posted 4 days ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid


Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time & batch data systems across analytics, ML, and product teams. A hybrid work option is available. Required Candidate profile 3+ yrs in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.

Posted 5 days ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Pune

Hybrid


- Experience developing applications using Python, Glue (ETL), Lambda, and Step Functions in AWS, alongside EKS, S3, EMR, RDS data stores, CloudFront, and API Gateway
- Experience with AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, and Lambda

Required Candidate Profile:
- 10+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications
- 5+ years of people management experience

Posted 5 days ago

Apply

Exploring AWS Glue Jobs in India

AWS Glue is a popular ETL (Extract, Transform, Load) service offered by Amazon Web Services. As businesses in India increasingly adopt cloud technologies, the demand for AWS Glue professionals is on the rise. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries in India.

Top Hiring Locations in India

Here are five major cities in India actively hiring for AWS Glue roles:
- Bangalore
- Mumbai
- Delhi
- Hyderabad
- Pune

Average Salary Range

The salary range for AWS Glue professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries in the range of INR 12-18 lakhs per annum.

Career Path

A typical career path in AWS Glue may look like:
- Junior AWS Glue Developer
- AWS Glue Developer
- Senior AWS Glue Developer
- AWS Glue Tech Lead

Related Skills

In addition to AWS Glue expertise, professionals in this field are often expected to have knowledge of:
- AWS services like S3, Lambda, and Redshift
- Programming languages like Python or Scala
- ETL concepts and best practices

Interview Questions

  • What is AWS Glue and how does it differ from traditional ETL tools? (basic)
  • How do you handle schema evolution in AWS Glue? (medium)
  • Explain the difference between AWS Glue Data Catalog and Glue ETL. (medium)
  • Can you explain how AWS Glue handles job bookmarking? (medium)
  • How do you troubleshoot job failures in AWS Glue? (medium)
  • What are the different types of triggers supported by AWS Glue? (medium)
  • How do you optimize AWS Glue job performance? (advanced)
  • Explain how to set up security configurations in AWS Glue. (advanced)
  • What are the limitations of AWS Glue? (advanced)
  • How do you handle nested data in AWS Glue transformations? (advanced)
  • Explain the difference between dynamic frames and data frames in AWS Glue. (advanced) (see the sketch after this list)
  • How does AWS Glue handle data type conversions? (medium)
  • Can you explain the concept of partitions in AWS Glue tables? (basic)
  • What are the benefits of using AWS Glue over traditional ETL tools? (basic)
  • How do you schedule AWS Glue jobs? (basic)
  • Explain the concept of crawlers in AWS Glue. (medium)
  • What are the different types of AWS Glue jobs? (basic)
  • How do you handle incremental data loading in AWS Glue? (medium)
  • What are the key components of an AWS Glue job? (basic)
  • How do you monitor and audit AWS Glue job executions? (medium)
  • What is the role of AWS Glue in a data lake architecture? (advanced)
  • Explain the concept of a connection in AWS Glue. (basic)
  • How does AWS Glue handle data deduplication? (medium)
  • Can you explain how to orchestrate AWS Glue jobs with other AWS services? (advanced)
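For the dynamic frames vs. data frames question above, the sketch below shows the basic relationship: a Glue DynamicFrame resolves its schema per record (tolerating messy or drifting input) and carries Glue-specific transforms, while converting freely to and from a standard Spark DataFrame. Database, table, and column names are placeholders.

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql.functions import col

glue_context = GlueContext(SparkContext.getOrCreate())

# A DynamicFrame infers schema per record, unlike a DataFrame's single
# fixed schema, which makes it more forgiving of inconsistent input.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"   # placeholder names
)

# resolveChoice is a Glue-only transform for ambiguous types, e.g. a
# column that appears as both string and int across records.
dyf = dyf.resolveChoice(specs=[("amount", "cast:double")])

# Drop into a plain Spark DataFrame for ordinary Spark SQL-style work...
df = dyf.toDF()
df = df.filter(col("amount") > 0)

# ...and convert back to a DynamicFrame for Glue sinks and transforms.
dyf_out = DynamicFrame.fromDF(df, glue_context, "orders_clean")
```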

Closing Remark

As you prepare for AWS Glue job interviews in India, make sure to brush up on your technical skills and showcase your expertise in ETL and AWS services. With the right preparation and confidence, you can land a rewarding career in this growing field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
