244 AWS Glue Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

4 - 8 Lacs

Pune

Work from Office

Position Overview Summary: The Data Engineer will expand and optimize the data and data pipeline architecture, and optimize data flow and collection for cross-functional teams. The Data Engineer will perform data architecture analysis, design, development, and testing to deliver data applications, services, interfaces, ETL processes, reporting, and other workflow and management initiatives. The role will also follow modern SDLC principles, test-driven development, source code reviews, and change control standards in order to maintain compliance with policies. This role requires a highly motivated individual with strong technical ability, data capability, and excellent communication and collaboration skills, including the ability to develop and troubleshoot a diverse range of problems. Responsibilities: Design and develop enterprise data architecture solutions using Hadoop and other data technologies such as Spark and Scala.

Posted 2 hours ago

Apply

3.0 - 5.0 years

14 - 19 Lacs

Mumbai, Pune

Work from Office

Company: Marsh McLennan Agency. Description: Marsh McLennan is seeking candidates for the following position based in the Pune office. Senior Engineer/Principal Engineer. What can you expect? We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR. What is in it for you? Holidays (as per the location), medical & insurance benefits (as per the location), shared transport (provided the address falls in the service zone), a hybrid way of working, the chance to diversify your experience and learn new skills, and the opportunity to work with stakeholders globally to learn and grow. We will count on you to: Design and implement scalable data solutions that support our data-driven decision-making processes. What you need to have: SQL and RDBMS knowledge (5/5), Postgres; extensive hands-on experience with database systems covering tables, schemas, views, and materialized views. AWS knowledge: core and data engineering services; Glue, Lambda, EMR, DMS, and S3 are the services in focus. ETL knowledge: any ETL tool, preferably Informatica. Data warehousing. Big data: Hadoop concepts; Spark (3/5); Hive (5/5); Python/Java. Interpersonal skills: excellent communication skills and team lead capabilities; understanding of data systems in large organizational setups; passion for deep diving into data and delivering value out of it. What makes you stand out: Databricks knowledge; any reporting tool experience, preferably MicroStrategy.
Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.

Posted 2 hours ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Stellantis is seeking a passionate, innovative, results-oriented Information Communication Technology (ICT) Manufacturing AWS Cloud Architect to join the team. As a Cloud Architect, the selected candidate will leverage business analysis, data management, and data engineering skills to develop sustainable data tools supporting Stellantis's Manufacturing Portfolio Planning. This role will collaborate closely with data analysts and business intelligence developers within the Product Development IT Data Insights team. Job responsibilities include, but are not limited to: Deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. Assembling large, complex sets of data that meet non-functional and functional business requirements. Identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS, cloud, and other SQL technologies. Working with stakeholders to support their data infrastructure needs while assisting with data-related technical issues. Maintaining high-quality ontology and metadata of data systems. Establishing a strong relationship with the central BI/data engineering COE to ensure alignment in leveraging corporate-standard technologies, processes, and reusable data models. Ensuring data security and developing traceable procedures for user access to data systems. Qualifications, Experience and Competency: Education: Bachelor's or Master's degree in Computer Science or a related IT-focused degree. Experience (Essential): Overall 10-15 years of IT experience. Develop, automate, and maintain the build of AWS components and operating systems. Work with application and architecture teams to conduct proofs of concept (POC) and implement the design in a production environment in AWS. Migrate and transform existing workloads from on-premises to AWS. Minimum 5 years of experience in data engineering or data architecture: concepts, approach, data lakes, data extraction, data transformation. Proficient in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies. Experience operating very large data warehouses or data lakes. Investigate and develop new microservices and features using the latest technology stacks from AWS. Self-starter with the desire and ability to quickly learn new technologies. Strong interpersonal skills with the ability to communicate and build relationships at all levels. Hands-on experience with AWS cloud technologies such as S3, AWS Glue, Glue Catalog, Athena, AWS Lambda, AWS DMS, PySpark, and Snowflake. Experience building data pipelines and applications to stream and process large datasets at low latencies. Desirable: Familiarity with data analytics, engineering processes, and technologies. Ability to work successfully within a global and cross-functional team. A passion for technology. We are looking for someone who is keen to leverage their existing skills while trying new approaches, and to share that knowledge with others to help grow the data and analytics teams at Stellantis to their full potential!
Specific Skill Requirement: AWS services (Glue, DMS, EC2, RDS, S3, VPCs and all core services, Lambda, API Gateway, CloudFormation, CloudWatch, Route 53, Athena, IAM) and SQL, Qlik Sense, Python/Spark, ETL optimization. If you are interested, please share the below details and an updated resume: First Name, Last Name, Date of Birth, Passport No. and Expiry Date, Alternate Contact Number, Total Experience, Relevant Experience, Current CTC, Expected CTC, Current Location, Preferred Location, Current Organization, Payroll Company, Notice Period, Holding any offer.

Posted 2 hours ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Bachelor's degree (Computer Science), Master's degree, Technical Diploma, or equivalent. At least 8 years of experience in a similar role. At least 5 years of experience on AWS and/or Azure. At least 5 years of experience on Databricks. At least 5 years of experience on multiple Azure and AWS PaaS solutions: Azure Data Factory, MSSQL, Azure Storage, AWS S3, Cognitive Search, Cosmos DB, Event Hub, AWS Glue. Strong knowledge of AWS and Azure architecture design best practices. Knowledge of ITIL & Agile methodologies (certifications are a plus). Experience working with DevOps tools such as Git, CI/CD pipelines, Ansible, Azure DevOps. Knowledge of Airflow and Kubernetes is an added advantage. Solid understanding of networking, security, and Linux. Business-fluent English is required. Curious to continuously learn and explore new approaches/technologies. Able to work under pressure in a multi-vendor and multi-cultural team. Flexible, agile, and adaptive to change. Customer-focused approach. Good communication skills. Analytical mindset. Innovation.

Posted 2 hours ago

Apply

12.0 - 16.0 years

14 - 20 Lacs

Pune

Work from Office

AI/ML/GenAI AWS SME Job Description Role Overview: An AWS SME with a Data Science Background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role involves a combination of cloud computing expertise and data science skills to optimize and innovate business processes. Key Responsibilities: Data Analysis and Modelling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services like SageMaker, Bedrock, Textract etc. Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows. Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services. Security and Compliance: Ensuring data security and compliance with industry standards and best practices. Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps and business stakeholders, to deliver data-driven solutions. Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure. Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders. Skills & Qualifications Technical Skills: Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker). Strong programming skills in Python. Experience with AI/ML project life cycle steps. Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn). Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow). Excellent communication and collaboration abilities.

Posted 3 hours ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Hybrid

PF detection is mandatory. Managing data storage solutions on AWS, such as Amazon S3, Amazon Redshift, and Amazon DynamoDB. Implementing and optimizing data processing workflows using AWS services like AWS Glue, Amazon EMR, and AWS Lambda. Working with Spotfire engineers and business analysts to ensure data is accessible and usable for analysis and visualization. Collaborating with other engineers and business stakeholders to understand requirements and deliver solutions. Writing code in languages like SQL, Python, or Scala to build and maintain data pipelines and applications. Using Infrastructure as Code (IaC) tools to automate the deployment and management of data infrastructure. A strong understanding of core AWS services, cloud concepts, and the AWS Well-Architected Framework. Conducting an extensive inventory/evaluation of existing environments and workflows. Designing and developing scalable data pipelines using AWS services to ensure efficient data flow and processing. Integrating/combining diverse data sources to maintain data consistency and reliability. Working closely with data engineers and other stakeholders to understand data requirements and ensure seamless data integration. Building and maintaining CI/CD pipelines. Kindly acknowledge this mail with an updated resume.

Posted 3 hours ago

Apply

6.0 - 11.0 years

8 - 15 Lacs

Pune

Hybrid

- Experience in developing applications using Python and AWS services: Glue (ETL), Lambda, Step Functions, EKS, S3, EMR, RDS data stores, CloudFront, API Gateway - Experience in AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, Lambda Required Candidate profile - 7+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications - Research and evaluate new technologies

Posted 4 hours ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Mumbai

Hybrid

PF detection is mandatory. 1. Minimum 5 years of experience in database development and ETL tools. 2. Strong expertise in SQL and database platforms (e.g., SQL Server, Oracle, PostgreSQL). 3. Proficiency in ETL tools (e.g., Informatica, SSIS, Talend, DataStage) and scripting languages (e.g., Python, Shell). 4. Experience with data modeling and schema design. 5. Familiarity with cloud databases and ETL tools (e.g., AWS Glue, Azure Data Factory, Snowflake). 6. Understanding of data warehousing concepts and best practices.

Posted 5 hours ago

Apply

3.0 - 7.0 years

2 - 5 Lacs

Hyderabad

Work from Office

i. PySpark, Spark SQL, SQL, and Glue. ii. AWS cloud experience. iii. Good understanding of dimensional modelling. iv. Good understanding of DevOps, CloudOps, DataOps, CI/CD, with an SRE mindset. v. Understanding of Lakehouse and DW architecture. vi. Strong analysis and analytical skills. vii. Understanding of version control systems, specifically Git. viii. Strong in software engineering: APIs, microservices, etc. Soft skills: i. Written and oral communication skills. ii. Ability to translate business needs to systems.

Posted 5 hours ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Chennai

Hybrid

Data Engineer, AWS. We're looking for a skilled Data Engineer with 5+ years of experience to join our team. You'll play a crucial role in building and optimizing our data infrastructure on AWS, transforming raw data into actionable insights that drive our business forward. What You'll Do: Design & Build Data Pipelines: Develop robust, scalable, and efficient ETL/ELT pipelines using AWS services and Informatica Cloud to move and transform data from diverse sources into our data lake (S3) and data warehouse (Redshift). Optimize Data Models & Architecture: Create and maintain performant data models and contribute to the overall data architecture to support both analytical and operational needs. Ensure Data Quality & Availability: Monitor and manage data flows, ensuring data accuracy, security, and consistent availability for all stakeholders. Collaborate for Impact: Work closely with data scientists, analysts, and business teams to understand requirements and deliver data solutions that drive business value. What You'll Bring: 5+ years of experience as a Data Engineer, with a strong focus on AWS. Proficiency in SQL for complex data manipulation and querying. Hands-on experience with core AWS data services: Storage: Amazon S3 (data lakes, partitioning). Data Warehousing: Amazon Redshift. ETL/ELT: AWS Glue (Data Catalog, crawlers). Serverless & Orchestration: AWS Lambda, AWS Step Functions. Security: AWS IAM. Reporting: Looker and Power BI. Experience with big data technologies like PySpark. Experience with Informatica Cloud or similar ETL tools. Strong problem-solving skills and the ability to optimize data processes. Excellent communication skills and a collaborative approach. Added Advantage: Experience with Python for data manipulation and scripting. Looker or Power BI experience is desirable.
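As a hedged illustration of how the Glue Data Catalog, crawlers, and orchestration pieces named above fit together (the crawler name, job name, and region below are hypothetical, not from the listing), boto3 can refresh the catalog and then launch the ETL job:

```python
# Sketch: refresh the Data Catalog with a crawler, then run the Glue job.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Re-crawl S3 so newly landed partitions appear in the catalog.
glue.start_crawler(Name="orders-crawler")
while glue.get_crawler(Name="orders-crawler")["Crawler"]["State"] != "READY":
    time.sleep(30)

# Kick off the ETL job and poll until it reaches a terminal state.
run_id = glue.start_job_run(JobName="orders-etl")["JobRunId"]
while True:
    state = glue.get_job_run(JobName="orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(60)
print("Glue job finished with state:", state)
```

In practice this sequencing is often delegated to Step Functions or Glue workflow triggers, which the listing also names; the boto3 loop just makes the dependency explicit.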

Posted 5 hours ago

Apply

8.0 - 13.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Position: Python Developer + AWS; Location: Hyderabad; Job type: contract-to-hire on the payroll of Randstad Digital; Experience: 8+ years; Face-to-face interview: 2nd July '25; Number of positions: 5; Need only immediate joiners; 4 days WFO. Primary skills (mandatory top 3): AWS working experience, AWS Glue or equivalent product experience, Lambda functions, Python programming, Kubernetes knowledge. Secondary skills (good to have): data quality, data governance knowledge, migration experience, CI/CD (Jules) working knowledge.

Posted 18 hours ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Indore, Pune, Bengaluru

Hybrid

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues. Required candidate profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow. Good understanding of the investment domain. Experience with dbt, cloud experience (AWS, Azure), DevOps.

Posted 18 hours ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Gurugram

Hybrid

Role & responsibilities. Skills: Data Engineer, PySpark, Athena, AWS Glue. Notice period: immediate joiners. Location: Gurgaon. Experience: 5 to 12 years. Responsibilities: Designing and implementing cloud-based solutions, with a focus on AWS, and operationalizing development in production. Building and managing infrastructures on AWS using infrastructure-as-code tools like Terraform. Developing efficient and clean automation scripts using languages such as Python. Designing and building the reporting layer for various data sources. Leading key data architecture decisions throughout the development lifecycle. Developing data pipelines and ETL processes, utilizing tools such as AWS Lambda, Redshift, and Glue. Collaborating with cross-functional teams to support the productionalization of ML/AI models. Identifying, designing, and implementing internal process improvements, with a focus on automating manual tasks.

Posted 22 hours ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do: 1. Be instrumental in understanding the requirements and design of the product/software. Develop software solutions by studying information needs, systems flow, data usage, and work processes. Investigate problem areas following the software development life cycle. Facilitate root cause analysis of system issues and problem statements. Identify ideas to improve system performance and impact availability. Analyze client requirements and convert requirements into feasible designs. Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements. Confer with project managers to obtain information on software capabilities. 2. Perform coding and ensure optimal software/module development. Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software. Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them. Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. Analyze information to recommend and plan the installation of new systems or modifications of an existing system. Ensure that code is error-free, with no bugs or test failures. Prepare reports on programming project specifications, activities, and status. Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns. Compile timely, comprehensive, and accurate documentation and reports as requested. Coordinate with the team on daily project status and progress, and document it. Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders. 3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution. Capture all requirements and clarifications from the client for better-quality work. Take feedback on a regular basis to ensure smooth and on-time delivery. Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members. Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements. Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code. Document necessary details and reports in a formal way for proper understanding of the software from client proposal to implementation. Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc. Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliver (performance parameter: measure): 1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan. 2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation. 3. MIS & reporting: 100% on-time MIS & report generation.

Mandatory Skills: AWS Glue.
Experience: 3-5 years.

Posted 1 day ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Kolkata, Pune

Work from Office

Data Engineer. Mandatory skills: Python, PySpark & AWS Glue. Location: Pune & Kolkata. Share CV at Muktai.S@alphacom.in

Posted 1 day ago

Apply

4.0 - 8.0 years

18 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Python Data Engineer: 4+ years of experience in backend development with Python. Strong experience with AWS services and cloud architecture. Proficiency in developing RESTful APIs and microservices. Experience with database technologies such as SQL, PostgreSQL, and NoSQL databases. Familiarity with containerization and orchestration tools like Docker and Kubernetes. Knowledge of CI/CD pipelines and tools such as Jenkins, GitLab CI, or AWS CodePipeline. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Posted 1 day ago

Apply

3.0 - 8.0 years

5 - 11 Lacs

Pune, Mumbai (All Areas)

Hybrid

Overview: TresVista is looking to hire an Associate in its Data Intelligence Group, who will be primarily responsible for managing clients as well as monitoring/executing projects both for clients and for internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers & Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists. Roles and Responsibilities: Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs. Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements. Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information/tasks to the various team members. Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams. Creating ad-hoc reports for multiple internal requests across departments. Automating processes using data transformation tools. Prerequisites: Strong analytical, problem-solving, interpersonal, and communication skills. Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL. Working experience in cloud technologies (GCP/AWS/Azure/Snowflake). Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc. Proficiency in Python for building ETL/ELT processes and data modeling. Proficiency in reporting and dashboard creation using Power BI/Tableau. Knowledge of building ML models and leveraging Gen AI for modern architectures. Experience working with version control platforms like GitHub. Familiarity with IaC tools like Terraform and Ansible is good to have. Stakeholder management and client communication experience would be preferred. Experience in the Financial Services domain will be an added plus. Experience in Machine Learning tools and techniques will be good to have. Experience: 3-7 years. Education: BTech/MTech/BE/ME/MBA in Analytics. Compensation: The compensation structure will be as per industry standards.

Posted 2 days ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Role: Senior Data Engineer. Location: Bangalore (Hybrid). Experience: 10+ years. Job Requirements: ETL & Data Pipelines: Experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, CloudWatch. Programming & Data Processing: Strong Python development experience with proficiency in Spark or PySpark; experience in using APIs. Database Management: Strong skills in writing SQL queries and performance tuning in AWS Redshift; proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL. AWS Services: Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models. Interested candidates can share their resume at Neesha1@damcogroup.com

Posted 2 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Python (Programming Language). Good-to-have skills: AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge. Continuously evaluate and improve application performance and user experience. Professional & Technical Skills: Must-have skills: Proficiency in Python (Programming Language), AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue. Good-to-have skills: Experience with AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue. Strong understanding of software development life cycle methodologies. Experience with version control systems such as Git. Familiarity with RESTful APIs and web services. Additional Information: The candidate should have a minimum of 5 years of experience in Python (Programming Language). This position is based at our Bengaluru office. A 15-year full-time education is required. Qualification: 15 years full-time education.

Posted 2 days ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office

We are looking for energetic, self-motivated, and exceptional Data Engineers to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration. Skills and Qualifications: 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack. 2+ years of experience in building data pipelines with Databricks/PySpark/SQL. Experience in writing and interpreting SQL queries, designing data models and data standards. Experience in SQL Server databases, Oracle, and/or cloud databases. Experience in data warehousing and data marts, Star and Snowflake models. Experience in loading data into databases from databases and files. Experience in analyzing and drawing design conclusions from data profiling results. Understanding of business processes and the relationship of systems and applications. Must be comfortable conversing with end-users. Must have the ability to manage multiple projects/clients simultaneously. Excellent analytical, verbal, and communication skills. Role and Responsibilities: Work with business stakeholders and build data solutions to address analytical & reporting requirements. Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations meeting data requirements. Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow, as sketched below. Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation. Develop/optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases. Conduct root cause analysis and resolve production problems and data issues. Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings. Provide support for production problems and daily batch processing. Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.
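As a rough sketch of the Airflow-plus-Glue orchestration described above (assuming Airflow 2 with boto3 credentials available on the worker; the DAG id, job name, and region are hypothetical):

```python
# Sketch: a daily Airflow DAG that triggers a Glue job via boto3,
# so Airflow owns scheduling while Glue runs the Spark work.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job():
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="delta-ingest-etl")  # hypothetical job name
    print("Started Glue run:", run["JobRunId"])


with DAG(
    dag_id="nightly_glue_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
):
    PythonOperator(task_id="start_glue_job", python_callable=start_glue_job)
```

The Amazon provider package also ships a dedicated Glue operator that can replace the PythonOperator; the boto3 version is shown here only because it makes the underlying API call explicit.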

Posted 2 days ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office

We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. He/she will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration. Skills and Qualifications: 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack. 2+ years of experience in building data pipelines with Databricks/PySpark/SQL. Experience in writing and interpreting SQL queries, designing data models and data standards. Experience in SQL Server databases, Oracle, and/or cloud databases. Experience in data warehousing and data marts, Star and Snowflake models. Experience in loading data into databases from databases and files. Experience in analyzing and drawing design conclusions from data profiling results. Understanding of business processes and the relationship of systems and applications. Must be comfortable conversing with end-users. Must have the ability to manage multiple projects/clients simultaneously. Excellent analytical, verbal, and communication skills. Role and Responsibilities: Work with business stakeholders and build data solutions to address analytical & reporting requirements. Work with application developers and business analysts to implement and optimize Databricks/AWS-based implementations meeting data requirements. Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow. Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation. Develop/optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases. Conduct root cause analysis and resolve production problems and data issues. Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings. Provide support for production problems and daily batch processing. Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance. Immediate joiners.

Posted 2 days ago

Apply

7.0 - 9.0 years

15 - 30 Lacs

Thiruvananthapuram

Work from Office

Job Title: Senior Data Associate - Cloud Data Engineering. Experience: 7+ years. Employment Type: Full-Time. Industry: Information Technology / Data Engineering / Cloud Platforms. Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting. Key Responsibilities: Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory. Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions. Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets. Implement data quality checks, monitoring, and governance practices. Optimize data workflows for performance, scalability, and cost-efficiency. Support data migration and modernization initiatives across cloud platforms. Document data flows, architecture, and technical specifications. Required Skills & Qualifications: 7+ years of experience in data engineering, data integration, or related roles. Strong hands-on experience with: AWS Redshift, Athena, Glue, S3; Azure Data Lake, Synapse Analytics, Databricks; PySpark for distributed data processing; MongoDB and Oracle databases. Proficiency in SQL, Python, and data modeling. Experience with ETL/ELT design and implementation. Familiarity with data governance, security, and compliance standards. Strong problem-solving and communication skills. Preferred Qualifications: Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate). Experience with CI/CD pipelines and DevOps for data workflows. Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview). Exposure to real-time data processing and streaming technologies. Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake.
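The responsibilities above include implementing data quality checks. As a minimal sketch of what that can look like in PySpark (the bucket, table path, and column names are hypothetical, not from the listing), simple assertions on counts already catch empty loads, null keys, and duplicates:

```python
# Sketch: basic data-quality gates on a curated dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/curated/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = total - df.dropDuplicates(["order_id"]).count()

assert total > 0, "dataset is empty"
assert null_keys == 0, f"{null_keys} rows have a null order_id"
assert dupe_keys == 0, f"{dupe_keys} duplicate order_id values found"
```

In a production pipeline these checks would typically run as a step after each load, failing the job (and alerting) rather than silently publishing bad data.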

Posted 2 days ago

Apply

12.0 - 15.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Senior Data Architect - Big Data & Cloud Solutions. Experience: 10+ years. Industry: Information Technology / Data Engineering / Cloud Computing. Job Summary: We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of scalable, high-performance data solutions. The ideal candidate will have deep expertise in Apache Kafka, Apache Spark, AWS Glue, PySpark, and cloud-native architectures, with a strong background in solution architecture and enterprise data strategy. Key Responsibilities: Design and implement end-to-end data architecture solutions on AWS using Glue, S3, Redshift, and other services. Architect and optimize real-time data pipelines using Apache Kafka and Spark Streaming. Lead the development of ETL/ELT workflows using PySpark and AWS Glue. Collaborate with stakeholders to define data strategies, governance, and best practices. Ensure data quality, security, and compliance across all data platforms. Provide technical leadership and mentorship to data engineers and developers. Evaluate and recommend new tools and technologies to improve data infrastructure. Translate business requirements into scalable and maintainable data solutions. Required Skills & Qualifications: 10+ years of experience in data engineering, architecture, or related roles. Strong hands-on experience with: Apache Kafka (event streaming, topic design, schema registry); Apache Spark (batch and streaming); AWS Glue, S3, Redshift, Lambda, CloudFormation/Terraform; PySpark for large-scale data processing. Proven experience in solution architecture and designing cloud-native data platforms. Deep understanding of data modeling, data lakes, and data warehousing concepts. Strong programming skills in Python and SQL. Experience with CI/CD pipelines and DevOps practices for data workflows. Excellent communication and stakeholder management skills. Preferred Qualifications: AWS Certified Solutions Architect or Big Data Specialty certification. Experience with data governance tools and frameworks. Familiarity with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Step Functions). Exposure to machine learning pipelines and MLOps is a plus. Required Skills: Apache Kafka, PySpark, AWS Cloud.
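For the real-time side of this role, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands it on S3. The brokers, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath:

```python
# Sketch: Kafka -> Spark Structured Streaming -> S3 (Parquet).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers bytes; cast the message value to a string payload.
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streaming/orders/")
    # The checkpoint is what makes the stream restartable exactly-once.
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```

A production design would usually add schema parsing (e.g., from_json against a registered schema) and partitioned output, but the checkpointed read-transform-write skeleton is the core pattern.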

Posted 2 days ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Azure Infrastructure Consultant - Cloud & Data Integration. Experience: 8+ years. Employment Type: Full-Time. Industry: Information Technology / Cloud Infrastructure / Data Engineering. Job Summary: We are looking for a seasoned Azure Infrastructure Consultant with a strong foundation in cloud infrastructure, data integration, and real-time data processing. The ideal candidate will have hands-on experience across Azure and AWS platforms, with deep knowledge of Apache NiFi, Kafka, AWS Glue, and PySpark. This role involves designing and implementing secure, scalable, and high-performance cloud infrastructure and data pipelines. Key Responsibilities: Design and implement Azure-based infrastructure solutions, ensuring scalability, security, and performance. Lead hybrid cloud integration projects involving Azure and AWS services. Develop and manage ETL/ELT pipelines using AWS Glue, Apache NiFi, and PySpark. Architect and support real-time data streaming solutions using Apache Kafka. Collaborate with cross-functional teams to gather requirements and deliver infrastructure and data solutions. Implement infrastructure automation using tools like Terraform, ARM templates, or Bicep. Monitor and optimize cloud infrastructure and data workflows for cost and performance. Ensure compliance with security and governance standards across cloud environments. Required Skills & Qualifications: 8+ years of experience in IT infrastructure and cloud consulting. Strong hands-on experience with: Azure IaaS/PaaS (VMs, VNets, Azure AD, App Services, etc.); AWS services including Glue, S3, Lambda; Apache NiFi for data ingestion and flow management; Apache Kafka for real-time data streaming; PySpark for distributed data processing. Proficiency in scripting (PowerShell, Python) and Infrastructure as Code (IaC). Solid understanding of networking, security, and identity management in cloud environments. Strong communication and client-facing skills. Preferred Qualifications: Azure or AWS certifications (e.g., Azure Solutions Architect, AWS Data Analytics Specialty). Experience with CI/CD pipelines and DevOps practices. Familiarity with containerization (Docker, Kubernetes) and orchestration. Exposure to data governance tools and frameworks. Required Skills: Azure, Microsoft Azure, Azure PaaS, AWS Glue.

Posted 2 days ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

The resource should have a strong background in working with cloud platforms, APIs, and data processing. Experience with tools like AWS Glue, Athena, and Databricks will be highly beneficial. AWS Glue Jobs: The QE should be familiar with AWS Glue jobs for ETL processes. We expect them to validate the successful execution of Glue jobs, ensuring that transformations and data ingestion tasks work smoothly without errors. Athena Querying: Experience with querying data using AWS Athena is a must, as the QE will be required to validate queries across multiple datasets. We expect the resource to run and validate Athena queries for data accuracy and integrity. Databricks Testing: The candidate should also have experience with Databricks, particularly in validating data pipelines and transformations within the Databricks environment. The QE will need to test Databricks notebooks or jobs, ensuring data accuracy in the Bronze, Silver, and Gold layers. Boomi integrations.
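As a hedged sketch of the Athena validation this listing describes (the database, table, region, and output bucket are hypothetical), a QE could run a row-count query with boto3 and assert on the result:

```python
# Sketch: run an Athena query and validate its outcome.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT COUNT(*) AS n FROM curated_db.orders",
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

assert state == "SUCCEEDED", f"Athena query ended in state {state}"
rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
# Rows[0] is the header row; Rows[1] holds the count value.
count = int(rows[1]["Data"][0]["VarCharValue"])
assert count > 0, "expected at least one row in curated_db.orders"
```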

Posted 2 days ago

Apply

Exploring AWS Glue Jobs in India

AWS Glue is a popular ETL (Extract, Transform, Load) service offered by Amazon Web Services. As businesses in India increasingly adopt cloud technologies, the demand for AWS Glue professionals is on the rise. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries in India.
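To ground what these roles involve, here is a minimal sketch of an AWS Glue ETL job written in PySpark. It assumes Glue's standard job environment; the catalog database, table, and S3 bucket names are hypothetical placeholders:

```python
# Sketch: a minimal Glue job - read from the Data Catalog,
# filter bad rows, write Parquet back to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",       # hypothetical database
    table_name="raw_orders",   # hypothetical table
)

# Transform: drop rows missing the primary key.
cleaned = source.filter(f=lambda row: row["order_id"] is not None)

# Load: write the result to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```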

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for AWS Glue roles: - Bangalore - Mumbai - Delhi - Hyderabad - Pune

Average Salary Range

The salary range for AWS Glue professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries in the range of INR 12-18 lakhs per annum.

Career Path

A typical career path in AWS Glue may look like: - Junior AWS Glue Developer - AWS Glue Developer - Senior AWS Glue Developer - AWS Glue Tech Lead

Related Skills

In addition to AWS Glue expertise, professionals in this field are often expected to have knowledge of: - AWS services like S3, Lambda, and Redshift - Programming languages like Python or Scala - ETL concepts and best practices

Interview Questions

  • What is AWS Glue and how does it differ from traditional ETL tools? (basic)
  • How do you handle schema evolution in AWS Glue? (medium)
  • Explain the difference between AWS Glue Data Catalog and Glue ETL. (medium)
  • Can you explain how AWS Glue handles job bookmarking? (medium; see the sketch after this list)
  • How do you troubleshoot job failures in AWS Glue? (medium)
  • What are the different types of triggers supported by AWS Glue? (medium)
  • How do you optimize AWS Glue job performance? (advanced)
  • Explain how to set up security configurations in AWS Glue. (advanced)
  • What are the limitations of AWS Glue? (advanced)
  • How do you handle nested data in AWS Glue transformations? (advanced)
  • Explain the difference between dynamic frames and data frames in AWS Glue. (advanced)
  • How does AWS Glue handle data type conversions? (medium)
  • Can you explain the concept of partitions in AWS Glue tables? (basic)
  • What are the benefits of using AWS Glue over traditional ETL tools? (basic)
  • How do you schedule AWS Glue jobs? (basic)
  • Explain the concept of crawlers in AWS Glue. (medium)
  • What are the different types of AWS Glue jobs? (basic)
  • How do you handle incremental data loading in AWS Glue? (medium)
  • What are the key components of an AWS Glue job? (basic)
  • How do you monitor and audit AWS Glue job executions? (medium)
  • What is the role of AWS Glue in a data lake architecture? (advanced)
  • Explain the concept of a connection in AWS Glue. (basic)
  • How does AWS Glue handle data deduplication? (medium)
  • Can you explain how to orchestrate AWS Glue jobs with other AWS services? (advanced)
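Several of the questions above (job bookmarking, dynamic frames vs. data frames, partitioned writes) come together in one short sketch. This is a hedged example rather than official AWS sample code; the catalog and S3 names are hypothetical:

```python
# Sketch: bookmarks via transformation_ctx, DynamicFrame <-> DataFrame
# conversion, and Hive-style partitioned output.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # bookmark state is tracked per job

# transformation_ctx lets the bookmark skip already-processed input
# when bookmarks are enabled on the job.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
    transformation_ctx="read_raw_orders",
)

# Drop to a Spark DataFrame for SQL-style transforms, then convert back.
df = dyf.toDF().dropDuplicates(["order_id"])
deduped = DynamicFrame.fromDF(df, glue_context, "deduped")

glue_context.write_dynamic_frame.from_options(
    frame=deduped,
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/curated/orders/",
        "partitionKeys": ["order_date"],  # written as order_date=... folders
    },
    format="parquet",
    transformation_ctx="write_curated_orders",
)
job.commit()  # advances the bookmark to the newly processed data
```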

Closing Remark

As you prepare for AWS Glue job interviews in India, make sure to brush up on your technical skills and showcase your expertise in ETL and AWS services. With the right preparation and confidence, you can land a rewarding career in this growing field. Good luck!
