Data Engineer

4 - 5 years

5 - 9 Lacs

Posted: 2 days ago | Platform: GlassDoor


Work Mode: On-site

Job Type: Part Time

Job Description

Job Information:

Work Experience: 4-5 years
Industry: IT Services
Job Type: Full Time
Location: Noida, India

Job Overview:

We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.

Key Responsibilities:

  • Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
  • Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
  • Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
  • Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
  • Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
  • Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
  • Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
  • Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.
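The pipelines described above follow a standard extract-transform-load pattern: pull records from source systems, normalize them, and load a unified model that reporting tools can query. As a minimal, hedged sketch of that pattern (using Python's standard library in place of AWS Glue/PySpark, with made-up table and field names):

```python
import sqlite3

# Hypothetical raw records, standing in for data extracted from APIs or S3 files.
raw_orders = [
    {"order_id": 1, "customer": "acme", "amount": "120.50"},
    {"order_id": 2, "customer": "globex", "amount": "75.00"},
    {"order_id": 3, "customer": "acme", "amount": "30.25"},
]

def etl(records, conn):
    """Extract -> transform (cast and clean) -> load into a unified table."""
    # Transform: cast amounts to numbers and normalize customer names.
    rows = [(r["order_id"], r["customer"].upper(), float(r["amount"]))
            for r in records]
    # Load: create the analytics-facing table and insert the cleaned rows.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER, customer TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
etl(raw_orders, conn)

# A reporting-style query, e.g. total spend per customer for a dashboard.
totals = dict(conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
))
print(totals)
```

In the actual role, the extract and load steps would be Glue jobs reading from cataloged sources and writing to S3/Redshift, and the reporting query would back a QuickSight dataset; the shape of the work is the same.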

Required Skills & Qualifications:

  • Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
  • Strong experience with PySpark for large-scale data processing and transformation.
  • Expertise in SQL and data modeling for relational and non-relational databases.
  • Experience building and optimizing ETL pipelines and data integration workflows.
  • Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong programming skills in Python; experience with automation and scripting.
  • Ability to work collaboratively in agile environments and manage multiple priorities effectively.
  • Excellent problem-solving and communication skills.

Preferred Qualifications:

  • AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).

Good to Have Skills:

Understanding of machine learning, deep learning, and generative AI concepts, including regression, classification, predictive modeling, and clustering.

Interview Process

  • Internal Assessment
  • 3 Technical Rounds

