Senior Data Engineer (C2H) | Databricks, AWS | Bangalore | Hyderabad

Experience: 8 - 13 years

Salary: 18 - 22 Lacs

Posted: 3 days ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

To apply, you must submit your details via this Google Form: https://forms.gle/cCa1WfCcidgiSTgh8

Position: Senior Data Engineer (8+ years of total experience required)

Required skills: 6+ years of relevant experience in Databricks, AWS, Apache Spark & Informatica

As a Senior Data Engineer on our team, you'll build and nurture positive working relationships with teams and clients, with the aim of exceeding client expectations.

We are seeking an experienced Data Engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.

Role & responsibilities

  • Technical Leadership: Guide and mentor teams in designing and implementing Databricks solutions.
  • Architecture & Design: Develop scalable data pipelines and architectures using Databricks Lakehouse.
  • Data Engineering: Lead the ingestion and transformation of batch and streaming data (a minimal pipeline sketch follows this list).
  • Performance Optimization: Ensure efficient resource utilization and troubleshoot performance bottlenecks.
  • Security & Compliance: Implement best practices for data governance, access control, and compliance.
  • Collaboration: Work closely with data engineers, analysts, and business stakeholders.
  • Cloud Integration: Manage Databricks environments on Azure, AWS, or GCP.
  • Monitoring & Automation: Set up monitoring tools and automate workflows for efficiency.
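
For context, here is a minimal sketch of the kind of batch ingestion pipeline this role involves, written in PySpark for a Databricks Lakehouse. It is illustrative only: the storage path, column names, and table name are hypothetical placeholders, not details from this posting.

    # Minimal Lakehouse ingestion sketch. Assumptions: a Databricks cluster
    # with Delta Lake available; the S3 path, columns, and table name are
    # hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

    # Read a raw batch landed in cloud storage (placeholder path).
    raw = (spark.read
           .option("header", "true")
           .csv("s3://example-landing-bucket/orders/"))

    # Light cleanup: cast types and stamp the load time.
    clean = (raw
             .withColumn("order_ts", F.to_timestamp("order_ts"))
             .withColumn("amount", F.col("amount").cast("double"))
             .withColumn("_ingested_at", F.current_timestamp()))

    # Append to a Delta table that downstream analysts can query.
    clean.write.format("delta").mode("append").saveAsTable("bronze.orders")

The same pattern extends to the streaming side of the role: reading with spark.readStream and writing with writeStream against the same Delta table, since Delta supports both batch and streaming sinks.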

Qualifications:

Excellent problem-solving and leadership skills.

Good-to-have skills:

1. Design and implement scalable, high-performance data pipelines using AWS services

2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda (see the Glue sketch after this list)

3. Build and maintain data lakes using S3 and Delta Lake

4. Create and manage analytics solutions using Amazon Athena and Redshift

5. Design and implement database solutions using Aurora, RDS, and DynamoDB

6. Develop serverless workflows using AWS Step Functions

7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL

8. Ensure data quality, security, and compliance with industry standards

9. Collaborate with data scientists and analysts to support their data needs

10. Optimize data architecture for performance and cost-efficiency

11. Troubleshoot and resolve data pipeline and infrastructure issues
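
As a concrete, purely illustrative example of item 2, here is a skeletal AWS Glue PySpark job that reads a table from the Glue Data Catalog, deduplicates it, and writes partitioned Parquet back to S3. The catalog database, table, column, and bucket names are assumptions for the sketch.

    # Skeletal AWS Glue ETL job (PySpark). The catalog database/table,
    # columns, and output bucket are hypothetical placeholders.
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    sc = SparkContext.getOrCreate()
    glue_context = GlueContext(sc)
    job = Job(glue_context)
    job.init("orders-etl")

    # Read the raw table registered in the Glue Data Catalog.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="orders")

    # Deduplicate on the business key using plain Spark DataFrame ops.
    df = dyf.toDF().dropDuplicates(["order_id"])

    # Write curated, partitioned Parquet back to S3.
    (df.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))

    job.commit()

In practice such a job would take its names from job parameters (for example via awsglue.utils.getResolvedOptions) rather than hard-coding them; they are inlined here only to keep the sketch self-contained.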

Preferred candidate profile

1. Bachelor's degree in Computer Science, Information Technology, or a related field

2. Relevant experience as a Data Engineer, with at least 60% of that experience focused on AWS

3. Hands-on experience with AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3

4. Experience with data lake technologies, particularly Delta Lake

5. Experience with database services: Aurora, RDS, DynamoDB, PostgreSQL

6. Proficiency in Python and PySpark

7. Proficiency in PostgreSQL

8. Experience with AWS Step Functions for workflow orchestration
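
To illustrate item 8, here is a small boto3 snippet that starts one execution of a hypothetical Step Functions state machine orchestrating a nightly ETL run; the ARN, execution name, and input payload are placeholders.

    # Start one execution of a hypothetical nightly-ETL state machine.
    # The ARN and payload below are placeholders, not from this posting.
    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    response = sfn.start_execution(
        stateMachineArn=("arn:aws:states:ap-south-1:123456789012:"
                         "stateMachine:nightly-etl"),
        name="nightly-etl-2024-01-01",  # must be unique per state machine
        input=json.dumps({"run_date": "2024-01-01"}),
    )
    print(response["executionArn"])

Execution names must be unique within a state machine for 90 days, which is why a date-stamped name is used here.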

Technical Skills (good to have):

- Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB

- Hadoop, Spark, Delta Lake

- Python, PySpark

- SQL, PostgreSQL, NoSQL

- Data warehousing and analytics

- ETL/ELT processes

- Data lake architectures

- Version control: Git

- Agile methodologies
