
BIG DATA TECHNICAL LEAD ENGINEER

Experience: 6 - 11 years

Salary: 12 - 16 Lacs

Posted: 1 month ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

We are seeking a highly skilled Big Data Engineer with a strong background in designing and implementing ETL data pipelines using Apache Spark and either Scala or Python. The ideal candidate has a deep understanding of big data frameworks, advanced SQL skills, and hands-on experience with Elasticsearch. You will build scalable, efficient data solutions and ensure data quality, reliability, and performance in large-scale environments.

KEY RESPONSIBILITIES:

  • ETL pipelines: Design, develop, optimize, and maintain ETL processes for the ingestion, transformation, and extraction of large datasets using Spark (Scala/Python).
  • Data modeling and processing: Build scalable data processing applications that handle high-volume, complex data from varied sources.
  • Spark application development: Write optimized, scalable Spark applications in Scala or Python to process and analyze large datasets.
  • Elasticsearch integration: Use Elasticsearch for indexing, searching, and querying big data; keep the indexed data well-structured and optimized for performance.
  • SQL: Develop complex SQL queries and scripts for data analysis and transformation, and ensure data integrity across the pipeline.
  • Data architecture: Collaborate with architects to design data solutions that are resilient, efficient, and easy to scale.
  • Performance tuning: Optimize Spark jobs and queries, including tuning parameters, managing resources, and troubleshooting data-quality and performance issues.
  • Collaboration: Work with cross-functional teams (data scientists, analysts, and software engineers) to deliver end-to-end data solutions.
  • Monitoring and maintenance: Set up monitoring tools and frameworks to ensure the stability and reliability of data pipelines in production environments.

REQUIRED SKILLS AND QUALIFICATIONS:

  • 6+ years of experience in big data engineering or a related field.
  • Strong expertise in Apache Spark for data processing, with hands-on experience developing Spark applications in Scala or Python.
  • Experience in ETL pipeline development, including ingestion, transformation, and processing of large-scale datasets.
  • Proficiency with Elasticsearch: querying and indexing large datasets, plus an understanding of its architecture and performance optimization.
  • Advanced SQL skills: the ability to write complex queries for data extraction, aggregation, and analysis.
  • Knowledge of distributed data processing and of managing large datasets in cloud or on-premises environments.
  • Familiarity with the Hadoop ecosystem, HDFS, and other big data technologies.
  • Experience with version control (e.g., Git) and CI/CD pipelines.
  • Strong problem-solving skills, attention to detail, and a performance-tuning and optimization mindset.

Swathi Business Solutions

IT Services and IT Consulting

Chennai

11-50 Employees

21 Jobs

Key People

  • Aditi Sharma, CEO
  • Rahul Kumar, Chief Operations Officer
