Data Engineer (PySpark)

5 - 9 years

7 - 12 Lacs

Posted: 2 weeks ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Responsibilities:

  • Be part of the DevOps Platform team that owns the data reservoir platform's code, tools, and processes.
  • Participate in the design and architecture of big data solutions.
  • Design, develop, and maintain data patterns.
  • Design, develop, and maintain automation testing frameworks.
  • Apply expertise in Git and CI/CD.
  • Optimize and tune Spark code to ensure high performance and scalability.
  • Automate and continuously improve processes, reacting quickly to change.
  • Write clear and concise documentation for Python/Spark-developed code.

Requirements:

  • Experience working in a DevOps setup.
  • Experience working with Git repositories.
  • Experience with Spark SQL, Spark Streaming.
  • Experience with batch and streaming data processing using Spark.
  • Experience in building RESTful APIs is a plus.
  • Experience using databases such as DB2, Oracle, Hadoop, Hive, and Postgres.
  • High levels of ownership and accountability for undertaken tasks.
  • Strong problem-solving and analytical skills.
  • Excellent written and verbal communication skills.
  • Ability to work independently as well as part of a team.
  • Strong attention to detail and accuracy.
  • Strong knowledge of Agile methodologies.
  • Exposure to Cloudera and the Azure platform (Microsoft Fabric/Databricks/Data Factory/Synapse) is an advantage.


Experience: 5+ years

Qualification: BTech/BE

Colruyt IT Consultancy India

Retail

Halle, Flemish Region
