
Experience: 2 years

Salary: 0 Lacs

Posted: 2 days ago | Platform: LinkedIn

Work Mode: Remote

Job Type: Full Time

Job Description

Location:

Shift Timing:

Experience:

Company:



About the Role

Data Engineer


Key Responsibilities:

  • Develop, maintain, and optimize data pipelines in Databricks (Spark, Notebooks, Delta Lake)
  • Work with large datasets stored on AWS S3
  • Collaborate using Jira, Bitbucket, and Checklist Buddy
  • Support our Data Engineering architecture: Ingestion ➝ Raw ➝ Processed ➝ Reporting (Power BI, Tableau – currently inactive)
  • Use AWS CLI, WSL, and Python for automation and data tasks
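The layered flow above (Ingestion ➝ Raw ➝ Processed ➝ Reporting) can be sketched as a minimal, stdlib-only Python example. The stage functions, field names, and cleaning rules here are illustrative assumptions; in the actual stack each layer would typically be a Delta table written by a Spark job in Databricks.

```python
import json
from collections import defaultdict

# Hypothetical record flow: raw JSON lines -> raw layer -> processed -> reporting.

def ingest(lines):
    """Ingestion: parse raw JSON lines into the raw layer as-is."""
    return [json.loads(line) for line in lines]

def process(raw):
    """Processed layer: keep only well-formed records and normalize types."""
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in raw
        if "region" in r and "amount" in r
    ]

def report(processed):
    """Reporting layer: aggregate amounts per region for a BI tool."""
    totals = defaultdict(float)
    for r in processed:
        totals[r["region"]] += r["amount"]
    return dict(totals)

raw_lines = [
    '{"region": "EU", "amount": "10.5"}',
    '{"region": "US", "amount": "4.0"}',
    '{"region": "EU", "amount": "2.5"}',
    '{"malformed": true}',
]
summary = report(process(ingest(raw_lines)))
# e.g. {"EU": 13.0, "US": 4.0}
```

The design point is that each layer only reads from the one before it, so a bad record dropped in the processed layer never reaches reporting.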


Required Skills:

  • Databricks (Spark, Notebooks, Delta Lake)
  • AWS S3, AWS CLI
  • Python scripting
  • Jira, Bitbucket
  • WSL environment knowledge
  • Good understanding of DE architectures
  • Persistence/Roche JD
  • Singularity containerization
  • HPC / on-prem experience
  • SLURM
  • DevOps and deployment (including building pip packages, custom libraries)
  • Python stack
  • GCP
  • Mainly: DAGs, containerization, Python
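For the HPC skills listed above (SLURM, Singularity), a typical batch submission looks like the config sketch below. The job name, partition, image, and script names are placeholders for illustration only, not values from this posting.

```shell
#!/bin/bash
#SBATCH --job-name=etl-step        # job name shown in squeue
#SBATCH --partition=batch          # placeholder partition name
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00

# Run one Python pipeline step inside a Singularity container;
# pipeline.sif and etl_step.py are hypothetical names.
singularity exec pipeline.sif python etl_step.py
```

Submitted with `sbatch job.sh`, this keeps the Python environment pinned inside the container image rather than on the cluster nodes.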


Nice to Have:

  • Exposure to BI tools like Power BI, Tableau
  • Familiarity with Agile project tracking tools
  • Checklist-based task management

