
5 - 10 years

7 - 12 Lacs

Posted: 2 days ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Job Summary: As a Data Development Engineer, you will be a core member of an elite team responsible for designing, developing, and scaling high-performance, data-intensive applications. This role demands deep technical expertise, particularly in Python and big data ecosystems such as Apache Spark, along with a strong understanding of modern data pipelines and cloud platforms. You will also have the opportunity to evolve into a technical leadership position within the Data Initiative.
Experience: 5 - 8 Years
Location: Bangalore
Key Responsibilities:
  • Collaborate with the team to define and implement high-level technical architecture for backend services and data monetization components.
  • Design, develop, and enhance features in scalable data applications and services.
  • Develop technical documentation, data flow diagrams, and architectural designs.
  • Partner with QA, DevOps, Data Engineering, and Product teams for deployment, testing, training, and production support.
  • Build and maintain robust integrations with enterprise data platforms and tools (e.g., Databricks, Kafka).
  • Write clean, efficient, and testable Python and PySpark code (a brief illustrative sketch follows this list).
  • Ensure compliance with development, coding, security, and privacy standards.
  • Proactively learn and adapt to new technologies based on evolving business needs.
  • Mentor junior developers and contribute to establishing best practices.
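
For illustration, here is a minimal sketch of the kind of PySpark transformation work described above. The dataset, column names, and the daily_trade_volume function are hypothetical examples, not taken from this posting.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def daily_trade_volume(trades: DataFrame) -> DataFrame:
    """Aggregate notional volume per currency pair per day.

    Assumes `trades` has columns: trade_date, currency_pair, notional.
    """
    return (
        trades
        .withColumn("trade_date", F.to_date("trade_date"))
        .groupBy("trade_date", "currency_pair")
        .agg(F.sum("notional").alias("total_notional"))
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-trade-volume").getOrCreate()
    trades = spark.createDataFrame(
        [("2024-01-02", "EUR/USD", 1_000_000.0),
         ("2024-01-02", "EUR/USD", 250_000.0),
         ("2024-01-03", "USD/JPY", 500_000.0)],
        ["trade_date", "currency_pair", "notional"],
    )
    daily_trade_volume(trades).show()
    spark.stop()
```

Keeping each transformation in a pure function that takes and returns a DataFrame is one way to keep the code easy to unit test, which ties into the testing expectations listed under Qualifications.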

Qualifications:

  • 5+ years of hands-on Python development experience, specifically in data-intensive environments.
  • Strong expertise in Apache Spark and PySpark for distributed data processing.
  • Proficient in SQL, query optimization, and working with relational databases (e.g., Oracle, Spark SQL).
  • Solid understanding of software development lifecycle (SDLC) and agile methodologies.
  • Proven experience in writing unit, integration, and performance tests for data pipelines (see the test sketch after this list).
  • Hands-on experience with Databricks and large-scale data environments.
  • Deep understanding of data pipelines, including data engineering workflows, data lineage, transformation, and quality frameworks.
  • Familiarity with AWS (or other cloud providers) for deploying and managing data infrastructure.
  • Excellent communication skills and a strong sense of ownership and accountability.
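
As a rough illustration of unit-testing a pipeline step with pytest against a local Spark session, the sketch below exercises a hypothetical dedupe_latest helper; the helper, its column names, and the sample rows are assumptions, not part of this role's codebase.

```python
import pytest
from pyspark.sql import DataFrame, SparkSession, Window
from pyspark.sql import functions as F


@pytest.fixture(scope="session")
def spark():
    # Lightweight local Spark session shared across the test session.
    session = (
        SparkSession.builder.master("local[1]").appName("pipeline-tests").getOrCreate()
    )
    yield session
    session.stop()


def dedupe_latest(df: DataFrame, key_col: str, ts_col: str) -> DataFrame:
    """Keep only the most recent record per key, a common pipeline step."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")


def test_dedupe_keeps_latest_record(spark):
    df = spark.createDataFrame(
        [("A", "2024-01-01", 10), ("A", "2024-01-02", 20), ("B", "2024-01-01", 5)],
        ["key", "ts", "value"],
    )
    result = {row["key"]: row["value"] for row in dedupe_latest(df, "key", "ts").collect()}
    assert result == {"A": 20, "B": 5}
```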

Good-to-Have Skills:

  • Experience in foreign exchange (FX) or capital markets is highly desirable.
  • Knowledge of modern data serialization formats (e.g., AVRO, Parquet); a brief Parquet sketch follows this list.
  • Experience with Apache Kafka and real-time data streaming.
  • Familiarity with Apache Airflow or other orchestration tools.
  • Comfort working in Linux environments and scripting.
  • Exposure to data governance, compliance, and data security best practices.
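
Below is a minimal sketch of writing and reading Parquet with PySpark; the /tmp/fx_rates_parquet path and the sample rows are purely illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()

rates = spark.createDataFrame(
    [("EUR/USD", 1.0935), ("USD/JPY", 148.12)],
    ["currency_pair", "rate"],
)

# Parquet is a columnar format that stores the schema with the data and
# compresses well, which is why it is common in large-scale pipelines.
rates.write.mode("overwrite").parquet("/tmp/fx_rates_parquet")

# Reading the files back recovers both schema and rows.
spark.read.parquet("/tmp/fx_rates_parquet").show()
spark.stop()
```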

Wissen Technology

IT Services and IT Consulting

Bangalore, Karnataka
