Senior Data Engineer (Flexcube)

6 - 11 years

9 - 13 Lacs

Platform: Naukri

Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Summary

We are seeking a talented Data Engineer to join our dynamic team, focusing on big data solutions for the banking industry. In this role, you will lead the design and development of Big Data Warehouse and Data Lake systems, playing a crucial part in our clients' digital transformation journeys. Your expertise in data engineering will empower financial institutions to harness the power of their data, enabling better decision-making and enhanced customer experiences.

Key Responsibilities

  • Big Data Warehouse and Data Lake Development: Lead the design and implementation of Big Data Warehouse and Data Lake systems, utilizing the Hadoop ecosystem and cloud technologies.
  • Data Pipeline Architecture: Architect efficient data pipelines to ingest, transform, and load data from various sources into the data warehouse and data lake.
  • Hadoop Ecosystem Expertise: Employ tools such as Spark, Hadoop, Hive, and Sqoop to process and analyze large-scale data, ensuring optimal performance and security.
  • Cloud Integration: Leverage cloud technologies such as MS Azure Data Factory, Azure Databricks, and ADLS Gen2 to build scalable and cost-effective data solutions.
  • Data Modeling: Collaborate with the functional team to understand requirements and design canonical data models for transaction processing.
  • Code Development: Write high-quality code in Python, Scala, and SQL, adhering to best practices and coding standards.
  • Peer Reviews: Conduct code reviews to ensure code quality and catch defects early.
  • Troubleshooting: Troubleshoot and resolve issues, following escalation procedures as needed.
  • Team Collaboration: Work as a senior member of the development team, providing technical guidance and participating in project discussions.
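The ingest → transform → load flow described above can be sketched in miniature. This is an illustrative Python example only: it uses the standard-library sqlite3 module as a stand-in for the warehouse so it is self-contained (in the role itself this would be built with Spark or Azure Data Factory against ADLS Gen2), and the record and field names are hypothetical, not taken from the FLEXCUBE data model.

```python
import sqlite3

# Hypothetical raw transaction records as they might arrive from a
# core-banking source system (field names are illustrative).
raw_rows = [
    {"txn_id": "T1", "amount": "1500.00", "currency": "inr"},
    {"txn_id": "T2", "amount": "250.50", "currency": "usd"},
]

def transform(row):
    # Normalize types and casing before loading into the warehouse table.
    return (row["txn_id"], float(row["amount"]), row["currency"].upper())

# "Warehouse" target: an in-memory SQLite table standing in for the
# real data warehouse / data lake layer.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (txn_id TEXT, amount REAL, currency TEXT)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    (transform(r) for r in raw_rows),
)

# Downstream consumers query the loaded, normalized data.
total = conn.execute("SELECT SUM(amount) FROM transactions").fetchone()[0]
print(total)  # 1750.5
```

The same three-stage shape (read raw records, apply a normalizing transform, write to a governed target table) carries over to Spark, where `raw_rows` would be a DataFrame read from the source and the load step a write to the lake or warehouse.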

Qualifications & Skills

Mandatory:

  • A Computer Science-related degree (BE/BTech/MCA) is required.
  • 3-7 years of experience in Data Engineering, with a strong focus on big data technologies.
  • 3-7 years of hands-on experience with Hadoop ecosystem tools, particularly Spark (PySpark or Spark-Scala).
  • Proficiency in programming languages: Python, Scala, and SQL.
  • Understanding of distributed computing, data structures, and algorithms.
  • Experience in the banking domain, preferably with knowledge of core banking solutions like Oracle FLEXCUBE.
  • Strong analytical and communication skills.

Good-to-Have:

  • Prior involvement in at least two production implementations of Big Data Warehouse/Data Lake projects.
  • Knowledge of Oracle FLEXCUBE backend data model.
  • Familiarity with SDLC and Agile methodologies.
  • Flexibility and adaptability to changing project priorities.
  • Excellent written and verbal communication skills.

Self-Assessment Questions:

  1. Describe a successful Big Data Warehouse or Data Lake project you worked on. What were your key contributions, and how did you ensure its success?
  2. How do you approach designing a data pipeline for a large-scale transaction processing system? Elaborate on the tools and techniques you would choose and why.
  3. Share your experience with Spark and Hadoop. How have you optimized data processing and analysis using these technologies?
  4. In your experience, what are the critical considerations when working with cloud data solutions like Azure Data Factory and Azure Databricks?
  5. How do you stay updated with the latest trends and technologies in the data engineering field, and how do you apply this knowledge to your work?

Screening Criteria:

  • 3-7 years of experience in Data Engineering, specifically in the Big Data domain.
  • Hands-on expertise with Hadoop ecosystem tools, including Spark and Kafka.
  • Profound knowledge of distributed computing and storage systems.
  • Proficiency in SQL and PL/SQL programming and performance tuning.
  • Understanding of Oracle FLEXCUBE backend data model.
  • Strong analytical and communication skills.

Company

Oracle

Industry

Information Technology

Location

Redwood City
