
Big Data Developer (Spark/Scala + Python)

Experience

5 - 10 years

Salary

7 - 12 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Summary

Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
  • Required:
    • Apache Spark (latest stable version)
    • Scala (version 2.12 or higher)
    • Python (version 3.6 or higher)
    • Big Data tools and frameworks supporting Spark and Scala
  • Preferred:
    • Cloud platforms such as AWS, Azure, or GCP for data deployment
    • Data processing or orchestration tools like Kafka, Hadoop, or Airflow
    • Data visualization tools for presenting insights
Overall Responsibilities
  • Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python
  • Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions
  • Mentor and guide junior team members on best practices in big data development
  • Evaluate and recommend new technologies and tools to improve data processing and quality
  • Stay informed about industry trends and emerging technologies relevant to big data and analytics
  • Ensure timely delivery of data projects with high standards of quality, performance, and security
  • Lead technical reviews, code reviews, and provide inputs to improve overall development standards and practices
  • Contribute to architecture design discussions and assist in establishing data governance standards
Technical Skills (By Category)

Programming Languages:

  • Essential: Scala (with Apache Spark), Python
  • Preferred: Knowledge of Java or other JVM languages

Data Management & Databases:

  • Experience with distributed data storage solutions (HDFS, S3, etc.)
  • Familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration

Cloud Technologies:

  • Preferred: Cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment

Frameworks & Libraries:

  • Spark MLlib, Spark SQL, Spark Streaming
  • Data processing libraries in Python (pandas, PySpark)
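
As an illustrative sketch only (not code from the role itself), the kind of batch aggregation these libraries support might look like the following; the column names and data are hypothetical, and in production a transformation like this would typically run on Spark rather than pandas:

```python
import pandas as pd

# Hypothetical example: aggregate raw transaction records into
# per-customer totals, the kind of batch transformation that would
# be scaled out with PySpark in a production pipeline.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount": [10.0, 5.0, 7.5, 2.5, 20.0],
})

# Group by customer and sum spend, mirroring a Spark
# df.groupBy("customer_id").sum("amount") pipeline stage.
totals = (
    raw.groupby("customer_id", as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_spend"})
)

print(totals)
```

The same logic translates almost line-for-line to a PySpark DataFrame, which is why pandas fluency is commonly paired with Spark experience in roles like this.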

Development Tools & Methodologies:

  • Version control (Git, Bitbucket)
  • Agile methodologies (Scrum, Kanban)
  • Data pipeline orchestration tools (Apache Airflow, NiFi)

Security & Compliance:

  • Understanding of data security best practices and data privacy regulations
Experience Requirements
  • 5 to 10 years of hands-on experience in big data development and architecture
  • Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python
  • Demonstrated ability to lead technical projects and mentor team members
  • Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders
  • Track record of delivering scalable, efficient, and secure data solutions in complex environments
Day-to-Day Activities
  • Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python
  • Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions
  • Lead code reviews, mentor junior team members, and enforce coding standards
  • Participate in architecture design and recommend best practices in big data development
  • Monitor the performance of data workflows and troubleshoot issues to ensure data quality and reliability
  • Stay updated with industry trends and evaluate new tools and frameworks for potential implementation
  • Document technical designs, data flows, and implementation procedures
  • Contribute to continuous improvement initiatives to optimize data processing workflows
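
To illustrate the data-quality monitoring mentioned above, here is a minimal sketch of a row-level validation step; the field names and rules are hypothetical, and in a real pipeline such checks would usually run inside Spark or an orchestration framework:

```python
# Minimal sketch of a row-level data-quality check; the schema and
# validation rules here are hypothetical, for illustration only.
def validate_row(row: dict) -> list:
    """Return a list of data-quality issues found in one record."""
    issues = []
    if not row.get("customer_id"):
        issues.append("missing customer_id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        issues.append("invalid amount")
    return issues

records = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": None, "amount": -5.0},
]

# Collect the index and issues of every failing record.
failures = {
    i: validate_row(r) for i, r in enumerate(records) if validate_row(r)
}
print(failures)
```

A check like this, run on samples or at pipeline boundaries, is one common way teams surface data-quality regressions before downstream consumers see them.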
Qualifications
  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
  • Relevant certifications in cloud platforms, big data, or programming languages are advantageous
  • Commitment to continuous learning in innovative data technologies and frameworks
Professional Competencies
  • Strong analytical and problem-solving skills with a focus on scalable data solutions
  • Leadership qualities with the ability to guide and mentor team members
  • Excellent communication skills to articulate technical concepts to diverse audiences
  • Ability to work collaboratively in cross-functional teams and fast-paced environments
  • Adaptability to evolving technologies and industry trends
  • Strong organizational skills for managing multiple projects and priorities

Synechron

Information Technology and Services

New York

1000+ Employees

Key People

  • Faisal Husain, Co-Founder & CEO
  • Maqbool Kazi, Managing Director