Automation NoSQL Data Engineer

5 - 31 years

12 - 17 Lacs

Bengaluru/Bangalore

Posted: 1 week ago | Platform: Apna


Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Senior Data Engineer – DevOps & Distributed Systems
Department: Data Engineering / DevOps
Experience Required: 7+ years
Employment Type: Full-time

About the Role

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. You will solve complex data engineering challenges, build robust and scalable real-time data platforms, and deploy and optimize distributed data systems in high-performance environments. This role demands a proactive professional who thrives in fast-paced, DevOps-centered settings and is passionate about delivering impactful data solutions.

Key Responsibilities

- Design and implement scalable, reliable data infrastructure using modern DevOps practices.
- Solve complex data engineering problems across distributed systems and big data platforms.
- Collaborate with cross-functional teams to architect and deploy real-time data processing pipelines.
- Provide advanced engineering-level support for deployed data tools and platforms.
- Manage containerized environments using Kubernetes for the Spark and Flink operators.
- Optimize system performance, manage automated deployments, and ensure production reliability.
- Respond professionally and promptly to customer support queries and issues.

Required Qualifications

- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Minimum 7 years of hands-on experience in data engineering or backend development, with a focus on NoSQL and distributed systems.
- Strong understanding of software development lifecycles, CI/CD, and automation tools.

Technical Skills

- Deep expertise in NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including installation, configuration, and performance tuning (a client-side tuning sketch follows this description).
- Hands-on experience with Apache Spark, Apache Flink, and Apache Airflow in production environments.
- Proven ability to configure, tune, and manage real-time data processing pipelines on containerized platforms.
- Proficiency with Kubernetes for deploying and managing the Spark and Flink operators (a job-submission sketch follows this description).
- Strong scripting skills in Python, Bash, or Go for workflow automation and orchestration.
- Proficiency in developing and managing Airflow DAGs, handling dependencies, retries, and scheduling (a minimal DAG sketch follows this description).
- Solid understanding of CI/CD pipelines using tools such as Jenkins, GitLab CI, or Bamboo.
- Experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or Chef.
- Demonstrated ability to troubleshoot, monitor, and optimize performance in distributed environments.

What You’ll Bring

- A problem-solving mindset and the ability to work independently on complex systems.
- Strong communication and interpersonal skills for interacting with internal teams and clients.
- Agility to adapt quickly to change and deliver high-quality outcomes under tight deadlines.
- A proactive, customer-first attitude and a passion for innovation in the data engineering domain.

Preferred Certifications (Good to Have)

- Certified Kubernetes Administrator (CKA)
- Databricks/Spark/Flink certifications
- HashiCorp Terraform Associate
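For illustration, a minimal sketch of the client-side tuning the NoSQL bullet above refers to, using the DataStax cassandra-driver for Python. The contact points, datacenter name, and query are hypothetical placeholders, not details from this posting.

```python
# Sketch: client-side Cassandra tuning (cassandra-driver 3.x).
# All connection details below are hypothetical.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc="dc1")  # keep traffic in the local DC
    ),
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,  # quorum within local DC
    request_timeout=10,  # seconds
)

cluster = Cluster(
    contact_points=["10.0.0.1", "10.0.0.2"],  # hypothetical seed nodes
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
)
session = cluster.connect()
for row in session.execute("SELECT host_id, rack FROM system.local"):
    print(row.host_id, row.rack)
cluster.shutdown()
```

TokenAwarePolicy routes each request to a replica that owns the partition, avoiding an extra coordinator hop, while LOCAL_QUORUM keeps reads and writes within the local datacenter.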
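A sketch of submitting a Spark job through the Kubernetes Spark operator, assuming the Kubeflow spark-operator CRD (sparkoperator.k8s.io/v1beta2) is installed in the cluster and using the official kubernetes Python client. The namespace, image, and application file are hypothetical.

```python
# Sketch: create a SparkApplication custom resource via the Kubernetes API.
# Assumes the spark-operator CRD is installed; names/paths are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

spark_app = {
    "apiVersion": "sparkoperator.k8s.io/v1beta2",
    "kind": "SparkApplication",
    "metadata": {"name": "spark-pi-example", "namespace": "data-jobs"},
    "spec": {
        "type": "Scala",
        "mode": "cluster",
        "image": "example.registry/spark:3.5.0",  # hypothetical image
        "mainClass": "org.apache.spark.examples.SparkPi",
        "mainApplicationFile": "local:///opt/spark/examples/jars/spark-examples.jar",
        "sparkVersion": "3.5.0",
        "driver": {"cores": 1, "memory": "1g", "serviceAccount": "spark"},
        "executor": {"cores": 1, "instances": 2, "memory": "1g"},
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="sparkoperator.k8s.io",
    version="v1beta2",
    namespace="data-jobs",
    plural="sparkapplications",
    body=spark_app,
)
```

Declaring the job as a custom resource lets the operator handle driver/executor pod lifecycle, restarts, and cleanup instead of hand-rolled spark-submit wrappers.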
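A minimal Airflow DAG sketch showing the dependency, retry, and scheduling handling the Airflow bullet describes. The DAG id, task names, and commands are hypothetical placeholders.

```python
# Sketch: Airflow 2.x DAG with scheduling, retries, and task dependencies.
# dag_id, task ids, and commands are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 3,                          # retry each failed task up to 3 times
    "retry_delay": timedelta(minutes=5),   # wait 5 minutes between retries
}

with DAG(
    dag_id="realtime_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",           # run once per hour
    catchup=False,                         # do not backfill missed runs
    default_args=default_args,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    # Upstream/downstream dependencies: extract -> transform -> load
    extract >> transform >> load
```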
