DataOps Engineer - ETL/Python

Experience

10 - 14 years

Salary

0 Lacs

Posted: 5 days ago | Platform: Shine


Work Mode

On-site

Job Type

Full Time

Job Description

As a DataOps Engineer, you will play a crucial role within our data engineering team, blending elements of software engineering, DevOps, and data analytics. Your primary responsibility will be developing and maintaining secure, scalable, and high-quality data pipelines and infrastructure to support our clients' advanced analytics, machine learning, and real-time decision-making needs.

Key Responsibilities

- Design, develop, and manage robust ETL/ELT pipelines using Python and modern DataOps methodologies (a minimal pipeline sketch follows this section).
- Implement data quality checks, pipeline monitoring, and error-handling mechanisms.
- Build data solutions on AWS services such as S3, ECS, Lambda, and CloudWatch.
- Containerize applications with Docker and orchestrate them with Kubernetes for scalable deployments.
- Use infrastructure-as-code tools and CI/CD pipelines to automate deployments.
- Design and optimize data models using PostgreSQL, Redis, and PGVector for high-performance storage and retrieval (see the PGVector and Redis sketches below).
- Support feature stores and vector-based storage for AI/ML applications.
- Drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery.
- Review pull requests, conduct code reviews, and enforce security and performance standards.
- Collaborate closely with product owners, analysts, and architects to refine user stories and technical requirements.

Required Skills and Qualifications

- At least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles focused on data products.
- Proficiency in Python, Docker, Kubernetes, and AWS (especially S3 and ECS).
- Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis; experience with PGVector is a strong advantage.
- Deep understanding of CI/CD pipelines, GitHub workflows, and modern source-control practices.
- Experience working in Agile/Scrum environments, with excellent collaboration and communication skills.
- A passion for writing clean, well-documented, and scalable code in a collaborative environment.
- Familiarity with DataOps principles, including automation, testing, monitoring, and deployment of data pipelines.
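
To make the ETL/ELT responsibility concrete, here is a minimal sketch of the kind of pipeline step the role describes: extract, basic data quality checks, error handling, and a load to S3 via boto3. The bucket, object key, and `orders.csv` schema are hypothetical placeholders, not part of the posting.

```python
"""Minimal ETL sketch with quality checks and error handling (assumed
names throughout: bucket, key, and column names are illustrative)."""
import csv
import io
import logging

import boto3

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("etl.orders")

REQUIRED_COLUMNS = {"order_id", "customer_id", "amount"}  # hypothetical schema


def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def validate(rows: list[dict]) -> list[dict]:
    """Data quality checks: required columns present, amounts numeric."""
    if not rows:
        raise ValueError("no rows extracted")
    missing = REQUIRED_COLUMNS - rows[0].keys()
    if missing:
        raise ValueError(f"missing columns: {missing}")
    # Keep only rows whose amount parses as a simple non-negative number.
    clean = [r for r in rows if r["amount"].replace(".", "", 1).isdigit()]
    logger.info("validated %d/%d rows", len(clean), len(rows))
    return clean


def load(rows: list[dict], bucket: str, key: str) -> None:
    """Write validated rows to S3 as CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(REQUIRED_COLUMNS))
    writer.writeheader()
    writer.writerows({k: r[k] for k in REQUIRED_COLUMNS} for r in rows)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())


if __name__ == "__main__":
    try:
        load(validate(extract("orders.csv")), "my-data-bucket", "clean/orders.csv")
    except Exception:
        logger.exception("pipeline failed")  # surfaces in CloudWatch Logs on ECS/Lambda
        raise
```

When a step like this runs on ECS or Lambda, its log output lands in CloudWatch Logs, which is where the monitoring and alerting mentioned above would typically hook in.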
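For the PostgreSQL/PGVector data-model work, vector storage and nearest-neighbour retrieval might look like the sketch below. The DSN, the `documents` table, and the three-dimensional embeddings are illustrative assumptions; real embeddings would come from a model and be much larger.

```python
"""Sketch of vector storage and retrieval with PostgreSQL + pgvector."""
import psycopg2

conn = psycopg2.connect("dbname=features user=dataops")  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute(
    """
    CREATE TABLE IF NOT EXISTS documents (
        id serial PRIMARY KEY,
        body text,
        embedding vector(3)  -- tiny dimension, for illustration only
    )
    """
)
cur.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s::vector)",
    ("example doc", "[0.1, 0.2, 0.3]"),
)
conn.commit()

# Nearest-neighbour lookup: `<->` is pgvector's L2 distance operator.
cur.execute(
    "SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchall())
cur.close()
conn.close()
```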
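And for the feature-store responsibility, one common pattern pairs Redis as a low-latency read-through cache in front of PostgreSQL. This sketch assumes a hypothetical `feature_values` table keyed by entity; it is one possible design, not the posting's prescribed architecture.

```python
"""Read-through cache sketch: Redis in front of PostgreSQL for feature
lookups. Table, key, and connection names are hypothetical."""
import json

import psycopg2
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
pg = psycopg2.connect("dbname=features user=dataops")  # hypothetical DSN


def get_features(entity_id: str, ttl: int = 300) -> dict:
    """Return features from Redis if cached, else fall back to Postgres."""
    key = f"features:{entity_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    with pg.cursor() as cur:
        cur.execute(
            "SELECT payload FROM feature_values WHERE entity_id = %s",
            (entity_id,),
        )
        row = cur.fetchone()
    features = row[0] if row else {}
    r.set(key, json.dumps(features), ex=ttl)  # cache with an expiry
    return features
```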

Aligned Automation

Industrial Automation

Tech City
