6.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (an illustrative sketch follows this posting)
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering

Requirements:
- 5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines

Nice to Have:
- Experience with graph databases (Neo4j, Amazon Neptune)
- Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
- Experience working with streaming data technologies and real-time data processing
- Familiarity with data governance and data security best practices
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of financial back-office operations and the FinTech domain
- Experience working in a high-growth startup environment
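To give a flavour of the pipeline work described above, here is a minimal, illustrative Airflow DAG sketch for loading Debezium change events from a Kafka topic into a warehouse staging table. It is not part of this role's codebase; the DAG, topic, table, and task names are hypothetical, and the loading logic is left as a stub.

```python
# Illustrative sketch only: a micro-batch Airflow DAG that would pick up
# Debezium change events from Kafka and load them into a staging table.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_cdc_batch(**context):
    """Stub: poll a batch of change events and upsert them into staging.
    A real task would use a Kafka consumer (e.g. confluent-kafka) plus a
    warehouse client to transform and COPY/upsert the records."""
    pass


with DAG(
    dag_id="orders_cdc_to_warehouse",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval=timedelta(minutes=15),   # micro-batch cadence
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(
        task_id="load_orders_cdc_batch",
        python_callable=load_cdc_batch,
    )
```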
Posted 1 week ago