3 Data Framework Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

16 - 20 Lacs

Hyderabad

Work from Office


We are hiring a Data Engineer for a US-based IT company based in Hyderabad. Candidates with a minimum of 5 years of experience in Data Engineering can apply. This position is a 1-year contract only.

Job Title: Data Engineer
Location: Hyderabad
CTC: Up to 20 LPA
Experience: 5+ Years

Job Overview: We are looking for a seasoned Senior Data Engineer with deep hands-on experience in Talend and IBM DataStage to join our growing enterprise data team. This role will focus on designing and optimizing complex data integration solutions that support enterprise-wide analytics, reporting, and compliance initiatives. In this senior-level position, you will collaborate with data architects, analysts, and key stakeholders to facilitate large-scale data movement, enhance data quality, and uphold governance and security protocols.

Key Responsibilities:
- Develop, maintain, and enhance scalable ETL pipelines using Talend and IBM DataStage
- Partner with data architects and analysts to deliver efficient and reliable data integration solutions
- Review and optimize existing ETL workflows for performance, scalability, and reliability
- Consolidate data from multiple sources, both structured and unstructured, into data lakes and enterprise platforms
- Implement rigorous data validation and quality assurance procedures to ensure data accuracy and integrity
- Adhere to best practices for ETL development, including source control and automated deployment
- Maintain clear and comprehensive documentation of data processes, mappings, and transformation rules
- Support enterprise initiatives around data migration, modernization, and cloud transformation
- Mentor junior engineers and participate in code reviews and team learning sessions

Required Qualifications:
- Minimum 5 years of experience in data engineering or ETL development
- Proficient with Talend (Open Studio and/or Talend Cloud) and IBM DataStage
- Strong skills in SQL, data profiling, and performance tuning
- Experience handling large datasets and complex data workflows
- Solid understanding of data warehousing, data modeling, and data lake architecture
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Strong analytical and troubleshooting skills
- Effective verbal and written communication, with strong documentation habits

Preferred Qualifications:
- Prior experience in banking or financial services
- Exposure to cloud platforms such as AWS, Azure, or Google Cloud
- Knowledge of data governance tools (e.g., Collibra, Alation)
- Awareness of data privacy regulations (e.g., GDPR, CCPA)
- Experience working in Agile/Scrum environments

For further assistance, contact/WhatsApp 9354909518 or write to priya@gist.org.in
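The data profiling and validation duties above lend themselves to a short illustration. The following is a minimal sketch in Python (pandas), assuming a hypothetical staged CSV file, column names, and checks; it is not part of the posting, and a real Talend or DataStage pipeline would implement equivalent quality gates inside its own components.

```python
import pandas as pd

# Hypothetical staged file produced by an upstream ETL job (assumed path/schema).
STAGED_FILE = "staging/customers.csv"
REQUIRED_COLUMNS = ["customer_id", "email"]

def validate_staged_data(path: str) -> pd.DataFrame:
    """Run minimal data-quality checks before loading into the warehouse."""
    df = pd.read_csv(path)

    # Schema check: required columns must be present.
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")

    # Completeness check: key columns must not contain nulls.
    null_counts = df[REQUIRED_COLUMNS].isna().sum()
    if null_counts.any():
        raise ValueError(f"Null values found:\n{null_counts[null_counts > 0]}")

    # Uniqueness check: the primary key must not contain duplicates.
    dupes = df["customer_id"].duplicated().sum()
    if dupes:
        raise ValueError(f"{dupes} duplicate customer_id values found")

    return df

if __name__ == "__main__":
    clean = validate_staged_data(STAGED_FILE)
    print(f"Validation passed: {len(clean)} rows ready to load")
```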

Posted 2 weeks ago

Apply

6.0 - 8.0 years

15 - 25 Lacs

Hyderabad

Remote


Job Title: Data Engineer II
Experience: 6+ Years
Location: Remote (India)
Job Type: Full-time

Job Description: We are looking for a highly skilled Data Engineer II with 6+ years of experience, including at least 4 years in data engineering or software development. The ideal candidate will be well-versed in building scalable data solutions using modern data ecosystems and cloud platforms.

Key Responsibilities:
- Design, build, and optimize scalable ETL pipelines.
- Work extensively with Big Data technologies like Snowflake and Databricks.
- Write and optimize complex SQL queries for large datasets.
- Define and manage SLAs, performance benchmarks, and monitoring systems.
- Develop data solutions using the AWS Data Ecosystem, including S3, Lambda, and more.
- Handle both relational (e.g., PostgreSQL) and NoSQL databases.
- Work with programming languages like Python, Java, and/or Scala.
- Use Linux command-line tools for system and data operations.
- Implement best practices in data lineage, data quality, data observability, and data discoverability.

Preferred (Nice-to-Have):
- Experience with data mesh architecture or building distributed data products.
- Prior exposure to data governance frameworks.
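As a rough illustration of the S3-centred pipeline work this listing mentions, the sketch below reads a raw CSV object from S3, applies a trivial row-level transformation, and writes the result to a curated location. The bucket names, keys, columns, and transformation are hypothetical, and a production pipeline would more likely run as a Lambda function or an orchestrated job than as a standalone script.

```python
import csv
import io

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical locations; replace with real buckets and keys.
RAW_BUCKET, RAW_KEY = "raw-zone", "orders/2024-01-01.csv"
CURATED_BUCKET, CURATED_KEY = "curated-zone", "orders/2024-01-01.csv"

def transform_row(row: dict) -> dict:
    """Trivial example transformation: normalise an assumed 'currency' column."""
    row["currency"] = row.get("currency", "INR").upper()
    return row

def run() -> None:
    # Extract: download the raw CSV object from S3.
    obj = s3.get_object(Bucket=RAW_BUCKET, Key=RAW_KEY)
    reader = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))

    # Transform: apply row-level cleanup.
    rows = [transform_row(r) for r in reader]
    if not rows:
        return  # nothing to load

    # Load: write the curated CSV back to S3.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=CURATED_BUCKET, Key=CURATED_KEY,
                  Body=out.getvalue().encode("utf-8"))

if __name__ == "__main__":
    run()
```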

Posted 1 month ago

Apply

6 - 8 years

12 - 16 Lacs

Hyderabad

Remote


Job Title: Data Engineer

Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integration, performance optimization, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities:
1. Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
2. Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data.
3. Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
4. Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
5. Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
6. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
7. Testing and Debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free. Conduct performance optimization and security audits.

Required Skills and Qualifications:
- Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: Proficiency in Python for data processing, automation, and scripting.
- SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance Optimization: Expertise in query optimization, indexing, and performance tuning.
- Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications:
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
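To ground the Databricks, Spark, and Delta Lake responsibilities listed above, here is a minimal PySpark sketch that ingests raw CSV data, applies a simple cleanup, and writes a partitioned Delta table. The storage paths, column names, and transformations are assumptions for illustration only, and the snippet presumes a Spark environment with Delta Lake configured, as on a Databricks cluster.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Hypothetical locations in cloud storage (assumed container/account names).
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
DELTA_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders/"

# Extract: read raw CSV files with a header row.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(RAW_PATH)
)

# Transform: basic cleanup, assuming `order_id`, `order_ts`, and `amount` columns exist.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: write a Delta table partitioned by date for efficient downstream queries.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save(DELTA_PATH)
)
```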

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
