
145 ETL Pipelines Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

10 - 18 Lacs

Varanasi

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Agra

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Surat

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Ludhiana

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Coimbatore

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Jaipur

Work from Office


Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office


We are looking for a skilled and experienced PySpark Tech Lead to join our dynamic engineering team. In this role, you will lead the development and execution of high-performance big data solutions using PySpark. You will work closely with data scientists, engineers, and architects to design and implement scalable data pipelines and analytics solutions. As a Tech Lead, you will mentor and guide a team of engineers, ensuring the adoption of best practices for building robust and efficient systems while driving innovation in the use of data technologies. Key Responsibilities: Lead and Develop: Design and implement scalable, high-performance data pipelines and ETL processes using PySpark on distributed systems. Tech Leadership: Provide technical direction and leadership to a team of engineers, ensuring the delivery of high-quality solutions that meet both business and technical requirements. Architect Solutions: Develop and enforce best practices for architecture, design, and coding standards; lead the design of complex data engineering workflows, ensuring they are optimized for performance and cost-effectiveness. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements, translating them into scalable technical solutions. Optimization & Performance Tuning: Optimize large-scale data processing pipelines to improve efficiency and performance; implement best practices for memory management, data partitioning, and parallelization in Spark. Code Review & Mentorship: Conduct code reviews to ensure high-quality code, maintainability, and scalability; provide guidance and mentorship to junior and mid-level engineers. Innovation & Best Practices: Stay current on new data technologies and trends, bringing fresh ideas and solutions to the team; implement continuous integration and deployment pipelines for data workflows. Problem Solving: Identify bottlenecks, troubleshoot, and resolve issues related to data quality, pipeline failures, and performance optimization. Skills and Qualifications: Experience: 7+ years of hands-on experience in PySpark and large-scale data processing. Technical Expertise: Strong knowledge of PySpark, Spark SQL, and Apache Kafka; experience with cloud platforms like AWS (EMR, S3), Google Cloud, or Azure; in-depth understanding of distributed computing, parallel processing, and data engineering principles. Data Engineering: Expertise in building ETL pipelines, data wrangling, and working with structured and unstructured data; experience with relational and NoSQL databases such as SQL, MongoDB, or DynamoDB; familiarity with data warehousing solutions and query optimization techniques. Leadership & Communication: Proven ability to lead a technical team, make key architectural decisions, and mentor junior engineers; excellent communication skills, with the ability to collaborate effectively with cross-functional teams and stakeholders. Problem Solving: Strong analytical skills with the ability to solve complex problems involving large datasets and distributed systems. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
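As a rough illustration of the pipeline work this role describes, here is a minimal PySpark sketch of an ETL step with caching and partitioned output; all paths, column names, and settings are hypothetical assumptions rather than part of the posting.

```python
# Minimal PySpark ETL sketch: read, transform, cache, and write partitioned output.
# Paths, columns, and partitioning choices are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw events (hypothetical path)
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: filter, derive columns, aggregate
daily = (
    raw.filter(F.col("event_type") == "purchase")
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "country")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Cache only if the result is reused by several downstream writes
daily.cache()

# Load: repartition by the write key to avoid many small files,
# then write partitioned Parquet
(daily.repartition("event_date")
      .write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_revenue/"))
```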

Posted 1 month ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid


Warm greetings from Dataceria Software Solutions Pvt Ltd. We are looking for: Senior Azure Data Engineer. Domain: BFSI. As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You will work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices. Your responsibilities will include: Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic frontend interfaces. Creating API access layers to expose data to front-end applications and external services. Collaborating with the Data Science team to operationalize models and insights. Working directly with React.js developers to support UI data integration. Ensuring data security, integrity, and monitoring across systems. Implementing and maintaining CI/CD pipelines for seamless deployment. Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services. Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh. Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices. Analyzing, mapping, and documenting financial data models across various systems. What we're looking for: 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services). Proven ability to develop and host secure, scalable REST APIs. Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus. Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation. Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring. Solid command of Python and SQL, and optionally Scala, Java, or PowerShell. Knowledge of data security practices, governance, and compliance (e.g., GDPR). Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines. Excellent communication skills and the ability to explain technical concepts to diverse stakeholders. Joining: Immediate. Work location: Bangalore (hybrid), Chennai. Open positions: Senior Azure Data Engineer. If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply. Dataceria Software Solutions Pvt Ltd. Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/ Email: careers@dataceria.com

Posted 1 month ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Kolkata

Work from Office


Job Title: AI/ML Data Engineer Location: Kolkata, India Experience: 3+ Years Industry: IT / AI & Data Analytics Job Summary: We are hiring an experienced AI/ML Data Engineer to design and build scalable data pipelines and ETL processes to support analytics and machine learning projects. The ideal candidate will have strong Python and SQL skills, hands-on experience with tools like Apache Airflow and Kafka, and working knowledge of cloud platforms (AWS, GCP, or Azure). A strong understanding of data transformation, feature engineering, and data automation is essential. Key Skills Required: ETL & Data Pipeline Development; Python & SQL Programming; Apache Airflow / Kafka / Spark / Hadoop; Cloud Platforms: AWS / GCP / Azure; Data Cleaning & Feature Engineering; Strong Problem-Solving & Business Understanding. Preferred Profile: Candidates with a B.Tech / M.Tech / MCA in Computer Science or Data Engineering, and 3+ years of hands-on experience in building data solutions, who can work closely with cross-functional teams and support AI/ML initiatives.
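To illustrate the scheduled ETL work this posting mentions, a minimal Apache Airflow DAG sketch follows; the DAG id, task bodies, and schedule are placeholder assumptions (Airflow 2.4+ API).

```python
# Minimal Airflow DAG sketch for a daily ETL job; the task logic is a placeholder,
# not a specific employer's pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # pull data from a source system (placeholder)
    print("extracting...")

def transform():
    # clean and feature-engineer the extracted data (placeholder)
    print("transforming...")

def load():
    # write results to the warehouse (placeholder)
    print("loading...")

# schedule= requires Airflow 2.4+; older releases use schedule_interval=
with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```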

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 19 Lacs

Hyderabad, Bengaluru

Work from Office


Key Responsibilities: Python & PySpark: Writing efficient ETL (Extract, Transform, Load) pipelines; implementing data transformations using PySpark DataFrames and RDDs; optimizing Spark jobs for performance and scalability. Apache Spark: Managing distributed data processing; implementing batch and streaming data processing; tuning Spark configurations for efficient resource utilization. Unix Shell Scripting: Automating data workflows and job scheduling; writing shell scripts for file management and log processing; managing cron jobs for scheduled tasks. Google Cloud Platform (GCP) & BigQuery: Designing data warehouse solutions using BigQuery; writing optimized SQL queries for analytics; integrating Spark with BigQuery for large-scale data processing.
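A hedged sketch of the Spark-with-BigQuery integration mentioned above, assuming the spark-bigquery connector is available on the cluster (for example, on Dataproc); the project, dataset, table, and bucket names are invented.

```python
# Sketch: PySpark reading from and writing to BigQuery via the spark-bigquery
# connector (assumed to be installed on the cluster). Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-sketch").getOrCreate()

orders = (spark.read.format("bigquery")
          .option("table", "example-project.sales.orders")
          .load())

summary = (orders.groupBy("region")
           .agg(F.sum("amount").alias("total_amount")))

(summary.write.format("bigquery")
    .option("table", "example-project.analytics.region_totals")
    .option("temporaryGcsBucket", "example-temp-bucket")  # staging bucket for the write
    .mode("overwrite")
    .save())
```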

Posted 1 month ago

Apply

7.0 - 10.0 years

22 - 30 Lacs

Kolkata, Mumbai, Pune

Hybrid


Primary Skills: Azure/AWS, ADB, Kafka, Java/Python, ETL pipelines, Kubernetes, SQL. Secondary Skills: Snowflake. 79-year-old reputed MNC company.

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, Seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments. Key Responsibilities: Configure logging and metrics collection. Set up alerts and dashboards using Grafana, Prometheus, etc. Optimize system visibility for performance and security. Required Skills & Qualifications: Familiar with ELK stack, Datadog, New Relic, or Cloud-native monitoring tools. Strong troubleshooting and root cause analysis skills. Knowledge of distributed systems. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
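As a small sketch of the metrics-collection work this role describes, the snippet below exposes custom application metrics for Prometheus to scrape (and then chart or alert on in Grafana); the metric names and values are placeholders.

```python
# Sketch: exposing custom metrics for Prometheus scraping. Metric names and the
# simulated workload are illustrative placeholders.
import random
import time
from prometheus_client import start_http_server, Counter, Gauge

REQUESTS = Counter("app_requests_total", "Total requests processed")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently queued")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(1)
```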

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 22 Lacs

New Delhi, Gurugram

Hybrid


Team Leadership & Management: Lead, mentor, and develop a team of data engineers. Foster a collaborative and innovative team environment. Conduct performance evaluations and support professional growth. Data Engineering & Architecture: Architect and implement scalable data solutions using Azure Databricks and Snowflake. Design, build, and maintain robust data pipelines with a solid understanding of ETL/ELT processes. Optimize data workflows for performance, reliability, and scalability. Solution Architecture: Architect comprehensive data solutions tailored to business needs. Lead the design and implementation of data warehouses, ensuring alignment with organizational objectives. Collaborate with stakeholders to define and refine data requirements and solutions. AI Integration: Work alongside data scientists and AI specialists to integrate machine learning models into data pipelines. Implement AI-driven solutions to enhance data processing and analytics capabilities. Engineering Project Management: Manage data engineering projects from inception to completion, ensuring timely delivery and adherence to project goals. Utilize project management methodologies to track progress, allocate resources, and mitigate risks. Coordinate with stakeholders to define project requirements and objectives. Infrastructure as Code & Automation: Implement and manage infrastructure using Terraform. Develop and maintain CI/CD pipelines to automate deployments and ensure continuous integration and delivery of data solutions. Quality Assurance & Best Practices: Establish and enforce data engineering best practices and standards. Ensure data quality, security, and compliance across all data initiatives. Conduct code reviews and ensure adherence to coding standards. Collaboration & Communication: Work closely with data analysts, business intelligence teams, and other stakeholders to understand data needs and deliver solutions. Communicate technical concepts and project statuses effectively to non-technical stakeholders. Education: Undergraduate degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent experience. Experience: 8+ years of overall experience in data engineering. 2+ years of experience managing data engineering teams. Proven experience with Azure Databricks and Snowflake. Solid experience in designing data solutions for data warehouses. Hands-on experience with Terraform for infrastructure as code. Strong knowledge of CI/CD tools and practices. Experience integrating AI and machine learning models into data pipelines. Technical Skills: Proficiency in Spark, Scala, Python, SQL, and Databricks. Proven Unix scripting and SQL skills. Strong understanding of SQL and database management. Familiarity with data warehousing, ETL/ELT processes, and big data technologies. Experience with cloud platforms, preferably Microsoft Azure. Project Management: Proven ability to manage multiple projects simultaneously. Familiarity with project management tools (e.g., Jira, Trello, Asana, Rally). Strong organizational and time-management skills. Soft Skills: Excellent leadership and team management abilities. Ability to work collaboratively in a fast-paced environment. Proven ability to perform with minimal supervision. Solid work prioritization, planning, and organizational skills. Leadership qualities including being proactive, thoughtful, thorough, decisive, and flexible.

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site


Looking for a DevOps Senior Engineer in the Data Engineering team who can help us support next-generation analytics applications on Oracle Cloud. This posting is for a DevOps Senior Engineer in the Oracle Analytics Warehouse product development organization. Oracle Analytics Warehouse is a fully managed cloud service that provides customers a turn-key enterprise warehouse on the cloud for Fusion Applications. The service is built on a sophisticated technology stack comprising a brand-new data integration platform and the industry's most sophisticated business analytics platform. https://www.oracle.com/solutions/business-analytics/analytics-for-applications.html We are looking for a senior engineer with experience in supporting data warehousing products. As a member of the product development organization, the focus will be on working with development teams, providing timely support to customers, and identifying/implementing process automation for the cloud BI product. Requirements: BS or higher degree in Computer Science/Engineering (or equivalent experience) from a top university. Proven experience supporting business customers on any cloud/on-premise BI application. Experience in SQL/PL-SQL and excellent debugging skills. Experience in diagnosing network latency and intermittent issues, and in reading and analyzing log files. Good functional knowledge in the ERP, Finance, HCM, or EBS domain. Working experience with any ERP/in-demand application such as Oracle EBS or Fusion is helpful. Good programming skills in Python/Java. Exposure to cloud infrastructure, Oracle Cloud Infrastructure (OCI), is helpful. Experience in performance tuning SQL and understanding ETL pipelines. Build, configure, manage, and coordinate all build and release engineering activities. Strong logical/critical thinking and problem resolution skills. Excellent interpersonal skills. Career Level - IC2. Roles and Responsibilities: As a member of Pipeline Production Operations, you will address customer issues and tickets within defined SLAs. Proactively identify and resolve potential problems to prevent them from occurring and improve the overall customer experience. You will approach each case with the goal of ensuring Oracle Analytics products are performing at an efficient level by addressing any underlying or additional problems uncovered during each customer engagement. Coordinate and connect with different team members to formulate solutions to customer issues. You will ensure full understanding of the issue, including impact to the customer. You will recommend solutions to customers and follow through to resolution, or escalate the case in a timely manner if no resolution can be found. Gather logs and configuration details and attempt to reproduce the reported issues. Develop and improve the knowledge base for issues and their solutions. Participate in knowledge sharing via involvement in technical discussions and knowledge base documentation. Prioritize workload based on severity and demonstrate a sense of urgency when handling cases. Find opportunities for process improvements and automation by building the right utilities/tools. Willing to work in shifts and on weekends based on the support rota.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Hybrid


Sr Azure Data Engineer About Cloudaeon: Cloudaeon is a global technology consulting and services company. We support companies in managing cloud infrastructure and solutions with the help of big data, DevOps and analytics. We offer first-class solutions and services that use big data and always exceed customer expectations. Our deep vertical knowledge, combined with expertise in several enterprise- class big data platforms, helps develop targeted solutions to meet our customers' business needs. Our global team consists of experienced professionals with experience in various tech stacks. Every member of our team is very active and committed to helping our customers achieve their goals. Job Role: We are looking for a Senior Azure Data Engineer with overall 5+ years of experience to join our team. The ideal candidate should have expertise in Azure Data Factory (ADF), Databricks, SQL, Python, and experience working with SAP IS-Auto as a data source. This role involves data modeling, systematic layer modeling, and ETL/ELT pipeline development to enable efficient data processing and analytics. You will use various methods to transform raw data into useful data systems. Overall, you will strive for efficiency by aligning data systems with business goals. Responsibilities: Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading. Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data. Integrate SAP IS-Auto: Extract, transform, and load data from SAP IS-Auto into Azure-based data platforms. Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance. Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage. Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability. Requirements: Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse. Programming: Proficiency in Python for data processing, automation, and scripting. SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation. SAP IS-Auto Data Handling: Experience integrating SAP IS-Auto as a data source into data pipelines. Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling. Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing. Performance Optimization: Expertise in query optimization, indexing, and performance tuning. Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards. Strong problem-solving skills coupled with good communication skills. Open minded, inquisitive, life-long learner. Good conversion of high-level business & technical requirements into technical specs. Feeling comfortable in using Azure cloud technologies. Customer centric, passionate about delivering great digital products and services. Preferred Qualifications: Experience with CI/CD for data pipelines using Azure DevOps. Knowledge of Kafka/Event Hub for real-time data processing. 
Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
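A minimal sketch of the Databricks/Delta Lake pipeline work described above, showing an incremental MERGE (upsert) into a Delta table; the ADLS paths, key column, and input format are assumptions for illustration only.

```python
# Sketch: incremental load into a Delta table on Azure Databricks.
# Storage paths and the business key are illustrative assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

updates = spark.read.json(
    "abfss://landing@exampleaccount.dfs.core.windows.net/vehicles/2024-06-01/"
)
target_path = "abfss://curated@exampleaccount.dfs.core.windows.net/delta/vehicles"

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # Upsert (MERGE) new records by business key
    (target.alias("t")
        .merge(updates.alias("u"), "t.vehicle_id = u.vehicle_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First load: create the Delta table
    updates.write.format("delta").save(target_path)
```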

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Gurugram

Hybrid


Hi, Wishes from GSN!!! Pleasure connecting with you!!! We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully meeting our clients' needs for the last 20 years. At present, GSN is hiring a PySpark Developer for one of our leading MNC clients. PFB the details for your better understanding: ~~~~ LOOKING FOR IMMEDIATE JOINERS ~~~~ WORK LOCATION: Gurugram Job Role: PySpark Developer EXPERIENCE: 5 Yrs - 10 Yrs CTC Range: 20 LPA - 28 LPA Work Type: HYBRID Only JD: Must be strong in advanced SQL (e.g., joins and aggregations). Should have good experience in PySpark (at least 4 years). Good to have knowledge of AWS services. Experience across the data lifecycle. Design and develop ETL pipelines using PySpark on the AWS framework. If interested, kindly APPLY for an IMMEDIATE response. Thanks & Regards, Sathya K GSN Consulting Mob: 8939666794 Mail ID: sathya@gsnhr.net; Web: https://g.co/kgs/UAsF9W
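A small sketch of the advanced SQL joins and aggregations run through PySpark that this JD calls for; the input paths, view names, and columns are invented.

```python
# Sketch: SQL-style join + aggregation executed through Spark SQL.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join-agg-sketch").getOrCreate()

spark.read.parquet("s3://example-bucket/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3://example-bucket/customers/").createOrReplaceTempView("customers")

top_segments = spark.sql("""
    SELECT c.segment,
           COUNT(DISTINCT o.customer_id) AS customers,
           SUM(o.order_total)            AS revenue
    FROM orders o
    LEFT JOIN customers c
      ON o.customer_id = c.customer_id
    GROUP BY c.segment
    ORDER BY revenue DESC
""")
top_segments.show(10)
```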

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs. Key Responsibilities: Design cloud-native solutions using AWS, Azure, or GCP Lead cloud migration and transformation projects Define cloud governance, cost control, and security strategies Collaborate with DevOps and engineering teams for implementation Required Skills & Qualifications: Deep expertise in cloud architecture and multi-cloud environments Experience with containers, serverless, and microservices Proficiency in Terraform, CloudFormation, or equivalent Bonus: Cloud certification (AWS/Azure/GCP Architect) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms. Key Responsibilities: Develop ETL workflows using cloud data services. Manage data storage, lakes, and warehouses. Ensure data quality and pipeline reliability. Required Skills & Qualifications: Experience with BigQuery, Redshift, or Azure Synapse. Proficiency in SQL, Python, or Spark. Familiarity with data lake architecture and batch/streaming. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 month ago

Apply

5 - 7 years

16 - 25 Lacs

Gurugram

Work from Office


Key responsibilities: 1. Understand, implement, and automate ETL pipelines following industry best standards. 2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc. 3. Develop, integrate, test, and maintain existing and new applications. 4. Design and create data pipelines (data lake / data warehouses) for real-world energy analytical solutions. 5. Expert-level proficiency in Python (preferred) for automating everyday tasks. 6. Strong understanding of and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc. 7. Limited experience in using other leading cloud platforms, preferably Azure. 8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc. 9. Ability to work in a team in an agile setting, familiarity with Jira, and a clear understanding of how Git works. 10. Must have 5-7 years of experience.
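A hedged sketch of the Spark Structured Streaming plus Kafka work listed above; the broker address, topic, schema, and sink are assumptions, and the Kafka connector package must be available on the cluster.

```python
# Sketch: Structured Streaming job consuming readings from Kafka and writing
# windowed averages. All names, the schema, and the console sink are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

schema = StructType([
    StructField("site_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "energy-readings")
       .load())

readings = (raw.selectExpr("CAST(value AS STRING) AS json")
            .select(F.from_json("json", schema).alias("r"))
            .select("r.*"))

agg = (readings.withWatermark("ts", "10 minutes")
       .groupBy(F.window("ts", "5 minutes"), "site_id")
       .agg(F.avg("reading").alias("avg_reading")))

query = (agg.writeStream.outputMode("update")
         .format("console")  # replace with a Delta/warehouse sink in practice
         .option("checkpointLocation", "/tmp/checkpoints/energy")
         .start())
query.awaitTermination()
```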

Posted 1 month ago

Apply

3 - 8 years

4 - 9 Lacs

Kolkata

Work from Office


Role Overview: We are seeking a skilled and analytical Climate Risk Analyst to join our sustainability team. In this role, you will apply NGFS and IPCC frameworks to evaluate and quantify both physical and transition climate risks across diverse client portfolios. You'll build probabilistic models, design stress-testing scenarios, and integrate datasets ranging from GIS layers to supply-chain metrics. Partnering with finance, operations, and data engineering teams, you'll translate complex climate trajectories such as carbon pricing, regulatory shifts, and technology adoption into clear, actionable insights and robust risk-management strategies that help our clients enhance resilience and capitalize on emerging opportunities. Key Responsibilities: Conduct climate risk assessments using NGFS and IPCC frameworks to identify client-specific risks and opportunities Develop and execute stress-testing scenarios to evaluate climate impacts on financial portfolios Collect, clean, and integrate diverse datasets (e.g., geospatial layers, historical weather records, supply-chain metrics) Build probabilistic models to quantify hazard frequency, exposure, and vulnerability at the asset and portfolio levels Parameterize transition-risk factors (carbon-pricing trajectories, regulatory changes, technology adoption rates) across multiple climate pathways Collaborate with cross-functional teams to produce comprehensive reports and presentations on climate risk management strategies Translate quantitative findings into clear recommendations for risk mitigation and opportunity capture Required Skills: Strong understanding of physical and transition climate-risk concepts and frameworks (NGFS, IPCC) Proven experience in stress-testing and scenario analysis within a financial or risk-management context Proficiency in statistical and probabilistic modeling techniques (e.g., Monte Carlo simulation) Hands-on experience with data integration and analysis tools (Python/R, SQL, GIS software) Familiarity with cloud-based data platforms and ETL pipelines Excellent quantitative and analytical abilities, with strong attention to detail Exceptional communication skills, able to distill complex analyses into concise, client-ready insights Proven ability to work collaboratively in cross-disciplinary teams Perks and Benefits: Work on a ground-breaking product that significantly contributes to sustainability Exposure to advanced AI tools and methodologies to enhance your development skills and productivity Competitive salary with a comprehensive benefits package Flexible work arrangements A vibrant, inclusive, and supportive team environment Opportunities for professional growth and continuous learning
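To illustrate the probabilistic hazard-frequency, exposure, and vulnerability modeling this role describes, a minimal Monte Carlo sketch follows; every parameter value is invented for demonstration and is not drawn from any real portfolio.

```python
# Monte Carlo sketch of expected annual loss for a single asset:
# hazard frequency ~ Poisson, damage ratio ~ Beta, fixed exposure value.
# All parameter values are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

exposure_value = 5_000_000          # asset value in currency units (assumption)
event_rate = 0.3                    # expected hazard events per year (assumption)
damage_alpha, damage_beta = 2, 18   # Beta-distributed damage ratio (assumption)

annual_losses = np.zeros(n_sims)
n_events = rng.poisson(event_rate, size=n_sims)
for i, k in enumerate(n_events):
    if k:
        damage_ratios = rng.beta(damage_alpha, damage_beta, size=k)
        # cap total damage at full asset value
        annual_losses[i] = exposure_value * min(damage_ratios.sum(), 1.0)

print(f"Expected annual loss: {annual_losses.mean():,.0f}")
print(f"99th percentile (tail) loss: {np.percentile(annual_losses, 99):,.0f}")
```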

Posted 1 month ago

Apply

5 - 9 years

7 - 11 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office


Job Title: Senior Data Engineer (Graph DB Specialist) + Specialist + Global Song. Management Level: 9, Specialist. Location: Kochi, Coimbatore. Must-have skills: Data Modeling Techniques and Methodologies. Good-to-have skills: Proficiency in Python and PySpark programming. Job Summary: We are seeking a highly skilled Data Engineer with expertise in graph databases to join our dynamic team. The ideal candidate will have a strong background in data engineering, graph querying languages, and data modeling, with a keen interest in leveraging cutting-edge technologies like vector databases and LLMs to drive functional objectives. Your responsibilities will include: Design, implement, and maintain ETL pipelines to prepare data for graph-based structures. Develop and optimize graph database solutions using querying languages such as Cypher, SPARQL, or GQL; Neo4j experience is preferred. Build and maintain ontologies and knowledge graphs, ensuring efficient and scalable data modeling. Integrate vector databases and implement similarity search techniques, with a focus on Retrieval-Augmented Generation (RAG) methodologies and GraphRAG. Collaborate with data scientists and engineers to operationalize machine learning models and integrate them with graph databases. Work with Large Language Models (LLMs) to achieve functional and business objectives. Ensure data quality, integrity, and security while delivering robust and scalable solutions. Communicate effectively with stakeholders to understand business requirements and deliver solutions that meet objectives. Roles & Responsibilities: Experience: At least 5 years of hands-on experience in data engineering, with 2 years of experience working with graph databases. Programming & Querying: Advanced knowledge of the Cypher, SPARQL, or GQL querying languages. ETL Processes: Expertise in designing and optimizing ETL processes for graph structures. Data Modeling: Strong skills in creating ontologies and knowledge graphs, and in presenting data for GraphRAG-based solutions. Vector Databases: Understanding of similarity search techniques and RAG implementations. LLMs: Experience working with Large Language Models for functional objectives. Communication: Excellent verbal and written communication skills. Cloud Platforms: Experience with Azure analytics platforms, including Function Apps, Logic Apps, and Azure Data Lake Storage (ADLS). Graph Analytics: Familiarity with graph algorithms and analytics. Agile Methodology: Hands-on experience working in Agile teams and processes. Machine Learning: Understanding of machine learning models and their implementation. Qualifications: Experience: A minimum of 5-10 years of experience is required. Educational Qualification: Any graduation / BE / B.Tech.
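A hedged sketch of the Cypher-over-Neo4j work this role describes, using the official Neo4j Python driver (v5 API); the connection details, node labels, and properties are assumptions for illustration.

```python
# Sketch: loading and querying a small knowledge graph with the Neo4j Python driver.
# URI, credentials, labels, and relationship types are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_compound_target(tx, compound, target):
    tx.run(
        "MERGE (c:Compound {name: $compound}) "
        "MERGE (t:Target {name: $target}) "
        "MERGE (c)-[:INHIBITS]->(t)",
        compound=compound, target=target,
    )

def targets_of(tx, compound):
    result = tx.run(
        "MATCH (c:Compound {name: $compound})-[:INHIBITS]->(t:Target) "
        "RETURN t.name AS target",
        compound=compound,
    )
    return [record["target"] for record in result]

with driver.session() as session:
    session.execute_write(add_compound_target, "aspirin", "COX-1")
    print(session.execute_read(targets_of, "aspirin"))

driver.close()
```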

Posted 1 month ago

Apply

6 - 8 years

25 - 27 Lacs

Noida

Work from Office


We are seeking a highly skilled Python AI/ML professional to join our growing data science and machine learning team. The ideal candidate will have solid experience in developing, deploying, and optimizing AI/ML models using Python and related tools. You will work on real-world problems, build intelligent systems, and contribute to cutting-edge projects across various domains. Key Responsibilities: Design, build, and deploy machine learning models and AI solutions using Python. Clean, preprocess, and analyze large datasets to extract meaningful insights. Implement models using libraries such as Scikit-learn, TensorFlow, PyTorch, or similar frameworks. Build scalable data pipelines and APIs for ML model deployment. Collaborate with data engineers, analysts, and product teams to deliver business-driven AI solutions. Key Skills: Python AI/ML, integration, machine learning model deployment, ML monitoring, ETL pipelines, FastAPI, Flask, cloud platforms, Docker, Kubernetes, Git, and CI/CD tools. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
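A minimal sketch of the model-deployment pattern named in the key skills, serving a trained scikit-learn model behind a FastAPI endpoint; the model file and feature layout are assumptions.

```python
# Sketch: serving a fitted scikit-learn model over HTTP with FastAPI.
# "model.joblib" and the feature vector shape are placeholders.
from typing import List

import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed: a previously trained estimator

class Features(BaseModel):
    values: List[float]  # ordered feature vector

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}

# Run locally with: uvicorn main:app --reload
```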

Posted 1 month ago

Apply

7 - 10 years

15 - 25 Lacs

Bengaluru

Hybrid


Job Title: Python, PySpark. Experience: 8+ years. Location/Workplace: Hyderabad/Bangalore (hybrid mode). Job Type: Full-time. Job Description: 8+ years of experience in data engineering with supply chain analytics. Strong knowledge of data pipelines, data modelling, and metadata management. Experience with data lakes, data warehouses, data hubs, and ETL tools like PySpark and GoldenGate Replication. Languages and technologies: SQL, Python, PySpark, Scala, Spark, SQL/NoSQL databases. Develop and maintain scalable PySpark-based ETL pipelines for big data processing. Optimize Spark jobs through partitioning, caching, and performance tuning techniques. Ensure data quality with validation frameworks and error-handling mechanisms. Work with structured and unstructured data, handling transformations efficiently. Implement CI/CD pipelines for automated data pipeline deployment and monitoring. Relational databases: Experience with managing and optimizing relational databases (e.g., Oracle SQL, PostgreSQL, SQL Server). NoSQL databases: Experience with managing and optimizing NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data. Expertise in testing and deployment of data applications (e.g., provisioning resources, deploying and monitoring workflows and data quality). Expertise in Apache Airflow.
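A small sketch of the validation-framework idea mentioned above: fail-fast data-quality checks inside a PySpark pipeline. The thresholds, key columns, and paths are assumptions for illustration.

```python
# Sketch: simple data-quality gates (empty batch, duplicate keys, null required
# fields) before loading. Columns and paths are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

def validate(df: DataFrame, key_cols: list, not_null_cols: list) -> None:
    """Raise if the batch is empty, has duplicate keys, or nulls in required fields."""
    if df.rdd.isEmpty():
        raise ValueError("Batch is empty")
    dup_count = df.groupBy(*key_cols).count().filter(F.col("count") > 1).count()
    if dup_count:
        raise ValueError(f"{dup_count} duplicate keys found")
    for col in not_null_cols:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls:
            raise ValueError(f"{nulls} null values in required column {col}")

shipments = spark.read.parquet("s3://example-bucket/raw/shipments/")
validate(shipments, key_cols=["shipment_id"], not_null_cols=["shipment_id", "ship_date"])
shipments.write.mode("append").parquet("s3://example-bucket/validated/shipments/")
```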

Posted 1 month ago

Apply

3 - 6 years

7 - 11 Lacs

Hyderabad

Work from Office


Sr Semantic Engineer – Research Data and Analytics What you will do Let’s do this. Let’s change the world. In this vital role you will be part of Research’s Semantic Graph Team, which is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. This role primarily focuses on working with technologies such as RDF, SPARQL, and Python. In addition, the position involves semantic data integration and cloud-based data engineering. The ideal candidate should possess experience in the pharmaceutical or biotech industry, demonstrate deep technical skills, be proficient with big data technologies, and demonstrate experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential for this role. In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively. Roles & Responsibilities: Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies. Develop and maintain semantic data models for biopharma scientific data. Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks. Ensure interoperability across federated data sources, linking relational and graph-based data. Implement and optimize CI/CD pipelines using GitLab and AWS. Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions. Collaborate with global multi-functional teams, including research scientists, Data Architects, Business SMEs, Software Engineers, and Data Scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Collaborate with data scientists, engineers, and domain experts to improve research data accessibility. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain comprehensive documentation of processes, systems, and solutions. Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Doctorate Degree OR Master’s degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor’s degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field. Preferred Qualifications and Experience: 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms). Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices. Cloud and Automation Expertise: Good experience in using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions. Technical Problem-Solving: Excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets. Good-to-Have Skills: Experience in biotech/drug discovery data engineering. Experience applying knowledge graphs, taxonomy, and ontology concepts in life sciences and chemistry domains. Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune). Familiarity with Cypher, GraphQL, or other graph query languages. Experience with big data tools (e.g. Databricks). Experience in biomedical or life sciences research data management. Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
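A hedged sketch, in the spirit of the semantic pipeline work described above, of building and querying a small RDF graph with rdflib; the namespace, triples, and output file name are invented for illustration.

```python
# Sketch: constructing and querying a tiny RDF graph with rdflib, then
# serializing it for downstream loading into a triple store. All URIs are made up.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/research/")
g = Graph()

g.add((EX.compound_1, RDF.type, EX.Compound))
g.add((EX.compound_1, EX.inhibits, EX.target_COX1))
g.add((EX.compound_1, EX.label, Literal("aspirin")))

query = """
PREFIX ex: <http://example.org/research/>
SELECT ?compound ?target
WHERE {
    ?compound a ex:Compound ;
              ex:inhibits ?target .
}
"""
for row in g.query(query):
    print(row.compound, "inhibits", row.target)

# Serialize as Turtle for loading into a triple store such as AllegroGraph
g.serialize(destination="compounds.ttl", format="turtle")
```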

Posted 1 month ago

Apply

2 - 4 years

4 - 7 Lacs

Hyderabad

Work from Office


Associate Data Engineer, Graph – Research Data and Analytics What you will do Let’s do this. Let’s change the world. In this vital role you will be part of Research’s Semantic Graph Team, which is seeking a dedicated and skilled Data Engineer to design, build, and maintain solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance, graph-based data engineering solutions for large scientific datasets and collaborate with Research partners. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, has experience with semantic data modeling and graph databases, and understands data architecture and ETL processes. Roles & Responsibilities: Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions. Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks. Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency. Optimize large datasets for query performance. Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain documentation of processes, systems, and solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Bachelor’s degree and 1 to 3 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience OR Diploma and 4 to 7 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience. Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices. Hands-on experience with big data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning on data processing. Excellent problem-solving skills and the ability to work with large, complex datasets. Good-to-Have Skills: A passion for tackling complex challenges in drug discovery with technology and data. Experience with system administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups. Solid understanding of data modeling, data warehousing, and data integration concepts. Solid experience using RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL). Knowledge of cloud data platforms (AWS preferred). Experience with data visualization tools (e.g. Dash, Plotly, Spotfire). Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming. Experience writing and maintaining user documentation in Confluence. Professional Certifications: Databricks Certified Data Engineer Professional preferred. Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
