279 ELT Jobs - Page 7

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 10.0 years

7 - 8 Lacs

Bengaluru

Work from Office

Snowflake Must-Haves: 3-10 years of relevant Snowflake development experience (development, not migration). Basic and advanced SQL is a must (joins, null handling, performance tuning, and windowing functions such as PARTITION BY and RANK). Working knowledge of architecture, stored procedures, and ETL/ELT/ETLT pipelines. Basic Python scripting. Snowflake features: Time Travel, zero-copy cloning, and data sharing (see the sketch below). Good to have: advanced Python, DBT.
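For candidates brushing up, here is a minimal, hypothetical sketch of the features this posting names: a windowing query, Time Travel, and zero-copy cloning, driven through the snowflake-connector-python package. The account, credentials, and table names are placeholders, not anything specified in the listing.

```python
# Hypothetical illustration of the SQL skills listed above.
import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
    warehouse="COMPUTE_WH", database="SALES", schema="PUBLIC",
)
cur = conn.cursor()

# Windowing: rank each customer's orders by value (PARTITION BY + RANK),
# keeping the top three per customer via Snowflake's QUALIFY clause.
cur.execute("""
    SELECT customer_id, order_id, order_total,
           RANK() OVER (PARTITION BY customer_id ORDER BY order_total DESC) AS rnk
    FROM orders
    QUALIFY rnk <= 3
""")
print(cur.fetchall())

# Time Travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")

# Zero-copy cloning: an instant, storage-free copy for sandbox testing.
cur.execute("CREATE OR REPLACE TABLE orders_sandbox CLONE orders")
conn.close()
```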

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Chennai

Work from Office

Data Engineer, Chennai, India. About the job: The Data Engineer is a cornerstone of Vendasta's R&D team, driving the efficient processing, organization, and delivery of clean, structured data in support of business intelligence and decision-making. By developing and maintaining scalable ELT pipelines, they ensure data reliability and scalability, in keeping with Vendasta's commitment to delivering data solutions aligned with evolving business needs. Your Impact: Design, implement, and maintain scalable ELT pipelines within a Kimball Architecture data warehouse. Ensure robustness against failures and data entry errors, managing data conformation, de-duplication, survivorship, and coercion (see the sketch below). Manage historical and hierarchical data structures, ensuring usability for the Business Intelligence (BI) team and scalability for future growth. Partner with BI teams to prioritize and deliver data solutions while maintaining alignment with business objectives. Work closely with source system owners to extract, clean, and integrate data into the data warehouse. Advocate for and influence improvements in source data integrity. Champion best practices in data engineering, including governance, lineage tracking, and quality assurance. Collaborate with Site Reliability Engineering (SRE) teams to optimize cloud infrastructure usage. Operate within an Agile framework, contributing to team backlogs via Kanban or Scrum processes as appropriate. Balance short-term deliverables with long-term technical investments in collaboration with BI and engineering management. What you bring to the table: 5-8 years of experience with ETL and SQL, plus experience with cloud-based platforms such as Google Cloud (BigQuery, DBT, Looker). In-depth understanding of Kimball data warehousing principles, including the 34 subsystems of ETL. Strong problem-solving skills for diagnosing and resolving data quality issues. Ability to engage with BI teams and source system owners to prioritize and deliver data solutions effectively. Eagerness to advocate for data integrity improvements while respecting the boundaries of data mesh principles. Ability to balance immediate needs with long-term technical investments. Understanding of cloud infrastructure for effective resource management in partnership with SRE teams. About Vendasta: So what do we actually do? Vendasta is a SaaS company made up of a family of global brands, including MatchCraft, Yesware, and Broadly, that builds and sells software and services to help small businesses operate more efficiently as a team, meet more client needs, and provide incredible client experiences. We have offices in Saskatoon, Saskatchewan; Boston; Boca Raton, Florida; and Chennai, India. Perks: Health insurance benefits. Paid time off. Training & career development: professional development plans, leadership workshops, mentorship programs, and more! Free snacks, hot beverages, and catered lunches on Fridays. A culture built on our core values: Drive, Innovation, Respect, and Agility. Night shift premium. Provident Fund.
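As an illustration of the de-duplication and survivorship work described above, here is a hedged sketch using the google-cloud-bigquery client (one of the platforms the posting names). The dataset, table, and column names are hypothetical; the survivorship rule shown (keep the most recently updated record per key) is one common convention, not necessarily Vendasta's.

```python
# A common Kimball-style survivorship pattern in BigQuery:
# one surviving row per natural key, chosen by recency.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

dedup_sql = """
CREATE OR REPLACE TABLE warehouse.dim_customer AS
SELECT * EXCEPT (row_num) FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY customer_key      -- natural key
           ORDER BY updated_at DESC       -- survivorship rule: latest wins
         ) AS row_num
  FROM staging.customers
)
WHERE row_num = 1
"""
client.query(dedup_sql).result()  # blocks until the job finishes
```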

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data. Key Responsibilities: Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks. Develop and manage data models and data warehouse architecture within Snowflake. Create and maintain DBT models for transformation, lineage tracking, and documentation. Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation (see the sketch below). Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements. Ensure data integrity, consistency, and governance across all stages of the data lifecycle. Monitor pipeline performance and implement optimization strategies for queries and storage. Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration. Required Skills and Qualifications: 8+ years of experience in Data Engineering or related roles. Deep expertise in Snowflake: schema design, performance tuning, security, and access controls. Proficiency in Python, particularly for scripting, data transformation, and workflow automation. Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization). Proven experience with DBT for building modular, tested, and documented data pipelines. Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect. Advanced SQL skills with experience handling large and complex data sets. Exposure to cloud platforms such as AWS, Azure, or GCP and their data services. Preferred Qualifications: Experience implementing data quality checks and governance frameworks. Understanding of the modern data stack and CI/CD pipelines for data workflows. Contributions to data engineering best practices, open-source projects, or thought leadership.
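A minimal sketch of the "modular, reusable Python scripts for data ingestion" the posting asks for: small, independently testable steps composed into a pipeline. It assumes the requests, pandas, and snowflake-connector-python packages; the endpoint URL, credentials, and table name are placeholders, and write_pandas expects the target table to already exist unless auto_create_table=True is passed.

```python
# Modular ingestion: extract -> transform -> load, each step testable alone.
import requests
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas


def extract(url: str) -> list[dict]:
    """Pull raw JSON records from a source API."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def transform(records: list[dict]) -> pd.DataFrame:
    """Normalize records, stamp load time, drop duplicate ids."""
    df = pd.DataFrame(records)
    df["LOADED_AT"] = pd.Timestamp.now(tz="UTC")
    return df.drop_duplicates(subset=["id"])


def load(df: pd.DataFrame, table: str) -> None:
    """Bulk-load a DataFrame into an existing Snowflake table."""
    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
        warehouse="COMPUTE_WH", database="RAW", schema="PUBLIC",
    )
    try:
        write_pandas(conn, df, table_name=table)
    finally:
        conn.close()


if __name__ == "__main__":
    load(transform(extract("https://api.example.com/orders")), "RAW_ORDERS")
```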

Posted 1 month ago

Apply

8.0 - 13.0 years

17 - 20 Lacs

Bengaluru

Work from Office

Project description: We are seeking an experienced Senior Project Manager with a strong background in delivering data engineering and Python-based development projects. In this role, you will manage cross-functional teams and lead Agile delivery for high-impact, cloud-based data initiatives. You'll work closely with data engineers, scientists, architects, and business stakeholders to ensure projects are delivered on time, within scope, and aligned with strategic objectives. The ideal candidate combines technical fluency, strong leadership, and Agile delivery expertise in data-centric environments. Responsibilities: Lead and manage data engineering and Python-based development projects, ensuring timely delivery and alignment with business goals. Work closely with data engineers, data scientists, architects, and product owners to gather requirements and define project scope. Translate complex technical requirements into actionable project plans and user stories. Oversee sprint planning, backlog grooming, daily stand-ups, and retrospectives in Agile/Scrum environments. Ensure best practices in Python coding, data pipeline design, and cloud-based data architecture are followed. Identify and mitigate risks, manage dependencies, and escalate issues when needed. Own stakeholder communications, reporting, and documentation of all project artifacts. Track KPIs and delivery metrics to ensure accountability and continuous improvement. Must-have skills: Experience: Minimum 8 years of project management experience, including 3+ years managing data and Python-based development projects. Agile Expertise: Strong experience delivering projects in Agile/Scrum environments with distributed or hybrid teams. Technical Fluency: Solid understanding of Python, data pipelines, and ETL/ELT workflows. Familiarity with cloud platforms such as AWS, Azure, or GCP. Exposure to tools like Airflow, dbt, Spark, Databricks, or Snowflake is a plus. Tools: Proficiency with JIRA, Confluence, Git, and project dashboards (e.g., Power BI, Tableau). Soft Skills: Strong communication, stakeholder management, and leadership skills. Ability to translate between technical and non-technical audiences. Skilled in risk management, prioritization, and delivery tracking. Nice to have: N/A. Languages: English (C1, Advanced). Seniority: Senior.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Kolkata, Hyderabad, Pune

Work from Office

IICS Developer. Job Overview: We are seeking an experienced IICS (Informatica Intelligent Cloud Services) Developer with hands-on experience in the IICS platform. The ideal candidate must have strong knowledge of Snowflake and be proficient in building and managing integrations between different systems and databases. The role will involve working with cloud-based integration solutions, ensuring data flows seamlessly across platforms, and optimizing performance for large-scale data processes. Key Responsibilities: Design, develop, and implement data integration solutions using IICS (Informatica Intelligent Cloud Services). Work with Snowflake data warehouse solutions, including data loading, transformation, and querying. Build, monitor, and maintain efficient data pipelines between cloud-based systems and Snowflake. Troubleshoot and resolve integration issues within the IICS platform and Snowflake. Ensure optimal data processing performance and manage data flow between various cloud applications and databases. Collaborate with data architects, analysts, and stakeholders to gather requirements and design integration solutions. Implement best practices for data governance, security, and data quality within the integration solutions. Perform unit testing and debugging of IICS data integration tasks. Optimize integration workflows to ensure they meet performance and scalability needs. Key Skills: Hands-on experience with IICS (Informatica Intelligent Cloud Services). Strong knowledge of and experience working with Snowflake as a cloud data warehouse. Proficiency in building ETL/ELT workflows, including integration of various data sources into Snowflake. Experience with SQL and writing complex queries for data transformation and manipulation. Familiarity with data integration techniques and best practices for cloud-based platforms. Experience with cloud integration platforms and working with RESTful APIs and other integration protocols. Ability to troubleshoot, optimize, and maintain data pipelines effectively. Knowledge of data governance, security principles, and data quality standards. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). Minimum of 5 years of experience in data integration development. Proficiency in Snowflake and cloud-based data solutions. Strong understanding of ETL/ELT processes and integration design principles. Experience working in Agile or similar development methodologies. Location: Pune, Hyderabad, Kolkata, Jaipur, Chandigarh

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Pune

Work from Office

ELT on Looker reporting. 8+ years of experience with Looker administration, LookML, NDTs, PDTs, schedules, groups, etc. Collaborate with the ELT on Looker reporting. Actively build deep knowledge of Backcountry's root data sources, how the data is created, and how the data is ultimately used; this will include heavy collaboration with Sr. Leaders and individual contributors. Act as a lead for team contractors working in Looker, assigning and reviewing their work and mentoring them on Looker best practices and BC business practices. Design elegant Looker explores for business use cases, aligning data modeling to Sr. Leadership's vision for how they want their teams to report on and measure the business. Coordinate and build enterprise-level reporting, dashboarding, and analytics, with an emphasis on teaching analysts how to work with data. Ensure and recommend opportunities to stabilize and automate reporting so the business can reliably receive and use its reports to drive the business. Evaluate opportunities to acquire new streams of data by building raw data pipelines into our data lake, then exploring that data for potential value. Continually improve existing data by defining entity relationships and data models to improve the accuracy, usability, and security of data. Work closely with business data consumers and creators to understand their data needs and translate those needs into data solutions. Help business users understand data, helping them build reports and new metrics and ideate new ways for data to inform business success.

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Bengaluru

Hybrid

Roles and Responsibilities: The Senior Data Engineer designs and builds data foundations and end-to-end solutions for the Shell Business to maximize value from data. The role helps create data-driven thinking within the organization, not just within IT teams but also in the wider business stakeholder community. A Senior Data Engineer is expected to be a subject matter expert who designs and builds data solutions and mentors junior engineers, and a key driver in converting the vision and data strategy into delivered IT solutions. Key Characteristics: A technology expert who constantly pursues knowledge enhancement and has an inherent curiosity to understand work from multiple dimensions. Deep data focus with expertise in the technology domain. A skilled communicator capable of speaking to both technical developers and business managers. Respected and trusted by leaders and staff. Actively delivers the roll-out and embedding of Data Foundation initiatives in support of key business programs. Coordinates the change management, incident management, and problem management processes. Presents reports and findings to key stakeholders and acts as the subject matter expert on data analysis and design. Drives implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation, and maximize value delivery. Contributes to community-building initiatives like CoE and CoP. Mandatory skills: AWS/Azure/SAP (master); ELT (master); data modeling (master); data integration and ingestion (skill); data manipulation and processing (skill); GitHub (including Actions), Azure DevOps (skill); Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest (skill). An orchestration sketch follows below. Optional skills: Experience in project management, running a scrum team. Experience working with BPC, Planning. Exposure to working with an external technical ecosystem. MkDocs documentation.
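As referenced above, here is a hedged sketch of ELT orchestration with Apache Airflow's TaskFlow API (Airflow 2.4+, where the `schedule` parameter is used); one of the tools the posting lists. The DAG name, task bodies, and storage paths are placeholders.

```python
# A skeletal daily ELT DAG: extract raw files, load to staging, transform.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def elt_pipeline():
    @task
    def extract() -> str:
        # e.g. land raw files from a source system into cloud storage
        return "s3://landing/sales/2024-01-01/"

    @task
    def load(path: str) -> str:
        # e.g. COPY the raw files into a warehouse staging table
        print(f"loading {path} into staging.sales_raw")
        return "staging.sales_raw"

    @task
    def transform(table: str) -> None:
        # e.g. run SQL/dbt models to build curated marts
        print(f"transforming {table} into marts.sales")

    transform(load(extract()))  # dependencies follow the data flow

elt_pipeline()  # instantiating at module level registers the DAG
```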

Posted 1 month ago

Apply

8.0 - 12.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Key Responsibilities: Design and implement scalable data pipelines using ETL/ELT frameworks. Develop and maintain data models and data warehouse architecture using Snowflake. Build and manage DBT (Data Build Tool) models for data transformation and lineage tracking (see the sketch below). Write efficient and reusable Python scripts for data ingestion, transformation, and automation. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Ensure data quality, integrity, and governance across all data platforms. Monitor and optimize the performance of data pipelines and queries. Implement best practices for data engineering, including version control, testing, and CI/CD. Required Skills and Qualifications: 8+ years of experience in data engineering or a related field. Strong expertise in Snowflake, including schema design, performance tuning, and security. Proficiency in Python for data manipulation and automation. Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.). Experience with DBT for data transformation and documentation. Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect). Strong SQL skills and experience with large-scale data sets. Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
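A minimal sketch of automating the DBT runs mentioned above from Python, assuming dbt-core 1.5+ (which exposes the programmatic dbtRunner) and an existing dbt project in the working directory; the model name is hypothetical.

```python
# Run one mart model plus everything upstream of it, then test it;
# "+model" is standard dbt node-selection syntax.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

for args in (["run", "--select", "+fct_customer_orders"],
             ["test", "--select", "fct_customer_orders"]):
    res: dbtRunnerResult = dbt.invoke(args)
    if not res.success:
        raise SystemExit(f"dbt {args[0]} failed")
```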

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 15 Lacs

Bengaluru

Work from Office

Hiring Immediate Joiners / ELT Content Editor (Children's Books) / Remote Job. Dear Candidate, Greetings from Encora Innovation Labs! We are hiring for the role below. If you are interested, please reply with your updated resume and the following details: Full Name. Highest Qualification. Current Location. Preferred Location. Total Experience. Relevant Experience. Current Employer. Current CTC. Expected CTC. Notice Period. Reason for Job Change. Do you have experience editing ELT content? (Yes/No) If yes, how many years? Do you have experience in children's book publishing? (Yes/No) If yes, how many years, and which class/grade? Job details: Position Name: ELT Content Editor. Job Location: Remote. Education: A strong academic background in English, Mass Communication (or Journalism), Education and Pedagogy, English Language Teaching (ELT), or Early Childhood Care and Education (ECCE). Experience: 4-5 years of progressive experience in children's book publishing, ELT teaching, ELT content development, foundational literacy programs, or pedagogy and curriculum development. Content Expertise: Demonstrated ability to expertly assess, curate, and create engaging and effective children's educational content. Language Proficiency: Exceptional written and oral English language and communication skills, with a keen eye for detail and pedagogical accuracy. Technical Aptitude: Proven ability to quickly learn and adapt to technical processes within a fast-paced, technology-driven environment. Collaboration & Initiative: A highly collaborative mindset with a strong sense of initiative, thriving in a dynamic and agile team setting. Tools Proficiency: Proficient in working with Google Workspace tools (e.g., Docs, Sheets, Slides). Preferred Qualifications: High written and spoken proficiency in a second language such as Spanish, Portuguese, Urdu, Arabic, Thai, Bahasa, or Filipino. Experience working with international markets and understanding diverse cultural content needs. Regards, Reshma 9980104317

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset. Working on a range of projects including batch pipelines, data modeling, and data mart solutions, you'll be part of collaborative project teams working to implement robust data collection and processing pipelines to meet specific business needs.

Posted 1 month ago

Apply

7.0 - 10.0 years

27 - 42 Lacs

Pune

Work from Office

Key Responsibilities: Design and/or implement modular, testable, and scalable DBT models aligned with business logic and performance needs. Collaborate with stakeholders to understand existing pipelines and translate them into modern ELT workflows. Implement best practices for version control, CI/CD, testing, and documentation in DBT. Ensure high standards of data and code quality. Required Qualifications: 5+ years of experience in data engineering, with at least 1 year of hands-on experience with DBT. Strong understanding of SQL, data warehousing, and ELT architecture. Familiarity with legacy ETL tools like IBM DataStage and the ability to reverse-engineer existing pipelines. Proficiency in Git, CI/CD pipelines, and DataOps practices. Excellent communication skills and the ability to work independently and collaboratively.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack. Data Architecture: Develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform. Business Intelligence: Design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI. Data Engineering: Work with data engineers to implement ETL/ELT pipelines using Azure Data Factory. Data Governance: Establish and enforce data governance policies and standards. Primary Skills and Experience: 15+ years of relevant experience in data warehousing, BI, and data governance. Proven track record of delivering successful data solutions on the Microsoft stack. Experience working with diverse teams and stakeholders. Required Skills and Experience: Technical Skills: Strong proficiency in data warehousing concepts and methodologies. Expertise in Microsoft Power BI. Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks. Knowledge of SQL and scripting languages (Python, PowerShell). Strong understanding of data modeling and ETL/ELT processes. Secondary Skills: Soft Skills: Excellent communication and interpersonal skills. Strong analytical and problem-solving abilities. Ability to work independently and as part of a team. Strong attention to detail and organizational skills.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Title: Senior Data Engineer - ADF | Snowflake | DBT | Databricks. Experience: 5 to 8 Years. Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid). Job Type: Full Time, Permanent. Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF), Snowflake, DBT, and Azure Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting. Key Responsibilities: Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks. Create parameterized, reusable components within ADF pipelines. Perform data transformations and modeling in Snowflake using DBT. Use Databricks for data processing using PySpark/SQL (see the sketch below). Collaborate with stakeholders to define and implement data solutions. Optimize data workflows for performance, scalability, and cost-efficiency. Ensure data quality, governance, and documentation standards. Mandatory Skills: Azure Data Factory (ADF). Snowflake. DBT (Data Build Tool). Azure Databricks. Strong SQL and data modeling experience. Good-to-Have Skills: Azure Data Lake, Azure Synapse, Blob Storage. CI/CD using Azure DevOps or GitHub. Python scripting, PySpark. Power BI/Tableau integration. Experience with metadata/data governance tools. Role Requirements: Education: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. Certifications: Azure or Snowflake certification is a plus. Strong problem-solving and communication skills. Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD
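As referenced above, here is a hedged sketch of a Databricks-style PySpark transformation: read raw data, apply typed cleanup, aggregate, and write a Delta table. The mount paths, schema, and column names are hypothetical.

```python
# Raw-to-curated PySpark flow of the kind this posting describes.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_transform").getOrCreate()

orders = (
    spark.read.format("json")
    .load("/mnt/raw/orders/")                                   # raw landing zone
    .withColumn("order_ts", F.to_timestamp("order_ts"))          # enforce types
    .withColumn("order_total", F.col("order_total").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])                                # de-duplicate
)

daily = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("order_total").alias("revenue"),
         F.countDistinct("customer_id").alias("customers"))
)

daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_sales")
```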

Posted 1 month ago

Apply

3.0 - 6.0 years

35 - 40 Lacs

Pune, Bengaluru

Work from Office

Client: Our client is a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. Requirements: Our client is looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment. Responsibilities: - Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. - Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. - Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. - Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. - Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. - Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. - Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability. Desired Profile: - Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. - 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. - Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. - Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). - Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. - Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). - Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. - Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift). - Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). - Familiarity with version control systems like Git.

Posted 1 month ago

Apply

3.0 - 5.0 years

20 - 22 Lacs

Udaipur

Work from Office

3-5 years of experience in Data Engineering or similar roles. Strong foundation in cloud-native data infrastructure and scalable architecture design. Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools. Design and optimize Data Lakes and Data Warehouses for real-time and batch processing.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Pune

Work from Office

AZURE DATA ENGINEER. Skills: strong technical experience in Azure, SQL, Azure Data Factory, ETL, and Databricks. Graduation is a must. Experience: 5-10 years. CTC: up to 14-20 LPA. 21st June: F2F interview only (Pune). Contact: 7742324144

Posted 1 month ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Navi Mumbai, Pune

Work from Office

We're Hiring: Data Scientist - Databricks & ML Deployment Expert. Location: Mumbai/Pune. Experience: 3-8 Years. Apply Now! Are you passionate about deploying real-world machine learning solutions? We're looking for a versatile Data Scientist with deep expertise in Databricks, PySpark, and end-to-end ML deployment to drive impactful projects in the Retail and Automotive domains. What You'll Do: Develop scalable ML models (Regression, Classification, Clustering). Deliver advanced use cases like CLV modeling, Predictive Maintenance, and Time Series Forecasting. Design and automate ML workflows on Databricks using PySpark. Build and deploy APIs to serve ML models (Flask, FastAPI, Django); see the sketch below. Own model deployment and monitoring in production environments. Work closely with Data Engineering and DevOps teams for CI/CD integration. Optimize pipelines and model performance (code and infrastructure level). Must-Have Skills: Strong hands-on experience with Databricks and PySpark. Proven track record in ML model development and deployment (minimum two production deployments). Solid grasp of Regression, Classification, Clustering, and Time Series. Proficiency in SQL, workflow automation, and ELT/ETL processes. API development (Flask, FastAPI, Django). CI/CD, deployment automation, and ML pipeline optimization. Familiarity with Medallion Architecture. Domain Expertise: Retail: CLV, Pricing, Demand Forecasting. Automotive: Predictive Maintenance, Time Series. Nice to Have: MLflow, Docker, Kubernetes. Cloud: Azure, AWS, or GCP. If you're excited to build production-ready ML systems that create real business impact, we want to hear from you! Apply now to chaity.mukherjee@celebaltech.com.
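A minimal sketch of the API-serving pattern named above, using FastAPI (with pydantic v2) and a pre-trained scikit-learn classifier saved with joblib; the feature names, endpoint, and artifact path are hypothetical.

```python
# Serve a pickled churn model behind a typed /predict endpoint.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-model")
model = joblib.load("model.joblib")  # hypothetical pre-trained artifact


class Features(BaseModel):
    tenure_months: int
    monthly_spend: float
    support_tickets: int


@app.post("/predict")
def predict(payload: Features) -> dict:
    # Wrap the single record in a DataFrame so column names reach the model.
    X = pd.DataFrame([payload.model_dump()])
    proba = float(model.predict_proba(X)[0, 1])
    return {"churn_probability": proba}

# Run locally with:  uvicorn serve:app --reload
```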

Posted 1 month ago

Apply

8.0 - 13.0 years

12 - 18 Lacs

Hyderabad

Work from Office

Data Engineering Team. As a Lead Data Engineer for India, you will be accountable for leading the technical aspects of product engineering by being hands-on: working on the enhancement, maintenance and support of the product on which your team is working, within your technology area. You will be responsible for your own hands-on coding, providing design thinking and design solutions, ensuring the quality of your team's output, representing your team in product-level technical forums and ensuring your team provides technical input to and aligns with the overall product road-map. How will you make an impact? You will work with Engineers in other technology areas to define the overall technical direction for the product in alignment with the Group's technology roadmap, standards and frameworks; with product owners and business stakeholders to shape the product's delivery roadmap; and with support teams to ensure its smooth operation. You will be accountable for the overall technical quality of the work produced by India being in line with the expectations of stakeholders, clients and Group. You will also be responsible for line management of your team of Engineers, ensuring that they perform to the expected levels and that their career development is fully supported. Key responsibilities: Produce quality code: code follows team standards, is structured to ensure readability and maintainability and goes through review smoothly, even for complex changes; designs respect best practices and are favourably reviewed by peers; critical paths through code are covered by appropriate tests; data updates are monitored and complete within SLA. Produce quality technical design: technical designs follow team and group standards and frameworks, are structured to ensure reusability, extensibility and maintainability and go through review smoothly, even for complex changes; high-level designs and architectures align to the wider technical strategy, presenting reusable APIs where possible and minimizing system dependencies. Operate at a high level of productivity: estimates are consistently challenging but realistic; most tasks are delivered within estimate; complex or larger tasks are delivered autonomously. Squad collaboration: sprint goals are consistently achieved; demonstrate commitment to continuous improvement of squad activities; the product backlog is consistently well-groomed, with a responsible balance of new features and technical debt mitigation; other Engineers in the squad feel supported in their development. People management: direct reports have meaningful objectives recorded in Quantium's Performance Portal and understand how those objectives relate to business strategy; direct reports' career aspirations are understood and documented, with action plans in place to move towards those goals; direct reports have regular catch-ups to discuss performance, career development and their ongoing happiness and engagement in their role; any performance issues are identified, documented and agreed, with realistic remedial plans in place. Key activities: Build technical product/application engineering capability in the team in line with the Group's technical roadmap, standards and frameworks. Write polished code, aligned to team standards, including appropriate unit and integration tests. Review code and test cases produced by others, to ensure changes satisfy the associated business requirement, follow best practices, and integrate with the existing code-base. Provide constructive feedback to other team members on the quality of code and test cases. Collaborate with other Lead/Senior Engineers to produce high-level designs for larger pieces of work. Validate technical designs and estimates produced by other team members. Merge reviewed code into release branches, resolving any conflicts that arise, and periodically deploy updates to production and non-production environments. Troubleshoot production problems and raise/prioritize bug tickets to resolve any issues. Proactively monitor system health and act to report and resolve any issues. Provide out-of-hours support for periodic ETL processes, ensuring SLAs are met. Work with business stakeholders and other leads to define and estimate new epics. Contribute to backlog refinement sessions, helping to break down each epic into a collection of smaller user stories that will deliver the overall feature. Work closely with Product Owners to ensure the product backlog is prioritized to maximize business value and manage technical debt. Lead work breakdown sessions to define the technical tasks required to implement each user story. Contribute to sprint planning sessions, ensuring the team takes a realistic but challenging amount of work into each sprint and each team member will be productively occupied. Contribute to the team's daily stand-up, highlighting any delays or impediments to progress and proposing mitigation for those issues. Contribute to sprint review and sprint retro sessions, to maintain a culture of continuous improvement within the team. Coach and mentor more junior Engineers to support their continuing development. Set and periodically review delivery and development objectives for direct reports. Identify each direct report's longer-term career objectives and, as far as possible, factor these into work assignments. Hold fortnightly catch-ups with direct reports to review progress against objectives, assess engagement and give them the opportunity to raise concerns about the product or team. Work through the annual performance review process for all team members. Conduct technical interviews as necessary to recruit new Engineers. The superpowers you'll be bringing to the team: 8+ years of experience designing, developing, and implementing end-to-end data solutions (storage, integration, processing, access) in Google Cloud Platform (GCP) or similar cloud platforms. Strong experience with SQL. Values delivering high-quality, peer-reviewed, well-tested code. Ability to create ETL/ELT pipelines that transform and process terabytes of structured and unstructured data in real time. Knowledge of DevOps functions and the ability to contribute to CI/CD pipelines. Strong knowledge of data warehousing and data modelling, including techniques like dimensional modelling. Strong hands-on experience with BigQuery/Snowflake, Airflow/Argo, Dataflow, Data Catalog, Vertex AI, Pub/Sub, etc., or equivalent products in other cloud platforms. Solid grip on programming languages like Python or Scala. Hands-on experience manipulating Spark at scale, with true in-depth knowledge of the Spark API. Experience working with stakeholders, and experience mentoring juniors in the team, is good to have. Recognized as a go-to person for high-level designs and estimations. Experience working with source control tools (Git preferred) with a good understanding of branching and merging strategies. Experience in Kubernetes and Azure will be an advantage. Understanding of GNU/Linux systems and Bash scripting. Bachelor's degree in Computer Science, Information Technology or a related discipline. Comfortable working in a fast-moving, agile development environment. Excellent problem-solving and analytical skills. Good written and verbal communication skills. Commercially aware, with the ability to work with a diverse range of stakeholders. Enthusiasm for coaching and mentoring junior engineers. Experience in leading teams, including line management responsibilities.

Posted 1 month ago

Apply

12.0 - 20.0 years

35 - 50 Lacs

Bengaluru

Hybrid

Data Architect with Cloud Expertise: Data Architecture, Data Integration & Data Engineering. ETL/ELT: Talend, Informatica, Apache NiFi. Big Data: Hadoop, Spark. Cloud platforms: AWS, Azure, GCP; Redshift, BigQuery. Languages: Python, SQL, Scala. Compliance: GDPR, CCPA

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Pune

Work from Office

Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office. Qualifications: B.E./B.Tech in Computer Science, IT, or a related discipline; MCS/MCA or equivalent preferred. Key Responsibilities: Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub. Define standards and best practices for data ingestion, transformation, and storage. Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines. Lead Snowflake environment setup, configuration, performance tuning, and optimization. Integrate Azure Data Services with Snowflake to support diverse business use cases. Implement governance, metadata management, and security policies. Mentor junior developers and data engineers on cloud data technologies and best practices. Experience and Skills Required: 5-9 years of overall experience in data architecture or data engineering roles. Strong, hands-on expertise in Snowflake, including design, development, and performance tuning. Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.). Understanding of cloud data integration techniques and ELT/ETL frameworks. Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory. Proven ability to handle structured, semi-structured, and unstructured data. Strong analytical, problem-solving, and communication skills. Nice to Have: Certifications in Snowflake and/or Microsoft Azure. Experience with CI/CD tools like GitHub for code versioning and deployment. Familiarity with real-time or near-real-time data ingestion. Why Join Diacto Technologies? Work with a cutting-edge tech stack and cloud-native architectures. Be part of a data-driven culture with opportunities for continuous learning. Collaborate with industry experts and build transformative data solutions.

Posted 1 month ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Hyderabad, Bengaluru

Hybrid

Job Summary: We are seeking a skilled Azure Data Engineer with 4+ years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (a must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities: Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills: 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake; see the sketch below). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficiency in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
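As referenced above, here is a hedged sketch of a Delta Lake upsert (MERGE), a core Azure Databricks pattern implied by the posting's Delta Lake requirement. The mount paths and join key are hypothetical, and the SparkSession is normally provided for you on Databricks.

```python
# Upsert a landing-zone extract into a curated Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

updates = spark.read.format("parquet").load("/mnt/landing/customers/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # update records that changed
 .whenNotMatchedInsertAll()   # insert records that are new
 .execute())
```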

Posted 1 month ago

Apply

10.0 - 15.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Novo Nordisk Global Business Services (GBS) India. Department: Global Data & Artificial Intelligence. Are you passionate about building scalable data pipelines and optimising data workflows? Do you want to work at the forefront of data engineering, collaborating with cross-functional teams to drive innovation? If so, we are looking for a talented Data Engineer to join our Global Data & AI team at Novo Nordisk. Read on and apply today for a life-changing career! The Position: As a Senior Data Engineer, you will play a key role in designing, developing, and maintaining data pipelines and integration solutions to support analytics, Artificial Intelligence workflows, and business intelligence. This includes: Design, implement, and maintain scalable data pipelines and integration solutions aligned with the overall data architecture and strategy. Implement data transformation workflows using modern ETL/ELT approaches, while establishing best practices for data engineering, including testing methodologies and documentation. Optimize data workflows by harmonizing and securely transferring data across systems, while collaborating with stakeholders to deliver high-performance solutions for analytics and Artificial Intelligence. Monitor and maintain data systems to ensure their reliability. Support data governance by ensuring data quality and consistency, while contributing to architectural decisions shaping the data platform's future. Mentor junior engineers and foster a culture of engineering excellence. Qualifications: Bachelor's or Master's degree in Computer Science, Software Development, or Engineering. Over 10 years of overall professional experience, including more than 4 years of specialized expertise in data engineering. Experience in developing production-grade data pipelines using Python, Databricks, and the Azure cloud, with a strong foundation in software engineering principles. Experience in the clinical data domain, with knowledge of standards such as CDISC SDTM and ADaM (good to have). Experience working in a regulated industry (good to have). About the department: You will be part of the Global Data & AI team. Our department is globally distributed, and its mission is to harness the power of Data and Artificial Intelligence, integrating it seamlessly into the fabric of Novo Nordisk's operations. We serve as the vital link, weaving together the realms of Data and Artificial Intelligence throughout the whole organization, empowering Novo Nordisk to realize its strategic ambitions through our pivotal initiatives. The atmosphere is fast-paced and dynamic, with a strong focus on collaboration and innovation. We work closely with various business domains to create actionable insights and drive commercial excellence.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Navi Mumbai

Work from Office

Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans. Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and compliant with industry best practices. Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security. Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems. Responsibilities: Experience in data architecture and engineering. Proven expertise with the Snowflake data platform. Strong understanding of ETL/ELT processes and data integration. Experience with data modeling and data warehousing concepts. Familiarity with performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Cloud & Data Architecture: AWS, Snowflake. ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions. Big Data & Analytics: Athena, Presto, Hadoop. Database & Storage: SQL, SnowSQL. Security & Compliance: IAM, KMS, data masking (see the sketch below). Preferred technical and professional experience: Cloud Data Warehousing: Snowflake (data modeling, query optimization). Data Transformation: DBT (Data Build Tool) for ELT pipeline management. Metadata & Data Governance: Alation (data catalog, lineage, governance).
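A hedged sketch of the data-masking item above: creating and applying a Snowflake dynamic masking policy through the Python connector. The role, database, table, and column names are hypothetical placeholders.

```python
# Mask PII (email) for every role except a privileged analyst role.
import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
)
cur = conn.cursor()

# Only ANALYST_FULL sees raw values; everyone else gets a masked string.
cur.execute("""
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'ANALYST_FULL' THEN val
       ELSE '***MASKED***' END
""")
cur.execute("""
ALTER TABLE crm.public.customers
  MODIFY COLUMN email SET MASKING POLICY email_mask
""")
conn.close()
```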

Posted 1 month ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune

Work from Office

Ciklum is looking for a Senior Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As a Senior Data Engineer, become part of a cross-functional development team engineering the experiences of tomorrow. This role demands deep technical expertise, hands-on development experience, and the ability to mentor team members while collaborating across engineering and business units. You'll work on high-performance data pipelines, microservices, and analytics solutions that directly support mission-critical systems. Responsibilities: Design, develop, and maintain scalable data pipelines using PL/SQL, Oracle, MongoDB, and related technologies. Build data microservices using Java, Spring Boot, and containerization (Docker/Kubernetes). Develop, test, and deploy ETL/ELT processes to support real-time and batch data flows. Work with tools like OBIEE, ODI, and Oracle APEX to deliver reporting, dashboards, and data visualization solutions. Optimize data processing performance and implement best practices for data reliability, scalability, and integrity. Collaborate with cross-functional teams to define data architecture, modeling, and integration strategies. Participate in code reviews, troubleshooting, and tuning of high-volume transactional data systems. Contribute to Agile development practices under the SAFe framework, including PI planning, system demos, and retrospectives. Act as a mentor and technical guide for mid- and junior-level engineers, fostering knowledge sharing and continuous improvement. Requirements: 7+ years of experience in data engineering within large-scale, enterprise environments. Strong hands-on experience with PL/SQL, Oracle DB, and MongoDB; Java, Spring Boot, and microservices; ETL/ELT frameworks; and CI/CD pipelines and DevOps best practices. Experience working with Oracle Business Intelligence tools (OBIEE, ODI, APEX). Proficiency in data modeling, data optimization techniques, and performance tuning. Solid understanding of data lifecycle management, data security, and governance principles. Experience with cloud platforms such as AWS or Azure for data storage and processing. Knowledge of data visualization tools such as Power BI or Tableau. Excellent analytical, communication, and documentation skills. Ability to lead initiatives, mentor team members, and collaborate in cross-functional settings. Desirable: Familiarity with Kafka, event-driven architecture, or streaming platforms. Working experience in SAFe Agile or other scaled Agile delivery models. What's in it for you? Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation. Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications. Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally. Flexibility: hybrid work mode at Chennai or Pune. Opportunities: we value our specialists and always find the best options for them.
Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential. Global impact: work on large-scale projects that redefine industries with international and fast-growing clients. Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events. About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together. Interested already? We would love to get to know you! Submit your application. Can't wait to see you at Ciklum.

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a skilled Azure Data Engineer with 4+ years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (a must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities: Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills: 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficiency in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
