
3866 Databricks Jobs - Page 12

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Role Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes on a data platform built with Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools such as ADF and Azure Databricks, and requires strong SQL skills.

Responsibilities:
- Develop, test, and optimize ETL workflows and maintain documentation; ETL development experience on the Microsoft data stack is required (a minimal sketch follows this listing).
- Work with business teams to translate business requirements into technical requirements.
- Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.

Mandatory skill sets:
- Strong proficiency in Azure Databricks, including Spark and Delta Lake.
- Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
- Proficiency in data integration, ETL processes, and T-SQL.
- Experience with Python for data engineering.
- Experience with Postgres databases.
- Experience with graph databases.
- Experience in architecture design and data modelling.

Good-to-have skill sets:
- Unity Catalog / Purview.
- Familiarity with Fabric/Snowflake service offerings.
- Visualization tools – Power BI.

Preferred skill sets: Hands-on knowledge of Python, PySpark, and strong SQL skills. ETL and data warehousing experience is a must. Relevant certifications (any one, e.g., Databricks Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, Azure Solution Architect) are mandatory.

Years of experience required: 5+ years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study required: Bachelor of Engineering
Required Skills: Data Engineering
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
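For illustration only, below is a minimal PySpark sketch of the kind of ingestion step this role describes: a Databricks job (for example, triggered by ADF) landing files from ADLS into a Delta table with a basic quality rule. The storage account, container paths, and column names are placeholders, not details from the posting.

```python
# Minimal sketch: land raw CSV files from ADLS into a Delta table with a
# simple quality check. Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

cleaned = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])                # basic data-quality rule
       .filter(F.col("order_id").isNotNull())
)

(
    cleaned.write.format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("abfss://curated@examplestorage.dfs.core.windows.net/orders_delta/")
)
```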

Posted 3 days ago

Apply

10.0 years

0 Lacs

India

On-site

Linkedin logo

Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera is committed to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, you can select your main way of working, whether from home, one of our offices or hubs, or a co-working space near you.

Overview: At Coursera, our Data team is helping to build the future of education through data-driven decision making and data-powered products. We drive marketing, product, and business strategy by ensuring high-quality data, measurement, experimentation, and causal inference. We are seeking a dynamic, collaborative, and experienced Senior Manager to lead our Data Science team, responsible for KPI reporting and business analytics for business teams and functions. Reporting to the Director of Data Science, this role will play a pivotal part in shaping strategies and initiatives that enable our leaders, stakeholders, and partners with reliable data, automation, and analytics at Coursera to drive business growth.

Responsibilities:
- Hire, mentor, and lead a team of data scientists, fostering a culture of continuous learning and integration of analytics into business teams to enhance our online learning platform.
- Collaborate with stakeholders to understand requirements, develop technical roadmaps, and identify opportunities for automation and efficiency, ensuring the delivery of impactful data products.
- Develop scalable data solutions by building reliable data layers, improving self-service capabilities, and driving data-driven decision-making across the organization.
- Conduct deep-dive analyses and hypothesis testing in partnership with teams and stakeholders to uncover actionable insights and improve business outcomes.
- Build forecasting models to identify patterns, predict future demand, and optimize resource allocation, enabling informed strategic decision-making (a toy example follows this listing).
- Influence strategic direction by partnering with senior leadership, providing data-driven recommendations, and contributing to the company's long-term vision.
- Drive innovation and skill development by staying updated on industry trends, emerging technologies, and best practices in business intelligence and analytics.
- Bridge technical and non-technical teams, ensuring clear communication, aligning priorities, and fostering strong, collaborative relationships to create a positive and productive work environment.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Mathematics, Statistics, or a related field; Master's degree preferred.
- 10+ years of experience in data analytics, business intelligence, or related roles, with 4+ years of managerial experience.
- Proficiency in programming languages such as SQL, Python, or R, and experience with relevant data science libraries and frameworks.
- Hands-on experience with data visualization tools such as Looker, Tableau, or Amplitude.
- Strong understanding of data warehousing concepts and experience working with cloud-based data platforms (e.g., Databricks, AWS, Google Cloud).
- Excellent communication skills, with the ability to effectively explain technical concepts to non-technical stakeholders.
- Proven leadership abilities, with a track record of successfully managing teams and driving projects to completion.
- Strong analytical and problem-solving skills, with the ability to translate data into actionable insights and recommendations.

Preferred Qualifications:
- Experience at an EdTech or content subscription business.
- Experience working with Salesforce data.
- Experience with Airflow and Databricks.
- Experience with project management tools such as JIRA and Confluence.

If this opportunity interests you, you might like these courses on Coursera: Go Beyond the Numbers: Translate Data into Insights; Applied AI with DeepLearning; Probability & Statistics for Machine Learning & Data Science.

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California candidates, please review our CCPA Applicant Notice. For our global candidates, please review our GDPR Recruitment Notice.
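As a toy illustration of the demand-forecasting responsibility mentioned above, the sketch below computes a naive trailing-average forecast in pandas. The dataset, column names, and window size are invented for the example and are not Coursera's.

```python
# Naive baseline forecast: each week's forecast is the mean of the previous
# four weeks of signups. Data and column names are placeholders.
import pandas as pd

enrollments = pd.DataFrame(
    {"week": pd.date_range("2025-01-05", periods=8, freq="W"),
     "signups": [1200, 1350, 1280, 1500, 1620, 1580, 1710, 1800]}
)

enrollments["forecast"] = (
    enrollments["signups"].rolling(window=4).mean().shift(1)
)
print(enrollments.tail(3))
```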

Posted 3 days ago

Apply

10.0 years

15 - 17 Lacs

India

Remote

Linkedin logo

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company: A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities:
- Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
- Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
- Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
- Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
- Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times (illustrated in the sketch after this listing).
- Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have:
- 6–10 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
- Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
- Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
- Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
- Strong problem-solving skills, a DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers.
Preferred:
- Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering or a related field.
- Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
- Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights:
- Remote-first and flexible hours with 25+ PTO days and comprehensive health cover.
- Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
- Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, agile methodologies, performance tuning, data, AWS, Airflow
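As a rough illustration of the Spark/Delta tuning work this role describes, the sketch below partitions a Delta table by date so downstream queries can prune partitions. Paths and column names are placeholders, and this is one common approach rather than a prescribed method.

```python
# Partition a large fact table by date and repartition before the write to
# avoid many small files; a filter on the partition column then prunes reads.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tune_events").getOrCreate()

events = spark.read.format("delta").load("/mnt/lake/raw/events")

(
    events.withColumn("event_date", F.to_date("event_ts"))
          .repartition("event_date")           # fewer, larger files per partition
          .write.format("delta")
          .mode("overwrite")
          .partitionBy("event_date")
          .save("/mnt/lake/curated/events")
)

# A filter on the partition column lets Spark skip irrelevant partitions.
daily = (
    spark.read.format("delta").load("/mnt/lake/curated/events")
         .filter(F.col("event_date") == "2025-06-01")
         .groupBy("country").count()
)
daily.show()
```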

Posted 3 days ago

Apply


0 years

10 - 12 Lacs

India

Remote

Linkedin logo

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company: Operating at the forefront of cloud analytics, big-data platform engineering, and enterprise AI, our teams design mission-critical data infrastructure for global clients across finance, retail, telecom, and emerging tech. We build distributed ingestion pipelines on Azure and Databricks, unlock real-time insights with Spark/Kafka, and automate delivery through modern DevOps so businesses can act on high-fidelity data, fast.

Role & Responsibilities:
- Engineer robust data pipelines: build scalable batch and streaming workflows with Apache Spark, Kafka, and Azure Data Factory/Databricks.
- Implement Delta Lakehouse layers: design a bronze-silver-gold medallion architecture to guarantee data quality and lineage (see the sketch after this listing).
- Automate CI/CD for ingestion: create Git-based workflows, containerized builds, and automated testing to ship reliable code.
- Craft clean, test-driven Python: develop modular PySpark/Pandas services, enforce SOLID principles, and maintain Git-versioned repos.
- Optimize performance and reliability: profile jobs, tune clusters, and ensure SLAs for throughput, latency, and cost.
- Collaborate in Agile squads: partner with engineers, analysts, and consultants to translate business questions into data solutions.

Skills & Qualifications
Must-Have:
- 1–2 years hands-on with Apache Spark or Kafka and Python (PySpark/Pandas/Polars).
- Experience building Delta Lake / medallion architectures on Azure or Databricks.
- Proven ability to design event-driven pipelines and write unit/integration tests.
- Git-centric workflow knowledge plus CI/CD tooling (GitHub Actions, Azure DevOps).
Preferred:
- Exposure to SQL/relational and NoSQL stores and hybrid lakehouse integrations.
- STEM/computer science degree or an equivalent foundation in algorithms and OOP.

Benefits & Culture Highlights:
- Flexible, remote-first teams: outcome-driven culture with quarterly hackathons and dedicated learning budgets.
- Growth runway: clear promotion paths from Associate to Senior Engineer, backed by certified Azure & Databricks training.
- Inclusive collaboration: small, empowered Agile squads that value knowledge-sharing, mentorship, and transparent feedback.

Skills: modern JavaScript, cloud, vector databases, Angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), data, feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
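The bronze-silver-gold flow referenced in the responsibilities can be sketched as below, assuming PySpark with Delta Lake; all paths, schemas, and quality rules are illustrative placeholders rather than the employer's actual design.

```python
# Medallion sketch: bronze lands raw data, silver enforces quality, gold serves
# a business-level aggregate. Paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: land raw events as-is, keeping an ingestion timestamp for lineage.
bronze = (
    spark.read.json("/mnt/landing/events/")
         .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/lake/bronze/events")

# Silver: enforce types, drop duplicates and obviously bad records.
silver = (
    spark.read.format("delta").load("/mnt/lake/bronze/events")
         .withColumn("event_ts", F.to_timestamp("event_ts"))
         .dropDuplicates(["event_id"])
         .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")

# Gold: aggregate ready for BI and ML consumers.
gold = silver.groupBy(F.to_date("event_ts").alias("event_date"), "event_type").count()
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_event_counts")
```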

Posted 3 days ago

Apply


10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

We are seeking an experienced PMT Technical Project Manager with a minimum of 10 years of experience in managing data projects. The ideal candidate will have a strong background in data management, MDM, Databricks, Azure, and lakeform, along with a sound understanding of data domains.

As a PMT Project Manager, you will be responsible for overseeing and coordinating all aspects of data projects, including planning, budgeting, and execution. You will work closely with cross-functional teams to ensure the successful delivery of projects on time and within budget. The ideal candidate will have excellent leadership and communication skills, as well as a proven track record of successfully managing data projects. This is a great opportunity for a driven and experienced professional to join our team and make a significant impact in the data industry.

The total work experience required for this position is 12 years, and the work mode is work from the Virtusa office.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Greater Madurai Area

On-site

Linkedin logo

Job Requirements

Role Description: As a BI Developer, you will be responsible for transforming raw data into actionable insights that drive business decisions. As part of the BI and Reporting team, you will work closely with the Data Operations team, database administrators, data business partners, and business stakeholders to develop data analytics solutions, create interactive reports, and optimize BI workflows using SQL, Python, Databricks, Power BI, and Tableau. Your expertise in data modelling, visualization, and reporting will be crucial in shaping data-driven strategies.

Key Responsibilities:
- Develop data models and interactive dashboards using Power BI, Tableau, and automated reporting to track key performance indicators (KPIs) relevant to business functions.
- Write complex SQL queries, leverage Python for data manipulation and predictive analytics, and optimize ETL processes for efficient data handling across multiple domains (a small query sketch follows this listing).
- Work with Databricks for large-scale data processing and implement AWS/Azure-based cloud solutions, ensuring scalability and performance of BI applications.
- Maintain data accuracy, consistency, and security across platforms, ensuring high-quality BI applications tailored to SCM, finance, sales, and marketing needs.
- Partner with business teams, communicate complex findings effectively to non-technical stakeholders, and drive a data-centric culture across departments.

Required Skills & Qualifications:
- Education: Bachelor's degree in Data Science, Computer Science, Business Analytics, or a related field.
- Experience: 3+ years in BI, data analytics, or reporting roles.
- Technical expertise: SQL (strong proficiency in writing queries and optimizing databases), Python (data manipulation and automation), Databricks (hands-on experience with cloud-based data processing), visualization tools (Power BI, Tableau).
- Soft skills: strong analytical thinking and problem-solving abilities; excellent communication and stakeholder management skills; ability to translate business needs into technical solutions.
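For context, a KPI extraction of the kind this role describes might look like the following Spark SQL sketch on Databricks; the table, columns, and date filter are assumptions made for the example.

```python
# KPI-style aggregation via Spark SQL, then exported for a BI tool.
# Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kpi_report").getOrCreate()

monthly_sales = spark.sql("""
    SELECT date_trunc('month', order_date)      AS month,
           region,
           SUM(amount)                          AS revenue,
           COUNT(DISTINCT customer_id)          AS active_customers
    FROM   sales.orders
    WHERE  order_date >= '2025-01-01'
    GROUP  BY 1, 2
    ORDER  BY 1, 2
""")

# Convert to pandas for a quick export to a BI-friendly format such as CSV.
monthly_sales.toPandas().to_csv("/tmp/monthly_sales_kpi.csv", index=False)
```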

Posted 3 days ago

Apply

8.0 - 12.0 years

14 - 24 Lacs

Pune

Work from Office

Naukri logo

Role & Responsibilities

Experience: 8-10 years in the Data and Analytics domain with expertise in the Microsoft data tech stack.
Leadership: Experience in managing teams of 8-10 members.
Technical Skills: Expertise in tools like Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory, Power BI, SQL Server, Azure Databricks, etc. Strong understanding of data architecture, pipelines, and governance. Understanding of another data platform such as Snowflake, Google BigQuery, or Amazon Redshift is a plus and a good-to-have skill.
Tech stack: DBT and Databricks or Snowflake; Microsoft BI - Power BI, Synapse and Fabric.
Project Management: Proficiency in project management methodologies (Agile, Scrum, or Waterfall).

Key Responsibilities:
- Project Delivery & Management: Be involved in project delivery, help define the project plan, and ensure timelines are met. Maintain quality control and ensure client satisfaction at all stages.
- Team Leadership & Mentorship: Lead, mentor, and manage a team of 5 to 8 professionals. Conduct performance evaluations and provide opportunities for skill enhancement. Foster a collaborative and high-performance work environment.
- Client Engagement: Act as the primary point of contact on the technical front. Understand client needs and ensure expectations are met or exceeded. Conduct bi-weekly and monthly project reviews with the customer.
- Technical Expertise & Innovation: Stay updated with the latest trends in Microsoft data technologies (Microsoft Fabric, Azure Synapse, Power BI, SQL Server, Azure Data Factory, etc.). Provide technical guidance and support to the team.

Regards,
Ruchita Shete
Busisol Sourcing Pvt. Ltd.
Tel No: 7738389588
Email id: ruchita@busisol.net

Posted 3 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do:
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources.
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation (a validation sketch follows this listing).
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need:
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python, and PySpark.
- Good experience with modern data storage concepts: data lake, lakehouse.
- Experience with other cloud services (AWS) and data processing technologies is an added advantage.
- Ability to enhance, develop, and resolve defects in ETL processes using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.

What Would Be Nice To Have:
- AWS ETL platform: Glue, S3.
- One or more programming languages such as Java or .NET.
- Experience in the US healthcare domain and insurance claim processing.

What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse: Guidehouse is an Equal Opportunity Employer – Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
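As an illustrative sketch of the ingestion-and-validation framework mentioned in the responsibilities, the snippet below splits a PySpark batch into valid and quarantined records before loading; the schema, rules, and paths are placeholders, not Guidehouse specifics.

```python
# Simple row-level validation: records failing the rules go to a quarantine
# table instead of the curated zone. Columns and rules are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_validation").getOrCreate()

claims = spark.read.option("header", True).csv("/mnt/landing/claims/*.csv")

rules = (
    F.col("claim_id").isNotNull()
    & F.col("amount").cast("double").isNotNull()
    & (F.col("amount").cast("double") >= 0)
)

valid = claims.filter(rules)
rejected = claims.filter(~rules).withColumn("_rejected_at", F.current_timestamp())

valid.write.format("delta").mode("append").save("/mnt/lake/validated/claims")
rejected.write.format("delta").mode("append").save("/mnt/lake/quarantine/claims")

print(f"valid={valid.count()}, rejected={rejected.count()}")
```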

Posted 4 days ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Role Description:
- Production monitoring and troubleshooting in on-prem ETL and AWS environments.
- Working experience using ETL (DataStage) along with DB2.
- Awareness of tools such as Dynatrace, AppDynamics, Postman, and AWS CI/CD.
- Software code development experience in ETL batch processing and the AWS cloud.
- Software code management, repository updates and reuse.
- Implementation and/or configuration, management, and maintenance of software.
- Implementation and configuration of SaaS and public, private and hybrid cloud-based PaaS solutions.
- Integration of SaaS and PaaS solutions with Data Warehouse Application Systems, including SaaS and PaaS upgrade management.
- Configuration, maintenance and support for the entire DWA Application Systems landscape, including but not limited to supporting DWA Application Systems components and tasks required to deliver business processes and functionality (e.g., logical layers of databases, data marts, logical and physical data warehouses, middleware, interfaces, shell scripts, massive data transfer and uploads, web development, mobile app development, web services and APIs).
- DWA Application Systems support for day-to-day changes and business continuity, and for addressing key business, regulatory, legal or fiscal requirements.
- Support for all third-party specialized DWA Application Systems.
- DWA Application Systems configuration and collaboration with the infrastructure service supplier required to provide application access to external/third parties.
- Integration with internal and external systems (e.g., direct application interfaces, logical middleware configuration and application program interface (API) use and development).
- Collaboration with third-party suppliers such as the infrastructure service supplier and enterprise public cloud providers.
- Documentation and end-user training of new functionality.
- All activities required to support business process application functionality and to deliver the required application and business functions to end users in an integrated service delivery model across the DWA application development lifecycle (e.g., plan, deliver, run).
- Maintain data quality and run batch schedules; operations and maintenance.
- Deploy code to all environments (Prod, UAT, Performance, SIT, etc.).
- Address all open tickets within the SLA.
- CDK (TypeScript), CFT (YAML).

Nice to have: GitHub, scripting (Bash/sh), a security-minded approach with known best practices, Python, Databricks & Snowflake.

Skills: Databricks, DataStage, CloudOps, production support

Posted 4 days ago

Apply

10.0 years

20 - 25 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

- 7–10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
- Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
- Primary language: Python; strong skills in Spark SQL.
- Deep understanding of Lakehouse architecture, Delta Lake, Parquet, Iceberg.
- Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization (a small maintenance sketch follows this listing).
- Experience with Databricks native monitoring tools and Datadog integration.
- Security and compliance expertise across data governance and infrastructure layers.
- Experience with CI/CD automation using Terraform, CloudFormation, and Git.
- Hands-on experience with disaster recovery and multi-region architecture.
- Strong problem-solving, debugging, and documentation skills.

Skills: S3, pipeline engineering, Lakehouse architecture, AWS, Lambda, IAM, Parquet, Delta Lake, Datadog, Spark SQL, CI/CD automation, runtime upgrades, Terraform, Kinesis, Glue, platform governance, Python, CloudWatch, Databricks, observability, Git, Iceberg, Apache Spark, Databricks Workflows, CloudFormation, Unity Catalog, disaster recovery, EMR
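As a small, hedged example of the cost and performance optimization work listed above, the following snippet compacts and Z-orders a Delta table; it assumes a Databricks (or compatible) Delta runtime, and the table path and column are placeholders.

```python
# Routine Delta maintenance: compact small files and co-locate rows that are
# usually filtered together, then clean up unreferenced files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_maintenance").getOrCreate()

table_path = "s3://example-lake/curated/transactions"   # placeholder location

# OPTIMIZE/ZORDER are available on Databricks Delta runtimes.
spark.sql(f"OPTIMIZE delta.`{table_path}` ZORDER BY (customer_id)")

# Remove files no longer referenced by the table (default retention applies).
spark.sql(f"VACUUM delta.`{table_path}`")
```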

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Greetings from TCS! TCS is hiring for Databricks / Python + PySpark.

Required Skill Set: Python, SQL, PySpark
Desired Experience Range: 5 to 8 Years
Job Location: Gurgaon

Must-Have:
- Actively engage in the development, testing, deployment, operation, monitoring and refinement of data services.
- Manage incidents/problems, apply fixes and resolve systemic issues; triage issues with stakeholders and identify and implement solutions to restore productivity.
- Experience with the design, build and implementation of data engineering pipelines using SQL, Python, and Databricks (or Snowflake).
- Experience with data solutions in the cloud (preferably AWS) as well as on-premises assets like Oracle.
- Experience with PySpark is desirable.
- Good experience with stored procedures.

HR Recruitment
Anshika Varma

Posted 4 days ago

Apply

3.0 - 6.0 years

25 - 32 Lacs

Hyderabad

Work from Office

Naukri logo

Senior Data Scientist - Gen AI

Experience: 3 - 6 Years
Salary: INR 25-33 Lacs per annum
Preferred Notice Period: Within 30 Days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Onsite (Hyderabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients)
Must-have skills: Gen AI
Good-to-have skills: MDM platforms

Blend360 (one of Uplers' clients) is looking for a Senior Data Scientist - Gen AI who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview: We are hiring a Senior Data Scientist (Generative AI) to spearhead the development of advanced AI-powered classification and matching systems on Databricks. You will contribute to flagship programs like the Diageo AI POC by building RAG pipelines, deploying agentic AI workflows, and scaling LLM-based solutions for high-precision entity matching and MDM modernization.

Key Responsibilities:
- Design and implement end-to-end AI pipelines for product classification, fuzzy matching, and deduplication using LLMs, RAG, and Databricks-native workflows.
- Develop scalable, reproducible AI solutions within Databricks notebooks and job clusters, leveraging Delta Lake, MLflow, and Unity Catalog.
- Engineer Retrieval-Augmented Generation (RAG) workflows using vector search and integrate them with Python-based matching logic (a small retrieval sketch follows this listing).
- Build agent-based automation pipelines (rule-driven + GenAI agents) for anomaly detection, compliance validation, and harmonization logic.
- Implement explainability, audit trails, and governance-first AI workflows aligned with enterprise-grade MDM needs.
- Collaborate with data engineers, BI teams, and product owners to integrate GenAI outputs into downstream systems.
- Contribute to modular system design and documentation for long-term scalability and maintainability.

Qualifications:
- Bachelor's/Master's in Computer Science, Artificial Intelligence, or a related field.
- 5+ years of overall Data Science experience with 2+ years in Generative AI / LLM-based applications.
- Deep experience with the Databricks ecosystem: Delta Lake, MLflow, DBFS, Databricks Jobs & Workflows.
- Strong Python and PySpark skills with the ability to build scalable data pipelines and AI workflows in Databricks.
- Experience with LLMs (e.g., OpenAI, LLaMA, Mistral) and frameworks like LangChain or LlamaIndex.
- Working knowledge of vector databases (e.g., FAISS, Chroma) and prompt engineering for classification/retrieval.
- Exposure to MDM platforms (e.g., Stibo STEP) and familiarity with data harmonization challenges.
- Experience with explainability frameworks (e.g., SHAP, LIME) and AI audit tooling.

Preferred Skills:
- Knowledge of agentic AI architectures and multi-agent orchestration.
- Familiarity with Azure Data Hub and enterprise data ingestion frameworks.
- Understanding of data governance, lineage, and regulatory compliance in AI systems.

Interview Process: Online assessment, two technical screenings, two technical interviews, project review, client interview.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Our vision is to build a company of world-class people that helps our clients optimize business performance through data, technology and analytics. The company has two divisions - Data Science Solutions: we work at the intersection of data, technology and analytics; and Talent Solutions: we live and breathe the digital and talent marketplace.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.
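To make the retrieval side of the RAG workflow above concrete, here is a minimal sketch using FAISS (one of the vector stores named in the qualifications); the embed() function is a stand-in for a real embedding model, and the catalog strings are invented examples. It assumes the faiss-cpu package is installed.

```python
# Embed candidate product names, index them with FAISS, and retrieve nearest
# neighbours for a query string. embed() is a placeholder, not a real model.
import numpy as np
import faiss

def embed(texts):
    # Placeholder: deterministic pseudo-embeddings so the sketch runs without
    # an external model; swap in a real embedding call in practice.
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.random((len(texts), 128), dtype=np.float32)

catalog = ["Product A 750ml", "Product B 1L", "Product C 500ml"]
vectors = embed(catalog)

index = faiss.IndexFlatL2(vectors.shape[1])   # exact L2 nearest-neighbour index
index.add(vectors)

query_vec = embed(["Prod A 750 ml"])
distances, ids = index.search(query_vec, 2)

for rank, (i, d) in enumerate(zip(ids[0], distances[0]), start=1):
    print(f"{rank}. {catalog[i]} (distance={d:.3f})")
```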

Posted 4 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Data Engineer - Product Data & Analytics Team

Overview: The Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and go-to-market strategies. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision making? Are you motivated to be part of a global analytics team that builds large-scale analytical capabilities supporting end users across six continents? Do you want to be the go-to resource for data analytics in the company? The ideal candidate has a knack for seeing solutions in sprawling data sets and the business mindset to convert insights into strategic opportunities for our company.

Role & Responsibilities:
- Work closely with global and regional teams to architect, develop, and maintain data engineering, advanced reporting and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets and services.
- Obtain data from multiple sources; collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases, synthesizing data insights.
- Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Build, develop and maintain data models, reporting systems, dashboards and performance metrics that support key business decisions.
- Extract intellectual capital from engagement work and actively share tools, methods and best practices across projects.
- Provide first-level insights, conclusions and assessments, and present findings via Tableau/Power BI dashboards, Excel and PowerPoint.
- Apply quality control, data validation, and cleansing processes to new and existing data sources.
- Lead, mentor and guide more junior team members.
- Communicate results and business impacts of insight initiatives to stakeholders in leadership, technology, sales, marketing and product teams.

All About You:
- Experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis; financial institution or payments experience is a plus.
- Experience presenting data findings in a readable and insight-driven format; experience building support decks.
- Advanced SQL skills, with the ability to write optimized queries for large data sets (big data).
- Experience on platforms/environments: Cloudera Hadoop, the big-data technology stack, SQL Server, Microsoft BI stack.
- Experience with data visualization tools such as Looker, Tableau, Power BI.
- Experience with Python, R, Databricks a plus.
- Experience with SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) will be an added advantage.
- Excellent problem-solving, quantitative and analytical skills; in-depth technical knowledge, drive and the ability to learn new technologies.
- Strong attention to detail and quality; team player with excellent communication skills.
- Must be able to interact with management and internal stakeholders and collect requirements; must be able to perform in a team, use judgment and operate under ambiguity.

Education: Bachelor's or master's degree in Computer Science, Information Technology, Engineering, Mathematics, or Statistics.

Additional Competencies: Excellent English, quantitative, technical, and communication (oral/written) skills; analytical/problem-solving ability; strong attention to detail and quality; creativity/innovation; self-motivated and operating with a sense of urgency; project management/risk mitigation; able to prioritize and perform multiple tasks simultaneously.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and it is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines. R-245986

Posted 4 days ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Experience: 7+ years | Salary: Confidential (based on experience) | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Hybrid (Ahmedabad) | Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Inferenz.)

What do you need for this opportunity?
Must-have skills: Databricks, SQL, Python, ETL tools, Data Modelling, Data Warehousing

Inferenz is looking for:
Position: Lead Data Engineer (Databricks)
Location: Ahmedabad, Pune
Required Experience: 7 to 10 years (immediate joiners preferred)

Key Responsibilities:
Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
7 to 10 years of experience in data engineering, with a focus on Databricks.
Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
Proficiency in SQL and programming languages such as Python or Scala.
Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
Excellent analytical, problem-solving, and communication skills.
Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications:
Experience with Generative AI technologies and their applications.
Familiarity with other cloud platforms, such as AWS or GCP.
Knowledge of data governance frameworks and tools.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
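For a sense of the day-to-day Databricks work this role describes, a minimal PySpark ETL step might look like the sketch below. It assumes raw files landed by an orchestrator such as Azure Data Factory; the storage path, column names and target table are hypothetical.

```python
# Illustrative sketch of a Databricks-style ETL step: raw CSV landed by
# an orchestrator (e.g. Azure Data Factory) is cleaned and written to Delta.
# The path, column names and table name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/")  # hypothetical ADLS path
)

clean = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Write as a managed Delta table so downstream jobs can query it.
clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```

In practice a step like this would typically run as a scheduled Databricks job or be triggered as an activity from an ADF pipeline.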

Posted 4 days ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


We’re Hiring: Databricks Administrator (5+ Years Experience) | Contractual | PAN India | Hybrid Work Model

What we’re looking for:
Minimum 5 years of experience in Databricks administration
Strong background in cluster management, access control, and job monitoring
Proficiency in scripting (Python, Bash, or PowerShell)
Experience with AWS or Azure
Prior work experience in a Tier 1 company is a must

This is a contract-to-hire opportunity offering flexibility, challenging projects, and a chance to work with some of the best minds in data.
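To illustrate the scripting and job-monitoring side of Databricks administration mentioned above, here is a minimal Python sketch against the standard Databricks REST API; the workspace URL is a placeholder and the token is read from an environment variable.

```python
# Minimal sketch: list cluster states and flag recent failed job runs via the
# Databricks REST API. Workspace URL and token are placeholders.
import os
import requests

HOST = "https://example.cloud.databricks.com"  # hypothetical workspace URL
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Cluster inventory: name and current state for each cluster.
clusters = requests.get(f"{HOST}/api/2.0/clusters/list", headers=HEADERS).json()
for c in clusters.get("clusters", []):
    print(c["cluster_name"], c["state"])

# Recent job runs, flagging anything that did not finish successfully.
runs = requests.get(
    f"{HOST}/api/2.1/jobs/runs/list",
    headers=HEADERS,
    params={"limit": 25},
).json()
for r in runs.get("runs", []):
    result = r.get("state", {}).get("result_state")
    if result and result != "SUCCESS":
        print("Attention:", r.get("run_id"), result)
```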

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Flutter Entertainment
Flutter Entertainment is the world’s largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game!
Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India’s premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview of the Role
We are looking for a Data Engineer with 3 to 5 years of experience to help design, build, and maintain the next-generation data platform for our Sisal team. This role will leverage modern cloud technologies, infrastructure as code (IaC), and advanced data processing techniques to drive business value from our data assets. You will collaborate with cross-functional teams to ensure data availability, quality, and reliability while applying expertise in Databricks on AWS, Python, CI/CD, and Agile methodologies to deliver scalable and efficient solutions.

Key Responsibilities
Design and implement scalable ETL processes and data pipelines that integrate with diverse data sources.
Build streaming and batch data processing solutions using Databricks on AWS.
Develop and optimize Lakehouse architectures; work with big data access patterns to process large-scale datasets efficiently.
Drive automation and efficiency using CI/CD pipelines, IaC, and DevOps practices.
Improve database performance, implement best practices for data governance, and enhance data security.

Required Skills
3 to 5 years of experience in data engineering and ETL pipeline development.
Hands-on experience with Databricks on AWS.
Proven experience designing and implementing scalable data warehousing solutions.
Expertise in AWS data services, particularly DynamoDB, Glue, Athena, EMR, Redshift, Lambda, and Kinesis.
Strong programming skills in Python (PySpark/Spark SQL experience preferred) and Java.

Desirable / Preferred Skills
Knowledge of streaming data processing (e.g., Kafka, Kinesis, Spark Streaming).
Experience with CI/CD tools and automation (Git, Jenkins, Ansible, Shell Scripting, Unit/Integration Testing).
Familiarity with Agile methodologies and DevOps best practices.

Benefits We Offer
Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
Career growth through Internal Mobility Programs.
Comprehensive Health Insurance for you and dependents.
Well-Being Fund and 24/7 Assistance Program for holistic wellness.
Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
Free Meals, Cab Allowance, and a Home Office Setup Allowance.
Employer PF Contribution, gratuity, Personal Accident & Life Insurance.
Sharesave Plan to purchase discounted company shares.
Volunteering Leave and Team Events to build connections.
Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
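As a rough illustration of the streaming-to-Lakehouse work this role describes, a minimal Spark Structured Streaming sketch on Databricks could look like the following. A Kafka source stands in for whichever stream (Kafka or Kinesis) is actually used; the broker, topic, checkpoint path and table name are hypothetical.

```python
# Minimal sketch: streaming ingestion into a Delta table on Databricks.
# Broker address, topic, checkpoint path and table name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder broker
    .option("subscribe", "bet-events")                             # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append into a bronze Delta table; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/bet_events/")
    .outputMode("append")
    .toTable("bronze.bet_events")
)
```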

Posted 4 days ago

Apply


40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:
Must-Have Skills:
Experience managing a team of data engineers in biotech/pharma domain companies.
Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
Excellent leadership, communication, and stakeholder engagement skills.
Well versed in full-stack development and DataOps automation, logging frameworks, and pipeline orchestration tools.
Strong analytical and problem-solving skills to address complex data challenges.
Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
Data engineering management experience in Biotech/Life Sciences/Pharma.
Experience using graph databases such as Stardog, MarkLogic, Neo4j or AllegroGraph.

Education and Professional Certifications
Any degree and 9-13 years of experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile (SAFe) certification preferred.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
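As a small illustration of the process-monitoring automation referenced in the must-have skills, one way a data-freshness SLA check could be scripted in PySpark is sketched below; the monitored tables, timestamp columns and SLA threshold are hypothetical.

```python
# Minimal sketch: flag Delta tables whose latest record is older than an
# agreed freshness SLA. Table names, columns and the SLA are placeholders.
from datetime import datetime, timedelta
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

SLA = timedelta(hours=6)  # placeholder freshness SLA
MONITORED_TABLES = {"silver.orders": "order_ts", "silver.shipments": "shipped_ts"}

now = datetime.now()
for table, ts_col in MONITORED_TABLES.items():
    latest = spark.table(table).agg(F.max(ts_col).alias("latest")).collect()[0]["latest"]
    if latest is None or now - latest > SLA:
        print(f"SLA breach: {table}, latest record at {latest}")
    else:
        print(f"OK: {table} is fresh, latest record at {latest}")
```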

Posted 4 days ago

Apply

0 years

0 Lacs

India

On-site


Visionqual IT Services, a Hyderabad-based IT services company, is looking for resources with 4+ years of experience for a SAP Databricks Consultant role.
1. Proficiency in Databricks
2. Understanding of Medallion Architecture
3. Understanding of the AWS environment
4. Good to have: SAP Datasphere skills
5. Proficiency in SQL for data manipulation and optimization
6. Strong understanding of data warehouse concepts and dimensional modeling
7. Advanced knowledge of DAX, M language, and PowerQuery for sophisticated data modeling
8. Strong expertise in semantic modeling principles and best practices
9. Extensive experience with custom visualizations and complex dashboard design
10. Good to have: Python programming skills
Interested resources who can join immediately may share their profiles to sateesh.varma@visionqual.com and info@visionqual.com.
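For context on the Medallion Architecture mentioned in point 2, a minimal bronze-to-silver PySpark sketch on Databricks might look like this (all paths and table names are hypothetical):

```python
# Minimal Medallion-architecture sketch: raw data lands in a bronze Delta
# table, then is cleaned and conformed into a silver table.
# Paths and table names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: ingest raw JSON exactly as received, adding load metadata.
bronze = (
    spark.read.json("s3://example-bucket/raw/sap_orders/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.sap_orders")

# Silver: deduplicate, enforce types, and keep only valid rows.
silver = (
    spark.table("bronze.sap_orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("net_amount", F.col("net_amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sap_orders")
```

A gold layer would typically follow the same pattern, aggregating silver tables into dimensional models for reporting tools such as Power BI.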

Posted 4 days ago

Apply