
2008 Versioning Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Kolkata, West Bengal, India

Remote


Experience: 6+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time, permanent position
(Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

Must-have skills: SQL, dbt, Google Cloud

Job Title: Lead Data Engineer
Company: Forbes Advisor

Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life. We do this by combining data-driven content, rigorous product comparisons, and user-first design, all built on a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.

The Role
We're hiring a Data Engineering Lead to help scale and guide a growing team of data engineers. This role is ideal for someone who enjoys solving technical challenges hands-on while also shaping engineering best practices, coaching others, and helping cross-functional teams deliver data products with clarity and speed.

You'll manage a small team of individual contributors responsible for building and maintaining pipelines that support reporting, analytics, and machine learning use cases. You'll be expected to drive engineering excellence, from code quality to deployment hygiene, and play a key role in sprint planning, architectural discussions, and stakeholder collaboration. This is a critical leadership role as our data organization expands to meet growing demand across media performance, optimization, customer insights, and advanced analytics.
What you'll do:
- Lead and grow a team of data engineers working across ETL/ELT, data warehousing, and ML enablement
- Own team delivery across sprints, including planning, prioritization, QA, and stakeholder communication
- Set and enforce strong engineering practices around code reviews, testing, observability, and documentation
- Collaborate cross-functionally with Analytics, BI, Revenue Operations, and business stakeholders in Marketing and Sales
- Guide technical architecture decisions for our pipelines on GCP (BigQuery, GCS, Composer)
- Model and transform data using dbt and SQL, supporting reporting, attribution, and optimization needs
- Ensure data security, compliance, and scalability, especially around first-party customer data
- Mentor junior engineers through code reviews, pairing, and technical roadmap discussions

What you bring:
- 6+ years of experience in data engineering, including 2+ years of people management or formal team leadership
- Strong technical background with Python, Spark, Kafka, and orchestration tools like Airflow
- Deep experience working in GCP, especially BigQuery, GCS, and Composer
- Strong SQL skills and familiarity with dbt for modeling and documentation
- Clear understanding of data privacy and governance, including how to safely manage and segment first-party data
- Experience working in agile environments, including sprint planning and ticket scoping
- Excellent communication skills and a proven ability to work cross-functionally across global teams
Nice to have:
- Experience leading data engineering teams in digital media or performance marketing environments
- Familiarity with data from Google Ads, Meta, TikTok, Taboola, Outbrain, and Google Analytics (GA4)
- Exposure to BI tools like Tableau or Looker
- Experience collaborating with data scientists on ML workflows and experimentation platforms
- Knowledge of data contracts, schema versioning, or platform ownership patterns

Why join us:
- Monthly long weekends: every third Friday off
- Wellness reimbursement to support your health and balance
- Paid parental leave
- Remote-first culture with flexibility and trust
- A leadership role in a fast-growing global data team inside a globally recognized brand

How to apply:
Step 1: Click Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 22 hours ago

Apply

6.0 years

0 Lacs

Cuttack, Odisha, India

Remote


Posted 22 hours ago

Apply

6.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote


Posted 22 hours ago

Apply

6.0 years

0 Lacs

Raipur, Chhattisgarh, India

Remote


Posted 22 hours ago

Apply

6.0 years

0 Lacs

Guwahati, Assam, India

Remote


Posted 22 hours ago

Apply

6.0 years

0 Lacs

Amritsar, Punjab, India

Remote


Posted 22 hours ago

Apply

6.0 years

0 Lacs

Jamshedpur, Jharkhand, India

Remote

Linkedin logo

Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: SQL, dbt, Google Cloud Forbes Advisor is Looking for: Job Title: Lead Data Engineer Company: Forbes Advisor Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life. We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics. The Role We’re hiring a Data Engineering Lead to help scale and guide a growing team of data engineers. This role is ideal for someone who enjoys solving technical challenges hands-on while also shaping engineering best practices, coaching others, and helping cross-functional teams deliver data products with clarity and speed. You’ll manage a small team of ICs responsible for building and maintaining pipelines that support reporting, analytics, and machine learning use cases. You’ll be expected to drive engineering excellence — from code quality to deployment hygiene — and play a key role in sprint planning, architectural discussions, and stakeholder collaboration. This is a critical leadership role as our data organization expands to meet growing demand across media performance, optimization, customer insights, and advanced analytics. 
What you’ll do: Lead and grow a team of data engineers working across ETL/ELT, data warehousing, and ML-enablement Own team delivery across sprints, including planning, prioritization, QA, and stakeholder communication Set and enforce strong engineering practices around code reviews, testing, observability, and documentation Collaborate cross-functionally with Analytics, BI, Revenue Operations, and business stakeholders in Marketing and Sales Guide technical architecture decisions for our pipelines on GCP (BigQuery, GCS, Composer) Model and transform data using dbt and SQL, supporting reporting, attribution, and optimization needs Ensure data security, compliance, and scalability — especially around first-party customer data Mentor junior engineers through code reviews, pairing, and technical roadmap discussions What You Bring: 6+ years of experience in data engineering, including 2+ years of people management or formal team leadership Strong technical background with Python, Spark, Kafka, and orchestration tools like Airflow Deep experience working in GCP, especially BigQuery, GCS, and Composer Strong SQL skills and familiarity with DBT for modeling and documentation Clear understanding of data privacy and governance, including how to safely manage and segment first-party data Experience working in agile environments, including sprint planning and ticket scoping Excellent communication skills and proven ability to work cross-functionally across global teams. 
Nice to have:
- Experience leading data engineering teams in digital media or performance marketing environments
- Familiarity with data from Google Ads, Meta, TikTok, Taboola, Outbrain, and Google Analytics (GA4)
- Exposure to BI tools like Tableau or Looker
- Experience collaborating with data scientists on ML workflows and experimentation platforms
- Knowledge of data contracts, schema versioning, or platform ownership patterns

Why join us:
- Monthly long weekends: every third Friday off
- Wellness reimbursement to support your health and balance
- Paid parental leave
- Remote-first culture with flexibility and trust
- Leadership role in a fast-growing global data team inside a globally recognized brand

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 22 hours ago

Apply






3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Responsibilities
- Work effectively with the PMO, Development, Product Management, Business and Technical Operations, Systems Engineering, Infrastructure, Networks, and Architecture teams
- Design and develop front-end React/Angular-based applications
- Perform new development, maintenance fixes, and enhancements to meet business requirements
- Draft and review architectural diagrams, specifications, business requirements, and various design documents
- Work with our existing technology stack (Java, C++, Linux, Unix, open-source platforms, and SOA) as well as new technologies for our next-generation solutions
- Document technical requirements, program and subsystem designs, resolutions to system problems, project task descriptions, effort estimates, and unit and integration tests
- Acquire a robust understanding of financial products, services, processes, and organizational structure in order to find optimal solutions
- Maintain awareness of industry trends, compliance concerns, risk control processes, and the regulatory landscape
- Mentor other software developers

Requirements
- 3-5+ years of web development experience in JavaScript, React and/or Angular, TypeScript, and RxJS
- A solid understanding of responsive design/development and mobile-web best practices
- Proficient understanding of code versioning tools such as Git and Bitbucket
- Practical experience with, and understanding of, low-latency messaging middleware
- Practical experience working within the Scrum framework and Agile methodologies
- Strong analytical, troubleshooting, and problem-solving skills
- Excellent English verbal and written communication skills
- Willingness to occasionally wander into other areas (e.g., backend, DevOps, other teams' codebases) and ship end-to-end solutions
- Drive to learn, grow, and make things better than you found them
- Self-motivated, highly organized team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently

Desired skills:
- Knowledgeable in financial markets, banking, or wealth management
- Familiar with related non-development fields (Product, UX, Business Analytics, Information Mapping, etc.)
- Experience with Java, Spring, Spring Boot, Hibernate, and UI development
- Practical experience with relational databases (Oracle, Sybase)

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society, and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 22 hours ago

Apply



7.0 years

0 Lacs

India

Remote


Job Title :: Senior Kafka Engineer Location :: Bengaluru, India (Remote) Note :: Minimum 7 years of relevant experience Must have banking domain experience Only Immediate Joiners Job Description: Design, develop, and implement real-time data streaming solutions using Apache Kafka and Apache Flink, focusing on Java-based stream processing. Build and optimize data pipelines to enable reliable ingestion from Precisely into MongoDB via Kafka topics and Flink jobs. Leverage Confluent Cloud features, including Schema Registry, Kafka Connect, and data contract enforcement. Ensure proper schema versioning and compatibility using Avro/Protobuf. Implement DevSecOps best practices by integrating pipelines into CI/CD workflows and enabling automated testing, deployment, and schema governance. Configure Dynatrace and other observability tools for monitoring throughput, latency, windowing, and watermarking operations. Optimise Flink applications for performance and scalability through event time processing, checkpointing, and stateful stream management. Support integration of Kafka connectors to cloud targets where required, ensuring alignment with client’s data delivery standards. Top must have skills: Java based real-time stream processing. Apache Kafka Apache Flink Banking domain About Ascendion: Ascendion is transforming the future of technology with AI-driven software engineering. Our global team accelerates innovation and delivers future-ready solutions for some of the world’s most important industry leaders. Our applied AI, software engineering, cloud, data, experience design, and talent transformation capabilities accelerate innovation for Global 2000 clients. Join us to build transformative experiences, pioneer cutting-edge solutions, and thrive in a vibrant, inclusive culture - powered by AI and driven by bold ideas.
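The windowing and watermarking operations named in the description are the heart of event-time stream processing. A real implementation would use Flink's DataStream API in Java; the stdlib-only Python sketch below merely models the idea of tumbling windows plus a watermark that discards late events. The window size, lateness bound, and event stream are all hypothetical.

```python
# Toy model of event-time tumbling windows with a watermark, illustrating
# the concept behind Flink windowing/watermarking. Not production code.
from collections import defaultdict

WINDOW_MS = 10_000        # tumbling window size (hypothetical)
MAX_LATENESS_MS = 2_000   # watermark lags max seen event time by this much

def assign_windows(events):
    """Group (event_time_ms, value) pairs into tumbling windows,
    dropping events that arrive behind the current watermark."""
    windows = defaultdict(list)
    watermark = float("-inf")
    dropped = []
    for ts, value in events:
        if ts < watermark:
            dropped.append((ts, value))  # too late: behind the watermark
            continue
        windows[ts // WINDOW_MS * WINDOW_MS].append(value)
        watermark = max(watermark, ts - MAX_LATENESS_MS)
    return dict(windows), dropped

# Out-of-order stream: "c" and "d" arrive after the watermark has advanced.
events = [(1_000, "a"), (12_500, "b"), (9_000, "c"), (3_000, "d")]
wins, dropped = assign_windows(events)
```

Flink additionally checkpoints this kind of windowing state so it survives failures, which is what the checkpointing and stateful stream management bullets refer to.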

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote


About BeGig BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re joining a platform that will continuously match you with high-impact opportunities tailored to your expertise. Your Opportunity: Join our network as a .NET Developer and work with fast-growing startups and enterprises to build scalable, high-performance applications. You’ll work across a range of projects—APIs, backend services, admin panels, dashboards, and more—all while enjoying the freedom to choose engagements that fit your skills and schedule. Role Overview: As a .NET Developer, you will: Design & Build Applications: Develop backend systems, RESTful APIs, and integrations using C#, .NET Framework, or .NET Core. Work Across the Stack: Collaborate with frontend developers, designers, and product teams to deliver full-featured applications. Maintain Code Quality: Write clean, maintainable code and follow best practices for testing, versioning, and documentation. What You’ll Do: Application Development: Develop backend modules, APIs, and services using C#, .NET Core/.NET Framework. Optimize applications for performance, scalability, and maintainability. Work with relational databases like MySQL or SQL Server. Integration & Deployment: Build and consume REST APIs and third-party services (e.g., payment gateways, CRM like Zoho, etc.). Deploy applications on cloud servers or local environments using IIS or Apache Tomcat. Collaboration: Work closely with stakeholders, designers, and other developers to gather requirements and deliver features. Use tools like ClickUp, Jira, or Trello to manage tasks and timelines. 
Technical Requirements & Skills Must-Have Skills: Proficient in C#, .NET Framework or .NET Core. Strong experience with REST API development. Familiarity with MySQL or other RDBMS. What We’re Looking For: A technically strong developer with proven experience building production-grade .NET applications. A detail-oriented freelancer who can work independently and communicate clearly with remote teams. Someone who thrives in fast-paced environments and can hit the ground running. Why Join Us? Remote & Flexible: Work when and how you want—hourly or project-based. Continuous Opportunities: Get matched with new projects that align with your expertise. Impactful Work: Build solutions for real-world problems with high-growth startups. Supportive Network: Join a curated community of skilled tech freelancers. Ready to get started? Apply now to become a key .NET Developer for our client and a valued member of the BeGig network!

Posted 1 day ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About the Team At DAZN, the Analytics Engineering team transforms raw data into insights that drive decision-making across our global business — from content and product to marketing and revenue. We build reliable and scalable data pipelines and models that make data accessible and actionable for everyone. About the Role We are looking for an Analytics Engineer with 2+ years of experience to help build and maintain our modern data platform. You'll work with dbt, Snowflake, and Airflow to develop clean, well-documented, and trusted datasets. This is a hands-on role ideal for someone who wants to grow their technical skills while contributing to a high-impact analytics function. Key Responsibilities Build and maintain scalable data models using dbt and Snowflake Develop and orchestrate data pipelines with Airflow or similar tools Partner with teams across DAZN to translate business needs into robust datasets Ensure data quality through testing, validation, and monitoring practices Follow best practices in code versioning, CI/CD, and data documentation Contribute to the evolution of our data architecture and team standards What We’re Looking For 2+ years of experience in analytics/data engineering or similar roles Strong skills in SQL and working knowledge of cloud data warehouses (Snowflake preferred) Experience with dbt for data modeling and transformation Familiarity with Airflow or other workflow orchestration tools Understanding of ELT processes, data modeling, and data governance principles Strong collaboration and communication skills Nice to Have Experience working in media, OTT, or sports technology domains Familiarity with BI tools like Looker, Tableau, or Power BI Exposure to testing frameworks like dbt tests or Great Expectations
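At its core, an orchestrator like Airflow runs a DAG of tasks in dependency order. A minimal sketch of that idea follows, assuming hypothetical task names for a dbt-on-Snowflake pipeline; a real deployment would declare these as Airflow operators with schedules and retries.

```python
# Toy illustration of what an orchestrator like Airflow does at its core:
# execute tasks in dependency (DAG) order. Task names are hypothetical.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first
deps = {
    "extract_raw": set(),
    "stage_snowflake": {"extract_raw"},
    "dbt_run": {"stage_snowflake"},
    "dbt_test": {"dbt_run"},
}

# static_order() yields every task after all of its upstream dependencies
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The same topological-sort principle is what guarantees, for example, that `dbt test` only runs after the models it validates have been built.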

Posted 1 day ago

Apply

0 years

1 - 15 Lacs

Noida, Uttar Pradesh, India

On-site


Highly proficient in JavaScript and TypeScript, Passport.js, Sequelize. Working knowledge of Node.js frameworks such as ExpressJs, SailsJs, VueJs and ElectronJs. Good understanding of server-side templating languages such as Jade, EJS, Mustache etc. Good understanding of server-side CSS preprocessors such as Stylus, Less, Sass etc. Basic understanding of front-end technologies, such as HTML5 and CSS3 Sound knowledge of Testing frameworks such as Mocha, Chai, Jasmine, Cucumber etc. Experience of writing unit, integration and E2E tests. Experience of working with SQL databases. Experience of integrating third party APIs. Good understanding of code versioning tools, such as Git. Good Communication Skills. Skills:- Javascript, Redux/Flux, NodeJS (Node.js), Express, Vue.js, HTML/CSS and SQL

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CWX is looking for a dynamic SENIOR AI/ML ENGINEER to become a vital part of our vibrant PROFESSIONAL SERVICES TEAM, working on-site in Hyderabad. Join the energy and be part of the momentum! At CloudWerx, we're looking for a Senior AI/ML Engineer to lead the design, development, and deployment of tailored AI/ML solutions for our clients. In this role, you'll work closely with clients to understand their business challenges and build innovative, scalable, and cost-effective solutions using tools like Google Cloud Platform (GCP), Vertex AI, Python, PyTorch, LangChain, and more. You'll play a key role in translating real-world problems into robust machine learning architectures, with a strong focus on Generative AI, multi-agent systems, and modern MLOps practices. From data preparation and ensuring data integrity to building and optimizing models, you'll be hands-on across the entire ML lifecycle — all while ensuring seamless deployment and scaling using cloud-native infrastructure. Clear communication will be essential as you engage with both technical teams and business stakeholders, making complex AI concepts understandable and actionable. Your deep expertise in model selection, optimization, and deployment will help deliver high-performing solutions tailored to client needs. We're also looking for someone who stays ahead of the curve — someone who's constantly learning and experimenting with the latest developments in generative AI, LLMs, and cloud technologies. Your curiosity and drive will help push the boundaries of what's possible and fuel the success of the solutions we deliver. This is a fantastic opportunity to join a fast-growing, engineering-led cloud consulting company that tackles some of the toughest challenges in the industry. At CloudWerx, every team member brings something unique to the table, and we foster a supportive environment that helps people do their best work. 
Our goal is simple: to be the best at what we do and help our clients accelerate their businesses through world-class cloud solutions. This role is an immediate full-time position. Insight on your impact Conceptualize, Prototype, and Implement AI Solutions: Design and deploy advanced AI solutions using large language models (LLMs), diffusion models, and multimodal AI systems by leveraging Google Cloud tools such as Vertex AI, AutoML, and AI Platform (Agent Builder). Implement Retrieval-Augmented Generation (RAG) pipelines for chatbots and assistants, and create domain-specific transformers for NLP, vision, and cross-modal applications. Utilize Document AI, Translation AI, and Vision AI to develop full-stack, multimodal enterprise applications. Technical Expertise: Fine-tune models via LoRA, QLoRA, RLHF, and Dreambooth. Build multi-agent systems using Agent Development Kit (ADK), Agent-to-Agent (A2A) Protocol, and Model Context Protocol (MCP). Provide thought leadership on best practices, architecture patterns, and technical decisions across LLMs, generative AI, and custom ML pipelines, tailored to each client's unique business needs. Stakeholder Communication: Effectively communicate complex AI/ML concepts, architectures, and solutions to business leaders, technical teams, and non-technical stakeholders. Present project roadmaps, performance metrics, and model validation strategies to C-level executives and guide organizations through AI transformation initiatives. Understand client analytics & modeling needs: Collaborate with clients to extract, analyze, and interpret both internal and external data sources. Design and operationalize data pipelines that support exploratory analysis and model development, enabling business-aligned data insights and AI solutions. Database Management: Work with structured (SQL/BigQuery) and unstructured (NoSQL/Firestore, Cloud Storage) data. 
Apply best practices in data quality, versioning, and integrity across datasets used for training, evaluation, and deployment of AI/ML models. Cloud Expertise: Architect and deploy cloud-native AI/ML solutions using Google Cloud services including Vertex AI, BigQuery ML, Cloud Functions, Cloud Run, and GKE Autopilot. Provide consulting on GCP service selection, infrastructure scaling, and deployment strategies aligned with client requirements. MLOps & DevOps: Lead the implementation of robust MLOps and LLMOps pipelines using TensorFlow Extended (TFX), Kubeflow, and Vertex AI Pipelines. Set up CI/CD workflows using Cloud Build and Artifact Registry, and deploy scalable inference endpoints through Cloud Run and Agent Engine. Establish automated retraining, drift detection, and monitoring strategies for production ML systems. Prompt Engineering and fine tuning: Apply advanced prompt engineering strategies (e.g., few-shot, in-context learning) to optimize LLM outputs. Fine-tune models using state-of-the-art techniques including LoRA, QLoRA, Dreambooth, ControlNet, and RLHF to enhance instruction-following and domain specificity of generative models. LLMs, Chatbots & Text Processing: Develop enterprise-grade chatbots and conversational agents using Retrieval-Augmented Generation (RAG), powered by both open-source and commercial LLMs. Build state-of-the-art generative solutions for tasks such as intelligent document understanding, summarization, and sentiment analysis. Implement LLMOps workflows for lifecycle management of large-scale language applications. Consistently Model and Promote Engineering Best Practices: Promote a culture of technical excellence by adhering to software engineering best practices including version control, reproducibility, structured documentation, Agile retrospectives, and continuous integration. Mentor junior engineers and establish guidelines for scalable, maintainable AI/ML development. 
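The RAG pipelines described above hinge on a retrieval step: score stored documents against the query and feed the best matches into the prompt. Below is a deliberately simplified, stdlib-only sketch of that step; production systems (e.g., Vertex AI with a vector store) would use learned embeddings rather than bag-of-words counts, and the documents here are hypothetical.

```python
# Simplified RAG retrieval: cosine similarity over bag-of-words vectors.
# Real pipelines embed text with a model and query a vector database.
import math
from collections import Counter

def vectorize(text):
    """Hypothetical stand-in for an embedding model: token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # hypothetical knowledge-base snippets
    "invoices are processed nightly by the billing job",
    "the chatbot escalates refund requests to support",
]

def retrieve(query):
    """Return the document most similar to the query."""
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

context = retrieve("when are invoices processed")
prompt = f"Context: {context}\n\nQuestion: when are invoices processed?"
```

The augmented `prompt` is then sent to the LLM, which is what grounds its answer in retrieved context instead of parametric memory alone.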
Our Diversity and Inclusion Commitment At CloudWerx, we are dedicated to creating a workplace that values and celebrates diversity. We believe that a diverse and inclusive environment fosters innovation, collaboration, and mutual respect. We are committed to providing equal employment opportunities for all individuals, regardless of background, and actively promote diversity across all levels of our organization. We welcome all walks of life, as we are committed to building a team that embraces and mirrors a wide range of perspectives and identities. Join us in our journey toward a more inclusive and equitable workplace. Background Check Requirement All candidates for employment will be subject to pre-employment background screening for this position. All offers are contingent upon the successful completion of the background check. For additional information on the background check requirements and process, please reach out to us directly. Our Story CloudWerx is an engineering-focused cloud consulting firm born in Silicon Valley - in the heart of hyper-scale and innovative technology. In a cloud environment we help businesses looking to architect, migrate, optimize, secure or cut costs. Our team has unique experience working in some of the most complex cloud environments at scale and can help businesses accelerate with confidence.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Title: Machine Learning Engineer Primary Location: Hyderabad/Trivandrum (Onsite) Job Type: Contract Secondary Location: Any Infosys Office Location In this position you will: • Design and implement NLP pipelines for document analysis and artifact generation. • Perform data cleaning and transformation on unstructured text using industry-standard techniques. • Develop embeddings and semantic search pipelines using OpenAI, HuggingFace, or custom models. • Integrate vectorized data with retrieval systems such as MongoDB Vector, FAISS or Pinecone. • Fine-tune and evaluate LLMs for use cases like test case generation, user story summarization, etc. • Monitor model performance and conduct regular evaluations with precision/recall/F1/BLEU. • Collaborate with backend developers to expose ML outputs via APIs. • Participate in architectural design and PoCs for GenAI-based solutions. • Adhere to and implement Responsible AI principles in all ML workflows. • Work closely with product owners and testers to ensure the quality and usability of generated outputs. Required Qualifications: • 5+ years of experience in data science and AI/ML engineering with strong proficiency in Python and applied NLP • Deep expertise in NLP techniques including: text classification, Named Entity Recognition (NER), summarization, sentiment analysis, topic modeling • Strong experience in data preprocessing and cleaning: tokenization, stop-word removal, stemming/lemmatization, normalization. • Strong experience in vectorization methods: TF-IDF, Word2Vec, GloVe, BERT, Sentence Transformers. Demonstrated experience with vectorization and implementing contextual search solutions is a must • Hands-on experience implementing LangChain, RAG architectures, multi-agent orchestration, agentic AI, scikit-learn, and Python is a must • Hands-on experience with embedding models (e.g., OpenAI, Hugging Face Transformers) and chunking strategies • Experience with vector stores: MongoDB Atlas Vector Search, FAISS, Pinecone, ChromaDB. 
• Skilled in building and fine-tuning LLMs; prompt engineering is a must • Experience with MLOps frameworks for model lifecycle, versioning, deployment, and monitoring. • Strong knowledge of LLMOps, NumPy, and PySpark for data wrangling. • Experience deploying models on Azure (preferred), AWS, or GCP. • Understanding of Responsible AI practices including model fairness, transparency, and auditability. • Strong knowledge of machine learning frameworks, deep learning architectures, natural language processing, and generative models (e.g., GANs, transformers). Preferred Qualifications: • 3+ years of experience building, scaling, and optimizing training and inferencing systems for deep neural networks and/or transformer architectures. • Demonstrated ability in research and development teams with a focus on generative AI technologies and suggesting new ideas or opportunities. • Experience managing production-scale pre-trained models (private or public cloud) or setting up GPU clusters for in-house LLM deployments • Familiarity with AI Governance, ethics, compliance, and regulatory considerations. Education: • Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. • Master’s degree or PhD preferred. Thanks Aatmesh aatmesh.singh@ampstek.com
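On the TF-IDF requirement above: TF-IDF weighs a term by how frequent it is in one document and how rare it is across the corpus, which is why common words score low for search. A stdlib-only sketch follows; the corpus is hypothetical, and real pipelines would use scikit-learn's TfidfVectorizer or sentence embeddings instead.

```python
# Stdlib-only TF-IDF illustration. The three-document corpus is hypothetical.
import math
from collections import Counter

corpus = [
    "patient reports mild headache",
    "patient reports severe chest pain",
    "no symptoms reported today",
]

def tf_idf(doc, corpus):
    """Weight each term in `doc`: term frequency times inverse document frequency."""
    tokens = doc.split()
    tf = Counter(tokens)
    n = len(corpus)
    weights = {}
    for term, count in tf.items():
        df = sum(1 for d in corpus if term in d.split())  # docs containing term
        weights[term] = (count / len(tokens)) * math.log(n / df)
    return weights

w = tf_idf(corpus[0], corpus)
# "patient" appears in 2 of 3 docs, so it is down-weighted relative to
# "headache", which appears in only one.
```

This discriminative weighting is what makes TF-IDF a reasonable baseline for the contextual search use cases the posting describes.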

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Hi, Hope you are doing well. Please let me know if you are interested in and looking for a job change at this moment. Please find the detailed job description below; it would be appreciated if you could share your updated resume and the best number to reach you. ML Engineer Hyderabad/Trivandrum - Onsite Contract Position Summary: We are seeking a dynamic Senior Machine Learning Engineer focused on advancing our generative AI capabilities. You will contribute to building scalable AI systems that impact real-world enterprise applications, while promoting responsible AI practices and collaborating across teams to accelerate innovation. In this position you will: • Design and implement NLP pipelines for document analysis and artifact generation. • Perform data cleaning and transformation on unstructured text using industry-standard techniques. • Develop embeddings and semantic search pipelines using OpenAI, HuggingFace, or custom models. • Integrate vectorized data with retrieval systems such as MongoDB Vector, FAISS or Pinecone. • Fine-tune and evaluate LLMs for use cases like test case generation, user story summarization, etc. • Monitor model performance and conduct regular evaluations with precision/recall/F1/BLEU. • Collaborate with backend developers to expose ML outputs via APIs. • Participate in architectural design and PoCs for GenAI-based solutions. • Adhere to and implement Responsible AI principles in all ML workflows. • Work closely with product owners and testers to ensure the quality and usability of generated outputs. Required Qualifications: • 5+ years of experience in data science and AI/ML engineering with strong proficiency in Python and applied NLP • Deep expertise in NLP techniques including: text classification, Named Entity Recognition (NER), summarization, sentiment analysis, topic modeling • Strong experience in data preprocessing and cleaning: tokenization, stop-word removal, stemming/lemmatization, normalization. 
• Strong experience in vectorization methods: TF-IDF, Word2Vec, GloVe, BERT, Sentence Transformers. Demonstrated experience with vectorization and implementing contextual search solutions is a must • Hands-on experience implementing LangChain, RAG architectures, multi-agent orchestration, agentic AI, scikit-learn, and Python is a must • Hands-on experience with embedding models (e.g., OpenAI, Hugging Face Transformers) and chunking strategies • Experience with vector stores: MongoDB Atlas Vector Search, FAISS, Pinecone, ChromaDB. • Skilled in building and fine-tuning LLMs; prompt engineering is a must • Experience with MLOps frameworks for model lifecycle, versioning, deployment, and monitoring. • Strong knowledge of LLMOps, NumPy, and PySpark for data wrangling. • Experience deploying models on Azure (preferred), AWS, or GCP. • Understanding of Responsible AI practices including model fairness, transparency, and auditability. • Strong knowledge of machine learning frameworks, deep learning architectures, natural language processing, and generative models (e.g., GANs, transformers). Preferred Qualifications: • 3+ years of experience building, scaling, and optimizing training and inferencing systems for deep neural networks and/or transformer architectures. • Demonstrated ability in research and development teams with a focus on generative AI technologies and suggesting new ideas or opportunities. • Experience managing production-scale pre-trained models (private or public cloud) or setting up GPU clusters for in-house LLM deployments • Familiarity with AI Governance, ethics, compliance, and regulatory considerations. Education: • Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. • Master’s degree or PhD preferred. 
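The precision/recall/F1 evaluation mentioned in the responsibilities reduces to counting true positives, false positives, and false negatives for a labeling task. A minimal sketch with hypothetical gold labels and predictions:

```python
# Minimal precision/recall/F1 computation for binary labels.
# The gold labels and predictions below are hypothetical.
def prf1(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [1, 0, 1, 1, 0]
pred = [1, 1, 1, 0, 0]
p, r, f = prf1(gold, pred)  # here precision and recall both come out to 2/3
```

BLEU, the other metric named, compares n-gram overlap for generated text and would normally come from a library such as sacreBLEU rather than hand-rolled code.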
Thanks, and Regards Snehil Mishra snehil@ampstek.com linkedin.com/in/snehil-mishra-1104b2154 Desk: 6093602673, Extension: 125 www.ampstek.com https://www.linkedin.com/company/ampstek/jobs/ Ampstek – Global IT Partner Registered Offices: North America and LATAM: USA | Canada | Costa Rica | Mexico Europe: UK | Germany | France | Sweden | Denmark | Austria | Belgium | Netherlands | Romania | Poland | Czech Republic | Bulgaria | Hungary | Ireland | Norway | Croatia | Slovakia | Portugal | Spain | Italy | Switzerland | Malta APAC: Australia | NZ | Singapore | Malaysia | South Korea | Hong Kong | Taiwan | Philippines | Vietnam | Sri Lanka | India MEA: South Africa | UAE | Turkey | Egypt

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description Role Title - Team Lead and Lead Developer – AI and Data Engineering Role Type - Full time Role Reports to - Chief Technology Officer Work Location - Plenome Technologies, 8th floor, E Block, IITM Research Park, Taramani Job Overview The Technical Lead will drive our AI strategy and implementation while managing a team of developers. Key responsibilities include architecting LLM solutions, ensuring scalability, implementing multilingual capabilities, and developing healthcare-specific AI models. You will oversee the development of AI agents that can understand and process medical information, interact naturally with healthcare professionals, and handle complex medical workflows. This includes ensuring data privacy, maintaining medical accuracy, and adapting models for different healthcare contexts. Job Specifications Educational Qualifications - Any UG/PG graduates Professional Experience 4+ years of Data Engineering/ML development experience 2+ years of team leadership experience 2+ years of Scrum/Agile management experience Key Job Responsibilities ML applications & training · Understanding of machine learning concepts and experience with ML frameworks like PyTorch, TensorFlow, or others · Experience with production of ML applications on web or mobile platforms NLP & feature engineering · Experience in developing customized AI-powered features from scratch to production involving NLP and other models · Designing, deploying and subsequent training of multimodal applications based on clinical requirements LLMs & fine-tuning · Experience with open-source LLMs (preferably Llama models) and fine-tuning through client data and open-source data · Experience with LLM frameworks like LangChain, LlamaIndex or others, and with any vector databases · Implement RAG architecture to enhance model accuracy with real-time retrieval from clinical databases and medical literature Data pipelines & architecture · Design end-to-end clinical AI applications, from data 
ingestion to deployment in clinical settings with integrations · Experience with Docker and Kubernetes for application serving at large scale, and developing data pipelines and training workflows API development · Experience with deploying LLM models on cloud platforms (AWS, Azure or others) · Experience with backend and API developments for external integrators Documentation & improvements · Version control with Git, and ticketing bugs and features with tools like Jira or Confluence Behavioral competencies Attention to detail · Ability to maintain accuracy and precision in financial records, reports, and analysis, ensuring compliance with accounting standards and regulations. Integrity and Ethics · Commitment to upholding ethical standards, confidentiality, and honesty in financial practices and interactions with stakeholders. Time management · Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals. Adaptability and Flexibility · Capacity to adapt to changing business environments, new technologies, and evolving accounting standards, while remaining flexible in response to unexpected challenges. 
Communication & collaboration · Experience presenting to stakeholders and executive teams · Ability to bridge technical and non-technical communication · Excellence in written documentation and process guidelines to work with other frontend teams Leadership competencies Team leadership and team building · Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards · Agile methodology to be followed, Scrum meetings to be conducted for sync-ups Strategic Thinking · Ability to develop and implement long-term goals and strategies aligned with the organization’s vision · Ability to adopt new tech and being able to handle tech debt to bring the team up to speed with client requirements Decision-Making · Capable of making informed and effective decisions, considering both short-term and long-term impacts · Insight into resource allocation and sprint building for various projects Team Building · Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members Code reviews · Troubleshooting, weekly code reviews and feature documentation and versioning, and standards improvement Improving team efficiency · Research and integrate AI-powered development tools (GitHub Copilot, Amazon Code Whisperer) Added advantage points Regulatory compliances · Experience with HIPAA, GDPR compliant software and data storage systems · Experience in working with PII data and analytical data in highly regulated domains (finance, healthcare, and others) · Understanding of healthcare data standards and codes (FHIR, SNOMED) for data engineering AI safety measures · Knowledge of privacy protection and anti-data leakage practices in AI deployments Interested candidates can share the updated resumes to below mentioned ID. Contact Person - Janani Santhosh - Senior HR Professional Email ID - careers@plenome.com
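The RAG responsibility this listing mentions can be illustrated with a minimal sketch. This is a hedged toy example, not Plenome's implementation: it substitutes a bag-of-words cosine similarity for a real embedding model and vector database (such as the LangChain or LlamaIndex stacks named above), and the clinical snippets and function names are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical clinical snippets standing in for a vector database.
DOCS = [
    "metformin is a first-line treatment for type 2 diabetes",
    "amoxicillin is a penicillin antibiotic used for bacterial infections",
    "ibuprofen is a nonsteroidal anti-inflammatory drug for pain and fever",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a retrieval-augmented prompt; the LLM call itself is omitted."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("what treats type 2 diabetes?"))
```

A production pipeline would replace `embed` with a learned embedding model and `DOCS` with a vector store, but the retrieve-then-prompt flow is the same.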

Posted 1 day ago

Apply

5.0 years

20 - 25 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Experience: 5 to 12 Years

Requirements
· Bachelor's degree in Computer Science, Information Technology, or a related field.
· 5+ years of proven work experience as a Front End Developer or in a similar role.
· Extensive experience with HTML, CSS, and JavaScript.
· Strong understanding of server-side CSS pre-processing platforms, such as SASS or LESS.
· Proficiency in React.js, Next.js, Node.js, and other relevant web development technologies.
· Familiarity with code versioning tools such as Git.
· Experience building and deploying applications on cloud platforms such as AWS or Azure.
· Solid understanding of web application development processes, from the layout/user interface to relational database structures.
· Excellent problem-solving skills and a proactive approach to finding solutions.
· Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.

Preferred Qualifications
· Experience with responsive and adaptive design.
· Familiarity with RESTful and GraphQL APIs.
· Knowledge of UI/UX best practices and standards.
· Previous experience in an Agile development environment.
· Understanding of SEO principles and ensuring that applications adhere to them.

What We're Looking For
· Designing and developing high-volume, low-latency applications for mission-critical systems, delivering high availability and performance.
· Collaborating with cross-functional teams to define, design, and ship new features.
· Ensuring the best possible performance, quality, and responsiveness of applications.
· Identifying and correcting bottlenecks and fixing bugs.
· Maintaining code integrity and organization.
· Implementing security and data protection measures.
· Staying up to date with emerging technologies and industry trends.
· Mentoring and providing guidance to junior developers.
Skills: Next.js, Azure, SEO principles, RESTful APIs, AWS, React.js, UI/UX best practices, CSS, JavaScript, SASS, Agile development, GraphQL, Git, LESS, Node.js, HTML, NestJS

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Lilly's Purpose

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our 42,000+ employees across the globe work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We're looking for people who are determined to make life better for people around the world.

Come build advanced software capabilities to accelerate our digital transformation and support Lilly's evolution to be the leader in pharma-tech!

The Role

The Software Product Engineering (SPE) organization is actively looking for a Senior Quality Engineer with strong hands-on experience in AI platform testing, chatbot testing, AI model validation, agent testing, AI test automation, and API testing. This is a highly specialized role focused on validating complex AI and ML systems and ensuring scalable, safe, and effective deployment of AI-based solutions. UI automation experience with tools such as Selenium, Cypress, or Playwright is desirable as a secondary skill.

What You'll Be Doing

You will drive quality engineering initiatives focused specifically on AI-powered platforms and solutions, including LLMs, chatbots, AI agents, and intelligent workflows. You'll build robust test strategies and frameworks to validate data pipelines, model inference accuracy, prompt engineering, hallucination control, API contracts, and performance under real-world conditions. This role requires strong analytical and problem-solving skills, a deep understanding of AI systems testing, and the ability to collaborate across multidisciplinary teams such as SWE, SRE, ML Engineering, and Product.

Key Responsibilities

AI Platform & Model Testing (Primary Focus)
· Validate the behavior and performance of AI/ML models, including LLMs, RAG pipelines, chatbots, and autonomous agents.
· Design and execute test scenarios for prompt evaluation, response accuracy, toxicity detection, and hallucination control.
· Implement and enhance automated AI testing frameworks tailored to model versioning, retraining, and feedback loops.
· Ensure quality in human-in-the-loop (HITL) and continuous learning pipelines.

API Testing
· Conduct thorough API validation using Postman, REST Assured, or GraphQL tooling, with a focus on AI service endpoints, inference APIs, and orchestrators.
· Build robust integration test suites to ensure seamless functionality between APIs and the underlying AI systems.

AI Test Automation
· Build test harnesses to validate AI features through synthetic data, mock services, and model stubs.
· Integrate test suites into CI/CD pipelines to ensure continuous validation of AI behaviors.

UI and Functional Test Automation (Secondary Focus)
· Support end-to-end automation of AI-powered applications using tools such as Selenium, Cypress, Playwright, and WebdriverIO.
· Automate critical user journeys involving AI-enabled decisions and interactions.

Collaboration & Test Strategy
· Work closely with ML Engineers, SREs, and Product Managers to translate model designs into testable components.
· Monitor AI behavior in production using observability tools and adjust quality strategies based on live insights.
· Drive discussions on fairness, bias, explainability, and model drift.

Agile & DevOps Integration
· Participate in Agile ceremonies and actively contribute to sprint planning, test case reviews, and retrospectives.
· Collaborate with DevOps teams to embed AI testing into CI/CD workflows using tools such as GitHub, Jenkins, and Azure DevOps.

Required Technical Skills & Qualifications
· Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, or a related field
· 6+ years of experience in Quality Engineering, with at least 2 years in AI platform testing or model validation
· Hands-on experience in AI model testing, chatbot testing, prompt tuning, or agent workflows
· Proficiency in AI test automation and API testing tools (Postman, REST Assured, GraphQL)
· Working knowledge of Python, JavaScript, or TypeScript
· Experience integrating tests into CI/CD pipelines using GitHub, Jenkins, or Azure DevOps
· Knowledge of OpenAI, Bedrock, Anthropic, LangChain, RAG, and vector stores
· Understanding of LLM evaluation techniques, including metrics such as BLEU, ROUGE, Toxicity Score, and RAGAS

Preferred Qualifications
· Experience testing AI applications hosted in multi-geography, cloud-native environments (e.g., AWS, GCP, Azure)
· Exposure to AI observability platforms such as Weights & Biases, Arize AI, or WhyLabs
· Understanding of prompt engineering, embedding quality, and tokenization behavior
· Familiarity with security, performance, or accessibility testing
· Experience with AI governance frameworks and regulatory compliance (e.g., FDA and HIPAA in AI contexts)

Lilly is dedicated to helping individuals with disabilities actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this form is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response.

Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability, or any other legally protected status.

#WeAreLilly
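The hallucination-control and model-stub responsibilities described in this listing can be sketched with a small, hedged example. The chatbot below is a stub standing in for a real inference endpoint, the grounding check is a simple token-overlap heuristic rather than a production hallucination detector, and all names and thresholds are illustrative; a real pipeline would use the evaluation tooling the listing mentions (e.g. RAGAS-style faithfulness metrics).

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounded_fraction(answer: str, context: str,
                      stopwords=frozenset({"the", "a", "an", "is", "of", "in"})) -> float:
    """Fraction of content words in the answer that also appear in the context.
    A low value is a crude signal of possible hallucination."""
    content = tokens(answer) - stopwords
    if not content:
        return 1.0
    return len(content & tokens(context)) / len(content)

def stub_chatbot(question: str, context: str) -> str:
    """Stand-in for a real LLM endpoint: returns a canned, context-derived answer."""
    if "aspirin" in question.lower():
        return "Aspirin is an antiplatelet drug described in the context."
    return "I cannot answer from the given context."

def check_response(question: str, context: str, threshold: float = 0.5) -> bool:
    """Test-harness check: the answer must be sufficiently grounded in the context."""
    answer = stub_chatbot(question, context)
    return grounded_fraction(answer, context) >= threshold

ctx = "Aspirin is an antiplatelet drug used to reduce the risk of heart attack."
print(check_response("What is aspirin?", ctx))
```

In a CI/CD pipeline, checks like `check_response` would run as assertions over a curated question/context suite on every model version, which is the "model stubs plus synthetic data" pattern the listing describes.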

Posted 1 day ago

Apply