2.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Data Engineer

About The Role
Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What’s In It For You
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance tuning, and automation skills
- Openness to evaluate and adopt emerging tools and languages as needed

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2–6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317426
Posted On: 2025-07-06
Location: Gurgaon, Haryana, India
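To make the responsibilities above concrete, a single pipeline stage of the kind the role describes (extraction with schema validation, logging for observability, and pytest-style unit tests) might look like the following. This is a hypothetical sketch, not code from the team: the function, schema, and field names are invented for illustration, and only the Python standard library is used.

```python
# Hypothetical sketch of one extraction-pipeline stage with its unit tests.
# REQUIRED_FIELDS and validate_record are illustrative names, not a real API.
import json
import logging

logger = logging.getLogger("extraction")

REQUIRED_FIELDS = ("id", "source", "payload")

def validate_record(raw: str) -> dict:
    """Parse one extracted record and enforce a minimal schema.

    Raises ValueError on malformed input so the surrounding task
    (e.g., a Celery worker) can decide whether to retry or dead-letter.
    """
    record = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        logger.warning("record rejected, missing fields: %s", missing)
        raise ValueError(f"missing fields: {missing}")
    return record

# pytest-style tests: plain functions with bare asserts, run via `pytest`
def test_valid_record_passes():
    raw = '{"id": 1, "source": "feed-a", "payload": {}}'
    assert validate_record(raw)["id"] == 1

def test_missing_field_is_rejected():
    try:
        validate_record('{"id": 2, "source": "feed-a"}')
    except ValueError as exc:
        assert "payload" in str(exc)
    else:
        assert False, "expected ValueError"
```

In a production setup of the sort listed in the requirements, a stage like this would typically be wrapped in a Celery task (with Redis as the broker) and scheduled from an Airflow DAG, with the same tests running in the Azure DevOps CI pipeline.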
Posted 2 weeks ago
5.0 years
1 - 2 Lacs
Bengaluru, Karnataka, India
On-site
Experience Level : 3–5 Years Location : Bengaluru, India About The Role We are looking for a driven and experienced Machine Learning Engineer to join our team and help push the boundaries of what’s possible with Large Language Models (LLMs) and intelligent agents. This is a hands-on role for someone with a strong background in LLM tooling, evaluation, and data engineering, and a deep appreciation for building reusable, scalable, and open solutions. You’ll work across the stack—from agent and tool design, model evaluation, and dataset construction to serving infrastructure and fine-tuning pipelines. We're especially excited about candidates who have made meaningful contributions to open-source LLM/AI infrastructure and want to build foundational systems used by others across the ecosystem. Key Responsibilities Design, build, and iterate on LLM-powered agents and tools, from prototypes to production. Develop robust evaluation frameworks, benchmark suites, and tools to systematically test LLM behaviors. Construct custom evaluation datasets, both synthetic and real-world, to validate model outputs at scale. Build scalable, production-grade data pipelines using Apache Spark or similar frameworks. Work on fine-tuning and training workflows for open-source and proprietary LLMs. Integrate and optimize inference using platforms like vLLM, llama.cpp, and related systems. Contribute to the development of applications, emphasizing composability, traceability, and modularity. Actively participate in and contribute to open-source projects within the LLM/agent ecosystem. Requirements Must-Have Skills: 3–5 years of experience in machine learning, with a strong focus on LLMs, agent design, or tool building. Demonstrable experience building LLM-based agents, including tool usage, planning, and memory systems. Proficiency in designing and implementing evaluation frameworks, metrics, and pipelines. 
Strong data engineering background, with hands-on experience in Apache Spark, Airflow, or similar tools. Familiarity with serving and inference systems like vLLM, llama.cpp, or TensorRT-LLM. Deep understanding of building componentized ML systems. Open-Source Contributions Proven track record of contributing to open-source repositories related to LLMs, agent frameworks, evaluation tools, or model training. Experience maintaining your own open-source libraries or tooling is a major plus. Strong Git/GitHub practices, code documentation, and collaborative PR workflows. You’ll be expected to build tools, frameworks, or agents that may be released back to the community when possible. Nice-to-Have Familiarity with LLM orchestration frameworks like LangChain, CrewAI/AutoGen, Haystack, or DSPy. Experience training or fine-tuning models using LoRA, PEFT, or full-scale distributed training. Experience deploying LLM applications at scale in cloud or containerized environments (e.g., AWS, Kubernetes, Docker). What We Offer The opportunity to work on state-of-the-art LLM and agent technologies. Encouragement and support for open-source contributions as part of your day-to-day. A fast-paced, collaborative, and research-focused environment. Influence over architectural decisions in a rapidly evolving space. To Apply Please submit your resume and links to your GitHub, open-source projects, or public technical writing (blog posts, talks, etc.)
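The evaluation-framework work this posting describes can be sketched minimally. This is an illustrative example only, not any framework the team uses: `EvalCase`, `run_eval`, the stubbed model, and the exact-match metric are all invented names, and real suites would add semantic and LLM-as-judge scorers.

```python
# Minimal sketch of an LLM evaluation harness (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str
    expected: str


def exact_match(prediction: str, expected: str) -> float:
    """Score 1.0 when the normalized prediction equals the reference."""
    return 1.0 if prediction.strip().lower() == expected.strip().lower() else 0.0


def run_eval(model_fn: Callable[[str], str], cases: List[EvalCase]) -> dict:
    """Run every case through the model and aggregate one metric."""
    scores = [exact_match(model_fn(c.prompt), c.expected) for c in cases]
    return {"n": len(scores), "exact_match": sum(scores) / len(scores)}


# A stubbed "model" stands in for a real LLM inference call.
cases = [EvalCase("capital of France?", "Paris"),
         EvalCase("2 + 2 = ?", "4")]
stub_model = lambda prompt: "Paris" if "France" in prompt else "5"
report = run_eval(stub_model, cases)
```

Swapping the stub for a real inference call and exact match for richer scorers is the usual path from a sketch like this to a benchmark suite.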
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
If you’re looking to gain real work experience and learn industry-leading workflows from senior software engineers at Microsoft, Google, Meta, and top unicorns, while working hands-on with production-scale projects, TechX’s Engineering Apprenticeship is for you. About TechX TechX bridges academic theory and industry practice. Our mission is to give you verified work experience, not just certificates, by embedding you in live codebases alongside ex-FAANG tech leads and senior architects. What You’ll Work On Own one focus area from design through deployment: Large-Scale Web Architecture Architect fault-tolerant, highly available systems Build efficient data pipelines and caching layers Tune performance under real user load Implement monitoring, logging, and alerting LLM Engineering Ingest and preprocess massive datasets Build and fine-tune transformer models Develop GPU/TPU training and inference workflows Deploy scalable inference endpoints with autoscaling Data Science & MLOps Craft end-to-end analytics pipelines (wrangling → modeling → viz) Train and validate ML models in production Set up CI/CD for data and model versioning Monitor model drift, performance, and costs How We Work Agile Scrum Meetings: Participate in sprint planning, daily stand-ups, and retrospectives with your mentor “team.” Hands-On Development: Push code, review PRs, and ship features in live repos. One-on-One Mentorship: Weekly pairing sessions with senior engineers who’ve shipped at scale. Code Reviews & Feedback: Get actionable guidance on design, code quality, testing, and CI/CD pipelines. Who Should Apply Recent CS/Engineering grads or career-changers craving more work experience Proficient in at least one backend language (C#, Java, Go, Python, etc.) Solid grasp of data structures, algorithms, and networking Self-motivated, able to commit ≈20 hrs/week Ready to learn FAANG-style best practices and workflows Why This Apprenticeship? 
Real Work Experience: Gain work experience that you can list on your resume. Industry Connections: Direct referrals and introductions to partner hiring teams. Ongoing Support: Program continues until you secure a full-time engineering role. Career Coaching: Built-in mock interviews, resume reviews, and job-search strategy. Program Details Type: Educational apprenticeship (not employment; no wages or benefits) Location: 100% Remote Duration: Until placement in a full-time role (average 3–6 months) Commitment: ≈20 hrs/week Spaces are limited; apply today to start writing code that matters and fast-track your engineering career!
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description JOB RESPONSIBILITY : Collaborate with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for AI/ML model integration and optimization. Design and implement robust, scalable, and enterprise-grade data pipelines to support state-of-the-art AI/ML models. Debug, optimize, and enhance machine learning models, ensuring quality assurance and performance improvements. Operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments. Design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engage in advanced prompt engineering and fine-tuning of large language models (LLMs), focusing on semantic retrieval and chatbot development. Document model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Research and implement cutting-edge LLM optimization techniques, such as quantization and knowledge distillation, ensuring efficient model performance and reduced computational costs. Collaborate closely with stakeholders to develop innovative and effective natural language processing solutions, specializing in text classification, sentiment analysis, and topic modeling. Stay up-to-date with industry trends and advancements in AI technologies, integrating new methodologies and frameworks to continually enhance the AI engineering function. 
Contribute to creating specialized AI solutions in healthcare, leveraging domain-specific knowledge for task adaptation and optimization. Qualifications: Minimum education: Bachelor's degree in any Engineering stream. Specialized training, certifications, and/or other special requirements: Nice to have. Preferred education: Computer Science. Experience: Minimum relevant experience - 4+ years in AI. Skills And Competencies: Technical Skills: Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow). Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques. Experience with big data processing using Spark for large-scale data analytics. Version control and experiment tracking using Git and MLflow. Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing. DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations. LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management. MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining. Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems. LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security. General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP.
Experience in creating LLD (low-level design) for the provided architecture. Experience working in microservices-based architectures. Technical Expertise: Strong mathematical foundation in statistics, probability, linear algebra, and optimization. Deep understanding of the ML and LLM development lifecycle, including fine-tuning and evaluation. Expertise in feature engineering, embedding optimization, and dimensionality reduction. Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing. Experience with RAG systems, vector databases, and semantic search implementation. Proficiency in LLM optimization techniques including quantization and knowledge distillation. Understanding of MLOps practices for model deployment and monitoring. Behavioral Competencies: Strong analytical thinking with ability to solve complex ML challenges. Excellent communication skills for presenting technical findings to diverse audiences. Experience translating business requirements into data science solutions. Project management skills for coordinating ML experiments and deployments. Strong collaboration abilities for working with cross-functional teams. Dedication to staying current with latest ML research and best practices. Ability to mentor and share knowledge with team members. (ref:hirist.tech)
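The quantization technique named in this posting reduces to simple arithmetic at its core. The sketch below shows symmetric int8 post-training quantization on a plain Python list; it is a toy under stated assumptions, since real frameworks quantize tensors per-channel with calibration data.

```python
# Toy symmetric int8 quantization: scale, round, clamp, then dequantize.
def quantize_int8(weights):
    # Map the largest magnitude onto the int8 range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate floats; precision lost to rounding stays lost.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

Storing `q` as int8 cuts memory roughly 4x versus float32, which is the motivation behind the LLM quantization work the posting mentions.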
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description Chatbot / Conversational AI Technical Leader Responsibilities Provide technical leadership in the chatbot/voicebot space. Implement chatbots for various business functions. Adapt quickly to changes in requirements and be willing to work with different technologies if required. Lead the effort to build, implement, and support the data infrastructure. Manage the Intelligent Automation engineer and vendor partners, prioritizing projects according to customer and internal needs, and developing top-quality data pipelines using industry best practices. Own most deliverables for the ITSM chatbot team from a delivery perspective. Lead cross-functional team members and stakeholders throughout projects of varying scope and complexity. Gather requirements from business and IT users and prepare effort estimates. Responsible for the design, development, and implementation of chatbot and voicebot agents using Azure cloud services and Genesys/Amazon Connect/SmartAssist/Avaya. Interface and liaise with both business partners and (potentially) external vendors. Required Skills And Qualifications At least 5 years of experience implementing chatbot technologies. Knowledgeable in basic concepts of NLP and NLU: intent classification, keyword/entity extraction, text similarity, text pre-processing, dialog flows, speech-to-text, text-to-speech, and telephony systems. Basic knowledge of machine learning concepts: training and accuracy evaluation. Development experience in NodeJS and REST services. Experience working in cloud environments such as Azure, AWS, Google Cloud Platform, or IBM Cloud. Ability to embed the chatbot in multiple channels such as website, SMS, email, Skype, Facebook Messenger, MS Teams, WhatsApp, etc. Knowledge of all phases of software development, including UI design and development, microservices design and development, relational and non-relational DBs, APIs and external integration, quality assurance, validation documentation, security, and infrastructure.
Knowledge on business functions and user stories, decompose them into technical specifications, and develop working application code for a cloud environment. Hands on experience in building applications using Java Script frameworks (NodeJS), AngularJS/ReactJS, SQL and No SQL Databases Experience with JSON Knowledge of analytics / visualization via dashboards and reporting tools Education Requirements Bachelor's degree in Computer Science, Engineering, Statistics, Technical Science, or 3+ years of IT/Programming experience Minimum 2 years of experience in solutioning for Artificial Intelligence use cases, plus web application development & systems integration experience e.g. REST/SOAP Prior solutioning experience with No SQL databases, integrating unstructured data Preferred Skills Hands on experience in one or more of the following AI technologies: Language – Natural Language Processing, Natural Language Understanding, Speech to Text, Text to Speech, Sentiment Analysis, Language Detection, Classification, Telephony channel experience AI solutions – Virtual Agents, intelligent case processing, Video Analytics, inference engines, stream monitoring, intelligent search, ontologies/knowledge representations, voice technologies (Speech To Text & Text To Speech), Custom Language Model creation. Knowledge and experience in some of the key AI platforms, e.g., Kore.ai, Servicenow VA, IBM Watson, Microsoft Azure Cognitive Services, Google Dialogflow Web UI and dashboard design experience Experience working in a DevOps environment, and using industry standard tools (GIT, JIRA) Able to explain technical concepts in a non-technical language Professional Skill Requirements Proven success in contributing to a team-oriented environment Proven ability to work creatively and analytically in a problem-solving environment
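The intent-classification and text-similarity concepts listed above can be illustrated with a keyword-overlap matcher. This is a deliberately minimal sketch: production bots use trained NLU models, and the intent names and keyword lists here are invented for illustration.

```python
# Toy intent classifier: cosine similarity between bag-of-words vectors.
from collections import Counter
from math import sqrt

INTENTS = {
    "reset_password": "forgot reset password login locked account",
    "order_status": "where order status track package delivery",
}

def _vec(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(utterance):
    # Pick the intent whose keyword bag is most similar to the utterance.
    scores = {name: _cosine(_vec(utterance), _vec(kw)) for name, kw in INTENTS.items()}
    return max(scores, key=scores.get)

intent = classify("I forgot my password and my account is locked")
```

Trained classifiers replace the keyword bags with learned representations, but the matching-by-similarity shape of the problem is the same.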
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Uber Eats is seeking a highly skilled and motivated Staff Data Scientist to join our Search Team. In this role, you will be a key driver in enhancing the search experience for millions of users across the globe. You'll bring deep statistical expertise to guide decision-making, improve product performance, and ensure our evaluations and insights are rooted in methodological rigor. What You'll Do Conduct robust statistical analyses on complex datasets to identify product opportunities, shape roadmap priorities, and optimize user experiences in search. Design and evaluate A/B tests and quasi-experiments, applying best practices in experimental methodology to ensure high-quality, unbiased insights. Build and maintain statistical frameworks for defining and measuring ground truth - identifying reliable signals to evaluate search relevance, personalization, and user satisfaction. Apply advanced sampling strategies to construct representative datasets for both offline and online evaluation pipelines, ensuring scalability and statistical power. Develop rigorous evaluation metrics that reflect real-world product performance and align closely with user and business goals. Lead initiatives to strengthen causal inference practices across the team, applying methods like matching, regression discontinuity, and difference-in-differences where appropriate. Partner with product managers, engineers, and other scientists to translate open-ended product questions into structured analytical approaches. Provide mentorship and technical leadership to other scientists, promoting a culture of statistical excellence and continuous learning. What You'll Need: M.S. or Bachelor's degree in Statistics, Economics, Mathematics, Operations Research, Computer Science, or a related quantitative field. 10+ years of industry experience in data science or applied analytics, ideally with experience in consumer products, search, or recommendation systems. 
Deep expertise in ground truth design, and evaluation methodologies for complex user-facing systems. Proven experience with statistical sampling techniques and working with offline evaluation pipelines to assess models and product performance. Strong fluency in experimentation design, causal inference, and observational data analysis. Proficiency with tools like SQL, Python, and R for data manipulation, modeling, and visualization. Excellent communication skills - able to present statistical findings clearly and influence product and engineering decisions through data. A strong product sense and the ability to balance analytical rigor with practical business impact.
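The A/B-test readouts this role designs often reduce to a two-proportion comparison. The sketch below implements a standard two-proportion z-test from the pooled standard error; the conversion numbers are made up, and real analyses also check power, multiple testing, and interference.

```python
# Two-proportion z-test for an A/B experiment readout (illustrative numbers).
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 200/10,000 converted; treatment: 260/10,000 converted.
z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
```

Here the 2.0% vs 2.6% lift is significant at the usual 5% level, which is the kind of call these readouts exist to make.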
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Description Symbiosis AI is a pioneering company transforming industries with advanced AI solutions. Our offerings include LLM Fusion for superior model orchestration, VectorStore for efficient vector embeddings storage, and scalable AI inference with InferGen and InferRAG. We also provide customized solutions for businesses, all designed to deliver unparalleled performance, scalability, and cost-effectiveness. Internship Details Duration: 3 Months Unpaid Internship Mode: Hybrid (2-3 days per week in-office at Horamavu, Bangalore) Schedule: Flexible timings, Monday-Saturday, Full-time role Role Description This is a full-time hybrid role for a Social Media Marketing Intern at Symbiosis AI, where your presence is required for 2-3 days per week at our office in Horamavu, Bangalore. The intern will be responsible for social media marketing, social media content creation, digital marketing, and communication tasks to support Symbiosis AI's online presence and brand promotion. Qualifications Social Media Marketing and Social Media Content Creation skills Digital Marketing and Marketing skills Strong communication skills Experience with social media platforms and content creation tools Creativity and ability to think outside the box Ability to work independently and remotely Pursuing or completed undergraduate studies in Marketing, Communication, or related field
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role We are looking for an exceptional data scientist to be a part of the CRM platform data science team. As a Data Scientist in this role, you will be responsible for leveraging data-driven insights and advanced analytics techniques to drive marketing strategies and optimize our product offerings. You will collaborate with cross-functional teams, including applied and data science, marketing, product management, and engineering, to develop data-driven solutions that enhance customer experiences and maximize business outcomes. What the Candidate Will Do Collaborate with measurement and optimization teams to design experiments and share readouts. Develop metrics and dashboards to monitor product performance, customer engagement, and conversion rates, and provide insights for continuous improvement. Collaborate with product management teams to prioritize product roadmap initiatives based on data-driven insights and market demand. Partner with internal stakeholders including Operations, Product, and the Marketing Technology team to develop marketing strategies and budget decisions based on data insights. Collaborate with marketing applied scientists on ML-centered initiatives for the comms and growth platform. Basic Qualifications M.S. or Bachelor's degree in Statistics, Economics, Machine Learning, Operations Research, or other quantitative fields. Knowledge of underlying mathematical foundations of statistics, optimization, economics, and analytics. Knowledge of experimental design and analysis.
Strong experience in data analysis, statistical modeling, and machine learning techniques, with a proven track record of solving complex business problems Meticulous attention to detail and rigorous data quality assessment to drive accurate and reliable insights Excellent communication skills and stakeholder management Advanced SQL expertise with a strong focus on time and space optimization Proficiency in Python, and experience working with data manipulation and analysis libraries (e.g., Pandas, NumPy, scikit-learn). Demonstrated experience working with big data frameworks (e.g. Hadoop, Spark) and prior experience in building long running / stable data pipelines Solid understanding of data visualization principles and experience with visualization tools (e.g., Tableau, Looker) to effectively communicate insights. Excellent judgment, critical-thinking, and decision-making skills Preferred Qualifications Prior experience in analysis of product usage patterns, root-cause determination, and customer feedback to identify opportunities for product enhancements & new feature development. Work experience in applying advanced analytics, applied science, or causal inference to marketing problems. Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations.
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Title: Senior Product Manager Location: Bangalore Reporting to: Senior Manager – Product & Data Science Purpose of the role The Global TestOps Team at Anheuser-Busch InBev (AB InBev) is responsible for driving a culture of experimentation and A/B Testing, enabling teams to make data-driven decisions with confidence. By implementing the right experimentation frameworks, TestOps ensures that hypotheses are measured in an unbiased way, isolating the impact of decisions which can fuel business growth. The TestOps Product aims to bring this centralized framework to teams across different business domains, to seamlessly design and analyze experiments. By embedding experimentation into regular decision-making, the product empowers ABInBev to drive continuous improvement, reduce risk and accelerate growth through relevant insights. We are seeking a strategic and analytical Senior Product Manager to lead the evolution of the TestOps Experimentation Platform, ensuring that the product’s capabilities are expanded to meet user needs and drive adoption. 
The ideal candidate will collaborate with cross-functional teams to enhance platform functionality, streamline experimentation workflows, and embed a culture of test-and-learn across the organization. Key Responsibilities User Personas & User Journeys: Detail out user personas and map out user journeys to enable change management and user adoption. Voice of the Customer: Deeply understand the users and act as their advocate within the organization. Product Research: Conduct research within similar products to identify opportunities where our product can be improved. Feedback Analysis: Build and implement a process to incorporate iterative user feedback. Opportunity Sizing: Conduct opportunity sizing for all domains to determine their potential value impact. Product Roadmap Development: Build use case level product roadmaps and align these with leadership to ensure strategic alignment and goal setting. Product Marketing: Create personalized product marketing and collateral plans in conjunction with product releases to drive user adoption and engagement. Product Documentation: Create easy-to-understand guides and product tutorials for every major feature and functionality. Qualifications Level Of Educational Attainment Required Bachelorʼs degree in business, Economics, Engineering, or a related field; an MBA from top colleges in India is a plus. Previous Work Experience Proven experience of 6+ years as a Product Manager, with a track record of successfully bringing innovative products to market. Preferred Skills Proven ability to manage cross-functional teams and lead projects to completion. Understanding of change management principles and practices. Proficiency in creating and executing user adoption strategies. Strong strategic thinking and analytical skills, with the ability to make data-driven decisions. Strong stakeholder management skills - comfortable working with both technical and business teams. 
Strong problem-solving, communication, and storytelling abilities. Experience and understanding of A/B testing, causal inference, and statistical experimentation. And above all of this, an undying love for beer! We dream big to create a future with more cheers.
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Cloud AI Engineer We're looking for a highly skilled and experienced Cloud AI Engineer to join our dynamic team. In this role, you'll be instrumental in designing, developing, and deploying cutting-edge artificial intelligence and machine learning solutions leveraging the full suite of Google Cloud Platform (GCP) services. Objectives of this role Lead the end-to-end development cycle of AI applications, from conceptualization and prototyping to deployment and optimization, with a core focus on LLM-driven solutions. Architect and implement highly performant and scalable AI services, effectively integrating with GCP's comprehensive AI/ML ecosystem. Collaborate closely with product managers, data scientists, and MLOps engineers to translate complex business requirements into tangible, AI-powered features. Continuously research and apply the latest advancements in LLM technology, prompt engineering, and AI frameworks to enhance application capabilities and performance. Responsibilities Develop and deploy production-grade AI applications and microservices primarily using Python and FastAPI, ensuring robust API design, security, and scalability. Design and implement end-to-end LLM pipelines, encompassing data ingestion, processing, model inference, and output generation. Utilize Google Cloud Platform (GCP) services extensively, including Vertex AI (Generative AI, Model Garden, Workbench), Cloud Functions, Cloud Run, Cloud Storage, and BigQuery, to build, train, and deploy LLMs and AI models. Expertly apply prompt engineering techniques and strategies to optimize LLM responses, manage context windows, and reduce hallucinations. Implement and manage embeddings and vector stores for efficient information retrieval and Retrieval-Augmented Generation (RAG) patterns. Work with advanced LLM orchestration frameworks such as LangChain, LangGraph, Google ADK, and CrewAI to build sophisticated multi-agent systems and complex AI workflows.
Integrate AI solutions with other enterprise systems and databases, ensuring seamless data flow and interoperability. Participate in code reviews, establish best practices for AI application development, and contribute to a culture of technical excellence. Keep abreast of the latest advancements in GCP AI/ML services and broader AI/ML technologies, evaluating and recommending new tools and approaches. Required Skills And Qualifications Two or more years of hands-on experience as an AI Engineer with a focus on building and deploying AI applications, particularly those involving Large Language Models (LLMs). Strong programming proficiency in Python, with significant experience in developing web APIs using FastAPI. Demonstrable expertise with Google Cloud Platform (GCP), specifically with services like Vertex AI (Generative AI, AI Platform), Cloud Run/Functions, and Cloud Storage. Proven experience in prompt engineering, including advanced techniques like few-shot learning, chain-of-thought prompting, and instruction tuning. Practical knowledge and application of embeddings and vector stores for semantic search and RAG architectures. Hands-on experience with at least one major LLM orchestration framework (e.g., LangChain, LangGraph, CrewAI). Solid understanding of software engineering principles, including API design, data structures, algorithms, and testing methodologies. Experience with version control systems (Git) and CI/CD pipelines. Preferred Skills And Qualifications Bachelor's or Master's degree in Computer Science. Good To Have Experience with MLOps practices for deploying, monitoring, and maintaining AI models in production. Understanding of distributed computing and data processing technologies. Contributions to open-source AI projects or a strong portfolio showcasing relevant AI/LLM applications. Excellent analytical and problem-solving skills with a keen attention to detail.
Strong communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders.
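The embeddings-and-vector-store RAG pattern this posting names can be sketched end to end at toy scale. Everything here is a stand-in: a bag-of-words vector replaces a real embedding model, an in-memory list replaces a vector store, and the documents are invented examples.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': bag-of-words counts (real systems call an embedding model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble retrieved context into a grounded prompt for the LLM call."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Cloud Run deploys containers.",
        "BigQuery runs SQL analytics.",
        "Vertex AI hosts ML models."]
prompt = build_prompt("how do I deploy containers", docs)
```

In a production stack the same three steps (embed, nearest-neighbor search, context assembly) run against a managed vector index rather than a Python list.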
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Position Chevron invites applications for the role of AI/ML Engineer within our Enterprise AI team in India. This position is integral to designing and developing AI/ML models that significantly accelerate the delivery of business value. We are looking for a Machine Learning Engineer who brings expertise, an innovative attitude, and excitement for solving complex problems with modern technologies and approaches: those few individuals with a passion for exploring, innovating, and delivering Data Science solutions that provide immense value to our business. The expectation for this role is 5-10 years of relevant experience. Key Responsibilities Transform data science prototypes into appropriately scaled solutions in a production environment. Orchestrate and configure infrastructure that assists Data Scientists and analysts in building low-latency, scalable, and resilient machine learning and optimization workloads into an enterprise software product. Combine expertise in mathematics, statistics, computer science, and domain knowledge to create advanced AI/ML models. Collaborate closely with the AI Technical Manager, GCC Petro-technical professionals, and data engineers to integrate and scale models into the business framework.
Identify data, appropriate technology, and architectural design patterns to solve business challenges using Chevron-approved standard analytical tools and AI design patterns and architectures. Partner with Data Scientists and Chevron IT foundational services to implement complex algorithms and models in enterprise-scale machine learning pipelines. Run machine learning experiments and fine-tune algorithms to ensure optimal performance. Consistently deliver complex, innovative, and complete solutions that simplify business processes and workflows to drive business value, driving them through design, planning, development, and deployment. Work collaboratively with a large variety of different teams, including data scientists, data engineers, and solution architects from various organizations within business units and IT. Required Qualifications Minimum 5 years' experience in Object-Oriented Design and/or Functional Programming in Python. 5-10 years of experience. Mature software engineering skills, such as source control versioning, requirement specification, architecture and design review, testing methodologies, CI/CD, etc. Must have a disciplined, methodical, minimalist approach to designing and constructing layered software components that can be embedded within larger frameworks or applications. Experience implementing machine learning frameworks and libraries such as MLflow. Experience with containers and container management (Docker, Kubernetes). Experience developing cloud-first solutions using Microsoft Azure services, including building machine learning pipelines in Azure Machine Learning and/or Fabric. Hands-on experience in deploying machine learning pipelines with the Azure Machine Learning SDK. Working knowledge of mathematics (primarily linear algebra, probability, statistics) and algorithms.
- Proficiency orchestrating large-scale ML/DL jobs, leveraging big data tooling and modern container orchestration infrastructure to tackle distributed training and massively parallel model executions on cloud infrastructure
- Experience designing custom APIs for machine learning model training and inference, and designing, implementing, and delivering frameworks for MLOps
- Experience with model lifecycle management and automation to support retraining and model monitoring
- Experience implementing and incorporating ML models on unstructured data using cognitive services and/or computer vision as part of AI solutions and workflows
- History of working with large-scale model optimization and hyperparameter tuning applied to ML/DL models
- Knowledge of enterprise SaaS complexities, including security/access control, scalability, high availability, concurrency, online diagnosis, deployment, upgrade/migration, internationalization, and production support
- Knowledge of data engineering and transformation tools and patterns such as Databricks, Spark, and Azure Data Factory
- Ability to engage other technical experts at all organizational levels and assess opportunities to apply machine learning and analytics to improve business workflows and deliver information and insight that support business decisions
- Ability to communicate in a clear, concise, and understandable manner, both orally and in writing

Chevron ENGINE supports global operations across the world, so employee work hours are aligned to business requirements. The standard work week is Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm. Chevron participates in E-Verify in certain locations as required by law.
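The model monitoring and retraining automation called out above can be illustrated with a minimal drift check. The score, threshold, and data below are invented for the sketch:

```python
# Hypothetical sketch of an automated drift check that could gate model
# retraining; the 0.5 threshold and sample values are illustrative assumptions.
from statistics import mean, pstdev

def drift_score(baseline, live):
    """Absolute shift of the live mean, measured in baseline standard deviations."""
    sd = pstdev(baseline) or 1.0
    return abs(mean(live) - mean(baseline)) / sd

def needs_retraining(baseline, live, threshold=0.5):
    return drift_score(baseline, live) > threshold

baseline = [0.9, 1.1, 1.0, 0.95, 1.05]
print(needs_retraining(baseline, [1.0, 0.98, 1.02]))  # small shift: no retrain
print(needs_retraining(baseline, [2.1, 2.0, 1.9]))    # large shift: retrain
```

In practice the same trigger pattern sits behind a scheduled job (e.g., a Lambda or pipeline step) that kicks off retraining when the score crosses the threshold.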
Posted 2 weeks ago
10.0 years
0 Lacs
Greater Hyderabad Area
On-site
Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products provides solutions to our customers that help them better manage their business, boost their productivity and efficiency, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. We are headquartered in Denver, Colorado, with offices across the U.S., Canada, and India.

Job Description
We are seeking a highly skilled Sr. Tech Lead - Full Stack (Java & React) for the Certificate product development. This role requires in-depth knowledge of Java for back-end development and React for front-end development, with the ability to work across the full stack. As a Sr. Tech Lead, you will drive technical direction, oversee development, mentor a team of developers, and ensure that solutions are scalable, performant, and meet high-quality standards.

Key Responsibilities
Essential job functions include but are not limited to the following:
- Technical Leadership: Lead and oversee the design, development, and implementation of full-stack solutions for the Certificate project using Java and React.
- Architecture & Design: Define and guide the architectural patterns, including microservices architecture for the back end and component-based architecture for the front end.
- Full-Stack Development: Participate in hands-on coding across the front-end (React) and back-end (Java/Spring Boot) layers, ensuring code quality, scalability, and maintainability.
- Collaboration: Work closely with cross-functional teams, including product managers, UI/UX designers, QA engineers, and DevOps teams, to deliver high-quality software.
- Code Reviews & Best Practices: Conduct code reviews, enforce coding standards, and ensure adherence to best practices, design patterns, and performance optimizations.
- Mentorship: Mentor and guide junior and mid-level developers, providing technical oversight and problem-solving assistance.
- DevOps & Deployment: Collaborate with DevOps teams to define deployment pipelines, CI/CD practices, and automation to improve the efficiency of development and delivery.
- Innovation & Continuous Improvement: Stay updated with the latest in full-stack technologies and frameworks and integrate new practices into the project.

Required Technical Skills

Back-End Development (Java):
- Java (8+): Expertise in Java 8+ and its modern features (e.g., Streams, lambda expressions, Optional, functional interfaces). Hands-on experience building enterprise-grade applications using Java.
- Spring Framework: Proficiency in Spring Boot for building microservices and RESTful APIs. Experience with Spring Core, Spring MVC, Spring Data, and Spring Security. Understanding of dependency injection and aspect-oriented programming (AOP).
- Microservices Architecture: Expertise in designing and developing microservices using Spring Boot, adhering to best practices like service discovery, load balancing, and fault tolerance. Familiarity with tools like Spring Cloud, Netflix OSS, and Kubernetes for microservices orchestration.
- Kafka (Event Streaming & Messaging): Experience with Apache Kafka for building distributed, event-driven systems. Proficiency in using Kafka for real-time data streaming, event sourcing, and message-based communication between microservices.
- Database Management: Strong knowledge of SQL/NoSQL databases like MySQL, PostgreSQL, MongoDB, or Cassandra.
- Experience with JPA/Hibernate for ORM, and an understanding of database optimization techniques, query performance tuning, and efficient data model design.
- APIs & Integrations: Proficiency in designing RESTful APIs and working with API specification and documentation tools like Swagger/OpenAPI. Experience with OAuth 2.0 and JWT for authentication and authorization mechanisms.

Front-End Development (React/Angular):
- React/Angular: Strong knowledge of React, including new features like Concurrent Rendering and Automatic Batching. Expertise in building and optimizing applications with React functional components and leveraging React Hooks for state and side-effect management (e.g., useState, useEffect, useCallback, useMemo).
- Providers & Context API: Experience setting up Context Providers and effectively using the React Context API for managing and passing data across component trees without prop drilling.
- React Query: Proficiency in using React Query for efficient data fetching, caching, and synchronizing server state in React applications.
- Formik & Yup Validation: Strong experience in building and managing forms in React using Formik, including handling complex form states and validations. Expertise in using Yup for schema-based validation of forms, ensuring comprehensive client-side form validation.
- TypeScript: Strong hands-on experience with TypeScript for building type-safe React applications. Deep understanding of TypeScript features like interfaces, generics, type inference, and modules for ensuring code reliability and scalability.
- HTML/CSS & UI Frameworks: Strong understanding of semantic HTML and modern CSS for responsive web design. Experience with Material UI and Tailwind CSS for building modern, user-friendly, and scalable UI components.
Full-Stack Development:
- Node.js & Express (optional but preferred): Familiarity with Node.js and Express for building lightweight back-end services or integrating with Java-based microservices.
- Version Control & CI/CD: Proficiency with Git and branching strategies such as GitFlow. Experience setting up CI/CD pipelines using tools like Jenkins or GitLab CI, ensuring automated testing and seamless deployments.
- Testing: Familiarity with Playwright or Selenium for end-to-end testing.

Performance & Security:
- Performance Optimization: Experience optimizing application performance, including JVM tuning, caching strategies, and improving query performance in databases.
- Security Best Practices: Strong understanding of security best practices for both front end and back end, including secure coding, protecting APIs, and using libraries like Spring Security for authentication and authorization.

Soft Skills:
- Leadership & Communication: Strong leadership and decision-making abilities, with a proven track record of leading teams in delivering complex, high-quality software solutions. Excellent communication skills to work effectively with global teams and non-technical stakeholders.
- Problem-Solving & Innovation: Ability to approach problems creatively and analytically, ensuring continuous improvement in the software development lifecycle.
- Collaboration Tools: Proficiency with tools like JIRA and Teams to manage Agile workflows and communicate effectively across global teams.
- Agile Methodologies: Experience working in Agile/Scrum environments, actively participating in sprint planning, retrospectives, and iterative development cycles.

Qualifications
- Bachelor's or master's degree in computer science, engineering, or a related field.
- Have an intense passion for building software and mentoring teams.
- Are very strong in both design and implementation at an enterprise level.
- 10-15 years of professional experience with the above-mentioned tech stack.
- Are in tune with high-performance and high-availability design and programming.
- Have experience in security best practices for software and data platforms.
- Design "perfect-fit" solutions with engineering teams and business units to drive business growth and customer delight.
- Enjoy solving problems through the entire application stack.
- Are interested in and capable of learning other programming languages as needed.
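As one concrete illustration of the OAuth 2.0/JWT requirement above, here is a minimal HS256 sign-and-verify sketch. It is written in Python for brevity rather than the Java stack this role uses, and is no substitute for a maintained JWT library:

```python
# Minimal HS256 JWT sign/verify using only the standard library.
# Real services should use a maintained library (jjwt, Nimbus, PyJWT, etc.).
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    msg = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    msg = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": "user-42"}, b"secret")
print(verify(token, b"secret"))        # True
print(verify(token, b"wrong-secret"))  # False
```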
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Scope:
Lead the research, design, and development of advanced AI and ML models powering a cutting-edge AI-driven no-code development platform and a scalable AI inference and training orchestration system. Responsible for building scalable ML pipelines, optimizing models for production, mentoring team members, and translating research innovations into impactful product features aligned with business goals.

Job Responsibilities:
• Design and implement state-of-the-art machine learning and deep learning models for NLP, computer vision, and generative AI relevant to no-code AI coding and AI orchestration platforms.
• Develop, optimize, and fine-tune large-scale models, including transformer-based architectures and generative models.
• Architect and manage end-to-end machine learning pipelines: data processing, training, evaluation, deployment, and continuous monitoring.
• Collaborate closely with software engineering teams to productionize models, ensuring reliability, scalability, and performance.
• Research and integrate cutting-edge AI techniques and algorithms to maintain product competitiveness.
• Lead AI research efforts contributing to intellectual property generation, patents, and academic publications.
• Provide technical leadership and mentorship to junior AI/ML team members.
• Collaborate cross-functionally with product managers, UX designers, and engineers to deliver AI-powered product features.
• Maintain up-to-date knowledge of AI research trends and technologies, assessing their applicability.
• Ensure compliance with data privacy and security standards in AI model development.

Good to have skills:
• Experience with AI-driven no-code platforms or automated code generation.
• Familiarity with AI workflow orchestration frameworks like LangChain, CrewAI, or similar.
• Knowledge of probabilistic modeling and uncertainty quantification.
• Hands-on experience with MLOps tools and practices, including CI/CD, model versioning, and monitoring.
• Familiarity with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
• Contributions to open-source AI projects or patent filings.
• Understanding of AI ethics and data privacy (GDPR, SOC 2) compliance.
• Strong academic research background with publications in top-tier AI/ML conferences.

Qualification and Experience:
• PhD in Computer Science, Electrical Engineering, Statistics, Mathematics, or related fields with a specialization in Artificial Intelligence, Machine Learning, or Deep Learning.
• Strong research publication record in reputed AI/ML conferences (NeurIPS, ICML, ICLR, CVPR, ACL).
• Demonstrated experience in developing and deploying deep learning models, including transformers, CNNs, RNNs, GNNs, and generative AI models.
• Proven skills in NLP and/or computer vision.
• Hands-on experience with Python and ML frameworks such as PyTorch, TensorFlow, and JAX.
• Experience building scalable ML pipelines and applying MLOps best practices.
• Knowledge of distributed training, GPU acceleration, and cloud infrastructure is highly desirable.
• Excellent problem-solving, analytical, and communication skills.
• Experience mentoring or leading junior AI researchers/engineers is a plus.
• Prior exposure to AI-driven no-code platforms, AI orchestration frameworks, or automated code generation technologies is beneficial.
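The transformer-based architectures mentioned above are built around scaled dot-product attention, which can be sketched in a few lines (toy shapes and random data, NumPy assumed available):

```python
# Scaled dot-product attention on toy-sized inputs: each query row attends
# over the key/value rows and returns a weighted mix of the values.
import numpy as np

def attention(q, k, v):
    scores = q @ k.T / np.sqrt(k.shape[-1])            # similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v

q = np.random.default_rng(0).normal(size=(2, 4))  # 2 queries, dim 4
k = np.random.default_rng(1).normal(size=(3, 4))  # 3 keys
v = np.random.default_rng(2).normal(size=(3, 4))  # 3 values
out = attention(q, k, v)
print(out.shape)  # (2, 4)
```

A full transformer layer adds learned projections, multiple heads, and feed-forward blocks around this core operation.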
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
We run a recruitment agency and help our clients hire top 1% talent. We are looking for a Statistician to write, review, and validate prompt-based questions designed to train AI.

Overview
💲 Pay – $50-$70 per hour
🕒 Flexible workload: 10–20 hours per week, with potential to increase to 40 hours
🌍 Fully remote and asynchronous—work on your own schedule
📅 Minimum duration: 1–2 months, with potential for extension

Your expertise in statistical modeling, inference, probability, experimental design, and data interpretation will ensure each prompt and response is analytically rigorous and educationally sound. We welcome statisticians from academic, governmental, healthcare, or industry backgrounds to help create training materials that reflect real-world data challenges and methodology. You'll have the opportunity to apply your statistical judgment to shape how advanced AI models interpret and generate quantitative insights.

You are a good fit if you:
- Have 3+ years of industry or academic experience applying statistics in real-world scenarios.
- Have a bachelor's degree in statistics, applied mathematics, or a related field.
- Have a strong background in one or more of the following areas: descriptive and inferential statistics, hypothesis testing and confidence intervals, linear and generalized linear models, survey sampling and experimental design, or multivariate analysis/Bayesian inference/survival analysis.
- Demonstrate strong analytical reasoning and excellent verbal and written communication skills.
- Have a keen attention to detail.

Preferred Qualifications
- Proficiency in R, Python, or SAS is highly encouraged.
- Experience with real-world datasets is a plus.
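As a flavor of the hypothesis-testing and confidence-interval material such prompts would cover, a normal-approximation interval for a sample mean looks like this (sample values invented):

```python
# 95% normal-approximation confidence interval for a sample mean,
# standard library only; z = 1.96 is the usual two-sided 95% quantile.
from math import sqrt
from statistics import mean, stdev

def mean_ci(sample, z=1.96):
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))  # standard error of the mean
    return m - z * se, m + z * se

lo, hi = mean_ci([12, 15, 14, 13, 16, 14, 15])
print(round(lo, 2), round(hi, 2))  # interval straddling the sample mean of ~14.14
```

For small samples like this one, a careful prompt would ask for the t-distribution quantile instead of the fixed z value.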
Posted 2 weeks ago
0 years
0 - 0 Lacs
Cochin
Remote
https://pmspace.ai/

Company Profile:
At pmspace.ai, we're building next-generation AI tools for project management intelligence. Our platform leverages graph databases, NLP, and large language models (LLMs) to transform complex project data into actionable insights. Join us to pioneer cutting-edge solutions in a fast-paced, collaborative environment.

Role Overview
We seek a Python Developer with expertise in graph databases (Neo4j), RAG pipelines, and vLLM optimization. You'll design scalable AI systems, enhance retrieval-augmented workflows, and deploy high-performance language models to power our project analytics engine.

Key Responsibilities
- Architect and optimize graph database systems (Neo4j) to model project knowledge networks and relationships.
- Build end-to-end RAG (Retrieval-Augmented Generation) pipelines for context-aware AI responses.
- Implement and fine-tune vLLM for efficient inference of large language models (LLMs).
- Develop Python-based microservices for data ingestion, processing, and API integrations (FastAPI, Flask).
- Collaborate with ML engineers to deploy transformer models (e.g., BERT, GPT variants) and vector databases.
- Monitor system performance, conduct A/B tests, and ensure low-latency responses in production.

Required Skills
- Proficiency in Python and AI/ML libraries (PyTorch, TensorFlow, Hugging Face Transformers).
- Hands-on experience with graph databases, especially Neo4j (Cypher queries, graph algorithms).
- Demonstrated work on RAG pipelines (retrieval, reranking, generation) using frameworks like LangChain or LlamaIndex.
- Experience with vLLM or similar LLM optimization tools (quantization, distributed inference).
- Knowledge of vector databases (e.g., FAISS, Pinecone) and embedding techniques.
- Familiarity with cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).

Job Type: Full-time
Pay: ₹5,000.00 - ₹7,000.00 per month
Schedule: Day shift
Work Location: Remote
Expected Start Date: 01/08/2025
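The retrieval step of the RAG pipelines described above reduces to a toy sketch: cosine similarity over hand-made 3-dimensional "embeddings". A real system would use a vector store such as FAISS and learned embedding models; the document ids and vectors here are invented:

```python
# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, docs, top_k=2):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:top_k]]

docs = [
    {"id": "schedule", "vec": [0.9, 0.1, 0.0]},
    {"id": "budget",   "vec": [0.1, 0.9, 0.0]},
    {"id": "risks",    "vec": [0.0, 0.2, 0.9]},
]
print(retrieve([1.0, 0.2, 0.1], docs))  # → ['schedule', 'budget']
```

The retrieved chunks would then be reranked and stuffed into the LLM prompt in the generation step.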
Posted 2 weeks ago
4.0 years
40 - 50 Lacs
Gurugram, Haryana, India
Remote
Experience: 4+ years
Salary: INR 4000000-5000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch

Crop.Photo is looking for:
Our engineers don't just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges — this role is for you. You'll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale.

What You'll Do
- Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, out-painting, and GenAI workflows for background fills.
- Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines.
- Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch.
- Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision.
- Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
- Frame problems when specs are vague — you'll help define what "good" looks like, and then build it.
- Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain.

What You'll Need
- 2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
- Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
- 1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and composing multi-modal pipelines with them.
- Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing.
- Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement.
- Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks.
- Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
- Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda).
- A customer-centric approach — you think about how your work affects end users and the product experience, not just model performance.
- A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they're truly fixed.
- The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket.

Who You Are
- You've built systems — not just prototypes.
- You care about both ML results and the system's behavior in production.
- You're comfortable taking a rough business goal and shaping the technical path to get there.
- You're energized by product-focused AI work — things that users feel and rely on.
- You've worked in, or want to work in, a startup-grade environment: messy, fast, and impactful.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
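The pixel-level transformations described above often start from simple array slicing. Here is a minimal center-crop sketch in NumPy; a production "smart crop" would derive the box from a detection model rather than the image center, and the frame here is a dummy array:

```python
# Fixed-size center crop via NumPy slicing on an H x W x C image array.
import numpy as np

def center_crop(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    h, w = img.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]

img = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy 480x640 RGB frame
crop = center_crop(img, 224, 224)
print(crop.shape)  # (224, 224, 3)
```

Swapping the center box for a detector-predicted bounding box turns the same slicing into subject-aware cropping.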
Posted 2 weeks ago
8.0 years
0 - 0 Lacs
Bengaluru
On-site
Senior Data Scientist

About the Role:
We are seeking a highly skilled Senior Data Scientist with expertise in Python, Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI), and Azure Cloud Services. The ideal candidate will be responsible for designing, developing, and deploying advanced AI/ML models to drive data-driven decision-making. This role requires strong analytical skills, proficiency in AI/ML technologies, and experience with cloud-based solutions.

Key Responsibilities:
· Design and develop ML, NLP, and GenAI models to solve complex business problems.
· Build, train, and optimize AI models using Python and relevant ML frameworks.
· Implement Azure AI/ML services for scalable deployment of models.
· Develop and integrate APIs for real-time model inference and decision-making.
· Work with large-scale data to extract insights and drive strategic initiatives.
· Collaborate with cross-functional teams, including Data Engineers, Software Engineers, and Product Teams, to integrate AI/ML solutions into applications.
· Implement CI/CD pipelines to automate model training, deployment, and monitoring.
· Ensure adherence to software engineering best practices and Agile methodologies in AI/ML projects.
· Stay updated on cutting-edge AI/ML advancements and continuously enhance models and algorithms.
· Conduct research on emerging AI/ML trends and contribute to the development of innovative solutions.
· Provide technical mentorship and guidance to junior data scientists.
· Optimize model performance and scalability in a production environment.

Required Skills & Qualifications:
· Proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
· Hands-on experience in NLP techniques, including transformers, embeddings, and text processing.
· Expertise in Generative AI models (GPT, BERT, LLMs, etc.).
· Strong knowledge of Azure AI/ML services, including Azure Machine Learning, Azure Cognitive Services, and Azure Databricks.
· Experience in developing APIs for model deployment and integration.
· Familiarity with CI/CD pipelines for AI/ML models.
· Strong understanding of software engineering principles and best practices.
· Experience working in an Agile development environment.
· Excellent problem-solving skills and ability to work in a fast-paced, dynamic environment.
· Strong background in statistical analysis, data mining, and data visualization.

Preferred Qualifications:
· Experience in MLOps and automation of model lifecycle management.
· Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
· Exposure to big data processing frameworks (Spark, Hadoop).
· Strong ability to communicate complex ideas to technical and non-technical stakeholders.
· Experience with Graph Neural Networks (GNNs) and recommendation systems.
· Familiarity with AutoML frameworks and hyperparameter tuning strategies.

Job Types: Full-time, Part-time, Permanent, Contractual / Temporary
Pay: ₹400.00 - ₹450.00 per hour
Schedule: Day shift
Experience:
Senior Data Scientist: 8 years (Required)
ML, NLP, and GenAI models: 8 years (Required)
Python: 8 years (Required)
Azure AI/ML services: 8 years (Required)
Work Location: In person
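The hyperparameter tuning mentioned above can be illustrated with the simplest strategy, an exhaustive grid search over a toy objective. The quadratic "loss" below stands in for a real validation metric, and the grid values are invented:

```python
# Toy grid search: evaluate every (lr, reg) pair and keep the best one.
from itertools import product

def loss(lr, reg):
    # Stand-in for a validation metric; minimum at lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.001, 0.01, 0.1]}
best = min(product(grid["lr"], grid["reg"]), key=lambda p: loss(*p))
print(best)  # → (0.1, 0.01)
```

AutoML tooling replaces the exhaustive loop with smarter search (random, Bayesian, Hyperband), but the evaluate-and-compare skeleton is the same.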
Posted 2 weeks ago
3.0 - 5.0 years
2 - 4 Lacs
Bengaluru
Remote
Way of working - Remote: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter.

Job Profile: Data Scientist II
Location: Bangalore | Karnataka
Years of Experience: 3-5

ABOUT THE TEAM & ROLE:
Data Science at Swiggy
Data science and applied ML are ingrained deeply in decision making and product development at Swiggy. Our data scientists work closely with cross-functional teams to ship end-to-end data products, from formulating the business problem in mathematical/ML terms to iterating on ML/DL methods to taking them to production. We own or co-own several initiatives with a direct line of sight to impact on customer experience as well as business metrics. We also encourage open sharing of ideas and publishing in internal and external avenues.

What will you get to do here?
- You will leverage your strong ML/DL/statistics background to build the next generation of ML-based solutions that improve the quality of ads recommendations, and leverage various optimization techniques to improve campaign performance.
- You will mine and extract relevant information from Swiggy's massive historical data to help ideate and identify solutions to business and CX problems.
- You will work closely with engineers/PMs/analysts on detailed requirements, technical designs, and implementation of end-to-end inference solutions at Swiggy scale.
- You will stay abreast of the latest ML research in ads bidding algorithms, recommendation systems, and related areas, and help adapt it to Swiggy's problem statements.
- You will publish and talk about your work in internal and external forums to both technical and lay audiences.
- You will work on challenging and impactful projects in the logistics domain, in a collaborative and supportive environment that fosters learning and growth.
- You will conduct data analysis and modeling to identify opportunities for optimization and automation.

What qualities are we looking for?
- Bachelor's or master's degree in a quantitative field with 3-5 years of industry/research lab experience
- Experience in generative AI, applied mathematics, machine learning, and statistics
- Required: Excellent problem-solving skills and the ability to deconstruct problems and formulate solutions from first principles
- Required: Depth and hands-on experience in applying ML/DL and statistical techniques to business problems
- Required: Strong proficiency in Python, SQL, Spark, and TensorFlow
- Required: Strong spoken and written communication skills
- Preferred: Experience working with big data and shipping ML/DL models to production
- Preferred: Experience in agentic AI, LLMs, and NLP; previous experience in deep learning, operations research, or startup/product-based consumer internet companies
- Big plus: Experience in the ecommerce and logistics space
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment

Visit our tech blogs to learn more about some of the challenges we deal with:
https://bytes.swiggy.com/the-swiggy-delivery-challenge-part-one-6a2abb4f82f6
https://bytes.swiggy.com/how-ai-at-swiggy-is-transforming-convenience-eae0a32055ae
https://bytes.swiggy.com/decoding-food-intelligence-at-swiggy-5011e21dbc86

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, disability status, or any other characteristic protected by the law.
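The ads-bidding work described above builds on primitives such as the second-price auction, sketched here with invented bids and ad ids:

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes truthful bidding the dominant strategy.
def second_price_auction(bids):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"ad_a": 4.0, "ad_b": 2.5, "ad_c": 3.0})
print(winner, price)  # → ad_a 3.0
```

Real ad systems layer predicted click-through rates, quality scores, and reserve prices on top of this basic mechanism.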
Posted 2 weeks ago
2.0 years
3 - 5 Lacs
Bengaluru
On-site
Company: AHIPL Agilon Health India Private Limited
Job Posting Location: India_Bangalore
Job Title: Prospective Chart Reviewer-6

Job Description:
Essential Job Functions:
- Perform pre-visit medical record reviews to identify chronic conditions reported in prior years
- Identify diagnoses that lack supporting documentation
- Prioritize clinical alerts and present those that are strongly suggestive of an underlying condition
- Present information to providers in a concise, complete manner
- All other duties as assigned

Other Job Functions:
- Understand, adhere to, and implement the Company's policies and procedures.
- Provide excellent customer service, including consistently displaying awareness of and sensitivity to the needs of internal and/or external clients, and proactively ensuring that these needs are met or exceeded.
- Take personal responsibility for personal growth, including acquiring new skills, knowledge, and information.
- Engage in excellent communication, which includes listening attentively and speaking professionally.
- Set and complete challenging goals.
- Demonstrate attention to detail and accuracy in work product by meeting productivity standards and maintaining a company standard of accuracy.

Qualifications:
Minimum Experience:
- 2+ years of clinical experience required
- Advanced level of clinical knowledge associated with chronic disease states required
- Relevant chart review experience required

Education/Licensure:
- Medical Doctor or Nurse required
- Coding certification through AHIMA or AAPC preferred

Skills and Abilities:
- Language Skills: Strong communication skills, both written and verbal, to work with multiple internal and external clients in a fast-paced environment
- Mathematical Skills: Ability to work with mathematical concepts such as probability and statistical inference, and to apply concepts such as fractions, percentages, ratios, and proportions to practical situations
- Reasoning Ability: Ability to apply principles of logical or scientific thinking to a wide range of intellectual and practical problems
- Computer Skills: Ability to create and maintain documents using Microsoft Office (Word, Excel, Outlook, PowerPoint)

Location: India_Bangalore
Posted 2 weeks ago
4.0 years
0 Lacs
Madurai
On-site
Job Location: Madurai
Job Experience: 4-15 Years
Model of Work: Work From Office
Technologies: Artificial Intelligence, Machine Learning
Functional Area: Software Development

Job Summary:
Job Title: ML Engineer – TechMango
Location: TechMango, Madurai
Experience: 4+ Years
Employment Type: Full-Time

Role Overview
We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments.

Core Technical Skills

Languages & Databases
- Programming language: Python
- Databases: SQL

Core Libraries & Tools
- Time series & forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
- Machine learning models: state-of-the-art ML models, including boosting and ensemble methods
- Model explainability: SHAP, LIME

Deep Learning & Data Processing
- Frameworks: PyTorch, PyTorch Forecasting
- Libraries: Pandas, NumPy, PySpark, Polars (optional)

Hyperparameter Tuning
- Tools: Optuna, Amazon SageMaker Automatic Model Tuning

Deployment & MLOps
- Model deployment: batch & real-time with API endpoints
- Experiment tracking: MLflow
- Model serving: TorchServe, SageMaker Endpoints / Batch

Containerization & Pipelines
- Containerization: Docker
- Orchestration: AWS Step Functions, SageMaker Pipelines

AWS Cloud Stack
- SageMaker (training, inference, tuning)
- S3 (data storage)
- CloudWatch (monitoring)
- Lambda (trigger-based inference)
- ECR / ECS / Fargate (container hosting)

Candidate Requirements
- Strong problem-solving and analytical mindset
- Hands-on experience with the end-to-end ML project lifecycle
- Familiarity with MLOps workflows in production environments
- Excellent communication and documentation skills
- Comfortable working in agile, cross-functional teams
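Any of the forecasting libraries listed above would typically be benchmarked against a simple baseline. A seasonal-naive forecast (repeat the value from one season earlier) is a common one, shown here with an invented weekly series:

```python
# Seasonal-naive baseline: each future step copies the observation from
# exactly one season earlier in the history.
def seasonal_naive(series, season, horizon):
    return [series[-season + (i % season)] for i in range(horizon)]

weekly = [10, 12, 11, 13, 15, 22, 25,   # week 1
          11, 12, 12, 14, 16, 23, 26]   # week 2
print(seasonal_naive(weekly, season=7, horizon=3))  # → [11, 12, 12]
```

If pmdarima or Prophet cannot beat this baseline on a held-out window, the extra model complexity is not paying for itself.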
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
**Who you are**

You've stepped beyond traditional QA: you test AI agents, not just UI clicks. You build automated tests that check for **hallucinations, bias, adversarial inputs**, prompt-chain integrity, model outputs, and multi-agent orchestration failures. You script Python tests and use Postman/Selenium/Playwright for UI/API testing, and JMeter or k6 for load testing. You understand vector databases and can test embedding correctness and data flows. You can ask, "What happens when two agents clash?" or "If one agent hijacks context, does the system fail?" and then write tests for those edge cases. You're cloud-savvy (Azure or AWS) and integrate tests into CI/CD. You debug failures in agent-manager systems and help triage model-logic versus infrastructure issues. You take ownership of AI test quality end-to-end.

---

**What you'll actually do**

You'll design **component and end-to-end tests** for multi-agent GenAI workflows (e.g., planner, execution, and reporting agents). You'll script pytest, Postman, and Playwright suites that test API functionality, failover logic, agent coordination, and prompt chaining. You'll simulate coordination failures, misalignment, and hallucinations in agent dialogues. You'll run load tests on LLM endpoints and track latency and cost. You'll validate that vector DB pipelines (Milvus/FAISS/Pinecone) return accurate embeddings and retrieval results. You'll build CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins) that gate merges on model quality thresholds. You'll implement drift, bias, and hallucination metrics, and create dashboards for QA monitoring. You'll occasionally run human-in-the-loop sanity checks for critical agent behavior. You'll write guides so others understand how to test GenAI pipelines.
---

**Skills and knowledge**

• Python automation: pytest/unittest for component and agent testing
• Postman/Newman and Selenium/Playwright/Cypress for UI/API test flows
• Load/performance tools: JMeter and k6 for inference endpoints
• SQL/NoSQL and data validation for vector DB pipelines
• Vector DB testing: Milvus, FAISS, Pinecone embedding/retrieval accuracy
• GenAI evaluation: hallucinations, bias/fairness, embedding similarity, text-overlap metrics (BLEU, ROUGE), adversarial/prompt-injection testing
• Multi-agent testing: component/unit tests per agent, inter-agent communication, coordination-failure tests, message-passing or blackboard patterns, emergent-behavior monitoring
• CI/CD integration: Azure DevOps/GitHub Actions/Jenkins pipelines, gating on quality metrics
• Cloud awareness: testing in Azure/AWS/GCP, GenAI endpoint orchestration and failure-mode testing
• Monitoring & observability: drift, latency, and hallucination-rate dashboards
• Soft traits: detail-oriented, QA mindset, self-driven, cross-functional communicator, ethical awareness around AI failures
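As a concrete illustration of the hallucination checks this role describes, here is a minimal sketch of a token-overlap grounding check of the kind a pytest suite might assert on: an agent answer is flagged when too few of its words appear in the retrieved context. The function names and the 0.5 threshold are illustrative assumptions, not a standard metric; real suites typically combine checks like this with embedding-similarity and LLM-as-judge scoring.

```python
# Hypothetical grounding check: treat an answer as potentially
# hallucinated when its token overlap with the retrieved context is low.

def grounding_ratio(answer: str, context: str) -> float:
    """Fraction of unique answer tokens that also occur in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def is_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Gate used by a test suite: pass only sufficiently grounded answers."""
    return grounding_ratio(answer, context) >= threshold
```

In a CI pipeline, assertions like `assert is_grounded(answer, retrieved_context)` can run over a fixed evaluation set, with the pass rate gating merges.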
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you're capable of?

At Coinbase, our mission is to increase economic freedom in the world. It's a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform and, with it, the future global financial system.

To achieve our mission, we're seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high-caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company's hardest problems. Our work culture is intense and isn't for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there's no better place to be.

While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.

As a Staff Machine Learning Platform Engineer at Coinbase, you will play a pivotal role in building an open financial system. The team builds the foundational components for training and serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and analyze blockchains. We are a lean team, so you will get the opportunity to apply your software engineering skills across all aspects of building ML at scale, including stream processing, distributed training, and highly available online services.

What you'll be doing (i.e., job duties):
Form a deep understanding of our Machine Learning Engineers' needs and our current capabilities and gaps.
Mentor our talented junior engineers on how to build high-quality software, and take their skills to the next level.
Continually raise our engineering standards to maintain high availability and low latency for our ML inference infrastructure, which runs both predictive ML models and LLMs.
Optimize low-latency streaming pipelines to give our ML models the freshest and highest-quality data.
Evangelize state-of-the-art practices for building high-performance distributed training jobs that process large volumes of data.
Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance.

What we look for in you (i.e., job requirements):
5+ years of industry experience as a Software Engineer.
You have a strong understanding of distributed systems.
You lead by example through high-quality code and excellent communication skills.
You have a great sense of design and can bring clarity to complex technical requirements.
You treat other engineers as customers and have an obsessive focus on delivering them a seamless experience.
You have a mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level.

Nice to have:
Experience building ML models and working with ML systems.
Experience working on a platform team and building developer tooling.
Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB).

Job #: GPBE06IN
*Answers to crypto-related questions may be used to evaluate your onchain experience
#LI-Remote

Commitment to Equal Opportunity

Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation, or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state, and local law.

For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law.

Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site, click here to download a free compatible screen reader (a free step-by-step tutorial can be found here).

Global Data Privacy Notice for Job Candidates and Applicants

Depending on your location, the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
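The data-quality tooling this role describes (observing model inputs and detecting degradations) can be illustrated with a small sketch: a Population Stability Index (PSI) drift check comparing a live feature window against a training baseline. PSI is a widely used drift metric and 0.2 is a common alert threshold, but every name and number below is illustrative, not a Coinbase internal:

```python
# Illustrative feature-drift check using Population Stability Index (PSI)
# over pre-binned counts. A score near 0 means the live distribution
# matches the baseline; larger scores indicate drift.
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """PSI between two binned distributions, given raw counts per bin."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Clamp to eps so empty bins do not blow up the log term.
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

def drifted(expected_counts, actual_counts, threshold=0.2):
    """Alert when PSI exceeds the (conventional) 0.2 threshold."""
    return psi(expected_counts, actual_counts) > threshold
```

In production such a check would run on a schedule over each model feature, emitting a metric to the monitoring stack and paging when the threshold is crossed.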
Posted 2 weeks ago