4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Senior Data Engineer

About The Role
Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's In It For You
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC (a minimal sketch of the task-queue pattern follows the Good To Have list below)
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define and continuously improve platform standards, coding guidelines, and operational runbooks
- Conduct code reviews, pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements
- 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluate and adopt emerging tools, languages, and frameworks

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)
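The responsibilities above pair Celery/Redis task queues with Airflow orchestration for extraction pipelines. As a rough, illustrative sketch of the task-queue part only (not S&P Global's actual code), the snippet below chains fetch, parse, and load steps; the broker URLs, queue name, and helper functions are assumptions.

```python
# Minimal Celery/Redis pipeline sketch; all names and URLs are illustrative assumptions.
import requests
from celery import Celery, chain

app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",   # assumed local Redis broker
    backend="redis://localhost:6379/1",  # assumed result backend
)

@app.task(bind=True, max_retries=3, retry_backoff=True)
def fetch_page(self, url: str) -> str:
    """Download one source page, retrying with exponential backoff on transient errors."""
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return response.text
    except requests.RequestException as exc:
        raise self.retry(exc=exc)

@app.task
def parse_records(html: str) -> list[dict]:
    """Placeholder transform step: turn raw HTML into structured records."""
    # Real extraction logic (e.g., lxml/BeautifulSoup parsing) would go here.
    return [{"raw_length": len(html)}]

@app.task
def load_records(records: list[dict]) -> int:
    """Placeholder load step: persist records (e.g., to S3 or PostgreSQL) and report a count."""
    return len(records)

def run_extraction(url: str):
    """Compose the three steps into one asynchronous pipeline run."""
    return chain(fetch_page.s(url), parse_records.s(), load_records.s()).apply_async()
```

In a setup like this, an Airflow DAG (or another scheduler) would typically trigger run_extraction on a schedule and monitor the task results, while CI/CD and Terraform handle deployment of the workers themselves.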
What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com.
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317427
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
Posted 4 weeks ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Data Engineer

About The Role
Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's In It For You
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage (see the pytest sketch after the Good To Have list below)
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance tuning, and automation skills
- Openness to evaluate and adopt emerging tools and languages as needed

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2-6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)
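This role expects comprehensive pytest suites with high coverage enforcement. Below is a minimal, illustrative sketch of that practice; normalize_record is a hypothetical extraction helper invented for the example, not part of any S&P Global codebase.

```python
# Minimal pytest sketch; the function under test is a hypothetical extraction helper.
import pytest

def normalize_record(raw: dict) -> dict:
    """Hypothetical helper: require an 'id' field and trim whitespace from string values."""
    if "id" not in raw:
        raise ValueError("record is missing an 'id' field")
    return {
        key: value.strip() if isinstance(value, str) else value
        for key, value in raw.items()
    }

def test_normalize_record_strips_whitespace():
    record = normalize_record({"id": "42", "name": "  ACME Corp  "})
    assert record["name"] == "ACME Corp"

def test_normalize_record_rejects_missing_id():
    with pytest.raises(ValueError):
        normalize_record({"name": "ACME Corp"})

@pytest.mark.parametrize("value", [0, 3.14, None, ["x"]])
def test_normalize_record_leaves_non_strings_untouched(value):
    assert normalize_record({"id": "1", "field": value})["field"] == value
```

Coverage enforcement would typically sit on top of tests like these, for example via the pytest-cov plugin (pytest --cov) with a minimum threshold checked in the CI/CD pipeline.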
What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317425
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
Posted 4 weeks ago
5.0 years
0 Lacs
Hubli, Karnataka, India
On-site
Read Before Applying
Candidates available to join within 15–30 days will be preferred. What we're building demands hard work, tenacity, ownership, creativity, and significantly more hours than an average job. If you prefer routine over the dynamic unpredictability of innovation, Astr isn't for you. Our fast-paced environment requires curiosity, resilience, a passion for continuous learning, and comfort with rapid change.

Why You Should NOT Join Us
If high stakes, tight deadlines, and significant challenges don't excite you, then our mission to revolutionize defence technology might not align with your career goals. We seek individuals ready to push boundaries and motivated by urgency and impact.

Why You SHOULD Join Us
We offer the opportunity to work on groundbreaking projects that strengthen national defence, supported by a team of A-players.

About This Role
We are seeking a Computer Vision Engineer to develop and implement cutting-edge vision-based algorithms for long-range surveillance and defence applications. The ideal candidate will have prior experience working on thermal cameras, long-range object detection, or visual surveillance systems. You will play a key role in designing and developing real-time object detection and tracking solutions for our next-generation defence products. This role requires expertise in Deep Learning, Computer Vision, and Embedded AI.

Key Responsibilities
✅ Computer Vision Algorithm Development
- Develop, optimize, and implement object detection and tracking algorithms.
- Work on small-object detection and long-range target tracking using visible-spectrum and thermal imaging.
✅ Deep Learning & AI Implementation
- Train and fine-tune CNN-based models (YOLO, Faster R-CNN, SSD, etc.) for real-world defence applications.
- Deploy deep learning models on edge devices (Nvidia Jetson, ARM-based processors, etc.); a minimal inference sketch follows the Requirements list below.
✅ Hardware Integration & Optimization
- Collaborate with the embedded team to ensure real-time inference on low-power devices.
- Optimize algorithms for low-latency and power-efficient operation.
✅ Testing & Deployment
- Conduct field testing and performance evaluation of computer vision models in real-world environments.
- Work with defence teams to integrate vision-based solutions into weapon systems and surveillance platforms.

Requirements
🔹 Must-Have Skills
- 1–5 years of experience in Computer Vision / AI / Deep Learning.
- Hands-on experience with thermal imaging, long-range surveillance, or security systems.
- Proficiency in Python, C++, OpenCV, TensorFlow, PyTorch, or YOLO.
- Strong understanding of CNNs, object detection, and real-time tracking.
- Experience deploying models on Jetson, Raspberry Pi, or ARM-based edge devices.
- Understanding of multi-threading, real-time processing, and hardware acceleration (CUDA, OpenCL).
🔹 Good-to-Have Skills
- Experience with sensor fusion (LiDAR, radar, IR cameras, etc.).
- Knowledge of SLAM and motion prediction.
- Prior work in defence, aerospace, or surveillance applications.
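To make the detection-and-tracking requirement concrete, here is a minimal sketch of real-time inference with a small YOLO model and OpenCV, of the kind that could run on a Jetson-class device. It assumes the ultralytics package, pretrained yolov8n weights, and a local camera (or RTSP/thermal stream); it is illustrative only, not Astr's production pipeline.

```python
# Minimal real-time detection-and-tracking sketch (illustrative assumptions: ultralytics
# package, yolov8n.pt weights, camera index 0 or an RTSP/thermal stream URL).
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small model suited to Jetson-class edge devices

capture = cv2.VideoCapture(0)  # swap 0 for an RTSP URL or thermal-camera source
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # persist=True keeps tracker state between frames so each target keeps a stable ID
    results = model.track(frame, persist=True, conf=0.25, verbose=False)
    annotated = results[0].plot()  # draw boxes, class labels, and track IDs
    cv2.imshow("detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

On an actual edge deployment, the display loop would normally be replaced by a headless pipeline (e.g., publishing detections over a message bus), and the model would usually be exported to an accelerated runtime such as TensorRT for lower latency.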
What We Expect from You
✅ Genuine curiosity and a problem-solving mindset.
✅ Execution, not just ideas: we value doers over talkers.
✅ Ability to work in a high-pressure, fast-paced environment.
✅ A strong sense of ownership and accountability.

What You Can Expect from Us
🚀 The chance to work on hard, meaningful problems that directly impact national security.
💡 An A-team of high-energy, high-performance individuals.
⚡ High levels of responsibility and the ability to take ownership.
🎯 A mission-driven environment where your work truly matters.

About Astr Defence
Astr Defence is an award-winning Indian defence manufacturer pioneering next-generation armaments and counter-drone solutions to address modern security challenges. We collaborate with India's elite forces to develop mission-critical technologies.
Posted 4 weeks ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Role: AI Architect
Experience Required: 8–12 years
Location: Bangalore, Karnataka
Way of Working: Hybrid, Bangalore

About Swiggy and the Team
Swiggy is India's leading on-demand delivery platform, leveraging data science and cutting-edge AI to redefine convenience for millions of customers. The AI team works at the intersection of machine learning, advanced architecture, and applied research to shape AI-first systems that directly impact customer experience and business growth. The team values cross-functional collaboration, open sharing of ideas, and innovation to roll out AI solutions at scale. Bring your expertise in building scalable AI architectures and leading next-generation Generative AI and Agentic AI initiatives, and join a motivated team making a tangible business impact.

About the Role
As an AI Architect, you will lead the design and implementation of cutting-edge AI architectures, focusing on Large Language Models (LLMs), Generative AI applications, and Agentic AI platforms. This role demands system design expertise for AI-native products, hands-on experience with real-world GenAI deployment, and a clear understanding of trade-offs while architecting scalable solutions for production environments. You will be a pivotal leader in designing and implementing the next generation of AI-first systems at Swiggy, acting as both a hands-on engineer and a strategic thinker: working with multi-agent systems, defining operational workflows, and ensuring the alignment of AI capabilities with Swiggy's business vision and customer needs. The role requires a unique blend of deep technical expertise in advanced AI concepts (especially Large Language Models, Generative AI, and Agentic AI) and strong architectural acumen to build scalable, high-performance, and cost-effective AI solutions. You will be instrumental in shaping our AI strategy, driving innovation, and ensuring the successful delivery of impactful AI products that directly influence customer experience and business metrics. We foster an environment of open sharing of ideas and encourage contributions to both internal and external forums.

What qualities are we looking for?
Technical Expertise:
- Overall 8–10 years of experience in the AI and ML engineering field, with 2–3 years of hands-on experience with Large Language Models (LLMs), Generative AI applications, and Agentic AI architectures (e.g., multi-agent workflows, copilots, autonomous agents).
- Proven track record of architecting AI-first systems (e.g., conversational agents, copilots).
- Strong system design skills with experience in designing scalable, low-latency AI-powered platforms.
- Familiarity with orchestration tools and technologies like LangGraph, CrewAI, AutoGen, and distributed multi-agent system architectures.
- Deep understanding of the trade-offs between open-source and proprietary AI models and related tools.
- Experience building or integrating components such as ontology layers, graph knowledge bases, and feedback loops into AI systems.

Strategic Thinking:
- Expertise in evaluating build-vs-buy options and supporting recommendations with logical explanations.
- Ability to identify scalability, latency, and inference-cost bottlenecks and devise architectural solutions.
- Up-to-date knowledge of the latest tools, frameworks, and trends in AI and multi-agent ecosystems.

Product Knowledge:
- Strong curiosity and empathy for end-user personas.
- Demonstrates end-to-end product thinking beyond model performance.
- Ability to connect AI capabilities with business KPIs while ensuring ROI on AI investments.
- Experience working closely with Product Managers to identify workflows and bottlenecks, collaboratively defining roadmaps.

Communication and Leadership:
- Demonstrates the ability to act as a thought partner to product managers and stakeholders.
- Exceptional written and verbal communication skills to align cross-functional teams on objectives, deliverables, and methodologies.
- Thrives in a fast-paced environment with a "Founder Mentality" to move fast, break barriers, and deliver results.

Preferred Skills & Experience
- Entrepreneurial stint or experience with startups building AI products
- R&D experience with language models

What will you get to do here?
- Architect AI-first systems with a focus on multi-agent systems, LLMs, Generative AI, and Agentic AI workflows (a deliberately simplified routing sketch appears after this listing).
- Define and build scalable, efficient AI architectures with low latency at production scale.
- Design orchestration mechanisms like ontology layers and incorporate feedback loops to enrich system capabilities.
- Stay updated on emerging AI frameworks, tools, and technologies, applying learnings to both technological and business problem statements.
- Evaluate and benchmark open-source and proprietary AI frameworks based on cost, scalability, and suitability.
- Collaborate with engineering, product, data scientists, analysts, and business stakeholders to define workflows, address bottlenecks, and create product roadmaps.
- Provide recommendations on AI investments with a critical eye on ROI and business impact.
- Present your work in internal and external forums, thought-leadership events, and research publications.

Why Join Swiggy?
- Opportunity to work on impactful and challenging projects in the AI domain.
- Chance to architect innovative solutions at scale, directly impacting millions of users.
- A collaborative work culture that fosters learning, growth, and innovation.

Visit our tech blogs to learn more about some of the challenges we deal with:
- The Swiggy Delivery Challenge, Part One
- How AI at Swiggy is Transforming Convenience
- Decoding Food Intelligence at Swiggy

We believe in promoting equal opportunities and diversity. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, or disability.

Ready to Shape the Future with AI? Apply now!
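As a deliberately simplified illustration of the multi-agent routing idea this listing describes, the sketch below sends each request through a classifier "agent" and on to a specialist "agent". The call_llm function, agent names, and prompts are placeholders invented for the example; none of this reflects Swiggy's internal systems or any particular orchestration framework.

```python
# Simplified agent-routing sketch; all names, prompts, and the call_llm stub are assumptions.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    system_prompt: str

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real model client (hosted LLM API or self-hosted model).

    The stand-in behaviour below only exists so the sketch runs without external services.
    """
    if "label only" in system_prompt:
        return "order_support" if "order" in user_message.lower() else "restaurant_search"
    return f"[{system_prompt.split('.')[0]}] stub answer for: {user_message}"

ROUTER = Agent(
    "router",
    "Classify the request as 'order_support', 'restaurant_search', or 'other'. "
    "Reply with the label only.",
)
WORKERS = {
    "order_support": Agent("order_support", "Resolve order issues using the tools you are given."),
    "restaurant_search": Agent("restaurant_search", "Recommend restaurants that match the stated constraints."),
}

def handle_request(user_message: str) -> str:
    """Route the request to a specialist agent, falling back to a generic answer otherwise."""
    label = call_llm(ROUTER.system_prompt, user_message).strip().lower()
    worker = WORKERS.get(label)
    if worker is None:
        return call_llm("You are a helpful assistant.", user_message)
    return call_llm(worker.system_prompt, user_message)

if __name__ == "__main__":
    print(handle_request("My order arrived cold"))
```

Production agentic systems layer much more on top of this skeleton (tool calling, memory, knowledge-base retrieval, evaluation and feedback loops), which is where frameworks such as LangGraph, CrewAI, or AutoGen typically come in.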
Posted 4 weeks ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
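For context on the task-queue and orchestration stack named in the role above (Celery, Redis, Airflow), the following is a minimal illustrative sketch of a Redis-backed Celery task. The broker URLs, retry policy, and the extract_document payload are assumptions added for illustration, not part of the posting.

```python
# celery_tasks.py -- illustrative sketch only; broker/backend URLs and the
# extraction logic are assumed placeholders, not the team's actual code.
from celery import Celery

app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",   # assumed local Redis broker
    backend="redis://localhost:6379/1",  # assumed result backend
)

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def extract_document(self, document_url: str) -> dict:
    """Fetch a document and return extracted fields; retry on transient errors."""
    try:
        # Placeholder for the real extraction library the role maintains.
        return {"url": document_url, "status": "extracted"}
    except Exception as exc:
        raise self.retry(exc=exc)

# Enqueue from application code: extract_document.delay("https://example.com/report.pdf")
# Run a worker with: celery -A celery_tasks worker --loglevel=info
```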
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AI/ML Engineer – Data Platform Location: Bangalore, Pune, India Employment Type: Full Time, Permanent Industry: Software Product Department: Engineering – Software & QA Job Description We are looking for a talented and passionate AI/ML Engineer to join our growing development team in Gandhinagar. You will be responsible for building scalable and efficient machine learning pipelines and deploying intelligent solutions for real-world problems. The ideal candidate is hands-on with modern ML/DL frameworks and is comfortable working in a DevOps environment with containerized applications. Roles & Responsibilities Core Responsibilities: Architect and implement ML models using containerized infrastructure (Docker, Kubernetes) with CI/CD integration. Develop intelligent solutions for search engines, text classification, entity recognition, and generative tasks using LLMs. Build end-to-end pipelines: dataset processing, model training, deployment, prediction, and retraining. Design APIs for ML operations: dataset preparation, training, inference, and monitoring. Optimize performance and accuracy of NLP models including NER, NLU, and BERT-based architectures. Collaborate with cross-functional teams to enhance products with intelligent features. Expected Skills & Experience Technical Proficiency: Strong hands-on experience in Python and ML frameworks: PyTorch, TensorFlow, SpaCy, SKLearn, RasaNLU. Solid understanding of NLP techniques, including NER, BERT, transformer models, and LLMs. Experience with FastAPI, Flask, or Django for building APIs. Experience working with databases like MariaDB, MongoDB. Familiarity with Unix/Linux systems and version control systems. DevOps & Deployment CI/CD experience with tools like Jenkins, Maven, etc. Proficient in Docker image creation, container orchestration (Kubernetes/Docker Swarm). Skilled in deploying ML models in production with monitoring and versioning. Search & Indexing Experience with Elasticsearch for building intelligent search engines. Bonus Skills (Preferred But Not Mandatory) Experience with Computer Vision, Azure DevOps, or Neo4j. Hands-on with HuggingFace models, Minio, Knative, ArgoCD/Argo Workflows. Experience in document parsing with libraries like Publaynet, PDFTron, PDFjs, PSPDFKit. Worked with Azure Cognitive Services (OCR, Entity Recognition). Knowledge of Azure Blob Storage or Amazon S3. Education UG: Any Graduate PG: Any Postgraduate (ref:hirist.tech)
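As a rough illustration of the kind of ML-operations API this posting describes (FastAPI serving a BERT-based NER model), here is a hedged sketch. The model name, route, and response shape are assumptions for illustration, not requirements from the listing.

```python
# Minimal NER inference API sketch, assuming the fastapi, pydantic, and
# transformers packages are installed; the model and endpoint are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

class TextIn(BaseModel):
    text: str

@app.post("/ner")
def extract_entities(payload: TextIn):
    # Each aggregated entity carries the span text, label, and confidence score.
    entities = ner(payload.text)
    return {"entities": [
        {"text": e["word"], "label": e["entity_group"], "score": float(e["score"])}
        for e in entities
    ]}

# Serve locally with: uvicorn app:app --reload
```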
Posted 4 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Graviton is a privately funded quantitative trading firm striving for excellence in financial markets' research. We trade across a multitude of asset classes and trading venues using a gamut of concepts and techniques ranging from time series analysis, filtering, classification, stochastic models, pattern recognition to statistical inference analysing terabytes of data to come up with ideas to identify pricing anomalies in financial markets. Position Overview: As a Quantitative Analyst, you will play a crucial role in developing and enhancing high-frequency trading strategies. You will analyze complex financial data, build predictive models, and collaborate with our trading and technology teams to drive performance improvements. Your work will involve applying advanced statistical techniques and programming skills to solve challenging problems in a fast-paced trading environment. Your Objectives: Work alongside senior traders to implement and enhance automated trading strategies, ensuring they operate efficiently and effectively. Analyze production trades to identify areas for improvement. Develop and propose enhancements to trading strategies based on your findings. Design and implement monitoring tools to detect potential issues in production strategies, ensuring timely identification and resolution. Write comprehensive, scalable scripts in C++ and Python to analyze production strategies, focusing on risk attribution, performance breakdowns, and other critical metrics. Develop innovative and scalable post-trade systems to analyze a wide range of statistics across all production strategies, enhancing overall strategy performance. Create and implement tools for centralized market data analysis across various exchanges, improving data integration and insights. Oversee the deployment and release cycles of trading systems, collaborating with senior traders to ensure smooth and effective rollouts. Your Skills & Talents: Hold a degree in a highly analytical field such as Engineering, Mathematics, or Computer Science from top tier global institutes. Basic familiarity with Linux and command-line tools, along with foundational programming skills in C/C++ and scripting experience in Perl/Python/Shell. Demonstrate strong problem-solving abilities, with advanced mathematical and quantitative reasoning skills. Exhibit excellent communication skills, with the ability to clearly convey complex technical concepts. Possess a strong quantitative mindset with a keen interest in data-driven problem-solving. Good to have skills: Experience with high-frequency trading systems and infrastructure. Knowledge of market microstructure and high-frequency trading strategies. Familiarity with low-latency programming and optimization techniques. Benefits: Our open and collaborative work culture gives you the freedom to innovate and experiment. Our cubicle free offices, non-hierarchical work culture and insistence to hire the very best creates a melting pot for great ideas and technological innovations. Everyone on the team is approachable, there is nothing better than working with friends! Our perks have you covered. Competitive compensation Annual international team outing Fully covered commuting expenses Best-in-class health insurance Delightful catered breakfasts and lunches A well-stocked kitchen 4 week annual leaves along with market holidays Gym and sports club memberships Regular social events and clubs After work parties
Posted 4 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Sr Manager Data Sciences —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Develop and execute a multi-year data-science strategy and roadmap that directly supports corporate objectives, translating it into measurable quarterly OKRs for the team. Lead, mentor and grow a high-performing staff of data scientists and ML engineers, providing technical direction, career development, and continuous‐learning opportunities. Own the end-to-end delivery of advanced analytics and machine-learning solutions—from problem framing and data acquisition through model deployment, monitoring and iterative improvement—ensuring each project delivers clear business value. Prioritise and manage a balanced portfolio of initiatives, applying ROI, risk and resource-capacity criteria to allocate effort effectively across research, clinical, manufacturing and commercial domains. Provide hands-on guidance on algorithm selection and experimentation (regression, classification, clustering, time-series, deep learning, generative-AI, causal inference), ensuring methodological rigour and reproducibility. Establish and enforce best practices for code quality, version control, MLOps pipelines, model governance and responsible-AI safeguards (privacy, fairness, explainability). Partner with Data Engineering, Product, IT Security and Business stakeholders to integrate models into production systems via robust APIs, dashboards or workflow automations with well-defined SLAs. Manage cloud and on-prem analytics environments, optimising performance, reliability and cost; negotiate vendor contracts and influence platform roadmaps where appropriate. Champion a data-driven culture by communicating insights and model performance to VP/SVP-level leaders through clear storytelling, visualisations and actionable recommendations. Track emerging techniques, regulatory trends and tooling in AI/ML; pilot innovations that keep the organisation at the forefront of data-science practice and compliance requirements. Must-Have Skills: Leadership & Delivery: 10+ years in advanced analytics with 4+ years managing high-performing data-science or ML teams, steering projects from problem framing through production. Algorithmic Expertise: Deep command of classical ML, time-series, deep-learning (CNNs, transformers) and causal-inference techniques, with sound judgement on when and how to apply each. 
Production Engineering: Expert Python and strong SQL, plus hands-on experience deploying models via modern MLOps stacks (MLflow, Kubeflow, SageMaker, Vertex AI or Azure ML) with automated monitoring and retraining. Business Influence: Proven ability to translate complex analytics into concise, outcome-oriented narratives that inform VP/SVP-level decisions and secure investment. Cloud & Cost Governance: Working knowledge of AWS, Azure or GCP, including performance tuning and cost-optimisation for large-scale data and GPU/CPU workloads. Responsible AI & Compliance: Familiarity with privacy, security and AI-governance frameworks (GDPR, HIPAA, GxP, EU AI Act) and a track record embedding fairness, explainability and audit controls throughout the model lifecycle. Good-to-Have Skills: Experience in Biotechnology or pharma industry is a big plus Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science and or Data Science Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14 + years of experience in Computer Science, IT or related field OR Bachelor’s degree with 12-17 + years of experience in Computer Science, IT or related field Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 4 weeks ago
8.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role Grade Level (for internal use): 12 The Team As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead – writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components – LLM inference endpoints, embedding stores, prompt services – into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice To Have Redis, Celery, SQS, and Lambda-based event-driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. 
Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
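To ground the pipeline-orchestration and batch ETL responsibilities listed in this role (Airflow scheduling, extraction followed by model scoring), a minimal Airflow DAG sketch follows. The dag_id, schedule, and task bodies are illustrative assumptions only, not the team's actual jobs.

```python
# Illustrative nightly extract -> score DAG; assumes Airflow 2.4+ (older
# versions use schedule_interval instead of schedule).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull raw documents from the source systems")  # placeholder step

def score(**context):
    print("run the registered model over the extracted batch")  # placeholder step

with DAG(
    dag_id="nightly_extract_and_score",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    score_task = PythonOperator(task_id="score", python_callable=score)
    extract_task >> score_task  # scoring only runs after extraction succeeds
```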
Posted 4 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You Will Be Doing... The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges. Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights Envision and test for corner cases. Build analytical solutions and models by manipulating large data sets and integrating diverse data sources Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders. Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions. Assist in building data views from disparate data sources that power insights and business cases Apply statistical modeling techniques/ML to data and perform root cause analysis and forecasting Develop and implement rigorous frameworks for effective base management. Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs Design modular, reusable Python scripts to automate data processing Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders. What We’re Looking For... You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions. 
You Will Need To Have Bachelor’s degree in computer science or another technical field or four or more years of work experience Four or more years of relevant work experience Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk etc) Curiosity to dive deep into data inconsistencies and perform root cause analysis Programming experience in Python (Pandas, NumPy, Scipy and Scikit-Learn) Experience with Visualization tools matplotlib, seaborn, tableau, grafana etc A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning Understanding of time series modeling and forecasting techniques Even better if you have one or more of the following: Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI Experience in applying statistical ideas and methods to data sets to answer business problems. Ability to collaborate effectively across teams for data discovery and validation Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference. Excellent interpersonal, verbal and written communication skills. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
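As a small, hedged example of the supervised-modeling workflow this role describes (Pandas, Scikit-Learn, train/test evaluation), the sketch below uses synthetic placeholder data; the feature names and target are invented for illustration and do not reflect Verizon data.

```python
# Illustrative modeling workflow only: synthetic data, assumed feature names.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "monthly_usage_gb": rng.gamma(2.0, 20.0, 5_000),
    "tenure_months": rng.integers(1, 120, 5_000),
    "support_calls": rng.poisson(1.5, 5_000),
})
# Hypothetical target: whether a customer-experience issue was flagged.
df["flagged"] = (df["support_calls"] > 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="flagged"), df["flagged"], test_size=0.2, random_state=0
)
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```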
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You Will Be Doing... The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges. Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights Envision and test for corner cases. Build analytical solutions and models by manipulating large data sets and integrating diverse data sources Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders. Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions. Assist in building data views from disparate data sources that power insights and business cases Apply statistical modeling techniques/ML to data and perform root cause analysis and forecasting Develop and implement rigorous frameworks for effective base management. Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs Design modular, reusable Python scripts to automate data processing Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders. What We’re Looking For... You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions. 
You Will Need To Have Bachelor’s degree in computer science or another technical field or four or more years of work experience Four or more years of relevant work experience Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk etc) Curiosity to dive deep into data inconsistencies and perform root cause analysis Programming experience in Python (Pandas, NumPy, Scipy and Scikit-Learn) Experience with Visualization tools matplotlib, seaborn, tableau, grafana etc A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning Understanding of time series modeling and forecasting techniques Even better if you have one or more of the following: Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI Experience in applying statistical ideas and methods to data sets to answer business problems. Ability to collaborate effectively across teams for data discovery and validation Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference. Excellent interpersonal, verbal and written communication skills. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 4 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us We're a tech-driven creative studio building innovative tools for the VFX and animation industry . Our mission is to blend cutting-edge engineering with artistic workflows — streamlining production, enhancing creativity, and reimagining how visual stories are made. If you’re excited by the intersection of technology, design, and storytelling, you’ll feel right at home here. About The Role We’re developing next-gen tools and platforms — from artist-focused products to pipeline automation. We're looking for a Senior Backend Engineer who enjoys crafting scalable systems, thrives in creative environments, and is curious about how AI/ML (especially agents and intelligent automation) can augment human creativity. Here, your engineering will directly support artists and storytellers. You won’t just build infrastructure — you'll help shape the future of how stories are imagined and brought to life. What You’ll Do Design and build robust backend systems and APIs for our internal tools and creative products Collaborate with cross-functional teams — from artists to AI engineers — to understand needs and ship effective solutions Optimize system performance, scalability, and reliability Integrate AI/ML services and tools into backend workflows (e.g., inference pipelines, smart agents) Contribute to overall backend architecture and system design thinking What We’re Looking For 4+ years of experience in backend development (Python preferred; Go, Node.js, or similar are a plus) Solid experience with RESTful APIs, databases (PostgreSQL, MongoDB, etc.), and microservices Familiarity with containerization (Docker), CI/CD pipelines, and cloud platforms (AWS/GCP) Interest or experience in AI/ML — whether through experimentation, prototypes, or production use A Bachelor's degree (BE/BTech in CS, IT, or related fields) is preferred Bonus Points Hands-on experience with AI/ML frameworks (Transformers, LangChain, OpenAI, etc.) Familiarity with animation/VFX production pipelines or creative tools (Unreal, Blender, Houdini, etc.) Contributions to open-source or side projects relevant to creative tech
Posted 4 weeks ago
12.0 years
6 - 9 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Principal / Senior Systems Performance Engineer Micron Data Center and Client Workload Engineering in Hyderabad, India, is seeking a senior/principal engineer to join our dynamic team. The successful candidate will primarily contribute to the ML development, ML DevOps, HBM program in the data center by analyzing how AI/ML workloads perform on the latest MU-HBM, Micron main memory, expansion memory and near memory (HBM/LP) solutions, conduct competitive analysis, showcase the benefits that workloads see with MU-HBM’s capacity / bandwidth / thermals, contribute to marketing collateral, and extract AI/ML workload traces to help optimize future HBM designs. Job Responsibilities: The Job Responsibilities include but are not limited to the following: Design, implement, and maintain scalable & reliable ML infrastructure and pipelines. Collaborate with data scientists and ML engineers to deploy machine learning models into production environments. Automate and optimize ML workflows, including data preprocessing, model training, evaluation, and deployment. Monitor and manage the performance, reliability, and scalability of ML systems. Troubleshoot and resolve issues related to ML infrastructure and deployments. Implement and manage distributed training and inference solutions to enhance model performance and scalability. Utilize DeepSpeed, TensorRT, vLLM for optimizing and accelerating AI inference and training processes. Understand key care abouts when it comes to ML models such as: transformer architectures, precision, quantization, distillation, attention span & KV cache, MoE, etc. Build workload memory access traces from AI models. Study system balance ratios for DRAM to HBM in terms of capacity and bandwidth to understand and model TCO. Study data movement between CPU, GPU and the associated memory subsystems (DDR, HBM) in heterogeneous system architectures via connectivity such as PCIe/NVLINK/Infinity Fabric to understand the bottlenecks in data movement for different workloads. Develop an automated testing framework through scripting. Customer engagements and conference presentations to showcase findings and develop whitepapers. Requirements: Strong programming skills in Python and familiarity with ML frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience in data preparation: cleaning, splitting, and transforming data for training, validation, and testing. Proficiency in model training and development: creating and training machine learning models. Expertise in model evaluation: testing models to assess their performance. Skills in model deployment: launching server, live inference, batched inference Experience with AI inference and distributed training techniques. Strong foundation in GPU and CPU processor architecture Familiarity with and knowledge of server system memory (DRAM) Strong experience with benchmarking and performance analysis Strong software development skills using leading scripting, programming languages and technologies (Python, CUDA, C, C++) Familiarity with PCIe and NVLINK connectivity Preferred Qualifications: Experience in quickly building AI workflows: building pipelines and model workflows to design, deploy, and manage consistent model delivery. 
Ability to easily deploy models anywhere: using managed endpoints to deploy models and workflows across accessible CPU and GPU machines. Understanding of MLOps: the overarching concept covering the core tools, processes, and best practices for end-to-end machine learning system development and operations in production. Knowledge of GenAIOps: extending MLOps to develop and operationalize generative AI solutions, including the management of and interaction with a foundation model. Familiarity with LLMOps: focused specifically on developing and productionizing LLM-based solutions. Experience with RAGOps: focusing on the delivery and operation of RAGs, considered the ultimate reference architecture for generative AI and LLMs. Data management: collect, ingest, store, process, and label data for training and evaluation. Configure role-based access control; dataset search, browsing, and exploration; data provenance tracking, data logging, dataset versioning, metadata indexing, data quality validation, dataset cards, and dashboards for data visualization. Workflow and pipeline management: work with cloud resources or a local workstation; connect data preparation, model training, model evaluation, model optimization, and model deployment steps into an end-to-end automated and scalable workflow combining data and compute. Model management: train, evaluate, and optimize models for production; store and version models along with their model cards in a centralized model registry; assess model risks, and ensure compliance with standards. Experiment management and observability: track and compare different machine learning model experiments, including changes in training data, models, and hyperparameters. Automatically search the space of possible model architectures and hyperparameters for a given model architecture; analyze model performance during inference, monitor model inputs and outputs for concept drift. Synthetic data management: extend data management with a new native generative AI capability. Generate synthetic training data through domain randomization to increase transfer learning capabilities. Declaratively define and generate edge cases to evaluate, validate, and certify model accuracy and robustness. Embedding management: represent data samples of any modality as dense multi-dimensional embedding vectors; generate, store, and version embeddings in a vector database. Visualize embeddings for improvised exploration. Find relevant contextual information through vector similarity search for RAGs. Education: Bachelor’s or higher (with 12+ years of experience) in Computer Science or related field. About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all . With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. 
To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron Prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert : Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website in the About Micron Technology, Inc.
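A rough sketch of the kind of batched-inference throughput measurement the workload-analysis responsibilities above imply (PyTorch, warm-up passes, synchronized timing). The model shape, batch size, and iteration counts are assumptions chosen only to illustrate the method.

```python
# Illustrative throughput measurement; the encoder stack is a stand-in model.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6,
).to(device).eval()

batch = torch.randn(32, 128, 512, device=device)  # (batch, seq_len, hidden)

with torch.no_grad():
    for _ in range(5):                      # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()            # finish queued GPU work before timing
    start = time.perf_counter()
    for _ in range(20):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{20 * 32 / elapsed:.1f} sequences/sec on {device}")
```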
Posted 4 weeks ago
7.0 years
20 - 30 Lacs
Hyderābād
On-site
About the Role We are seeking a visionary and hands-on AI Lead to architect, build, and scale next-generation Generative and Agentic AI systems. In this role, you will drive the end-to-end lifecycle—from research and prototyping to production deployment—guiding a team of AI engineers and collaborating cross-functionally to deliver secure, scalable, and impactful AI solutions across multimodal and LLM-based ecosystems. Key Responsibilities Architect and oversee the development of GenAI and Agentic AI workflows, including multi-agent systems and LLM-based pipelines. Guide AI engineers in best practices for RAG (Retrieval-Augmented Generation), prompt engineering, and agent design. Evaluate and implement the right technology stack: open source (Hugging Face, LangChain, LlamaIndex) vs. closed source (OpenAI, Anthropic, Mistral). Lead fine-tuning and adapter-based training (e.g., LoRA, QLoRA, PEFT). Drive inference optimization using quantization, ONNX, TensorRT, and related tools. Build and refine RAG pipelines using embedding models, vector DBs (FAISS, Qdrant), chunking strategies, and hybrid knowledge graph systems. Manage LLMOps with tools like Weights & Biases, MLflow, and ClearML, ensuring experiment reproducibility and model versioning. Design and implement evaluation frameworks for truthfulness, helpfulness, toxicity, and hallucinations. Integrate guardrails, content filtering, and data privacy best practices into GenAI systems. Lead development of multi-modal AI systems (VLMs, CLIP, LLaVA, video-text fusion models). Oversee synthetic data generation for fine-tuning in low-resource domains. Design APIs and services for Model-as-a-Service (MaaS) and AI agent orchestration. Collaborate with product, cloud, and infrastructure teams to align on deployment, GPU scaling, and cost optimization. Translate cutting-edge AI research into usable product capabilities, from prototyping to production. Mentor and grow the AI team, establishing R&D best practices and benchmarks. Stay up-to-date with emerging trends (arXiv, Papers With Code) to keep the organization ahead of the curve. Required Skills & Expertise AI & ML Foundations: Generative AI, LLMs, Diffusion Models, Agentic AI Systems, Multi-Agent Planning, Prompt Engineering, Feedback Loops, Task Decomposition Ecosystem & Frameworks: Hugging Face, LangChain, OpenAI, Anthropic, Mistral, LLaMA, GPT, Claude, Mixtral, Falcon, etc. Fine-tuning & Inference: LoRA, QLoRA, PEFT, ONNX, TensorRT, DeepSpeed, vLLM Data & Retrieval Systems: FAISS, Qdrant, Chroma, Pinecone, Hybrid RAG + Knowledge Graphs MLOps & Evaluation: Weights & Biases, ClearML, MLflow, Evaluation metrics (truthfulness, helpfulness, hallucination) Security & Governance: Content moderation, data privacy, model alignment, ethical constraints Deployment & Ops: Cloud (AWS, GCP, Azure) with GPU scaling, Serverless LLMs, API-based inference, Docker/Kubernetes Other: Multi-modal AI (images, video, audio), API Design (Swagger/OpenAPI), Research translation and POC delivery Preferred Qualifications 7+ years in AI/ML roles, with at least 2–3 years in a technical leadership capacity Proven experience deploying LLM-powered systems at scale Experience working with cross-functional product and infrastructure teams Contributions to open-source AI projects or published research papers (a plus) Strong communication skills to articulate complex AI concepts to diverse stakeholders Why Join Us? 
Work at the forefront of AI innovation with opportunities to publish, build, and scale impactful systems Lead a passionate team of engineers and researchers Shape the future of ethical, explainable, and usable AI products Ready to shape the next wave of AI? Apply now and join us on this journey! Job Type: Full-time Pay: ₹2,000,000.01 - ₹3,002,234.14 per year Benefits: Flexible schedule Health insurance Paid time off Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Yearly bonus Work Location: In person
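To make the RAG-pipeline responsibilities above more concrete (embedding models, FAISS vector search, top-k retrieval), here is a minimal sketch assuming the sentence-transformers and faiss-cpu packages; the corpus, model name, and k are illustrative placeholders.

```python
# Minimal retrieval step of a RAG pipeline; chunking, prompt assembly, and the
# LLM call are omitted. Corpus and model choice are assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Quarterly revenue grew 12% year over year.",
    "The new policy takes effect next fiscal year.",
    "Gross margin expanded on lower input costs.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(corpus, normalize_embeddings=True)

# Inner-product search equals cosine similarity once vectors are normalized.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = encoder.encode(["How did revenue change?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
retrieved = [corpus[i] for i in ids[0]]
print(retrieved)  # top-k chunks that would be passed to the LLM as context
```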
Posted 4 weeks ago
0 years
0 Lacs
Hyderābād
Remote
We are united in our mission to make a positive impact on healthcare. Join Us! South Florida Business Journal, Best Places to Work 2024 Inc. 5000 Fastest-Growing Private Companies in America 2024 2024 Black Book Awards, ranked #1 EHR in 11 Specialties 2024 Spring Digital Health Awards, "Web-based Digital Health" category for EMA Health Records (Gold) 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara) Who we are: We Are Modernizing Medicine (WAMM)! We're a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed's global headquarters is based in Boca Raton, FL, with a growing office in Hyderabad, India, and a robust remote workforce across the US, Chile, and Germany. ModMed is hiring a driven ML Ops Engineer 2 to join our positive, passionate, and high-performing team focused on scalable ML systems. This is an exciting opportunity for you, as you will collaborate with data scientists, engineers, and other cross-functional teams to ensure seamless model deployment, monitoring, and automation. If you're passionate about cloud infrastructure, automation, and optimizing ML pipelines, this is the role for you within a fast-paced Healthcare IT company that is truly Modernizing Medicine! Key Responsibilities: Model Deployment & Automation: Develop, deploy, and manage ML models on Databricks using MLflow for tracking experiments, managing models, and registering them in a centralized repository. Infrastructure & Environment Management: Set up scalable and fault-tolerant infrastructure to support model training and inference in cloud environments such as AWS, GCP, or Azure. Monitoring & Performance Optimization: Implement monitoring systems to track model performance, accuracy, and drift over time. Create automated systems for re-training and continuous learning to maintain optimal performance. Data Pipeline Integration: Collaborate with the data engineering team to integrate model pipelines with real-time and batch data processing frameworks, ensuring seamless data flow for training and inference. Skillset & Qualifications Model Deployment: Experience with deploying models in production using cloud platforms like AWS SageMaker, GCP AI Platform, or Azure ML Studio. Version Control & Automation: Experience with MLOps tools such as MLflow, Kubeflow, or Airflow to automate and monitor the lifecycle of machine learning models. Cloud Expertise: Experience with cloud-based machine learning services on AWS, Google Cloud, or Azure, ensuring that models are scalable and efficient. Engineers must be skilled in measuring and optimizing model performance through metrics like AUC, precision, recall, and F1-score, ensuring that models are robust and reliable in production settings. Education: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, or a related technical field. 
ModMed in India Benefit Highlights: High growth, collaborative, transparent, fun, and award-winning culture Comprehensive benefits package including medical for you, your family, and your dependent parents The company supported community engagement opportunities along with a paid Voluntary Time Off day to use for volunteering in your community of interest Global presence, and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability Company-sponsored Employee Resource Groups that provide engaged and supportive communities within ModMed ModMed Benefits Highlight: At ModMed, we believe it's important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits: India Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk, Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees, Allowances: Annual wellness allowance to support your well-being and productivity, Earned, casual, and sick leaves to maintain a healthy work-life balance, Bereavement leave for difficult times and extended medical leave options, Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave, Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind. United States Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution, 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep. Generous Paid Time Off and Paid Parental Leave programs, Company paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs, Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed, Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning, Global presence and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability for some roles, Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters. PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders' email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website.
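As a hedged illustration of the MLflow experiment-tracking and model-registration workflow named in this role, the sketch below logs a toy model. The experiment name, metric, and registry name are assumptions, and registering a model presumes a tracking server or Databricks workspace with a model registry.

```python
# Illustrative MLflow tracking + registration; dataset and names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-risk-model")
with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", auc)
    # Registration makes the run's model visible to downstream deployment jobs.
    mlflow.sklearn.log_model(clf, artifact_path="model",
                             registered_model_name="demo-risk-model")
```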
Posted 4 weeks ago
3.0 - 5.0 years
4 - 10 Lacs
Pune
On-site
Job Title: Data Scientist
Location: Pune (Work From Office)
Experience Level: 3–5 Years
Positions Available: 5

Job Description: We are seeking a highly motivated and analytically strong Data Scientist to join our team. The ideal candidate should have experience in building and deploying machine learning models, working with large-scale data processing tools, and delivering actionable insights in real-time systems.

Key Responsibilities:
Develop, train, and deploy ML models using Python, Scikit-Learn, TensorFlow, and PyTorch
Perform real-time inference, anomaly detection, and unsupervised learning on high-volume data (see the sketch after this posting)
Build and manage data pipelines using Spark, Airflow, and messaging systems like Kafka
Work with modern data architectures including ClickHouse, SQL, PostgreSQL, and analytical databases
Implement and optimize risk engines, credit scoring models, and payment system analytics
Collaborate with cross-functional teams to integrate data science solutions into production

Skills Required:
Languages & Libraries: Python, Pandas, Scikit-Learn, TensorFlow, PyTorch
Techniques: Real-Time Inference, Anomaly Detection, Unsupervised Learning
Tools & Frameworks: Spark, Airflow, Kafka
Databases: ClickHouse, SQL, PostgreSQL, Analytical Databases
Domain Knowledge: Risk Engines, Credit Scoring, Payments

Job Types: Full-time, Permanent
Pay: ₹483,299.97 - ₹1,000,000.00 per year
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Application Question(s): How soon can you join? What is your current CTC & expected CTC?
Experience: Scikit-Learn: 5 years (Preferred); TensorFlow: 5 years (Preferred); Pandas: 5 years (Preferred); Kafka: 5 years (Preferred)
Work Location: In person
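The unsupervised anomaly detection work described above could, for instance, start from scikit-learn's IsolationForest. This is a minimal sketch only; the synthetic data and contamination rate are invented for illustration, not taken from any real risk engine.

```python
# Illustrative sketch: unsupervised anomaly detection with scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # typical transactions
outliers = rng.normal(loc=6.0, scale=1.0, size=(20, 4))   # injected anomalies
X = np.vstack([normal, outliers])

detector = IsolationForest(n_estimators=200, contamination=0.02, random_state=42)
detector.fit(X)

scores = detector.decision_function(X)  # lower score = more anomalous
flags = detector.predict(X)             # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(flags == -1)} of {len(X)} records as anomalous")
```

In a real-time setting, the fitted detector would typically be serialized and scored inside the streaming pipeline (e.g., behind Kafka consumers), with periodic refits as the data distribution drifts.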
Posted 4 weeks ago
10.0 years
6 - 8 Lacs
Bengaluru
On-site
Company Description
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description
Ready to make a global impact by industrializing AI? Visa AI as Services (AIaS) operationalizes the delivery of AI and decision intelligence to ensure their ongoing business value. Built with composable AI capabilities, privacy-enhancing computation, and cloud-native platforms, AIaS powers and automates the industrialization of data, models, and applications for predictive and generative AI. Combined with strong governance, AIaS optimizes the performance, scalability, interpretability, and reliability of AI models and services. If you want to be in the exciting payment and AI space, learn fast, and make big impacts, Visa AI as Services is an ideal place for you!

This role is for a Lead ML Engineer – Visa Feature Platform, with a strong development background, whose primary objective will be to extend our AI as a Service platform to provide faster time to market and to build sophisticated feature engineering for training and inference, while enhancing and optimizing our existing codebase and development procedures, as well as developing new solutions. We are seeking a strong tech leader and architect with a solid background in data engineering and AI/ML production systems. The ideal candidate will have a strong mix of hands-on technical knowledge and leadership skills, with the ability to inspire and drive the team towards achieving our technical objectives. They should be hands-on, knowledgeable about cloud technologies, business drivers, and emerging AI/ML trends, and experienced with Big Data and streaming platforms. The role demands a proactive leader who can guide the team through uncertainty, educate stakeholders, and influence decisions. The candidate should possess strong interpersonal skills, excellent written and verbal communication, and the ability to manage complex projects and deadlines. A problem-solving mindset and a hands-on approach are key to succeeding in this role. This role offers ample opportunities for learning and growth, and the chance to be part of delivering the next big thing for our AI as Services team. If you are experienced and passionate about cloud technology, AI, and machine learning, and are excited about making a significant impact, we would love to hear from you.

Essential Functions:
Collaborate with project team members (Product Managers, Architects, Analysts, Software Engineers, Project Managers, etc.) to ensure the development and implementation of new data-driven business solutions.
Drive the development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Responsibilities span all phases of solution development.
Collaborate with senior technical staff and PMs to identify, document, plan contingencies for, track, and manage risks and issues until all are resolved.
Present technical solutions, capabilities, considerations, and features in business terms. Effectively communicate status, issues, and risks in a precise and timely manner.
Coach and mentor junior team members and evolve the team talent pipeline.

This is a hybrid position. Expectation of days in the office will be confirmed by your Hiring Manager.

Qualifications
Basic Qualifications:
12 or more years of relevant work experience with a bachelor's degree, or at least 10 years of experience with an advanced degree (e.g., Master's, MBA, JD, MD), or 8 years of work experience with a PhD
4+ years of related hands-on experience in delivering robust and scalable solutions on Big Data applications
Experience in at least one or two of the following: Rust, Python, Golang, Java, or C/C++. Experience with Rust, Flink, Spark, NoSQL, or Kafka highly preferred.
Web service standards and related patterns (REST, gRPC).
Experience implementing solutions for low-latency, distributed services using open standard technologies, e.g., streaming systems, NoSQL, and containers.
Exposure to leading-edge areas such as Machine Learning, Big Data, Distributed Systems, or SRE.

Additional Information
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
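To illustrate the kind of streaming feature computation a feature platform role like this touches (not Visa's actual implementation), here is a hedged Spark Structured Streaming sketch that reads a hypothetical Kafka topic and maintains simple windowed features. The topic name, fields, and window sizes are assumptions.

```python
# Hedged sketch: windowed features from a Kafka stream with Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("feature-stream-sketch").getOrCreate()

schema = StructType([
    StructField("card_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
       .option("subscribe", "transactions")                   # assumed topic
       .load())

txns = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
           .select("t.*"))

# 5-minute spend and count per card, a typical training/inference feature pair.
features = (txns
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes"), "card_id")
            .agg(F.sum("amount").alias("spend_5m"),
                 F.count("*").alias("txn_count_5m")))

query = (features.writeStream
         .outputMode("update")
         .format("console")   # a real pipeline would write to an online feature store
         .start())
query.awaitTermination()
```

In practice the sink would be a low-latency store (and the hot path might be implemented in Flink or Rust, as the qualifications suggest); the console sink here just keeps the sketch self-contained.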
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
• Take full ownership of AI development — from research and prototyping to deployment and maintenance of language model-based applications.
• Design and build intelligent AI agents using LLMs (e.g., GPT, Claude, LLaMA) that can perform reasoning, automation, and tool-based tasks.
• Architect and implement systems for prompt engineering, context management, and memory in AI agents.
• Select, fine-tune, or self-host language models to meet specific business needs, balancing cost, performance, and scalability.
• Build integrations with internal tools, APIs, and databases to enable agents to act autonomously in real-world use cases.
• Establish best practices for AI safety, testing, evaluation, and user experience, ensuring reliability and ethical outputs.
• Work closely with leadership to identify high-impact AI opportunities across the business and define the AI roadmap.
• Set up and manage infrastructure for running models (cloud, on-prem, or hybrid), including inference optimization and latency reduction.
• Stay current with developments in generative AI, multi-agent systems, and autonomous AI, bringing innovation to the organization.
• Document processes and build internal tools to scale future AI initiatives and potentially onboard additional team members.

Job Types: Full-time, Part-time, Fresher, Freelance, Volunteer
Pay: ₹180,000.00 - ₹1,150,329.84 per year
Work Location: In person
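As a rough illustration of the agent design work listed above, the following sketch shows a minimal tool-calling loop around a placeholder `call_llm` function. The system prompt, the single stub tool, and the JSON action convention are assumptions for the sketch, not a prescribed architecture; any model provider (GPT, Claude, a self-hosted LLaMA) could sit behind `call_llm`.

```python
# Minimal, illustrative agent loop with a stubbed LLM call and one stub tool.
import json

def call_llm(messages: list[dict]) -> str:
    """Placeholder: send chat messages to an LLM and return its text reply."""
    raise NotImplementedError("wire this to your model provider")

TOOLS = {
    # Hypothetical tool: in a real system this would hit an internal API or database.
    "search_orders": lambda query: json.dumps({"orders": [], "query": query}),
}

SYSTEM_PROMPT = (
    "You are an operations assistant. If you need data, reply with JSON "
    '{"tool": "<name>", "input": "<argument>"}. Otherwise reply with plain text.'
)

def run_agent(user_request: str, max_steps: int = 5) -> str:
    # The message list doubles as short-term memory / context management.
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        try:
            action = json.loads(reply)
            tool = TOOLS[action["tool"]]
        except (json.JSONDecodeError, KeyError):
            return reply  # the model answered directly, no tool call needed
        observation = tool(action["input"])
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {observation}"})
    return "Stopped after reaching the step limit."
```

Safety and evaluation practices mentioned in the posting (output validation, step limits, logging every tool call) would wrap around exactly this loop.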
Posted 4 weeks ago
7.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: AI/ML Engineer – Generative AI Specialist
Experience: 7-9 Years
Location: Chennai/On-site
Job Type: Full-Time

About the Role: We are looking for a highly motivated AI/ML Engineer with 7–9 years of experience to join our growing team. This role focuses on building and optimizing solutions using Generative AI. You'll work at the intersection of AI research and real-world application development, helping us deliver next-generation intelligent systems for enterprise use cases.

Responsibilities:
• Design and build ML models with a focus on LLMs and Generative AI.
• Conduct research that contributes to the state-of-the-art in LLM architectures.
• Apply full and parameter-efficient fine-tuning (LoRA, QLoRA, Adapter, Prefix Tuning), instruction tuning, RLHF, and multi-task learning.
• Apply model compression (AWQ, GPTQ, GPTQ-for-LLaMA) and optimized inference engines (vLLM, DeepSpeed, FP6-LLM).
• Develop and maintain scalable APIs to serve model outputs in production environments.
• Collaborate with cross-functional teams (product, data, and engineering) to identify AI opportunities and deliver solutions.
• Implement Retrieval-Augmented Generation (RAG) pipelines and integrate vector databases (e.g., FAISS, Pinecone); a minimal retrieval sketch follows this posting.
• Database knowledge: experience with PostgreSQL.
• Research and apply the latest techniques in prompt engineering, model compression, and optimization.
• Monitor, evaluate, and continuously improve model performance and reliability.

Required Skills & Experience:
• Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
• 7-9 years of hands-on experience in AI/ML development and expertise in agentic approaches.
• Strong programming skills in Python and proficiency with ML frameworks such as TensorFlow, PyTorch, and Keras.
• Experience in distributed training, NVIDIA AI platforms, and cloud/on-premise infrastructure.
• Familiarity with integrating MCP-based architectures into broader system workflows, ensuring semantic consistency and interoperability.
• Experience with performance tuning, caching strategies, and cost optimization, particularly in the context of production-grade LLM deployments.
• Experience with LLMs, including fine-tuning, prompt engineering, and using frameworks like Hugging Face Transformers.
• Familiarity with GenAI techniques and tools such as LangChain, LangGraph, LLMOps, LoRA, and PEFT.
• Experience deploying models using FastAPI, Flask, Docker, or on cloud platforms (AWS/GCP/Azure).
• Understanding of NLP concepts, deep learning architectures, and transformer models.

Preferred Qualifications:
• B.Tech or M.Tech in AI, Data Science, Gen AI, or Computer Science; real Gen AI project/product development experience or fine-tuned LLM experience is an added advantage.
• Knowledge of AI guardrails, compliance frameworks (e.g., Microsoft AI Guidance), and responsible AI practices.
• Previous work on chatbots, copilots, or AI assistants.
• Publications, contributions to open-source projects, or participation in AI competitions.
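The retrieval step of a RAG pipeline mentioned above might look roughly like the sketch below. The embedding model name, documents, and query are illustrative; Pinecone or another vector store could stand in for FAISS.

```python
# Sketch of the retrieval step in a RAG pipeline (FAISS + sentence embeddings).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Invoices can be downloaded from the billing portal.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

# Inner product on normalized vectors == cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "How long do refunds take?"
q_vec = encoder.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)  # top-2 passages

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM
```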
Posted 4 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Experience in SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
Experience in training, building, and deploying ML and DL models
Experience in Hugging Face, Chainlit, React
Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
Ability to adapt quickly with open-source products & tools to integrate with ML platforms
Building and deploying models (Scikit-learn, TensorFlow, PyTorch, etc.)
Developing and deploying in on-prem & cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI
Experience with LLM models like PaLM, GPT-4, Mistral (open-source models)
Work through the complete lifecycle of Gen AI model development, from training and testing to deployment and performance monitoring
Developing and maintaining AI pipelines with multiple modalities such as text, image, and audio
Have implemented real-world chatbots or conversational agents at scale, handling different data sources
Experience in developing image generation/translation tools using latent diffusion models like Stable Diffusion or InstructPix2Pix
Expertise in handling large-scale structured and unstructured data; efficiently handled large-scale generative AI datasets and outputs
Familiarity with Docker tools and pipenv/conda/poetry environments
Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
High familiarity with DL theory/practices in NLP applications
Comfort level coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy, and Pandas
Comfort level using two or more open-source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
Have implemented real-world BERT or other fine-tuned transformer models (sequence classification, NER, or QA), from data preparation and model creation through inference and deployment (see the fine-tuning sketch after this posting)
Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
Good working knowledge of other open-source packages to benchmark and derive summaries
Experience in using GPU/CPU on cloud and on-prem infrastructure
Skillset to leverage cloud platforms for Data Engineering, Big Data, and ML needs
Use of Docker (experience with experimental Docker features, docker-compose, etc.)
Familiarity with orchestration tools such as Airflow and Kubeflow
Experience in CI/CD and infrastructure-as-code tools like Terraform
Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus.

Responsibilities
Design NLP/LLM/GenAI applications/products by following robust coding practices
Explore SoTA models/techniques so that they can be applied to automotive industry use cases
Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.)
Converge multiple bots into super apps using LLMs with multimodalities
Develop agentic workflows using AutoGen, Agent Builder, LangGraph
Build modular AI/ML products that can be consumed at scale

Qualifications
Bachelor's or Master's Degree in Computer Science or Engineering. Having undergone modern NLP/LLM courses or participation in open competitions is an added advantage.
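As a rough illustration of the transformer fine-tuning experience this posting asks for, here is a hedged sketch using Hugging Face Transformers for sequence classification. The model, dataset, and hyperparameters are placeholders, not the team's actual pipeline.

```python
# Hedged sketch: fine-tune a small transformer for sequence classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # illustrative choice of base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # public sentiment dataset, used only as an example

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./bert-seq-clf",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),  # small slice for the sketch
    eval_dataset=encoded["test"].select(range(500)),
)
trainer.train()
print(trainer.evaluate())
```

The same skeleton carries over to NER or QA by swapping the head (AutoModelForTokenClassification / AutoModelForQuestionAnswering) and the dataset; deployment would typically wrap the trained model behind a REST API in Docker/Kubernetes, as the responsibilities above describe.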
Posted 4 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
🏢 About Us
We build cutting-edge, next-generation AI products designed to simplify and enhance everyday experiences. Our mission is to apply the latest in computer vision and deep learning to create seamless, intelligent tools that deliver real-world impact at scale.

🧠 Role Description
We're developing a powerful document scanning app and are looking for a talented Senior Computer Vision/OpenCV Engineer to help us build AI-driven document enhancement filters, similar to CamScanner's Magic Pro Filter. This is an exciting opportunity for someone passionate about transforming traditional visual processing into lightweight, production-ready AI systems, especially for mobile platforms like Android/iOS.

🎯 Responsibilities
As part of this role, you will:
Remove document shadows
Enhance brightness & contrast
Preserve original colors where needed
Output clean, high-quality black & white scans (see the OpenCV sketch after this posting)
Build and optimize a TFLite model with inference time under 1.5 seconds
Write efficient pre- and post-processing logic using OpenCV and lightweight ML
Build more AI models around document scanning enhancement use cases
Deliver well-documented, clean code ready for mobile deployment
Work closely with our Android team to support seamless integration

✅ Requirements
3+ years of experience in Computer Vision, Image Processing, and OpenCV
Solid background in building and deploying TensorFlow Lite (TFLite) models for Android
Strong understanding of model optimization and mobile inference speed
Ability to deliver visually sharp, fast, and robust filters
Experience replicating or improving filters from apps like CamScanner is a strong plus

🌟 Nice to Have
Android app integration experience (Java/Kotlin + native CV/ML)
Experience with hybrid CV + CNN filter design
Knowledge of GANs or advanced enhancement models
Familiarity with OCR-related preprocessing pipelines

🔍 What We're Looking For
We're aiming to match the visual quality and real-time performance of CamScanner's Magic Pro Filter, delivering clean, vibrant, and professional document scans.

🚀 Why Join Us?
Be part of a fast-moving AI product team
Solve real-world visual problems for 300K+ users
Work on cutting-edge vision pipelines and deploy at scale
Flexible, remote-first collaboration

📩 Interested?
If this sounds like a great fit, we'd love to connect! Please share your portfolio, GitHub, or any relevant past work you've done in this space.
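A classical starting point for the shadow removal and black & white output described above is illumination flattening plus adaptive thresholding in OpenCV. The file names and kernel sizes below are assumptions for a sketch; a learned (TFLite) filter would typically build on or replace this baseline.

```python
# Illustrative sketch: classical OpenCV pass for a clean black & white document scan.
import cv2
import numpy as np

img = cv2.imread("page.jpg")                      # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Estimate the illumination/shadow field with dilation + heavy blur,
# then divide it out to flatten the background (soft shadow removal).
background = cv2.medianBlur(cv2.dilate(gray, np.ones((7, 7), np.uint8)), 21)
flattened = cv2.divide(gray, background, scale=255)

# Clean black & white output via adaptive thresholding.
bw = cv2.adaptiveThreshold(flattened, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, blockSize=31, C=15)

cv2.imwrite("page_bw.png", bw)
```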
Posted 4 weeks ago