
1746 Inference Jobs - Page 20

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

175.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?

Responsible for contacting clients with overdue accounts to secure the settlement of the account, as well as preventive work to avoid future overdues with accounts that have a high exposure.

Launched in 2012, Amex Offers is a digital advertising platform that connects merchants and brands with the tens of millions of American Express Card Members across the globe. The Amex Offers team develops strategic marketing partnerships that deliver outstanding value for merchants and advertisers to reach these high-spending Card Members in the digital channels where they engage with American Express, delivering deep insights and improving results. Through this complete marketing solution, we can help advertisers get laser-focused on who our mutual customers are, what they want, and how we can meet their needs.

We are seeking a commercially minded Director of International Advertiser Analytics & Optimization to lead the strategic analytics function supporting our merchant partnerships and the scaling of proven Amex Offers strategies. In this highly strategic role, you will work with Business Development, Customer Success and Product to develop compelling data-driven insights that unlock growth opportunities for merchants, optimize campaign performance, and shape our CLO value proposition. The role will interface with stakeholders in Australia, Canada, the United Kingdom and the United States.

This role requires a strong focus on defining efficient growth opportunities and critical market- and advertiser-specific nuances, plus the ability to lead analytics capacity prioritization and support global top-advertiser prioritization. Core to this are stakeholder management, clear communication, and data-driven perspectives. Ideal candidates will have analytics or data consulting backgrounds, experience working directly with transaction-based data, and familiarity with CLO platforms or bank loyalty ecosystems.

Key Responsibilities:
- Act as the analytics lead in strategic merchant/advertiser conversations, providing pre-sales insights, performance forecasts and ROI modelling in our top markets.
- Deliver post-campaign analysis and insights that quantify lift, incrementality and customer behavior change.
- Ensure clarity and integrity in data interpretation presented to merchants and internal stakeholders.
- Partner closely with Business Development, Customer Success, Product and broader analytics teams to refine and evolve the Amex Offers & Digital Media value proposition, and deliver insights more broadly.
- Influence go-to-market strategies by identifying vertical-specific trends and merchant priorities.
- Act as a key partner to the broader Analytics, Product, Commercial & Strategy teams, ensuring measurement frameworks are embedded across campaign planning and performance workflows.
- Lead and mentor a team of 12+ India-based analysts while fostering a strong technical culture of rigor, transparency and intellectual curiosity.

Minimum Qualifications:
- 8+ years in media analytics, reporting, or performance measurement, ideally in ad tech, martech, or digital media.
- Strong understanding of digital media KPIs, CLO campaign dynamics, and marketing measurement techniques.
- Fluency in SQL and proficiency in Python; experience with BI tools (e.g. Tableau & Power BI).
- Excellent communication and storytelling skills; comfortable presenting to C-level merchant/advertiser clients.
- Strong foundation in statistical modelling, experimentation, and causal inference.
- Strong leadership skills and experience leading a high-output analytics or insights team.
- Excellent communication and stakeholder management abilities.
- Self-starter with the ability to drive insights from data, provide actionable steps and drive results.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
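The lift and incrementality analysis this role calls for reduces, at its simplest, to comparing a treated group against a holdout. A minimal sketch in Python; the spend figures and function name are hypothetical, and a real analysis would add significance testing and selection-bias controls:

```python
# Illustrative sketch: campaign lift from a treatment/control split.
# All figures are hypothetical examples.

def campaign_lift(treated_spend_per_cm, control_spend_per_cm):
    """Absolute and relative lift of treated Card Members vs. a holdout group."""
    absolute = treated_spend_per_cm - control_spend_per_cm
    relative = absolute / control_spend_per_cm
    return absolute, relative

# Hypothetical example: offer recipients spent $120 on average,
# the holdout group $100.
abs_lift, rel_lift = campaign_lift(120.0, 100.0)
print(f"absolute lift: ${abs_lift:.2f}, relative lift: {rel_lift:.0%}")
# → absolute lift: $20.00, relative lift: 20%
```

The relative figure is what typically feeds an advertiser-facing ROI model, since it normalizes away differences in baseline spend across merchant verticals.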

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Description - AI Data Scientist
Location: Remote
Department: Data & AI Engineering
Employment Type: Full-time
Experience Level: Mid-level

About the Role: We are seeking an experienced AI Data Engineer to design, build, and deploy data pipelines and ML infrastructure to power scalable AI/ML solutions. This role involves working at the intersection of data engineering, MLOps, and model deployment, supporting the end-to-end lifecycle from data ingestion to model production.

Key Responsibilities:

Data Engineering & Development
- Design, develop, and train AI models to solve complex business problems and enable intelligent automation.
- Design, develop, and maintain scalable data pipelines and workflows for AI/ML applications.
- Ingest, clean, and transform large volumes of structured and unstructured data from diverse sources (APIs, streaming, databases, flat files).
- Build and manage data lakes, data warehouses, and feature stores.
- Prepare training datasets and implement data preprocessing logic.
- Perform data quality checks, validation, lineage tracking, and schema versioning.

Model Deployment & MLOps
- Package and deploy AI/ML models to production using CI/CD workflows.
- Implement model inference pipelines (batch or real-time) using containerized environments (Docker, Kubernetes).
- Use MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI) for model tracking, versioning, and deployment.
- Monitor deployed models for performance, drift, and reliability.
- Integrate deployed models into applications and APIs (e.g., REST endpoints).

Platform & Cloud Engineering
- Manage cloud-based infrastructure (AWS, GCP, or Azure) for data storage, compute, and ML services.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Optimize pipeline performance and resource utilization for cost-effectiveness.

Requirements:

Must-Have Skills
- Bachelor's/Master's in Computer Science, Engineering, or related field.
- 2+ years of experience in data engineering, ML engineering, or backend infrastructure.
- Proficient in Python, SQL, and data processing frameworks (e.g., Spark, Pandas).
- Experience with cloud platforms (AWS/GCP/Azure) and services like S3, BigQuery, Lambda, or Databricks.
- Hands-on experience with CI/CD, Docker, and container orchestration (Kubernetes, ECS, EKS).

Preferred Skills
- Experience deploying ML models using frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Familiarity with API development (Flask/FastAPI) for serving models.
- Experience with Airflow, Prefect, or Dagster for orchestrating pipelines.
- Understanding of DevOps and MLOps best practices.

Soft Skills:
- Strong communication and collaboration with cross-functional teams.
- Proactive problem-solving attitude and ownership mindset.
- Ability to document and communicate technical concepts clearly.
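The data-quality checks and validation work described in this listing can be sketched in plain Python. A minimal illustration only; the field names and rules are hypothetical, and production pipelines would typically use a schema library or a framework's built-in validators:

```python
# Minimal sketch of a data-quality gate for an ingestion pipeline.
# Field names and validation rules are hypothetical examples.

def validate_record(record, required_fields=("user_id", "event_ts", "amount")):
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field in required_fields:
        if record.get(field) is None:
            problems.append(f"missing field: {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("amount must be non-negative")
    return problems

good = {"user_id": 1, "event_ts": "2025-01-01T00:00:00Z", "amount": 9.5}
bad = {"user_id": 2, "amount": -3}
print(validate_record(good))  # → []
print(validate_record(bad))   # → ['missing field: event_ts', 'amount must be non-negative']
```

Records that fail such a gate are usually routed to a quarantine table rather than dropped, so lineage tracking stays intact.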

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: GenAI Developer
Key Skills: GenAI, Python, GenAI platforms, LLM, MLOps, cloud platforms, Docker, Kubernetes, CI/CD pipelines, GenAI frameworks, LangChain, AutoGen, LlamaIndex, CrewAI
Job Locations: Hyderabad, Bangalore
Experience: 3 - 8 Years
Budget: 7 - 15 LPA
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Interview Mode: 2 Rounds of Technical Interview + Client Round

Job Description: We are seeking a highly skilled and motivated GenAI Infrastructure Architect and Developer to lead the design, development, and deployment of scalable GenAI platforms. This role will focus on building robust infrastructure to support agent-based systems, LLM orchestration, and real-time AI-driven automation across enterprise environments.

Key Responsibilities:
- Architect and implement GenAI infrastructure using cloud-native technologies (AWS, Azure, GCP).
- Design and deploy scalable, secure, and resilient GenAI pipelines for model training, inference, and monitoring.
- Collaborate with cross-functional teams to integrate GenAI services into enterprise workflows (e.g., automation, observability, data pipelines).
- Optimize performance, cost, and reliability of GenAI workloads.
- Ensure compliance with security, governance, and data privacy standards.

Required Skills:
- Strong experience with cloud platforms (AWS Bedrock, Azure AI, GCP Vertex AI).
- Proficiency in containerization (Docker, Kubernetes) and CI/CD pipelines.
- Familiarity with GenAI frameworks like LangChain, AutoGen, LlamaIndex, or CrewAI.
- Hands-on experience with observability, logging, and monitoring tools.
- Understanding of LLM lifecycle management, prompt engineering, and fine-tuning.
- Knowledge of data engineering and MLOps practices.

Interested candidates, please share your CV to pnomula@people-prime.com
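Much of the LLM-orchestration work this listing describes comes down to wrapping model calls with retries and fallbacks. A framework-free sketch under stated assumptions: `call_model` is a stub standing in for a real provider SDK call, and the model names are invented:

```python
# Minimal sketch of resilient LLM inference with retry and model fallback.
# `call_model` is a stand-in for a real provider SDK call; model names
# are hypothetical.

def call_model(model, prompt):
    # Stub behavior: pretend the primary model is down.
    if model == "primary-model":
        raise RuntimeError("model unavailable")
    return f"[{model}] echo: {prompt}"

def generate(prompt, models=("primary-model", "fallback-model"), retries=2):
    """Try each model up to `retries` times before giving up."""
    last_error = None
    for model in models:
        for _ in range(retries):
            try:
                return call_model(model, prompt)
            except RuntimeError as exc:
                last_error = exc
    raise RuntimeError(f"all models failed: {last_error}")

print(generate("summarize this ticket"))
# → [fallback-model] echo: summarize this ticket
```

Frameworks like LangChain provide this pattern out of the box, but the underlying control flow is the same.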

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

If you haven't built and maintained AI/LLM systems in production, developed full-stack applications with complex backend architectures, or debugged distributed systems under pressure, we kindly ask that you don't apply. We need a hands-on developer with strong support engineering experience who codes daily while ensuring system reliability.

About Us
We're an AI-first startup revolutionizing speech-to-text technology through cutting-edge LLM integration and machine learning pipelines. Our platform combines advanced AI models with real-time processing capabilities, serving enterprise clients who demand both accuracy and reliability. As we scale our AI infrastructure globally, we need a technical leader who can both build and support our systems.

Role Overview
We're seeking a Senior Support Engineer who is primarily a hands-on full-stack developer with deep AI/LLM infrastructure experience and production support expertise. This isn't a traditional support role: you'll spend 70% of your time coding and building systems and 30% on support and reliability engineering. You'll architect and implement AI/LLM integrations, develop full-stack applications, optimize backend performance, and maintain production systems. This role requires someone who can write production code, debug complex distributed systems, and take ownership of both development and operational excellence.

Key Responsibilities
1. AI/LLM Infrastructure Development
- Design and implement LLM integration pipelines (OpenAI, Anthropic, local models)
- Build AI model inference systems with real-time processing capabilities
- Develop prompt engineering frameworks and model optimization systems
- Create AI/ML monitoring and evaluation frameworks
- Implement vector databases and semantic search capabilities
- Build automated model training and deployment pipelines
2. Full-Stack Development (Backend Focus)
- Develop scalable backend APIs using Python/Node.js/Go
- Design and optimize database architectures (PostgreSQL, MongoDB, Redis)
- Build microservices architectures with proper service communication
- Implement authentication, authorization, and security frameworks
- Create data processing pipelines for audio/text transcription workflows
- Develop real-time WebSocket and event-driven systems
3. Production Support & System Reliability
- Monitor and maintain production AI/LLM systems with 99.9% uptime
- Respond to critical incidents and perform root cause analysis
- Debug complex distributed system issues across the full stack
- Implement comprehensive monitoring, alerting, and observability systems
- Maintain CI/CD pipelines and automated deployment processes
- Create technical documentation and incident response procedures

Technical Requirements
1. AI/LLM Infrastructure Experience
- 3+ years hands-on experience with LLM APIs (OpenAI, Anthropic, Hugging Face)
- Production experience with AI model deployment and inference systems
- Knowledge of vector databases (Pinecone, Weaviate, Chroma) and embeddings
- Experience with ML frameworks (PyTorch, TensorFlow, Transformers)
- Understanding of prompt engineering, RAG systems, and AI evaluation metrics
2. Backend Development Expertise
- 5+ years full-stack development with strong backend focus
- Expert-level Python, Node.js, or Go for backend services
- Advanced database optimization (PostgreSQL, MongoDB, Redis)
- Microservices architecture and API design patterns
- Experience with message queues (RabbitMQ, Apache Kafka)
- Cloud infrastructure expertise (AWS, GCP, Azure)
3. Production Support Experience
- 3+ years maintaining production systems under high load
- Incident response and on-call rotation experience
- Proficiency with monitoring tools (Datadog, New Relic, Grafana)
- Experience with containerization (Docker, Kubernetes)
- Knowledge of CI/CD pipelines and Infrastructure as Code
4. Full-Stack Capabilities
- Frontend development with React, Vue.js, or Angular
- Understanding of modern web technologies and performance optimization
- Experience with real-time applications and WebSocket implementation
- Mobile development experience (React Native, Flutter) preferred

Preferred Qualifications
- Experience with speech-to-text, NLP, or audio processing systems
- Background in fintech, healthcare, or regulated industries
- Contributions to open-source AI/ML projects
- Experience with startup environments and rapid scaling
- DevOps and infrastructure automation experience

What You'll Build
- AI-powered transcription services with multi-model inference
- Real-time audio processing pipelines with LLM integration
- Scalable backend APIs serving millions of requests
- Monitoring dashboards for AI model performance and system health
- Automated deployment systems for AI/ML models
- Full-stack applications for enterprise clients

Technical Environment
- AI/ML Stack: OpenAI GPT-4, Anthropic Claude, Hugging Face models, PyTorch
- Backend: Python/FastAPI, Node.js, PostgreSQL, Redis, Docker, Kubernetes
- Cloud: AWS (Lambda, ECS, RDS, S3), infrastructure automation with Terraform
- Monitoring: Datadog, Grafana, ELK stack, custom AI model monitoring
- Frontend: React, TypeScript, modern web frameworks

Working Arrangements
- 100% remote, full-time position with rotating shift schedule for global engineering support coverage
- Engineering support coverage across multiple time zones (building toward 24/7 coverage as the team grows)
- Collaborative environment with structured handoffs between regional support teams
- Reasonable on-call responsibilities with fair rotation
- Modern collaboration tools, comprehensive documentation systems, and remote-first culture

What We Offer
- Competitive compensation package
- Opportunity to work with cutting-edge AI technology and solve complex technical challenges at scale
- Supportive team culture despite global support requirements
- Clear career growth path toward senior technical leadership, specialized expertise, and architectural roles

How to Apply
Submit your resume with a cover letter addressing:
- Your hands-on experience building AI/LLM systems in production
- Specific examples of full-stack applications you've developed
- Your approach to maintaining production systems under pressure
- Experience with both development and support engineering responsibilities
- Examples of complex backend optimization or distributed system debugging

Include a GitHub profile or portfolio demonstrating:
- AI/ML projects with real-world applications
- Full-stack development capabilities
- Production system monitoring and reliability engineering

We're looking for a technical leader who can build our AI infrastructure while ensuring operational excellence. If you're passionate about both creating and maintaining cutting-edge AI systems, we'd love to hear from you.
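The semantic-search capability this role mentions rests on comparing embedding vectors by cosine similarity. A toy sketch in plain Python; the three-dimensional "embeddings" are made up for illustration, whereas real systems use model-generated vectors stored in a vector database:

```python
import math

# Toy semantic search: rank documents by cosine similarity to a query
# vector. The 3-d vectors are hypothetical stand-ins for real embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "billing FAQ":   (0.9, 0.3, 0.0),
    "audio codecs":  (0.1, 0.9, 0.2),
    "refund policy": (0.95, 0.05, 0.0),
}
query = (1.0, 0.0, 0.0)  # pretend this embeds "how do I get a refund?"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # → refund policy
```

Vector databases such as Pinecone or Chroma do essentially this ranking at scale, with approximate-nearest-neighbor indexes replacing the brute-force sort.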

Posted 3 weeks ago

Apply

0.0 - 12.0 years

0 Lacs

Gurugram, Haryana

On-site

Associate Director, ML Engineering
Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India
Information Technology
317386

Job Description

About The Role: Grade Level (for internal use): 12

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You'll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative.

What's in it for you:
- Be part of a global company and build solutions at enterprise scale
- Lead and grow a technically strong ML engineering function
- Collaborate on and solve high-complexity, high-impact problems
- Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations)

Key Responsibilities:
- Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming)
- Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms
- Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions
- Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback
- Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems
- Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement
- Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs

What We're Looking For:
- 8-12 years' professional software engineering experience with a strong MLOps focus
- Expert in Python and Apache for large-scale data processing
- Deep experience deploying and operating ML pipelines on AWS or GCP
- Hands-on proficiency with container/orchestration tooling
- Solid understanding of the full ML model lifecycle and CI/CD principles
- Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow)
- Strong OOP design patterns, Test-Driven Development, and enterprise system architecture
- Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets
- Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes
- Excellent problem-solving, debugging, and performance-tuning abilities
- Ability to communicate technical change clearly to non-technical audiences

Nice to have:
- Redis, Celery, SQS and Lambda based event-driven pipelines
- Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale
- Experience with Apache Avro and Apache
- Familiarity with Java and/or .NET Core (C#)

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH103.2 - Middle Management Tier II (EEO Job Group)
Job ID: 317386
Posted On: 2025-06-30
Location: Gurgaon, Haryana, India
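The batch ETL design this role centers on reduces to an extract/transform/load skeleton. A deliberately tiny, framework-free sketch: in practice each stage would be an orchestrated task in a tool like Airflow, and the record fields here are hypothetical:

```python
# Skeleton of a batch ETL job: extract raw records, transform them,
# load the result. Field names and sample data are hypothetical.

def extract():
    # Stand-in for reading from an API, queue, or database.
    return [{"ticker": "abc", "price": "10.5"},
            {"ticker": "xyz", "price": "n/a"}]

def transform(rows):
    # Normalize tickers, coerce prices, drop unparseable rows.
    out = []
    for row in rows:
        try:
            out.append({"ticker": row["ticker"].upper(),
                        "price": float(row["price"])})
        except ValueError:
            continue  # in production: route to a dead-letter store
    return out

def load(rows, sink):
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)
# → 1 [{'ticker': 'ABC', 'price': 10.5}]
```

Keeping each stage a pure function of its input is what makes the pipeline easy to retry, monitor, and roll back, which is the same property the CI/CD and MLOps workflows above depend on.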

Posted 3 weeks ago

Apply


3.0 years

0 Lacs

Hyderabad, Telangana

On-site

Hyderabad, Telangana, India Job Type Full Time

About the Role
We are seeking a highly skilled and visionary Senior Embedded Systems Architect to lead the design and implementation of next-generation AI-powered embedded platforms. This role demands deep technical proficiency across embedded systems, AI model deployment, hardware–software co-design, and media-centric inference pipelines. You will architect full-stack embedded AI solutions using custom AI accelerators such as Google Coral (Edge TPU), Hailo, BlackHole (Torrent), and Kendryte, delivering real-time performance in vision, audio, and multi-sensor edge deployments. The ideal candidate brings a combination of system-level thinking, hands-on prototyping, and experience in optimizing AI workloads for edge inference. This is a high-impact role where you will influence product architecture, ML tooling, hardware integration, and platform scalability for a range of IoT and intelligent device applications.

Requirements

Key Responsibilities
System Architecture & Design: Define and architect complete embedded systems for AI workloads, from sensor acquisition to real-time inference and actuation. Design multi-stage pipelines for vision/audio inference, e.g., ISP preprocessing → CNN inference → postprocessing. Evaluate and benchmark hardware platforms with AI accelerators (TPU/NPU/DSP) for latency, power, and throughput.
Edge AI & Accelerator Integration: Work with Coral, Hailo, Kendryte, Movidius, and Torrent accelerators using their native SDKs (EdgeTPU Compiler, HailoRT, etc.). Translate ML models (TensorFlow, PyTorch, ONNX) for inference on edge devices using cross-compilation, quantization, and toolchain optimization. Lead efforts in compiler flows such as TVM, XLA, Glow, and custom runtime engines.
Media & Sensor Processing Pipelines: Architect pipelines involving camera input, ISP tuning, video codecs, audio preprocessors, or sensor fusion stacks. Integrate media frameworks such as V4L2, GStreamer, and OpenCV into real-time embedded systems. Optimize for frame latency, buffering, memory reuse, and bandwidth constraints in edge deployments.
Embedded Firmware & Platform Leadership: Lead board bring-up, firmware development (RTOS/Linux), peripheral interface integration, and low-power system design. Work with engineers across embedded, AI/ML, and cloud to build robust, secure, and production-ready systems. Review schematics and assist with hardware–software trade-offs, especially around compute, thermal, and memory design.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Electrical, Computer Engineering, Embedded Systems, or related fields.
Experience: Minimum 5 years in embedded systems design, including at least 3 years of hands-on experience with AI accelerators and ML model deployment at the edge.

Technical Skills Required
Embedded System Design: Strong C/C++, embedded Linux, and RTOS-based development experience. Experience with SoCs and MCUs such as STM32, ESP32, NXP, RK3566/3588, TI Sitara, etc. Cross-architecture familiarity: ARM Cortex-A/M, RISC-V, DSP cores.
ML & Accelerator Toolchains: Proficiency with ML compilers and deployment toolchains: ONNX, TFLite, HailoRT, EdgeTPU compiler, TVM, XLA. Experience with quantization, model pruning, compiler graphs, and hardware-aware profiling.
Media & Peripherals: Integration experience with camera modules, audio codecs, IMUs, and other digital/analog sensors. Experience with V4L2, GStreamer, OpenCV, MIPI CSI, and ISP tuning is highly desirable.
System Optimization: Deep understanding of compute budgeting, thermal constraints, memory management, DMA, and low-latency pipelines. Familiarity with debugging tools: JTAG, SWD, logic analyzers, oscilloscopes, perf counters, and profiling tools.

Preferred (Bonus) Skills
Experience with Secure Boot, TPM, Encrypted Model Execution, or Post-Quantum Cryptography (PQC).
Familiarity with safety standards like IEC 61508 , ISO 26262 , UL 60730 . Contributions to open-source ML frameworks or embedded model inference libraries. Why Join Us? At EURTH TECHTRONICS PVT LTD , you won't just be optimizing firmware — you will architect full-stack intelligent systems that push the boundary of what's possible in embedded AI. Work on production-grade, AI-powered devices for industrial, consumer, defense, and medical applications . Collaborate with a high-performance R&D team that builds edge-first, low-power, secure, and scalable systems . Drive core architecture and set the technology direction for a fast-growing, innovation-focused organization. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. 
AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
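The quantization step this role centers on can be illustrated without any vendor SDK. Below is a minimal sketch of the affine INT8 scheme that toolchains like TFLite and HailoRT apply, written in plain NumPy; the function names are illustrative and not part of any accelerator toolchain, which would also add per-channel scales and calibration data:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine (asymmetric) post-training quantization of a float tensor to INT8."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # real zero must stay representable
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(64, 64).astype(np.float32)
q, s, z = quantize_int8(weights)
recon = dequantize(q, s, z)
print(float(np.abs(weights - recon).max()))  # reconstruction error bounded by ~scale
```

The round trip loses at most about one quantization step per weight, which is why accuracy is usually recovered with calibration rather than retraining.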

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana

On-site

Hyderabad, Telangana, India Job Type Full Time

About the Role
We are looking for a hands-on and technically proficient Embedded Software Team Lead to drive the development of intelligent edge systems that combine embedded firmware, machine learning inference, and hardware acceleration. This role is perfect for someone who thrives at the intersection of real-time firmware design, AI model deployment, and hardware-software co-optimization. You will lead a team delivering modular, scalable, and efficient firmware pipelines that run quantized ML models on accelerators like Hailo, Coral, Torrent (BlackHole), Kendryte, and other emerging chipsets. Your focus will include model runtime integration, low-latency sensor processing, OTA-ready firmware stacks, and CI/CD pipelines for embedded products at scale.

Requirements

Key Responsibilities
Technical Leadership & Planning: Own the firmware lifecycle across multiple AI-based embedded product lines. Define system and software architecture in collaboration with hardware, ML, and cloud teams. Lead sprint planning, code reviews, and performance debugging, and mentor junior engineers.
ML Model Deployment & Runtime Integration: Collaborate with ML engineers to port, quantize, and deploy models using TFLite, ONNX, or HailoRT. Build runtime pipelines that connect model inference with real-time sensor data (vision, IMU, acoustic). Optimize memory and compute flows for edge model execution under power/bandwidth constraints.
Firmware Development & Validation: Build production-grade embedded stacks using RTOS (FreeRTOS/Zephyr) or embedded Linux. Implement secure bootloaders, OTA update mechanisms, and encrypted firmware interfaces. Interface with a variety of peripherals including cameras, IMUs, analog sensors, and radios (BLE/Wi-Fi/LoRa).
CI/CD, DevOps & Tooling for Embedded: Set up and manage CI/CD pipelines for firmware builds, static analysis, and validation. Integrate Docker-based toolchains, hardware-in-loop (HIL) testing setups, and simulators/emulators. Ensure codebase quality, maintainability, and test coverage across the embedded stack.

Required Qualifications
Education: BE/B.Tech/M.Tech in Embedded Systems, Electronics, Computer Engineering, or related fields.
Experience: Minimum 4 years of embedded systems experience, including at least 2 years in a technical lead or architect role, with hands-on experience in ML model runtime optimization and embedded system integration.

Technical Skills Required
Embedded Development & Tools: Expert-level C/C++, hands-on with RTOS and Yocto-based Linux. Proficient with toolchains like GCC/Clang, OpenOCD, JTAG/SWD, and logic analyzers. Familiarity with OTA, bootloaders, and memory management (heap/stack analysis, linker scripts).
ML Model Integration: Proficiency in TFLite, ONNX Runtime, HailoRT, or EdgeTPU runtimes. Experience with model conversion, quantization (INT8, FP16), and runtime optimization. Ability to read/modify model graphs and connect to inference APIs.
Connectivity & Peripherals: Working knowledge of BLE, Wi-Fi, LoRa, RS485, USB, and CAN protocols. Integration of camera modules, MIPI CSI, IMUs, and custom analog sensors.
DevOps for Embedded: Hands-on with GitLab/GitHub CI, Docker, and containerized embedded builds. Build system expertise: CMake, Make, Bazel, or Yocto preferred. Experience in automated firmware testing (HIL, unit, integration).

Preferred (Bonus) Skills
Familiarity with machine vision pipelines, ISP tuning, or video/audio codec integration. Prior work on battery-operated devices, energy-aware scheduling, or deep sleep optimization. Contributions to embedded ML open-source projects or model deployment tools.

Why Join Us?
At EURTH TECHTRONICS PVT LTD, we go beyond firmware—we're designing and deploying embedded intelligence on every device, from industrial gateways to smart consumer wearables.
Build and lead teams working on cutting-edge real-time firmware + ML integration . Work on full-stack embedded ML systems using the latest AI accelerators and embedded chipsets . Drive product-ready, scalable software platforms that power IoT, defense, medical , and consumer electronics . How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. 
✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
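To make the OTA-update responsibility above concrete: before an image is flashed, the updater verifies it against integrity metadata carried in the image itself. Here is a minimal sketch of that check; the header layout and MAGIC value are invented for illustration, and a production system would verify a cryptographic signature (a bare hash only detects corruption, not tampering by an attacker who can also rewrite the hash):

```python
import hashlib
import struct

MAGIC = 0x4F54A1  # hypothetical header magic for this sketch

def package_ota(firmware: bytes, version: int) -> bytes:
    """Prepend a minimal header and append a SHA-256 digest over header + payload."""
    header = struct.pack("<II", MAGIC, version)
    body = header + firmware
    return body + hashlib.sha256(body).digest()

def verify_ota(image: bytes):
    """Check the digest and header before handing the payload to the flasher."""
    body, digest = image[:-32], image[-32:]
    if hashlib.sha256(body).digest() != digest:
        raise ValueError("integrity check failed")
    magic, version = struct.unpack_from("<II", body)
    if magic != MAGIC:
        raise ValueError("bad magic")
    return version, body[8:]  # skip the 8-byte header

img = package_ota(b"\x90" * 1024, version=7)
ver, fw = verify_ota(img)
print(ver, len(fw))  # 7 1024
```

The same structure maps directly onto a C bootloader: a fixed-offset header struct, a digest region at the end of the partition, and a verify-before-boot gate.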

Posted 3 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Hyderabad, Telangana

On-site

Hyderabad, Telangana, India Job Type Full Time

About the Role
We are seeking a passionate and skilled Embedded ML Engineer to work on cutting-edge ML inference pipelines for low-power, real-time embedded platforms. You will help design and deploy highly efficient ML models on custom hardware accelerators like Hailo, Coral (Edge TPU), Kendryte K210, and Torrent/BlackHole in real-world IoT systems. This role combines model optimization, embedded firmware development, and toolchain management. You will be responsible for translating large ML models into efficient quantized versions, benchmarking them on custom hardware, and integrating them with embedded firmware pipelines that interact with real-world sensors and peripherals.

Requirements

Key Responsibilities
ML Model Optimization & Conversion: Convert, quantize, and compile models built in TensorFlow, PyTorch, or ONNX to hardware-specific formats. Work with compilers and deployment frameworks like TFLite, HailoRT, EdgeTPU Compiler, TVM, or ONNX Runtime. Use techniques such as post-training quantization, pruning, distillation, and model slicing.
Embedded Integration & Inference Deployment: Integrate ML runtimes in C/C++ or Python into firmware stacks built on RTOS or embedded Linux. Handle real-time sensor inputs (camera, accelerometer, microphone) and pass them through inference engines. Manage memory, DMA transfers, inference buffers, and timing loops for deterministic behavior.
Benchmarking & Performance Tuning: Profile and optimize models for latency, memory usage, compute load, and power draw. Work with runtime logs, inference profilers, and vendor SDKs to squeeze maximum throughput on edge hardware. Conduct accuracy-versus-performance trade-off studies for different model variants.
Testing & Validation: Design unit, integration, and hardware-in-loop (HIL) tests to validate model execution on actual devices. Collaborate with hardware and firmware teams to debug runtime crashes, inference failures, and edge cases. Build reproducible benchmarking scripts and test data pipelines.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Embedded Systems, Computer Science, or related disciplines.
Experience: 2–4 years in embedded ML, edge AI, or firmware development with ML inference integration.

Technical Skills Required
Embedded Firmware & Runtime: Strong experience in C/C++ and basic Python scripting. Experience with RTOS (FreeRTOS, Zephyr) or embedded Linux. Understanding of memory-mapped I/O, ring buffers, circular queues, and real-time execution cycles.
ML Model Toolchains: Experience with TensorFlow Lite, ONNX Runtime, HailoRT, EdgeTPU, uTensor, or TinyML. Knowledge of quantization-aware training or post-training quantization techniques. Familiarity with model conversion pipelines and hardware-aware model profiling.
Media & Sensor Stack: Ability to work with input/output streams from cameras, IMUs, microphones, etc. Experience integrating inference with V4L2, GStreamer, or custom ISP preprocessors is a plus.
Tooling & Debugging: Git, Docker, cross-compilation toolchains (Yocto, CMake). Debugging with SWD/JTAG, GDB, or serial console-based logging. Profiling with memory maps, timing charts, and inference logs.

Preferred (Bonus) Skills
Previous work with low-power vision devices, audio keyword spotting, or sensor fusion ML. Familiarity with edge security (encrypted models, secure firmware pipelines). Hands-on with simulators/emulators for ML testing (Edge Impulse, Hailo's HEF emulator, etc.). Participation in TinyML forums, open-source ML toolkits, or ML benchmarking communities.

Why Join Us?
At EURTH TECHTRONICS PVT LTD, we're not just building IoT firmware—we're deploying machine learning intelligence on ultra-constrained edge platforms, powering real-time decisions at the edge.
Get exposure to full-stack embedded ML pipelines — from model quantization to runtime integration. Work with a world-class team focused on ML efficiency, power optimization, and embedded system scalability .️ Contribute to mission-critical products used in industrial automation, medical wearables, smart infrastructure , and more. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. 
✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
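The ring buffers and circular queues called out in this role are the standard way to hand a fixed window of sensor samples to an inference engine without reallocating memory. A compact Python sketch of the idea (firmware would implement the same logic in C over a static array, with the producer typically fed by an ISR or DMA):

```python
class RingBuffer:
    """Fixed-capacity circular queue; the newest sample overwrites the oldest."""

    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0      # next write position
        self.count = 0     # how many valid samples are stored

    def push(self, item):
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def snapshot(self):
        """Return items oldest-to-newest, e.g. to fill an inference window."""
        start = (self.head - self.count) % self.capacity
        return [self.buf[(start + i) % self.capacity] for i in range(self.count)]

rb = RingBuffer(4)
for sample in range(6):   # six samples into a four-slot buffer
    rb.push(sample)
print(rb.snapshot())      # [2, 3, 4, 5] -- the two oldest samples were overwritten
```

Because writes are O(1) and the storage is fixed at construction time, the same pattern gives deterministic timing on an MCU, which is exactly what the "real-time execution cycles" requirement is about.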

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We’re a fast-moving, early-stage startup working at the cutting edge of AI to solve real-world problems in the future of work space. As part of the broader AI ecosystem, we’re not just following trends—we’re building the next wave of vertical AI infrastructure. We’re seeking a driven Senior Backend Engineer to join our team. This role is perfect for an engineer who thrives in chaos, ships confidently, and can own DevOps, backend infra, and AI-powered feature delivery like a pro. You must be available to start in 1 to 2 weeks! What You’ll Own: Architect, scale, and secure our platform on AWS (EC2, Lambda, RDS, etc.). Automate deployments, logging, monitoring, and backups. Optimize and expand our FastAPI backend and Next.js platform to support new workflows, smart inference, and user-triggered pipelines. Troubleshoot issues across AI APIs, improve prompt strategy, and guide the integration of ML models and data-driven components into the backend. Implement security best practices across APIs, databases, auth flows, and user data. Be the grown-up in the room when it comes to system design. Unblock the team. Ship high-impact features weekly. Handle what the full-stack lead can’t get to. Be the difference between “2 weeks” and “2 days.” You’re a Fit If You: Have 7+ years of experience in backend/platform/devops roles, ideally within a startup or SaaS environment Are fluent in AWS, FastAPI, and CI/CD pipelines Have built and scaled APIs in production and know how to handle rate limits, timeouts, retries, and error handling Can debug and optimize AI-driven workflows, APIs, and prompt-based interfaces Understand data security, auth, encryption, and compliance Enjoy working with founders directly and thrive in high-ownership, low-structure environments Bonus: Experience with Hugging Face, Pandas, Supabase, Postgres , or building AI-first apps
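Since this role calls out rate limits, timeouts, and retries around AI APIs, here is a minimal sketch of the usual pattern: capped exponential backoff with full jitter. The helper name and its defaults are illustrative, not from any specific codebase:

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.5, retriable=(TimeoutError,)):
    """Retry fn() on retriable errors with capped exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            delay = min(8.0, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # jitter avoids thundering herds

# Simulated flaky upstream: fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("upstream timed out")
    return "ok"

print(call_with_retries(flaky, base_delay=0.01))  # ok
```

Only errors listed in `retriable` are retried; a 4xx-style client error should fail fast rather than burn the retry budget.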

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Company Description Triple I is a leading provider of AI-powered tools for ESG reporting automation. The company offers solutions that handle the entire ESG process, from real-time data integration to audit-ready reports aligned with industry regulations. Trusted by teams across various industries, Triple I simplifies ESG reporting to help enterprises move faster, stay compliant, and reduce workloads. Role Description We’re looking for a skilled AI Engineer to build a powerful AI-driven system that can analyze, transform, and standardize raw datasets into a predefined destination schema — with full language normalization, schema mapping, and intelligent data validation. This role is perfect for someone with deep expertise in data pipelines, NLP, and intelligent schema inference who thrives on creating scalable, adaptable solutions that go far beyond hardcoded logic. What You’ll Be Doing Develop a generalizable AI algorithm that transforms raw, unstructured (or semi-structured) source datasets into a standardized schema Automate schema mapping, data enrichment, PK/FK handling, language translation, and duplicate detection Build logic to flag unresolved data, generate an UnresolvedData_Report, and explain confidence or failure reasons Ensure all outputs are generated in English only, regardless of input language Experiment with 2–3 AI/ML approaches (e.g. 
NLP models, rule-based logic, transformers, clustering) and document tradeoffs Deliver all outputs (destination tables) in clean, validated formats (CSV/XLSX) Maintain detailed documentation of preprocessing, validation, and accuracy logic Key Responsibilities Design AI logic to dynamically extract, map, and organize data into 10+ destination tables Handle primary key/foreign key relationships across interconnected tables Apply GHG Protocol logic to assign Scope 1, 2, or 3 emissions automatically based on activity type Build multilingual support: auto-translate non-English input and ensure destination is 100% English Handle duplicate and conflicting records with intelligent merging or flagging logic Generate automated validation logs for transparency and edge case handling
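As one illustration of the schema mapping and unresolved-data reporting described above, a rule-based baseline can be built from fuzzy string matching alone. The destination columns and cutoff here are invented for the example; an ML approach would replace difflib with embedding similarity, but the mapped/unresolved split stays the same:

```python
import difflib

# Hypothetical destination schema for this sketch.
DESTINATION_COLUMNS = ["supplier_name", "activity_type", "emission_scope", "report_date"]

def map_columns(source_columns, cutoff=0.6):
    """Fuzzily map raw source headers onto the destination schema.
    Columns below the confidence cutoff are routed to an unresolved report."""
    mapping, unresolved = {}, []
    for col in source_columns:
        normalized = col.lower().replace(" ", "_")
        match = difflib.get_close_matches(normalized, DESTINATION_COLUMNS,
                                          n=1, cutoff=cutoff)
        if match:
            mapping[col] = match[0]
        else:
            unresolved.append({"column": col, "reason": f"no match above {cutoff}"})
    return mapping, unresolved

mapping, unresolved = map_columns(["Supplier Name", "Activity type", "misc_notes"])
print(mapping)     # {'Supplier Name': 'supplier_name', 'Activity type': 'activity_type'}
print(unresolved)  # [{'column': 'misc_notes', 'reason': 'no match above 0.6'}]
```

The unresolved list is exactly the raw material for an UnresolvedData_Report: each entry records the column and why it failed, so a human (or a stronger model) can review it.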

Posted 3 weeks ago

Apply

0 years

0 Lacs

Delhi, India

On-site

We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI, building scalable models and intelligent systems that power our next-generation sports tech platform. Responsibilities Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding. Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4, etc.) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base. Requirements Strong command of Python and familiarity with C/C++. Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e.g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems. An undergraduate degree in Computer Science, Artificial Intelligence, or a related discipline is preferred (Master's or PhD even more so), as is a strong academic background. This job was posted by Siddhartha Dutta from Tech At Play.
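Much of the detection-evaluation work mentioned above (and the non-maximum suppression step inside detectors like YOLO) reduces to intersection-over-union between boxes. A self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection vs. its ground-truth box, shifted by 5 px on each axis.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.142857... (25 / 175)
```

A detection counts as a true positive when its IoU with a ground-truth box clears a threshold (0.5 is the classic choice), which is the basis of mAP-style evaluation.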

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

As a Senior Machine Learning Engineer, you will be responsible for designing, developing, and deploying cutting-edge models for end-to-end content generation, including AI-driven image/video generation, lip syncing, and multimodal AI systems. You will work on the latest advancements in deep generative modeling to create highly realistic and controllable AI-generated media. Responsibilities Research and Develop: Design and implement state-of-the-art generative models, including Diffusion Models, 3D VAEs, and GANs for AI-powered media synthesis. End-to-End Content Generation: Build and optimize AI pipelines for high-fidelity image/video generation and lip syncing using diffusion and autoencoder models. Speech and Video Synchronization: Develop advanced lip-syncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Real-Time AI Systems: Implement and optimize models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques. Scaling and Production Deployment: Work closely with software engineers to deploy models efficiently on cloud-based architectures (AWS, GCP, or Azure). Collaboration and Research: Stay ahead of the latest trends in deep generative models, diffusion models, and transformer-based vision systems to enhance AI-generated content quality. Experimentation and Validation: Design and conduct experiments to evaluate model performance, improve fidelity, realism, and computational efficiency, and refine model architectures. Code Quality and Best Practices: Participate in code reviews, improve model efficiency, and document research findings to enhance team knowledge-sharing and product development. Requirements Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field. 
3+ years of experience working with deep generative models, including Diffusion Models, 3D VAEs, GANs, and autoregressive models. Strong proficiency in Python and deep learning frameworks such as PyTorch. Expertise in multi-modal AI: text-to-image and image-to-video generation, and audio-to-lip-sync. Strong understanding of machine learning principles and statistical methods. Good to have: experience in real-time inference optimization, cloud deployment, and distributed training. Strong problem-solving abilities and a research-oriented mindset to stay updated with the latest AI advancements. Familiarity with generative adversarial techniques, reinforcement learning for generative models, and large-scale AI model training. Preferred Qualifications Experience with transformers and vision-language models (e.g., CLIP, BLIP, GPT-4V). Background in text-to-video generation, lip-sync generation, and real-time synthetic media applications. Experience in cloud-based AI pipelines (AWS, Google Cloud, or Azure) and model compression techniques (quantization, pruning, distillation). Contributions to open-source projects or published research in AI-generated content, speech synthesis, or video synthesis. This job was posted by Meghna Sidda from TrueFan.
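Of the compression techniques the posting lists (quantization, pruning, distillation), magnitude pruning is the simplest to sketch: zero out the smallest-magnitude weights until a target sparsity is reached. This is the unstructured variant; structured pruning drops whole channels instead, and real pipelines usually fine-tune afterwards to recover accuracy:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude weights so that `sparsity` fraction become zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(256, 256).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.9)
print(float((pruned == 0).mean()))  # close to 0.9
```

At 90% sparsity the weight tensor compresses well with sparse storage formats, which is the memory/latency win the role is after.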

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities
- Ship Micro-services: Build FastAPI services that handle 800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning: Drive the quiz-scoring and AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety: Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally: Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases: GitLab CI/CD + blue-green/canary = multiple safe prod deploys each week.
- Own Reliability: Instrument with Prometheus/Grafana, chase 99.9% uptime, trim infra spend.
- Expose Gen-AI at Scale: Publish LLM inference and vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast: Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod.

Requirements
- 2+ yrs Python back-end experience (FastAPI/Flask).
- Strong with Docker and container orchestration basics.
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS), or GCP (GKE/Compute) in production.
- SQL/NoSQL (Postgres, MongoDB).
- You've built systems from scratch and have solid system-design fundamentals.
- k8s at scale, Terraform.
- Experience with AI/ML inference services (LLMs, vector DBs).
- Go/Rust for high-perf services.
- Observability: Prometheus, Grafana, OpenTelemetry.

This job was posted by Rimjhim Tripathi from CareerNinja.
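The sub-200 ms p95 target above is a percentile check you can compute directly from raw request timings. A minimal sketch using the nearest-rank percentile definition; the sample latencies and the `meets_slo` name are illustrative, not from this posting:

```python
# Hedged sketch: checking a sub-200 ms p95 latency target from raw timings.
# Uses the nearest-rank percentile; sample values are invented for illustration.
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [120, 95, 180, 190, 130, 110, 150, 90, 175, 140]
p95 = percentile(latencies_ms, 95)
meets_slo = p95 < 200
```

Real monitoring stacks (Prometheus histograms, for instance) estimate percentiles from bucketed counts rather than raw samples, but the contract being checked is the same.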

Posted 3 weeks ago

Apply

5.0 years

50 Lacs

Pune/Pimpri-Chinchwad Area

Remote

Experience: 5.00+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Precanto)
(Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity?
Must-have skills: async workflows, MLOps, Ray Tune, Data Engineering, MLflow, Supervised Learning, Time-Series Forecasting, Docker, Machine Learning, NLP, Python, SQL

About the client: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description (Full-time; Team: Data & ML Engineering)

We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do
- Build and optimize machine learning models, from regression to time-series forecasting.
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker.
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn.
- Design and deploy LLM-powered features and workflows.
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions.
- Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform.

Basic Skills
- Proven ability to work creatively and analytically in a problem-solving environment.
- Excellent communication (written and oral) and interpersonal skills.
- Strong understanding of supervised learning and time-series modeling.
- Experience deploying ML models and building automated training/inference pipelines.
- Ability to work cross-functionally in a collaborative and fast-paced environment.
- Comfortable wearing many hats and owning projects end-to-end.
- Write clean, tested, and scalable Python and SQL code.
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing.

Advanced Skills
- Familiarity with MLOps best practices.
- Prior experience with LLM-based features or production-level NLP.
- Experience with LLMs, vector stores, or prompt engineering.
- Contributions to open-source ML or data tools.

TECH STACK
- Languages: Python, SQL
- Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
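The time-series forecasting skill named in the posting above can be illustrated with one of the simplest baselines, simple exponential smoothing, the kind of reference model a candidate would compare Prophet or pyts output against. The spend series and alpha value below are invented for illustration:

```python
# Hedged sketch: simple exponential smoothing, a baseline one-step-ahead forecaster.
# level_t = alpha * y_t + (1 - alpha) * level_{t-1}; series and alpha are made up.

def exp_smooth_forecast(series: list[float], alpha: float = 0.5) -> float:
    """Return the one-step-ahead forecast (the final smoothed level)."""
    if not series:
        raise ValueError("empty series")
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

spend = [100.0, 102.0, 101.0, 105.0, 107.0]
forecast = exp_smooth_forecast(spend, alpha=0.5)
```

Production forecasting adds trend and seasonality terms (Holt-Winters, Prophet), but the smoothing recursion above is the core idea.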

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

About This Role
As a Staff AI Engineer you will get to play with petabytes of data gathered from a multitude of sources, including Balbix proprietary sensors and 3rd-party threat feeds. You will leverage a variety of AI techniques including deep learning, probabilistic graphical models, graph learning, recommendation systems, reinforcement learning, NLP, etc. And of course, you will be part of a team building a world-class product addressing one of the grand challenges in the technology industry.

DATA SCIENCE AT BALBIX
At Balbix we believe in using the right algorithms and tools to ensure correctness and performance and to deliver an excellent user experience. We draw boldly from the latest in AI/ML research but are unafraid to go beyond Bayesian inference and statistical models if the situation demands it. We are generalists, caring as much about storytelling with data as about bleeding-edge techniques, scalable model training, and deployment. We are building a data science culture with equal emphasis on knowing our data, grokking security first principles, caring about customer needs, explaining our model predictions, deploying at scale, communicating our work, and adapting the latest advances. We look out for each other, enjoy each other's company, and keep an open channel of communication about all things data and non-data.

You Will
- Design and develop an ensemble of classical and deep learning algorithms for modeling complex interactions between people, software, infrastructure, and policies in an enterprise environment.
- Design and implement algorithms for statistical modeling of enterprise cybersecurity risk.
- Apply data mining, AI, and graph analysis techniques to address a variety of problems including modeling, relevance, and recommendation.
- Build production-quality solutions that balance complexity and performance.
- Participate in the engineering life-cycle at Balbix, including designing high-quality ML infrastructure and data pipelines, writing production code, conducting code reviews, and working alongside our infrastructure and reliability teams.
- Drive the architecture and usage of open-source libraries for numerical computation such as TensorFlow, PyTorch, and scikit-learn.

You Are
- Able to take on very complex problems, learn quickly, iterate, and persevere towards a robust solution.
- Product-focused and passionate about building truly usable systems.
- Collaborative and comfortable working across teams including data engineering, front end, product management, and DevOps.
- Responsible, and you like to take ownership of challenging problems.
- A good communicator who facilitates teamwork via good documentation practices.
- Comfortable with ambiguity, and you thrive in designing algorithms for evolving needs.
- Intuitive in using the right type of models to address different product needs.
- Curious about the world and your profession; a constant learner.

You Have
- A Ph.D./M.S. in Computer Science or Electrical Engineering with hands-on software engineering experience.
- 5+ years of experience in the field of Machine Learning and programming in Python.
- Expertise in programming concepts and building large-scale systems.
- Knowledge of state-of-the-art algorithms combined with expertise in statistical analysis and modeling.
- Robust understanding of NLP, probabilistic graphical models, deep learning with graph structures, model explainability, etc.
- Foundational knowledge of probability, statistics, and linear algebra.
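The Bayesian inference the posting mentions can be as small as a conjugate beta-binomial update. A minimal sketch: suppose we model the probability that an asset class is compromised; the uniform prior and the incident counts below are invented purely for illustration.

```python
# Hedged sketch: beta-binomial conjugate update, the simplest form of Bayesian
# inference. Prior and observation counts are made-up illustrative numbers.

def beta_posterior(alpha: float, beta: float, successes: int, failures: int):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta(a+s, b+f)."""
    return alpha + successes, beta + failures

def beta_mean(alpha: float, beta: float) -> float:
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior, then observe 3 incidents across 20 assets.
a, b = beta_posterior(1.0, 1.0, successes=3, failures=17)
posterior_mean = beta_mean(a, b)
```

The same closed-form update underpins many risk-scoring heuristics before heavier graphical models are brought in.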

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Who you are
You're someone who's already shipped GenAI stuff—even if it was small: a chatbot, a RAG tool, or an agent prototype. You live in Python, LangChain, LlamaIndex, Hugging Face, and vector DBs like FAISS or Milvus. You know your way around prompts—noisy chains, rerankers, retrievals. You've deployed models or services on Azure/AWS/GCP, wrapped them into FastAPI endpoints, and maybe even wired a bit of Terraform/ARM. You're not building from spreadsheets; you're iterating with real data, debugging hallucinations, and swapping out embeddings in production. You can read blog posts and paper intros, follow new methods like QLoRA, and build on them. You're fine with ambiguity and startup chaos—no strict specs, no roadmap, just a mission. You work in async Slack, ask quick questions, push code that works, and help teammates stay afloat. You're not satisfied with just getting things done—you want GenAI to feel reliable, usable, and maybe even fun.

What you'll actually do
- Build real GenAI features: agentic chatbots for document lookup, conversation assistants, or knowledge workflows.
- Design and implement RAG systems: data ingestion, embeddings, vector indexing, retrievals, and prompt pipelines.
- Write inference APIs in FastAPI that work with vector stores and cloud LLM endpoints.
- Containerize services with Docker, push to Azure/AWS/GCP, wire basic CI/CD, monitor latency and faulty responses, and iterate fast.
- Experiment with LoRA/QLoRA fine-tuning on small LLMs, test prompt variants, and measure output quality.
- Collaborate with DevOps to ensure deployment reliability, QA to make tests more robust, and frontend folks to shape UX.
- Share your work in quick "demo & dish" sessions: what's working, what's broken, what you're trying next.
- Tweak embeddings, watch logs, and improve pipelines one experiment at a time.
- Help write internal docs or "how-tos" so others can reuse your work.
Skills and knowledge
- Solid experience in Python backend development (FastAPI/Django).
- Experienced with LLM frameworks: LangChain, LlamaIndex, CrewAI, or similar.
- Comfortable with vector databases: FAISS, Pinecone, Milvus.
- Able to fine-tune models using PEFT/LoRA/QLoRA.
- Knowledge of embeddings, retrieval systems, RAG pipelines, and prompt engineering.
- Familiar with cloud deployment and infra-as-code (Azure, AWS, GCP with Docker/K8s, Terraform/ARM).
- Good understanding of monitoring and observability—tracking response latency, hallucinations, and costs.
- Able to read current research, try prototypes, and apply them pragmatically.
- Works well in minimal-structure startups; self-driven, team-minded, a proactive communicator.
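The retrieval step of the RAG pipelines described above reduces, at small scale, to brute-force cosine similarity over document embeddings, which is roughly what a flat vector index does before you need FAISS or Milvus. The tiny hand-made vectors and document names below are illustrative; a real system would embed text with a model.

```python
# Hedged sketch: brute-force cosine-similarity retrieval, the small-scale
# equivalent of a flat vector index. Embeddings here are toy hand-made vectors.
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 if either is all-zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query: list[float], docs: dict, k: int = 2) -> list[str]:
    """Return the names of the k documents most similar to the query."""
    scored = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "returns how-to": [0.8, 0.2, 0.1],
}
hits = top_k([1.0, 0.0, 0.0], docs, k=2)
```

Rerankers and prompt assembly sit downstream of this step; swapping the linear scan for an ANN index changes the performance, not the contract.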

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Delhi, India

On-site

What is Hunch?
Hunch is a dating app that helps you land a date without swiping like a junkie. Designed for people tired of mindless swiping and commodified matchmaking, Hunch leverages a powerful AI engine to help users find meaningful connections by focusing on personality over just looks. With 2M+ downloads and a 4.4-star rating, Hunch is going viral in the US by challenging the swipe-left/right norm of traditional apps. Hunch is a Series A funded ($23 million) startup building the future of social discovery in a post-AI world. Link to our fundraising announcement.

Key Offerings Of Hunch
- Swipe Less, Vibe More: Curated profiles, cutting the clutter of endless swiping.
- Personality Matters: Opinion-based, belief-based, and thought-based compatibility rather than just focusing on looks.
- Every Match, Verified: No bots, no catfishing—just real, trustworthy connections.
- Match Scores: Our AI shows compatibility percentages, helping users identify their "100% vibe match."

We're looking for a highly motivated and skilled Data Engineer. You'll design, build, and optimize our robust data infrastructure, develop scalable data pipelines, ensure data quality, and collaborate closely with our machine learning teams. We're looking for someone passionate about data who thrives in a dynamic environment. If you enjoy tackling complex challenges with cutting-edge technologies, we encourage you to apply.

What You'll Do
- Architect & Optimize Data Infrastructure: Design, implement, and maintain highly scalable data infrastructure, including processes for auto-scaling and easy maintainability of our data pipelines.
- Develop & Deploy Data Pipelines: Lead the design, implementation, testing, and deployment of resilient data pipelines that ingest, transform, and process large datasets efficiently.
- Empower ML Workflows: Partner with Machine Learning Engineers to understand their specific data needs.
This includes providing high-quality data for model training, ensuring low-latency data delivery for real-time inference, and ensuring seamless data flow and efficient integration with ML models.
- Ensure Data Integrity: Establish and enforce robust systems and processes for comprehensive data quality assurance, validation, and reliability across the entire data lifecycle.

What You'll Bring
- Experience: A minimum of 2+ years of professional experience in data engineering, with a proven track record of delivering solutions in a production environment.
- Data Storage Expertise: Hands-on experience with relational databases (e.g., PostgreSQL, MySQL, Redshift) and cloud object storage (e.g., S3) is required. Experience with distributed file systems (e.g., HDFS) and NoSQL databases is a plus.
- Big Data Processing: Demonstrated proficiency with big data processing platforms and frameworks such as Hadoop, Spark, Hive, Presto, and Trino.
- Pipeline Orchestration & Messaging: Practical experience with key data pipeline tools, including message queues (e.g., Kafka, Kinesis), workflow orchestrators (e.g., dbt, Airflow), change data capture (e.g., Debezium), and ETL services (e.g., AWS Glue ETL).
- Programming Prowess: Strong programming skills in Python and SQL are essential. Proficiency in at least one JVM-based language (e.g., Java, Scala) is also required.
- ML Acumen: A solid understanding of machine learning workflows, including data preparation and feature engineering concepts.
- Innovation & Agility: You should be a creative problem-solver with a proactive approach to experimenting with new technologies.

What we have to offer
- Competitive financial rewards + annual PLI (Performance Linked Incentives).
- Meritocracy-driven, candid, and diverse culture.
- Employee benefits like medical insurance.
- One annual all-expenses-paid company trip for all employees to bond.
- Although we work from our office in New Delhi, we are flexible in our style and approach.

Life @Hunch
- Work Culture: At Hunch we take our work seriously but don't take ourselves too seriously. Everyone is encouraged to think as owners and not renters, and we prefer to let builders build, empowering people to pursue independent ideas.
- Impact: Your work will shape the future of social engagement and connect people around the world.
- Collaboration: Join a diverse team of creative minds and be part of a supportive community.
- Growth: We invest in your development and provide opportunities for continuous learning.
- Backed by Global Investors: Hunch is a Series A funded startup, backed by Hashed, AlphaWave, Brevan Howard and Polygon Studios.
- Experienced Leadership: Hunch is founded by a trio of industry veterans - Ish Goel (CEO), Nitika Goel (CTO), and Kartic Rakhra (CMO) - serial entrepreneurs whose last exit was Nexus Mutual, a web3 consumer-tech startup.
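The data quality assurance responsibility in the posting above often starts as simple row-level validation before data reaches ML training. A minimal sketch; the field names (`user_id`, `age`, `signup_ts`) and the age bounds are invented for illustration, not Hunch's actual schema.

```python
# Hedged sketch: lightweight row-level data-quality checks of the kind a
# pipeline might run before training. Fields and rules are hypothetical.

REQUIRED = ("user_id", "age", "signup_ts")

def validate_row(row: dict) -> list[str]:
    """Return a list of human-readable issues; an empty list means the row passes."""
    issues = []
    for field in REQUIRED:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    age = row.get("age")
    if isinstance(age, (int, float)) and not (13 <= age <= 120):
        issues.append("age out of range")
    return issues

good = {"user_id": "u1", "age": 29, "signup_ts": "2024-01-01"}
bad = {"user_id": "", "age": 7, "signup_ts": "2024-01-01"}
```

Frameworks like Great Expectations or dbt tests generalize this pattern to whole tables, but the per-rule contract is the same: a check either passes or names what failed.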

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements – viewing them through the lens of business acumen, and converting this knowledge into a data problem. You'll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses. You are an analytical professional who uses statistical methods, machine learning, and programming skills to extract insights and knowledge from data, with the primary goal of solving complex business problems, making predictions, and driving strategic decision-making by uncovering patterns and trends within large datasets. In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modeling techniques, you'll have the opportunity to create models that defy convention – models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally.
Additionally, you're not just building and validating models – you're deploying them as code to applications and processes, ensuring that the models you've selected sustain their business value throughout their lifecycle. Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset – ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation.

Key Responsibilities:
- Problem Framing: Collaborating with stakeholders to understand business problems and translate them into data-driven questions.
- Data Collection and Cleaning: Sourcing, collecting, and cleaning large, often messy, datasets from various sources, preparing them for analysis.
- Exploratory Data Analysis (EDA): Performing initial investigations on data to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations.
- Model Development: Building, training, and validating machine learning models (e.g., regression, classification, clustering, deep learning) to predict outcomes or identify relationships.
- Statistical Analysis: Applying statistical tests and methodologies to draw robust conclusions from data and quantify uncertainty.
- Feature Engineering: Creating new variables or transforming existing ones to improve model performance and provide deeper insights.
- Model Deployment: Working with engineering teams to deploy models into production environments, making them operational for real-time predictions or insights.
- Communication and Storytelling: Presenting complex findings and recommendations clearly and concisely to both technical and non-technical audiences, often through visualizations and narratives.
- Monitoring and Maintenance: Tracking model performance in production and updating models as data patterns evolve or new data becomes available.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise
- 8-10 years of experience as a Data Scientist.
- Programming Languages: Strong proficiency in Python and/or R, with libraries for data manipulation (e.g., Pandas, dplyr), scientific computing (e.g., NumPy), and machine learning (e.g., Scikit-learn, TensorFlow, PyTorch).
- Statistics and Probability: A solid understanding of statistical inference, hypothesis testing, probability distributions, and experimental design.
- Machine Learning: Deep knowledge of various machine learning algorithms, their underlying principles, and when to apply them.
- Database Querying: Proficiency in SQL for extracting and manipulating data from relational databases.
- Data Visualization: Ability to create compelling and informative visualizations using tools like Matplotlib, Seaborn, Plotly, or Tableau.
- Big Data Concepts: Familiarity with concepts and tools for handling large datasets, though often relying on Data Engineers for infrastructure.
- Domain Knowledge: Understanding of the specific industry or business domain to contextualize data and insights.

Preferred Technical And Professional Experience
- Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more.
Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Role
The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business as well as the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data analytics, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives and introduce customizations for routing and matching for products and use cases that go beyond the core Uber capabilities.

What the Candidate Will Do
- Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business.
- Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionize, scale, and deploy models world-wide.
- Use statistical techniques to measure success and develop north-star metrics and KPIs that provide a more rigorous data-driven approach, in close partnership with Product and other subject areas such as engineering, operations and marketing.
- Design experiments and interpret the results to draw detailed and impactful conclusions.
- Collaborate with data scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of data logging and data pipelines.
- Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives.
- Present findings to senior leadership to drive business decisions.

Basic Qualifications
- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields.
- 4+ years of experience as a Data Scientist, Machine Learning Engineer, or in other data science-focused functions.
- Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics.
- Hands-on experience building and deploying ML models.
- Ability to use a language like Python or R to work efficiently at scale with large data sets.
- Significant experience in setting up and evaluating complex experiments.
- Experience with exploratory data analysis, statistical analysis and testing, and model development.
- Knowledge of modern machine learning techniques applicable to marketplaces and platforms.
- Proficiency in one or more of the following technologies: SQL, Spark, Hadoop.

Preferred Qualifications
- Advanced SQL expertise.
- Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways.
- Proven aptitude for data storytelling and root cause analysis using data.
- Advanced understanding of statistics, causal inference, and machine learning.
- Experience designing and analyzing large-scale online experiments.
- Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail.
- Ability to work in a self-guided manner.
- Ability to mentor, coach and develop junior team members.
- Superb communication and organisation skills.
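The experiment design and analysis work described above typically bottoms out in a significance test on conversion rates. A minimal sketch of a two-proportion z-test with a pooled standard error; the counts are invented, and real analyses at this scale also layer on variance-reduction techniques such as CUPED.

```python
# Hedged sketch: two-proportion z-test for an A/B experiment.
# Counts are illustrative; real pipelines add variance reduction and corrections.
import math

def two_prop_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for H0: p_a == p_b, using the pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_prop_ztest(conv_a=1000, n_a=20000, conv_b=1150, n_b=20000)
significant = abs(z) > 1.96  # ~5% two-sided threshold
```

Here treatment lifts conversion from 5.0% to 5.75%, which at these sample sizes clears the 1.96 threshold comfortably.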

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

Remote

We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform sustainability.

Requirements
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Skills
- Languages: Expert in Python; good knowledge of R or Java is a plus.
- ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
- Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
- Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
- Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
- NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
- Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
- Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
- CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
- Strong analytical and systems thinking; able to break down business problems into ML components.
- Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
- Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
- Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.

Benefits
- Competitive compensation and full-time benefits.
- Opportunities for certification and professional growth.
- Flexible work hours and remote work options.
- Inclusive, innovative, and supportive team culture.

(ref:hirist.tech)
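A concrete slice of the object-detection skills listed above: intersection-over-union (IoU) for axis-aligned boxes, the metric behind detection evaluation and non-maximum suppression in detectors like YOLO. The box coordinates are illustrative; boxes are `(x1, y1, x2, y2)`.

```python
# Hedged sketch: IoU for axis-aligned bounding boxes (x1, y1, x2, y2).
# Coordinate values below are made up for illustration.

def iou(a, b) -> float:
    """Intersection area divided by union area of two boxes; 0.0 if disjoint."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

NMS keeps the highest-scoring box and discards any competitor whose IoU with it exceeds a threshold (commonly around 0.5), which is exactly this function applied pairwise.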

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
Edge AI Data Scientists will be responsible for designing, developing, and validating machine learning models—particularly in the domain of computer vision—for deployment on edge devices. This role involves working with data from cameras, sensors, and embedded platforms to enable real-time intelligence for applications such as object detection, activity recognition, and visual anomaly detection. The position requires close collaboration with embedded systems and AI engineers to ensure models are lightweight, efficient, and hardware-compatible.

Candidate Requirements
Education: Bachelor's or Master's degree in Data Science, Computer Science, or a related field.
Experience:
- 3+ years of experience in data science or machine learning with a strong focus on computer vision.
- Experience in developing models for edge deployment and real-time inference.
- Familiarity with video/image datasets and deep learning model training.
Skills:
- Proficiency in Python and libraries such as OpenCV, PyTorch, TensorFlow, and FastAI.
- Experience with model optimization techniques (quantization, pruning, etc.) for edge devices.
- Hands-on experience with deployment tools like TensorFlow Lite, ONNX, or OpenVINO.
- Strong understanding of computer vision techniques (e.g., object detection, segmentation, tracking).
- Familiarity with edge hardware platforms (e.g., NVIDIA Jetson, ARM Cortex, Google Coral).
- Experience in processing data from camera feeds or embedded image sensors.
- Strong problem-solving skills and ability to work collaboratively with cross-functional teams.
Your Profile
Responsibilities
Develop and train computer vision models tailored for constrained edge environments.
Analyze camera and sensor data to extract insights and build vision-based ML pipelines.
Optimize model architecture and performance for real-time inference on edge hardware.
Validate and benchmark model performance on various embedded platforms.
Collaborate with embedded engineers to integrate models into real-world hardware setups.
Stay up-to-date with state-of-the-art computer vision and Edge AI advancements.
Document models, experiments, and deployment configurations.
What You Will Love About Working Here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance.
At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
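The model-optimization requirement this role lists (quantization for edge devices, typically exported through tools like TensorFlow Lite or ONNX) can be illustrated with a toolchain-agnostic sketch. This is a minimal pure-Python illustration of symmetric int8 post-training quantization; the weight values are made up for the example and do not come from the posting:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto the integer grid [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original floats from the int8 grid."""
    return [v * scale for v in q]

weights = [0.83, -1.92, 0.07, 1.5, -0.41]  # stand-in for one layer's float32 weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

print(q)                      # small integers: 1 byte each instead of 4
print(max_err <= scale / 2)   # True: rounding error is at most half a quantization step
```

The ~4x memory saving (and faster integer arithmetic on edge hardware) is why quantization is standard for Jetson/Coral-class deployment; production toolchains additionally calibrate activations and may quantize per-channel.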

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: We are seeking a Principal AI Platform Architect—Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.
Roles & Responsibilities:
Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen’s digital-transformation, cloud-modernisation and product-delivery objectives.
Design and evolve foundational platform components—feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles.
Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use-case.
Orchestrate the full delivery pipeline for AI solutions—pilot → regulated validation → production rollout → post-launch monitoring—defining stage-gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements.
Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load-balancing, maintaining <100 ms P95 latency while optimising spend.
Implement robust MLOps and release-management practices (CI/CD for models, blue-green & canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability.
Embed responsible-AI and security-by-design controls—data privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails.
Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥50%.
Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources.
Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements.
Establish observability frameworks—metrics, distributed tracing, drift detection, SLA dashboards—to keep models performant, reliable and compliant at scale.
Track emerging technologies and regulations (serverless GPUs, confidential compute, EU AI Act) and integrate innovations that maintain Amgen’s leadership in enterprise AI.
Must-Have Skills:
5-7 years in AI/ML, data platforms or enterprise software.
Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.
Good-to-Have Skills:
Experience in the biotechnology or pharma industry is a big plus.
Published thought-leadership or conference talks on enterprise GenAI adoption.
Master’s degree in Computer Science and/or Data Science.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.
Education and Professional Certifications
Master’s degree with 10-14+ years of experience in Computer Science, IT or related field, OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail-oriented.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
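Observability for deployed models, including the drift detection this role names, is commonly implemented by comparing a feature's live distribution against its training-time baseline. A minimal sketch using the Population Stability Index (PSI); the bin edges, sample values, and the conventional 0.1/0.25 thresholds are illustrative assumptions, not a description of Amgen's platform:

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between two samples over shared bin edges."""
    def frac(sample, lo, hi):
        n = sum(1 for x in sample if lo <= x < hi)
        return max(n / len(sample), 1e-6)   # floor avoids log(0) on empty bins
    score = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7]   # training-time feature values
live_ok  = [0.12, 0.22, 0.28, 0.33, 0.38, 0.41, 0.47, 0.52, 0.58, 0.68]
shifted  = [0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 0.92, 0.95, 0.97, 0.99]

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
print(psi(baseline, live_ok, edges) < 0.1)   # True: stable traffic, low PSI
print(psi(baseline, shifted, edges) > 0.25)  # True: distribution shift, flag for review
```

A platform would compute this per feature on a schedule and wire scores above a threshold into the SLA dashboards and alerting the posting describes.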

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: We are seeking a Principal AI Platform Architect—Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.
Roles & Responsibilities:
Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen’s digital-transformation and cloud-modernisation objectives.
Design and evolve foundational platform components—feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles.
Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving.
Embed responsible-AI and security-by-design controls—data privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails.
Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources.
Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend.
Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs.
Establish observability frameworks—metrics, distributed tracing, drift detection—to ensure models remain performant, reliable and compliant in production.
Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like the EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI.
Must-Have Skills:
5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.
Good-to-Have Skills:
Experience in the biotechnology or pharma industry is a big plus.
Published thought-leadership or conference talks on enterprise GenAI adoption.
Master’s degree in Computer Science, Data Science, or an MBA with an AI focus.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.
Education and Professional Certifications
Master’s degree with 10-14+ years of experience in Computer Science, IT or related field, OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail-oriented.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
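The RAG pipelines and vector databases this posting asks for reduce, at their core, to embedding documents and retrieving the nearest neighbours of a query embedding at answer time. A minimal sketch of that retrieval step using cosine similarity; the 3-d "embeddings" and passages are toy assumptions for illustration (a real pipeline would use an embedding model and a vector store, not hand-written vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k passages whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-d "embeddings"; an embedding model would produce these in practice.
index = [
    ("dosing guidance for drug A",  [0.9, 0.1, 0.0]),
    ("manufacturing batch records", [0.0, 0.2, 0.9]),
    ("drug A safety profile",       [0.8, 0.3, 0.1]),
]

query = [0.85, 0.2, 0.05]  # stand-in embedding for a question about drug A
context = retrieve(query, index, k=2)
print(context)  # the two drug-A passages, ready to prepend to an LLM prompt
```

Vector databases replace the linear scan in `retrieve` with approximate nearest-neighbour indexes so the same lookup stays fast over millions of passages.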

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Delhi

On-site

Overview:
The Clinton Health Access Initiative, Inc. (CHAI) is a global health organization committed to the mission of saving lives and reducing the burden of disease in low- and middle-income countries. We work at the invitation of governments to support them and the private sector to create and sustain high-quality health systems.
CHAI was founded in 2002 in response to the HIV/AIDS epidemic with the goal of dramatically reducing the price of life-saving drugs and increasing access to these medicines in the countries with the highest burden of the disease. Over the following two decades, CHAI has expanded its focus. Today, along with HIV, we work in conjunction with our partners to prevent and treat infectious diseases such as COVID-19, malaria, tuberculosis, and hepatitis. Our work has also expanded into cancer, diabetes, hypertension, and other non-communicable diseases, and we work to accelerate the rollout of lifesaving vaccines, reduce maternal and child mortality, combat chronic malnutrition, and increase access to assistive technology. We are investing in horizontal approaches to strengthen health systems through programs in human resources for health, digital health, and health financing. With each new and innovative program, our strategy is grounded in maximizing sustainable impact at scale, ensuring that governments lead the solutions, that programs are designed to scale nationally, and that learnings are shared globally.
At CHAI, our people are our greatest asset, and none of this work would be possible without their talent, time, dedication and passion for our mission and values. We are a highly diverse team of enthusiastic individuals across 40 countries with a broad range of skillsets and life experiences. CHAI is deeply grounded in the countries we work in, with the majority of our staff based in programming countries.
In India, CHAI works in partnership with its India-registered affiliate William J Clinton Foundation (WJCF) under the guidance of the Ministry of Health and Family Welfare (MoHFW) at the central and state levels on an array of high-priority initiatives aimed at improving health outcomes. Currently, WJCF supports government partners across projects to expand access to quality care and treatment for HIV/AIDS, hepatitis, tuberculosis, COVID-19, common cancers, sexual and reproductive health, immunization, and essential medicines.
Learn more about our exciting work: http://www.clintonhealthaccess.org
Program Overview:
India continues to bear the world’s highest burden of tuberculosis (TB) in terms of absolute numbers of incident TB cases. The National TB Prevalence Survey (2019-21) revealed a significant 31.3% (estimated) crude prevalence of TB infection (TBI) among India’s population aged 15 years and above. Moreover, India has set an ambitious target of eliminating TB by 2025. The National Strategic Plan 2017–2025 outlines a critical target of initiating 95% of identified/eligible TBI cases on TB Preventive Treatment (TPT) by 2025.
The TB Household Contact Management (TB HCM) project is a pioneering initiative addressing critical gaps in coverage and completion of TPT amongst household contacts of notified drug-sensitive pulmonary TB patients, with a particular focus on under-five (U5) children. Planned to be implemented in Bihar and Uttar Pradesh, this four-year TB HCM project aims to impact over 2.5 million individuals through a community-based service delivery model that leverages community health workers from the National Tuberculosis Elimination Programme (NTEP) and general health systems. As the first large-scale implementation of TPT focused on Universal Health Coverage strategies, the project emphasizes decentralizing and strengthening TB care within general health systems.
Additionally, it incorporates an impact evaluation component, further enhancing its significance in advancing TB prevention and care in alignment with national health priorities and international best practices.
Position Summary:
WJCF seeks a highly motivated, results-oriented Senior Research Associate to support the TB HCM project, reporting to the National Monitoring, Evaluation & Research Manager. The role involves supporting study implementation, coordinating evaluation activities, providing technical input, and contributing to evidence generation to advance TB prevention strategies. The ideal candidate is a strategic thinker with strong leadership, analytical, and problem-solving skills, capable of working independently and collaboratively in a fast-paced, multicultural environment with appropriate guidance and mentorship. The Senior Research Associate will support engagement with government counterparts, donors, and external partners, and work across WJCF/CHAI teams to ensure project success.
Responsibilities:
1. Coordination of external evaluation activities – 40%
Support and coordinate communication with the evaluation agency, ensuring alignment between the evaluation and program implementation, with the objective of ensuring timely information flow regarding any risks to the core elements of the program.
Support fieldwork for the planned RCT embedded within the program, ensuring high-quality data collection training. The candidate will also be expected to establish quality control mechanisms, implement them, and provide regular updates to the core national and global teams.
Proactively identify and address any challenges affecting the design and implementation of the evaluation.
Serve as the primary day-to-day point of contact for the evaluation agency, managing ongoing coordination activities not explicitly listed above, and ensuring the evaluation and implementation processes remain aligned under the guidance of the senior team.
2.
Technical review and input – 25%
Contribute to the technical review of study protocols, instruments, evaluation design, and analysis plans, in collaboration with the broader technical team.
Support the design, refinement, and implementation of an embedded randomized controlled trial (RCT) and other qualitative components (e.g., process evaluations, qualitative interviews) to assess the impact of the CbHCM model.
Assist with the submission of study tools to the Institutional Review Board (IRB) and other relevant Indian authorities (such as HMSC), as required.
Where needed, analyze quantitative data using Stata or other statistical software. Additionally, the candidate will contribute to the design of qualitative tools and assist in their implementation and analysis, including transcript coding using appropriate qualitative analysis software.
Collaborate with the technical team to respond to donor inquiries related to the impact evaluation and/or data from routine program monitoring.
3. Evidence generation & synthesis of learning – 35%
Conduct primary and secondary research to address learning and evidence gaps in strategically relevant areas of implementation and evaluation.
Support the in-country learning agenda by identifying and addressing evidence gaps for NTEP and CHAI/WJCF through complementary analyses.
Participate in systematic reviews of secondary literature on related themes and maintain a bibliography of key citations using reference management software.
Work closely with the National Monitoring & Evaluation Manager to align evaluation and program monitoring workstreams.
Contribute to synthesizing learnings from implementation and evaluation efforts to inform new ideas and guide intervention design.
Support the development and delivery of learning and dissemination materials, including reports, manuscripts, and other documentation.
Qualifications:
Bachelor’s or Master’s degree in epidemiology, economics, biostatistics, or a related field with a significant focus on quantitative skills (e.g., epidemiology and public/global health) and a strong understanding of inferential statistics.
Minimum 5 years of applied work experience in resource-limited settings and/or a field requiring analytical problem-solving.
Technical Skills:
Strong command of experimental and quasi-experimental study designs and qualitative research methods.
Experience in designing and implementing quantitative models, impact evaluations, and/or qualitative research; fluency in concepts of statistical inference and data analysis.
Strong skills in quantitative modeling, data management, and statistical analysis using software like Stata/R.
Demonstrated experience with data collection workflows and platforms, such as SurveyCTO, Google Sheets, or similar tools.
Demonstrated experience with or involvement in the implementation of RCTs, quasi-experimental, or similar studies in India.
Experience piloting survey instruments, training data collectors, and leading field logistics for large-scale studies.
Stakeholder management and communication:
Ability to communicate complex concepts clearly and support the development of actionable recommendations for a range of audiences including Ministries of Health, global donors and policy makers.
Strong interpersonal skills, and an ability to navigate multi-cultural, multi-stakeholder situations collaboratively to achieve intended results.
Organization, time management and self-motivation:
Exceptional organizational skills and ability to approach complex problems in a structured manner.
Strong ability to work independently, to develop and execute work-plans, and to achieve specified goals with limited guidance and oversight in a fast-paced environment.
Demonstrated capacity to thrive in a work environment that requires effective balancing across parallel workstreams and deliverables.
Willingness to travel (at least 25%) to Bihar and Uttar Pradesh.
Last Date to Apply: 27th July, 2025

Posted 3 weeks ago

Apply