Jobs
Interviews

1773 Inference Jobs - Page 32

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

60 - 65 Lacs

Nashik, Maharashtra, India

Remote

Experience: 6.00+ years
Salary: INR 6,000,000 - 6,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for: Technical Lead for Evolphin AI-Driven MAM

At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows, from ingest to archive, with precision, performance, and AI-powered search. We're now entering a major modernization phase, and we're looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.

What you'll own
- Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all with aggressive timelines

Skills & Experience We Expect
We're looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases (a minimal sketch follows this listing)
- Experience tuning vector indexes for performance, memory footprint, and recall
- Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience managing time-coded metadata (e.g., subtitles, AI tags, shot changes) in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
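For illustration only (not part of the posting): a minimal sketch of the hybrid structured + semantic asset search described above, assuming FAISS for the vector index and a placeholder embed() function; field names such as asset_type are hypothetical.

```python
import numpy as np
import faiss  # pip install faiss-cpu

def embed(texts):
    """Placeholder embedding function; a real pipeline would call an
    embedding model (e.g., a sentence-transformer or hosted embedding API)."""
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

# Toy asset catalog: structured metadata plus a text summary per asset.
assets = [
    {"id": "a1", "asset_type": "video", "summary": "sunset beach drone footage"},
    {"id": "a2", "asset_type": "image", "summary": "studio portrait with softbox lighting"},
    {"id": "a3", "asset_type": "video", "summary": "night city traffic timelapse"},
]

# Semantic index: normalized embeddings + inner product = cosine similarity.
vectors = embed([a["summary"] for a in assets])
faiss.normalize_L2(vectors)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

def hybrid_search(query, asset_type=None, k=2):
    """Structured pre-filter on metadata, then semantic ranking on the rest."""
    allowed = {i for i, a in enumerate(assets) if asset_type in (None, a["asset_type"])}
    q = embed([query])
    faiss.normalize_L2(q)
    scores, ids = index.search(q, len(assets))  # rank everything, filter after
    hits = [(assets[i], float(s)) for s, i in zip(scores[0], ids[0]) if i in allowed]
    return hits[:k]

print(hybrid_search("aerial shot of a beach", asset_type="video"))
```

A production system would replace the post-filter with filtered or partitioned indexes, or a database-native hybrid query, but the flow (filter, embed, rank) is the same.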

Posted 1 month ago

Apply

6.0 years

60 - 65 Lacs

Kanpur, Uttar Pradesh, India

Remote

Technical Lead for Evolphin AI-Driven MAM at Crop.Photo (via Uplers). Remote, full-time permanent role; requirements and application steps are identical to the first Crop.Photo listing on this page.

Posted 1 month ago

Apply

6.0 years

60 - 65 Lacs

Kochi, Kerala, India

Remote

Technical Lead for Evolphin AI-Driven MAM at Crop.Photo (via Uplers). Remote, full-time permanent role; requirements and application steps are identical to the first Crop.Photo listing on this page.

Posted 1 month ago

Apply

6.0 years

60 - 65 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Technical Lead for Evolphin AI-Driven MAM at Crop.Photo (via Uplers). Remote, full-time permanent role; requirements and application steps are identical to the first Crop.Photo listing on this page.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Data Science @ Dream Sports: Data Science at Dream Sports comprises seasoned data scientists striving to drive value with data across all our initiatives. The team has developed state-of-the-art solutions for forecasting and optimization, data-driven risk prevention systems, causal inference, and recommender systems to enhance product and user experience. We are a team of Machine Learning Scientists and Research Scientists with a portfolio of projects ranging from production ML systems that we conceptualize, build, support, and innovate upon, to longer-term research projects with potential game-changing impact for Dream Sports. This is a unique opportunity for highly motivated candidates to work on real-world applications of machine learning in the sports industry, with access to state-of-the-art resources, infrastructure, and data streaming from 250 million users, while contributing to our collaboration with the Columbia Dream Sports AI Innovation Center.

Your Role:
- Executing clean experiments rigorously against pertinent performance guardrails and analysing performance metrics to infer actionable findings (a minimal guardrail sketch follows this listing)
- Developing and maintaining services with proactive monitoring, incorporating industry best practices for optimal service quality and risk mitigation
- Breaking down complex projects into actionable tasks that adhere to set management practices and ensure stakeholder visibility
- Managing the end-to-end lifecycle of large-scale ML projects: data preparation, model training, deployment, monitoring, and upgrading of experiments
- Leveraging a strong foundation in ML, statistics, and deep learning to implement research-backed techniques for model development
- Staying abreast of the best ML practices and industry developments to mentor and guide team members

Qualifiers:
- 3-5 years of experience in building, deploying, and maintaining ML solutions
- Extensive experience with Python, SQL, TensorFlow/PyTorch, and at least one distributed data framework (Spark/Ray/Dask)
- Working knowledge of machine learning, probability & statistics, and deep learning fundamentals
- Experience in designing end-to-end machine learning systems that work at scale

About Dream Sports: Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/ Dream11 is the world's largest fantasy sports platform, with 230 million users playing fantasy cricket, football, basketball & hockey. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
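As an aside (not part of the posting), a minimal sketch of the kind of experiment guardrail check the role describes: compare a candidate model's metrics against a control and block rollout if any guardrail regresses beyond a tolerance. Metric names and thresholds are illustrative assumptions.

```python
# Illustrative guardrail check: block a rollout if any watched metric
# regresses by more than its allowed tolerance relative to the control.
GUARDRAILS = {            # metric -> maximum allowed relative drop
    "auc": 0.005,
    "recall_at_10": 0.01,
}

def passes_guardrails(control: dict, candidate: dict) -> bool:
    failures = []
    for metric, max_drop in GUARDRAILS.items():
        drop = (control[metric] - candidate[metric]) / control[metric]
        if drop > max_drop:
            failures.append(f"{metric} dropped {drop:.2%} (limit {max_drop:.2%})")
    for failure in failures:
        print("GUARDRAIL FAILED:", failure)
    return not failures

control = {"auc": 0.812, "recall_at_10": 0.44}
candidate = {"auc": 0.815, "recall_at_10": 0.43}
print("rollout allowed:", passes_guardrails(control, candidate))
```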

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Team And Our Scope
We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen, enterprise-grade solutions. Our team is built on a startup mindset: bias to action, fast iterations, and a ruthless focus on value delivery. We're not only shaping the future of AI in business, we're shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions.

Role Overview
As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions. Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You'll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction.

Key Responsibilities
- Build next-gen GenAI pipelines: design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing.
- Prototype, validate, deploy: rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns.
- Ingest, enrich, embed: construct ingestion workflows, including OCR, chunking, embeddings, and indexing into vector databases, to unlock unstructured data (a minimal sketch follows this listing).
- Integrate seamlessly: embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability.
- Explore hybrid strategies: combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures.
- Drive impact with product thinking: collaborate with product managers and UX designers to shape user-centric solutions and understand business context.
- Ensure enterprise-grade quality: deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient, especially in regulated environments.

What Makes You a Fit
Must-Have Technical Expertise
- Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration.
- Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents).
- Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility.
- Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight.
- Understanding of vector search optimization and retrieval tuning.
- Exposure to multi-modal models.

Nice-To-Have Qualifications
- Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare).
- Familiarity with the Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments.
- Experience with agentic AI architectures, tools like AutoGen, or prompt chaining frameworks.
- Familiarity with data privacy and auditability principles in enterprise AI.

Bonus: You Think Like a Product Manager
While this role is technical at its core, we highly value candidates who are curious about how AI features become products. If you're excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery, we'll give you space to grow into it. This is a role where engineering and product are not silos. If you're keen to move in that direction, we'll mentor and support your evolution.

Why Join Us?
You'll be part of a team that's pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You'll prototype fast, deliver often, and see your work shape real-world outcomes, whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership, this role is a launchpad. Swiss Re is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134317
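Purely as an illustration of the "ingest, enrich, embed" workflow described in this listing: a minimal sketch that chunks a document, embeds each chunk, and retrieves the most similar chunks for a query. The embed() function is a stand-in for a real embedding model, and the in-memory arrays stand in for a vector database.

```python
import numpy as np

def embed(texts):
    # Stand-in for a real embedding model (e.g., an Azure OpenAI embedding endpoint).
    rng = np.random.default_rng(42)
    return rng.random((len(texts), 256))

def chunk(text, size=200, overlap=50):
    """Fixed-size character chunking with overlap; real pipelines often chunk
    by tokens, sentences, or document structure instead."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def ingest(doc_text):
    chunks = chunk(doc_text)
    vectors = embed(chunks)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    return chunks, vectors        # a vector DB would persist these with metadata

def retrieve(query, chunks, vectors, k=3):
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    scores = vectors @ q          # cosine similarity on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in top]

chunks, vectors = ingest("A long policy document would go here... " * 20)
for text, score in retrieve("What does the policy say about claims?", chunks, vectors):
    print(round(score, 3), text[:60])
```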

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption
With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Train, fine-tune, and deploy Large Language Models (LLMs) to solve real-world problems effectively (a minimal fine-tuning sketch follows this listing)
- Design, implement, and optimize AI/ML pipelines to support model development, evaluation, and deployment
- Collaborate with architects, software engineers, and product teams to integrate AI solutions into applications
- Ensure model performance, scalability, and efficiency through continuous experimentation and improvement
- Work on LLM optimization techniques, including Retrieval-Augmented Generation (RAG), prompt tuning, etc.
- Manage and automate the infrastructure necessary for AI/ML workloads while keeping the focus on model development
- Work with DevOps teams to ensure smooth deployment and monitoring of AI models in production
- Stay updated on the latest advancements in AI, LLMs, and deep learning to drive innovation

What do you bring to the table?
- Strong experience in training, fine-tuning, and deploying LLMs using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers
- Hands-on experience in developing and optimizing AI/ML pipelines, from data preprocessing to model inference
- Solid programming skills in Python and familiarity with libraries like NumPy, Pandas, and scikit-learn
- Strong understanding of tokenization, embeddings, and prompt engineering for LLM-based applications
- Hands-on experience in building and optimizing RAG pipelines using vector databases (FAISS, Pinecone, Weaviate, or ChromaDB)
- Experience with cloud-based AI infrastructure (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes)
- Experience in model monitoring, A/B testing, and performance optimization in a production environment
- Familiarity with MLOps best practices and tools (Kubeflow, MLflow, or similar)
- Ability to balance hands-on AI development with necessary infrastructure management
- Strong problem-solving skills, teamwork, and a passion for building AI-driven solutions
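For illustration (not part of the posting): a minimal causal-LM fine-tuning sketch using Hugging Face Transformers, roughly the kind of workflow this role describes. The model name, the tiny in-memory dataset, and the hyperparameters are placeholder assumptions, and exact TrainingArguments options vary across library versions.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; a real project might fine-tune LLaMA, Mistral, etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny in-memory dataset standing in for curated domain text.
raw = Dataset.from_dict({"text": [
    "Variant NM_000546.6:c.743G>A is classified as pathogenic.",
    "The assay was validated on 120 clinical samples.",
]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
trainer.save_model("out/final")
```

In practice the same skeleton is usually extended with parameter-efficient fine-tuning (e.g., LoRA adapters), evaluation sets, and experiment tracking before anything is deployed.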

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

Client Type: US
Client Location: Remote

About the Role
We're creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.
This role is an excellent fit for professionals with deep experience in AI/ML and Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.

Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:
- Google Cloud Platform (GCP): Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment); Cloud Functions and Cloud Run (for inference endpoints); BigQuery and Cloud Storage (for handling large image-text datasets); AI Platform Notebooks or Colab Pro.
- Google DeepMind Technologies: JAX and Haiku (for neural network modeling and research-grade experimentation); DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations); RLax or TF-Agents (for building and modifying RL pipelines).
- AI/ML & Multimodal Tooling: Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting); TensorFlow 2.x and PyTorch (for model interoperability); Label Studio and Cloud Vision API (for annotation and image-text preprocessing).
- Data Science & MLOps: DVC or MLflow (for dataset and model versioning); Apache Beam or Dataflow (for processing multimodal input streams); TensorBoard or Weights & Biases (for visualization).
- Content Authoring & Collaboration: GitHub or Cloud Source Repositories; Google Docs, Sheets, and Slides; screen recording tools like Loom or OBS Studio.

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot); a minimal multi-turn sketch follows this listing.
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
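To make the multi-turn, agent-like pattern above concrete, here is a minimal sketch assuming the google-generativeai Python SDK; the model name, API-key handling, and exact method behavior vary by SDK version, so treat this as an outline rather than reference code for the course.

```python
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # in practice, load this from a secret manager

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
chat = model.start_chat(history=[])  # the SDK tracks multi-turn history for us

def ask(question: str) -> str:
    """One conversational turn; earlier turns remain available in chat.history."""
    response = chat.send_message(question)
    return response.text

print(ask("In one sentence, what is retrieval-augmented generation?"))
print(ask("Now give a concrete example of when you would use it."))

# Application-level state (user profile, tool results, retrieved chunks) would be
# stored separately, e.g. in Firestore, and injected into each send_message call.
```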

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana

Remote

Data Engineer II
Hyderabad, Telangana, India (+2 more locations)
Date posted: Jun 18, 2025
Job number: 1829143
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Data Engineering
Employment type: Full-Time

Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky's-the-limit thinking in a cloud-enabled world. Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.

Within Azure Data, the Microsoft Fabric platform team builds and maintains the operating system and provides customers a unified data stack to run an entire data estate. The platform provides a unified experience and unified governance, and enables a unified business model and a unified architecture. The Fabric Data Analytics, Insights, and Curation team is leading the way in understanding the Microsoft Fabric composite services and empowering our strategic business leaders. We work with very large and fast-arriving data and transform it into trustworthy insights. We build and manage pipelines, transformations, platforms, models, and much more that empowers the Fabric product. As an Engineer on our team, your core function will be Data Engineering, with opportunities in Analytics, Science, Software Engineering, DevOps, and Cloud Systems. You will be working alongside other Engineers, Scientists, Product, Architecture, and Visionaries, bringing forth the next generation of data democratization products. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Qualifications
Required/Minimum Qualifications
- Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 4+ years' experience in business analytics, data science, software development, data modeling, or data engineering work; OR Master's Degree in a related field AND 2+ years' experience in business analytics, data science, software development, or data engineering work; OR equivalent experience
- 2+ years of experience in software or data engineering, with proven proficiency in C#, Java, or equivalent
- 2+ years in one scripting language for data retrieval and manipulation (e.g., SQL or KQL)
- 2+ years of experience with ETL and cloud data technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Logic Apps, Azure Functions, Azure Data Explorer, and Power BI or equivalent platforms

Preferred/Additional Qualifications
- 1+ years of demonstrated experience implementing data governance practices, including data access, security and privacy controls, and monitoring to comply with regulatory standards.

Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Equal Opportunity Employer (EOP)
#azdat #azuredata #fabricdata #dataintegration #azure #synapse #databases #analytics #science

Responsibilities
- You will develop and maintain data pipelines, including solutions for data collection, management, transformation, and usage, ensuring accurate data ingestion and readiness for downstream analysis, visualization, and AI model training.
- You will review, design, and implement end-to-end software life cycles, encompassing design, development, CI/CD, service reliability, recoverability, and participation in agile development practices, including on-call rotation.
- You will review and write code to implement performance monitoring protocols across data pipelines, building visualizations and aggregations to monitor pipeline health (a minimal sketch follows this listing). You will also implement solutions and self-healing processes that minimize points of failure across multiple product features.
- You will anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies.
- You will plan, implement, and enforce security and access control measures to protect sensitive resources and data.
- You will perform database administration tasks, including maintenance and performance monitoring.
- You will collaborate with Product Managers, Data and Applied Scientists, Software and Quality Engineers, and other stakeholders to understand data requirements and deliver phased solutions that meet test and quality programs' data needs and support AI model training and inference.
- You will become an SME on our team's products and provide input for strategic vision.
- You will champion process, engineering, architecture, and product best practices in the team.
- You will work with other team Seniors and Principals to establish best practices in our organization.
- Embody our culture and values.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
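As an illustrative aside, a minimal sketch of the pipeline-health aggregation described in the responsibilities above, using pandas over a hypothetical run-log table (pipeline, status, duration_minutes); column names and the failure-rate threshold are assumptions.

```python
import pandas as pd

# Hypothetical pipeline run log; in practice this would come from telemetry
# (e.g., a Kusto/Log Analytics export or a warehouse table).
runs = pd.DataFrame({
    "pipeline": ["ingest", "ingest", "curate", "curate", "publish"],
    "status":   ["Succeeded", "Failed", "Succeeded", "Succeeded", "Failed"],
    "duration_minutes": [12.0, 15.5, 8.2, 7.9, 3.1],
})

health = (runs.assign(failed=runs["status"].eq("Failed"))
              .groupby("pipeline")
              .agg(run_count=("status", "size"),
                   failure_rate=("failed", "mean"),
                   p95_duration=("duration_minutes", lambda s: s.quantile(0.95))))

# Flag pipelines breaching an illustrative 20% failure-rate threshold.
print(health)
print("unhealthy:", list(health.index[health["failure_rate"] > 0.20]))
```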

Posted 1 month ago

Apply

0.0 - 40.0 years

0 Lacs

Gurugram, Haryana

On-site

Additional Locations: India-Haryana, Gurgaon
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you in advancing your skills and career. Here, you'll be supported in progressing, whatever your ambitions.

GenAI / AI Platform Architect
Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. We are seeking an experienced GenAI / AI Platform Architect to define, build, and continuously improve a secure, governable, and enterprise-grade Generative AI platform that underpins copilots, RAG search, intelligent document processing, agentic workflows, and other high-value use cases.

Your responsibilities will include:
- Own the reference architecture for GenAI: LLM hosting, vector DBs, orchestration layer, real-time inference, and evaluation pipelines.
- Design and govern Retrieval-Augmented Generation (RAG) pipelines (embedding generation, indexing, hybrid retrieval, and prompt assembly) for authoritative, auditable answers.
- Select and integrate toolchains (LangChain, LangGraph, LlamaIndex, MLflow, Kubeflow, Airflow) and ensure compatibility with cloud GenAI services (Azure OpenAI, Amazon Bedrock, Vertex AI).
- Implement MLOps/LLMOps: automated CI/CD for model fine-tuning, evaluation, rollback, and blue-green deployments; integrate model-performance monitoring and drift detection.
- Embed "shift-left" security and responsible-AI guardrails (PII redaction, model-output moderation, lineage logging, bias checks, and policy-based access controls), working closely with CISO and compliance teams.
- Optimize cost-to-serve through dynamic model routing, context-window compression, and GPU/Inferentia auto-scaling; publish charge-back dashboards for business units (a minimal routing sketch follows this listing).
- Mentor solution teams on prompt engineering, agentic patterns (ReAct, CrewAI), and multi-modal model integration (vision, structured data).
- Establish evaluation frameworks (e.g., LangSmith, custom BLEU/ROUGE/BERTScore pipelines, human-in-the-loop) to track relevance, hallucination, toxicity, latency, and carbon footprint.
- Report KPIs (MTTR for model incidents, adoption growth, cost per 1k tokens) and iterate on the roadmap in partnership with product, data, and infrastructure leads.

Required Qualifications:
- 10+ years designing cloud-native platforms or AI/ML systems; 3+ years leading large-scale GenAI, LLM, or RAG initiatives.
- Deep knowledge of LLM internals, fine-tuning, RLHF, and agentic orchestration patterns (ReAct, Chain-of-Thought, LangGraph).
- Proven delivery on vector-database architectures (Pinecone, Weaviate, FAISS, pgvector, Milvus) and semantic search optimization.
- Mastery of Python and API engineering; hands-on with LangChain, LlamaIndex, FastAPI, GraphQL, gRPC.
- Strong background in security, governance, and observability across distributed AI services (IAM, KMS, audit trails, OpenTelemetry).

Preferred Qualifications:
- Certifications: AWS Certified GenAI Engineer - Bedrock or Microsoft Azure AI Engineer Associate.
- Experience orchestrating multimodal models (images, video, audio) and streaming inference on edge devices or medical sensors.
- Published contributions to open-source GenAI frameworks or white papers on responsible-AI design.
- Familiarity with FDA or HIPAA compliance for AI solutions in healthcare.
- Demonstrated ability to influence executive stakeholders and lead cross-functional tiger teams in a fast-moving AI market.

Requisition ID: 608452
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most, united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do, as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
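For illustration only: a minimal sketch of the dynamic model routing mentioned above, sending short, low-complexity prompts to a cheaper model and everything else to a larger one. Model names, token prices, and the complexity heuristic are assumptions, not a description of Boston Scientific's implementation.

```python
# Illustrative cost-aware router: cheap model for simple prompts, large model otherwise.
MODELS = {
    "small": {"name": "small-llm",    "usd_per_1k_tokens": 0.0002},
    "large": {"name": "frontier-llm", "usd_per_1k_tokens": 0.0100},
}

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)          # rough 4-characters-per-token heuristic

def route(prompt: str, needs_reasoning: bool = False) -> dict:
    """Pick a model tier from prompt length and a caller-supplied reasoning flag."""
    tier = "large" if needs_reasoning or estimate_tokens(prompt) > 1000 else "small"
    model = MODELS[tier]
    cost = estimate_tokens(prompt) / 1000 * model["usd_per_1k_tokens"]
    return {"model": model["name"], "estimated_usd": round(cost, 6)}

print(route("Summarize this paragraph for a clinician."))
print(route("Draft a multi-step remediation plan for the incident.", needs_reasoning=True))
```

Real routers usually add per-request logging of token counts and cost so the charge-back dashboards mentioned above can be built from the same data.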

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bangalore, Karnataka Factspan Overview: Factspan is a pure play analytics organization. We partner with fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations and implement new processes that help them succeed. With offices in Seattle, Washington and Bangalore, India; we use a global delivery model to service our customers. Our customers include industry leaders from Retail, Financial Services, Hospitality, and technology sectors. Job Description As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client—enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You’ll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI workloads. Responsibilities: Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices. Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns. Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open- source stacks (e.g., LLaMA, Mistral, Falcon). Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in- context learning methods. Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking. Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure). Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance. Mentor engineering teams in code best practices, versioning, and LLM lifecycle maintenance. 2nd Floor, South Block, Vaishnavi Tech Park, Ambalipura, Sarjapur Marathahalli Rd, Bengaluru, Karnataka 560102 info@factspan.com Key Skills: Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model- serving platforms. Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization Experience managing compute, memory, and storage requirements of LLMs across GCP, Azure, or AWS environments Qualifications & Experience: 5+ years in ML/AI engineering with at least 2 years working on LLMs or NLP-heavy systems. 
- Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind.
- Clear communicator who collaborates well with business, data science, and DevOps teams.
- Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing.
- Curious and future-facing, always exploring new techniques and pushing the envelope on GenAI innovation.
- Passionate about data ethics, responsible AI, and building inclusive systems that scale.

Why Should You Apply?
- Grow with Us: Be part of a hyper-growth startup with ample opportunities to learn and innovate.
- People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
- Buoyant Culture: Regular activities like Fun Fridays, sports tournaments, and trekking; you can suggest a few more after joining us.
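As referenced in the responsibilities above, here is a minimal, hedged sketch of a Hugging Face fine-tuning pipeline of the kind this posting describes, using LoRA adapters via the peft library. The base model name, the train.jsonl dataset with a "text" column, and all hyperparameters are illustrative assumptions, not details from the posting.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"   # illustrative; swap for the approved base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach lightweight LoRA adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical instruction dataset with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
model.save_pretrained("out/lora-adapter")   # saves adapter weights only

In a production setting this sketch would sit behind the experiment tracking, evaluation, and registry steps the posting lists; it is only meant to show the shape of the pipeline.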

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka

Factspan Overview: Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations, and implement new processes that help them succeed. With offices in Seattle, Washington, and Bengaluru, India, we use a global delivery model to service our customers. Our customers include industry leaders from the retail, financial services, hospitality, and technology sectors.

Job Overview (Primary Skills - GCP, Kubeflow, Python, Vertex AI)
As a Machine Learning Engineer, you will oversee the entire lifecycle of machine learning models. Your role involves collaborating with cross-functional teams, including data scientists, data engineers, software engineers, and DevOps specialists, to bridge the gap between experimental model development and reliable production systems. You will be responsible for automating ML pipelines, optimizing model training and serving, ensuring model governance, and maintaining the stability of deployed systems. This position requires a blend of experience in software engineering, data engineering, and machine learning systems, along with a strong understanding of DevOps practices to enable faster experimentation, consistent performance, and scalable ML operations.

What You Will Do
- Work with data science leadership and stakeholders to understand business objectives, map the scope of work, and support colleagues in achieving technical deliverables.
- Invest in strong relationships with colleagues and build a successful followership around a common goal.
- Build and optimize ML pipelines for feature engineering, model training, and inference (a minimal pipeline sketch follows this posting).
- Develop low-latency, high-throughput model endpoints for distributed environments.
- Maintain cloud infrastructure for ML workloads, including GPUs/TPUs, across platforms like GCP, AWS, or Azure.
- Troubleshoot, debug, and validate ML systems for performance and reliability.
- Write and maintain automated tests (unit and integration).
- Support discussions with Data Engineers on data collection, storage, and retrieval processes. Collaborate with Data Governance to identify data issues and propose data cleansing or enhancement solutions.
- Drive continuous improvement efforts in enhancing performance and providing increased functionality, including developing processes for automation.

Skills You Will Need
- Group Work Lead: Ability to lead portions of pod iteratives; can clearly communicate priorities and play an effective technical support role for colleagues.
- Communication: Maintaining timely communication with management and stakeholders on project progress, issues, and concerns. Developing effective communication plans tailored to diverse audiences.
- Consultative Mindset: Go beyond just providing analytics and actively engage stakeholders to understand their challenges and goals. Ability to take a business-first viewpoint when developing solutions.
- Cloud & MLOps: Expertise in managing cloud-based ML infrastructures (GCP, AWS, or Azure), coupled with DevOps practices, ensures seamless model deployment, scalability, and system reliability. This includes containerization, CI/CD pipelines, and infrastructure-as-code tools. Proficiency in programming languages such as Python, SQL, and Java.

Who You Are
- 5+ years of industry experience working with machine learning tools and technologies.
- Familiarity with agile development frameworks and collaboration tools (e.g., JIRA, Confluence).
- Experience using TensorFlow, PyTorch, scikit-learn, Kubeflow, pandas, and NumPy; frameworks like Ray and Dask preferred.
- Expertise in data engineering, object-oriented programming, and familiarity with microservices and cloud technologies.
- An ongoing learner who seeks out emerging technology and can influence others to think innovatively.
- Gets energized by fast-paced environments and capable of supporting multiple projects: can identify primary and secondary objectives, prioritize time, and communicate timelines to team members.
- Dedicated to fulfilling the ideals of diversity, inclusion, and respect that the client aspires to achieve every day in every way.
- Regularly required to sit, talk, hear; use hands/fingers to touch, handle, and feel. Occasionally required to move about the workplace and reach with hands and arms. Requires close vision.

If you are passionate about leveraging technology to drive business innovation, possess excellent problem-solving skills, and thrive in a dynamic environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of data analytics and making a meaningful impact in the industry.

Why Should You Apply?
- People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
- Buoyant Culture: Embark on an exciting journey with a team that innovates solutions, tackles challenges head-on, and crafts a vibrant work environment.
- Grow with Us: Be part of a hyper-growth startup with great opportunities to learn and innovate.
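Since this posting centers on GCP, Kubeflow, and Vertex AI, here is a minimal, hedged sketch of a Kubeflow Pipelines (KFP v2) definition that could be compiled and submitted to Vertex AI Pipelines. The bucket path, component logic, and model choice are illustrative assumptions, not the team's actual pipeline.

from kfp import compiler, dsl


@dsl.component(base_image="python:3.11",
               packages_to_install=["scikit-learn", "pandas", "gcsfs"])
def train_model(train_csv: str, n_estimators: int) -> float:
    """Train a toy model and return a cross-validation score (illustrative only)."""
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv(train_csv)                      # hypothetical dataset with a "label" column
    X, y = df.drop(columns=["label"]), df["label"]
    model = RandomForestClassifier(n_estimators=n_estimators)
    return float(cross_val_score(model, X, y, cv=3).mean())


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(train_csv: str = "gs://example-bucket/train.csv",
                      n_estimators: int = 100):
    train_model(train_csv=train_csv, n_estimators=n_estimators)


if __name__ == "__main__":
    # The compiled spec can then be submitted to Vertex AI Pipelines,
    # for example via google.cloud.aiplatform.PipelineJob.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")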

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Location: IN - Hyderabad Telangana
Goodyear Talent Acquisition Representative: Ashutosh Panda
Sponsorship Available: No
Relocation Assistance Available: No

Job Description

Roles and Responsibilities:
- Analyze, design, and develop new processes, programs, and configuration, taking into account the complex inter-relationships of system-wide components.
- Provide system-wide support and maintenance for a complex system or business process.
- Maintain and modify existing processes, programs, and configuration through use of current IT toolsets.
- Troubleshoot, investigate, and persist: develop solutions to problems with unknown causes where precedents do not exist, by applying logic and inference with persistence and experience to see the problem through to resolution.
- Confer with the stakeholder community on problem determination. Make joint analysis decisions on cause and correction methods.
- Perform tasks (as necessary) to ensure data integrity and system stability.
- Complete life-cycle testing (unit and integration) of all work processes (including cross-platform interaction).
- Create applications and databases with a main focus on Data Collection Systems, supporting Analysis, Data Capture, Design Tools, Library Functions, Reporting, Request Systems, and Specification Systems used in the Tire Development Process.

Knowledge, Skills, Abilities:
- Developing an understanding of skills needed in other disciplines, of a second business process area, and of basic Cost/Benefit Analysis methods.
- 3+ years of strong development experience with Java and Spring Boot.
- 3+ years of strong experience working in a cloud-based environment (AWS, event-driven architecture).
- Strong experience working with microservices and SQL Server.
- Good to have some knowledge of the Salesforce application.
- Basic organizational, communication, and time management skills.
- Participate as an active team member (effective listening and collaboration skills).
- Achieve all IT objectives through use of approved standards and guidelines.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate

#LI-Hybrid

Posted 1 month ago

Apply

8.0 years

0 Lacs

Madhavaram, Tamil Nadu, India

On-site

Job Title: Delivery Excellence Operations Manager
Location: Chennai / Kolkata
Experience Required: 8-14+ years in BPO operations with a strong focus on process improvement and transformation

Job Description
We are seeking a dynamic and experienced Delivery Excellence Operations Manager to join our team in Chennai or Kolkata. This role is pivotal in driving operational excellence and continuous improvement initiatives across our global BPO engagements. The ideal candidate will have a proven track record of leading Lean Six Sigma projects, delivering impactful results through transformation strategies, and leveraging automation technologies.

Key Responsibilities
- Lead and implement Continuous Improvement (CI) initiatives across assigned engagements, fostering a culture of operational excellence.
- Deploy and mentor Lean Six Sigma (LSS) projects with a focus on digital transformation and Robotic Process Automation (RPA).
- Drive the adoption of Quality Management Systems (QMS) to standardize best-in-class processes.
- Conduct process assessments, identify improvement opportunities, and lead ideation-to-implementation cycles.
- Promote global collaboration by sharing innovations, new methodologies, and benchmarks across centers.
- Design and maintain Balanced Scorecards and leadership dashboards for performance reporting.
- Support training initiatives to strengthen the organization's DNA in Lean and Six Sigma practices.
- Collaborate with teams to adopt emerging technologies such as AI, chatbots, process mining, and cloud-based analytics solutions.
- Provide consulting support for Big Data Analytics and help shape cloud computing strategies.

Qualifications & Skills
- Lean Six Sigma certification is required; Black Belt (BB) preferred (internal or external certification).
- Must have led at least one high-impact BB project (e.g., FTE savings, revenue impact, or significant dollar savings via DMAIC), along with 4-5 other improvement projects.
- Strong data analysis skills including statistical inference and use of tools such as Minitab, R, Python, or SAS (a small illustration follows this posting).
- Hands-on experience in CSAT improvement, AHT reduction, and TAT optimization projects.
- Excellent understanding of RPA tools such as UiPath, Blue Prism, Automation Anywhere, and basic exposure to AI technologies.
- Proficiency in dashboard and reporting tools like Power BI, Tableau, or QlikView.
- Understanding of AGILE project management methodologies is a plus.
- Prior experience in conducting training sessions/workshops for Lean Six Sigma and transformation initiatives.

Preferred Background
- 8-14+ years of experience in the BPO industry, with strong exposure to delivery excellence functions.
- Demonstrated ability to lead transformation efforts with measurable business outcomes.
- Experience with cloud-based services, AI integration, and modern automation tools.
- Project leadership experience rather than merely supporting roles in LSS projects.

Join us in shaping the future of BPO delivery through innovation, transformation, and excellence. (ref:iimjobs.com)
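As a hedged illustration of the statistical inference this posting lists (shown here in Python rather than Minitab or SAS), a Welch's t-test comparing average handle time before and after a process change. The sample data and the 5% significance threshold are assumptions made for the example only.

import numpy as np
from scipy import stats

# Hypothetical AHT samples (seconds) from before and after an improvement project.
rng = np.random.default_rng(42)
aht_before = rng.normal(loc=410, scale=60, size=200)
aht_after = rng.normal(loc=385, scale=55, size=200)

# Welch's t-test: does the post-change mean differ from the baseline?
t_stat, p_value = stats.ttest_ind(aht_before, aht_after, equal_var=False)

reduction = aht_before.mean() - aht_after.mean()
print(f"Mean AHT reduction: {reduction:.1f}s, t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; gather more data before claiming impact.")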

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

About the Role
You'll join a small, fast team turning cutting-edge AI research into shippable products across text, vision, and multimodal domains. One sprint you'll be distilling an LLM for WhatsApp chat-ops; the next you'll be converting CAD drawings to BOM stories, or training a computer-vision model that flags onsite safety risks. You own the model life-cycle end-to-end: data prep ➞ fine-tune/distil ➞ evaluate ➞ deploy ➞ monitor.

Key Responsibilities
Model Engineering
• Fine-tune and quantise open-weight LLMs (Llama 3, Mistral, Gemma) and SLMs for low-latency edge inference.
• Train or adapt computer-vision models (YOLO, Segment Anything, SAM-DINO) to detect site hazards, drawings anomalies, or asset states.
Multimodal Pipelines
• Build retrieval-augmented-generation (RAG) stacks: loaders → vector DB (FAISS / OpenSearch) → ranking prompts (a minimal retrieval sketch follows this posting).
• Combine vision + language outputs into single "scene → story" responses for dashboards and WhatsApp bots.
Serving & MLOps
• Package models as Docker images, SageMaker endpoints, or ONNX edge bundles; expose FastAPI/gRPC handlers with auth, rate-limit, telemetry.
• Automate CI/CD: GitHub Actions → Terraform → blue-green deploys.
Evaluation & Guardrails
• Design automatic eval harnesses (BLEU, BERTScore, CLIP similarity, toxicity & bias checks).
• Monitor drift, hallucination, latency; implement rollback triggers.
Enablement & Storytelling
• Write prompt playbooks & model cards so other teams can reuse your work.
• Run internal workshops: "From design drawing to narrative" / "LLM safety by example".

Required Skills & Experience
• 3+ yrs ML/NLP/CV in production; at least 1 yr hands-on with Generative AI.
• Strong Python (FastAPI, Pydantic, asyncio) and HuggingFace Transformers or Diffusers.
• Experience with minimal-footprint models (LoRA, QLoRA, GGUF, INT-4) and vector search.
• Comfortable on AWS/GCP/Azure for GPU instances, serverless endpoints, IaC.
• Solid grasp of evaluation/guardrail frameworks (HELM, PromptLayer, Guardrails-AI, Triton metrics).

Bonus Points
• Built a RAG or function-calling agent used by 500+ users.
• Prior CV pipeline (object-detection, segmentation) or speech-to-text real-time project.
• Live examples of creative prompt engineering or story-generation.
• Familiarity with LangChain, LlamaIndex, or BentoML.

Why You'll Love It
• Multidomain playground: text, vision, storytelling, decision-support.
• Tech freedom: pick the right model & stack; justify it; ship it.
• Remote-first: work anywhere ±4 hrs of IST; quarterly hack-weeks in Hyderabad.
• Top-quartile pay: base + milestone bonus + conference stipend.

How to Apply
Send a resume and link to GitHub / HF / Kaggle showcasing LLM or CV work. Include a 200-word note describing your favourite prompt or model tweak and the impact it had. Short-listed candidates complete a practical take-home (fine-tune tiny model, build RAG or vision demo, brief write-up) and a 45-min technical chat.

We hire builders, not resume keywords. Show us you can ship AI that works in the real world, and explain it clearly, and you're in.
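A minimal, hedged sketch of the loaders → vector DB → ranking flow mentioned above, using sentence-transformers embeddings and a FAISS index. The toy documents, the embedding model name, and the prompt format are illustrative assumptions rather than this team's actual stack.

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for loaded document chunks.
chunks = [
    "Crane operators must wear harnesses above 2 metres.",
    "Daily toolbox talks are logged in the site safety register.",
    "BOM line items reference drawing revision codes.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")     # assumed embedding model
vectors = embedder.encode(chunks, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(vectors)                            # normalise so inner product = cosine

index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, idx = index.search(q, k)
    return [chunks[i] for i in idx[0]]

question = "What height requires a harness?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # the assembled prompt would then be sent to the chosen LLM for generation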

Posted 1 month ago

Apply

3.0 - 4.0 years

0 Lacs

India

On-site

Note: Kindly note only the below qualification will be considered for this position.
Mandatory educational qualification: Bachelor's in engineering (specifically Computer Science) from a premier institute in India (IITs or NITs or IIITs)
Bonus educational qualification: Masters (MTech) in Computer Science from IISc or IITs.

Role Description
We are currently looking for bright deep learning talent with 3-4 years of industry experience (1.5-2 years in practical deep learning industry projects), working on problems related to the Video AI space. A Deep Learning engineer gets exposed to building solutions related to human activity analysis in videos:
1. Reads research papers to understand the latest developments in activity recognition and multi-camera object tracking problems.
2. Develops production-quality code to convert the research to usable features on https://deeplabel.app.
3. Gets to work on Video Language model architectures.
4. Along with strong research skills, good coding skills and knowledge of data structures and algorithms are required.

Qualifications
- Strong coding skills in Python.
- Thorough hands-on knowledge of deep learning frameworks: PyTorch, ONNX (a minimal export sketch follows this posting).
- Excellent understanding and experience of deploying data pipelines for Computer Vision projects.
- Good working knowledge of troubleshooting, fixing, and patching Linux system-level problems (we expect engineers to set up their own workstations and install and troubleshoot CUDA and OpenCV2 problems).
- Good understanding of deep learning concepts (model parameters, tuning of models, optimizers, learning rates, attention mechanisms, masking, etc.).
- Thorough understanding of deep learning implementations of activity detection or object detection.
- Ability to read research papers and implement new approaches for activity detection and object detection.
- Knowledge of deployment tools like Triton Inference Server or Ray is a plus.

In addition to the above, we need a few key personality attributes:
- Willingness to learn and try till you succeed.
- Curiosity to learn and experiment.
- Ability to work with full stack engineers of the AI platform team, to deploy your new innovations.
- Good communication skills.
- Ability to take ownership of the respective modules for the whole lifecycle till deployment.

Company Description
Drillo.AI specializes in delivering tailored AI solutions that empower small and medium-sized businesses (SMBs) to innovate and grow. Our mission is to make advanced technology accessible, helping SMBs streamline operations, enhance decision-making, and drive sustainable success. With a deep understanding of the unique challenges faced by smaller enterprises, we provide scalable, cost-effective AI strategies to unlock new opportunities. By working closely with our clients, we help elevate their businesses to the next level.
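A minimal, hedged sketch of exporting a PyTorch vision model to ONNX and checking it with ONNX Runtime, the kind of workflow implied by the PyTorch/ONNX requirement above. The model choice, file name, and input shape are assumptions made purely for illustration.

import numpy as np
import onnxruntime as ort
import torch
import torchvision

# Any trained detector or classifier could be exported; ResNet-18 keeps the example small.
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)

# Sanity check: ONNX Runtime output should match PyTorch within tolerance.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
onnx_out = session.run(None, {"input": dummy.numpy()})[0]
with torch.no_grad():
    torch_out = model(dummy).numpy()
print("max abs diff:", float(np.abs(onnx_out - torch_out).max()))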

Posted 1 month ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Description
Job Title: Junior Data Scientist
Location: Bangalore
Reporting to: Senior Manager – Analytics

Purpose of the role
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.

In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of:
- LLM-based frameworks, tools, and technologies
- Cloud-native technologies and solutions
- Microservices-based software architecture and design patterns

As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.
Key tasks & accountabilities

Large Language Models (LLM):
- Experience with LangChain, LangGraph
- Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler

Multi-modal Retrieval-Augmented Generation (RAG):
- Expertise in multi-modal AI systems (text, images, audio, video)
- Designing and optimizing chunking strategies and clustering for large data processing

Streaming & Real-time Processing:
- Experience in audio/video streaming and real-time data pipelines
- Low-latency inference and deployment architectures

NL2SQL:
- Natural language-driven SQL generation for databases
- Experience with natural language interfaces to databases and query optimization

API Development:
- Building scalable APIs with FastAPI for AI model serving (a minimal serving sketch follows this posting)

Containerization & Orchestration:
- Proficient with Docker for containerized AI services
- Experience with orchestration tools for deploying and managing services

Data Processing & Pipelines:
- Experience with chunking strategies for efficient document processing
- Building data pipelines to handle large-scale data for AI model training and inference

AI Frameworks & Tools:
- Experience with AI/ML frameworks like TensorFlow, PyTorch
- Proficiency in LangChain, LangGraph, and other LLM-related technologies

Prompt Engineering:
- Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM-as-a-judge, and self-reflection prompting
- Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy
- Strong understanding of context window management and optimizing prompts for performance and efficiency

Qualifications, Experience, Skills

Level of educational attainment required (1 or more of the following):
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.

Previous work experience required:
- Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.

Technical skills required:
- Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

And above all of this, an undying love for beer! We dream big to create a future with more cheer.
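A minimal, hedged sketch of the FastAPI-based model serving mentioned above. The generate_answer function is a hypothetical stand-in for whatever LLM call (an Azure OpenAI deployment, a local model, etc.) the real service would wrap; the route names and schemas are assumptions for the example.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="llm-serving-demo")


class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


class GenerateResponse(BaseModel):
    completion: str


def generate_answer(prompt: str, max_tokens: int) -> str:
    # Hypothetical placeholder: in a real service this would call the deployed
    # LLM (for example an Azure OpenAI deployment) and respect max_tokens.
    return f"[stub completion for: {prompt[:40]}...]"


@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    if not req.prompt.strip():
        raise HTTPException(status_code=400, detail="prompt must not be empty")
    return GenerateResponse(completion=generate_answer(req.prompt, req.max_tokens))

# Run locally with: uvicorn app:app --reload   (assuming this file is saved as app.py)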

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale.

If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Hyderaba-ProductManager

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Category: AI/ML
Job Type: Full Time
Job Location: Bengaluru, Mangalore
Experience: 4-8 Years
Skills: AI, AWS/Azure/GCP, Azure ML, C, computer vision, data analytics, data modeling, data visualization, deep learning, descriptive analytics, GenAI, image processing, Java, LLM models, ML, ONNX, predictive analytics, Python, R, regression/classification models, SageMaker, SQL, TensorFlow

Position Overview
We are looking for an experienced AI/ML Engineer to join our team in Bengaluru. The ideal candidate will bring a deep understanding of machine learning, artificial intelligence, and big data technologies, with proven expertise in developing scalable AI/ML solutions. You will lead technical efforts, mentor team members, and collaborate with cross-functional teams to design, develop, and deploy cutting-edge AI/ML applications.

Job Details
Job Category: AI/ML Engineer. Job Type: Full-Time. Job Location: Bengaluru. Experience Required: 4-8 Years.

About Us
We are a multi-award-winning creative engineering company. Since 2011, we have worked with our customers as a design and technology enablement partner, guiding them on their digital transformation journeys.

Roles And Responsibilities
- Design, develop, and deploy deep learning models for object classification, detection, and segmentation using CNNs and Transfer Learning.
- Implement image preprocessing and advanced computer vision pipelines.
- Optimize deep learning models using pruning, quantization, and ONNX for deployment on edge devices (a minimal quantization sketch follows this posting).
- Work with PyTorch, TensorFlow, and ONNX frameworks to develop and convert models.
- Accelerate model inference using GPU programming with CUDA and cuDNN.
- Port and test models on embedded and edge hardware platforms (Orin, Jetson, Hailo).
- Conduct research and experiments to evaluate and integrate GenAI technologies in computer vision tasks.
- Explore and implement cloud-based AI workflows, particularly using AWS/Azure AI/ML services.
- Collaborate with cross-functional teams for data analytics, data processing, and large-scale model training.

Required Skills
- Strong programming experience in Python.
- Solid background in deep learning, CNNs, transfer learning, and machine learning basics.
- Expertise in object detection, classification, and segmentation.
- Proficiency with PyTorch, TensorFlow, and ONNX.
- Experience with GPU acceleration (CUDA, cuDNN).
- Hands-on knowledge of model optimization (pruning, quantization).
- Experience deploying models to edge devices (e.g., Jetson, mobile, Orin, Hailo).
- Understanding of image processing techniques.
- Familiarity with data pipelines, data preprocessing, and data analytics.
- Willingness to explore and contribute to Generative AI and cloud-based AI solutions.
- Good problem-solving and communication skills.

Preferred (Nice-to-Have)
- Experience with C/C++.
- Familiarity with AWS Cloud AI/ML tools (e.g., SageMaker, Rekognition).
- Exposure to GenAI frameworks like OpenAI, Stable Diffusion, etc.
- Knowledge of real-time deployment systems and streaming analytics.

Qualifications
Graduation/Post-graduation in Computers, Engineering, or Statistics from a reputed institute.

What We Offer
- Competitive salary and benefits package.
- Opportunity to work in a dynamic and innovative environment.
- Professional development and learning opportunities.

Visit us on: CodeCraft Technologies LinkedIn | CodeCraft Technologies Instagram
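A minimal, hedged sketch of the ONNX-based model optimization this posting mentions, applying post-training dynamic quantization with ONNX Runtime. The file names are assumptions, and real edge targets (Jetson, Orin, Hailo) would typically add the vendor's own toolchain on top of a step like this.

import os

from onnxruntime.quantization import QuantType, quantize_dynamic

# Assumes an exported FP32 model (e.g., from torch.onnx.export) already exists on disk.
src, dst = "model.onnx", "model.int8.onnx"

quantize_dynamic(
    model_input=src,
    model_output=dst,
    weight_type=QuantType.QInt8,   # quantize weights to signed 8-bit integers
)

print(f"{src}: {os.path.getsize(src) / 1e6:.1f} MB -> {dst}: {os.path.getsize(dst) / 1e6:.1f} MB")
# Accuracy should be re-checked on a validation set before shipping the INT8 model.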

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale.

If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Gurgaon-SeniorProductM

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale.

If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Chennai-SeniorProductM

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale.

If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Mumbai-ProductManager

Posted 1 month ago

Apply

3.0 years

0 Lacs

Rawatsar, Rajasthan, India

Remote

AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale.

If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-LK-COUNTRY-SeniorProductM

Posted 1 month ago

Apply

5.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

About Loti AI, Inc
Loti AI specializes in protecting major celebrities, public figures, and corporate IP from online threats, focusing on deepfake and impersonation detection. Founded in 2022, Loti offers likeness protection, content location and removal, and contract enforcement across various online platforms including social media and adult sites. The company's mission is to empower individuals to control their digital identities and privacy effectively.

We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities
- Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
- Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications.
- System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
- Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
- Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications And Skills
- Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. PhD preferred.
- Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact.
- Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

ORANTS AI is a cutting-edge technology company at the forefront of AI and Big Data innovation. We specialize in developing advanced marketing and management platforms, leveraging data mining, data integration, and artificial intelligence to deliver efficient and impactful solutions for our corporate clients. We're a dynamic, remote-first team committed to fostering a collaborative and flexible work environment.

Salary: 40 - 43 LPA + Variable
Location: Remote (India)
Work Schedule: Flexible Working Hours

Join ORANTS AI as a Senior AI Engineer and contribute to the development of our intelligent marketing and management platforms. We're looking for an experienced professional who can design, implement, and deploy advanced AI models and algorithms to solve complex business problems.

Responsibilities:
- Design, develop, and deploy machine learning and deep learning models for various applications (e.g., natural language processing, predictive analytics, recommendation systems).
- Collaborate with data scientists to translate research prototypes into production-ready solutions.
- Optimize AI models for performance, scalability, and efficiency.
- Implement robust data pipelines for training and inference.
- Stay current with the latest advancements in AI/ML research and technologies.
- Participate in the entire AI lifecycle, from data collection and preparation to model deployment and monitoring.

Requirements:
- 5+ years of experience as an AI/ML Engineer.
- Strong proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with various machine learning algorithms and techniques.
- Solid understanding of data structures, algorithms, and software design principles.
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps practices.
- Familiarity with big data technologies (e.g., Spark, Hadoop) is a plus.
- Excellent problem-solving skills and a strong analytical mindset.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies