
1801 Inference Jobs - Page 29

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Location: Remote

Job Summary: We are looking for a highly motivated AI/ML Engineer with hands-on experience in building applications using LangChain and large language models (LLMs). The ideal candidate will have a strong foundation in machine learning, natural language processing (NLP), and prompt engineering, along with a passion for solving real-world problems using cutting-edge AI technologies.

Key Responsibilities:
- Design, develop, and deploy AI-powered applications using LangChain and LLM frameworks.
- Build and optimize prompt chains, memory modules, and tools for conversational agents.
- Integrate third-party APIs, vector databases (such as Pinecone, FAISS, or Weaviate), and knowledge bases into AI workflows.
- Train, fine-tune, or adapt LLMs for custom use cases.
- Collaborate with product, backend, and data science teams to deliver AI-driven solutions.
- Implement evaluation metrics and testing frameworks for model performance and response quality.
- Stay current with advancements in generative AI, LLMs, and the LangChain ecosystem.

Requirements:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
- 3+ years of experience in machine learning, NLP, or related AI/ML fields.
- Proficiency in Python and libraries such as Hugging Face Transformers, LangChain, OpenAI, etc.
- Experience with vector stores and retrieval-augmented generation (RAG).
- Strong understanding of LLM architecture, prompt engineering, and inference pipelines.
- Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
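For readers new to the stack this listing describes, here is a minimal, hedged sketch of a LangChain RAG flow backed by an in-memory FAISS index. The documents, model name, and retriever settings are placeholder assumptions, import paths shift between LangChain releases, and an OPENAI_API_KEY is assumed to be set; this is not the employer's implementation.

```python
# Minimal RAG sketch with LangChain + FAISS (illustrative only).
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import RetrievalQA

docs = [
    "Pinecone, FAISS, and Weaviate are vector databases used for retrieval.",
    "Retrieval-augmented generation grounds LLM answers in indexed documents.",
]

# Embed the documents and build an in-memory FAISS index.
vector_store = FAISS.from_texts(docs, OpenAIEmbeddings())

# Wire a retriever into a question-answering chain backed by an LLM.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),   # model name is an assumption
    retriever=vector_store.as_retriever(search_kwargs={"k": 2}),
)

print(qa_chain.invoke({"query": "What is retrieval-augmented generation?"}))
```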

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who We Are

Ema is building the next generation of AI technology to empower every employee in the enterprise to be their most creative and productive. Our proprietary tech allows enterprises to delegate most repetitive tasks to Ema, the AI employee. We are founded by ex-Google, Coinbase, and Okta executives and serial entrepreneurs. We’ve raised capital from notable investors such as Accel Partners, Naspers, Section32 and a host of prominent Silicon Valley Angels including Sheryl Sandberg (Facebook/Google), Divesh Makan (Iconiq Capital), Jerry Yang (Yahoo), Dustin Moskovitz (Facebook/Asana), David Baszucki (Roblox CEO) and Gokul Rajaram (Doordash, Square, Google). Our team is a powerhouse of talent, comprising engineers from leading tech companies like Google, Microsoft Research, Facebook, Square/Block, and Coinbase. All our team members hail from top-tier educational institutions such as Stanford, MIT, UC Berkeley, CMU and the Indian Institutes of Technology. We’re well funded by the top investors and angels in the world. Ema is based in Silicon Valley and Bangalore, India. This will be a hybrid role where we expect employees to work from the office three days a week.

Who You Are

We're looking for innovative and passionate Machine Learning Engineers to join our team. You are someone who loves solving complex problems, enjoys the challenges of working with huge data sets, and has a knack for turning theoretical concepts into practical, scalable solutions. You are a strong team player but also thrive in autonomous environments where your ideas can make a significant impact. You love utilizing machine learning techniques to push the boundaries of what is possible within the realm of Natural Language Processing, Information Retrieval and related Machine Learning technologies. Most importantly, you are excited to be part of a mission-oriented, high-growth startup that can create a lasting impact.

You Will

Conceptualize, develop, and deploy machine learning models that underpin our NLP, retrieval, ranking, reasoning, dialog and code-generation systems. Implement advanced machine learning algorithms, such as Transformer-based models, reinforcement learning, ensemble learning, and agent-based systems to continually improve the performance of our AI systems. Lead the processing and analysis of large, complex datasets (structured, semi-structured, and unstructured), and use your findings to inform the development of our models. Work across the complete lifecycle of ML model development, including problem definition, data exploration, feature engineering, model training, validation, and deployment. Implement A/B testing and other statistical methods to validate the effectiveness of models. Ensure the integrity and robustness of ML solutions by developing automated testing and validation processes. Clearly communicate the technical workings and benefits of ML models to both technical and non-technical stakeholders, facilitating understanding and adoption.

Ideally, You'd Have

A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field. Proven industry experience in building and deploying production-level machine learning models. Deep understanding and practical experience with NLP techniques and frameworks, including training and inference of large language models. Deep understanding of any of retrieval, ranking, reinforcement learning, or agent-based systems, and experience building them for large-scale systems.
Proficiency in Python and experience with ML libraries such as TensorFlow or PyTorch. Excellent skills in data processing (SQL, ETL, data warehousing) and experience working with large-scale data systems. Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices. Familiarity with cloud platforms like GCP or Azure. Familiarity with the latest industry and academic trends in machine learning and AI, and the ability to apply this knowledge to practical projects. Good understanding of software development principles, data structures, and algorithms. Excellent problem-solving skills, attention to detail, and a strong capacity for logical thinking. The ability to work collaboratively in an extremely fast-paced, startup environment. Ema Unlimited is an equal opportunity employer and is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity, or genetics.
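As an illustration of the A/B validation step this role mentions, here is a hedged sketch of a two-proportion z-test comparing two model variants. The counts and the 5% threshold are made-up placeholders, not Ema's methodology.

```python
# Hedged sketch: comparing two model variants with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

successes = [412, 455]   # e.g., "helpful response" counts per variant (assumed)
trials = [5000, 5000]    # impressions per variant (assumed)

z_stat, p_value = proportions_ztest(count=successes, nobs=trials)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Variant difference is statistically significant at the 5% level.")
```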

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Join Tether and Shape the Future of Digital Finance

At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.

Innovate with Tether

Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.

Why Join Us?

Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future?

About the job:

As a Senior Software Developer, you will be part of the team building desktop and mobile AI apps on top of the new and cutting-edge Tether SDK.

Responsibilities:

AI-Driven Desktop Integration: You will develop and maintain backend services and APIs that power AI-enhanced desktop applications. These services support intelligent features like local inference, contextual awareness, and model interaction, tailored specifically for Electron-based or hybrid clients.
Platform-Aware API Design: Collaborating closely with desktop and React Native teams, you will shape API contracts that reflect platform constraints and performance considerations — ensuring native-like responsiveness and cross-platform consistency.
Scalable Model Invocation & Resource Management: You’ll contribute to backend services that handle concurrent model invocations, manage GPU/CPU workloads, and intelligently queue or throttle requests based on system constraints — ensuring smooth on-device AI performance.

Requirements:

6+ years of experience working with Node.js/JavaScript.
Experience with desktop app development (Electron, Tauri, or similar)
Experience working with React Native or bridging backend systems into mobile/desktop hybrid stacks
Experience optimizing performance and resource usage on desktop/mobile clients
Have actively participated in the development of a complex platform
Ability to quickly learn new technologies
Good understanding of security practices

Nice to have:
Familiarity with secure inter-process communication
Familiarity with peer-to-peer technologies (Kademlia, BitTorrent, libp2p)
C++/Swift/Kotlin skills are a plus
Familiarity with AI/agentic domain applications (RAG, AI SDKs)
Familiarity with real-time data delivery (Node.js or other streaming)
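The queue/throttle pattern described under "Scalable Model Invocation & Resource Management" can be sketched with a simple semaphore. The example below is illustrative only: it is written in Python rather than the Node.js stack this role targets, and the concurrency limit and fake model call are assumptions.

```python
# Illustrative sketch (not Tether's code): throttling concurrent model
# invocations with a semaphore so on-device GPU/CPU work stays bounded.
import asyncio

MAX_CONCURRENT_INFERENCES = 2            # assumed device constraint
_gate = asyncio.Semaphore(MAX_CONCURRENT_INFERENCES)

async def run_inference(prompt: str) -> str:
    async with _gate:                     # excess requests queue here
        await asyncio.sleep(0.1)          # stand-in for a real model call
        return f"response to: {prompt}"

async def main() -> None:
    prompts = [f"request {i}" for i in range(8)]
    results = await asyncio.gather(*(run_inference(p) for p in prompts))
    print(results)

asyncio.run(main())
```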

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Senior Gen AI Engineer

Job Description

Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that inform client decisions. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions, faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs. Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.

Key Responsibilities:

Guide the design of multi-step RAG, agentic, or tool-augmented workflows. Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability). Mentor junior engineers and review GenAI design and implementation plans. Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities. Collaborate with leadership to align GenAI initiatives with product and business strategy. Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch. Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks. Engineer and evaluate prompts, including prompt chaining and output quality assessment. Apply NLP and transformer model expertise to solve language tasks. Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes. Monitor and optimize model and pipeline performance for scalability and efficiency. Communicate technical concepts clearly to cross-functional and non-technical stakeholders.
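For orientation, a hedged sketch of the SageMaker real-time inference pattern this role mentions, deploying a Hugging Face model to an endpoint. The role ARN, model ID, instance type, and version strings are placeholder assumptions and must match a supported container combination; this is not Brightly's deployment code.

```python
# Hedged sketch of a SageMaker real-time inference endpoint for a Hugging Face model.
from sagemaker.huggingface import HuggingFaceModel

model = HuggingFaceModel(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder ARN
    env={"HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
         "HF_TASK": "text-classification"},
    transformers_version="4.37",   # version strings are assumptions
    pytorch_version="2.1",
    py_version="py310",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Deploying models with SageMaker endpoints."}))
predictor.delete_endpoint()        # avoid idle endpoint charges
```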

Posted 1 month ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Gen AI Engineer

Job Description

Brightly Software is seeking a high performer to join our Product team in the role of Gen AI Engineer to drive best-in-class client-facing AI features by creating and delivering insights that inform client decisions. As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights that inform smarter decisions, faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines.

Key Responsibilities:

Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks. Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understand and develop state management workflows using LangGraph. Engineer and evaluate prompts, including prompt chaining and output quality assessment. Apply NLP and transformer model expertise to solve language tasks. Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes. Monitor and optimize model and pipeline performance for scalability and efficiency. Communicate technical concepts clearly to cross-functional and non-technical stakeholders. Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design.

Qualifications:

Bachelor’s degree is required. 2-4 years of total experience with a strong focus on AI and ML, including 1+ years in core GenAI engineering. Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies.
Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes. Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
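As a minimal sketch of the "exposing API endpoints for AI model capabilities" requirement above, here is a FastAPI example. The route path, request schema, and the fake_generate stub are assumptions for illustration, not Brightly's API.

```python
# Minimal sketch of exposing a model behind a FastAPI endpoint (assumed names).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

def fake_generate(prompt: str) -> str:
    # Placeholder for a real LLM/RAG pipeline call.
    return f"echo: {prompt}"

@app.post("/generate")
def generate(req: PromptRequest) -> dict:
    return {"completion": fake_generate(req.prompt)}

# Run with: uvicorn app:app --reload   (assuming this file is named app.py)
```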

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Global Data Insight & Analytics organization is looking for a top-notch Software Engineer who has also got Machine Learning knowledge & Experience to add to our team to drive the next generation of AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often. The team you will be working on is focused on building Mach1ML platform – an AI/ML enablement platform to democratize Machine Learning across Ford enterprise (like OpenAI’s GPT, Facebook’s FBLearner, etc.) to deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on high quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success. Responsibilities What you'll be able to do: As a Software Engineer, you will work on developing features for Mach1ML platform, support customers in model deployment using Mach1ML platform on GCP and On-prem. You will follow Rally to manage your work. You will incorporate an understanding of product functionality and customer perspective for model deployment. You will work on the cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc. Position Responsibilities: Work closely with Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using Ford Agile Framework. Work with Data Scientists and ML engineers to tackle challenging AI problems. Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems. Help innovate by researching state-of-the-art deployment tools and share knowledge with the team. Lead by example in use of Paired Programming for cross training/upskilling, problem solving, and speed to delivery. Leverage latest GCP, CICD, ML technologies Critical Thinking: Able to influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data driven strategies that fuel growth including cost savings, revenue, and profit. Modelling: Assessments, and evaluating impacts of missing/unusable data, design and select features, develop, and implement statistical/predictive models using advanced algorithms on diverse sources of data and testing and validation of models, such as forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc. Analytics: Leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions and translate analytical findings and recommendations into business strategies or engineering decisions - with statistical confidence Data Engineering: Experience with crafting ETL processes to source and link data in preparation for Model/Algorithm development. 
This includes domain expertise of data sets in the environment, third-party data evaluations, and data quality.

Visualization: Build visualizations to connect disparate data, find patterns and tell engaging stories. This includes both scientific and geographic visualization, using applications such as Seaborn, Qlik Sense/Power BI/Tableau/Looker Studio, etc.

Qualifications

Minimum Requirements we seek:
Bachelor’s or Master’s degree in computer science, engineering, or a related field, or a combination of education and equivalent experience.
3+ years of experience in full-stack software development.
3+ years’ experience in cloud technologies and services, preferably GCP.
3+ years of experience practicing statistical methods and their accurate application, e.g., ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multi-variate analysis, neural networks, causal inference, Gaussian regression, etc.
3+ years’ experience with Python, SQL, and BigQuery.
Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
Experience in training, building, and deploying ML and DL models.
Experience with Hugging Face, Chainlit, Streamlit, and React.
Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
Ability to adapt quickly to open-source products and tools that integrate with ML platforms.
Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
Developing and deploying in on-prem and cloud environments (Kubernetes, Tekton, OpenShift, Terraform, Vertex AI).

Our Preferred Requirements:
Master’s degree in computer science, engineering, or a related field, or a combination of education and equivalent experience.
Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy.
Proficiency in programming languages such as Python with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools.
Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP) and ensure scalability, performance, and proactive monitoring.
Implement monitoring solutions to track the performance, health, and security of systems and applications, utilizing tools such as Prometheus, Grafana, and other relevant monitoring tools.
Conduct code reviews and provide constructive feedback to team members on machine learning-related projects.
Knowledge of and experience in agentic-workflow-based application development and DevOps.
Stay up to date with the latest trends and advancements in machine learning and data science.
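To illustrate two of the statistical methods named above (principal component analysis and k-means clustering), here is a hedged sketch on synthetic data; it is not Ford's Mach1ML code, and the data and cluster count are assumptions.

```python
# Illustrative sketch: PCA for dimensionality reduction, then k-means clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of samples with 5 features each.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])

X_2d = PCA(n_components=2).fit_transform(X)                      # reduce to 2 dimensions
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)

print("cluster sizes:", np.bincount(labels))
```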

Posted 1 month ago

Apply

10.0 years

7 - 9 Lacs

Hyderābād

On-site

Summary

At Novartis, we are reimagining medicine by harnessing the power of data and AI. As a Senior Architect – AI Products supporting our Commercial function, you will drive the architectural strategy that enables seamless integration of data and AI products across omnichannel engagement, customer analytics, field operations, and real-world insights. You will work across commercial business domains, data platforms, and AI product teams to design scalable, interoperable, and compliant solutions that maximize the impact of data and advanced analytics on how we engage with healthcare professionals and patients.

About the Role

Position Title: Assoc. Dir. DDIT US&I AI Architect (Commercial)
Location: Hyderabad, India (#LI-Hybrid)

Your responsibilities include but are not limited to:

Commercial Architecture Strategy: Define and drive the reference architecture for commercial data and AI products, ensuring alignment with enterprise standards and business priorities.
Cross-Product Integration: Architect how data products (e.g., HCP 360, engagement data platforms, real-world data assets) connect with AI products (e.g., field force recommendations, predictive models, generative AI copilots) and downstream tools.
Modular, Scalable Design: Ensure architecture promotes reuse, scalability, and interoperability across multiple markets, brands, and data domains within the commercial landscape.
Stakeholder Alignment: Partner with commercial product managers, data science teams, platform engineering, and global/local stakeholders to guide solution design, delivery, and lifecycle evolution.
Data & AI Lifecycle Enablement: Support the full lifecycle of data and AI—from ingestion and transformation to model training, inference, and monitoring—within compliant and secure environments.
Governance & Compliance: Ensure architecture aligns with GxP, data privacy, and commercial compliance requirements (e.g., consent management, data traceability).
Innovation & Optimization: Recommend architectural improvements, modern technologies, and integration patterns to support personalization, omnichannel engagement, segmentation, targeting, and performance analytics.

What you’ll bring to the role:

Proven ability to lead cross-functional architecture efforts across business, data, and technology teams.
Good understanding of security, compliance, and privacy regulations in a commercial pharma setting.
Experience with pharmaceutical commercial ecosystems and data (e.g., IQVIA, Veeva, Symphony).
Familiarity with customer data platforms (CDPs), identity resolution, and marketing automation tools.

Desirable Requirements:

Bachelor's or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
10+ years of experience in enterprise or solution architecture, with significant experience in commercial functions (preferably in pharma or life sciences).
Strong background in data platforms, pipelines, and governance (e.g., Snowflake, Databricks, CDP, Salesforce integration).
Hands-on experience integrating solutions across Martech, CRM, and omnichannel systems.
Strong knowledge of AI/ML architectures, particularly those supporting commercial use cases (recommendation engines, predictive analytics, NLP, LLMs).
Exposure to GenAI applications in commercial (e.g., content generation, intelligent assistants).
Understanding of global-to-local deployment patterns and data sharing requirements.

Commitment to Diversity & Inclusion: Novartis embraces diversity, equal opportunity, and inclusion. We are committed to building diverse teams, representative of the patients and communities we serve, and we strive to create an inclusive workplace that cultivates bold innovation through collaboration and empowers our people to unleash their full potential.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: Operations
Business Unit: CTS
Location: India
Site: Hyderabad (Office)
Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited
Functional Area: Technology Transformation
Job Type: Full time
Employment Type: Regular
Shift Work: No

Posted 1 month ago

Apply

8.0 years

4 - 8 Lacs

Gurgaon

On-site

JOB DESCRIPTION AI Lead - Innovation & Product Development About Us KPMG is a dynamic and forward-thinking Professional service firm committed to leveraging cutting-edge artificial intelligence to create transformative products and solutions. We are building a team of passionate innovators who thrive on solving complex challenges and pushing the boundaries of what's possible with AI. Job Summary We are seeking an experienced and visionary AI Lead to spearhead our AI innovation and product development. The ideal candidate will be a hands-on leader with a strong background in solution architecture, a proven track record in developing AI-based products, and deep expertise in Generative AI applications, including Agentic AI. This role requires a comprehensive understanding of AI models, frameworks, and Agentic AI, along with exposure to GPU infrastructure, to design, build, and deploy scalable AI solutions. You will drive our AI strategy, lead cross-functional teams, and transform complex ideas into tangible, market-ready products, with a strong understanding of enterprise requirements from a professional services perspective. Key Responsibilities Strategic Leadership & Innovation: o Define and drive the AI innovation roadmap, identifying emerging trends in AI, Generative AI and Agentic AI. o Lead research, evaluation, and adoption of new AI models, algorithms, and frameworks. o Foster a culture of continuous learning, experimentation, and innovation. AI Product Development & Management: o Lead end-to-end development of AI-based products, from ideation to deployment and optimization. o Collaborate with product managers, designers, and stakeholders to translate business requirements into viable AI solutions. o Ensure successful delivery of high-quality, scalable, and performant AI products. o Client Engagement & Solutioning: Work with multiple clients to understand requirements, design tailored AI solutions, develop proofs-of-concept (POCs), and ensure successful implementation in a professional services context. Solution Architecture & Design: o Design robust, scalable, and secure AI solution architectures across multi-cloud platforms and on-premise infrastructure. o Provide technical guidance and architectural oversight for AI initiatives, focusing on optimizing for GPU infrastructure . o Evaluate and recommend AI technologies, tools, and infrastructure, including Large Language Models (LLMs) and Small Language Models (SLMs) on cloud and on-premise. Team Leadership & Mentorship: o Lead, mentor, and grow a team of talented AI engineers, data scientists, and machine learning specialists. o Conduct code reviews and ensure adherence to coding standards and architectural principles. o Promote collaboration and knowledge sharing. Technical Expertise & Implementation: o Hands-on experience in developing and deploying Generative AI applications (e.g., LLMs, RAG, GraphRags , image generation, code generation), including Agentic AI and Model Context Protocol (MCP). o Proficiency with Agentic AI orchestration frameworks such as LangChain, LlamaIndex, and/or similar tools. o Experience with leading LLM providers and models including OpenAI, Llama, Anthropic, and others. o Familiarity with AI-powered tools and platforms such as Microsoft Copilot, GitHub Copilot etc. o Strong understanding of various machine learning models (deep learning, supervised, unsupervised, reinforcement learning). 
o Experience with large datasets, ensuring data quality, feature engineering, and efficient data processing for AI model training. o Deep understanding of GPU infrastructure, for AI model training or/ and inference. Qualifications Bachelor's or Master's degree in Computer Science, AI, ML, Data Science, or a related quantitative field. 8+ years in AI/ML development, with at least 3 years in a leadership or lead architect role. Mandatory: Proven experience in leading the development and deployment of AI-based products and solutions. Mandatory: Extensive hands-on experience with Generative AI models and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs, etc.), including practical application of Agentic AI. Proficiency with Agentic AI orchestration frameworks such as LangChain, LlamaIndex, and/or similar tools. Experience in leveraging and integrating various LLM providers and models, including but not limited to OpenAI, Llama, and Anthropic. Familiarity with AI-powered development tools and platforms such as Microsoft Copilot, GitHub Copilot, and other code generation/assistance tools. Strong understanding of solution architecture principles for large-scale AI systems, including multi-cloud platforms and on-premise deployments. Mandatory: Exposure to and understanding of GPU infrastructure, especially NVIDIA, for AI workloads. Experience with Large Language Models (LLMs) and Small Language Models (SLMs) on both cloud and on-premise environments. Proficiency in programming languages such as Python, with strong software engineering fundamentals. Familiarity with MLOps practices, including model versioning, deployment, monitoring, and retraining. Mandatory: Demonstrated industry exposure to professional services, with a proven track record of working with multiple clients to solution requirements, conduct POCs, and understand enterprise-level needs. Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to diverse audiences. Strong problem-solving abilities and a strategic mindset. What We Offer Opportunity to work on cutting-edge AI technologies and shape the future of our products. A collaborative and innovative work environment. Competitive salary and benefits package. Professional development and growth opportunities. The chance to make a significant impact on our business and our customers. If you are a passionate AI leader with a drive for innovation and a desire to build groundbreaking AI products, we encourage you to apply!
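For orientation on the RAG pattern this listing references alongside LangChain and LlamaIndex, here is a hedged LlamaIndex sketch. The import paths assume a recent llama-index release, the "./docs" folder is a placeholder, and an OPENAI_API_KEY is assumed for the default LLM and embeddings; it is not KPMG's solution code.

```python
# Hedged sketch of a simple RAG query with LlamaIndex (illustrative only).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./docs").load_data()   # placeholder folder of files
index = VectorStoreIndex.from_documents(documents)        # embeds and indexes the docs

query_engine = index.as_query_engine()                    # retrieval + generation
print(query_engine.query("Summarise the key points in these documents."))
```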

Posted 1 month ago

Apply

3.0 - 5.0 years

6 - 11 Lacs

Thiruvananthapuram

On-site

Experience Required: 3-5 years of hands-on experience in full-stack development, system design, and supporting AI/ML data-driven solutions in a production environment.

Key Responsibilities

Implementing Technical Designs: Collaborate with architects and senior stakeholders to understand high-level designs and break them down into detailed engineering tasks. Implement system modules and ensure alignment with architectural direction.
Cross-Functional Collaboration: Work closely with software developers, data scientists, and UI/UX teams to translate system requirements into working code. Clearly communicate technical concepts and implementation plans to internal teams.
Stakeholder Support: Participate in discussions with product and client teams to gather requirements. Provide regular updates on development progress and raise flags early to manage expectations.
System Development & Integration: Develop, integrate, and maintain components of AI/ML platforms and data-driven applications. Contribute to scalable, secure, and efficient system components based on guidance from architectural leads.
Issue Resolution: Identify and debug system-level issues, including deployment and performance challenges. Proactively collaborate with DevOps and QA to ensure resolution.
Quality Assurance & Security Compliance: Ensure that implementations meet coding standards, performance benchmarks, and security requirements. Perform unit and integration testing to uphold quality standards.
Agile Execution: Break features into technical tasks, estimate efforts, and deliver components in sprints. Participate in sprint planning, reviews, and retrospectives with a focus on delivering value.
Tool & Framework Proficiency: Use modern tools and frameworks in your daily workflow, including AI/ML libraries, backend APIs, front-end frameworks, databases, and cloud services, contributing to robust, maintainable, and scalable systems.
Continuous Learning & Contribution: Keep up with evolving tech stacks and suggest optimizations or refactoring opportunities. Bring learnings from the industry into internal knowledge-sharing sessions.
Proficiency in using AI copilots for coding: Adaptation to emerging tools and knowledge of prompt engineering to effectively use AI for day-to-day coding needs.

Technical Skills

Hands-on experience with Python-based AI/ML development using libraries such as TensorFlow, PyTorch, scikit-learn, or Keras.
Hands-on exposure to self-hosted or managed LLMs, supporting integration and fine-tuning workflows as per system needs while following architectural blueprints.
Practical implementation of NLP/CV modules using tools like SpaCy, NLTK, Hugging Face Transformers, and OpenCV, contributing to feature extraction, preprocessing, and inference pipelines.
Strong backend experience using Django, Flask, or Node.js, and API development (REST or GraphQL).
Front-end development experience with React, Angular, or Vue.js, with a working understanding of responsive design and state management.
Development and optimization of data storage solutions, using SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra), with hands-on experience configuring indexes, optimizing queries, and using caching tools like Redis and Memcached.
Working knowledge of microservices and serverless patterns, participating in building modular services, integrating event-driven systems, and following best practices shared by architectural leads.
Application of design patterns (e.g., Factory, Singleton, Observer) during implementation to ensure code reusability, scalability, and alignment with architectural standards.
Exposure to big data tools like Apache Spark and Kafka for processing datasets.
Familiarity with ETL workflows and cloud data warehouses, using tools such as Airflow, dbt, BigQuery, or Snowflake.
Understanding of CI/CD, containerization (Docker), IaC (Terraform), and cloud platforms (AWS, GCP, or Azure).
Implementation of cloud security guidelines, including setting up IAM roles, configuring TLS/SSL, and working within secure VPC setups, with support from cloud architects.
Exposure to MLOps practices, model versioning, and deployment pipelines using MLflow, FastAPI, or AWS SageMaker.
Configuration and management of cloud services such as AWS EC2, RDS, S3, Load Balancers, and WAF, supporting scalable infrastructure deployment and reliability engineering efforts.

Personal Attributes

Proactive Execution and Communication: Able to take architectural direction and implement it independently with minimal rework, communicating regularly with stakeholders.
Collaboration: Comfortable working across disciplines with designers, data engineers, and QA teams.
Responsibility: Owns code quality and reliability, especially in production systems.
Problem Solver: Demonstrated ability to debug complex systems and contribute to solutioning.

Preferred (Key) Skills: Python, Django, Django ORM, HTML, CSS, Bootstrap, JavaScript, jQuery, Multi-threading, Multi-processing, Database Design, Database Administration, Cloud Infrastructure, Data Science, self-hosted LLMs

Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, or a related field. Relevant certifications in cloud or machine learning are a plus.

Package: 6-11 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
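As a small illustration of the caching-around-inference pattern mentioned in the technical skills above (Redis, query optimization), here is a hedged sketch; the key scheme, TTL, and the placeholder predict function are assumptions, not this employer's code.

```python
# Illustrative sketch: caching inference results in Redis so repeats skip recomputation.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)   # assumed local Redis

def predict(features: dict) -> dict:
    # Placeholder for a real model call.
    return {"score": sum(features.values()) / max(len(features), 1)}

def cached_predict(features: dict) -> dict:
    key = "pred:" + json.dumps(features, sort_keys=True)  # deterministic cache key
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    result = predict(features)
    cache.setex(key, 300, json.dumps(result))             # 5-minute TTL (assumed)
    return result

print(cached_predict({"age": 30, "income": 50000}))
```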

Posted 1 month ago

Apply

1.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Title: Bioinformatician
Date: 20 Jun 2025
Job Location: Bangalore

About Syngene: Syngene (www.syngeneintl.com) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene has placed safety at par with business performance, with shared responsibility and accountability, including:
Following safety guidelines, procedures, and SOPs, in letter and spirit.
Overall adherence to safe practices and procedures of oneself and the teams aligned.
Contributing to the development of procedures, practices and systems that ensure safe operations and compliance with the company’s integrity and quality standards.
Driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline at the workplace at all times.
Ensuring safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace.
Ensuring all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self.
Compliance with Syngene’s quality standards at all times.
Holding self and their teams accountable for the achievement of safety goals.
Governing and reviewing safety metrics from time to time.

We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights.

Experience: 1-4 years

Core Purpose of the Role: To support data-driven biological research by performing computational analysis of omics data, and generating translational insights through bioinformatics tools and pipelines.

Position Responsibilities:
Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics.
Develop computational workflows to integrate various -omics data to generate inferences and hypotheses for testing.
Conduct differential expression and functional enrichment analyses.
Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation.
Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets.
Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.

Educational Qualifications: Master’s degree in Bioinformatics.

Mandatory Technical Skills:
Programming: Proficiency in Python for data analysis, visualization, and pipeline development.
Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets.
Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing.
Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).
Preferred: Machine learning – familiarity with AI/ML concepts.

Behavioral Skills: Excellent communication skills, objective thinking, problem solving, proactivity.

Syngene Values: All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.

Equal Opportunity Employer: It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
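To illustrate the dimensionality-reduction and visualization skills listed above, here is a hedged sketch applying PCA to a made-up expression-style matrix and plotting the result with seaborn; the data, group labels, and plot styling are placeholders, not Syngene's pipelines.

```python
# Hedged sketch: PCA on a synthetic samples-by-genes matrix, plotted with seaborn.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
expression = pd.DataFrame(rng.normal(size=(20, 500)))   # 20 samples, 500 "genes" (synthetic)
groups = ["control"] * 10 + ["treated"] * 10            # assumed labels

pcs = PCA(n_components=2).fit_transform(expression)
plot_df = pd.DataFrame(pcs, columns=["PC1", "PC2"]).assign(group=groups)

sns.scatterplot(data=plot_df, x="PC1", y="PC2", hue="group")
plt.title("PCA of synthetic expression matrix")
plt.show()
```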

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Role

Have you ever wondered why it's taking so long for an earner to be matched to your trip, how the price is determined for your trip, or how an earner is picked from the many around you? If so, the Mobility Marketplace Health Science team is for you! The Marketplace Health Science team at Uber plays a pivotal role in monitoring marketplace performance, detecting issues in real time, and driving solutions through algorithmic and data-driven interventions. Our work is essential to maintaining Uber's market leadership and delivering reliable experiences to riders and earners. We are seeking experienced data scientists who thrive on solving complex problems at scale. The ideal candidate brings a strong foundation in causal inference, experimentation and analytics, along with a deep understanding of marketplace dynamics and metric trade-offs.

What the Candidate Will Do

Refine ambiguous questions and generate new hypotheses about whether marketplace levers such as Rider and Driver Pricing, Matching, Surge, etc., are functioning appropriately, through a deep understanding of the data, our customers, and our business.
Define how our teams measure success by developing Key Performance Indicators and other user/business metrics, in close partnership with Product and other subject areas such as engineering, operations, and marketing.
Collaborate with applied scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of our models, tables, etc.
Design and develop algorithms to increase the speed and accuracy with which we react to marketplace changes.
Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritization of product, growth, and optimization initiatives.

Basic Qualifications

Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields.
6+ years of experience as a Data Scientist, Product Analyst, Senior Data Analyst, or in other data analysis-focused functions.
Deep understanding of core statistical concepts such as hypothesis testing, regression, and causal inference.
Advanced SQL expertise.
Experience with either Python or R for data analysis.
Knowledge of experimental design and analysis (A/B, switchbacks, synthetic control, diff-in-diff, etc.).
Experience with exploratory data analysis, statistical analysis and testing, and model development.
Proven track record of wrangling large datasets, extracting insights from data, and summarizing learnings/takeaways.
Experience with Excel and some dashboarding/data visualization (e.g., Tableau, Mixpanel, Looker, or similar).

Preferred Qualifications

Proven aptitude for data storytelling and root cause analysis using data.
Excellent communication skills across technical, non-technical, and executive audiences.
Have a growth mindset; love solving ambiguous, ambitious, and impactful problems.
Ability to work in a self-guided manner.
Ability to deliver on tight timelines and prioritize multiple tasks while maintaining quality and detail.
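For readers unfamiliar with one of the causal methods listed above, here is a hedged sketch of a difference-in-differences estimate on a synthetic panel; the column names, data-generating process, and effect size are assumptions for illustration, not Uber's data or methodology.

```python
# Hedged sketch: difference-in-differences on synthetic data with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # unit received the marketplace change?
    "post": rng.integers(0, 2, n),      # observation after the launch date?
})
# Simulate an outcome with a true +0.5 treatment effect plus noise.
df["completed_trips"] = (
    10 + 1.0 * df["treated"] + 0.8 * df["post"]
    + 0.5 * df["treated"] * df["post"] + rng.normal(0, 1, n)
)

model = smf.ols("completed_trips ~ treated * post", data=df).fit()
print("diff-in-diff estimate:", model.params["treated:post"])
```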

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

About Media.net: Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing Programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad tech businesses at scale across ad formats like display, video, mobile, native, as well as search. Media.net’s U.S. HQ is based in New York, and the Global HQ is in Dubai, with office locations and consultant partners across the world. Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services.

Role: Software Development Engineer 2
Location: Hyderabad (Remote)

What is the job like?

As a Developer, you will contribute to the product engineering efforts of multiple areas of our intent discovery secret sauce and work with world-class R&D teams to develop game-changing search and text inference algorithms that will help millions of internet users discover what they are looking for. Our Search/Ad Platform involves myriad technologies, diverse platforms, complex algorithms and the latest application paradigms such as NoSQL databases, eventual consistency, and distributed queues, and is deployed across hundreds of servers in a super-scalable fashion where a 10ms delay in response time could mean the difference between success and failure. In this role, you will manage/work with a team of energized developers and will be responsible for the entire lifecycle of one or more areas, including architecture, design, coding, deployment, etc. We believe that ‘code speaks louder than words’ and, as such, expect everyone at every level in the engineering team to be comfortable with rolling up their sleeves, firing up their favourite IDE and writing clean, testable and well-designed code.

Who should apply for this role?

2–4 years of software development experience in Python
Strong understanding and hands-on experience with deep learning frameworks such as PyTorch, including implementation of real-world projects
Good knowledge of relational and NoSQL databases
Ability to write complex and optimized SQL queries
Solid programming fundamentals, including OOP, Design Patterns, and Data Structures
Excellent analytical, logical, and problem-solving skills
Familiarity with cloud platforms is a plus
Experience with big data technologies like Spark and Hive is a plus
Ability to understand business requirements, work independently, and take full ownership of tasks
Passionate and enthusiastic about building and maintaining large-scale, high-performance systems

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Summary Under limited supervision designs, develops and maintains test procedures, tester hardware and software for electronic circuit board production. ESSENTIAL DUTIES AND RESPONSIBILITIES include the following. Other duties may be assigned. Leadership And Management Responsibilities Recruitment and Retention: Recruit and interview Process Technicians. Communicate criteria to recruiters for Process Technician position candidates. Coach technicians in the interviewing/hiring process. Monitor team member turnover; identify key factors that can be improved; make improvements. Employee and Team Development: Identify individual and team strengths and development needs on an ongoing basis. Create and/or validate training curriculum in area of responsibility. Coach and mentor Process Technicians to deliver excellence to every internal and external customer. Performance Management: Establish clear measurable goals and objectives by which to determine individual and team results (i.e. operational metrics, results against project timelines, training documentation, attendance records, knowledge of operational roles and responsibilities, personal development goals). Solicit ongoing feedback from Assistant Test Engineering Manager, Workcell Manager (WCM), Business Unit Manager (BUM), peers and team member on team member’s contribution to the Workcell team. Provide ongoing coaching and counseling to team member based on feedback. Express pride in staff and encourage them to feel good about their accomplishments. Perform team member evaluations professionally and on time. Drive individuals and the team to continuously improve in key operational metrics and the achievement of the organizational goals. Coordinate activities of large teams and keep them focused in times of crises. Ensure recognition and rewards are managed fairly and consistently in area of responsibility. Communication: Provide communication forum for the exchange of ideas and information with the department. Organize verbal and written ideas clearly and use an appropriate business style. Ask questions; encourage input from team members. Assess communication style of individual team members and adapt own communication style accordingly. Technical Management Responsibilities Review circuit board designs for testability requirements. Support manufacturing with failure analysis, tester debugging, reduction of intermittent failures and downtime of test equipment. Prepare recommendations for testing and documentation of procedures to be used from the product design phase through to initial production. Generate reports and analysis of test data, prepares documentation and recommendations. Review test equipment designs, data and RMA issues with customers regularly. Design, and direct engineering and technical personnel in fabrication of testing and test control apparatus and equipment. Direct and coordinate engineering activities concerned with development, procurement, installation, and calibration of instruments, equipment, and control devices required to test, record, and reduce test data. Determine conditions under which tests are to be conducted and sequences and phases of test operations. Direct and exercise control over operational, functional, and performance phases of tests. Perform moderately complex assignments of the engineering test function for standard and/or custom devices. Analyze and interpret test data and prepares technical reports for use by test engineering and management personnel. 
Develop or use computer software and hardware to conduct tests on machinery and equipment. Perform semi-routine technique development and maintenance, subject to established Jabil standards, including ISO and QS development standards. Provide training in new procedures to production testing staff. Adhere to all safety and health rules and regulations associated with this position and as directed by supervisor. Comply and follow all procedures within the company security policy. Minimum Requirements Bachelors of Science in Electronics or Electrical Engineering from four-year college or university; and three to five years experience Language Skills Ability to read, analyze, and interpret general business periodicals, professional journals, technical procedures, or governmental regulations. Ability to write reports, business correspondence, and procedure manuals. Ability to effectively present information and respond to questions from groups of managers, clients, customers, and the general public. Mathematical Skills Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry. Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations. REASONING ABILITY Ability to define problems, collect data, establish facts, and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables. PHYSICAL DEMANDS The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. The employee is frequently required to walk, and to lift and carry PC’s and test equipment weighing up to 50 lbs. Specific vision abilities required by this job include close vision and use of computer monitor screens a great deal of time. WORK ENVIRONMENT The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Individual’s primary workstation is located in the office area, with some time spent each day on the manufacturing floor. The noise level in this environment ranges from low to moderate. , BE AWARE OF FRAUD: When applying for a job at Jabil you will be contacted via correspondence through our official job portal with a jabil.com e-mail address; direct phone call from a member of the Jabil team; or direct e-mail with a jabil.com e-mail address. Jabil does not request payments for interviews or at any other point during the hiring process. Jabil will not ask for your personal identifying information such as a social security number, birth certificate, financial institution, driver’s license number or passport information over the phone or via e-mail. If you believe you are a victim of identity theft, contact your local police department. Any scam job listings should be reported to whatever website it was posted in.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs. Key Responsibilities: AI/ML Development Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets. API Development Design and develop Python RESTful APIs using FastAPI or Flask for: image upload and classification, model inference endpoints, and encryption trigger calls. Integrate APIs with AWS Lambda and Amazon API Gateway. AWS Integration Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access. Required Skills & Experience: Must-Have 3–5 years of experience in AI/ML (especially vision-based systems). Strong experience with PyTorch or TensorFlow for model development. Proficient in Python with experience building RESTful APIs. Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3. Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts. Understanding of model deployment, serialization, and performance tuning. Nice-to-Have Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns. Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL
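As a rough sketch of the classification-and-tagging step this posting describes, the snippet below runs zero-shot image sensitivity tagging with the openly available Hugging Face CLIP checkpoint. The label set, threshold, and file path are illustrative assumptions, not details taken from the role.

```python
# Zero-shot sensitivity tagging with CLIP (sketch; labels and threshold are assumptions).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

# Hypothetical label set: the first two labels are treated as "sensitive".
LABELS = ["a scan of an identity document", "a medical image", "an ordinary photo"]

def classify_sensitivity(path: str) -> dict:
    image = Image.open(path).convert("RGB")
    inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    scores = {label: float(p) for label, p in zip(LABELS, probs)}
    # Downstream encryption would be triggered when a sensitive label dominates.
    scores["encrypt"] = max(scores[LABELS[0]], scores[LABELS[1]]) > 0.5
    return scores
```

In the architecture outlined above, a function like this would sit behind a FastAPI endpoint and hand flagged images to the encryption service.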

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: AI Engineer Location: Gurgaon (On-site) Type: Full-Time Experience: 2–6 Years Role Overview We are seeking a hands-on AI Engineer to architect and deploy production-grade AI systems that power our real-time voice intelligence suite. You will lead AI model development, optimize low-latency inference pipelines, and integrate GenAI, ASR, and RAG systems into scalable platforms. This role combines deep technical expertise with team leadership and a strong product mindset. Key Responsibilities Build and deploy ASR models (e.g., Whisper, Wav2Vec2.0) and diarization systems for multi-lingual, real-time environments. Design and optimize GenAI pipelines using OpenAI, Gemini, LLaMA, and RAG frameworks (LangChain, LlamaIndex). Architect and implement vector database systems (FAISS, Pinecone, Weaviate) for knowledge retrieval and indexing. Fine-tune LLMs using SFT, LoRA, RLHF, and craft effective prompt strategies for summarization and recommendation tasks. Lead AI engineering team members and collaborate cross-functionally to ship robust, high-performance systems at scale. Preferred Qualification 2–6 years of experience in AI/ML, with demonstrated deployment of NLP, GenAI, or STT models in production. Proficiency in Python, PyTorch/TensorFlow, and real-time architectures (WebSockets, Kafka). Strong grasp of transformer models, MLOps, and low-latency pipeline optimization. Bachelor’s/Master’s in CS, AI/ML, or related field from a reputed institution (IITs, BITS, IIITs, or equivalent). What We Offer Compensation: Competitive salary + equity + performance bonuses Ownership: Lead impactful AI modules across voice, NLP, and GenAI Growth: Work with top-tier mentors, advanced compute resources, and real-world scaling challenges Culture: High-trust, high-speed, outcome-driven startup environment
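For the ASR piece described above, here is a minimal transcription sketch using the Hugging Face pipeline wrapper around Whisper; the model size, chunk length, and file name are assumptions, and a production system would layer streaming, diarization, and language handling on top.

```python
# Offline Whisper transcription via the Hugging Face pipeline (sketch, not real-time).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",   # size chosen for illustration only
    chunk_length_s=30,              # long-form audio is processed in chunks
)

def transcribe(audio_path: str) -> str:
    result = asr(audio_path, return_timestamps=True)
    return result["text"]

if __name__ == "__main__":
    print(transcribe("call_recording.wav"))  # hypothetical file
```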

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About Knowdis.ai Knowdis.ai is an AI-first company specializing in e-commerce applications. We harness the power of machine learning and AI to enhance e-commerce operations, optimize customer experiences, and drive growth. If you are passionate about AI-driven product innovation, this is the perfect opportunity to make a meaningful impact. Key Responsibilities: - Infrastructure Management: Build scalable and robust infrastructure for ML models, ensuring seamless production integration. - CI/CD Expertise: Develop and maintain CI/CD pipelines with a focus on ML model deployment. - Model Deployment and Monitoring: Deploy ML models using TensorFlow Serving, Pytorch Serving, Triton Inference Server, or TensorRT and monitor their performance in production. - Collaboration: Work closely with data scientists and software engineers to transition ML models from research to production. - Security and Compliance: Uphold security protocols and ensure regulatory compliance in ML systems. Skills and Experience Required: - Proficiency in Docker and Kubernetes for containerization and orchestration. - Experience with CI/CD pipeline development and maintenance. - Experience in deploying ML models using TensorFlow Serving, Pytorch Serving, Triton Inference Server, and TensorRT. - Experience with cloud platforms like AWS, Azure, and GCP. - Strong problem-solving, communication, and teamwork skills. Qualifications: - Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field. - 4-6 years of experience in ML project management, with a recent focus on MLOps. Additional Competencies: - AI Technologies Deployment, Data Engineering, IT Performance, Scalability Testing, and Security Practices. SELECTION PROCESS: Interested Candidates are mandatorily required to apply through this listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may be required to appear in an Online Assessment administered by Jigya on behalf of the Client. Candidates selected after the screening test will be interviewed by the Client
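As one illustration of the serving patterns listed above, here is a client-side call to a model hosted behind TensorFlow Serving's REST API; the host, model name, and feature vector are placeholders, and the same request/response pattern applies to TorchServe or Triton with their respective endpoints.

```python
# Minimal REST client for a model on TensorFlow Serving (sketch; endpoint values are assumptions).
import requests

SERVING_URL = "http://localhost:8501/v1/models/recommender:predict"

def score(features: list[float]) -> float:
    payload = {"instances": [features]}            # one row per instance
    resp = requests.post(SERVING_URL, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()["predictions"][0]

if __name__ == "__main__":
    print(score([0.3, 1.2, 5.0, 0.0]))
```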

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Roles & Responsibilities Design, implement, and train deep learning models for: Text-to-Speech (e.g., SpeechT5, StyleTTS2, YourTTS, XTTS-v2 or similar models) Voice Cloning with speaker embeddings (x-vectors, d-vectors), few-shot adaptation, prosody and emotion transfer Engineer multilingual audio-text preprocessing pipelines: Text normalization, grapheme-to-phoneme (G2P) conversion, Unicode normalization (NFC/NFD) Silence trimming, VAD-based audio segmentation, audio enhancement for noisy corpora, speech prosody modification and waveform manipulation Build scalable data loaders using PyTorch for: Large-scale, multi-speaker datasets with variable-length sequences and chunked streaming Extract and process acoustic features: Log-mel spectrograms, pitch contours, MFCCs, energy, speaker embeddings Optimize training using: Mixed precision (FP16/BFloat16), gradient checkpointing, label smoothing, quantization-aware training Build serving infrastructure for inference using: TorchServe, ONNX Runtime, Triton Inference Server, FastAPI (for REST endpoints), including batch and real-time modes Optimize models for production: Quantization, model pruning, ONNX conversion, parallel decoding, GPU/CPU memory profiling Create automated and human evaluation logic: MOS, PESQ, STOI, BLEU, WER/CER, multi-speaker test sets, multilingual subjective listening tests Implement ethical deployment safeguards: Digital watermarking, impersonation detection, and voice verification for cloned speech Conduct literature reviews and reproduce state-of-the-art papers; adapt and improve on open benchmarks Mentor junior contributors, review code, and maintain shared research and model repositories Collaborate across teams (MLOps, backend, product, linguists) to translate research into deployable, user-facing solutions Required Skills Advanced proficiency in Python and PyTorch (TensorFlow a plus) Strong grasp of deep learning concepts: Sequence-to-sequence models, Transformers, autoregressive and non-autoregressive decoders, attention mechanisms, VAEs, GANs Experience with modern speech processing toolkits: ESPnet, NVIDIA NeMo, Coqui TTS, OpenSeq2Seq, or equivalent Design of custom loss functions for custom models (mel loss, GAN loss, KL divergence, attention losses, etc.), learning rate schedules, and training stability Hands-on experience with multilingual and low-resource language modeling Understanding of transformer architecture, LLMs and working with existing AI models, tools and APIs Model serving & API integration: TorchServe, FastAPI, Docker, ONNX Runtime Preferred (Bonus) Skills CUDA kernel optimization, custom GPU operations, memory footprint profiling Experience deploying on AWS/GCP with GPU acceleration Experience developing RESTful APIs for real-time TTS/voice cloning endpoints Publications or open-source contributions in TTS, ASR, or speech processing Working knowledge of multilingual translation pipelines Knowledge of speaker diarization, voice anonymization, and speech synthesis for agglutinative/morphologically rich languages Milestones & Expectations (First 3–6 Months) Deliver at least one production-ready TTS or Voice Cloning model integrated with India Speaks’ Dubbing Studio or SaaS APIs Create a fully reproducible experiment pipeline for multilingual speech modeling, complete with model cards and performance benchmarks Contribute to custom evaluation tools for measuring quality across Indian languages Deploy optimized models to live staging environments using Triton, TorchServe, or ONNX Demonstrate impact through real-world integration in education, media, or defence deployments
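A small sketch of the acoustic feature extraction step mentioned above (log-mel spectrograms with silence trimming) using librosa; the sample rate, FFT size, hop length, and mel-band count are common TTS defaults rather than values specified by the role.

```python
# Log-mel spectrogram extraction for TTS training data (sketch; parameters are assumptions).
import librosa
import numpy as np

def log_mel_spectrogram(path: str,
                        sr: int = 22050,
                        n_fft: int = 1024,
                        hop_length: int = 256,
                        n_mels: int = 80) -> np.ndarray:
    y, _ = librosa.load(path, sr=sr)
    y, _ = librosa.effects.trim(y, top_db=30)        # crude silence trimming
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    return librosa.power_to_db(mel, ref=np.max)      # shape: (n_mels, n_frames)
```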

Posted 1 month ago

Apply

50.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases. Data Axle India is recognized as a Great Place to Work! This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive. Roles & Responsibilities We are looking for an Associate Data Scientist to join the Data Science Client Services team to continue our success of identifying high-quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture. Design or enhance ML workflows for data ingestion, model design, model inference and scoring. Oversight on team project execution and delivery. Establish peer review guidelines for high-quality coding to help develop junior team members' skill set growth, cross-training, and team efficiencies. Visualize and publish model performance results and insights to internal and external audiences.
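A compact sketch of the kind of train-and-score workflow behind the responsibilities above, using scikit-learn; the input file, feature columns, and target name are hypothetical.

```python
# Response/propensity model: train, evaluate, and score records (sketch; data is hypothetical).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("audience.csv")                       # hypothetical audience file
X, y = df.drop(columns=["responded"]), df["responded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", GradientBoostingClassifier()),
])
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]             # per-record response probability
print("AUC:", roc_auc_score(y_test, scores))
```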

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

At AryaXAI, we’re building the future of explainable, scalable, and aligned AI—designed specifically for high-stakes environments where trust, transparency, and performance are non-negotiable. From financial services to energy and other regulated industries, our platform powers intelligent decision-making through safe and robust AI systems. We’re looking for a Data Scientist with a deep understanding of both classical and deep learning techniques, experience building enterprise-scale ML pipelines, and the ambition to tackle real-world, high-impact problems. You will work at the intersection of modeling, infrastructure, and regulatory alignment—fine-tuning models that must be auditable, performant, and production-ready. Responsibilities: Modeling & AI Development Design, build, and fine-tune machine learning models (both classical and deep learning) for complex mission-critical use cases in domains like banking, finance, energy, etc. Work on supervised, unsupervised, and semi-supervised learning problems using structured, unstructured, and time-series data. Fine-tune foundation models for specialized use cases requiring high interpretability and performance. Platform Integration Develop and deploy models on AryaXAI’s platform to serve real-time or batch inference needs. Leverage explainability tools (e.g., DLBacktrace, SHAP, LIME, or AryaXAI’s native xai_evals stack) to ensure transparency and regulatory compliance. Design pipelines for data ingestion, transformation, model training, evaluation, and deployment using MLOps best practices. Enterprise AI Architecture Collaborate with product and engineering teams to implement scalable and compliant ML pipelines across cloud and hybrid environments. Contribute to designing secure, modular AI workflows that meet enterprise needs—latency, throughput, auditability, and policy constraints. Ensure models meet strict regulatory and ethical requirements (e.g., bias mitigation, traceability, explainability). Requirements: 3+ years of experience building ML systems in production, ideally in regulated or enterprise environments. Strong proficiency in Python, with experience in libraries like scikit-learn, XGBoost, PyTorch, TensorFlow, or similar. Experience with end-to-end model lifecycle: from data preprocessing and feature engineering to deployment and monitoring. Deep understanding of enterprise ML architecture—model versioning, reproducibility, CI/CD for ML, and governance. Experience working with regulatory, audit, or safety constraints in data science or ML systems. Familiarity with MLOps tools (MLflow, SageMaker, Vertex AI, etc.) and cloud platforms (AWS, Azure, GCP). Strong communication skills and an ability to translate technical outcomes into business impact. Bonus Points For Prior experience in regulated industries: banking, insurance, energy, or critical infrastructure. Experience with time-series modeling, anomaly detection, underwriting, fraud detection or risk scoring systems. Knowledge of RAG architectures, generative AI, or foundation model fine-tuning. Exposure to privacy-preserving ML, model monitoring, and bias mitigation frameworks.
What You’ll Get Competitive compensation with performance-based upside Comprehensive health coverage for you and your family Opportunity to work on mission-critical AI systems where your models drive real-world decisions Ownership of core components in a platform used by top-tier enterprises Career growth in a fast-paced, high-impact startup environment Remote-first, collaborative, and high-performance team culture If you’re excited to build data science solutions that truly matter, especially in the most demanding industries, we want to hear from you.
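A minimal sketch of the explainability step referenced in the responsibilities, using SHAP with a gradient-boosted tree model on synthetic data; it stands in for the platform's own tooling, which the posting names but does not document here.

```python
# Per-prediction feature attributions with SHAP for a tree model (sketch).
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # attributions for a few rows, one weight per feature
print(shap_values)
```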

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

At Roche, you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters. The Position A healthier future. That’s what drives us. We are looking for a highly skilled Artificial Intelligence (AI) / Machine Learning (ML) Engineer with expertise in building AI-powered applications. We will be building AI & GenAI solutions end-to-end: from concept, through prototyping, production, to operations. The Opportunity: Generative AI Application Development: Collaborate with developers and stakeholders in Agile teams to integrate LLMs and classical AI techniques into end-user applications, focusing on user experience and real-time performance Algorithm Development: Design, develop, customize, optimize, and fine-tune LLM-based and other AI-infused algorithms tailored to specific use cases such as text generation, summarization, information extraction, chatbots, AI agents, code generation, document analysis, sentiment analysis, data analysis, etc. LLM Fine-Tuning and Customization: Fine-tune pre-trained LLMs to specific business needs, leveraging prompt engineering, transfer learning, and few-shot techniques to enhance model performance in real-world scenarios End-to-End Pipeline Development: Build and maintain production-ready end-to-end ML pipelines, including data ingestion, preprocessing, training, evaluation, deployment, and monitoring; automate workflows using MLOps best practices to ensure scalability and efficiency Performance Optimization: Optimize model inference speed, reduce latency, and manage resource usage across cloud services and GPU/TPU architectures Scalable Model Deployment: Collaborate with other developers to deploy models at scale, using cloud-based infrastructure (AWS, Azure) and ensuring high availability and fault tolerance Monitoring and Maintenance: Implement continuous monitoring and refining strategies for deployed models, using feedback loops and e.g. incremental fine-tuning to ensure ongoing accuracy and reliability; address drifts and biases as they arise Software Development: Apply software development best practices, including writing unit tests, configuring CI/CD pipelines, containerizing applications, prompt engineering and setting up APIs; ensure robust logging, experiment tracking, and model monitoring Who you are: Minimum overall 5-7 years of experience and hold a B.Sc., B.Eng., M.Sc., M.Eng., Ph.D. or D.Eng. in Computer Science or equivalent degree Experience: 3+ years of experience in AI/ML engineering, with exposure to both classical machine learning methods and language model-based applications Technical Skills: Advanced proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow; expertise with Transformer architectures; hands-on experience with LangChain or similar LLM frameworks; experience with designing end-to-end RAG systems using state-of-the-art orchestration frameworks (hands-on experience with fine-tuning LLMs for specific tasks and use cases considered an additional advantage) MLOps Knowledge: Strong understanding of MLOps tools and practices, including version control, CI/CD pipelines, containerization, orchestration, Infrastructure as Code, automated deployment Deployment: Experience in deploying LLM and other AI models with cloud platforms (AWS, Azure) and machine learning workbenches for robust and scalable productizations Practical overview and experience with AWS services to design cloud solutions, familiarity with Azure is a plus; experience with working with GenAI-specific services like Azure OpenAI, Amazon Bedrock, Amazon SageMaker JumpStart, etc. Data Engineering: Expertise in working with structured and unstructured data, including data cleaning, feature engineering with data stores like vector, relational, NoSQL databases and data lakes through APIs Model Evaluation and Metrics: Proficiency in evaluating both classical ML models and LLMs using relevant metrics Relocation benefits are not available for this posting. Who we are A healthier future drives us to innovate. Together, more than 100’000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let’s build a healthier future, together. Roche is an Equal Opportunity Employer.
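A stripped-down sketch of the RAG flow this role describes: embed document chunks, retrieve by cosine similarity, and assemble a grounded prompt. The embedding checkpoint is a common open model, and call_llm is a hypothetical stand-in for whichever hosted service (Azure OpenAI, Amazon Bedrock, etc.) actually generates the answer.

```python
# Minimal retrieval-augmented generation loop (sketch; no orchestration framework).
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["chunk one ...", "chunk two ...", "chunk three ..."]   # placeholder document chunks
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)                # hypothetical LLM client, not defined here
```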

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Senior Gen AI Engineer Job Description Brightly Software is seeking an experienced candidate to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. Role As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Lead the evaluation and selection of foundation models and vector databases based on performance and business needs Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities Guide the design of multi-step RAG, agentic, or tool-augmented workflows Implement governance, safety layers, and responsible AI practices (e.g., guardrails, moderation, auditability) Mentor junior engineers and review GenAI design and implementation plans Drive experimentation, benchmarking, and continuous improvement of GenAI capabilities Collaborate with leadership to align GenAI initiatives with product and business strategy Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph. Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders
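One concrete piece of the RAG pipeline work listed above: building and querying a FAISS index over embedding vectors. The dimensionality and the embed helper are assumptions; in practice they come from whichever embedding model the pipeline uses.

```python
# Build and query a FAISS vector index (sketch; `embed` is a hypothetical embedding function).
import numpy as np
import faiss

dim = 384                                   # must match the embedding model's output size
index = faiss.IndexFlatIP(dim)              # inner product == cosine on normalized vectors

chunks = ["chunk one", "chunk two", "chunk three"]
vectors = np.vstack([embed(c) for c in chunks]).astype("float32")
faiss.normalize_L2(vectors)
index.add(vectors)

query = embed("user question").astype("float32").reshape(1, -1)
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)        # top-2 nearest chunks
print([chunks[i] for i in ids[0]])
```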

Posted 1 month ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Gen AI Engineer Job Description Brightly Software is seeking a high performer to join our Product team in the role of Gen AI engineer to drive best-in-class client-facing AI features by creating and delivering insights that advise client decisions tomorrow. Role As a Gen AI Engineer, you will play a critical role in building AI offerings for Brightly. You will partner with our various software Product teams to drive client-facing insights to inform smarter decisions faster. This will include the following: Design and implement applications powered by generative AI (e.g., LLMs, diffusion models), delivering contextual and actionable insights for clients. Establish best practices and documentation for prompt engineering, model fine-tuning, and evaluation to support cross-domain generative AI use cases. Build, test, and deploy generative AI applications using standard tools and frameworks for model inference, embeddings, vector stores, and orchestration pipelines. Key Responsibilities Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector stores like Pinecone, FAISS, or AWS OpenSearch Develop GenAI applications using Hugging Face Transformers, LangChain, and Llama-related frameworks Perform exploratory data analysis (EDA), data cleaning, and feature engineering to prepare data for model building. Design, develop, train, and evaluate machine learning models (e.g., classification, regression, clustering, natural language processing) with strong experience in predictive and statistical modelling. Implement and deploy machine learning models into production using AWS services, with a strong focus on Amazon SageMaker (e.g., SageMaker Studio, training jobs, inference endpoints, SageMaker Pipelines). Understanding and development of state management workflows using LangGraph. Engineer and evaluate prompts, including prompt chaining and output quality assessment Apply NLP and transformer model expertise to solve language tasks Deploy GenAI models to cloud platforms (preferably AWS) using Docker and Kubernetes Monitor and optimize model and pipeline performance for scalability and efficiency Communicate technical concepts clearly to cross-functional and non-technical stakeholders Thrive in a fast-paced, lean environment and contribute to scalable GenAI system design Qualifications Bachelor’s degree is required 2-4 years of total experience with a strong focus on AI and ML and 1+ years in core GenAI Engineering Demonstrated expertise in working with large language models (LLMs) and generative AI systems, including both text-based and multimodal models. Strong programming skills in Python, including proficiency with data science libraries such as NumPy, Pandas, Scikit-learn, TensorFlow, and/or PyTorch. Familiarity with MLOps principles and tools for automating and streamlining the ML lifecycle. Experience working with agentic AI. Capable of building Retrieval-Augmented Generation (RAG) pipelines leveraging vector stores like Pinecone, Chroma, or FAISS. Strong programming skills in Python, with experience using leading AI/ML libraries such as Hugging Face Transformers and LangChain. Practical experience in working with vector databases and embedding methodologies for efficient information retrieval. Possess experience in developing and exposing API endpoints for accessing AI model capabilities using frameworks like FastAPI. Knowledgeable in prompt engineering techniques, including prompt chaining and performance evaluation strategies.
Solid grasp of natural language processing (NLP) fundamentals and transformer-based model architectures. Experience in deploying machine learning models to cloud platforms (preferably AWS) and containerized environments using Docker or Kubernetes. Skilled in fine-tuning and assessing open-source models using methods such as LoRA, PEFT, and supervised training. Strong communication skills with the ability to convey complex technical concepts to non-technical stakeholders. Able to operate successfully in a lean, fast-paced organization, and to create a vision and organization that can scale quickly.
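A brief sketch of the LoRA-style fine-tuning setup mentioned in the qualifications, using the PEFT library with a small open model; the base checkpoint and target modules are illustrative and depend on the architecture being tuned (for LLaMA-style models they would typically be the attention projections such as q_proj and v_proj).

```python
# Wrap a causal LM with LoRA adapters via PEFT (configuration sketch; training loop omitted).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "gpt2"                                  # small open checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],                 # GPT-2's fused attention projection; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()             # only the adapter weights are trainable
```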

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Data Scientist – AIML, GenAI & Agentic AI Location: Pune/ Bangalore/ Indore/ Kolkata Job Type: Full-time Experience Level: 4+ Years NP: Immediate Joiner or 15 Days Max Job Description We are seeking a highly skilled and innovative Data Scientist / AI Engineer with deep expertise in AI/ML, Generative AI, and Agentic AI frameworks to join our advanced analytics and AI team. The ideal candidate will possess a robust background in data science and machine learning, along with hands-on experience in building and deploying end-to-end intelligent systems using modern AI technologies including RAG (Retrieval-Augmented Generation), LLMs, and agent orchestration tools. Key Responsibilities Design, build, and deploy machine learning models and Generative AI solutions for a wide range of use cases (text, vision, and tabular data). Develop and maintain AI/ML pipelines for large-scale training and inference in production environments. Leverage frameworks such as LangChain, LangGraph, CrewAI for building Agentic AI workflows. Fine-tune and prompt-engineer LLMs (e.g., GPT, BERT) for enterprise-grade RAG and NLP solutions. Collaborate with business and engineering teams to translate business problems into AI/ML models that deliver measurable value. Apply advanced analytics techniques such as regression, classification, clustering, sequence modeling, association rules, computer vision, and NLP. Architect and implement scalable AI solutions using Python, PyTorch, TensorFlow, and cloud-native technologies. Ensure integration of AI solutions within existing enterprise architecture using containerized services and orchestration (e.g., Docker, Kubernetes). Maintain documentation and present insights and technical findings to stakeholders. Required Skills and Qualifications Bachelor’s/Master’s/PhD in Computer Science, Data Science, Statistics, or related field. Strong proficiency in Python and libraries such as Pandas, NumPy, Scikit-learn, etc. Extensive experience with deep learning frameworks: PyTorch and TensorFlow. Proven experience with Generative AI, LLMs, RAG, BERT, and related architectures. Familiarity with LangChain, LangGraph, and CrewAI and strong knowledge of agent orchestration and autonomous workflows. Experience with large-scale ML pipelines, MLOps practices, and cloud platforms (AWS, GCP, or Azure). Deep understanding of software engineering principles, design patterns, and enterprise architecture. Strong problem-solving, analytical thinking, and debugging skills. Excellent communication, presentation, and cross-functional collaboration abilities. Preferred Qualifications Experience in fine-tuning LLMs and optimizing prompt engineering techniques. Publications, open-source contributions, or patents in AI/ML/NLP/GenAI. Experience with vector databases and tools such as Pinecone, FAISS, Weaviate, or Milvus. Why Join Us? Work on cutting-edge AI/ML and GenAI innovations. Collaborate with top-tier scientists, engineers, and product teams. Opportunity to shape the next generation of intelligent agents and enterprise AI solutions. Flexible work arrangements and continuous learning culture. To Apply: Please submit your resume and portfolio of relevant AI/ML work (e.g., GitHub, papers, demos) to Shanti.upase@calsoftinc.com
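A simple illustration of the prompt-chaining pattern behind the agentic workflows mentioned above, kept framework-free; call_llm is a hypothetical client for whichever LLM the stack uses, and the prompts are placeholders.

```python
# Two-step prompt chain: extract structured facts, then draft a summary from them (sketch).
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API client (OpenAI, Bedrock, local model, etc.).
    raise NotImplementedError

def analyze_ticket(ticket_text: str) -> str:
    facts = call_llm(
        "Extract product, issue, and customer sentiment as JSON from this ticket:\n"
        + ticket_text
    )
    return call_llm(
        "Using these extracted facts, write a two-sentence handover note for the support engineer:\n"
        + facts
    )
```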

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Role Summary We’re hiring a Founding Full-Stack AI/ML Engineer to help build and scale the backbone of our AI system. You’ll lead development across agent orchestration, tool execution, Model Context Protocol (MCP), API integration, and browser-based research workflows. You’ll work closely with the founder on hands-on roadmap development, rapid prototyping, and fast iteration cycles to evolve the product quickly based on real user needs. Responsibilities Build multi-agent systems capable of reasoning, tool use, and autonomous action Implement Model Context Protocol (MCP) strategies to manage complex, multi-source context Integrate third-party APIs (e.g., Crunchbase, PitchBook, CB Insights), scraping APIs, and data aggregators Develop browser-based agents enhanced with computer vision for dynamic research, scraping, and web interaction Optimize inference pipelines, task planning, and system performance Collaborate on architecture, prototyping, and iterative development Experiment with prompt chaining, tool calling, embeddings, and vector search Requirements 5+ years of experience in software engineering or AI/ML development Strong Python skills and experience with LangChain, LlamaIndex, or agentic frameworks Proven experience with multi-agent systems, tool calling, or task planning agents Familiarity with Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and multi-modal context handling Experience with browser automation frameworks (e.g., Playwright, Puppeteer, Selenium) Cloud deployment and systems engineering experience (GCP, AWS, etc.) Self-starter attitude with strong product sense and iteration speed Bonus Points Experience with AutoGen, CrewAI, OpenAgents, or ReAct-style frameworks Background in building AI systems that blend structured and unstructured data Experience working in a fast-paced startup environment Previous startup or technical founding team experience This is a unique opportunity to work directly with an industry leader in AI to build a cutting-edge, next-generation AI system from the ground up.
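A minimal sketch of the browser-based research step described above using Playwright's sync API; the URL and the body-text extraction are placeholders for whatever pages and fields the agent actually needs.

```python
# Fetch rendered page text for a research agent with Playwright (sketch).
from playwright.sync_api import sync_playwright

def fetch_page_text(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        text = page.inner_text("body")     # a downstream agent would chunk and summarize this
        browser.close()
    return text

if __name__ == "__main__":
    print(fetch_page_text("https://example.com")[:500])
```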

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Java Full Stack Developer_Full-Time_Hyderabad, 5 Days WFO Job Title: Java Full Stack Developer Job Type: Full-Time Experience: 8 to 10 Years Location: Hyderabad, 5 Days WFO Must Have: Java Full Stack Developer with React & React Native Job Description: Must-Have Skills: • Professionals with 6 to 10 years of industry experience, preferably with a background in product startups. • 5+ years of hands-on experience in Java Spring Boot and Microservices architecture • Strong proficiency in React.js and React Native (web & mobile development) • AI/ML knowledge — using pre-trained models for inference • Solid experience with MySQL and PostgreSQL — data modelling and query optimization • Expertise in MongoDB and handling document-based data • Familiar with Kafka (producer & consumer) and event-driven systems, WebRTC, WebSocket protocols. • Experience deploying on AWS Cloud, EC2, S3, RDS, EKS/Kubernetes • CI/CD implementation experience • Must have proven experience in building scalable products and infrastructure on video-driven platforms Good to Have: • API Gateway experience (Kong Konnect or similar) • Exposure to Video Analytics or Computer Vision • Experience in building mobile apps from scratch • Familiarity with Low-code and Agentic workflow platforms • Previous startup experience is a big plus!

Posted 1 month ago

Apply