8.0 years
0 - 0 Lacs
Bengaluru
On-site
Senior Data Scientist

About the Role:
We are seeking a highly skilled Senior Data Scientist with expertise in Python, Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI), and Azure Cloud Services. The ideal candidate will be responsible for designing, developing, and deploying advanced AI/ML models to drive data-driven decision-making. This role requires strong analytical skills, proficiency in AI/ML technologies, and experience with cloud-based solutions.

Key Responsibilities:
· Design and develop ML, NLP, and GenAI models to solve complex business problems.
· Build, train, and optimize AI models using Python and relevant ML frameworks.
· Implement Azure AI/ML services for scalable deployment of models.
· Develop and integrate APIs for real-time model inference and decision-making.
· Work with large-scale data to extract insights and drive strategic initiatives.
· Collaborate with cross-functional teams, including Data Engineers, Software Engineers, and Product Teams, to integrate AI/ML solutions into applications.
· Implement CI/CD pipelines to automate model training, deployment, and monitoring.
· Ensure adherence to software engineering best practices and Agile methodologies in AI/ML projects.
· Stay updated on cutting-edge AI/ML advancements and continuously enhance models and algorithms.
· Conduct research on emerging AI/ML trends and contribute to the development of innovative solutions.
· Provide technical mentorship and guidance to junior data scientists.
· Optimize model performance and scalability in a production environment.

Required Skills & Qualifications:
· Proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
· Hands-on experience in NLP techniques, including transformers, embeddings, and text processing.
· Expertise in Generative AI models (GPT, BERT, LLMs, etc.).
· Strong knowledge of Azure AI/ML services, including Azure Machine Learning, Azure Cognitive Services, and Azure Databricks.
· Experience in developing APIs for model deployment and integration.
· Familiarity with CI/CD pipelines for AI/ML models.
· Strong understanding of software engineering principles and best practices.
· Experience working in an Agile development environment.
· Excellent problem-solving skills and ability to work in a fast-paced, dynamic environment.
· Strong background in statistical analysis, data mining, and data visualization.

Preferred Qualifications:
· Experience in MLOps and automation of model lifecycle management.
· Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
· Exposure to big data processing frameworks (Spark, Hadoop).
· Strong ability to communicate complex ideas to technical and non-technical stakeholders.
· Experience with Graph Neural Networks (GNNs) and recommendation systems.
· Familiarity with AutoML frameworks and hyperparameter tuning strategies.

Job Types: Full-time, Part-time, Permanent, Contractual / Temporary
Pay: ₹400.00 - ₹450.00 per hour
Schedule: Day shift
Experience:
Senior Data Scientist: 8 years (Required)
ML, NLP, and GenAI models: 8 years (Required)
Python: 8 years (Required)
Azure AI/ML services: 8 years (Required)
Work Location: In person
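Since the posting above asks for hands-on NLP work with transformers and embeddings, here is a minimal, illustrative sketch of generating sentence embeddings with the Hugging Face transformers library. The model name and the mean-pooling choice are assumptions made for the example, not requirements stated by the employer.

```python
# Minimal sketch: sentence embeddings with mean pooling (model choice is an assumption).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def embed(texts):
    # Tokenize a batch of sentences and run them through the encoder.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, seq_len, dim)
    # Mean-pool over tokens, ignoring padding positions.
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

vectors = embed(["Model deployed to production", "Quarterly sales report"])
print(vectors.shape)  # e.g. torch.Size([2, 384])
```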
Posted 2 weeks ago
3.0 - 5.0 years
2 - 4 Lacs
Bengaluru
Remote
Way of working - Remote: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter.

Job Profile: Data Scientist II
Location: Bangalore | Karnataka
Years of Experience: 3 - 5

ABOUT THE TEAM & ROLE:
Data Science at Swiggy
Data Science and applied ML is ingrained deeply in decision making and product development at Swiggy. Our data scientists work closely with cross-functional teams to ship end-to-end data products, from formulating the business problem in mathematical/ML terms to iterating on ML/DL methods to taking them to production. We own or co-own several initiatives with a direct line of sight to impact on customer experience as well as business metrics. We also encourage open sharing of ideas and publishing in internal and external avenues.

What will you get to do here?
You will leverage your strong ML/DL/Statistics background to build the next generation of ML-based solutions to improve the quality of ads recommendation, and leverage various optimization techniques to improve campaign performance.
You will mine and extract relevant information from Swiggy's massive historical data to help ideate and identify solutions to business and CX problems.
You will work closely with engineers/PMs/analysts on detailed requirements, technical designs, and implementation of end-to-end inference solutions at Swiggy scale.
You will stay abreast of the latest ML research in ads bidding algorithms and recommendation systems, and help adapt it to Swiggy's problem statements.
You will publish and talk about your work in internal and external forums to both technical and lay audiences.
Opportunity to work on challenging and impactful projects in the logistics domain.
Collaborative and supportive work environment that fosters learning and growth.
Conduct data analysis and modeling to identify opportunities for optimization and automation.

What qualities are we looking for?
Bachelor's or Master's degree in a quantitative field with 3-5 years of industry/research lab experience
Experience in Generative AI, Applied Mathematics, Machine Learning, Statistics
Required: Excellent problem-solving skills, ability to deconstruct and formulate solutions from first principles
Required: Depth and hands-on experience in applying ML/DL and statistical techniques to business problems
Preferred: Experience working with 'big data' and shipping ML/DL models to production
Required: Strong proficiency in Python, SQL, Spark, TensorFlow
Required: Strong spoken and written communication skills
Big plus: Experience in the space of ecommerce and logistics
Experience in Agentic AI, LLMs, and NLP; previous experience in deep learning, operations research, and working in startup or product-based consumer/internet companies is preferred
Excellent communication and collaboration skills, with the ability to work effectively in a team environment

Visit our tech blogs to learn more about some of the challenges we deal with:
https://bytes.swiggy.com/the-swiggy-delivery-challenge-part-one-6a2abb4f82f6
https://bytes.swiggy.com/how-ai-at-swiggy-is-transforming-convenience-eae0a32055ae
https://bytes.swiggy.com/decoding-food-intelligence-at-swiggy-5011e21dbc86

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, disability status, or any other characteristic protected by the law.
Posted 2 weeks ago
2.0 years
3 - 5 Lacs
Bengaluru
On-site
Company: AHIPL Agilon Health India Private Limited
Job Posting Location: India_Bangalore
Job Title: Prospective Chart Reviewer-6

Job Description:

Essential Job Functions:
Performs pre-visit medical record reviews to identify chronic conditions reported in prior years
Identify diagnoses that lack supporting documentation
Prioritizes clinical alerts and presents those that are strongly suggestive of an underlying condition
Present information to providers in a concise, complete manner
All other duties as assigned

Other Job Functions:
Understand, adhere to, and implement the Company's policies and procedures.
Provide excellent customer service skills, including consistently displaying awareness and sensitivity to the needs of internal and/or external clients, and proactively ensuring that these needs are met or exceeded.
Take personal responsibility for personal growth, including acquiring new skills, knowledge, and information.
Engage in excellent communication, which includes listening attentively and speaking professionally.
Set and complete challenging goals.
Demonstrate attention to detail and accuracy in work product by meeting productivity standards and maintaining a company standard of accuracy.

Qualifications:
Minimum Experience:
2+ years of clinical experience required
Advanced level of clinical knowledge associated with chronic disease states required
Relevant chart review experience required

Education/Licensure:
Medical Doctor or Nurse required
Coding Certification through AHIMA or AAPC preferred

Skills and Abilities:
Language Skills: Strong communication skills, both written and verbal, to work with multiple internal and external clients in a fast-paced environment
Mathematical Skills: Ability to work with mathematical concepts such as probability and statistical inference. Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations.
Reasoning Ability: Ability to apply principles of logical or scientific thinking to a wide range of intellectual and practical problems.
Computer Skills: Ability to create and maintain documents using Microsoft Office (Word, Excel, Outlook, PowerPoint)

Location: India_Bangalore
Posted 2 weeks ago
4.0 years
0 Lacs
Madurai
On-site
Job Location: Madurai
Job Experience: 4-15 Years
Model of Work: Work From Office
Technologies: Artificial Intelligence, Machine Learning
Functional Area: Software Development

Job Summary:
Job Title: ML Engineer – TechMango
Location: TechMango, Madurai
Experience: 4+ Years
Employment Type: Full-Time

Role Overview
We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments.

Key Responsibilities / Core Technical Skills

Languages & Databases
Programming Language: Python
Databases: SQL

Core Libraries & Tools
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
Machine Learning Models: State-of-the-art ML models, including boosting and ensemble methods
Model Explainability: SHAP, LIME

Deep Learning & Data Processing
Frameworks: PyTorch, PyTorch Forecasting
Libraries: Pandas, NumPy, PySpark, Polars (optional)
Hyperparameter Tuning Tools: Optuna, Amazon SageMaker Automatic Model Tuning

Deployment & MLOps
Model Deployment: Batch & real-time with API endpoints
Experiment Tracking: MLflow
Model Serving: TorchServe, SageMaker Endpoints / Batch

Containerization & Pipelines
Containerization: Docker
Orchestration: AWS Step Functions, SageMaker Pipelines

AWS Cloud Stack
SageMaker (Training, Inference, Tuning)
S3 (Data Storage)
CloudWatch (Monitoring)
Lambda (Trigger-based inference)
ECR / ECS / Fargate (Container Hosting)

Candidate Requirements
Strong problem-solving and analytical mindset
Hands-on experience with end-to-end ML project lifecycle
Familiarity with MLOps workflows in production environments
Excellent communication and documentation skills
Comfortable working in agile, cross-functional teams
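As a small illustration of the time series forecasting stack listed above, here is a minimal sketch of fitting and forecasting an ARIMA model with statsmodels on synthetic data. The series, the model order, and the forecast horizon are assumptions chosen for the example.

```python
# Minimal sketch: fitting and forecasting with an ARIMA model (synthetic data, assumed order).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with trend plus noise, purely for illustration.
rng = np.random.default_rng(42)
index = pd.date_range("2022-01-01", periods=36, freq="MS")
y = pd.Series(100 + np.arange(36) * 2.5 + rng.normal(0, 3, 36), index=index)

# Fit an ARIMA(1, 1, 1); in practice the order would come from an AIC search
# (for example pmdarima.auto_arima).
model = ARIMA(y, order=(1, 1, 1)).fit()

# Forecast the next 6 periods with confidence intervals.
forecast = model.get_forecast(steps=6)
print(forecast.predicted_mean)
print(forecast.conf_int())
```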
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
**Who you are**

You've stepped beyond traditional QA—you test AI agents, not just UI clicks. You build automated tests that check for **hallucinations, bias, adversarial inputs**, prompt chain integrity, model outputs, and multi-agent orchestration failures. You script Python tests and use Postman/Selenium/Playwright for UI/API, and JMeter or k6 for load. You understand vector databases and can test embedding correctness and data flows. You can ask, "What happens when two agents clash?" or "If one agent hijacks context, does the system fail?" and then write tests for these edge cases. You're cloud-savvy—Azure or AWS—and integrate tests into CI/CD. You debug failures in agent-manager systems and help triage model logic vs infra issues. You take ownership of AI test quality end-to-end.

---

**What you'll actually do**

You'll design **component & end-to-end tests** for multi-agent GenAI workflows (e.g., planner + execution + reporting agents). You'll script pytest + Postman + Playwright suites that test API functionality, failover logic, agent coordination, and prompt chaining. You'll simulate coordination failures, misalignment, and hallucinations in agent dialogues. You'll run load tests on LLM endpoints and track latency and cost. You'll validate that vector DB pipelines (Milvus/FAISS/Pinecone) return accurate embeddings and retrieval results. You'll build CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins) that gate merges based on model quality thresholds. You'll implement drift, bias, and hallucination metrics, and create dashboards for QA monitoring. You'll occasionally run human-in-the-loop sanity checks for critical agent behavior. You'll write guides so others understand how to test GenAI pipelines.

---

**Skills and knowledge**

• Python automation—pytest/unittest for component & agent testing
• Postman/Newman, Selenium/Playwright/Cypress for UI/API test flows
• Load/performance tools—JMeter, k6 for inference endpoints
• SQL/NoSQL and data validation for vector DB pipelines
• Vector DB testing—Milvus, FAISS, Pinecone embeddings/retrieval accuracy
• GenAI evaluation—hallucinations, bias/fairness, embedding similarity (BLEU, ROUGE), adversarial/prompt injection testing
• Multi-agent testing—component/unit tests per agent, inter-agent communications, coordination failure tests, message passing or blackboard patterns, emergent behavior monitoring
• CI/CD integration—Azure DevOps/GitHub Actions/Jenkins pipelines, gating on quality metrics
• Cloud awareness—testing in Azure/AWS/GCP, GenAI endpoint orchestration and failure mode testing
• Monitoring & observability—drift, latency, hallucination rate dashboards
• Soft traits—detail oriented, QA mindset, self-driven, cross-functional communicator, ethical awareness around AI failures.
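To make the hallucination-testing idea above concrete, here is a minimal pytest-style sketch of a groundedness check that flags answer sentences not supported by the retrieved context. The scoring heuristic, the thresholds, and the sample answers are invented for illustration; they are not a real product API or agreed quality bar.

```python
# Minimal sketch: a pytest-style groundedness check for an LLM answer
# (the scoring heuristic and thresholds are hypothetical placeholders).
import re

def grounded_fraction(answer: str, context: str) -> float:
    """Fraction of answer sentences whose key terms all appear in the retrieved context."""
    sentences = [s for s in re.split(r"[.!?]", answer) if s.strip()]
    if not sentences:
        return 0.0
    context_words = set(re.findall(r"\w+", context.lower()))
    grounded = 0
    for sentence in sentences:
        terms = [w for w in re.findall(r"\w+", sentence.lower()) if len(w) > 3]
        if terms and all(t in context_words for t in terms):
            grounded += 1
    return grounded / len(sentences)

def test_answer_is_grounded_in_context():
    context = "The invoice total is 4200 rupees and it was paid on 12 March."
    answer = "The invoice total is 4200 rupees."          # would come from the agent under test
    assert grounded_fraction(answer, context) >= 0.8      # assumed quality threshold

def test_hallucinated_claim_is_flagged():
    context = "The invoice total is 4200 rupees."
    answer = "The invoice was disputed by the vendor legal team."
    assert grounded_fraction(answer, context) < 0.5
```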
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you're capable of?

At Coinbase, our mission is to increase economic freedom in the world. It's a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system.

To achieve our mission, we're seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company's hardest problems.

Our work culture is intense and isn't for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there's no better place to be.

While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.

As a Staff Machine Learning Platform Engineer at Coinbase, you will play a pivotal role in building an open financial system. The team builds the foundational components for training and serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and to analyze blockchains. We are a lean team, so you will get the opportunity to apply your software engineering skills across all aspects of building ML at scale, including stream processing, distributed training, and highly available online services.

What you'll be doing (i.e. job duties):
Form a deep understanding of our Machine Learning Engineers' needs and our current capabilities and gaps.
Mentor our talented junior engineers on how to build high quality software, and take their skills to the next level.
Continually raise our engineering standards to maintain high availability and low latency for our ML inference infrastructure that runs both predictive ML models and LLMs.
Optimize low-latency streaming pipelines to give our ML models the freshest and highest quality data.
Evangelize state-of-the-art practices on building high-performance distributed training jobs that process large volumes of data.
Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance.

What we look for in you (i.e. job requirements):
5+ yrs of industry experience as a Software Engineer.
You have a strong understanding of distributed systems.
You lead by example through high quality code and excellent communication skills.
You have a great sense of design, and can bring clarity to complex technical requirements.
You treat other engineers as a customer, and have an obsessive focus on delivering them a seamless experience.
You have a mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level.

Nice to Have:
Experience building ML models and working with ML systems.
Experience working on a platform team, and building developer tooling.
Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB).
Job #: GPBE06IN

*Answers to crypto-related questions may be used to evaluate your onchain experience.

#LI-Remote

Commitment to Equal Opportunity
Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site, click here to download a free compatible screen reader (a free step-by-step tutorial can be found here).

Global Data Privacy Notice for Job Candidates and Applicants
Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Electronix.AI, we're building tools for the engineers of tomorrow, tools that understand, automate, and accelerate the hardware development cycle.
Website: Electronix AI | Accelerate Hardware Design Decisions

We are now looking for an AI Full-stack Engineering Intern to join our team! This is your opportunity to be part of a fast-paced startup building AI-powered tools for the hardware world. You'll work with people who value autonomy, rapid prototyping, and deep problem-solving, and you'll help shape systems used in the field, not just in the cloud.

⚡ What You'll Be Doing (Responsibilities):
Develop and optimize backend services using Python (FastAPI, HuggingFace Transformers, OpenCV); an illustrative API sketch follows this listing.
Design and deploy APIs for AI-powered automation, video analytics, and search applications.
Develop and deploy robust on-premises solutions that integrate seamlessly with existing infrastructure.
Implement and manage DevOps pipelines, ensuring continuous integration and deployment workflows across cloud and on-prem environments.
Ensure scalability, security, and reliability across infrastructure and applications.
Work with databases and caching layers like MySQL, Elasticsearch, Redis, and Vector DBs (Qdrant, ChromaDB, etc.).
Develop frontend components using TypeScript, React, and GraphQL to create intuitive user experiences.
Troubleshoot and resolve complex system performance issues, particularly for on-prem deployments and high-performance computing environments.
(Nice to Have) Experience with Kubernetes for container orchestration

⚡ What we need to see:
Backend: Python (FastAPI, HuggingFace Transformers, OpenCV), MySQL, RabbitMQ, GraphQL
Front-End: React, TypeScript
AI & Search: Elasticsearch, Redis, Vector DBs (Qdrant, ChromaDB)
Cloud & DevOps: Docker, AWS, Azure, CI/CD, On-Prem Deployments
Bonus: Kubernetes, GPU-based workloads (NVIDIA GPUs preferred)

⚡ Internship Details:
Duration: 4-6 months
Location: Onsite/Hybrid (min 3x a week) - Bengaluru (Jayanagar/Richmond Town)
Stipend: Monetary compensation included.
Additional Perks: Early access to product decisions, real-world deployment experience, high ownership.

⚡ Good to Have:
These are extras that help us spot naturally curious, hands-on builders. None are hard requirements, especially the hardware items, which are purely optional bonuses.

Hands-On AI Exploration:
Personal or academic projects showing end-to-end use of modern AI stacks—e.g., fine-tuning or serving models with Hugging Face Transformers, building RAG pipelines, or experimenting with OpenAI, Gemini, or Ollama APIs.
Evidence you can move beyond tutorials: custom data pipelines, evaluation scripts, or deployment artefacts that solve real problems.

Tooling & Framework Depth:
Comfort with open-source LLM toolchains such as LangChain, LlamaIndex, FastEmbed, or Haystack.
Experience running or optimising models on GPU/CPU edge devices; bonus points for on-device inference tricks (quantisation, pruning, TensorRT, ONNX, GGUF).
Familiarity with vector databases (Qdrant, Chroma, Weaviate) and search frameworks (Elastic, OpenSearch).

(Optional Bonus) Hardware-Aware Mindset:
Purely a plus, great if you have it, absolutely fine if you don't.
Basic exposure to the semiconductor/EDA landscape, PCB design, or HDLs (Verilog/VHDL).
Past tinkering that bridges software with sensors, FPGAs, Raspberry Pi, Jetson, or lab equipment.
Appreciation of constraints unique to on-prem or embedded deployments: latency, memory, thermals, and how they influence architecture.
MLOps & Dev-Infra Curiosity:
Initial exposure to MLOps concepts: experiment tracking (Weights & Biases, MLflow), model registry, automated evaluation.
Comfort scripting IaC with Terraform/Ansible or writing GitHub Actions to ship prototypes rapidly.

Show-Don't-Tell Proof:
An active GitHub with readable READMEs, issues, and commit history reflecting iterative learning.
Blog posts, lightning talks, or demo videos explaining challenges, trade-offs, and learnings. Clear communication counts.
Contributions, big or small, to open-source projects in AI, DevOps, or hardware realms.

⚡ Personal Traits We Value:
Relentless curiosity: you ask why and keep digging.
Bias for rapid prototyping: build, test, iterate.
System thinking: see the whole stack and optimize the right layer.
Collaborative clarity: explain complex ideas simply and receive feedback constructively.
Bring tangible evidence: code, designs, write-ups, documentation showcasing how you learn and build.
We're excited to see what you've been tinkering with!

⚡ Hiring Process:
GitHub Review
Profile Shortlisting
Technical Task/Assignment
Deep Dive Interview
Join the Team!
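The sketch referenced in the responsibilities above: a minimal, hypothetical FastAPI search endpoint of the kind this internship involves. The route, request model, and in-memory "index" are illustrative assumptions; a real service would call Elasticsearch or a vector database.

```python
# Minimal, hypothetical sketch of a FastAPI search endpoint (names and logic are illustrative).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-search-api")

# Toy in-memory "index"; a real service would query Elasticsearch or a vector DB instead.
DOCUMENTS = {
    1: "Schematic review checklist for power supply design",
    2: "Thermal constraints for on-prem edge deployments",
}

class SearchRequest(BaseModel):
    query: str
    top_k: int = 5

@app.post("/search")
def search(req: SearchRequest):
    # Naive keyword match stands in for embedding-based retrieval.
    hits = [
        {"id": doc_id, "text": text}
        for doc_id, text in DOCUMENTS.items()
        if any(word.lower() in text.lower() for word in req.query.split())
    ]
    return {"query": req.query, "results": hits[: req.top_k]}

# Run with: uvicorn this_module:app --reload
```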
Posted 2 weeks ago
13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced Cloud AIOps Architect to lead the design and implementation of advanced AI-driven operational systems across multi-cloud and hybrid cloud environments. This role demands a blend of technical expertise, innovation, and leadership to develop scalable solutions for complex IT systems with a focus on automation, machine learning, and operational efficiency.

Responsibilities
Architect and design the AIOps solution leveraging AWS, Azure, and cloud-agnostic services, ensuring portability and scalability
Develop an end-to-end automated machine learning (ML) pipeline from data ingestion, DataOps, and model training to inference pipelines across multi-cloud environments
Design hybrid architectures leveraging cloud-native services like Amazon SageMaker, Azure Machine Learning, and Kubernetes for development, model deployment, and orchestration
Design and implement ChatOps integration, allowing users to interface with the platform through Slack, Microsoft Teams, or similar communication platforms
Leverage Jupyter Notebooks in AWS SageMaker, Azure Machine Learning Studio, or cloud-agnostic environments to create model prototypes and experiment with datasets
Lead the design of classification models and other ML models using AWS SageMaker training jobs, Azure ML training jobs, or open-source tools in a Kubernetes container
Implement automated rule management systems using Python in containers deployed to AWS ECS/EKS, Azure AKS, or Kubernetes for cloud-agnostic solutions
Architect the integration of ChatOps backend services using Python containers running in AWS ECS/EKS, Azure AKS, or Kubernetes for real-time interactions and updates
Oversee the continuous deployment and retraining of models based on updated data and feedback loops, ensuring models remain efficient and adaptive
Design platform-agnostic solutions to ensure that the system can be ported across different cloud environments or run in hybrid clouds (on-premises and cloud)

Requirements
13+ years of overall experience and 7+ years of experience in AIOps, Cloud Architecture, or DevOps roles
Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS
Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS
Hands-on experience working on the design, development, and deployment of contact centre solutions at scale
Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
Strong programming skills in Python for rule management, automation, and integration with cloud services
Experience in Kafka, Azure DevOps, and AWS DevOps for CI/CD pipelines
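As a small illustration of the inference side of the architecture described above, here is a minimal sketch of calling a deployed SageMaker real-time endpoint from Python with boto3. The endpoint name, region, and payload schema are hypothetical and would depend on the actual deployed model.

```python
# Minimal sketch: calling a deployed SageMaker real-time endpoint with boto3
# (the endpoint name, region, and payload schema are hypothetical).
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {"metrics": {"cpu_util": 0.93, "error_rate": 0.07, "latency_ms": 480}}

response = runtime.invoke_endpoint(
    EndpointName="aiops-anomaly-classifier",   # assumed endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result)  # shape of the result depends on the deployed model
```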
Posted 2 weeks ago
0 years
0 - 0 Lacs
Bengaluru, Karnataka, India
On-site
As an AI Research Apprentice you'll push the frontiers of generative and multimodal learning that power our autonomous robots. You will prototype diffusion-based vision models, vision-language architectures (VLAs/VLMs), and automated data-annotation pipelines that turn raw site footage into training gold.

Key Responsibilities
Design and train diffusion-based generative models for realistic, high-resolution synthetic data
Build compact Vision-Language Models (VLMs) to caption, query, and retrieve job-site scenes for downstream perception tasks
Develop Vision-Language Alignment (VLA) objectives that link textual work-orders with pixel-level segmentation masks
Architect large-scale auto-annotation pipelines that transform unlabeled images / point clouds into high-quality labels with minimal human input
Benchmark model performance on accuracy, latency, and memory for deployment on Jetson-class hardware; compress with distillation or LoRA
Collaborate with perception and robotics teams to integrate research prototypes into live ROS 2 stacks

Qualifications & Skills
Strong foundation in deep learning, probabilistic modeling, and computer vision (coursework or research projects)
Hands-on experience with diffusion models (e.g., DDPM, Latent Diffusion) in PyTorch or JAX
Familiarity with multimodal transformers / VLMs (CLIP, BLIP, Flamingo, LLaVA, etc.) and contrastive pre-training objectives
Working knowledge of data-centric AI: active learning, self-training, pseudo-labeling, and large-scale annotation pipelines
Solid coding skills in Python, PyTorch / Lightning, plus git-driven workflows; bonus for C++ and CUDA kernels
Bonus: experience with on-device inference (TensorRT, ONNX Runtime) & synthetic data tools (Isaac Sim)

Why Join Us
Research bleeding-edge generative & multimodal tech and watch it land on real construction robots
Publish, patent, and open-source: we encourage conference submissions and community engagement
Help build a company from the ground up—your experiments can become flagship product features

Requirements
PyTorch or JAX
C++
CUDA kernels
ONNX Runtime
TensorRT
Isaac Sim
Latent Diffusion
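To make the CLIP-style vision-language work above concrete, here is a minimal sketch of zero-shot image-text matching with a CLIP checkpoint from Hugging Face. The model choice, image path, and text prompts are illustrative assumptions, not project specifics.

```python
# Minimal sketch: zero-shot image-text matching with a CLIP model
# (model choice, image path, and prompts are illustrative assumptions).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("site_photo.jpg")  # placeholder path to a job-site image
labels = ["a concrete pour in progress", "scaffolding on a facade", "an empty excavation pit"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image against each text prompt.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2f}")
```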
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview
We are seeking an innovative and passionate Machine Learning Engineer specializing in Computer Vision to design, develop, and deploy cutting-edge computer vision models and algorithms for real-world robotic applications. In this role, you will work alongside a talented team to leverage state-of-the-art deep learning techniques to solve complex vision problems, from object detection to image segmentation and 3D perception. Your contributions will have a direct impact on shaping the future of intelligent systems in robotics.

This position is on-site in Pune and requires immediate availability. Please apply only if you are available to join within one month.

Essential Qualifications

Educational Background
- B.Tech., B.S., or B.E. in CSE, EE, ECE, ME, Data Science, AI, or related fields with ≥ 2 years of hands-on experience in Computer Vision and Machine Learning.
- M.S. or M.Tech. in the same disciplines with ≥ 2 years of practical experience in Computer Vision and Python programming.

Technical Expertise
- Strong experience developing deep learning models and algorithms for computer vision tasks (e.g., object detection, image classification, segmentation, keypoint detection).
- Proficiency with Python and ML frameworks such as PyTorch, TensorFlow, or Keras.
- Experience with OpenCV for image processing and computer vision pipelines.
- Solid understanding of convolutional neural networks (CNNs) and other vision-specific architectures (e.g., YOLO, Mask R-CNN, EfficientNet, etc.).
- Ability to build, test, and deploy robust models with PyTest or PyUnit testing frameworks.
- Hands-on experience with data augmentation, transformation, and preprocessing techniques for visual data.
- Familiarity with version control using Git.

Desirable Skills
Experience with 3D vision, stereo vision, or depth sensing technologies.
Familiarity with ROS2 for integrating vision systems into robotic platforms.
Understanding of sensor fusion techniques, including LiDAR, depth cameras, and IMUs.
Exposure to MLOps for deploying and maintaining computer vision models in production environments.
Knowledge of CMake for building and integrating machine learning and vision-based solutions.
Experience working with cloud-based solutions for computer vision, including cloud inference services.
Ability to work with CUDA, GPU-accelerated libraries, and distributed computing environments.

Location
On-site in Pune, India. Immediate availability required.

Why Join Us?
Shape the future of robotics and AI, focusing on state-of-the-art computer vision applications.
Work in a dynamic and collaborative environment with a team of highly motivated engineers.
Competitive salary and benefits package, along with performance-based incentives.
Opportunities for growth and professional development, including mentorship from industry experts.

If you're passionate about combining deep learning and computer vision to build intelligent systems that perceive and interact with the world, we want to hear from you!
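As a small illustration of the OpenCV plus PyTorch workflow this role describes, here is a minimal sketch that loads an image with OpenCV and classifies it with a pretrained torchvision model. The image path is a placeholder, and the model choice follows standard ImageNet defaults rather than anything specific to this position.

```python
# Minimal sketch: load an image with OpenCV and classify it with a pretrained torchvision model
# (the image path is a placeholder; preprocessing uses the weights' standard ImageNet defaults).
import cv2
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# OpenCV loads images as BGR; convert to RGB before feeding the network.
bgr = cv2.imread("frame_0001.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))  # HWC uint8 -> CHW

with torch.no_grad():
    logits = model(tensor.unsqueeze(0))

top = logits.softmax(dim=-1).topk(3)
for score, idx in zip(top.values.squeeze().tolist(), top.indices.squeeze().tolist()):
    print(weights.meta["categories"][idx], round(score, 3))
```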
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
• Develop strategies/solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and GenAI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets, processing, cleansing, and verifying the integrity of data, and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features, build and optimize classifiers/regressors, etc., using machine learning and deep learning techniques.
• Enhance data collection procedures to include information that is relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results in a clear manner to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior officials in a company and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding and hands-on experience of data science and machine learning techniques & algorithms for supervised & unsupervised problems, NLP, computer vision, and GenAI. Good applied statistics skills, such as distributions, statistical inference & testing, etc.
- Excellent understanding and hands-on experience of building deep learning models for text & image analytics (such as ANNs, CNNs, LSTM, Transfer Learning, encoder and decoder, etc.).
- Proficient in coding in common data science languages & tools such as R, Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure or knowledge in cloud (Azure/AWS).
- Experience in deployment of models in production.
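As an illustration of the classifier-building and evaluation work listed above, here is a minimal scikit-learn sketch on synthetic data; the dataset, pipeline steps, and metric choices are assumptions made only for the example.

```python
# Minimal sketch: a scikit-learn classification workflow (synthetic data stands in for a real dataset).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic tabular data: 1,000 rows, 20 features, binary target.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale features, then fit a regularized classifier; a real project would add feature
# selection, cross-validation, and hyperparameter search on top of this skeleton.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```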
Posted 2 weeks ago
1.0 years
0 Lacs
Greater Kolkata Area
Remote
Company Overview:
At Growth Loops Technology, we are at the forefront of AI innovation, leveraging cutting-edge machine learning and natural language processing (NLP) techniques to build transformative products. We are looking for an experienced and passionate LLM Engineer to join our team and help us develop and optimize state-of-the-art language models that push the boundaries of what's possible with AI.

Job Description:
As an LLM Engineer, you will be responsible for designing, building, and fine-tuning large-scale language models (LLMs) to solve complex real-world problems. You will work alongside data scientists, machine learning engineers, and product teams to ensure our models are not only accurate but also efficient, scalable, and capable of handling diverse use cases.

The ideal candidate will have a strong background in natural language processing, deep learning, and large-scale distributed systems. You should be passionate about advancing the field of AI and have hands-on experience with LLMs, such as GPT, BERT, or similar architectures.

Key Responsibilities:
Model Development: Design, develop, and fine-tune large language models (LLMs) for various applications, including text generation, translation, summarization, and question answering.
Research & Innovation: Stay up to date with the latest advancements in NLP and LLM architectures, and propose new approaches to improve model performance and efficiency.
Optimization: Implement optimization techniques to reduce computational resource requirements and improve model inference speed without sacrificing accuracy or performance.
Scalability: Develop strategies for training and deploying models at scale, ensuring robustness and reliability in production environments.
Collaboration: Work closely with cross-functional teams (data science, software engineering, product) to integrate LLM capabilities into our products and solutions.
Evaluation & Benchmarking: Establish and maintain rigorous testing, validation, and benchmarking procedures to assess model quality, performance, and generalization.
Model Explainability: Develop methods to improve the interpretability and explainability of language models, ensuring that outputs can be understood and trusted by end-users.

Qualifications:
Education: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience: Proven experience (1+ years) working with NLP, deep learning, and LLM architectures (e.g., GPT, BERT, T5, etc.).
Expertise in programming languages such as Python and experience with machine learning frameworks like TensorFlow, PyTorch, or JAX.
Solid understanding of transformer models, attention mechanisms, and the architecture of large-scale neural networks.
Experience with distributed computing, GPU acceleration, and cloud-based machine learning platforms (e.g., AWS, GCP, Azure).
Familiarity with model deployment tools and practices (e.g., TensorFlow Serving or Hugging Face).

Skills:
Strong problem-solving skills and ability to work on complex, ambiguous tasks.
Solid understanding of model evaluation metrics for NLP tasks.
Experience with large datasets and parallel computing for training and fine-tuning LLMs.
Familiarity with optimization techniques such as pruning, quantization, or knowledge distillation.
Excellent communication skills, both written and verbal, with the ability to explain complex technical concepts to non-technical stakeholders.
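To ground the Hugging Face experience mentioned above, here is a minimal sketch of loading a small pretrained causal language model and generating text with the transformers pipeline API. The model name and generation settings are small, illustrative choices, not anything required by the role.

```python
# Minimal sketch: text generation with a small pretrained causal LM
# (model name and generation settings are illustrative choices).
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Large language models can be optimized for production by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```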
Nice to Have:
Experience with reinforcement learning or few-shot learning in the context of language models.
Contributions to open-source projects or publications in top-tier AI/ML conferences (e.g., NeurIPS, ACL, ICML).

What We Offer:
Competitive salary and benefits package
Flexible work schedule with remote work options
Opportunity to work on cutting-edge AI technology with a passionate team
A collaborative and inclusive work culture focused on innovation
Posted 2 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Min Experience: 6 years
Location: Bangalore
JobType: full-time

Requirements

About the Role:
We are looking for an experienced Market Mix Modelling (MMM) Specialist with deep expertise in marketing analytics and advanced statistical modelling. This role is critical in supporting strategic business decisions through the development and interpretation of MMM models, with a strong focus on optimizing media spend and maximizing ROI. The ideal candidate will bring strong Python proficiency, domain knowledge in retail and FMCG/CPG sectors, and the ability to communicate complex insights to senior stakeholders.

Key Responsibilities:

🔹 Market Mix Modelling Development
Design, build, and maintain robust Market Mix Models to assess the impact of various marketing channels on sales and business KPIs.
Apply both short-term and long-term modelling techniques to evaluate marketing efficiency and brand equity effects.
Implement Adstock and other transformation methods to accurately reflect delayed media effects and diminishing returns.

🔹 Optimization & Strategy
Perform budget allocation and media optimization simulations using model outputs to identify the most efficient marketing mix.
Develop scenarios and ROI simulations to guide media planning and resource allocation.
Partner with media and strategy teams to deliver actionable recommendations for campaign optimization.

🔹 Analytics & Interpretation
Analyze model outputs and translate them into clear, concise business insights.
Use Bayesian methods and other advanced statistical techniques for model enhancement and credibility.
Ensure model diagnostics, validation, and updates are well documented and regularly performed.

🔹 Stakeholder Management
Present analytical findings to business and marketing stakeholders in an easy-to-understand manner.
Collaborate with cross-functional teams including marketing, finance, and sales to gather inputs and validate assumptions.
Provide consultative support on marketing performance and strategy development.

Required Skills & Qualifications:
Bachelor's or Master's degree in Statistics, Economics, Data Science, Marketing Analytics, or a related quantitative field.
6+ years of hands-on experience in building and implementing Market Mix Models, preferably in the retail or FMCG/CPG industry.
Proficiency in Python for data processing, modelling, and data visualization.
In-depth knowledge of statistical modelling techniques, marketing response functions, and causal inference.
Practical experience in Adstock, decay curves, and various transformation techniques used in MMM.
Exposure to Bayesian modelling frameworks and/or MCMC techniques is highly desirable.
Strong knowledge of campaign planning, digital and offline media metrics, and marketing attribution.

Preferred Experience:
Previous experience working with syndicated retail data (e.g., Nielsen, IRI).
Familiarity with marketing effectiveness platforms or MMM software/tools.
Experience with dashboarding tools for MMM output visualization is a plus (e.g., Tableau, Power BI).
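The Adstock carryover and diminishing-returns transformations mentioned above have a simple recursive form; here is a minimal NumPy sketch. The decay rate, saturation parameters, and spend series are illustrative assumptions, not values from any real model.

```python
# Minimal sketch: geometric Adstock (carryover) plus a Hill-style saturation curve
# applied to a weekly media spend series; decay and saturation parameters are assumptions.
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """a_t = spend_t + decay * a_{t-1}: media effect carries over with geometric decay."""
    carried = np.zeros_like(spend, dtype=float)
    for t in range(len(spend)):
        carried[t] = spend[t] + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

def hill_saturation(x: np.ndarray, half_sat: float, slope: float) -> np.ndarray:
    """Diminishing returns: response approaches 1 as spend grows past the half-saturation point."""
    return x**slope / (x**slope + half_sat**slope)

weekly_spend = np.array([0, 120, 80, 0, 0, 200, 150, 60], dtype=float)  # toy data
carryover = adstock(weekly_spend, decay=0.5)
response = hill_saturation(carryover, half_sat=100.0, slope=1.5)

print(np.round(carryover, 1))
print(np.round(response, 2))
```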
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company:
We are seeking a talented and driven Machine Learning Engineer with 2-5 years of experience to join our dynamic team in Chennai. The ideal candidate will have a strong foundation in machine learning principles and extensive hands-on experience in building, deploying, and managing ML models in production environments. A key focus of this role will be on MLOps practices and orchestration, ensuring our ML pipelines are robust, scalable, and automated.

Responsibilities:
ML Model Deployment & Management: Design, develop, and implement end-to-end MLOps pipelines for deploying, monitoring, and managing machine learning models in production.
Orchestration: Utilize orchestration tools (e.g., Apache Airflow, Kubeflow, AWS Step Functions, Azure Data Factory) to automate ML workflows, including data ingestion, feature engineering, model training, validation, and deployment.
CI/CD for ML: Implement Continuous Integration/Continuous Deployment (CI/CD) practices for ML code, models, and infrastructure, ensuring rapid and reliable releases.
Monitoring & Alerting: Establish comprehensive monitoring and alerting systems for deployed ML models to track performance, detect data drift and model drift, and ensure operational health.
Infrastructure as Code (IaC): Work with IaC tools (e.g., Terraform, CloudFormation) to manage and provision cloud resources required for ML workflows.
Containerization: Leverage containerization technologies (Docker, Kubernetes) for packaging and deploying ML models and their dependencies.
Collaboration: Collaborate closely with Data Scientists, Data Engineers, and Software Developers to translate research prototypes into production-ready ML solutions.
Performance Optimization: Optimize ML model inference and training performance, focusing on efficiency, scalability, and cost-effectiveness.
Troubleshooting & Debugging: Troubleshoot and debug issues across the entire ML lifecycle, from data pipelines to model serving.
Documentation: Create and maintain clear technical documentation for MLOps processes, pipelines, and infrastructure.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
2-5 years of professional experience as a Machine Learning Engineer, MLOps Engineer, or a similar role.

Required Skills:
Strong proficiency in Python and its ML ecosystem (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, NumPy).
Hands-on experience with at least one major cloud platform (AWS, Azure, GCP) and their relevant ML/MLOps services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
Proven experience with orchestration tools like Apache Airflow, Kubeflow, or similar.
Solid understanding and practical experience with MLOps principles and best practices.
Experience with containerization technologies (Docker, Kubernetes).
Familiarity with CI/CD pipelines and tools (e.g., GitLab CI/CD, Jenkins, Azure DevOps, AWS CodePipeline).
Knowledge of database systems (SQL and NoSQL).
Excellent problem-solving, analytical, and debugging skills.
Strong communication and collaboration abilities, with a capacity to work effectively in an Agile environment.
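As an illustration of the Airflow orchestration mentioned above, here is a minimal sketch of a daily ML training DAG (assuming Airflow 2.4 or later). The task names are invented and the task bodies are placeholders; a real pipeline would call actual training, validation, and deployment code or purpose-built operators.

```python
# Minimal sketch: an Airflow DAG wiring together a simple ML workflow
# (task names and bodies are placeholders; assumes Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_data():
    print("pull raw data from the warehouse")

def train_model():
    print("train the model and log it, e.g. to an experiment tracker")

def validate_model():
    print("check metrics against a promotion threshold")

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)

    ingest >> train >> validate
```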
Posted 2 weeks ago
6.0 years
0 Lacs
India
Remote
Company Description
Loyyal is a loyalty and payments innovation company that offers an Enterprise SaaS Suite powered by patented blockchain technology. We focus on disrupting the loyalty industry by delivering efficiency, security, and scalability at a low cost. Our platform is designed to reduce operational complexity and boost revenue for loyalty programs, driving customer engagement and loyalty in a competitive marketplace.

About the Role
We're looking for a seasoned AI Engineer who thrives on solving complex challenges and building intelligent systems that scale. This role is ideal for someone passionate about deep learning, GenAI, and production-grade AI systems. You'll work closely with our data, engineering, and product teams to design, build, and deploy advanced AI models across a variety of real-world use cases.

As a Senior AI Engineer, you'll play a key role in architecting, developing, and optimizing our AI systems—from fine-tuning large language models to building robust MLOps pipelines. This is an opportunity to be part of a high-impact team shaping next-generation AI experiences.

Key Responsibilities
Design, build, and deploy scalable AI models, with a focus on NLP, LLMs, and Generative AI use cases
Fine-tune open-source or proprietary LLMs (e.g., LLaMA, Mistral, GPT-J) for domain-specific tasks
Collaborate with product and engineering teams to integrate AI models into user-facing applications
Develop MLOps pipelines using tools like MLflow, Kubeflow, or Vertex AI for model versioning, monitoring, and deployment
Optimize inference performance, memory usage, and cost efficiency in production environments
Apply prompt engineering, retrieval-augmented generation (RAG), and few-shot techniques where appropriate
Conduct experiments, A/B testing, and evaluations to continuously improve model accuracy and reliability
Stay up to date with the latest developments in AI/ML research, especially in LLM and GenAI domains
Write clean, modular, and well-documented code and contribute to technical design reviews
Mentor junior team members and collaborate in agile sprint cycles

Requirements
6+ years of experience in machine learning or AI engineering
2+ years working with LLMs, Transformers, or Generative AI models
Proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow, Hugging Face Transformers)
Experience deploying AI models in production (cloud-native or on-prem)
Strong grasp of model fine-tuning, quantization, and serving at scale
Familiarity with MLOps, including experiment tracking, CI/CD, and containerization (Docker, Kubernetes)
Experience integrating AI with REST APIs, cloud services (AWS/GCP), and vector databases (e.g., Pinecone, Weaviate, FAISS)
Understanding of ethical AI, data privacy, and fairness in model outcomes
Strong debugging, problem-solving, and communication skills
Experience working in agile teams with code review and version control (Git)

Nice to Have
Hands-on experience with Retrieval-Augmented Generation (RAG) pipelines
Familiarity with OpenAI, Anthropic, or Cohere APIs and embedding models
Knowledge of LangChain, LlamaIndex, or Haystack for AI application orchestration
Experience with streaming data and real-time inference systems
Understanding of multi-modal models (e.g., combining text, image, audio inputs)
Prior experience in a startup, product-focused, or fast-paced R&D environment

What We Offer
Competitive compensation (base + performance-based bonuses or token equity)
Fully remote and flexible work culture
A front-row seat to build next-gen AI experiences in a high-growth environment
Opportunity to shape AI strategy, tools, and infrastructure from the ground up
Access to high-end GPU infrastructure and compute resources

How to Apply
Send your resume and a short cover letter highlighting:
Your experience with LLMs, GenAI, and deployed AI systems
Links to AI/ML projects, GitHub repos, or research (if public)
Why you're interested in this role and how you envision contributing.
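The role above calls for RAG pipelines backed by vector databases such as FAISS; as a minimal illustration, here is a sketch of the retrieval step with a FAISS index. Random vectors stand in for real document embeddings, and the dimensions and k value are arbitrary choices for the example.

```python
# Minimal sketch: nearest-neighbour retrieval with FAISS, the building block of a RAG pipeline
# (random vectors stand in for real embeddings; dimension and k are arbitrary).
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)

# Pretend these are embeddings of 1,000 document chunks from an embedding model.
doc_vectors = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)   # exact L2 search; IVF/HNSW indexes scale further
index.add(doc_vectors)

# Embed the user query the same way, then fetch the closest chunks to feed the LLM prompt.
query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)

print(ids[0])        # indices of the retrieved chunks
print(distances[0])  # their L2 distances
```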
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Requirements

Role/Job Title: Senior Data Scientist
Function/Department: Data & Analytics

Job Purpose
In this specialized role, you will leverage your expertise in machine learning and statistics to derive valuable insights from data. Your role will include developing predictive models, interpreting data, and working closely with our ML engineers to ensure the effective deployment and functioning of these models.

Key / Primary Responsibilities
Lead cross-functional teams in the design, development, and deployment of Generative AI solutions, with a strong focus on Large Language Models (LLMs).
Architect, train, and fine-tune state-of-the-art LLMs (e.g., GPT, BERT, T5) for various business applications, ensuring alignment with project goals.
Deploy and scale LLM-based solutions, integrating them seamlessly into production environments and optimizing for performance and efficiency.
Develop and maintain machine learning workflows and pipelines for training, evaluating, and deploying Generative AI models, using Python or R, and leveraging libraries like Hugging Face Transformers, TensorFlow, and PyTorch.
Collaborate with product, data, and engineering teams to define and refine use cases for LLM applications such as conversational agents, content generation, and semantic search.
Design and implement fine-tuning strategies to adapt pre-trained models to domain-specific tasks, ensuring high relevance and accuracy.
Evaluate and optimize LLM performance, including handling challenges such as prompt engineering, inference time, and model bias.
Manage and process large, unstructured datasets using SQL and NoSQL databases, ensuring smooth integration with AI models.
Build and deploy AI-driven APIs and services, providing scalable access to LLM-based solutions.
Use data visualization tools (e.g., Matplotlib, Seaborn, Tableau) to communicate AI model performance, insights, and results to non-technical stakeholders.

Secondary Responsibilities
Contribute to data analysis projects, with a strong emphasis on text analytics, natural language understanding, and Generative AI applications.
Build, validate, and deploy predictive models specifically tailored to text data, including models for text generation, classification, and entity recognition.
Handle large, unstructured text datasets, performing essential preprocessing and data cleaning steps, such as tokenization, lemmatization, and noise removal, for machine learning and NLP tasks.
Work with cutting-edge text data processing techniques, ensuring high-quality input for training and fine-tuning Large Language Models (LLMs).
Collaborate with cross-functional teams to develop and deploy scalable AI-powered solutions that process and analyze textual data at scale.

Key Success Metrics
Ensure timely deliverables.
Spot training infrastructure fixes.
Lead technical aspects of the projects.
Error-free deliverables.

Education Qualification
Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)

Experience: 5-10 years of relevant experience
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
TCS is hiring for Real World Evidence Data Scientist

Job Location – Mumbai/Pune/Bangalore/Delhi/NCR
Educational Qualification(s) Required – Any Life Science Graduate
Interested candidates can share their CV at dolly.suryavanshi@tcs.com

Key Responsibilities:
Analyze structured and unstructured healthcare data including EHRs, claims, registries, and patient-reported outcomes.
Develop predictive models, causal inference frameworks, and machine learning algorithms to support RWE studies.
Perform data transformation using OMOP, FHIR, and other industry standards specific to RWE.
Collaborate with cross-functional teams including statisticians, epidemiologists, clinicians, and regulatory experts.
Design and implement data pipelines and workflows for efficient data processing and analysis.
Create dashboards and visualizations to communicate findings to stakeholders.
Ensure data quality, integrity, and compliance with relevant regulations (e.g., GDPR, HIPAA).
Contribute to publications, white papers, and conference presentations.
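As one illustration of the causal inference work mentioned above, here is a minimal sketch of inverse probability of treatment weighting (IPTW) on synthetic data. All variables, coefficients, and the effect size are invented for the example and do not reflect any real study.

```python
# Minimal sketch: estimating a treatment effect with inverse probability of treatment weighting
# (IPTW) on synthetic data; all variables and effect sizes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000

# Confounders (e.g., age, comorbidity score) influence both treatment assignment and outcome.
age = rng.normal(60, 10, n)
comorbidity = rng.normal(2, 1, n)
p_treat = 1 / (1 + np.exp(-(-4 + 0.05 * age + 0.3 * comorbidity)))
treated = rng.binomial(1, p_treat)
outcome = 5 + 0.02 * age + 0.5 * comorbidity + 1.0 * treated + rng.normal(0, 1, n)

# Fit a propensity model, then weight each subject by 1 / P(received their own treatment).
X = np.column_stack([age, comorbidity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# Weighted difference in mean outcomes approximates the average treatment effect (true value: 1.0).
effect = (np.average(outcome[treated == 1], weights=weights[treated == 1])
          - np.average(outcome[treated == 0], weights=weights[treated == 0]))
print(round(effect, 2))
```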
Posted 2 weeks ago
8.0 years
0 Lacs
India
On-site
About MeltPlan
Construction is the biggest industry in the world, and its productivity has been declining for the last three decades. MeltPlan is at the forefront of revolutionizing the construction industry through an innovative, construction-aware AI platform. Our mission is to eliminate the inefficiencies of manual processes and constant rework, offering a focused, AI-powered solution designed for the complexities of the built environment. We envision a "Melting Pot for the Built Environment": a singular platform that connects every stakeholder, from owners and architects to general contractors and specialty trades, fostering better plans and unlocking unparalleled value across the project lifecycle. Learn more about MeltPlan by reading our founders' manifesto here.

The Opportunity
We are seeking a highly skilled and passionate Founding Backend Engineer to join our early team. As one of the first five engineers, you will play a pivotal role in shaping our core backend infrastructure, architecture, and engineering culture. This is a unique opportunity to contribute significantly to a product that will redefine the construction industry. You will work closely with the founders and future hires to build scalable, robust, and intelligent systems that power our AI solutions.

People
We take pride in the company we keep and want to maintain it that way. One of our co-founders built Innovaccer, the fastest-growing SaaS company from India, taking it from $0 to $200 Mn ARR and a $3 Bn+ valuation in 8 years. Our other co-founder managed $1 Bn of construction projects, ranging from the most iconic office campuses in Silicon Valley to the most sophisticated hospitals. Our CTO is a reputed AI researcher with more than 40 million downloads of his open-source AI models, which outcompete the biggest AI labs on some benchmarks. You will join a team of this caliber, and we hope you raise our bar. More about our people here.

What You'll Do
Design, develop, and maintain highly scalable, reliable, and secure backend services for MeltPlan's AI-powered platform.
Architect and implement core infrastructure, APIs, and data models to support products like Melt Code, Melt Takeoff, and Melt Coordinate.
Work on cutting-edge technology to build sub-second, real-time AI inference systems in production.
Ensure the performance, security, and stability of our backend systems, making crucial technology stack decisions.
Participate actively in technical discussions, code reviews, and architectural decisions.
Help define and cultivate MeltPlan's engineering culture and best practices.
Mentor and potentially lead future backend hires as the team grows.

What We're Looking For
Strong System Design & Scalability Expertise: Proven ability to design and build distributed, scalable, and resilient backend systems. Experience with cloud platforms (AWS, GCP, Azure) and microservices architecture is highly valued.
Exceptional Problem-Solving & Adaptability: A proactive, resourceful, and adaptable mindset, comfortable navigating ambiguity and rapidly evolving requirements in a startup environment. Ability to take ownership and drive solutions from conception to completion.
Deep Technical Proficiency: Strong command of at least one modern backend programming language (e.g., Python, Go, Java, Node.js) and relevant frameworks. Extensive experience with database systems (SQL and/or NoSQL), including schema design, optimization, and data modeling. Expertise in designing, building, and maintaining robust APIs (REST, GraphQL).
Solid understanding of clean code principles, testing methodologies, and version control (Git). Ownership & Communication: A high degree of ownership over your work, with excellent verbal and written communication skills. Ability to articulate complex technical concepts clearly to both technical and non-technical stakeholders, and a strong desire to collaborate effectively within a small, dynamic team. Familiarity with AI/ML integration patterns and data pipelines is a plus.
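To make the "sub-second, real-time AI inference" responsibility concrete, here is a hedged sketch of a minimal FastAPI inference endpoint; the route name, request schema, and dummy model are illustrative assumptions, not MeltPlan's actual stack:

```python
# Minimal sketch of a low-latency inference endpoint with FastAPI.
# The /takeoff/estimate route, request fields, and the dummy model
# below are hypothetical placeholders for illustration only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EstimateRequest(BaseModel):
    drawing_id: str
    wall_area_sqft: float

class EstimateResponse(BaseModel):
    drawing_id: str
    estimated_hours: float

def dummy_model(wall_area_sqft: float) -> float:
    # Stand-in for a real model loaded once at startup and kept in memory,
    # the usual pattern for keeping per-request latency low.
    return 0.05 * wall_area_sqft

@app.post("/takeoff/estimate", response_model=EstimateResponse)
def estimate(req: EstimateRequest) -> EstimateResponse:
    return EstimateResponse(
        drawing_id=req.drawing_id,
        estimated_hours=dummy_model(req.wall_area_sqft),
    )

# Run locally (assuming uvicorn is installed): uvicorn app:app --reload
```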
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us:
Traya is an Indian direct-to-consumer hair care brand whose platform provides holistic treatment for consumers dealing with hair loss. The Company provides personalized consultations that help determine the root cause of hair fall among individuals, along with a range of hair care products curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis. Our unique platform diagnoses the patient's hair and health history to identify the root cause behind hair fall and delivers customized hair kits right to their doorstep. We have a strong adherence system in place via medically trained hair coaches and proprietary tech, through which we guide customers across their hair growth journey and help them stay on track. Traya is founded by Saloni Anand, a techie-turned-marketeer, and Altaf Saiyed, a Stanford Business School alumnus.
Our Vision:
Traya was created with a global vision to create awareness around hair loss and de-stigmatise it, while empathizing with customers about its emotional and psychological impact. Most importantly, we combine three different sciences (Ayurveda, Allopathy, and Nutrition) to create the perfect holistic solution for hair loss patients.
Responsibilities:
Data Analysis and Exploration: Conduct in-depth analysis of large and complex datasets to identify trends, patterns, and anomalies. Perform exploratory data analysis (EDA) to understand data distributions, relationships, and quality.
Machine Learning and Statistical Modeling: Develop and implement machine learning models (e.g., regression, classification, clustering, time series analysis) to solve business problems. Evaluate and optimize model performance using appropriate metrics and techniques. Apply statistical methods to design and analyze experiments and A/B tests. Implement and maintain models in production environments.
Data Engineering and Infrastructure: Collaborate with data engineers to ensure data quality and accessibility. Contribute to the development and maintenance of data pipelines and infrastructure. Work with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Spark, Hadoop).
Communication and Collaboration: Effectively communicate technical findings and recommendations to both technical and non-technical audiences. Collaborate with product managers, engineers, and other stakeholders to define and prioritize projects. Document code, models, and processes for reproducibility and knowledge sharing. Present findings to leadership.
Research and Development: Stay up-to-date with the latest advancements in data science and machine learning. Explore and evaluate new tools and techniques to improve data science capabilities. Contribute to internal research projects.
Qualifications:
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
3-5 years of experience as a Data Scientist or in a similar role.
Experience leveraging SageMaker's features, including SageMaker Studio, Autopilot, Experiments, Pipelines, and Inference, to optimize model development and deployment workflows.
Proficiency in Python and relevant libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
Solid understanding of statistical concepts and machine learning algorithms.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Experience deploying models to production.
Experience with version control (Git).
Preferred Qualifications:
Experience with specific industry domains (e.g., e-commerce, finance, healthcare).
Experience with natural language processing (NLP) or computer vision.
Experience with building recommendation engines.
Experience with time series forecasting.
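As a small, hedged illustration of the modeling and evaluation work this posting describes (a sketch only; the features and target are synthetic stand-ins, not Traya data), here is a minimal scikit-learn classification pipeline with a held-out evaluation:

```python
# Minimal sketch of a train/evaluate workflow with scikit-learn.
# The synthetic dataset stands in for real customer features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Evaluate on held-out data with a threshold-free metric.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test ROC AUC: {auc:.3f}")
```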
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About The Job
Be the expert customers turn to when they need to build strategic, scalable systems. Red Hat Services is looking for a well-rounded Architect to join our team in Mumbai covering Asia Pacific. In this role, you will design and implement modern platforms, onboard and build cloud-native applications, and lead architecture engagements using the latest open source technologies. You'll be part of a team of consultants who are leaders in open hybrid cloud, platform modernisation, automation, and emerging practices, including foundational AI integration. Working in agile teams alongside our customers, you'll build, test, and iterate on innovative prototypes that drive real business outcomes. This role is ideal for architects who can work across application, infrastructure, and modern AI-enabling platforms like Red Hat OpenShift AI. If you're passionate about open source, building solutions that scale, and shaping the future of how enterprises innovate, this is your opportunity.
What Will You Do
Design and implement modern platform architectures with a strong understanding of Red Hat OpenShift, container orchestration, and automation at scale.
Manage "Day-2" operations of Kubernetes container platforms by collaborating with infrastructure teams to define practices for platform deployment, platform hardening, platform observability, monitoring and alerting, capacity management, scalability, resiliency, and security operations.
Lead the discovery, architecture, and delivery of modern platforms and cloud-native applications, using technologies such as containers, APIs, microservices, and DevSecOps patterns.
Collaborate with customer teams to co-create AI-ready platforms, enabling future use cases with foundational knowledge of AI/ML workloads.
Remain hands-on with development and implementation, especially in prototyping, MVP creation, and agile iterative delivery.
Present strategic roadmaps and architectural visions to customer stakeholders, from engineers to executives.
Support technical presales efforts, workshops, and proofs of concept, bringing in business context and value-first thinking.
Create reusable reference architectures, best practices, and delivery models, and mentor others in applying them.
Contribute to the development of standard consulting offerings, frameworks, and capability playbooks.
What Will You Bring
Strong experience with Kubernetes, Docker, and Red Hat OpenShift or equivalent platforms.
In-depth expertise in managing multiple Kubernetes clusters across multi-cloud environments.
Proven expertise in operationalisation of Kubernetes container platforms through the adoption of Service Mesh, GitOps principles, and Serverless frameworks.
Experience migrating from XKS to OpenShift.
Proven leadership of modern software and platform transformation projects.
Hands-on coding experience in multiple languages (e.g., Java, Python, Go).
Experience with infrastructure as code, automation tools, and CI/CD pipelines.
Practical understanding of microservices, API design, and DevOps practices.
Applied experience with agile, scrum, and cross-functional team collaboration.
Ability to advise customers on platform and application modernisation, with awareness of how platforms support emerging AI use cases.
Excellent communication and facilitation skills with both technical and business audiences.
Willingness to travel up to 40% of the time.
Nice To Have
Experience with Red Hat OpenShift AI, Open Data Hub, or similar MLOps platforms.
Foundational understanding of AI/ML, including containerized AI workloads, model deployment, and open source AI frameworks.
Familiarity with AI architectures (e.g., RAG, model inference, GPU-aware scheduling).
Engagement in open source communities or contributor background.
About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
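As a small, hedged illustration of the Day-2 observability work mentioned in this posting (a sketch only, assuming a reachable cluster and a local kubeconfig; not a Red Hat deliverable), here is a script using the official Kubernetes Python client to flag pods that are not healthy:

```python
# Sketch: list pods that are not Running or Succeeded across all namespaces.
# Assumes the `kubernetes` Python client is installed and ~/.kube/config
# points at a cluster you are allowed to read.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()   # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            problems.append(
                (pod.metadata.namespace, pod.metadata.name, pod.status.phase)
            )
    return problems

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```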
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
SUMMARY
Under limited supervision, designs, develops and maintains test procedures, tester hardware and software for electronic circuit board production.
ESSENTIAL DUTIES AND RESPONSIBILITIES include the following. Other duties may be assigned.
LEADERSHIP AND MANAGEMENT RESPONSIBILITIES
Recruitment and Retention:
· Recruit and interview Process Technicians.
· Communicate criteria to recruiters for Process Technician position candidates.
· Coach technicians in the interviewing/hiring process.
· Monitor team member turnover; identify key factors that can be improved; make improvements.
Employee and Team Development:
· Identify individual and team strengths and development needs on an ongoing basis.
· Create and/or validate training curriculum in area of responsibility.
· Coach and mentor Process Technicians to deliver excellence to every internal and external customer.
Performance Management:
· Establish clear, measurable goals and objectives by which to determine individual and team results (i.e. operational metrics, results against project timelines, training documentation, attendance records, knowledge of operational roles and responsibilities, personal development goals).
· Solicit ongoing feedback from the Assistant Test Engineering Manager, Workcell Manager (WCM), Business Unit Manager (BUM), peers and team members on each team member's contribution to the Workcell team. Provide ongoing coaching and counseling to team members based on feedback.
· Express pride in staff and encourage them to feel good about their accomplishments.
· Perform team member evaluations professionally and on time.
· Drive individuals and the team to continuously improve in key operational metrics and the achievement of organizational goals.
· Coordinate activities of large teams and keep them focused in times of crisis.
· Ensure recognition and rewards are managed fairly and consistently in area of responsibility.
Communication:
· Provide a communication forum for the exchange of ideas and information within the department.
· Organize verbal and written ideas clearly and use an appropriate business style.
· Ask questions; encourage input from team members.
· Assess the communication style of individual team members and adapt own communication style accordingly.
TECHNICAL MANAGEMENT RESPONSIBILITIES
· Review circuit board designs for testability requirements.
· Support manufacturing with failure analysis, tester debugging, reduction of intermittent failures and downtime of test equipment.
· Prepare recommendations for testing and documentation of procedures to be used from the product design phase through to initial production.
· Generate reports and analysis of test data; prepare documentation and recommendations.
· Review test equipment designs, data and RMA issues with customers regularly.
· Design, and direct engineering and technical personnel in the fabrication of, testing and test control apparatus and equipment.
· Direct and coordinate engineering activities concerned with development, procurement, installation, and calibration of instruments, equipment, and control devices required to test, record, and reduce test data.
· Determine conditions under which tests are to be conducted and the sequences and phases of test operations.
· Direct and exercise control over operational, functional, and performance phases of tests.
· Perform moderately complex assignments of the engineering test function for standard and/or custom devices.
· Analyze and interpret test data and prepare technical reports for use by test engineering and management personnel.
· Develop or use computer software and hardware to conduct tests on machinery and equipment.
· Perform semi-routine technique development and maintenance, subject to established Jabil standards, including ISO and QS development standards.
· Provide training in new procedures to production testing staff.
· Adhere to all safety and health rules and regulations associated with this position and as directed by supervisor.
· Comply with and follow all procedures within the company security policy.
MINIMUM REQUIREMENTS
Bachelor of Science in Electronics or Electrical Engineering from a four-year college or university, and three to five years of experience.
LANGUAGE SKILLS
Ability to read, analyze, and interpret general business periodicals, professional journals, technical procedures, or governmental regulations. Ability to write reports, business correspondence, and procedure manuals. Ability to effectively present information and respond to questions from groups of managers, clients, customers, and the general public.
MATHEMATICAL SKILLS
Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry. Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations.
REASONING ABILITY
Ability to define problems, collect data, establish facts, and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables.
PHYSICAL DEMANDS
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. The employee is frequently required to walk, and to lift and carry PCs and test equipment weighing up to 50 lbs. Specific vision abilities required by this job include close vision and use of computer monitor screens a great deal of the time.
WORK ENVIRONMENT
The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. The individual's primary workstation is located in the office area, with some time spent each day on the manufacturing floor. The noise level in this environment ranges from low to moderate.
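The duties above include generating reports and analysis of test data, and the role calls for working with probability and statistical inference. As a small, hedged illustration only (the pass/fail counts below are made up, not production data), here is a sketch that summarizes a test yield with a normal-approximation confidence interval:

```python
# Sketch: summarize board-test yield and a 95% confidence interval for it.
# The counts below are illustrative placeholders, not real production data.
import math

boards_tested = 500
boards_passed = 462

yield_rate = boards_passed / boards_tested
# Normal-approximation (Wald) interval for a proportion; adequate for a
# quick report when counts are reasonably large.
std_err = math.sqrt(yield_rate * (1 - yield_rate) / boards_tested)
margin = 1.96 * std_err

print(f"First-pass yield: {yield_rate:.1%}")
print(f"95% CI: [{yield_rate - margin:.1%}, {yield_rate + margin:.1%}]")
```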
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
SUMMARY
Under limited supervision, designs, develops and maintains test procedures, tester hardware and software for electronic circuit board production.
ESSENTIAL DUTIES AND RESPONSIBILITIES include the following. Other duties may be assigned.
· Review circuit board designs for testability requirements.
· Support manufacturing with failure analysis, tester debugging, reduction of intermittent failures and downtime of test equipment.
· Prepare recommendations for testing and documentation of procedures to be used from the product design phase through to initial production.
· Generate reports and analysis of test data; prepare documentation and recommendations.
· Review test equipment designs, data and RMA issues with customers regularly.
· Design, and direct engineering and technical personnel in the fabrication of, testing and test control apparatus and equipment.
· Direct and coordinate engineering activities concerned with development, procurement, installation, and calibration of instruments, equipment, and control devices required to test, record, and reduce test data.
· Determine conditions under which tests are to be conducted and the sequences and phases of test operations.
· Direct and exercise control over operational, functional, and performance phases of tests.
· Perform moderately complex assignments of the engineering test function for standard and/or custom devices.
· Analyze and interpret test data and prepare technical reports for use by test engineering and management personnel.
· Develop or use computer software and hardware to conduct tests on machinery and equipment.
· Perform semi-routine technique development and maintenance, subject to established Jabil standards, including ISO and QS development standards.
· May provide training in new procedures to production testing staff.
· Adhere to all safety and health rules and regulations associated with this position and as directed by supervisor.
· Comply with and follow all procedures within the company security policy.
MINIMUM REQUIREMENTS
Bachelor of Science in Electronics or Electrical Engineering from a four-year college or university preferred; or related experience and/or training; or an equivalent combination of education and experience.
LANGUAGE SKILLS
Ability to read, analyze, and interpret general business periodicals, professional journals, technical procedures, or governmental regulations. Ability to write reports, business correspondence, and procedure manuals. Ability to effectively present information and respond to questions from groups of managers, clients, customers, and the general public.
MATHEMATICAL SKILLS
Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry. Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations.
REASONING ABILITY
Ability to define problems, collect data, establish facts, and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables.
PHYSICAL DEMANDS
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. The employee is frequently required to walk, and to lift and carry PCs and test equipment weighing up to 50 lbs. Specific vision abilities required by this job include close vision and use of computer monitor screens a great deal of the time.
WORK ENVIRONMENT
The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. The individual's primary workstation is located in the office area, with some time spent each day on the manufacturing floor. The noise level in this environment ranges from low to moderate.
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer
Experience: 4+ Years
Location: Remote
Employment Type: Full-time
Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer with 4+ years of experience to design, develop, and deploy scalable AI and Machine Learning solutions. The ideal candidate will have strong expertise in Python, traditional ML algorithms, deep learning, and Generative AI frameworks such as LLMs, diffusion models, and transformers.
Key Responsibilities:
Design, implement, and optimize machine learning models for classification, regression, clustering, and recommendation systems.
Work with Generative AI models such as GPT, BERT, Stable Diffusion, or custom transformers for text/image/audio applications.
Build and deploy end-to-end ML pipelines, from data ingestion to model deployment and monitoring.
Collaborate with cross-functional teams including data engineers, product managers, and software developers.
Conduct research and experimentation with new ML algorithms and GenAI architectures.
Optimize models for performance, scalability, and inference efficiency.
Ensure data integrity, privacy, and compliance in AI workflows.
Document models, experiments, and ML infrastructure.
Required Skills:
4+ years of experience in AI/ML Engineering or similar roles.
Strong programming experience in Python and libraries like scikit-learn, Pandas, NumPy.
Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX.
Experience building or fine-tuning LLMs (GPT, T5, Llama, etc.) or image generation models like Stable Diffusion.
Familiarity with prompt engineering and retrieval-augmented generation (RAG).
Experience with ML lifecycle tools like MLflow, Weights & Biases, or SageMaker.
Exposure to APIs, containerization (Docker), and cloud platforms (AWS/GCP/Azure) for ML deployment.
Strong understanding of statistics, optimization, and data preprocessing techniques.
Nice to Have:
Knowledge of LangChain, LlamaIndex, or Haystack for GenAI workflows.
Experience with vector databases like Pinecone, FAISS, or Weaviate.
Contributions to open-source AI/ML projects or research publications.
Experience with MLOps practices and CI/CD for ML models.
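Since this posting calls out retrieval-augmented generation (RAG), here is a minimal, hedged sketch of the retrieval step, using TF-IDF from scikit-learn as a stand-in for dense embeddings (the documents and prompt template are made up for illustration; a production system would typically use an embedding model plus a vector database such as those named above):

```python
# Sketch of the retrieval half of a RAG pipeline: rank a small corpus
# against a user question and build a grounded prompt for an LLM.
# TF-IDF stands in here for a dense embedding model and a vector DB.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5-7 business days of approval.",
    "Shipping to metro cities usually takes 2-3 days.",
    "Subscriptions can be paused from the account settings page.",
]

question = "How long does a refund take?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([question])

# Rank documents by similarity and keep the top 2 as context.
scores = cosine_similarity(query_vec, doc_matrix)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

prompt = (
    "Answer the question using only the context below.\n"
    "Context:\n- " + "\n- ".join(top_docs) + f"\nQuestion: {question}"
)
print(prompt)  # this prompt would then be sent to the LLM of your choice
```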
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Kochi, Kerala, India
On-site
🌐 We're Hiring: Advanced AI Researchers & Engineers | Join JTW's Innovation Team
At JohnTuringWatson Software Solutions Pvt Ltd (JTW), we're building the future of AI, and we're looking for exceptional talent to help lead it. Our AI Research Lab is actively working on cutting-edge solutions in Agentic AI, Edge Intelligence, and Quantum Computing, and we're expanding our team. If you're passionate about solving real-world problems using AI, Machine Learning, and next-gen computation models, we'd love to connect with you.
Experience Level: 2-5 years of experience.
🔍 We're Looking for Talent With Expertise in One or More of the Following Areas:
Agentic AI – Designing autonomous, reasoning-driven systems
Edge AI – Building low-latency, on-device inference models
Quantum Computing – Practical experience with IBM Quantum or Azure Quantum
Quantum Development Frameworks – Proficiency in Qiskit, Q#, or Cirq
Python (required) – For ML/AI pipelines and quantum programming
Deep Learning – Using TensorFlow, PyTorch, or Hugging Face frameworks
NLP and LLMs – Experience with transformer architectures, prompt engineering, or fine-tuning models
ML Ops – Knowledge of model deployment, performance tuning, and monitoring
Strong foundations in Mathematics, Probability, Linear Algebra, and Statistics
🧠 Ideal Candidate Profile:
Solid understanding of AI/ML concepts, including supervised, unsupervised, and reinforcement learning
Ability to research, prototype, and scale AI models into production
Comfortable working in a cross-functional R&D environment
Prior experience contributing to AI publications, open-source projects, or patents (a plus)
🚀 Why Join JTW?
Work on industry-defining projects with an elite research team
Exposure to emerging domains like Agentic AI, Edge-native models, and Quantum-AI fusion
A collaborative, innovation-first culture with global delivery reach
Access to dedicated infrastructure, including AI model labs and quantum simulators
📩 Ready to make an impact?
Send your resume to jobs@johnturingwatson.ai
Explore more at www.johnturingwatson.ai
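For the quantum development frameworks this posting mentions, here is a minimal, hedged Qiskit sketch (assuming Qiskit is installed; it builds a two-qubit Bell state and inspects its statevector, a standard first exercise rather than anything JTW-specific):

```python
# Sketch: prepare a Bell state (|00> + |11>)/sqrt(2) with Qiskit
# and inspect the resulting statevector.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(qc.draw())
print(state)   # amplitudes ~0.707 on |00> and |11>, 0 elsewhere
```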
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking an exceptionally skilled Technical Lead who possesses deep expertise in modern front-end technologies and a passion for exploring and implementing AI-driven features that elevate our products and user experiences.
Key Responsibilities:
Technical Leadership & Architecture
Define and drive the technical vision, strategy, and roadmap for our front-end architecture, ensuring scalability, performance, maintainability, and security.
Lead the design and implementation of complex, highly interactive, and responsive user interfaces using cutting-edge front-end frameworks (e.g., React, Angular, Vue.js) and associated technologies.
Establish and enforce best practices for front-end development, including code quality, testing, accessibility, and performance optimization.
Conduct in-depth code reviews, provide constructive feedback, and mentor team members to foster technical excellence and growth.
Collaborate closely with product managers, UX/UI designers, back-end engineers, and AI/ML engineers to translate product requirements into robust technical solutions.
Evaluate and recommend new front-end technologies, tools, and libraries to continuously improve our development stack.
AI Integration & Innovation
Pioneer AI-driven front-end experiences: Proactively identify and explore opportunities to embed AI/ML capabilities directly into the front-end to enhance user engagement, personalization, predictive interactions, intelligent search, content recommendation, and dynamic UI generation.
Orchestrate AI model consumption: Design and implement efficient and performant ways for front-end applications to consume and interact with AI/ML models (e.g., via APIs, WebSockets, or client-side inference where applicable).
Data-driven UI/UX: Leverage AI for A/B testing, user behavior analysis, and predictive analytics to inform UI/UX design decisions and drive continuous improvement.
Ethical AI implementation: Ensure responsible and ethical implementation of AI features, considering data privacy, fairness, and transparency in user-facing applications.
Stay abreast of the latest advancements in AI, particularly as they apply to front-end development (e.g., Generative AI for UI, AI-powered design tools, client-side ML frameworks).
Team Leadership & Mentorship
Lead, inspire, and empower a team of front-end engineers, fostering a culture of collaboration, innovation, and continuous learning.
Provide technical guidance, coaching, and mentorship to team members, helping them develop their skills and careers.
Posted 2 weeks ago