Jobs
Interviews

586 Drift Jobs

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

8.0 years

4 - 9 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you advance your skills and career. Here, you'll be supported in progressing, whatever your ambitions.

About the position: Senior Engineer – Agentic AI

Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks.

Key Responsibilities:
• Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution.
• Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents.
• Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools.
• Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency.
• Own backend services that scale agent workloads, optimize GPU/accelerator utilization, and enforce cost governance.
• Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle.
• Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities.
• Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments.

Required:
• 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems.
• Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex).
• Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools.
• Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines.
• Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection.
• Strong background in cloud-native microservices, security, and observability.

Requisition ID: 610421

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
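To make the "tool execution" responsibility above concrete, here is a minimal, hedged sketch of an agent tool-use loop: a stubbed planner stands in for the LLM and maps a goal to a plan of registered tool calls. All names (`TOOLS`, `stub_planner`, `run_agent`) are illustrative, not part of any Boston Scientific system; a real implementation would call an LLM API via a framework such as LangGraph or AutoGen.

```python
# Registry of callable tools the agent may invoke; real agents would expose
# internal services and third-party APIs here.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def stub_planner(goal):
    # Stand-in for an LLM planner: returns a list of (tool_name, args) steps.
    if goal == "sum 2 and 3":
        return [("add", (2, 3))]
    return [("upper", (goal,))]

def run_agent(goal):
    # Execute the plan step by step, dispatching each step to its tool.
    results = []
    for tool_name, args in stub_planner(goal):
        tool = TOOLS[tool_name]
        results.append(tool(*args))
    return results
```

For example, `run_agent("sum 2 and 3")` dispatches to the `add` tool and returns `[5]`; the evaluation pipelines mentioned above would score such traces for grounding and latency.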

Posted 11 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities:
• Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
• Implement standardized, reusable CI/CD templates for application, ML, and data services.
• Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
• Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
• Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
• Lead internal infrastructure and security audits and maintain compliance records where required.
• Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
• Collaborate with developers to integrate structured logging, tracing, and health checks into services.
• Enable root cause detection workflows and performance monitoring for infrastructure and deployments.
• Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness.
• Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
• Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
• Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
• Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
• Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have:
• Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement
• Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks
• Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting
• Familiarity with SQL for validating database changes, debugging issues, and running schema checks
• Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling
• Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads
• Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium
• Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups
• Strong exposure to API testing, load/performance testing, and reliability validation
• Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog)
• Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry
• Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices
• Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates
• Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
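Drift detection, which this role lists among its compliance policies, is often implemented with the Population Stability Index (PSI). As a hedged, stdlib-only sketch (the function name and bin count are illustrative; production teams would typically use a monitoring platform rather than hand-rolled code):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and a live sample.
    Common rule of thumb: PSI < 0.1 is stable, PSI > 0.25 signals major drift."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges from the baseline; open-ended outer bins.
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions score 0; a population shifted well outside the baseline bins scores far above the 0.25 alert threshold, which is the condition a drift-detection gate would flag.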

Posted 12 hours ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Company: DataRepo Private Limited
Location: Remote (Work from Home)
Working Hours: 6:00 PM to 2:00 AM IST (strict shift adherence required)
Salary: ₹35,000/month (fixed)

About the Role:
We are hiring a Data Science Engineer to join our growing remote team focused on building and deploying real-time fraud detection models for credit card transactions. This role is ideal for a professional with strong hands-on experience in machine learning, big data engineering, and financial risk systems. You will work closely with cross-functional teams to develop production-ready solutions using Azure, Databricks, and Spark, helping prevent financial fraud at scale.

Key Responsibilities:
• Design, build, and deploy machine learning models for real-time fraud detection
• Analyze large-scale financial transaction data to identify suspicious patterns
• Create and manage data pipelines and workflows using Databricks on Azure
• Collaborate with engineering, fraud operations, and compliance teams
• Monitor model performance and implement feedback loops for retraining and drift handling
• Optimize data workflows for cost, performance, and accuracy

Required Skills & Qualifications:
• Minimum 7 years of experience in data science or ML engineering roles
• Strong programming experience in Python and SQL
• Solid understanding of ML algorithms (supervised, unsupervised, anomaly detection, etc.)
• Proven experience with fraud detection systems or financial transaction modeling
• Hands-on experience with Databricks, Apache Spark, and Azure ML
• Strong knowledge of model evaluation, monitoring, and retraining strategies
• Ability to work remotely with strict adherence to the 6 PM – 2 AM IST shift

Preferred Skills (Nice to Have):
• Prior experience in payments, banking, or financial services
• Familiarity with Microsoft Fabric and stream analytics
• Exposure to Kafka and real-time data processing
• Experience working directly with fraud ops, risk, or compliance teams

Additional Notes:
You will be required to sign an NDA. Disclosure of internal work or salary details is strictly prohibited. Strong commitment and communication are expected; you'll be working with a remote team and may need to interface with clients.
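The "identify suspicious patterns" responsibility above rests on anomaly detection. A hedged, purely statistical sketch of the core idea (real fraud systems combine many engineered features and a trained model, not a single z-score; the function name is illustrative):

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of transactions whose amount lies more than
    z_threshold standard deviations from the mean of the batch."""
    mu = statistics.mean(amounts)
    sigma = statistics.pstdev(amounts)  # population standard deviation
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]
```

Fifty ordinary card swipes followed by one very large transfer would flag only the transfer; in production such flags would feed the retraining and drift-handling loop the listing describes.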

Posted 12 hours ago

Apply

2.0 - 5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Job Title: Data Scientist
Job Location: Jaipur
Experience: 2 to 5 years

Job Description:
We are seeking a highly skilled and innovative Data Scientist to join our dynamic and forward-thinking team. This role is ideal for someone who is passionate about advancing the fields of classical machine learning, conversational AI, and deep learning systems, and thrives on translating complex mathematical challenges into actionable machine learning models. The successful candidate will focus on developing, designing, and maintaining cutting-edge AI-based systems, ensuring seamless and engaging user experiences. Additionally, the role involves active participation in a wide variety of Natural Language Processing (NLP) tasks, including refining and optimizing prompts to enhance the performance of Large Language Models (LLMs).

Key Responsibilities:
• Generative AI Solutions: Develop innovative Generative AI solutions using machine learning and AI technologies, including building and fine-tuning models such as GANs, VAEs, and Transformers.
• Classical ML Models: Design and develop machine learning models (regression, decision trees, SVMs, random forests, gradient boosting, clustering, dimensionality reduction) to address complex business challenges.
• Deep Learning Systems: Train, fine-tune, and deploy deep learning models such as CNNs, RNNs, LSTMs, GANs, and Transformers to solve AI problems and optimize performance.
• NLP and LLM Optimization: Participate in Natural Language Processing activities, refining and optimizing prompts to improve outcomes for Large Language Models (LLMs) such as GPT, BERT, and T5.
• Data Management & Feature Engineering: Work with large datasets; perform data preprocessing, augmentation, and feature engineering to prepare data for machine learning and deep learning models.
• Model Evaluation & Monitoring: Fine-tune models through hyperparameter optimization (grid search, random search, Bayesian optimization) to improve performance metrics (accuracy, precision, recall, F1-score). Monitor model performance to address drift, overfitting, and bias.
• Code Review & Design Optimization: Participate in code and design reviews, ensuring quality and scalability in system architecture and development. Work closely with other engineers to review algorithms, validate models, and improve overall system efficiency.
• Collaboration & Research: Collaborate with cross-functional teams including data scientists, engineers, and product managers to integrate machine learning solutions into production. Stay up to date with the latest AI/ML trends and research, applying cutting-edge techniques to projects.

Qualifications:
• Educational Background: Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, Data Science, or a related field.
• Experience in Machine Learning: Extensive experience in both classical machine learning techniques (e.g., regression, SVM, decision trees) and deep learning systems (e.g., neural networks, transformers). Experience with frameworks such as TensorFlow, PyTorch, or Keras.
• Natural Language Processing Expertise: Proven experience in NLP, especially with Large Language Models (LLMs) like GPT, BERT, or T5. Experience in prompt engineering, fine-tuning, and optimizing model outcomes is a strong plus.
• Programming Skills: Proficiency in Python and relevant libraries such as NumPy, Pandas, Scikit-learn, and natural language processing libraries (e.g., Hugging Face Transformers, NLTK, SpaCy).
• Mathematical & Statistical Knowledge: Strong understanding of statistical modeling, probability theory, and mathematical optimization techniques used in machine learning.
• Model Deployment & Automation: Experience deploying machine learning models into production environments using platforms such as AWS SageMaker, Azure ML, GCP AI, or similar. Familiarity with MLOps practices is an advantage.
• Code Review & System Design: Experience in code review, design optimization, and ensuring quality in large-scale AI/ML systems. Understanding of distributed computing and parallel processing is a plus.

Soft Skills & Behavioural Qualifications:
• Must be a good team player and self-motivated to achieve positive results.
• Must have excellent communication skills in English.
• Exhibits strong presentation skills with attention to detail.
• Has a strong aptitude for learning new techniques.
• Takes ownership of responsibilities.
• Demonstrates a high degree of reliability, integrity, and trustworthiness.
• Manages time well, displays an appropriate sense of urgency, and meets/exceeds all deadlines.
• Accurately processes high volumes of work within established deadlines.

Interested candidates can share their CV or a reference at sulabh.tailang@celebaltech.com
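The hyperparameter optimization bullet above (grid search tuned against F1-score) can be sketched in a few lines. This is a hedged toy: it searches a grid over a single decision threshold rather than real model hyperparameters, for which one would reach for scikit-learn's GridSearchCV; function names are illustrative.

```python
def f1(y_true, y_pred):
    # F1 = 2*TP / (2*TP + FP + FN), the metric named in the listing.
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def grid_search_threshold(scores, labels, grid):
    # Evaluate every candidate threshold on the grid; keep the best by F1.
    return max(grid, key=lambda th: f1(labels, [int(s >= th) for s in scores]))
```

Random and Bayesian search differ only in how candidate points are proposed; the evaluate-and-keep-the-best loop is the same.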

Posted 17 hours ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

Remote

ML Ops Engineer (Remote)

Are you passionate about scaling machine learning models in the cloud? We're on the hunt for an experienced ML Ops Engineer to help us build scalable, automated, and production-ready ML infrastructure across multi-cloud environments.

Location: Remote
Experience: 5+ years

What You'll Do:
• Design and manage scalable ML pipelines and deployment frameworks.
• Own the full ML lifecycle: training, versioning, deployment, and monitoring.
• Build cloud-native infrastructure on AWS, GCP, or Azure.
• Automate deployment using CI/CD tools like Jenkins and GitLab.
• Containerize and orchestrate ML apps with Docker and Kubernetes.
• Use tools like MLflow, TensorFlow Serving, and Kubeflow.
• Partner with Data Scientists & DevOps to ship robust ML solutions.
• Set up monitoring systems for model drift and performance.

What We're Looking For:
• 5+ years of experience in MLOps or DevOps for ML systems.
• Hands-on with at least two cloud platforms: AWS, GCP, or Azure.
• Proficient in Python and ML libraries (TensorFlow, Scikit-learn, etc.).
• Strong skills in Docker, Kubernetes, and cloud infrastructure automation.
• Experience building CI/CD pipelines (Jenkins, GitLab CI/CD, etc.).
• Familiarity with tools like MLflow and TensorFlow Serving.

Must-have skills: Strong experience in any two cloud technologies (Azure, AWS, GCP). (ref:hirist.tech)
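The versioning and rollback half of the ML lifecycle described above is usually handled by a model registry (MLflow's Model Registry is the tool this listing names). A hedged, in-memory sketch of the idea; the class and method names are illustrative, not MLflow's API:

```python
class ModelRegistry:
    """Toy registry: tracks numbered versions per model name and which
    version is currently serving in production."""

    def __init__(self):
        self.versions = {}    # name -> {version number: artifact}
        self.production = {}  # name -> version currently promoted

    def register(self, name, artifact):
        # Each registration gets the next sequential version number.
        v = len(self.versions.setdefault(name, {})) + 1
        self.versions[name][v] = artifact
        return v

    def promote(self, name, version):
        self.production[name] = version

    def rollback(self, name):
        # Step back one version, never below the first.
        self.production[name] = max(1, self.production[name] - 1)

    def serving(self, name):
        return self.versions[name][self.production[name]]
```

Usage mirrors a deployment gone wrong: register v1 and v2, promote v2, then `rollback` restores v1 as the serving artifact; monitoring for drift (the last bullet above) is what triggers that rollback in practice.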

Posted 22 hours ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we'll help you advance your skills and career. Here, you'll be supported in progressing, whatever your ambitions.

About the Position: Senior Engineer – Agentic AI

Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks.

Key Responsibilities:
• Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution.
• Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents.
• Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools.
• Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency.
• Own backend services that scale agent workloads, optimize GPU/accelerator utilization, and enforce cost governance.
• Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle.
• Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities.
• Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments.

Required:
• 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems.
• Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex).
• Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools.
• Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines.
• Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection.
• Strong background in cloud-native microservices, security, and observability.

Requisition ID: 610421

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn't just business, it's personal. And if you're a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 1 day ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Business Analyst Lead – Generative AI
Experience: 7–15 Years
Location: Bangalore
Designation Level: Lead

Role Overview:
We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes.

Key Responsibilities:

1. GenAI Strategy & Opportunity Identification
• Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data).
• Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs like GPT-4 Enterprise).
• Evaluate ROI and adoption feasibility across departments.

2. Requirements Engineering for GenAI Projects
• Define and document both functional and non-functional requirements tailored to GenAI systems: accuracy thresholds (e.g., hallucination rate under 5%), ethical guardrails (e.g., PII redaction, bias mitigation), and latency SLAs (e.g., <2 seconds response time).
• Develop prompt engineering guidelines, testing protocols, and iteration workflows.

3. Stakeholder Collaboration & Communication
• Translate technical GenAI concepts into business-friendly language.
• Manage expectations on probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review).
• Use storytelling and outcome-driven communication (e.g., "Automated claims triage reduced handling time by 40%.").

4. Business Analysis & Process Modeling
• Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI).
• Model current and future business processes using BPMN to reflect human-AI collaboration.

5. Tools & Technical Proficiency
• Hands-on experience with LangChain and LlamaIndex for LLM integration.
• Knowledge of vector databases, RAG architectures, and LoRA-based fine-tuning.
• Experience using Azure OpenAI Studio, Google Vertex AI, and Hugging Face.
• Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI).

6. Governance & Performance Monitoring
• Define KPIs for GenAI performance: token cost per interaction, user trust scores, automation rate, and model drift tracking.
• Support regulatory compliance with audit trails and documentation aligned with the EU AI Act and other industry standards.

Required Skills & Experience:
• 7–10 years of experience in business analysis or product ownership, with a recent focus on Generative AI or applied ML.
• Strong understanding of the GenAI ecosystem and solution lifecycle from ideation to deployment.
• Experience working closely with data science, engineering, product, and compliance teams.
• Excellent communication and stakeholder management skills, with a focus on enterprise environments.

Preferred Qualifications:
• Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.AI).
• Familiarity with compliance and AI regulations (GDPR, EU AI Act).
• Experience in BFSI, healthcare, telecom, or other regulated industries.

Posted 1 day ago

Apply

7.0 years

24 Lacs

Bharūch

On-site

Role: Sr Data Scientist – Digital & Analytics
Experience: 7+ years | Industry: exposure to manufacturing, energy, supply chain, or similar
Location: On-site @ Bharuch, Gujarat (6 days/week, Mon-Sat working)
Perks: Work with the client directly; monthly remuneration for lodging

Mandatory Skills: Experience in full-scale implementation from requirement gathering through project delivery (end to end); EDA; ML techniques (supervised and unsupervised); Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.); cloud ML tooling (Azure ML, AWS SageMaker, etc.); plant control systems (DCS, SCADA, OPC UA); historian databases (PI, Aspen IP.21) and time-series data; optimization models (LP, MILP, MINLP).

We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact.

Responsibilities:

1. Data Science Solution Development
• Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization.
• Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised).
• Translate physical and chemical process knowledge into mathematical features or constraints in models.
• Deploy models into production environments (on-prem or cloud) with high robustness and monitoring.

2. Team Leadership & Management
• Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols.

3. Stakeholder Engagement & Collaboration
• Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use-cases.
• Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable.
• Act as a translator between manufacturing business units and technical teams to ensure alignment and impact.

4. Solution Ownership & Documentation
• Independently manage and maintain use-cases through versioned model management, robust documentation, and logging.
• Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts.

Required Skills:
1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use-cases in manufacturing, energy, or process industries.
2. Proven track record of end-to-end model delivery (from data prep to business value realization).
3. Master's or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline.
4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.) and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.).
5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data.
6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 6-12 months
Pay: Up to ₹200,000.00 per month
Work Location: In person
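Yield prediction, named above among the manufacturing modelling challenges, often starts from a simple regression baseline on historian data before XGBoost or Pyomo models are brought in. A hedged stdlib-only sketch: one process variable (imagine reactor temperature) regressed against yield via ordinary least squares; the function name and the temperature example are illustrative.

```python
def fit_ols(xs, ys):
    """Ordinary least squares fit of y = slope*x + intercept.
    xs: process variable readings (e.g. temperature); ys: observed yields."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept
```

A baseline like this also gives the KPI monitoring in section 4 something to compare against: if the fitted relationship stops holding on fresh historian data, that residual growth is the drift signal.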

Posted 1 day ago

Apply

12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About IDfy
IDfy is an Integrated Identity Platform offering products and solutions for KYC, KYB, Background Verifications, Risk Assessment, and Digital Onboarding. We establish trust while delivering a frictionless experience for you, your employees, customers, and partners. Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12 years of experience and 2 million verifications per day, we are pioneers in this industry. Our clients include HDFC Bank, IndusInd Bank, Zomato, Amazon, PhonePe, Paytm, HUL, and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investments, and Tenacity Ventures! We work fully onsite on all days of the week from our office in Andheri, Mumbai.

Role Overview:
Support project delivery by coordinating teams, tracking progress, and clearing roadblocks. Learn on the job. Deliver on time. Own your part like a pro.

Key Responsibilities:
• Assist in planning and executing projects under senior guidance.
• Communicate effectively with cross-functional teams to keep projects moving.
• Monitor timelines and raise flags early when things drift off course.
• Manage project documentation and action items with discipline.
• Participate in meetings, track decisions, and drive follow-ups.
• Learn project management tools and frameworks on the job.
• Adapt quickly and maintain urgency in a fast-paced environment.

Qualifications:
• Bachelor's degree in any field.
• 0.6-2 years of experience; willingness to learn is non-negotiable.
• Strong organizational and communication skills.
• Proactive, detail-oriented, and accountable.
• Comfortable working under pressure and managing multiple priorities.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Competetive Salary PF and Gratuity About Our Client Our client is an international professional services brand of firms, operating as partnerships under the brand. It is the second-largest professional services network in the world Job Description Position: ML Engineer Job type: Techno-Functional Preferred education qualifications: Bachelor/ Master's degree in computer science, Data Science, Machine Learning OR related technical degree Job location: India Geography: SAPMENA Required experience: 6-8 Years Preferred profile/ skills: 5+ years in developing and deploying enterprise-scale ML solutions [Mandatory] Proven track record in data analysis (EDA, profiling, sampling), data engineering (wrangling, storage, pipelines, orchestration), [Mandatory] Proficiency in Data Science/ML algorithms such as regression, classification, clustering, decision trees, random forest, gradient boosting, recommendation, dimensionality reduction [Mandatory] Experience in ML algorithms such as ARIMA, Prophet, Random Forests, and Gradient Boosting algorithms (XGBoost, LightGBM, CatBoost) [Mandatory] Prior experience on MLOps with Kubeflow or TFX [Mandatory] Experience in model explainability with Shapley plot and data drift detection metrics. 
[Mandatory] Advanced programming skills with Python and SQL [Mandatory] Prior experience on building scalable ML pipelines & deploying ML models on Google Cloud [Mandatory] Proven expertise in ML pipeline optimization and monitoring the model's performance over time [Mandatory] Proficiency in version control systems such as GitHub Experience with feature engineering optimization and ML model fine tuning is preferred Google Cloud Machine Learning certifications will be a big plus Experience in Beauty or Retail/FMCG industry is preferred Experience in training with large volume of data (>100 GB) Experience in delivering AI-ML projects using Agile methodologies is preferred Proven ability to effectively communicate technical concepts and results to technical & business audiences in a comprehensive manner Proven ability to work proactively and independently to address product requirements and design optimal solutions Fluency in English, strong communication and organizational capabilities; and ability to work in a matrix/ multidisciplinary team Job objectives: Design, develop, deploy, and maintain data science and machine learning solutions to meet enterprise goals. Collaborate with product managers, data scientists & analysts to identify innovative & optimal machine learning solutions that leverage data to meet business goals. Contribute to development, rollout and onboarding of data scientists and ML use-cases to enterprise wide MLOps framework. Scale the proven ML use-cases across the SAPMENA region. Be responsible for optimal ML costs. 
Job description: Deep understanding of business/functional needs, problem statements and objectives/success criteria Collaborate with internal and external stakeholders including business, data scientists, project and partners teams in translating business and functional needs into ML problem statements and specific deliverables Develop best-fit end-to-end ML solutions including but not limited to algorithms, models, pipelines, training, inference, testing, performance tuning, deployments Review MVP implementations, provide recommendations and ensure ML best practices and guidelines are followed Act as 'Owner' of end-to-end machine learning systems and their scaling Translate machine learning algorithms into production-level code with distributed training, custom containers and optimal model serving Industrialize end-to-end MLOps life cycle management activities including model registry, pipelines, experiments, feature store, CI-CD-CT-CE with Kubeflow/TFX Accountable for creating, monitoring drifts leveraging continuous evaluation tools and optimizing performance and overall costs Evaluate, establish guidelines, and lead transformation with emerging technologies and practices for Data Science, ML, MLOps, Data Ops The Successful Applicant Position: ML Engineer Job type: Techno-Functional Preferred education qualifications: Bachelor/ Master's degree in computer science, Data Science, Machine Learning OR related technical degree Job location: India Geography: SAPMENA Required experience: 6-8 Years Preferred profile/ skills: 5+ years in developing and deploying enterprise-scale ML solutions [Mandatory] Proven track record in data analysis (EDA, profiling, sampling), data engineering (wrangling, storage, pipelines, orchestration), [Mandatory] Proficiency in Data Science/ML algorithms such as regression, classification, clustering, decision trees, random forest, gradient boosting, recommendation, dimensionality reduction [Mandatory] Experience in ML algorithms such as 
ARIMA, Prophet, Random Forests, and Gradient Boosting algorithms (XGBoost, LightGBM, CatBoost) [Mandatory] Prior experience in MLOps with Kubeflow or TFX [Mandatory] Experience in model explainability with SHAP (Shapley) plots and data drift detection metrics. [Mandatory] Advanced programming skills with Python and SQL [Mandatory] Prior experience in building scalable ML pipelines & deploying ML models on Google Cloud [Mandatory] Proven expertise in ML pipeline optimization and monitoring model performance over time [Mandatory] Proficiency in version control systems such as GitHub Experience with feature engineering optimization and ML model fine-tuning is preferred Google Cloud Machine Learning certifications will be a big plus Experience in the Beauty or Retail/FMCG industry is preferred Experience in training with large volumes of data (>100 GB) Experience in delivering AI-ML projects using Agile methodologies is preferred Proven ability to effectively communicate technical concepts and results to technical & business audiences in a comprehensive manner Proven ability to work proactively and independently to address product requirements and design optimal solutions Fluency in English, strong communication and organizational capabilities, and ability to work in a matrix/multidisciplinary team Job objectives: Design, develop, deploy, and maintain data science and machine learning solutions to meet enterprise goals. Collaborate with product managers, data scientists & analysts to identify innovative & optimal machine learning solutions that leverage data to meet business goals. Contribute to the development, rollout and onboarding of data scientists and ML use-cases to the enterprise-wide MLOps framework. Scale the proven ML use-cases across the SAPMENA region. Be responsible for optimal ML costs.
Job description: Deep understanding of business/functional needs, problem statements and objectives/success criteria Collaborate with internal and external stakeholders including business, data scientists, project and partner teams in translating business and functional needs into ML problem statements and specific deliverables Develop best-fit end-to-end ML solutions including but not limited to algorithms, models, pipelines, training, inference, testing, performance tuning, deployments Review MVP implementations, provide recommendations and ensure ML best practices and guidelines are followed Act as 'Owner' of end-to-end machine learning systems and their scaling Translate machine learning algorithms into production-level code with distributed training, custom containers and optimal model serving Industrialize end-to-end MLOps life cycle management activities including model registry, pipelines, experiments, feature store, CI-CD-CT-CE with Kubeflow/TFX Accountable for detecting and monitoring drift, leveraging continuous evaluation tools, and optimizing performance and overall costs Evaluate, establish guidelines, and lead transformation with emerging technologies and practices for Data Science, ML, MLOps, Data Ops What's on Offer Competitive compensation commensurate with role and skill set Medical Insurance Coverage worth 10 Lacs Social Benefits including PF & Gratuity A fast-paced, growth-oriented environment with the associated (challenges and) rewards Opportunity to grow and develop your own skills and create your future Contact: Anwesha Banerjee Quote job ref: JN-072025-6793565

Posted 1 day ago

Apply

3.0 years

0 Lacs

Uttar Pradesh, India

On-site

Job Description Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems. About us: Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last. About the role: We are currently seeking a Machine Learning (Ops) Engineer to join our Digi team based in Noida. Key Responsibilities: ML Pipeline Development and Automation: Design, build, and maintain end-to-end AI/ML CI/CD pipelines using Azure DevOps and leveraging Azure AI Stack (e.g., Azure ML, AI Foundry …) and Dataiku Model Deployment and Monitoring: Deliver tooling to deploy AI/ML products into production, ensuring they meet performance, reliability, and security standards. Implement and maintain transversal monitoring solutions to track model performance, detect drift, and trigger retraining when necessary Collaboration and Support: Work closely with data scientists, AI/ML engineers, and the platform team to ensure seamless integration of products into production.
Provide technical support and troubleshooting for AI/ML pipelines and infrastructure, particularly in Azure and Dataiku environments Operational Excellence : Define and implement MLOps best practices with a strong focus on governance, security, and quality, while monitoring performance metrics and cost-efficiency to ensure continuous improvement and delivering optimized, high-quality deployments for Azure AI services and Dataiku Documentation and Reporting: Maintain comprehensive documentation of AI/ML pipelines, and processes, with a focus on Azure AI and Dataiku implementations. Provide regular updates to the AI Platform Lead on system status, risks, and resource needs About you: Proven track record of experience in MLOps, DevOps, or related roles Strong knowledge of machine learning workflows, data analytics, and Azure cloud Hands-on experience with tools and technologies such as Dataiku, Azure ML, Azure AI Services, Docker, Kubernetes, and Terraform Proficiency in programming languages such as Python, with experience in ML and automation libraries (e.g., TensorFlow, PyTorch, Azure AI SDK …) Expertise in CI/CD pipeline management and automation tools using Azure DevOps Familiarity with monitoring tools and logging frameworks Catch this opportunity and invest in your skills development, should your profile meet these requirements. Additional attributes: A proactive mindset with a focus on operationalizing AI/ML solutions to drive business value Experience with budget oversight and cost optimization in cloud environments. Knowledge of agile methodologies and software development lifecycle (SDLC). Strong problem-solving skills and attention to detail Work Experience: 3-5 years of experience in MLOps Minimum Education: Advanced degree (Master’s or PhD preferred) in Computer Science, Data Science, Engineering, or a related field. What’s next? 
Once we receive your application, our Talent Acquisition professionals will screen and match your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application progress periodically via the candidate profile created during your application. We invite you to get to know more about our company by visiting us and following us on LinkedIn, Instagram, Facebook, X and YouTube for company updates.
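Editor's aside, not part of the posting above: the "detect drift, and trigger retraining" responsibility is often prototyped with a metric such as the Population Stability Index (PSI). This stdlib-only sketch is illustrative only; the bucket count and the 0.2 alert threshold are common conventions, not Technip-specific values.

```python
import bisect
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) and a
    live (actual) sample of a numeric feature or model score."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[bisect.bisect_right(edges, x)] += 1
        # floor empty buckets so log() stays defined
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [i / 100 for i in range(100)]             # baseline distribution
live_scores = [min(1.0, s + 0.3) for s in train_scores]  # upward-shifted live scores
drifted = psi(train_scores, live_scores) > 0.2           # 0.2 is a common alert threshold
```

In a production pipeline this check would run on a schedule and, when `drifted` is true, kick off the retraining workflow the posting describes.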

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bharuch, Gujarat

On-site

Role: Sr Data Scientist – Digital & Analytics Experience: 7+ Years | Industry: Exposure to manufacturing, energy, supply chain or similar Location: On-Site @ Bharuch, Gujarat (6 days/week, Mon-Sat working) Perks: Work with Client Directly & Monthly remuneration for lodging Mandatory Skills: Experience in full-scale implementation, from requirements gathering through project delivery (end to end). EDA, ML Techniques (supervised and unsupervised), Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), cloud ML tooling (Azure ML, AWS SageMaker, etc.), plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data, optimization models (LP, MILP, MINLP). We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently, while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact. Responsibilities: 1. Data Science Solution Development • Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization. • Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised). • Translate physical and chemical process knowledge into mathematical features or constraints in models. • Deploy models into production environments (on-prem or cloud) with high robustness and monitoring. 2. Team Leadership & Management • Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols. 3. Stakeholder Engagement & Collaboration • Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use-cases. • Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable. • Act as a translator between manufacturing business units and technical teams to ensure alignment and impact. 4. Solution Ownership & Documentation • Independently manage and maintain use-cases through versioned model management, robust documentation, and logging. • Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts. Required Skills: 1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use-cases in manufacturing, energy, or process industries. 2. Proven track record of end-to-end model delivery (from data prep to business value realization). 3. Master’s or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline. 4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.). 5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data. 6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus. Job Types: Full-time, Contractual / Temporary Contract length: 6-12 months Pay: Up to ₹200,000.00 per month Work Location: In person
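Editor's aside, not part of the posting: feature engineering on historian time-series tags (the PI/Aspen IP.21 work mentioned above) usually begins with simple rolling statistics. This stdlib-only sketch assumes a hypothetical temperature tag; real historian client APIs are not shown.

```python
from collections import deque
from statistics import mean, pstdev

def rolling_features(samples, window=5):
    """Yield (value, rolling_mean, rolling_std) for a stream of
    historian readings, e.g. a reactor temperature tag."""
    buf = deque(maxlen=window)
    for value in samples:
        buf.append(value)
        yield value, mean(buf), pstdev(buf) if len(buf) > 1 else 0.0

readings = [350.1, 350.4, 349.9, 351.2, 355.0, 358.3]  # hypothetical temperature values
feats = list(rolling_features(readings, window=3))
```

Features like these then feed the yield-prediction or energy-optimization models the posting describes.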

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities: Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. 
Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have: Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
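Editor's aside, not part of the posting: the AIOps anomaly-detection work described above can be prototyped with a plain z-score check over a metric window before reaching for a full observability stack. The threshold and the latency numbers below are illustrative assumptions.

```python
from statistics import mean, pstdev

def anomalies(metric_values, threshold=2.5):
    """Flag points lying more than `threshold` standard deviations from
    the window mean — a crude stand-in for platform-level anomaly
    detection (e.g. an alert rule in Prometheus). Thresholds of 2-3
    are typical."""
    mu, sigma = mean(metric_values), pstdev(metric_values)
    if sigma == 0:
        return []                       # flat signal: nothing to flag
    return [i for i, v in enumerate(metric_values)
            if abs(v - mu) / sigma > threshold]

latencies_ms = [120, 118, 125, 122, 119, 121, 950, 123]  # one obvious spike
spikes = anomalies(latencies_ms)
```

A real deployment would compute this over sliding windows of structured-log or trace data and route flagged indices into the root-cause workflows the posting mentions.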

Posted 1 day ago

Apply

0.0 - 40.0 years

0 Lacs

Gurugram, Haryana

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. About the position: Senior Engineer – Agentic AI: Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks. Key Responsibilities: Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution. Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents. Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools. Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency. Own backend services that scale agent workloads, optimize GPU / accelerator utilization, and enforce cost governance. Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle. Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities. Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments. Required: 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems. 
Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex). Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools. Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines. Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection. Strong background in cloud-native micro-services, security, and observability. Requisition ID: 610421 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
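Editor's aside, not part of the posting: a framework-free sketch of the tool-execution loop this role builds. The tool names and the canned "model" are invented for illustration; a real system would call an LLM API, and frameworks like LangGraph or AutoGen handle this orchestration with far more machinery.

```python
# Minimal tool-use loop: the "model" emits tool calls until it can answer.
TOOLS = {
    "add": lambda a, b: a + b,          # hypothetical internal tools
    "upper": lambda s: s.upper(),
}

def fake_model(goal, observations):
    """Stand-in for an LLM: plans one tool call, then finishes."""
    if not observations:
        return {"tool": "add", "args": (2, 3)}
    return {"final": f"{goal}: {observations[-1]}"}

def run_agent(goal, model, tools, max_steps=5):
    observations = []
    for _ in range(max_steps):          # hard step cap as crude cost governance
        action = model(goal, observations)
        if "final" in action:
            return action["final"]
        result = tools[action["tool"]](*action["args"])
        observations.append(result)     # feed tool output back to the model
    raise RuntimeError("step budget exhausted")

answer = run_agent("sum", fake_model, TOOLS)
```

The step cap and the observation feedback are the two points where the posting's guardrail and evaluation concerns attach.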

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: AI/ML Engineer - US Healthcare Claims Management Position: MLOps Engineer - US Healthcare Claims Management Location: Gurgaon (Hybrid) Company: Neolytix Experience Required: 3 to 5 years Preference will be given to candidates holding a PhD in the relevant field. About the Role: We are seeking an experienced MLOps Engineer to build, deploy, and maintain AI/ML systems for our healthcare Revenue Cycle Management (RCM) platform. This role will focus on operationalizing machine learning models that analyze claims, prioritize denials, and optimize revenue recovery through automated resolution pathways. Key Tech Stack: Models & ML Components: Fine-tuned healthcare LLMs (GPT-4, Claude) for complex claim analysis Knowledge of Supervised/Unsupervised Models, Optimization & Simulation techniques Domain-specific SLMs for denial code classification and prediction Vector embedding models for similar claim identification NER models for extracting critical claim information Seq2seq models (automated appeal letter generation) Languages & Frameworks: Strong proficiency in Python with OOP principles (4 years of experience) Experience developing APIs using Flask or FastAPI frameworks (2 years of experience) Integration knowledge with front-end applications (1 year of experience) Expertise in version control systems (e.g., GitHub, GitLab, Azure DevOps) (3 years of experience) Proficiency in databases, including SQL, NoSQL and vector databases (2+ years of experience) Experience with Azure (2+ years of experience) Libraries: PyTorch/TensorFlow/Hugging Face Transformers Key Responsibilities: ML Pipeline Architecture: Design and implement end-to-end ML pipelines for claims processing, incorporating automated training, testing, and deployment workflows Model Deployment & Scaling: Deploy and orchestrate LLMs and SLMs in production using containerization (Docker/Kubernetes) and Azure cloud services Monitoring & Observability: Implement comprehensive monitoring systems to track model performance, drift detection, and operational health metrics CI/CD for ML Systems: Establish CI/CD pipelines specifically for ML model training, validation, and deployment Data Pipeline Engineering: Create robust data preprocessing pipelines for healthcare claims data, ensuring compliance with HIPAA standards Model Optimization: Tune and optimize models for both performance and cost-efficiency in production environments Infrastructure as Code: Implement IaC practices for reproducible ML environments and deployments Document technical solutions & create best practices for scalable AI-driven claims management Preferred Qualifications: Experience with healthcare data, particularly claims processing (EDI 837/835) Knowledge of RCM workflows & denial management processes Understanding of HIPAA compliance requirements Experience with feature stores & model registries Familiarity with healthcare-specific NLP applications What Sets You Apart: Experience operationalizing LLMs for domain-specific enterprise applications Background in healthcare technology or revenue cycle operations Track record of improving model performance metrics in production systems What We Offer: Competitive salary and benefits package Opportunity to contribute to innovative AI solutions in the healthcare industry Dynamic and collaborative work environment Opportunities for continuous learning and professional growth To Apply: Submit your resume and a cover letter detailing your relevant experience and interest in the role to shivanir@neolytix.com Powered by JazzHR BM2Iy0O5p7
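Editor's aside, not part of the posting: the "vector embedding models for similar claim identification" piece reduces to nearest-neighbour search over embeddings. This toy sketch uses hand-made 3-d vectors in place of real claim embeddings and brute-force cosine similarity in place of a vector database.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query_vec, claim_vecs, k=2):
    """Return ids of the k claims whose embeddings are closest to the query."""
    ranked = sorted(claim_vecs,
                    key=lambda cid: cosine(query_vec, claim_vecs[cid]),
                    reverse=True)
    return ranked[:k]

claims = {                      # hypothetical claim-id -> embedding
    "CLM-1": (0.9, 0.1, 0.0),
    "CLM-2": (0.0, 1.0, 0.2),
    "CLM-3": (0.8, 0.2, 0.1),
}
similar = most_similar((1.0, 0.0, 0.0), claims)
```

At production scale the brute-force scan is replaced by an approximate index in a vector database, but the similarity measure is the same.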

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Team Lead – DevOps to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. 
Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You will be joining a dynamic venture that is dedicated to developing a gamified learning app for exam preparation. The company's objective is to transform the landscape of exam preparation on a global scale through an interactive, engaging, and efficient platform that incorporates AI, gamification, community-driven features, and a premium user experience. Operating with agility, the company boasts a formidable founding team with a track record of establishing successful enterprises. The team comprises experienced professionals who have previously founded and sold startups to leading multinational corporations. The platform provides personalized learning journeys and in-depth insights. AI customizes content based on individual learning preferences, while analytics pinpoint strengths and areas for improvement, offering actionable suggestions. The product is currently in its initial stages, and your assistance is sought to expedite its market launch. The company is self-funded and plans to pursue funding post its launch in a few months. As a full-time Sr Flutter Developer at PrepAiro in Bengaluru, you will play a crucial role in developing high-caliber mobile applications for iOS and Android. Your responsibilities will encompass API Integration, state management, real-time data manipulation, local data storage, and Firebase integrations, all within a structured MVVM architecture to deliver seamless, secure user experiences. Your tasks will include: - Developing and maintaining responsive Flutter applications with well-structured, scalable code. - Implementing Bloc for efficient state management, utilizing Equatable for streamlined state comparison. - Integrating local databases (Hive, SQLite, Floor ORM, Drift) for offline functionality. - Creating captivating animations using Rive and Flutter's animation tools. - Employing reactive programming, ETag caching, and encryption methods for optimal performance. 
- Implementing MVVM architecture and adhering to clean code practices. - Constructing robust applications with comprehensive testing strategies (unit, widget, integration). - Integrating Firebase services like Crashlytics, Analytics, and App Distribution for monitoring and deployment. - Collaborating with cross-functional teams to enhance UX, security, and app performance. Qualifications required: - Minimum 4 years of experience in Flutter & Dart development. - Proficiency in Bloc for state management, leveraging Equatable. - Experience with local databases: Hive, SQLite, Floor ORM, Drift. - Knowledge of reactive programming, encryption techniques, and ETag optimization. - Familiarity with MVVM architecture and clean code principles. - Proficiency in Rive animations and Flutter's native animation tools. - Strong skills in Flutter's testing frameworks. - Experience in Firebase Integrations: Crashlytics, Analytics, App Distribution. - Familiarity with dependency injection and Git version control. Join us for the opportunity to: - Work on impactful and innovative projects within a collaborative team. - Contribute to an early-stage startup. - Collaborate with visionary founders in a conducive workspace. - Engage in continuous learning and growth opportunities. - Enjoy a flexible work culture.

Posted 2 days ago

Apply

0 years

2 - 5 Lacs

Hyderābād

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Key Responsibilities Develop, deploy, and monitor machine learning models in production environments. Automate ML pipelines for model training, validation, and deployment. Optimize ML model performance, scalability, and cost efficiency. Implement CI/CD workflows for ML model versioning, testing, and deployment. Manage and optimize data processing workflows for structured and unstructured data. Design, build, and maintain scalable ML infrastructure on cloud platforms. Implement monitoring, logging, and alerting solutions for model performance tracking. Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into business applications. Ensure compliance with best practices for security, data privacy, and governance. Stay updated with the latest trends in MLOps, AI, and cloud technologies. Mandatory Skills Technical Skills: Programming Languages: Proficiency in Python (3.x) and SQL. ML Frameworks & Libraries: Extensive knowledge of ML frameworks (TensorFlow, PyTorch, Scikit-learn), data structures, data modeling, and software architecture. Databases: Experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB) databases. Mathematics & Algorithms: Strong understanding of mathematics, statistics, and algorithms for machine learning applications. ML Modules & REST API: Experience in developing and integrating ML modules with RESTful APIs. Version Control: Hands-on experience with Git and best practices for version control. 
Model Deployment & Monitoring: Experience in deploying and monitoring ML models using: MLflow (for model tracking, versioning, and deployment) WhyLabs (for model monitoring and data drift detection) Kubeflow (for orchestrating ML workflows) Airflow (for managing ML pipelines) Docker & Kubernetes (for containerization and orchestration) Prometheus & Grafana (for logging and real-time monitoring) Data Processing: Ability to process and transform unstructured data into meaningful insights (e.g., auto-tagging images, text-to-speech conversions). Preferred Cloud & Infrastructure Skills: Experience with cloud platforms: Knowledge of AWS Lambda, AWS API Gateway, AWS Glue, Athena, S3, Iceberg, and Azure AI Studio for model hosting, GPU/TPU usage, and scalable infrastructure. Hands-on with Infrastructure as Code (Terraform, CloudFormation) for cloud automation. Experience with CI/CD pipelines: Experience integrating ML models into continuous integration/continuous delivery workflows. We mostly use Git-based CI/CD methods. Experience with feature stores (Feast, Tecton) for managing ML features. Knowledge of big data processing tools (Spark, Hadoop, Dask, Apache Beam). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
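Editor's aside, not part of the posting: one concrete form of the "CI/CD workflows for ML model versioning" responsibility is a promotion gate that only registers a candidate model if it beats the production baseline within a latency tolerance. The metrics and the 10% regression policy below are invented for illustration; a real pipeline would read them from MLflow runs.

```python
def should_promote(candidate, production, min_gain=0.0, max_latency_regression=0.10):
    """Gate a model promotion: accuracy must not drop, and p95 latency
    may regress by at most 10% (illustrative policy only)."""
    if candidate["accuracy"] < production["accuracy"] + min_gain:
        return False
    limit = production["p95_latency_ms"] * (1 + max_latency_regression)
    return candidate["p95_latency_ms"] <= limit

prod = {"accuracy": 0.91, "p95_latency_ms": 120.0}
good = {"accuracy": 0.93, "p95_latency_ms": 125.0}   # better and fast enough
slow = {"accuracy": 0.95, "p95_latency_ms": 200.0}   # better but too slow
```

In a CI job, a `False` result would fail the pipeline stage and keep the production model version in place.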

Posted 2 days ago

Apply

8.0 years

3 - 8 Lacs

Hyderābād

On-site

About the Role: We are seeking a DevOps Technical Lead with a strong background in infrastructure automation, cloud architecture, and a keen interest in Generative AI technologies. The ideal candidate will lead the development of an Infrastructure Agent powered by GenAI, capable of intelligent provisioning, configuration, observability, and self-healing.

Key Responsibilities:
- Lead architecture & design of an intelligent Infra Agent leveraging GenAI capabilities.
- Integrate LLMs and automation frameworks (e.g., LangChain, OpenAI, Hugging Face) to enhance DevOps workflows.
- Build solutions that automate infrastructure provisioning, CI/CD, incident remediation, and drift detection.
- Develop reusable components and frameworks using IaC (Terraform, Pulumi, CloudFormation) and configuration management tools (Ansible, Chef, etc.).
- Partner with AI/ML engineers and SREs to design intelligent infrastructure decision-making logic.
- Implement secure and scalable infrastructure on cloud platforms (AWS, Azure, GCP).
- Continuously improve agent performance through feedback loops, telemetry, and fine-tuning of models.
- Drive DevSecOps best practices, compliance, and observability.
- Mentor DevOps engineers and collaborate with cross-functional teams (AI/ML, Platform, and Product).

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in DevOps, SRE, or Infrastructure Engineering.
- Proven experience leading infrastructure automation projects and technical teams.
- Expertise with one or more cloud platforms: AWS, Azure, GCP.
- Deep knowledge of tools like Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps.
- Hands-on experience integrating or building with LLMs / GenAI APIs (e.g., OpenAI, Anthropic, Cohere).
- Familiarity with LangChain, AutoGen, or custom agent frameworks.
- Experience with programming/scripting languages: Python, Go, or Bash.
- Understanding of cloud security, policy as code, and monitoring tools (Prometheus, Grafana, Datadog).

Preferred Qualifications:
- Experience building or fine-tuning LLM-based agents for operations or automation tasks.
- Contributions to open-source GenAI or DevOps projects.
- Understanding of MLOps pipelines and AI infrastructure.
- Certifications in DevOps, cloud, or AI technologies (e.g., AWS DevOps Engineer, Azure AI Engineer).
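A hypothetical skeleton of the kind of GenAI-driven incident-remediation agent this role describes: an LLM proposes an action, and the agent only executes proposals from an allow-list, escalating everything else to a human. Here `ask_llm` is a hard-coded stub standing in for a real OpenAI/Anthropic call, and the action and symptom names are invented for illustration:

```python
# LLM-in-the-loop remediation with an execution allow-list (guardrail pattern).
# ask_llm is a stand-in for a real LLM API call grounded in alerts and runbooks.

ALLOWED_ACTIONS = {
    "restart_pod": lambda alert: f"restarted pod {alert['resource']}",
    "scale_up": lambda alert: f"scaled up {alert['resource']} by 1 replica",
}

def ask_llm(alert):
    """Stub: a real agent would prompt an LLM with the alert and context."""
    if alert["symptom"] == "CrashLoopBackOff":
        return "restart_pod"
    if alert["symptom"] == "HighCPU":
        return "scale_up"
    return "escalate"

def remediate(alert):
    proposed = ask_llm(alert)
    action = ALLOWED_ACTIONS.get(proposed)
    if action is None:
        # Guardrail: never execute an action the allow-list does not know.
        return f"escalated to on-call: {alert['symptom']}"
    return action(alert)

print(remediate({"resource": "api-7f9c", "symptom": "CrashLoopBackOff"}))
print(remediate({"resource": "db-0", "symptom": "DiskCorruption"}))
```

The allow-list is the important design choice: the model can only choose among pre-vetted remediations, which is how such agents stay safe in production infrastructure.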

Posted 2 days ago

Apply

10.0 years

2 - 11 Lacs

Bengaluru

On-site

Join our Team

About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role: we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale.

What you will do:
- Design and build robust ML pipelines and services for training, validation, and model deployment.
- Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements; communicate any deviation from the target architecture.
- Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability.
- Build reusable infrastructure components using best practices in DevOps and MLOps.
- Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data.
- Network Security: Design an optimal network plan for the given cloud infrastructure under the E// network security guidelines.
- Monitor model performance in production and implement drift detection and retraining pipelines.
- Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration).
- Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components.

The skills you bring:
- Strong programming skills in Python.
- Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost).
- Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML.
- Experience deploying models using Docker and Kubernetes.
- Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI).
- Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL).
- Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm).
- Experience with monitoring/logging (Prometheus, Grafana, ELK).

Good-to-Have Skills
- Experience with feature stores (Feast, Tecton) and experiment tracking platforms.
- Knowledge of edge/embedded ML, model quantization, and optimization.
- Familiarity with model governance, security, and compliance in ML systems.
- Exposure to on-device ML or streaming ML use cases.
- Experience leading cross-functional initiatives or mentoring junior engineers.

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what’s possible and to build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 770160
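The "drift detection and retraining pipelines" duty above often starts from a statistic such as the Population Stability Index (PSI) computed per feature between the training baseline and recent production data. A minimal numpy sketch; the rule-of-thumb retraining threshold of 0.2 is a common convention, assumed here rather than taken from the listing:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a production sample.

    Bins are fixed from the baseline; production values outside the baseline
    range are simply dropped by np.histogram, which is acceptable for a sketch.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip bucket fractions to avoid log(0) when a bucket is empty.
    e_pct = np.clip(np.histogram(expected, bins=edges)[0] / len(expected), 1e-6, None)
    a_pct = np.clip(np.histogram(actual, bins=edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature at training time
drifted = rng.normal(0.8, 1.3, 10_000)    # same feature in production, drifted

psi = population_stability_index(baseline, drifted)
print(psi)  # above the ~0.2 rule of thumb, so a retraining job would be triggered
```

A retraining pipeline would typically compute this per feature on a schedule and kick off retraining (or an alert) when any PSI crosses the chosen threshold.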

Posted 2 days ago

Apply

5.0 years

18 Lacs

Bengaluru

On-site

Hiring Data Engineer (Microsoft Fabric & Lakehouse) for one of our client MNCs.

Job Title: Data Engineer (Microsoft Fabric & Lakehouse)
Location: Hybrid – Bangalore, India
Experience: 5 Years
Joining: Immediate
Hiring Process: One interview + one case study round

About the Role: We are looking for a skilled Data Engineer with 2–5 years of experience to join our dynamic team. The ideal candidate will be responsible for designing and developing scalable, reusable, and efficient data pipelines using modern Data Engineering platforms such as Microsoft Fabric, PySpark, and Data Lakehouse architectures. You will play a key role in integrating data from diverse sources, transforming it into actionable insights, and ensuring high standards of data governance and quality. This role requires a strong understanding of modern data architectures, pipeline observability, and performance optimization.

Key Responsibilities
● Design and build robust data pipelines using Microsoft Fabric components including Pipelines, Notebooks (PySpark), Dataflows, and Lakehouse architecture.
● Ingest and transform data from a variety of sources such as cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (e.g., Salesforce, Workday), and REST/OpenAPI-based APIs.
● Develop and maintain semantic models and define standardized KPIs for reporting and analytics in Power BI or equivalent BI tools.
● Implement and manage Delta Tables across bronze/silver/gold layers using Lakehouse medallion architecture within OneLake or equivalent environments.
● Apply metadata-driven design principles to support pipeline parameterization, reusability, and scalability.
● Monitor, debug, and optimize pipeline performance; implement logging, alerting, and observability mechanisms.
● Establish and enforce data governance policies including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trail mechanisms.
● Perform data quality checks including null detection, duplicate handling, schema drift management, outlier identification, and Slowly Changing Dimensions (SCD) type management.

Required Skills & Qualifications
● 2–5 years of hands-on experience in Data Engineering or related fields.
● Solid understanding of data lake/lakehouse architectures, preferably with Microsoft Fabric or equivalent tools (e.g., Databricks, Snowflake, Azure Synapse).
● Strong experience with PySpark, SQL, and working with dataflows and notebooks.
● Exposure to BI tools like Power BI, Tableau, or equivalent for data consumption layers.
● Experience with Delta Lake or similar transactional storage layers.
● Familiarity with data ingestion from SaaS applications, APIs, and enterprise databases.
● Understanding of data governance, lineage, and RBAC principles.
● Strong analytical, problem-solving, and communication skills.

Nice to Have
● Prior experience with the Microsoft Fabric and OneLake platform.
● Knowledge of CI/CD practices in data engineering.
● Experience implementing monitoring/alerting tools for data pipelines.

Why Join Us?
● Opportunity to work on cutting-edge data engineering solutions.
● Fast-paced, collaborative environment with a focus on innovation and learning.
● Exposure to end-to-end data product development and deployment cycles.

Job Type: Contractual / Temporary
Contract length: 12 months
Pay: From ₹150,000.00 per month
Work Location: In person
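The data-quality checks this listing enumerates (null detection, duplicate handling, schema drift) reduce to a few operations in whatever engine the pipeline uses. A pandas sketch with made-up column names; a Fabric notebook would express the same checks in PySpark:

```python
import pandas as pd

# Hypothetical contract for the table; real pipelines load this from metadata.
EXPECTED_SCHEMA = {"order_id": "int64", "amount": "float64", "region": "object"}

def quality_report(df: pd.DataFrame) -> dict:
    actual = {col: str(dtype) for col, dtype in df.dtypes.items()}
    return {
        "null_counts": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "schema_drift": {
            "missing": sorted(set(EXPECTED_SCHEMA) - set(actual)),
            "unexpected": sorted(set(actual) - set(EXPECTED_SCHEMA)),
            "type_mismatch": {c: actual[c] for c in EXPECTED_SCHEMA
                              if c in actual and actual[c] != EXPECTED_SCHEMA[c]},
        },
    }

df = pd.DataFrame({
    "order_id": [1, 2, 2, 2],
    "amount": [10.0, None, 20.0, 20.0],        # one null, one duplicated row
    "channel": ["web", "web", "app", "app"],   # unexpected column; region missing
})
report = quality_report(df)
print(report["schema_drift"]["missing"])  # ['region']
print(report["duplicate_rows"])           # 1
```

In a medallion pipeline a report like this would gate promotion from the bronze to the silver layer, with failures routed to alerting.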

Posted 2 days ago

Apply

10.0 years

2 - 10 Lacs

Calcutta

On-site

Join our Team

About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role: we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale.

What you will do:
- Design and build robust ML pipelines and services for training, validation, and model deployment.
- Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements; communicate any deviation from the target architecture.
- Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability.
- Build reusable infrastructure components using best practices in DevOps and MLOps.
- Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data.
- Network Security: Design an optimal network plan for the given cloud infrastructure under the E// network security guidelines.
- Monitor model performance in production and implement drift detection and retraining pipelines.
- Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration).
- Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components.

The skills you bring:
- Strong programming skills in Python.
- Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost).
- Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML.
- Experience deploying models using Docker and Kubernetes.
- Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI).
- Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL).
- Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm).
- Experience with monitoring/logging (Prometheus, Grafana, ELK).

Good-to-Have Skills
- Experience with feature stores (Feast, Tecton) and experiment tracking platforms.
- Knowledge of edge/embedded ML, model quantization, and optimization.
- Familiarity with model governance, security, and compliance in ML systems.
- Exposure to on-device ML or streaming ML use cases.
- Experience leading cross-functional initiatives or mentoring junior engineers.

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what’s possible and to build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 770160

Posted 2 days ago

Apply

0 years

2 - 7 Lacs

Jaipur

On-site

ID: 346 | 2-5 yrs | Jaipur | careers

AI Senior Engineer

About the Role: We are looking for a highly capable AI Lead Engineer to contribute to the design and delivery of intelligent, scalable AI solutions. This role focuses on building production-grade systems involving LLMs, vector databases, agent-based workflows, and Retrieval-Augmented Generation (RAG) architectures on the cloud. The ideal candidate should demonstrate strong problem-solving abilities, hands-on technical skills, and the ability to align AI design with real-world business needs.

Must-Have Skills
- Strong coding ability in Python and proficiency in SQL.
- Hands-on experience with vector databases (e.g., Pinecone, FAISS, Weaviate).
- Practical experience with LLMs (OpenAI, Claude, Gemini, etc.) in real-world workflows.
- Familiarity with LangChain, LlamaIndex, or similar orchestration tools.
- Proven track record in delivering scalable AI systems on public cloud (Azure, AWS, or GCP).
- Experience building and optimizing RAG pipelines.
- Solid understanding of serverless/cloud-native architecture and event-driven design.
- Ability to integrate and expose AI solutions in applications using FastAPI, Flask, or Django.

Preferred or Good to Have Skills
- Exposure to agentic AI patterns and multi-agent coordination.
- Knowledge of AI system safety practices (e.g., hallucination filtering, grounding).
- Experience with MLOps tools (MLflow, Kubeflow) and CI/CD for ML.
- Understanding of concept/data drift and retraining strategies in production.
- Experience working on AI projects involving classification, regression, and clustering models.
- Prior work on multi-modal AI pipelines (vision + language).
- Familiarity with real-time inference tuning (batching, concurrency).
- Demonstrated ability to deliver a variety of AI solutions in production environments across domains.

Any Other:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- Strong analytical mindset with a clear focus on business-aligned AI delivery.
- Excellent verbal and written communication skills.

Key Responsibilities
- Collaborate with the Solution Architect to design agentic AI systems (e.g., ReAct, CodeAct, Self-Reflective Agents).
- Build and deploy scalable RAG pipelines using vector databases and embedding models.
- Integrate modern AI tools (e.g., LangChain, LlamaIndex, Kagi, Search APIs) into solution workflows.
- Optimize inference performance for cloud and edge environments.
- Contribute to the development of feedback loops, drift detection, and self-healing AI systems.
- Deploy, monitor, and manage AI solutions on cloud platforms (Azure, AWS, or GCP).
- Translate business use cases into robust technical solutions in collaboration with cross-functional teams.
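At their core, the RAG pipelines this listing keeps returning to are retrieve-then-generate: embed the query, pull the nearest documents from a vector store, and prepend them to the LLM prompt. A toy sketch in which a bag-of-words "embedding" stands in for a real embedding model and an in-memory list stands in for Pinecone/FAISS/Weaviate; the documents are invented:

```python
import math
from collections import Counter

DOCS = [
    "FAISS is a library for efficient similarity search over dense vectors",
    "Delta Lake provides ACID transactions on data lakes",
    "LangChain helps orchestrate LLM calls, tools, and retrievers",
]

def embed(text):
    """Toy embedding: term counts. A real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Rank the corpus by similarity to the query and return the top k docs."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("similarity search over vectors")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)  # the FAISS document
```

The generation half, omitted here, simply sends `prompt` to the chosen LLM; grounding the answer in retrieved context is what reduces hallucination.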

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI’s modular, automated, plug-and-play production solution.

By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges. We’re a venture-backed biotech startup located in Singapore where some of the world’s smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career.

Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities
- Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline.
- Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another).
- Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models.
- Build and maintain pipelines for training, testing, and deploying multiple models across different data domains.
- Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments.
- Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases.
- Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations.
- Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus.
- Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules).

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field.
- Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain.
- Exposure to hybrid modeling approaches and/or reinforcement learning strategies.

Experience
- Experience with multi-model systems.
- Worked with numerical/scientific (multi-modal) datasets.
- Hybrid modelling and/or RL (AI systems).

Core Technical Skills
- Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost
- Model Orchestration: MLflow, Prefect, Airflow
- Multi-model Systems: Ensemble learning, model stacking, conditional pipelines
- Reinforcement Learning: RLlib, Stable-Baselines3
- Optimization Libraries: Optuna, Hyperopt, GPyOpt
- Numerical & Scientific Computing: NumPy, SciPy, pandas
- Containerization & Deployment: Docker, FastAPI
- Workflow Management: Snakemake, Nextflow
- ETL & Data Pipelines: pandas pipelines, PySpark
- Data Versioning: Git
- API Design for modular ML blocks

You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
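The orchestration pattern this listing describes (conditional routing with fallback across sub-models) can be reduced to a small skeleton. The model and domain names below are placeholders invented for illustration, not Umami's actual models:

```python
# Conditional routing with fallback: try each model registered for the task's
# domain in priority order; if none can answer, fall back to human review.

class Model:
    def __init__(self, name, domain, healthy=True):
        self.name, self.domain, self.healthy = name, domain, healthy

    def predict(self, task):
        if not self.healthy or task["domain"] != self.domain:
            return None  # signal the router to try the next model in the chain
        return {"model": self.name, "value": 42.0}  # placeholder output

ROUTES = {
    "media": [Model("media_gp", "media", healthy=False),  # primary is down
              Model("media_baseline", "media")],
    "sensory": [Model("sensory_net", "sensory")],
}

def route(task):
    for model in ROUTES.get(task["domain"], []):
        result = model.predict(task)
        if result is not None:
            return result
    return {"model": "human_review", "value": None}  # final fallback

print(route({"domain": "media"})["model"])    # media_baseline (fallback used)
print(route({"domain": "texture"})["model"])  # human_review (no route)
```

The same shape extends naturally to ensembles: instead of returning the first successful prediction, the router can collect several and aggregate them.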

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview

We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
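The "self-healing automation" thread running through this listing usually means: detect an anomalous metric and fire a remediation action automatically instead of paging first. A stdlib-only sketch of the detection-plus-remediation loop; the 3-sigma threshold and the remediation hook are illustrative assumptions, and in the Azure stack described above the hook would be a Logic App or Function:

```python
import statistics

def is_anomalous(history, latest, sigmas=3.0):
    """Flag `latest` if it sits more than `sigmas` stddevs from the historical mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return stdev > 0 and abs(latest - mean) > sigmas * stdev

def self_heal(metric_name, history, latest, remediate):
    """Run the remediation hook only when the metric looks anomalous."""
    if is_anomalous(history, latest):
        return remediate(metric_name)  # e.g., restart a service, scale out
    return "no action"

history = [102, 98, 101, 99, 100, 103, 97, 100]  # steady p95 latency, ms
print(self_heal("p95_latency_ms", history, 250,
                remediate=lambda m: f"auto-remediated {m}"))  # latency spike
print(self_heal("p95_latency_ms", history, 101,
                remediate=lambda m: f"auto-remediated {m}"))  # no action
```

Production systems replace the z-score with seasonal or ML-based detectors, but the control loop (observe, detect, remediate, record) is the same.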

Posted 2 days ago

Apply

Exploring Drift Jobs in India

The drift job market in India is rapidly growing, with an increasing demand for professionals skilled in this area. Drift professionals are sought after by companies looking to enhance their customer service and engagement through conversational marketing.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

Average Salary Range

The average salary range for drift professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 10 lakhs per annum.

Career Path

A typical career path in the drift domain may progress from roles such as Junior Drift Specialist or Drift Consultant to Senior Drift Specialist, Drift Manager, and eventually reaching the position of Drift Director or Head of Drift Operations.

Related Skills

In addition to expertise in drift, professionals in this field are often expected to have skills in customer service, marketing automation, chatbot development, and data analytics.

Interview Questions

  • What is conversational marketing? (basic)
  • How would you handle a customer complaint through a drift chatbot? (medium)
  • Can you explain a scenario where you successfully implemented drift for a client? (medium)
  • What are some common challenges faced in drift implementation and how do you overcome them? (advanced)
  • How do you measure the success of a drift campaign? (medium)
  • Explain the importance of personalization in drift marketing. (medium)
  • How do you ensure compliance with data privacy regulations when using drift? (advanced)
  • What strategies would you implement to increase customer engagement through drift? (medium)
  • Can you provide examples of drift integrations with other marketing tools? (advanced)
  • How do you stay updated on the latest trends and developments in drift technology? (basic)
  • Describe a situation where you had to troubleshoot a technical issue in a drift chatbot. (medium)
  • How do you handle leads generated through drift to ensure conversion? (medium)
  • What are some best practices for setting up drift playbooks? (medium)
  • How do you customize drift for different target audiences? (medium)
  • Explain the difference between drift and traditional marketing methods. (basic)
  • Can you give an example of a successful drift campaign you were involved in? (medium)
  • How do you ensure a seamless transition between drift and human agents in customer interactions? (medium)
  • What metrics do you track to measure the effectiveness of a drift chatbot? (medium)
  • How do you handle negative feedback received through drift interactions? (medium)
  • What are the key components of a successful drift strategy? (medium)
  • How do you handle a high volume of customer inquiries through drift? (medium)
  • Explain the role of AI in drift marketing. (medium)
  • How do you ensure that drift chatbots are providing accurate information to customers? (medium)
  • Describe a situation where you had to customize drift to meet specific client requirements. (advanced)

Closing Remark

As you prepare for a career in drift jobs in India, remember to showcase your expertise, experience, and passion for conversational marketing. Stay updated on industry trends and technologies to stand out in the competitive job market. Best of luck in your job search!
