0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
We are building India's first commercial Scanning Electron Microscope (SEM) from the ground up. This is not an integration project. We will design our own electron optics, high-voltage systems, precision mechanics, and real-time control software. If the idea of turning first-principles physics into a manufacturable instrument excites you, read on.

Your Mission
- Build the real-time beam controller on a Zynq UltraScale+ (ARM + FPGA): pixel clock ≤20 MHz with ±5 ns jitter; blanking rise/fall <25 ns into 50 Ω.
- Implement closed-loop stage control using high-resolution position feedback; update the PID loop at 10 kHz.
- Stream 16-bit detector data at 250 MB/s over PCIe to host memory.
- Ship imaging algorithms: auto-focus, auto-stigmation, and drift correction running at ≥5 fps.
- Provide a clean, documented Python API for scripting and automated metrology.

Who You Are
- 6+ years of C++17/C, Python, and real-time Linux or RTOS experience; comfortable inside FPGA toolchains (Vivado, Verilog).
- Solid DSP and control-theory background; not afraid of scope probes or logic analyzers.
- Bonus: prior work on SEM, TEM, or other charged-particle systems.

To apply, share the hardest technical problem you have solved and why Bharat Atomic's mission fires you up, along with your CV and portfolio.
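The 10 kHz closed-loop stage control named above can be sketched as a discrete PID update. This is a minimal illustration, not the actual controller: the gains, the first-order plant model, and the 1.0 µm setpoint are all assumed for the example.

```python
# Minimal discrete PID sketch for a closed-loop stage controller.
# Gains, plant model, and setpoint are illustrative assumptions.

DT = 1.0 / 10_000  # 10 kHz control-loop period, as in the posting

class PID:
    def __init__(self, kp, ki, kd, dt=DT):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order stage driven toward a 1.0 µm setpoint for 2 s.
pid = PID(kp=2.0, ki=50.0, kd=0.0)
position = 0.0
for _ in range(20_000):
    drive = pid.update(1.0, position)
    position += drive * DT  # crude plant: velocity proportional to drive
print(round(position, 3))
```

In a real controller the update would run in an FPGA or RTOS thread against encoder feedback; the point here is only the shape of the 10 kHz loop.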
Posted 4 days ago
0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
We are building India's first commercial Scanning Electron Microscope (SEM) from the ground up. This is not an integration project. We will design our own electron optics, high-voltage systems, precision mechanics, and real-time control software. If the idea of turning first-principles physics into a manufacturable instrument excites you, read on.

Your Mission
- Own the complete electron-optical column: thermionic source, condenser and objective lenses, SE/BSE detectors.
- Push a tungsten cathode to ≤10 nm resolution at 20 kV while also delivering a stable, high-current beam (≥1 nA) for analytical work, among the best ever reported for a thermionic system.
- Minimise spherical and chromatic aberrations through iterative modelling (COMSOL / SIMION / Opera-3D) and hardware prototyping.
- Derive and flow down stability and alignment budgets to the high-voltage and mechanical teams (≤1 µm source drift / 8 hr, ≤50 nm lens-to-lens decenter).
- Design modular, UHV-compatible lens stacks that can later accept a Schottky or cold-field upgrade without a full redesign.

Who You Are
- PhD or 6+ years of R&D in electron/ion optics, accelerator physics, or charged-particle instrumentation.
- Hands-on with high-vacuum hardware, electron guns, and electrostatic/magnetic lens design.
- Fluent in at least one high-fidelity field solver (COMSOL, SIMION, CST).
- Comfortable translating aberration theory into tolerance tables a machinist can hit.

To apply, share the hardest technical problem you have solved and why Bharat Atomic's mission fires you up, along with your CV and portfolio.
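The ≤10 nm resolution target above is usually budgeted as a quadrature sum of probe-diameter contributions (diffraction, spherical aberration, chromatic aberration, demagnified source). A back-of-envelope version, with every coefficient an illustrative assumption rather than the column's real parameters:

```python
import math

# First-order probe-diameter estimate: contributions combined in
# quadrature. All coefficients below are assumed for illustration.

def probe_diameter(alpha, wavelength, cs, cc, dE_over_E, d_source):
    d_diff = 1.22 * wavelength / alpha   # diffraction disc
    d_sph  = 0.5 * cs * alpha**3         # spherical-aberration disc
    d_chr  = cc * dE_over_E * alpha      # chromatic-aberration disc
    return math.sqrt(d_diff**2 + d_sph**2 + d_chr**2 + d_source**2)

# 20 kV electrons (wavelength ~8.6 pm); hypothetical Cs = 20 mm,
# Cc = 10 mm, 1.5 eV thermionic energy spread, 5 nm source image.
d = probe_diameter(alpha=5e-3, wavelength=8.6e-12,
                   cs=20e-3, cc=10e-3,
                   dE_over_E=1.5 / 20_000, d_source=5e-9)
print(f"{d * 1e9:.1f} nm")
```

With these assumed numbers the estimate lands in the single-digit-nanometre range, consistent with the ≤10 nm goal; the real budget would of course come from the field-solver models the posting mentions.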
Posted 4 days ago
0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
We are building India's first commercial Scanning Electron Microscope (SEM) from the ground up. This is not an integration project. We will design our own electron optics, high-voltage systems, precision mechanics, and real-time control software. If the idea of turning first-principles physics into a manufacturable instrument excites you, read on.

Your Mission
- Architect the floating HV stack: 30 kV, 200 µA beam supply; 5 ppm rms ripple (10 Hz-1 MHz) at the gun terminal; 20 ppm/hr long-term drift (including reference and tempco) at 25 °C ± 2 °C.
- Design resonant converters (LLC) followed by ultra-low-noise linear post-regulators.
- Deliver arc handling (≤5 µs crowbar) and IEC 61010-1-compliant safety interlocks.
- Create on-board diagnostics: 16-bit delta-sigma ADCs, optically isolated feedback, FFT logging to the control system.

Who You Are
- 8+ years of precision analog / power electronics experience; proven HV designs ≥5 kV.
- Expert in SPICE, magnetics design, and PCB layout for >100 dB isolation.
- Obsessed with noise budgets, Kelvin sensing, and ppm-level calibration.

To apply, share the hardest technical problem you have solved and why Bharat Atomic's mission fires you up, along with your CV and portfolio.
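For a sense of scale, the ppm specs above translate into absolute voltages with simple arithmetic (using only figures quoted in the posting; the 8-hour linear extrapolation is an assumption):

```python
# Converting the posting's ppm specs into absolute numbers.

V_BEAM = 30e3          # 30 kV beam supply
RIPPLE_PPM = 5         # 5 ppm rms, 10 Hz-1 MHz
DRIFT_PPM_PER_HR = 20  # long-term drift spec

ripple_rms = V_BEAM * RIPPLE_PPM * 1e-6
drift_8hr = V_BEAM * DRIFT_PPM_PER_HR * 1e-6 * 8  # if drift were linear

print(ripple_rms, drift_8hr)  # → 0.15 V rms ripple, 4.8 V over 8 h
```

So the gun terminal must hold 30 kV with only 150 mV rms of ripple, which is why the posting pairs the LLC converter with a linear post-regulator.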
Posted 4 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India +1 more | Job ID 770510

Join our Team

About this opportunity: At Ericsson, you can be a game changer! Because working here isn’t just a deal. It’s a big deal. This means that you get to leverage our 140+ years of experience and the expertise of more than 95,000 diverse colleagues worldwide. As part of our team, you will help solve some of society’s most complicated challenges, enabling you to be ‘the person that did that.’ We’ve never had a greater opportunity to inspire change: setting the bar for technology to be inclusive and accessible, empowering a hard-working, sustainable, and connected world.

What you will do
- End-to-end administration and operations of CNIS platforms, including Kubernetes, container runtimes, and cloud-native infrastructure components.
- Manage and optimize Cloud Container Distribution (CCD), covering provisioning, upgrades, backup/restore, lifecycle management, and disaster-recovery readiness.
- Perform CCD health checks, configuration drift analysis, capacity tracking, and environment validation across clusters.
- Work closely with app and platform teams to onboard workloads and provide CCD-level support.
- Monitor system performance and availability using tools like Prometheus and Grafana.
- Ensure compliance with security, governance, and operational policies.
- Participate in and lead DR planning, execution, and RCA of major incidents.
- Maintain detailed runbooks, MOPs, and CNIS documentation for operational readiness.
- Monitor SDI health and performance, resolve bottlenecks, and ensure seamless integration with CNIS layers.
- Good troubleshooting experience with CNIS components.
- Expertise in Red Hat Enterprise Linux (RHEL) system administration and troubleshooting.
- Hardware troubleshooting experience on HP ProLiant and Dell PowerEdge servers.
- Patch management, firmware upgrades, and OS-level tuning.
- Strong hands-on experience with Juniper and Extreme Networks switches.
- Switch boot process, firmware upgrades, backup/restore, border-leaf switch management and troubleshooting.
- Familiarity with traffic filtering, isolation, and multi-tenant CNIS network architecture.
- Configuration of VLANs, trunking, port profiles (access/trunk), and ACLs.
- Strong knowledge of Ceph storage administration and troubleshooting.

The skills you bring:
- 10+ years of experience in supporting and managing business-critical operations.
- In-depth knowledge of and working experience with Linux administration.
- In-depth knowledge of and working experience with Docker, Kubernetes, and at least one cloud platform.
- Troubleshooting and debugging skills to find issues.
- Understanding of SDN and integration with container networking layers (e.g., Calico).
- Performance analysis, component replacement, switch stacking, and chassis management.

Certifications preferred:
- CKA / CKAD / CKS Kubernetes certifications
- Red Hat Certified System Administrator (RHCSA/RHCE)
- Networking certifications (Juniper/Extreme/SDN)
- ITIL Foundation (preferred in process-driven environments)

Why join Ericsson? At Ericsson, you´ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what´s possible. To build solutions never seen before to some of the world’s toughest problems. You´ll be challenged, but you won’t be alone. You´ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
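The "configuration drift analysis" task mentioned in this posting boils down to diffing desired state against observed state per cluster node. A toy sketch of that check; the key names and versions are invented for illustration, not CCD's actual schema:

```python
# Toy configuration-drift check: compare a desired-state dict against
# observed state and report divergent keys. Key names are invented.

def config_drift(desired: dict, observed: dict) -> dict:
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"desired": want, "observed": have}
    return drift

desired = {"kubelet_version": "1.29.4", "max_pods": 110, "cni": "calico"}
observed = {"kubelet_version": "1.29.1", "max_pods": 110, "cni": "calico"}

report = config_drift(desired, observed)
print(report)  # only kubelet_version has drifted
```

Real tooling layers this idea over GitOps state and node inventories, but the control point, desired versus observed, is the same.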
Posted 4 days ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description: The AI/ML engineer role requires a blend of expertise in machine learning operations (MLOps), ML engineering, data science, Large Language Models (LLMs), and software engineering principles.

Skills you'll need to bring:
- Experience building production-quality ML and AI systems.
- Experience in MLOps and real-time ML and LLM model deployment and evaluation.
- Experience with RAG frameworks and agentic workflows is valuable.
- Proven experience deploying and monitoring large language models (e.g., Llama, Mistral).
- Ability to improve evaluation accuracy and relevancy using creative, cutting-edge techniques from both industry and new research.
- Solid understanding of real-time data processing and monitoring tools for model drift and data validation.
- Knowledge of observability best practices specific to LLM outputs, including semantic similarity, compliance, and output quality.
- Strong programming skills in Python and familiarity with API-based model serving.
- Experience with LLM management and optimization platforms (e.g., LangChain, Hugging Face).
- Familiarity with data engineering pipelines for real-time input-output logging and analysis.

Qualifications:
- Experience working with common AI-related models, frameworks, and toolsets such as LLMs, vector databases, NLP, prompt engineering, and agent architectures.
- Experience in building AI and ML solutions.
- Strong software engineering skills for the rapid and accurate development of AI models and systems.
- Proficiency in a programming language such as Python.
- Hands-on experience with technologies like Databricks and Delta tables.
- Broad understanding of data engineering (SQL, NoSQL, big data), Agile, UX, cloud, software architecture, and ModelOps/MLOps.
- Experience in CI/CD and testing, including building container-based stand-alone applications using tools like GitHub, Jenkins, Docker, and Kubernetes.

Responsibilities:
- Participate in research and innovation on data science projects that have impact on our products and customers globally.
- Apply ML expertise to train models, validate their accuracy, and deploy them at scale to production.
- Apply best practices in MLOps, LLMOps, data science, and software engineering to ensure the delivery of clean, efficient, and reliable code.
- Aggregate huge amounts of data from disparate sources to discover patterns and features necessary to automate the analytical models.

About Company: Improva is a global IT solution provider and outsourcing company with contributions across several domains including FinTech, Healthcare, Insurance, Airline, Ecommerce & Retail, Logistics, Education, Startups, Government & Semi-Government, and more.
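One of the LLM-observability practices this posting names, semantic similarity of outputs, is typically a cosine similarity between embedding vectors. A self-contained sketch; real systems would obtain the vectors from a sentence-embedding model, whereas the 4-dimensional vectors here are stand-ins:

```python
import math

# Sketch of an LLM output-quality check via cosine similarity between
# an answer embedding and a reference embedding. The short vectors
# below are stand-ins for real sentence embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

reference = [0.2, 0.8, 0.1, 0.5]
answer = [0.25, 0.75, 0.05, 0.55]

score = cosine(reference, answer)
print(round(score, 3))
```

In a monitoring pipeline, outputs scoring below a tuned threshold would be flagged for review alongside compliance and quality checks.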
Posted 4 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities
- Architect and build real-time feature pipelines and model training workflows.
- Design, train, validate, and deploy RUL predictors, anomaly detectors, and LP-based grid solvers.
- Implement MLOps best practices: MLflow/Kubeflow pipelines, model registry, canary deployments.
- Collaborate on explainability modules (SHAP, LIME) and drift-detection alerts.
- Optimize GPU utilization; automate retraining schedules and performance monitoring.

Skills & Experience
- 3+ years in ML engineering or data science roles; production-grade ML deployments.
- Expertise in time-series modeling: LSTM, GRU, isolation forest, ensemble methods.
- Strong Python skills; frameworks: TensorFlow, PyTorch, Scikit-Learn.
- Experience with Kubernetes-based MLOps: Kubeflow, KServe, MLflow.
- Proficiency tuning and deploying on NVIDIA GPUs (H100, H200).

Nice-to-Have
- Domain experience in predictive maintenance, grid optimization, or IIoT.
- Familiarity with feature-store design (TimescaleDB, Feast) and Spark-on-GPU.
- Knowledge of explainability libraries and regulatory compliance for AI.
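The drift-detection alerts mentioned above are often driven by a simple statistic such as the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against serving time. A minimal sketch with invented bin proportions:

```python
import math

# Population Stability Index (PSI), a common drift-detection metric.
# Bin proportions below are illustrative; each list sums to 1.

def psi(expected, actual):
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0)
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training
serve_dist = [0.40, 0.30, 0.20, 0.10]  # distribution seen in production

score = psi(train_dist, serve_dist)
print(round(score, 3))
```

A common rule of thumb treats PSI above roughly 0.2 as significant drift worth an alert; the thresholds would be tuned per feature in practice.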
Posted 4 days ago
3.0 - 12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
HDFC Bank – Model Risk Management Team Openings
Generative AI Governance – Analyst / Senior Analyst
Experience: 3-12 years

Passionate about Generative AI and its risks? HDFC Bank is hiring Model Risk Managers to help safeguard its next-gen AI systems. The Bank is building a team of AI specialists responsible for the entire lifecycle of Generative AI models, from initial validation through ongoing monitoring and governance. As a member of the AI Governance team, you'll ensure the robustness, safety, and compliance of Generative AI systems through comprehensive technical assessments, risk evaluations, and performance monitoring.

What You’ll Do:
- Develop and evolve Gen AI validation methodologies and frameworks in line with regulatory requirements and industry best practices
- Validate foundation models, Gen AI solutions, fine-tuning approaches, and prompt engineering
- Implement monitoring for deployed AI solutions
- Assess risks of AI models and solutions to address hallucinations, bias, toxicity, adversarial threats, model drift, and regulatory/ethical challenges
- Investigate performance issues and develop remediation plans
- Create executive-ready reports and remediation plans
- Partner with various stakeholders to embed risk controls into MLOps/LLMOps pipelines

Recommended Experience / Exposure:
- Hands-on with foundation-model architectures, RAG, fine-tuning, and prompt-engineering approaches
- Experience in either development or evaluation of Gen AI solutions, plus knowledge of LLM architectures
- Familiarity with the end-to-end Generative AI lifecycle is crucial
- Knowledge of guardrails and risk-mitigation techniques for hallucination, toxicity, bias, and adversarial/security concerns
- 3-12 years of overall experience

How You’ll Thrive:
- Rapid learner with a natural curiosity for emerging AI technologies
- Clear communicator who can translate technical insights into business risks
- Comfortable with ambiguity and skilled at structured problem-solving
- Collaborative mindset, working seamlessly across technical and business teams
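One family of hallucination guardrails this role would evaluate checks whether an answer is grounded in retrieved context. Production validators use NLI models or LLM judges; the lexical-overlap toy below (with invented example sentences) only illustrates where such a control sits:

```python
# Toy groundedness guardrail: flag an answer whose tokens barely
# overlap the retrieved context. Example strings are invented; real
# validators use NLI models or LLM judges rather than token overlap.

def grounding_score(answer: str, context: str) -> float:
    a = set(answer.lower().split())
    c = set(context.lower().split())
    return len(a & c) / len(a) if a else 0.0

context = "the bank reported a net profit of 164 billion rupees in q4"
grounded = "net profit was 164 billion rupees in q4"
hallucinated = "the bank acquired three fintech startups last quarter"

print(round(grounding_score(grounded, context), 2))
print(round(grounding_score(hallucinated, context), 2))
```

A validation framework would calibrate the cut-off on labelled examples and route low-scoring answers to a blocking or human-review path.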
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
AVEVA is creating software trusted by over 90% of leading industrial companies.

Job Title: Salesforce Developer
Location: Hyderabad, India
Employment type: Full time, regular, hybrid work
Benefits: Gratuity, medical and accidental insurance, very attractive leave entitlement, emergency leave days, childcare support, maternity, paternity and adoption leaves, education assistance program, home office set-up support (for hybrid roles), well-being support.

The job
We are looking for a motivated and talented Salesforce Developer to join our growing Salesforce Center of Excellence team. The ideal candidate will have a passion for technology, a strong desire to learn, and the ability to work collaboratively in a fast-paced environment. As a Salesforce Developer, you will work closely with business analysts, senior developers, capability owners, and project managers to design, develop, and implement Salesforce solutions that meet our business needs.

Key Responsibilities
- Lead the design and development of complex Salesforce solutions, including customizations, integrations, and enhancements.
- Collaborate with business stakeholders to gather and analyze requirements and translate them into technical solutions.
- Architect scalable and robust Salesforce solutions that align with best practices and industry standards.
- Mentor and provide guidance to junior developers, sharing your expertise and knowledge of the Salesforce platform.
- Develop and maintain Apex code, Visualforce pages, Lightning components, and other custom solutions.
- Implement and customize Salesforce features such as objects, fields, workflows, validation rules, and process automation.
- Conduct code reviews and ensure adherence to coding standards and best practices.
- Perform unit testing, debugging, and troubleshooting to ensure the quality and stability of Salesforce applications.
- Manage deployments and release-management processes in Salesforce environments.
- Provide technical leadership and support for ongoing maintenance and administration of Salesforce orgs.
- Stay up to date on Salesforce updates, new features, and emerging technologies, and evaluate their impact on our Salesforce ecosystem.
- Collaborate with cross-functional teams to drive continuous improvement and innovation in Salesforce solutions.

Basic Requirements
- Bachelor’s degree in computer science, information technology, or a related field.
- 3-7 years of hands-on experience with the Salesforce platform, including configuration, customization, and development, with proven experience in a leadership or team-lead role.

Essential Requirements and Skills
- Salesforce Certified Platform Developer II certification is preferred.
- Strong proficiency in Apex, Visualforce, Lightning Web Components, SOQL, SOSL, and Salesforce APIs.
- Experience with Salesforce integration technologies such as REST/SOAP APIs, Platform Events, and Salesforce Connect.
- In-depth understanding of the Salesforce data model, security model, and sharing settings.
- Experience with Salesforce development tools, including Salesforce DX, Git, Copado, and CI/CD pipelines.
- Extensive experience in Sales Cloud implementation and customization, Partner portal, and sales and marketing tools.
- Experience managing integrations with sales and marketing tools such as People.ai, Quip, LinkedIn Sales Navigator, D&B, Demandbase, Drift, and other tools related to the partner process.
- Strong leadership, communication, and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and stakeholders.
- Commitment to implementing best practices in Salesforce development.
- Proven track record of delivering complex Salesforce projects on time and within budget.
- Excellent problem-solving skills and attention to detail.
- Experience with Agile/Scrum methodologies is a plus.
- Lead-to-quote domain knowledge is a plus.
- Experience with MuleSoft is a plus.
It’s possible we’re hiring for this position in multiple countries, in which case the benefits listed above apply to the primary location. Specific benefits vary by country, but our packages are similarly comprehensive. Find out more: aveva.com/en/about/careers/benefits/

Hybrid working
By default, employees are expected to be in their local AVEVA office three days a week, but some positions are fully office-based. Roles supporting particular customers or markets are sometimes remote.

Hiring process
Interested? Great! Get started by submitting your cover letter and CV through our application portal. AVEVA is committed to recruiting and retaining people with disabilities. Please let us know in advance if you need reasonable support during your application process. Find out more: aveva.com/en/about/careers/hiring-process

About AVEVA
AVEVA is a global leader in industrial software with more than 6,500 employees in over 40 countries. Our cutting-edge solutions are used by thousands of enterprises to deliver the essentials of life – such as energy, infrastructure, chemicals, and minerals – safely, efficiently, and more sustainably. We are committed to embedding sustainability and inclusion into our operations, our culture, and our core business strategy. Learn more about how we are progressing against our ambitious 2030 targets: sustainability-report.aveva.com/ Find out more: aveva.com/en/about/careers/

AVEVA requires all successful applicants to undergo and pass a drug screening and comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment-history verification, proof of work authorization, criminal records, identity verification, and a credit check. Certain positions dealing with sensitive and/or third-party personal data may involve additional background-check criteria.

AVEVA is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. AVEVA provides reasonable accommodation to applicants with disabilities where appropriate. If you need reasonable accommodation for any part of the application and hiring process, please notify your recruiter. Determinations on requests for reasonable accommodation will be made on a case-by-case basis.
Posted 5 days ago
7.0 years
35 - 40 Lacs
India
Remote
Job Title: Azure DevOps Engineer (MLOps) - Lead
Location: Remote (an initial 2-3 months of travel to the Abu Dhabi, UAE office is a must; you can then continue remotely from India)
Employment Type: Full-time

About The Role
Our client, a leading AWS Premier Partner, is seeking a highly skilled Lead DevOps/MLOps Engineer (Azure, Terraform) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in cloud DevOps practices and a passion for implementing MLOps solutions at scale.

Key Responsibilities
- Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps
- Develop and maintain infrastructure-as-code using Terraform
- Manage container orchestration environments using Kubernetes
- Ensure cloud infrastructure is optimized, secure, and monitored effectively
- Collaborate with data science teams to support ML model deployment and operationalization
- Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow)
- Build and maintain automated ML pipelines to streamline model lifecycle management

Required Skills
- 7+ years of experience in DevOps and/or MLOps roles
- Proficient with CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
- Strong expertise in Terraform and cloud-native infrastructure (AWS preferred)
- Hands-on experience with Kubernetes, Docker, and microservices
- Solid understanding of cloud networking, security, and monitoring
- Scripting proficiency in Bash and Python

Preferred Skills
- Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines
- Knowledge of model performance monitoring and ML system reliability
- Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP

Skills: DevOps, Bash, Kubeflow, SageMaker Pipelines, security, Terraform, Python, microservices, monitoring, TFX, Kubernetes, Jenkins, GitHub Actions, Azure, CI/CD tools, cloud networking, Azure DevOps, MLflow, Docker
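The blue-green deployment strategy named in the responsibilities keeps two environments, serves traffic from one while the other is upgraded and verified, then cuts over in a single reversible switch. A minimal sketch of that state machine; the class and version names are invented, not a real tool's API:

```python
# Sketch of blue-green deployment state: one live environment serves
# traffic while the idle one is upgraded; cut-over is a single switch
# and rollback is the same switch in reverse. Names are illustrative.

class BlueGreen:
    def __init__(self, initial_version):
        self.envs = {"blue": initial_version, "green": None}
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        self.envs[self.idle] = version  # upgrade the idle environment

    def switch(self):
        self.live = self.idle           # atomic cut-over

router = BlueGreen("v1")
router.deploy("v2")   # green now holds v2; blue keeps serving v1
router.switch()       # traffic moves to green (v2)
print(router.live, router.envs[router.live])
router.switch()       # rollback is just switching back
print(router.live, router.envs[router.live])
```

In Kubernetes this switch is typically a Service selector or ingress change; the value of the pattern is that rollback never requires a redeploy.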
Posted 5 days ago
8.0 years
4 - 9 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing, whatever your ambitions.

About the position: Senior Engineer – Agentic AI
Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks.

Key Responsibilities:
- Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution.
- Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents.
- Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools.
- Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency.
- Own backend services that scale agent workloads, optimize GPU/accelerator utilization, and enforce cost governance.
- Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle.
- Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities.
- Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments.

Required:
- 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems.
- Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex).
- Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools.
- Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines.
- Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection.
- Strong background in cloud-native microservices, security, and observability.

Requisition ID: 610421

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most, united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do, as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
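The tool-execution loop at the heart of the agentic systems this role describes has a simple shape: a planner (normally an LLM call) picks a tool, the runtime executes it, and the observation is fed back until the planner emits a final answer. A stubbed sketch with invented tool names; the hard-coded planner stands in for a real model call:

```python
# Minimal agent tool-use loop. fake_planner is a stand-in for an LLM
# call; tool names and the step policy are invented for illustration.

TOOLS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def fake_planner(goal, history):
    # Returns ((tool, args), None) to act, or (None, answer) to finish.
    if not history:
        return ("add", (2, 3)), None
    if len(history) == 1:
        return ("mul", (history[0], 10)), None
    return None, history[-1]

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):       # cost governance: bounded steps
        action, answer = fake_planner(goal, history)
        if action is None:
            return answer
        tool, args = action
        history.append(TOOLS[tool](*args))  # execute and record result
    raise RuntimeError("step budget exhausted")

print(run_agent("compute (2 + 3) * 10"))  # → 50
```

Frameworks like LangGraph or AutoGen add state graphs, memory, and multi-agent routing around this loop, but the guarded act-observe cycle is the core invariant.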
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities
- Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
- Implement standardized, reusable CI/CD templates for application, ML, and data services.
- Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
- Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
- Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
- Lead internal infrastructure and security audits and maintain compliance records where required.
- Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
- Collaborate with developers to integrate structured logging, tracing, and health checks into services.
- Enable root-cause-detection workflows and performance monitoring for infrastructure and deployments.
- Work closely with application, data, and ML teams to support provisioning, deployment, and infrastructure readiness.
- Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
- Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
- Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
- Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
- Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have
- Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code-quality enforcement
- Proficiency in scripting languages such as Bash, Python, or shell for automation and orchestration tasks
- Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting
- Familiarity with SQL for validating database changes, debugging issues, and running schema checks
- Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling
- Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads
- Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium
- Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups
- Strong exposure to API testing, load/performance testing, and reliability validation
- Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog)
- Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry
- Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices
- Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates
- Excellent communication skills and a proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
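The structured logging this posting asks candidates to integrate usually means one JSON object per event, so observability pipelines (Graylog, OpenTelemetry collectors, and the like) can index fields without regex parsing. A minimal stdlib sketch; the field names are illustrative conventions, not a required schema:

```python
import json
import time

# Structured-logging sketch: emit one JSON object per event so log
# pipelines can parse fields directly. Field names are illustrative.

def log_event(service, level, message, **fields):
    record = {
        "ts": time.time(),
        "service": service,
        "level": level,
        "message": message,
        **fields,
    }
    print(json.dumps(record, sort_keys=True))  # one event per line
    return record

rec = log_event("deploy-api", "info", "health check passed",
                status_code=200, latency_ms=12.5)
```

Because every event is machine-parseable, alerting rules and anomaly detectors can key directly on fields like `status_code` or `latency_ms` instead of scraping free text.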
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Company: DataRepo Private Limited
Location: Remote (work from home)
Working Hours: 6:00 PM to 2:00 AM IST (strict shift adherence required)
Salary: ₹35,000/month (fixed)

About the Role:
We are hiring a Data Science Engineer to join our growing remote team focused on building and deploying real-time fraud-detection models for credit card transactions. This role is ideal for a professional with strong hands-on experience in machine learning, big-data engineering, and financial risk systems. You will work closely with cross-functional teams to develop production-ready solutions using Azure, Databricks, and Spark, helping prevent financial fraud at scale.

Key Responsibilities:
- Design, build, and deploy machine learning models for real-time fraud detection
- Analyze large-scale financial transaction data to identify suspicious patterns
- Create and manage data pipelines and workflows using Databricks on Azure
- Collaborate with engineering, fraud operations, and compliance teams
- Monitor model performance and implement feedback loops for retraining and drift handling
- Optimize data workflows for cost, performance, and accuracy

Required Skills & Qualifications:
- Minimum 7 years of experience in data science or ML engineering roles
- Strong programming experience in Python and SQL
- Solid understanding of ML algorithms (supervised, unsupervised, anomaly detection, etc.)
- Proven experience with fraud-detection systems or financial transaction modeling
- Hands-on experience with Databricks, Apache Spark, and Azure ML
- Strong knowledge of model evaluation, monitoring, and retraining strategies
- Ability to work remotely with strict adherence to the 6 PM - 2 AM IST shift

Preferred Skills (Nice to Have):
- Prior experience in payments, banking, or financial services
- Familiarity with Microsoft Fabric and stream analytics
- Exposure to Kafka and real-time data processing
- Experience working directly with fraud ops, risk, or compliance teams

Additional Notes:
You will be required to sign an NDA. Disclosure of internal work or salary details is strictly prohibited. Strong commitment and communication are expected; you'll be working with a remote team and may need to interface with clients.
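A first-pass filter in a fraud pipeline like the one described often scores each transaction against the customer's history, for example with a z-score on the amount. The sketch below uses invented numbers and a single feature; production systems combine many features with learned models (isolation forests, gradient boosting, etc.):

```python
import statistics

# Toy anomaly check: flag transactions far from a customer's
# historical mean amount. Amounts and threshold are invented.

def zscore_flags(history, new_amounts, threshold=3.0):
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    return [abs(a - mu) / sigma > threshold for a in new_amounts]

history = [120, 95, 110, 130, 105, 98, 115, 125]   # past amounts
flags = zscore_flags(history, [118, 9500])

print(flags)  # → [False, True]: only the 9500 transaction is flagged
```

Flagged transactions would then feed the richer model and the fraud-ops review queue rather than being blocked outright on this statistic alone.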
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Title: Data Scientist Job Location: Jaipur Experience: 2 to 5 years Job Description: We are seeking a highly skilled and innovative Data Scientist to join our dynamic and forward-thinking team. This role is ideal for someone who is passionate about advancing the fields of Classical Machine Learning, Conversational AI, and Deep Learning Systems, and thrives on translating complex mathematical challenges into actionable machine learning models. The successful candidate will focus on developing, designing, and maintaining cutting-edge AI-based systems, ensuring seamless and engaging user experiences. Additionally, the role involves active participation in a wide variety of Natural Language Processing (NLP) tasks, including refining and optimizing prompts to enhance the performance of Large Language Models (LLMs). Key Responsibilities: • Generative AI Solutions: Develop innovative Generative AI solutions using machine learning and AI technologies, including building and fine-tuning models such as GANs, VAEs, and Transformers. • Classical ML Models: Design and develop machine learning models (regression, decision trees, SVMs, random forests, gradient boosting, clustering, dimensionality reduction) to address complex business challenges. • Deep Learning Systems: Train, fine-tune, and deploy deep learning models such as CNNs, RNNs, LSTMs, GANs, and Transformers to solve AI problems and optimize performance. • NLP and LLM Optimization: Participate in Natural Language Processing activities, refining and optimizing prompts to improve outcomes for Large Language Models (LLMs), such as GPT, BERT, and T5. • Data Management & Feature Engineering: Work with large datasets, perform data preprocessing, augmentation, and feature engineering to prepare data for machine learning and deep learning models.
• Model Evaluation & Monitoring: Fine-tune models through hyperparameter optimization (grid search, random search, Bayesian optimization) to improve performance metrics (accuracy, precision, recall, F1-score). Monitor model performance to address drift, overfitting, and bias. • Code Review & Design Optimization: Participate in code and design reviews, ensuring quality and scalability in system architecture and development. Work closely with other engineers to review algorithms, validate models, and improve overall system efficiency. • Collaboration & Research: Collaborate with cross-functional teams including data scientists, engineers, and product managers to integrate machine learning solutions into production. Stay up to date with the latest AI/ML trends and research, applying cutting-edge techniques to projects. Qualifications: • Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, Data Science, or any related field. • Experience in Machine Learning: Extensive experience in both classical machine learning techniques (e.g., regression, SVM, decision trees) and deep learning systems (e.g., neural networks, transformers). Experience with frameworks such as TensorFlow, PyTorch, or Keras. • Natural Language Processing Expertise: Proven experience in NLP, especially with Large Language Models (LLMs) like GPT, BERT, or T5. Experience in prompt engineering, fine-tuning, and optimizing model outcomes is a strong plus. • Programming Skills: Proficiency in Python and relevant libraries such as NumPy, Pandas, Scikit-learn, and natural language processing libraries (e.g., Hugging Face Transformers, NLTK, SpaCy). • Mathematical & Statistical Knowledge: Strong understanding of statistical modeling, probability theory, and mathematical optimization techniques used in machine learning.
• Model Deployment & Automation: Experience with deploying machine learning models into production environments using platforms such as AWS SageMaker, Azure ML, GCP AI, or similar. Familiarity with MLOps practices is an advantage. • Code Review & System Design: Experience in code review, design optimization, and ensuring quality in large-scale AI/ML systems. Understanding of distributed computing and parallel processing is a plus. Soft Skills & Behavioural Qualifications: • Must be a good team player and self-motivated to achieve positive results • Must have excellent communication skills in English. • Exhibits strong presentation skills with attention to detail. • It’s essential to have a strong aptitude for learning new techniques. • Takes ownership of responsibilities • Demonstrates a high degree of reliability, integrity, and trustworthiness • Ability to manage time, displays appropriate sense of urgency and meet/exceed all deadlines • Ability to accurately process high volumes of work within established deadlines. Interested candidates can share their CV or a reference at sulabh.tailang@celebaltech.com
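The hyperparameter-optimization step described above (grid search over a model, scored on metrics like F1) can be sketched in a few lines of scikit-learn. The dataset, parameter grid, and model choice here are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data as a stand-in for a real problem.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# A small, assumed grid; real grids would cover more hyperparameters.
grid = {"n_estimators": [50, 100], "max_depth": [3, None]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid=grid,
    scoring="f1",   # one of the metrics named in the posting
    cv=3,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Random search (`RandomizedSearchCV`) and Bayesian optimizers follow the same fit/score pattern; only the way candidate points are proposed differs.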
Posted 6 days ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
ML Ops Engineer (Remote). Are you passionate about scaling machine learning models in the cloud? We're on the hunt for an experienced ML Ops Engineer to help us build scalable, automated, and production-ready ML infrastructure across multi-cloud environments. Location : Remote. Experience : 5+ Years. What You'll Do Design and manage scalable ML pipelines and deployment frameworks. Own the full ML lifecycle: training, versioning, deployment, and monitoring. Build cloud-native infrastructure on AWS, GCP, or Azure. Automate deployment using CI/CD tools like Jenkins and GitLab. Containerize and orchestrate ML apps with Docker and Kubernetes. Use tools like MLflow, TensorFlow Serving, Kubeflow. Partner with Data Scientists & DevOps to ship robust ML solutions. Set up monitoring systems for model drift and performance. We're Looking For : 5+ years of experience in MLOps or DevOps for ML systems. Hands-on with at least two cloud platforms : AWS, GCP, or Azure. Proficient in Python and ML libraries (TensorFlow, Scikit-learn, etc.). Strong skills in Docker, Kubernetes, and cloud infrastructure automation. Experience building CI/CD pipelines (Jenkins, GitLab CI/CD, etc.). Familiarity with tools like MLflow and TensorFlow Serving. Must-have skills : Strong experience in any two cloud technologies (Azure, AWS, GCP). (ref:hirist.tech)
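The "versioning" stage of the lifecycle above is, at its core, tying a trained artifact to an immutable identifier. A minimal sketch that serializes a model and derives a content-addressed version tag from its bytes; in practice a registry such as MLflow would manage this, so the hashing scheme here is an illustrative stand-in, not any tool's API:

```python
import hashlib
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Content-addressed version: identical weights always hash to the same tag.
blob = pickle.dumps(model)
version = hashlib.sha256(blob).hexdigest()[:12]
path = f"model-{version}.pkl"
with open(path, "wb") as f:
    f.write(blob)
print(path)
```

A deployment pipeline can then promote `model-<version>.pkl` through environments by tag, and the monitoring stage can log which version produced each prediction.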
Posted 6 days ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. About The Position Senior Engineer – Agentic AI: Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks. Key Responsibilities Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution. Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents. Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools. Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency. Own backend services that scale agent workloads, optimize GPU / accelerator utilization, and enforce cost governance. Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle. Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities. Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments. Required 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems. 
Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex). Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools. Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines. Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection. Strong background in cloud-native micro-services, security, and observability. Requisition ID: 610421 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
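The vector databases and RAG pipelines required above all reduce to one core operation: nearest-neighbor search over embeddings. A minimal sketch in plain NumPy as a stand-in for a vector store (FAISS, pgvector, etc.); the toy vectors are hand-written assumptions, not output of a real embedding model:

```python
import numpy as np

# Toy corpus and "embeddings"; a real pipeline would embed with a model
# and store the vectors in a vector database.
docs = ["gpu cost governance", "agent tool execution", "drift monitoring"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.2, 0.9]])

def retrieve(query_vec, k=1):
    # Cosine similarity = dot product of L2-normalized vectors; take top-k.
    a = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    b = query_vec / np.linalg.norm(query_vec)
    sims = a @ b
    top = np.argsort(-sims)[:k]
    return [docs[i] for i in top]

print(retrieve(np.array([0.05, 0.95, 0.05])))
```

The retrieved passages are then injected into the LLM prompt, which is what grounds the agent's answers and what the factual-grounding evaluation pipelines measure.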
Posted 6 days ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Business Analyst Lead – Generative AI Experience: 7–15 Years Location: Bangalore Designation Level: Lead Role Overview: We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes. Key Responsibilities: 1. GenAI Strategy & Opportunity Identification Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data). Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs like GPT-4 Enterprise). Evaluate ROI and adoption feasibility across departments. 2. Requirements Engineering for GenAI Projects Define and document both functional and non-functional requirements tailored to GenAI systems: Accuracy thresholds (e.g., hallucination rate under 5%) Ethical guardrails (e.g., PII redaction, bias mitigation) Latency SLAs (e.g., <2 seconds response time) Develop prompt engineering guidelines, testing protocols, and iteration workflows. 3. Stakeholder Collaboration & Communication Translate technical GenAI concepts into business-friendly language. Manage expectations on probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review). Use storytelling and outcome-driven communication (e.g., “Automated claims triage reduced handling time by 40%.”) 4. Business Analysis & Process Modeling Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI). Model current and future business processes using BPMN to reflect human-AI collaboration. 5. Tools & Technical Proficiency Hands-on experience with LangChain, LlamaIndex for LLM integration. Knowledge of vector databases, RAG architectures, LoRA-based fine-tuning. 
Experience using Azure OpenAI Studio, Google Vertex AI, Hugging Face. Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI). 6. Governance & Performance Monitoring Define KPIs for GenAI performance: Token cost per interaction User trust scores Automation rate and model drift tracking Support regulatory compliance with audit trails and documentation aligned with EU AI Act and other industry standards. Required Skills & Experience: 7–10 years of experience in business analysis or product ownership, with recent focus on Generative AI or applied ML. Strong understanding of the GenAI ecosystem and solution lifecycle from ideation to deployment. Experience working closely with data science, engineering, product, and compliance teams. Excellent communication and stakeholder management skills, with a focus on enterprise environments. Preferred Qualifications: Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.ai) Familiarity with compliance and AI regulations (GDPR, EU AI Act). Experience in BFSI, healthcare, telecom, or other regulated industries.
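One KPI named above, token cost per interaction, is simple metered arithmetic once usage is logged. A minimal sketch; the per-1K-token prices are placeholder assumptions, not any vendor's actual rates:

```python
# Placeholder per-1K-token prices -- real rates vary by provider and model.
PRICE_PER_1K_INPUT = 0.01
PRICE_PER_1K_OUTPUT = 0.03

def cost_per_interaction(input_tokens: int, output_tokens: int) -> float:
    """Blended cost of one LLM call, in the same currency unit as the prices."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

print(round(cost_per_interaction(1200, 400), 4))  # -> 0.024
```

Aggregating this per use case is what makes the cost-benefit comparisons in section 1 (fine-tuning open-source models vs. commercial LLMs) concrete rather than anecdotal.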
Posted 6 days ago
7.0 years
24 Lacs
Bharūch
On-site
Role: Sr Data Scientist – Digital & Analytics Experience: 7+ Years | Industry: Exposure to manufacturing, energy, supply chain or similar Location: On-Site @ Bharuch, Gujarat (6 days/week, Mon-Sat working) Perks: Work with Client Directly & Monthly remuneration for lodging Mandatory Skills: Exp. in full-scale implementation from requirement gathering till project delivery (end to end). EDA, ML Techniques (supervised and unsupervised), Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), cloud ML tooling (Azure ML, AWS SageMaker, etc.), plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data, optimization models (LP, MILP, MINLP). We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently, while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact. Responsibilities: 1. Data Science Solution Development • Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization. • Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised). • Translate physical and chemical process knowledge into mathematical features or constraints in models. • Deploy models into production environments (on-prem or cloud) with high robustness and monitoring. 2. Team Leadership & Management • Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols. 3. Stakeholder Engagement & Collaboration • Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use-cases. • Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable. • Act as a translator between manufacturing business units and technical teams to ensure alignment and impact. 4. Solution Ownership & Documentation • Independently manage and maintain use-cases through versioned model management, robust documentation, and logging. • Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts. Required Skills: 1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use-cases in manufacturing, energy, or process industries. 2. Proven track record of end-to-end model delivery (from data prep to business value realization). 3. Master’s or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline. 4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.). 5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data. 6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus. Job Types: Full-time, Contractual / Temporary Contract length: 6-12 months Pay: Up to ₹200,000.00 per month Work Location: In person
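The LP/MILP optimization models called out above can be prototyped quickly before committing to a Pyomo build-out. A minimal sketch using `scipy.optimize.linprog` in place of Pyomo; the two-product resource-allocation numbers are invented for illustration:

```python
from scipy.optimize import linprog

# Toy allocation LP: maximize profit 3*x1 + 5*x2 subject to machine-hour
# and raw-material limits. linprog minimizes, so profits are negated.
c = [-3.0, -5.0]                      # negated objective coefficients
A_ub = [[1.0, 2.0],                   # machine hours consumed per unit
        [3.0, 1.0]]                   # raw material consumed per unit
b_ub = [14.0, 12.0]                   # available hours / material
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

x1, x2 = res.x
print(f"produce x1={x1:.1f}, x2={x2:.1f}, profit={-res.fun:.1f}")
# -> produce x1=2.0, x2=6.0, profit=36.0
```

Moving the same structure into Pyomo adds integer variables (for MILP scheduling decisions) and named constraints that plant engineers can review against process limits.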
Posted 6 days ago
12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About IDfy IDfy is an Integrated Identity Platform offering products and solutions for KYC, KYB, Background Verifications, Risk Assessment, and Digital Onboarding. We establish trust while delivering a frictionless experience for you, your employees, customers and partners. Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12 years of experience and 2 million verifications per day, we are pioneers in this industry. Our clients include HDFC Bank, IndusInd Bank, Zomato, Amazon, PhonePe, Paytm, HUL and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investments & Tenacity Ventures! We work fully onsite on all days of the week from our office in Andheri, Mumbai. Role Overview: Support project delivery by coordinating teams, tracking progress, and clearing roadblocks. Learn on the job. Deliver on time. Own your part like a pro. Key Responsibilities: Assist in planning and executing projects under senior guidance. Communicate effectively with cross-functional teams to keep projects moving. Monitor timelines and raise flags early when things drift off course. Manage project documentation and action items with discipline. Participate in meetings, track decisions, and drive follow-ups. Learn project management tools and frameworks on the job. Adapt quickly and maintain urgency in a fast-paced environment. Qualifications: Bachelor’s degree in any field. 0.6-2 years of experience; willingness to learn is non-negotiable. Strong organizational and communication skills. Proactive, detail-oriented, and accountable. Comfortable working under pressure and managing multiple priorities.
Posted 6 days ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Competitive Salary, PF and Gratuity About Our Client Our client is an international professional services brand of firms, operating as partnerships under the brand. It is the second-largest professional services network in the world. Job Description Position: ML Engineer Job type: Techno-Functional Preferred education qualifications: Bachelor's/Master's degree in computer science, Data Science, Machine Learning OR related technical degree Job location: India Geography: SAPMENA Required experience: 6-8 Years Preferred profile/ skills: 5+ years in developing and deploying enterprise-scale ML solutions [Mandatory] Proven track record in data analysis (EDA, profiling, sampling) and data engineering (wrangling, storage, pipelines, orchestration) [Mandatory] Proficiency in Data Science/ML algorithms such as regression, classification, clustering, decision trees, random forest, gradient boosting, recommendation, dimensionality reduction [Mandatory] Experience in ML algorithms such as ARIMA, Prophet, Random Forests, and Gradient Boosting algorithms (XGBoost, LightGBM, CatBoost) [Mandatory] Prior experience on MLOps with Kubeflow or TFX [Mandatory] Experience in model explainability with Shapley plot and data drift detection metrics.
[Mandatory] Advanced programming skills with Python and SQL [Mandatory] Prior experience on building scalable ML pipelines & deploying ML models on Google Cloud [Mandatory] Proven expertise in ML pipeline optimization and monitoring the model's performance over time [Mandatory] Proficiency in version control systems such as GitHub Experience with feature engineering optimization and ML model fine tuning is preferred Google Cloud Machine Learning certifications will be a big plus Experience in Beauty or Retail/FMCG industry is preferred Experience in training with large volume of data (>100 GB) Experience in delivering AI-ML projects using Agile methodologies is preferred Proven ability to effectively communicate technical concepts and results to technical & business audiences in a comprehensive manner Proven ability to work proactively and independently to address product requirements and design optimal solutions Fluency in English, strong communication and organizational capabilities; and ability to work in a matrix/ multidisciplinary team Job objectives: Design, develop, deploy, and maintain data science and machine learning solutions to meet enterprise goals. Collaborate with product managers, data scientists & analysts to identify innovative & optimal machine learning solutions that leverage data to meet business goals. Contribute to development, rollout and onboarding of data scientists and ML use-cases to enterprise wide MLOps framework. Scale the proven ML use-cases across the SAPMENA region. Be responsible for optimal ML costs. 
Job description: Deep understanding of business/functional needs, problem statements and objectives/success criteria Collaborate with internal and external stakeholders including business, data scientists, project and partners teams in translating business and functional needs into ML problem statements and specific deliverables Develop best-fit end-to-end ML solutions including but not limited to algorithms, models, pipelines, training, inference, testing, performance tuning, deployments Review MVP implementations, provide recommendations and ensure ML best practices and guidelines are followed Act as 'Owner' of end-to-end machine learning systems and their scaling Translate machine learning algorithms into production-level code with distributed training, custom containers and optimal model serving Industrialize end-to-end MLOps life cycle management activities including model registry, pipelines, experiments, feature store, CI-CD-CT-CE with Kubeflow/TFX Accountable for creating, monitoring drifts leveraging continuous evaluation tools and optimizing performance and overall costs Evaluate, establish guidelines, and lead transformation with emerging technologies and practices for Data Science, ML, MLOps, Data Ops The Successful Applicant The profile, skills, and objectives are as listed above. What's on Offer Competitive compensation commensurate with role and skill set Medical Insurance Coverage worth 10 Lacs Social Benefits including PF & Gratuity A fast-paced, growth-oriented environment with the associated (challenges and) rewards Opportunity to grow and develop your own skills and create your future Contact: Anwesha Banerjee Quote job ref: JN-072025-6793565
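The data-drift detection metrics this role requires are often computed as a Population Stability Index (PSI) between a training baseline and live data. A minimal NumPy sketch; the bin count and the 0.2 alert threshold are common conventions, used here as assumptions:

```python
import numpy as np

def psi(baseline, current, bins=10, eps=1e-6):
    """Population Stability Index between two 1-D samples."""
    # Bin edges come from the baseline; eps keeps the log term finite
    # when a bin is empty.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    q = np.histogram(current, bins=edges)[0] / len(current) + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(1)
train = rng.normal(0, 1, 5000)
stable = rng.normal(0, 1, 5000)        # same distribution -> low PSI
shifted = rng.normal(0.8, 1, 5000)     # shifted mean -> high PSI

print(round(psi(train, stable), 3), round(psi(train, shifted), 3))
# A PSI above roughly 0.2 is commonly treated as significant drift.
```

In a Kubeflow/TFX setting the same statistic would run inside a continuous-evaluation component, with a breach triggering the retraining pipeline.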
Posted 1 week ago
3.0 years
0 Lacs
Uttar Pradesh, India
On-site
Job Description Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems. About us: Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last. About the role: We are currently seeking a Machine Learning (Ops) Engineer to join our Digi team based in Noida. Key Responsibilities: ML Pipeline Development and Automation: Design, build, and maintain end-to-end AI/ML CI/CD pipelines using Azure DevOps and leveraging the Azure AI stack (e.g., Azure ML, AI Foundry …) and Dataiku Model Deployment and Monitoring: Deliver tooling to deploy AI/ML products into production, ensuring they meet performance, reliability, and security standards. Implement and maintain transversal monitoring solutions to track model performance, detect drift, and trigger retraining when necessary Collaboration and Support: Work closely with data scientists, AI/ML engineers, and the platform team to ensure seamless integration of products into production.
Provide technical support and troubleshooting for AI/ML pipelines and infrastructure, particularly in Azure and Dataiku environments Operational Excellence : Define and implement MLOps best practices with a strong focus on governance, security, and quality, while monitoring performance metrics and cost-efficiency to ensure continuous improvement and delivering optimized, high-quality deployments for Azure AI services and Dataiku Documentation and Reporting: Maintain comprehensive documentation of AI/ML pipelines, and processes, with a focus on Azure AI and Dataiku implementations. Provide regular updates to the AI Platform Lead on system status, risks, and resource needs About you: Proven track record of experience in MLOps, DevOps, or related roles Strong knowledge of machine learning workflows, data analytics, and Azure cloud Hands-on experience with tools and technologies such as Dataiku, Azure ML, Azure AI Services, Docker, Kubernetes, and Terraform Proficiency in programming languages such as Python, with experience in ML and automation libraries (e.g., TensorFlow, PyTorch, Azure AI SDK …) Expertise in CI/CD pipeline management and automation tools using Azure DevOps Familiarity with monitoring tools and logging frameworks Catch this opportunity and invest in your skills development, should your profile meet these requirements. Additional attributes: A proactive mindset with a focus on operationalizing AI/ML solutions to drive business value Experience with budget oversight and cost optimization in cloud environments. Knowledge of agile methodologies and software development lifecycle (SDLC). Strong problem-solving skills and attention to detail Work Experience: 3-5 years of experience in MLOps Minimum Education: Advanced degree (Master’s or PhD preferred) in Computer Science, Data Science, Engineering, or a related field. What’s next? 
Once we receive your application, our Talent Acquisition professionals will screen and match your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application progress periodically via the candidate profile you created when applying. We invite you to get to know more about our company by visiting our website and following us on LinkedIn, Instagram, Facebook, X and YouTube for company updates.
Posted 1 week ago
7.0 years
0 Lacs
Bharuch, Gujarat
On-site
Role: Sr Data Scientist – Digital & Analytics
Experience: 7+ Years | Industry: Exposure to manufacturing, energy, supply chain, or similar
Location: On-Site @ Bharuch, Gujarat (6 days/week, Mon–Sat)
Perks: Work with the client directly; monthly remuneration for lodging
Mandatory Skills: Experience with full-scale implementation, from requirement gathering through project delivery (end to end). EDA; ML techniques (supervised and unsupervised); Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.); cloud ML tooling (Azure ML, AWS SageMaker, etc.); plant control systems (DCS, SCADA, OPC UA); historian databases (PI, Aspen IP.21) and time-series data; optimization models (LP, MILP, MINLP).

We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact.

Responsibilities:
1. Data Science Solution Development
• Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization.
• Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised).
• Translate physical and chemical process knowledge into mathematical features or constraints in models.
• Deploy models into production environments (on-prem or cloud) with high robustness and monitoring.
2. Team Leadership & Management
• Lead a compact data science pod (2–3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development, validation, deployment, monitoring, and retraining.
3. Stakeholder Engagement & Collaboration
• Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use-cases.
• Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable.
• Act as a translator between manufacturing business units and technical teams to ensure alignment and impact.
4. Solution Ownership & Documentation
• Independently manage and maintain use-cases through versioned model management, robust documentation, and logging.
• Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts.

Required Skills:
1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use-cases in manufacturing, energy, or process industries.
2. Proven track record of end-to-end model delivery (from data prep to business value realization).
3. Master's or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline.
4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.) and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.).
5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data.
6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 6–12 months
Pay: Up to ₹200,000.00 per month
Work Location: In person
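The LP/MILP optimization skills listed above boil down to formulating cost-versus-constraint problems like the one below. This is a toy sketch (all numbers are illustrative, and SciPy's `linprog` stands in for the Pyomo modeling named in the posting): pick tonnes of two feedstocks to meet a production quota at minimum cost under a reactor capacity limit.

```python
from scipy.optimize import linprog

# Hypothetical blending problem: choose tonnes of feedstocks A and B
# to minimise raw-material cost while meeting a production quota.
cost = [30.0, 18.0]                 # cost per tonne of A, B
# Constraints in <= form:
#   -(0.9*xA + 0.6*xB) <= -100     yield must cover a 100 t quota
#     1.0*xA + 1.0*xB  <=  150     total reactor capacity
A_ub = [[-0.9, -0.6], [1.0, 1.0]]
b_ub = [-100.0, 150.0]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
# B is cheaper per tonne of yield, so the solver fills capacity with B
# and tops up with A until the quota is met.
print(res.x, res.fun)
```

A MILP version of the same problem would simply add integrality or on/off (binary) restrictions on some variables, which is where dedicated solvers behind Pyomo come in.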
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities:
• Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
• Implement standardized, reusable CI/CD templates for application, ML, and data services.
• Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
• Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
• Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
• Lead internal infrastructure and security audits and maintain compliance records where required.
• Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
• Collaborate with developers to integrate structured logging, tracing, and health checks into services.
• Enable root cause detection workflows and performance monitoring for infrastructure and deployments.
• Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness.
• Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
• Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
• Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
• Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
• Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have:
• Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement
• Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks
• Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting
• Familiarity with SQL for validating database changes, debugging issues, and running schema checks
• Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling
• Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads
• Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium
• Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups
• Strong exposure to API testing, load/performance testing, and reliability validation
• Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog)
• Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry
• Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices
• Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates
• Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
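The "standardized, reusable CI/CD templates" responsibility is typically implemented with GitLab's `extends` keyword: a hidden job defined once and inherited by each service. A minimal sketch (job names, the `SERVICE` variable, and the Trivy scan step are illustrative, and assume Trivy is available on the runner):

```yaml
# Hypothetical reusable GitLab CI template; one shared build/scan job,
# extended per service so image handling stays consistent across repos.
.docker-build:
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHORT_SHA" "$SERVICE"
    # Fail the pipeline on vulnerability findings (security gate).
    - trivy image --exit-code 1 "$CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHORT_SHA"
    - docker push "$CI_REGISTRY_IMAGE/$SERVICE:$CI_COMMIT_SHORT_SHA"

build-api:
  extends: .docker-build
  variables:
    SERVICE: api
  rules:
    - changes:
        - api/**/*
```

The `rules: changes:` clause keeps monorepo pipelines cheap by building a service only when its own files change.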
Posted 1 week ago
0.0 - 40.0 years
0 Lacs
Gurugram, Haryana
On-site
Additional Locations: India-Haryana, Gurgaon

Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance

At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you advance your skills and career. Here, you’ll be supported in progressing – whatever your ambitions.

About the position: Senior Engineer – Agentic AI
Join Boston Scientific at the forefront of innovation as we embrace AI to transform healthcare and deliver cutting-edge solutions. As a Senior Engineer – Agentic AI, you will architect and deliver autonomous, goal-driven agents powered by large language models (LLMs) and multi-agent frameworks.

Key Responsibilities:
• Design and implement agentic AI systems leveraging LLMs for reasoning, multi-step planning, and tool execution.
• Evaluate and build upon multi-agent frameworks such as LangGraph, AutoGen, and CrewAI to coordinate distributed problem-solving agents.
• Develop context-handling, memory, and API-integration layers enabling agents to interact reliably with internal services and third-party tools.
• Create feedback-loop and evaluation pipelines (LangSmith, RAGAS, custom metrics) that measure factual grounding, safety, and latency.
• Own backend services that scale agent workloads, optimize GPU/accelerator utilization, and enforce cost governance.
• Embed observability, drift monitoring, and alignment guardrails throughout the agent lifecycle.
• Collaborate with research, product, and security teams to translate emerging agentic patterns into production-ready capabilities.
• Mentor engineers on prompt engineering, tool-use chains, and best practices for agent deployment in regulated environments.

Required:
• 8+ years of software engineering experience, including 3+ years building AI/ML or NLP systems.
• Expertise in Python and modern LLM APIs (OpenAI, Anthropic, etc.), plus agentic orchestration frameworks (LangGraph, AutoGen, CrewAI, LangChain, LlamaIndex).
• Proven delivery of agentic systems or LLM-powered applications that invoke external APIs or tools.
• Deep knowledge of vector databases (Azure AI Search, Weaviate, Pinecone, FAISS, pgvector) and Retrieval-Augmented Generation (RAG) pipelines.
• Hands-on experience with LLMOps: CI/CD for fine-tuning, model versioning, performance monitoring, and drift detection.
• Strong background in cloud-native microservices, security, and observability.

Requisition ID: 610421

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
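The RAG retrieval step named in the requirements reduces to nearest-neighbour search over embedding vectors. A self-contained sketch with a deterministic stand-in "embedder" (a real pipeline would use a learned embedding model and one of the vector databases listed; the documents and hashing scheme are illustrative only):

```python
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in embedding: character codes folded into dim buckets, unit-normalised."""
    v = np.zeros(dim)
    for ch in text.lower():
        v[ord(ch) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-12)

docs = ["stent deployment protocol", "catheter sizing guide", "pacemaker battery life"]
index = np.stack([embed(d) for d in docs])  # (n_docs, dim) "vector store"

query = embed("stent deployment")
scores = index @ query                      # cosine similarity: all vectors are unit-norm
best = docs[int(np.argmax(scores))]
print(best)
```

In production the `index @ query` line is what a vector database replaces with an approximate-nearest-neighbour search, and the retrieved document is injected into the LLM prompt as grounding context.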
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: AI/ML Engineer – US Healthcare Claims Management

Position: MLOps Engineer – US Healthcare Claims Management
Location: Gurgaon (Hybrid)
Company: Neolytix
Experience Required: 3 to 5 years. Preference will be given to candidates holding a PhD in the relevant field.

About the Role:
We are seeking an experienced MLOps Engineer to build, deploy, and maintain AI/ML systems for our healthcare Revenue Cycle Management (RCM) platform. This role will focus on operationalizing machine learning models that analyze claims, prioritize denials, and optimize revenue recovery through automated resolution pathways.

Key Tech Stack:
Models & ML Components:
• Fine-tuned healthcare LLMs (GPT-4, Claude) for complex claim analysis
• Knowledge of supervised/unsupervised models, optimization & simulation techniques
• Domain-specific SLMs for denial code classification and prediction
• Vector embedding models for similar-claim identification
• NER models for extracting critical claim information
• Seq2seq models (automated appeal letter generation)

Languages & Frameworks:
• Strong proficiency in Python with OOP principles (4 years of experience)
• Experience developing APIs using Flask or FastAPI frameworks (2 years of experience)
• Integration knowledge with front-end applications (1 year of experience)
• Expertise in version control systems (e.g., GitHub, GitLab, Azure DevOps) (3 years of experience)
• Proficiency in databases, including SQL, NoSQL, and vector databases (2+ years of experience)
• Experience with Azure (2+ years of experience)
• Libraries: PyTorch / TensorFlow / Hugging Face Transformers

Key Responsibilities:
• ML Pipeline Architecture: Design and implement end-to-end ML pipelines for claims processing, incorporating automated training, testing, and deployment workflows
• Model Deployment & Scaling: Deploy and orchestrate LLMs and SLMs in production using containerization (Docker/Kubernetes) and Azure cloud services
• Monitoring & Observability: Implement comprehensive monitoring systems to track model performance, drift detection, and operational health metrics
• CI/CD for ML Systems: Establish CI/CD pipelines specifically for ML model training, validation, and deployment
• Data Pipeline Engineering: Create robust data preprocessing pipelines for healthcare claims data, ensuring compliance with HIPAA standards
• Model Optimization: Tune and optimize models for both performance and cost-efficiency in production environments
• Infrastructure as Code: Implement IaC practices for reproducible ML environments and deployments
• Document technical solutions and create best practices for scalable AI-driven claims management

Preferred Qualifications:
• Experience with healthcare data, particularly claims processing (EDI 837/835)
• Knowledge of RCM workflows & denial management processes
• Understanding of HIPAA compliance requirements
• Experience with feature stores & model registries
• Familiarity with healthcare-specific NLP applications

What Sets You Apart:
• Experience operationalizing LLMs for domain-specific enterprise applications
• Background in healthcare technology or revenue cycle operations
• Track record of improving model performance metrics in production systems

What We Offer:
• Competitive salary and benefits package
• Opportunity to contribute to innovative AI solutions in the healthcare industry
• Dynamic and collaborative work environment
• Opportunities for continuous learning and professional growth

To Apply: Submit your resume and a cover letter detailing your relevant experience and interest in the role to shivanir@neolytix.com
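Drift detection, one of the monitoring responsibilities listed, is commonly implemented with the Population Stability Index (PSI) over model score distributions. A minimal sketch (the thresholds in the docstring are the common rule of thumb, and the synthetic score data is illustrative):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time ("expected") score
    distribution and a production ("actual") one. Rule of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range scores
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)             # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
same = rng.normal(0.0, 1.0, 10_000)      # same distribution: PSI near zero
shifted = rng.normal(0.7, 1.0, 10_000)   # mean shift: PSI flags drift
print(round(psi(train_scores, same), 3), round(psi(train_scores, shifted), 3))
```

In a deployed pipeline the "expected" histogram is frozen at training time and the PSI is recomputed on each production batch, with alerts or retraining triggered above the drift threshold.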
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Team Lead – DevOps to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments.

Key Responsibilities:
• Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness.
• Implement standardized, reusable CI/CD templates for application, ML, and data services.
• Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker.
• Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines.
• Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection.
• Lead internal infrastructure and security audits and maintain compliance records where required.
• Define and implement observability standards using OpenTelemetry, Grafana, and Graylog.
• Collaborate with developers to integrate structured logging, tracing, and health checks into services.
• Enable root cause detection workflows and performance monitoring for infrastructure and deployments.
• Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness.
• Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow.
• Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective.
• Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement.
• Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables.
• Drive internal documentation, runbooks, and reusable DevOps assets.

Must Have:
• Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement
• Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks
• Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting
• Familiarity with SQL for validating database changes, debugging issues, and running schema checks
• Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling
• Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads
• Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium
• Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups
• Strong exposure to API testing, load/performance testing, and reliability validation
• Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog)
• Exposure to monitoring/logging tools like Grafana, Graylog, and OpenTelemetry
• Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices
• Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates
• Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
Posted 1 week ago