
1480 Mlflow Jobs - Page 9

JobPe aggregates listings for easy access; applications are submitted directly on each employer's job portal.

5.0 years

2 - 6 Lacs

Hyderābād

On-site

Overview: As a key member of the team, you will be responsible for building and maintaining the infrastructure, tools, and workflows that enable the efficient, reliable, and secure deployment of LLMs in production environments. You will collaborate closely with data scientists, data engineers, and product teams to ensure seamless integration of AI capabilities into our core systems.

Responsibilities:
- Design and implement scalable model deployment pipelines for LLMs, ensuring high availability and low latency.
- Build and maintain CI/CD workflows for model training, evaluation, and release.
- Monitor and optimize model performance, drift, and resource utilization in production.
- Manage cloud infrastructure (e.g., AWS, GCP, Azure) and container orchestration (e.g., Kubernetes, Docker) for AI workloads.
- Implement observability tools to track system health, token usage, and user feedback loops.
- Ensure security, compliance, and governance of AI systems, including access control and audit logging.
- Collaborate with cross-functional teams to align infrastructure with product goals and user needs.
- Stay current with the latest MLOps and GenAI tooling and drive continuous improvement in deployment practices.
- Define and evolve the architecture for GenAI systems, ensuring alignment with business goals and scalability requirements.

Qualifications:
- Bachelor’s or master’s degree in Computer Science, Software Engineering, Data Science, or a related technical field.
- 5 to 7 years of experience in software engineering and DevOps, including 3+ years in machine learning infrastructure roles.
- Hands-on experience deploying and maintaining machine learning models in production, ideally including LLMs or other deep learning models.
- Proven experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
- Strong programming skills in Python, with experience in ML libraries (e.g., TensorFlow, PyTorch, Hugging Face).
- Proficiency in CI/CD pipelines for ML workflows.
- Experience with MLOps tools: MLflow, Kubeflow, DVC, Airflow, Weights & Biases.
- Knowledge of monitoring and observability tools.
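Drift monitoring, listed in the responsibilities above, is often bootstrapped with a single statistic before any full observability stack is in place. Below is a minimal, illustrative sketch (not any team's actual tooling) of the Population Stability Index over a categorical feature, using only the standard library:

```python
from collections import Counter
import math

def psi(expected, actual, bins=None):
    """Population Stability Index between two categorical samples.
    PSI < 0.1 is conventionally read as 'no significant drift'."""
    bins = bins or sorted(set(expected) | set(actual))
    e_counts, a_counts = Counter(expected), Counter(actual)
    total_e, total_a = len(expected), len(actual)
    score = 0.0
    for b in bins:
        # Smooth zero counts so the log term stays defined
        e = max(e_counts.get(b, 0) / total_e, 1e-6)
        a = max(a_counts.get(b, 0) / total_a, 1e-6)
        score += (a - e) * math.log(a / e)
    return score

baseline = ["low"] * 70 + ["high"] * 30
current  = ["low"] * 40 + ["high"] * 60   # distribution has shifted
print(round(psi(baseline, baseline), 4))  # → 0.0 (identical samples)
print(psi(baseline, current) > 0.1)       # → True (drift flagged)
```

In production the same check would run on a schedule against each model input feature, with the threshold tuned per feature.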

Posted 1 week ago

Apply

0 years

4 - 6 Lacs

Gurgaon

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Assistant Vice President, Databricks Squad Delivery Lead.

The Databricks Delivery Lead will oversee the end-to-end delivery of Databricks-based solutions for clients, ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. This role will drive the adoption of Databricks as the preferred platform for data engineering and analytics, while managing a cross-functional team of data engineers and developers.

Responsibilities:
- Lead and manage Databricks-based project delivery, ensuring that all solutions are designed, developed, and implemented according to client requirements, best practices, and industry standards.
- Act as the subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization.
- Collaborate with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads.
- Serve as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.
- Maintain effective communication with stakeholders, providing regular updates on project status, risks, and achievements.
- Oversee the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines.
- Ensure that Databricks solutions are optimized for cost and performance, utilizing best practices for data storage, processing, and querying.
- Continuously evaluate the effectiveness of the Databricks platform and processes, suggesting improvements or new features that could enhance delivery efficiency and effectiveness.
- Drive innovation within the team, introducing new tools, technologies, and best practices to improve delivery quality.

Qualifications we seek in you!
Minimum Qualifications / Skills:
- Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s or MBA preferred).
- Relevant years in IT services, with experience specifically in Databricks and cloud-based data engineering.
Preferred Qualifications / Skills:
- Proven experience in leading end-to-end delivery of data engineering or analytics solutions on Databricks.
- Strong experience in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools.
- Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies.
- Expertise in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing.
Preferred certifications:
- Databricks Certified Associate or Professional.
- Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent).
- Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master).

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.

Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Vice President
Primary Location: India-Gurugram
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 14, 2025, 11:20:58 PM
Unposting Date: Jan 11, 2026, 3:20:58 AM
Master Skills List: Digital
Job Category: Full Time

Posted 1 week ago

Apply

4.0 years

4 - 8 Lacs

Gurgaon

On-site

About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners.

Job Title: Senior/Lead Data Scientist
Experience Required: 4+ Years

About the Role: We are seeking a skilled and innovative Machine Learning Engineer with 4+ years of experience to join our AI/ML team. The ideal candidate will have strong expertise in Computer Vision, Generative AI (GenAI), and Deep Learning, with a proven track record of deploying models in production environments using Python, MLOps best practices, and cloud platforms like Azure ML.

Key Responsibilities:
- Design, develop, and deploy AI/ML models for Computer Vision and GenAI use cases.
- Build, fine-tune, and evaluate deep learning architectures (CNNs, Transformers, diffusion models, etc.).
- Collaborate with product and engineering teams to integrate models into scalable pipelines and applications.
- Manage the complete ML lifecycle using MLOps practices (versioning, CI/CD, monitoring, retraining).
- Develop reusable Python modules and maintain high-quality, production-grade ML code.
- Work with Azure Machine Learning services for training, inference, and model management.
- Analyze large-scale datasets, extract insights, and prepare them for model training and validation.
- Document technical designs, experiments, and decision-making processes.

Required Skills & Experience:
- 4–5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong experience in Computer Vision tasks such as object detection, image segmentation, and OCR.
- Practical knowledge and implementation experience in Generative AI (LLMs, diffusion models, embeddings).
- Solid programming skills in Python, with experience using frameworks like PyTorch, TensorFlow, OpenCV, and Transformers (Hugging Face).
- Good understanding of MLOps concepts, model deployment, and lifecycle management.
- Experience with cloud platforms, preferably Azure ML, for scalable model training and deployment.
- Familiarity with data labeling tools, synthetic data generation, and model interpretability.
- Strong problem-solving, debugging, and communication skills.

Good to Have:
- Experience with NLP, multimodal learning, or 3D computer vision.
- Familiarity with containerization tools (Docker, Kubernetes).
- Experience building end-to-end ML pipelines using MLflow, DVC, or similar tools.
- Exposure to CI/CD pipelines for ML projects and agile development environments.

Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a related field.
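The "versioning, CI/CD, monitoring, retraining" lifecycle this listing describes is what a model registry formalizes. As a rough illustration only (a toy in-memory stand-in, not MLflow's actual registry API), promote/rollback semantics might look like this:

```python
class ModelRegistry:
    """Toy model registry: register versions, promote one to production,
    and roll back to the previous production version on demand."""
    def __init__(self):
        self._versions = {}   # version -> artifact (here: any object)
        self._history = []    # promotion history, latest last

    def register(self, version, artifact):
        self._versions[version] = artifact

    def promote(self, version):
        if version not in self._versions:
            raise KeyError(f"unknown version: {version}")
        self._history.append(version)

    def rollback(self):
        if len(self._history) < 2:
            raise RuntimeError("no earlier production version to roll back to")
        self._history.pop()   # drop the current production version
        return self._history[-1]

    @property
    def production(self):
        return self._history[-1] if self._history else None

reg = ModelRegistry()
reg.register("v1", "weights-v1.bin")
reg.register("v2", "weights-v2.bin")
reg.promote("v1")
reg.promote("v2")
print(reg.production)   # → v2
reg.rollback()
print(reg.production)   # → v1
```

Real registries (MLflow, Azure ML's model management) add artifact storage, stage labels, and audit metadata on top of exactly this promote/rollback core.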

Posted 1 week ago

Apply

5.0 years

4 - 9 Lacs

Noida

On-site

Posted On: 14 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies ? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations ? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description We are looking for a skilled AI/ML Ops Engineer to join our team to bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale. 
This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable. Key Responsibilities: Design and implement ML pipelines for model training, validation, testing, and deployment. Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar. Deploy machine learning models to production environments (cloud). Monitor model performance, drift, and data quality in production. Collaborate with data scientists to improve model robustness and deployment readiness. Ensure CI/CD practices for ML models using tools like Jenkins, GitHub Actions, or GitLab CI. Optimize compute resources and manage model versioning, reproducibility, and rollback strategies. Work with cloud platforms AWS and containerization tools like Kubernetes (AKS). Ensure compliance with data privacy and security standards (e.g., GDPR, HIPAA). Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 5+ years of experience in DevOps, Data Engineering, or ML Engineering roles. Strong programming skills in Python; familiarity with R, Scala, or Java is a plus. Experience with automating ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or XGBoost. Experience with ML model monitoring and alerting frameworks (e.g., Evidently, Prometheus, Grafana). Familiarity with data orchestration and ETL/ELT tools (Airflow, dbt, Prefect). Preferred Qualifications: Experience with large-scale data systems (Spark, Hadoop). Knowledge of feature stores (Feast, Tecton). Experience with streaming data (Kafka, Flink). Experience working in regulated environments (finance, healthcare, etc.). Certifications in cloud platforms or ML tools. Soft Skills: Strong problem-solving and debugging skills. Excellent communication and collaboration with cross-functional teams. 
Adaptable and eager to learn new technologies. Mandatory Competencies Data Science and Machine Learning - Data Science and Machine Learning - AI/ML Database - Database Programming - SQL Cloud - AWS - Tensorflow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift Development Tools and Management - Development Tools and Management - CI/CD DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Jenkins Data Science and Machine Learning - Data Science and Machine Learning - Gen AI (LLM, Agentic AI, Gen AI enable tools like Github Copilot) DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - GitLab,Github, Bitbucket Programming Language - Other Programming Language - Scala Big Data - Big Data - Hadoop Big Data - Big Data - SPARK Data Science and Machine Learning - Data Science and Machine Learning - Python Beh - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
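One concrete piece of the CI/CD-for-ML practice this role describes is a promotion gate: a candidate model replaces production only if its metrics clear agreed thresholds. A hedged sketch follows; the metric names and thresholds are illustrative, not from this listing:

```python
def promotion_gate(prod_metrics, cand_metrics, min_gain=0.0, guardrails=None):
    """Decide whether a candidate model may replace production.
    The primary metric must improve by at least `min_gain`, and every
    guardrail metric must stay at or above its floor."""
    guardrails = guardrails or {}
    if cand_metrics["primary"] < prod_metrics["primary"] + min_gain:
        return False, "primary metric did not improve enough"
    for name, floor in guardrails.items():
        if cand_metrics.get(name, float("-inf")) < floor:
            return False, f"guardrail failed: {name} < {floor}"
    return True, "promote"

prod = {"primary": 0.82}
cand = {"primary": 0.85, "latency_score": 0.90}
ok, reason = promotion_gate(prod, cand, min_gain=0.01,
                            guardrails={"latency_score": 0.8})
print(ok, reason)  # → True promote
```

In a Jenkins or GitHub Actions pipeline this check would run as a job step after evaluation, failing the build (and blocking deployment) when the gate returns False.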

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

Remote

Senior DevOps (Azure, Terraform, Kubernetes) Engineer
Location: Remote (initial 2–3 months in the Abu Dhabi office, then remote from India)
Type: Full-time | Long-term | Direct Client Hire
Client: Abu Dhabi Government

About the Role: Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, and Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in Azure DevOps practices.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps, targeting AKS.
- Develop and maintain Infrastructure-as-Code using Terraform.
- Manage container orchestration environments using Kubernetes.
- Ensure cloud infrastructure is optimized, secure, and monitored effectively.
- Collaborate with data science teams to support ML model deployment and operationalization.
- Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow).
- Build and maintain automated ML pipelines to streamline model lifecycle management.

Required Skills:
- 7+ years of experience in DevOps and/or MLOps roles.
- Proficient in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps.
- Strong expertise in Terraform and cloud-native infrastructure (AWS preferred).
- Hands-on experience with Kubernetes, Docker, and microservices.
- Solid understanding of cloud networking, security, and monitoring.
- Scripting proficiency in Bash and Python.

Preferred Skills:
- Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines.
- Knowledge of model performance monitoring and ML system reliability.
- Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP.

Skills: argo, terraform, kubernetes, azure
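The blue-green deployment strategy named in the responsibilities reduces to a small invariant: deploy to the idle colour, health-check it, and only then flip live traffic. A toy Python illustration of that invariant (in practice the cutover happens at the load balancer or Kubernetes Service level, e.g. via Argo Rollouts):

```python
class BlueGreenRouter:
    """Minimal blue-green cutover: deploy to the idle colour, health-check it,
    then flip live traffic; the old colour stays warm for instant rollback."""
    def __init__(self):
        self.live = "blue"
        self.releases = {"blue": None, "green": None}

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, release, health_check):
        target = self.idle
        self.releases[target] = release
        if not health_check(release):
            return False        # live traffic never touched the bad build
        self.live = target      # atomic cutover
        return True

router = BlueGreenRouter()
print(router.deploy("v2.0", health_check=lambda r: True))     # → True
print(router.live)                                            # → green
print(router.deploy("v2.1", health_check=lambda r: False))    # → False
print(router.live)                                            # → green (unchanged)
```

The same shape applies to model deployments: a failed health (or metric) check leaves the serving colour untouched, which is what makes rollback effectively free.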

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS HIRING!!!
Role: Data Scientist
Required Technical Skill Set: Data Science
Experience: 3–8 years
Locations: Kolkata, Hyderabad, Bangalore, Chennai, Pune

Job Description:
Must-Have:
- Proficiency in Python or R for data analysis and modeling.
- Strong understanding of machine learning algorithms (regression, classification, clustering, etc.).
- Experience with SQL and working with relational databases.
- Hands-on experience with data wrangling, feature engineering, and model evaluation techniques.
- Experience with data visualization tools like Tableau, Power BI, or matplotlib/seaborn.
- Strong understanding of statistics and probability.
- Ability to translate business problems into analytical solutions.

Good-to-Have:
- Experience with deep learning frameworks (TensorFlow, Keras, PyTorch).
- Knowledge of big data platforms (Spark, Hadoop, Databricks).
- Experience deploying models using MLflow, Docker, or cloud platforms (AWS, Azure, GCP).
- Familiarity with NLP, computer vision, or time series forecasting.
- Exposure to MLOps practices for model lifecycle management.
- Understanding of data privacy and governance concepts.
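The model-evaluation skills asked for above come down to a few confusion-matrix ratios. A stdlib-only sketch for the binary case (in practice scikit-learn's `classification_report` computes these, plus per-class support):

```python
def classification_report(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a binary classifier from raw label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
print(classification_report(y_true, y_pred))
# → {'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```

Precision answers "of the positives we predicted, how many were right?"; recall answers "of the actual positives, how many did we find?"; F1 is their harmonic mean.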

Posted 1 week ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

Remote

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3–6 Years
Compensation: ₹20–25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company: A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they’re seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview: As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities:
- Design and develop robust AI product features aligned with user and business needs.
- Maintain and enhance existing ML/AI systems.
- Build and manage ML pipelines for training, deployment, monitoring, and experimentation.
- Deploy scalable inference APIs and conduct A/B testing.
- Optimize GPU architectures and fine-tune transformer/LLM models.
- Build and deploy LLM applications tailored to real-world use cases.
- Implement DevOps/MLOps best practices with tools like Docker and Kubernetes.

Tech Stack & Tools:
- Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers, PyTorch, TensorFlow, scikit-learn
- LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse, MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
- Cloud & Infrastructure: AWS, Azure, Kubernetes, Docker
- Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
- Languages: Python, SQL, JavaScript

What You’ll Do:
- Collaborate with product, research, and engineering teams to build scalable AI solutions.
- Implement advanced NLP and Generative AI models (e.g., RAG, Transformers).
- Monitor and optimize model performance and deployment pipelines.
- Build efficient, scalable data and feature pipelines.
- Stay updated on industry trends and contribute to internal innovation.
- Present key insights and ML solutions to technical and business stakeholders.

Requirements (Must-Have):
- 3–6 years of experience in Machine Learning and software/data engineering.
- Master’s degree (or equivalent) in ML, AI, or related technical fields.
- Strong hands-on experience with Python, PyTorch/TensorFlow, and scikit-learn.
- Familiarity with MLOps, model deployment, and production pipelines.
- Experience working with LLMs and modern NLP techniques.
- Ability to work collaboratively in a fast-paced, product-driven environment.
- Strong problem-solving and communication skills.

Bonus certifications:
- AWS Machine Learning Specialty
- AWS Solutions Architect – Professional
- Azure Solutions Architect Expert

Why Apply:
- Work directly with a high-caliber founding team.
- Help shape the future of AI in the insurance space.
- Gain ownership and visibility in a product-focused engineering role.
- Opportunity to innovate with state-of-the-art AI/LLM tech.
- Be part of a fast-moving team with real market traction.

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: postgresql, docker, llms and modern nlp techniques, machine learning, computer vision, tensorflow, scikit-learn, pytorch, llm technologies, python, nlp, aws, ml, ai, sql, ml ops, azure, javascript, software/data engineering, kubernetes, mongodb
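RAG, mentioned in the listing above, pairs a retrieval step with generation. A deliberately tiny sketch of the retrieval half using word-count cosine similarity (a real system would use learned embeddings and a vector store such as Pinecone or ChromaDB; the documents here are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors (Counters)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "underwriting guidelines for commercial property risk",
    "quarterly payroll and benefits overview",
    "property risk assessment and underwriting checklist",
]
context = retrieve("property underwriting risk", docs, k=2)
# The retrieved context is then stuffed into the generation prompt:
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context)
```

Swapping the bag-of-words vectors for embedding-model vectors and the list scan for an approximate-nearest-neighbour index is what turns this sketch into production RAG.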

Posted 1 week ago

Apply

0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

We are looking for a versatile and highly skilled Data Analyst / AI Engineer to join our innovative team. This unique role combines the strengths of a data scientist with the capabilities of an AI engineer, allowing you to dive deep into data, extract meaningful insights, and then build and deploy cutting-edge Machine Learning, Deep Learning, and Generative AI models. You will play a crucial role in transforming raw data into strategic assets and intelligent applications.

Key Responsibilities:
- Data Analysis & Insight Generation:
  - Perform in-depth Exploratory Data Analysis (EDA) to identify trends, patterns, and anomalies in complex datasets.
  - Clean, transform, and prepare data from various sources for analysis and model development.
  - Apply statistical methods and hypothesis testing to validate findings and support data-driven decision-making.
  - Create compelling and interactive BI dashboards (e.g., Power BI, Tableau) to visualize data insights and communicate findings to stakeholders.
- Machine Learning & Deep Learning Model Development:
  - Design, build, train, and evaluate Machine Learning models (e.g., regression, classification, clustering) to solve specific business problems.
  - Develop and optimize Deep Learning models, including CNNs for computer vision tasks and Transformers for Natural Language Processing (NLP).
  - Implement feature engineering techniques to enhance model performance.
- Generative AI Implementation:
  - Explore and experiment with Large Language Models (LLMs) and other Generative AI techniques.
  - Implement and fine-tune LLMs for specific use cases (e.g., text generation, summarization, Q&A).
  - Develop and integrate Retrieval Augmented Generation (RAG) systems using vector databases and embedding models.
  - Apply prompt engineering best practices to optimize LLM interactions.
  - Contribute to the development of agentic AI systems that leverage multiple tools and models.

Required Skills & Experience:
- Data Science & Analytics:
  - Strong proficiency in Python and its data science libraries (Pandas, NumPy, scikit-learn, Matplotlib, Seaborn).
  - Proven experience with Exploratory Data Analysis (EDA) and statistical analysis.
  - Hands-on experience developing BI dashboards using tools like Power BI or Tableau.
  - Understanding of data warehousing and data lake concepts.
- Machine Learning:
  - Solid understanding of various ML algorithms (e.g., regression, classification, clustering, tree-based models).
  - Experience with model evaluation, validation, and hyperparameter tuning.
- Deep Learning:
  - Proficiency with Deep Learning frameworks such as TensorFlow, Keras, or PyTorch.
  - Experience with CNNs (Convolutional Neural Networks) and computer vision concepts (e.g., OpenCV, object detection).
  - Familiarity with Transformer architectures for NLP tasks.
- Generative AI:
  - Practical experience with Large Language Models (LLMs).
  - Understanding and application of RAG (Retrieval Augmented Generation) systems.
  - Experience with fine-tuning LLMs and prompt engineering.
  - Familiarity with frameworks like LangChain or LlamaIndex.
- Problem-Solving: Excellent analytical and problem-solving skills with a strong ability to approach complex data challenges.

Good to Have:
- Experience with cloud-based AI/ML services (e.g., Azure ML, AWS SageMaker, Google Cloud AI Platform).
- Familiarity with MLOps principles and tools (e.g., MLflow, DVC, CI/CD for models).
- Experience with big data technologies (e.g., Apache Spark).

Educational Qualification: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).

Please share your resume to: careers@appfabs.in
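Anomaly detection during EDA, one of the responsibilities above, often starts with a plain z-score pass before any modeling. A stdlib-only sketch (the data are made up for illustration):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean —
    a common first pass during exploratory data analysis."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []   # constant series: nothing can be anomalous
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / sd > threshold]

daily_sales = [100, 98, 103, 101, 99, 102, 100, 500]  # last point is suspect
print(zscore_anomalies(daily_sales, threshold=2.0))   # → [(7, 500)]
```

Because the outlier itself inflates both the mean and the standard deviation, robust variants (median and MAD instead of mean and pstdev) behave better on heavily contaminated data; the structure of the check stays the same.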

Posted 1 week ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from Stanco Solutions, a leading and fast-growing IT Services and IT Consulting company. We are currently hiring a Project Manager for a full-time/permanent role; immediate to 15-day joiners only.

Total Experience: 12–15 years
Notice Period: Immediate to 15 days
Location: Chennai
Work Mode: Work from Office
Interview Mode: Virtual

Interested candidates, kindly share your updated CV to ruban.p@stancosolutions.com; for further queries, call +91-8248551519.

1. Project Planning & Execution
- Define project scope, schedule, milestones, and deliverables.
- Prepare project charters, plans, and WBS (Work Breakdown Structure).
- Create and manage Agile sprint plans and ensure iteration goals are met.
2. Stakeholder & Team Management
- Act as a bridge between business, development, QA, and infrastructure teams.
- Manage internal and external stakeholder expectations.
- Coordinate with cross-functional teams for on-time and on-budget delivery.
3. Technical Oversight & Risk Management
- Provide technical input and oversight on architecture and build activities.
- Track and mitigate technical, resource, and delivery risks proactively.
- Drive resolution of blockers, dependencies, and escalations.
4. Progress Tracking & Communication
- Use tools like JIRA or Azure DevOps for project tracking and burndown charts.
- Generate daily/weekly status reports, dashboards, and executive summaries.
- Present status updates and delivery health reports to senior management.
5. Quality, Compliance & Governance
- Ensure QA, UAT, and release processes are followed.
- Drive process improvement initiatives across the team.
- Maintain audit trails, change logs, and sign-off documentation.

Primary Skills:
- Project Management (Agile/Scrum/Waterfall/hybrid models)
- Software Delivery Lifecycle (SDLC) ownership
- Working knowledge of full-stack development: React.js (frontend), Java (backend APIs), MySQL (database queries, data models)
- Agile planning tools (JIRA, Azure DevOps, Trello, ClickUp)
- CI/CD implementation understanding (Jenkins, GitHub Actions, Azure Pipelines)
- Resource planning, sprint management, and backlog grooming
- Risk & issue management, change requests, RCA documentation
- Project tracking, budgeting & estimation
- Stakeholder communication & cross-functional team coordination
- Status reporting to senior leadership and C-level executives

Secondary Skills:
- Exposure to AI/ML project lifecycle & tools (MLflow, Vertex AI, Azure ML – conceptual level)
- Cloud platform understanding (Azure, AWS, or GCP)
- DevOps awareness (version control, pipelines, release cycles)
- Quality assurance coordination & release sign-off processes
- Team coaching, conflict management & people leadership
- Documentation & process improvement

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Urgently hiring for an AI/ML role. We are seeking an experienced AI/ML Engineer with 3–5 years of hands-on experience building and deploying machine learning solutions in the fintech and spend management domain. You will work on real-time forecasting, intelligent document processing (invoices/receipts), fraud detection, and other AI-powered features that enhance our finance intelligence platform. This role demands expertise in both time series forecasting and computer vision, as well as a solid understanding of how ML applies to enterprise finance operations.

Key Responsibilities:
- Design, train, and deploy ML models for spend forecasting, budget prediction, expense categorization, and risk scoring.
- Build and optimize OCR-based invoice and receipt parsing systems using computer vision and NLP techniques.
- Implement time-series models (Prophet, ARIMA, LSTM, XGBoost, etc.) for forecasting trends in financial transactions, expenses, and vendor payments.
- Work on intelligent document classification, key-value extraction, and line-item detection from unstructured financial documents (PDFs, scanned images).
- Collaborate with product and finance teams to define high-impact AI use cases and deliver business-ready solutions.
- Integrate ML pipelines into production using scalable tools and platforms (Docker, CI/CD, cloud services).
- Monitor model performance post-deployment, conduct drift analysis, and implement retraining strategies.

Required Skills & Qualifications:

Core Machine Learning:
- Strong knowledge of supervised and unsupervised ML techniques applied to structured and semi-structured financial data.
- Experience in time-series analysis and forecasting algorithms such as ARIMA/SARIMA, Facebook Prophet, XGBoost for regression, and LSTM/GRU models for sequential data.
- Proficiency in Python and key libraries: scikit-learn, Pandas, NumPy, statsmodels, PyTorch, TensorFlow.

Computer Vision & Document AI:
- Hands-on experience with OCR tools such as Tesseract, Google Vision API, or AWS Textract.
- Knowledge of document layout analysis and field-level extraction using OpenCV, LayoutLM, or Google Document AI.
- Familiarity with annotation tools (Label Studio, CVAT) and post-processing OCR outputs for structured data extraction.

Deployment & Engineering:
- Experience exposing ML models via Flask or FastAPI.
- Model packaging and deployment with Docker, version control with Git, and ML lifecycle tools like MLflow or DVC.
- Working knowledge of cloud platforms (AWS/GCP/Azure) and integrating models with backend microservices.

Data & Domain:
- Understanding of financial documents: invoices, receipts, expense reports, and GL data.
- Ability to work with tabular, image-based, and PDF-based financial datasets.
- SQL proficiency; familiarity with financial databases or ERP systems is a plus.
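The "post-processing OCR outputs for structured data extraction" step mentioned above is often regex-driven once the raw text is out of the OCR engine. A hedged sketch follows; the invoice snippet and field patterns are invented for illustration (real layouts vary widely and usually need layout-aware models such as LayoutLM):

```python
import re

# Hypothetical raw text as it might come back from Tesseract or a cloud OCR API
ocr_text = """
INVOICE NO: INV-2024-0091
Date: 12/03/2024
Vendor: Acme Supplies Pvt Ltd
Total Amount: Rs. 45,300.00
"""

# Assumed field patterns — tuned per document template in a real pipeline
PATTERNS = {
    "invoice_no": r"INVOICE\s*NO[:\s]+([A-Z0-9-]+)",
    "date":       r"Date[:\s]+(\d{2}/\d{2}/\d{4})",
    "total":      r"Total\s*Amount[:\s]+Rs\.?\s*([\d,]+\.\d{2})",
}

def extract_fields(text):
    """Pull key fields out of raw OCR output with regex."""
    out = {}
    for field, pattern in PATTERNS.items():
        m = re.search(pattern, text, flags=re.IGNORECASE)
        out[field] = m.group(1) if m else None
    return out

print(extract_fields(ocr_text))
# → {'invoice_no': 'INV-2024-0091', 'date': '12/03/2024', 'total': '45,300.00'}
```

Regex extraction works well for fixed vendor templates; for arbitrary layouts the same key-value extraction is delegated to layout-aware models, with regex kept as a validation layer.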

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview: ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation. ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies, delivering digital solutions built to accelerate business growth across various industries by creating value through innovation.

Job Responsibilities: Architect, design, and optimize big data and AI/ML solutions on the Databricks platform. Develop and implement highly scalable ETL pipelines for processing large datasets. Lead the adoption of Apache Spark for distributed data processing and real-time analytics. Define and enforce data governance, security policies, and compliance standards. Optimize data lakehouse architectures for performance, scalability, and cost-efficiency. Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights. Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks. Automate data workflows using CI/CD pipelines and infrastructure-as-code practices. Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. 8+ years of hands-on experience in data engineering, including at least 5 years architecting solutions on Databricks and Apache Spark. Proficiency in SQL, Python, or Scala for data processing and analytics. Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering. Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture. Hands-on experience with CI/CD tools and DevOps best practices. Familiarity with data security, compliance, and governance best practices. Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer). Hands-on experience with MLflow, Feature Store, or Databricks SQL. Exposure to Kubernetes, Docker, and Terraform. Experience with streaming data architectures (Kafka, Kinesis, etc.). Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker). Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
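The Delta Lake responsibilities above follow the medallion pattern: raw "bronze" records are cleaned into "silver", then aggregated into "gold" tables. A minimal pure-Python sketch of that flow is below; the record fields and values are invented for illustration, and a real Databricks pipeline would express each stage with PySpark and Delta Lake rather than plain dicts.

```python
# Illustrative medallion-style refinement in plain Python, a stand-in for
# what a Databricks pipeline would express with PySpark and Delta Lake.
# Record fields ("store_id", "amount") are hypothetical.

def to_silver(bronze_rows):
    """Clean raw 'bronze' records: drop malformed rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None or row.get("store_id") is None:
            continue  # quality gate: discard incomplete records
        silver.append({"store_id": row["store_id"], "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into a 'gold' summary keyed by store."""
    totals = {}
    for row in silver_rows:
        totals[row["store_id"]] = totals.get(row["store_id"], 0.0) + row["amount"]
    return totals

bronze = [
    {"store_id": "s1", "amount": "10.5"},
    {"store_id": "s1", "amount": "4.5"},
    {"store_id": None, "amount": "99"},   # malformed: filtered out in silver
    {"store_id": "s2", "amount": "7.0"},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'s1': 15.0, 's2': 7.0}
```

In a lakehouse each stage would be persisted as its own Delta table, so downstream consumers read governed, already-validated data instead of raw ingests.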

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

🚀 We’re Hiring: AI/ML Technical Coach (Remote | Contract)
📍 Location: Remote
📅 Experience: 5+ Years in AI/ML
🏢 Company: Krosbridge

Are you an AI/ML expert who loves mentoring and guiding future tech minds? We’re seeking a passionate AI/ML Technical Coach to teach, mentor, and shape aspiring engineers through interactive sessions, real-world projects, and cutting-edge AI tools.

💡 What You’ll Do • Lead live workshops & bootcamps on AI/ML, Deep Learning, LLMs, and MLOps • Mentor learners through 1:1s, group discussions, and project feedback • Review hands-on projects and help learners build job-ready portfolios • Stay on top of AI trends and contribute to curriculum improvements

🧠 What You Need • Solid Python skills (scikit-learn, TensorFlow, PyTorch, HuggingFace) • Experience across the ML pipeline (data → training → deployment) • Strong communication & mentoring skills • Bonus if you’ve worked with LLMs, LangChain, MLflow, or similar tools

🎯 Good to Have • Experience in teaching, mentoring, or coaching • Contributions to GitHub, Kaggle, or AI communities • Certifications or a degree in AI/ML/Data Science

🌟 Why Join Us? • Be a mentor and role model for tomorrow’s AI engineers • Run your own workshops & be featured as a tech coach • Flexible work, creative freedom, and real impact

📩 Interested? Drop your resume at contact@krosbridge.com or DM us to know more! Let’s build the future of AI together.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Generative AI Architect
Experience: 10+ Years
Location: Noida, Mumbai, Pune, Chennai, Gurgaon (Hybrid)
Contract Duration: Short Term
Work Time: IST Shift

Job Purpose: We are seeking a highly skilled Generative AI Architect to lead the design, development, and deployment of cutting-edge GenAI solutions across enterprise-grade applications. This role demands deep expertise in large language models (LLMs), prompt engineering, and scalable AI system architecture, along with hands-on experience in MLOps, cloud, and data engineering.

Key Responsibilities: Design and implement scalable, secure GenAI solutions using LLMs such as GPT, Claude, LLaMA, or Mistral. Architect Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, Weaviate, FAISS, or ElasticSearch. Lead prompt engineering and evaluation frameworks for accuracy, safety, and contextual relevance. Collaborate with product, engineering, and data teams to integrate GenAI into existing applications and workflows. Build reusable GenAI modules such as function calling, summarization engines, Q&A bots, and document chat solutions. Deploy and optimize GenAI workloads on AWS Bedrock, Azure OpenAI, and Vertex AI. Ensure robust monitoring, logging, and observability using Grafana, OpenTelemetry, and Prometheus. Apply MLOps practices including CI/CD for AI pipelines, model versioning, validation, and rollback. Research and prototype innovations such as multi-agent systems, autonomous agents, and fine-tuning methods. Implement security best practices, data governance, and compliance protocols such as PII masking, encryption, and audit logs.

Required Skills & Experience: 8+ years in AI/ML with at least 2–3 years in LLMs or Generative AI. Proficient in Python with experience in Transformers (Hugging Face), LangChain, and OpenAI SDKs. Strong knowledge of vector databases such as Pinecone, Weaviate, FAISS, and Qdrant. Experience working with AWS (SageMaker, Bedrock), Azure (OpenAI), and GCP (Vertex AI). Hands-on expertise in RAG pipelines, summarization, and chat-based applications. Familiarity with LLM orchestration frameworks such as LangGraph, AutoGen, and CrewAI. Understanding of MLOps tools: MLflow, Airflow, Docker, Kubernetes, FastAPI. Exposure to prompt-injection mitigation, hallucination control, and LLMOps practices. Ability to evaluate GenAI solutions using BERTScore, BLEU, and GPTScore. Strong communication skills with experience in architecture leadership and mentoring.

Preferred (Nice to Have): Experience fine-tuning open-source LLMs (LLaMA, Mistral, Falcon) using LoRA or QLoRA. Knowledge of multi-modal AI systems (text-image, voice assistants). Domain-specific LLM knowledge in Healthcare, BFSI, Legal, or EdTech. Contributions to published work, patents, or open-source GenAI projects.
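At the heart of the RAG pipelines this role describes is a retrieval step: embed the query, rank stored chunks by similarity, and pass the top hits to the LLM as context. A toy sketch is below; the corpus, the 3-dimensional vectors, and the chunk names are all invented, and a production system would use a real embedding model plus a vector store such as FAISS or Weaviate instead of a dict.

```python
# Minimal sketch of the retrieval step in a RAG pipeline, using toy
# hand-written embeddings. Everything here (corpus, vectors, dimensions)
# is illustrative, not a real vector-store API.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical document chunks with toy 3-dimensional embeddings.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    """Rank chunks by similarity to the query; keep top-k as LLM context."""
    ranked = sorted(corpus, key=lambda doc: cosine(corpus[doc], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.0]))  # ['refund policy', 'warranty terms']
```

The retrieved chunk texts would then be concatenated into the prompt, which is what grounds the model's answer and reduces hallucination.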

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform Guide delivery of complex ML systems that align with product and platform goals Balance scientific rigor with practical engineering Define model lifecycle, tooling, and architectural direction Requirements Skills & Experience Advanced ML: Supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch Engineering: Feature pipelines, model serving, CI/CD, production deployment Leadership: Mentorship, architectural alignment across subsystems, experimentation strategy Communication: Translate ML results into business impact Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. 
Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities Act as both a hands-on tech lead and product manager Deliver data/ML platforms and pipelines in a Databricks-Azure environment Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science Translate business needs into product strategy and technical delivery with a platform-first mindset Requirements Skills & Experience Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure Product: Agile delivery, discovery cycles, outcome-focused planning, trunk-based development Collaboration: Able to coach engineers, work with cross-functional teams, and drive self-service platforms Communication: Clear in articulating decisions, roadmap, and priorities Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. 
Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform Guide delivery of complex ML systems that align with product and platform goals Balance scientific rigor with practical engineering Define model lifecycle, tooling, and architectural direction Requirements Skills & Experience Advanced ML: Supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch Engineering: Feature pipelines, model serving, CI/CD, production deployment Leadership: Mentorship, architectural alignment across subsystems, experimentation strategy Communication: Translate ML results into business impact Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. 
Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities Act as both a hands-on tech lead and product manager Deliver data/ML platforms and pipelines in a Databricks-Azure environment Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science Translate business needs into product strategy and technical delivery with a platform-first mindset Requirements Skills & Experience Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure Product: Agile delivery, discovery cycles, outcome-focused planning, trunk-based development Collaboration: Able to coach engineers, work with cross-functional teams, and drive self-service platforms Communication: Clear in articulating decisions, roadmap, and priorities Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. 
Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

You will be working as a Databricks Developer with 3–6 years of experience, based in India, joining our data engineering and AI innovation team. Your main responsibilities will include developing scalable data pipelines using Databricks and Apache Spark; implementing AI/ML workflows with tools like MLflow and AutoML; collaborating with data scientists to deploy models into production; building ETL, data transformation, and model training pipelines; managing Delta Lake architecture; and working closely with cross-functional teams to ensure data quality and governance.

Posted 1 week ago

Apply

2.5 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Site Reliability Engineer
Experience: 2.5 - 4 Years
Location: Hyderabad
Work Mode: Work From Office (5 Days a Week)

Overview: We are seeking a proactive and technically skilled Site Reliability Engineer (SRE-1) with a strong background in Kubernetes and DevOps practices. This role requires a self-starter who is enthusiastic about automation, observability, and enhancing infrastructure reliability.

Key Responsibilities: Manage, monitor, and troubleshoot Kubernetes environments in production. Design, implement, and maintain CI/CD pipelines using tools like Jenkins, ArgoCD, and Ansible. Implement and maintain observability solutions (metrics, logs, traces). Automate infrastructure and operational tasks using scripting languages such as Python, Shell, Groovy, or Ansible. Support and optimize ML workflows, including platforms like MLflow and Kubeflow. Collaborate with cross-functional teams to ensure infrastructure scalability, availability, and performance.

Qualifications: Strong hands-on experience with Kubernetes and container orchestration. Solid understanding of DevOps tools and practices. Experience with observability platforms (e.g., Prometheus, Grafana, ELK, Datadog). Familiarity with MLflow and Kubeflow is a strong plus. CKS (Certified Kubernetes Security Specialist) certification is preferred. Exposure to Big Data environments is an added advantage. Proficiency in scripting with Python, Shell, Groovy, or Ansible. Hands-on experience with tools like Jenkins, Ansible, and ArgoCD.
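One automation pattern central to this kind of SRE work is retrying a health probe with exponential backoff before declaring a service unhealthy. The sketch below uses a stand-in probe function and an injectable sleep so the logic is testable; a real check would hit an HTTP readiness endpoint and page on final failure.

```python
# Hedged sketch of an SRE automation pattern: retry a health probe with
# exponential backoff. The probe is a hypothetical stand-in for a real
# HTTP/readiness check; `sleep` is injectable so tests need no waiting.

def check_with_backoff(probe, retries=4, base_delay=1.0, sleep=lambda s: None):
    """Return True if probe() succeeds within `retries` attempts.

    The delay doubles after each failed attempt (1s, 2s, 4s, ...), which
    gives a flapping service time to recover without hammering it.
    """
    delay = base_delay
    for _ in range(retries):
        if probe():
            return True
        sleep(delay)  # in production: time.sleep(delay)
        delay *= 2
    return False

# Simulated flaky service: fails twice, then recovers.
responses = iter([False, False, True])
print(check_with_backoff(lambda: next(responses)))  # True
```

In practice this wrapper would sit inside a cron job or an operator reconcile loop, with the failure branch emitting a metric that Prometheus alerting rules can act on.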

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Site Reliability Engineer
Experience: 5 - 8 Years
Location: Hyderabad
Work Mode: Work From Office (5 Days a Week)

Overview: We are seeking a proactive and technically skilled Site Reliability Engineer with a strong background in Kubernetes and DevOps practices. This role requires a self-starter who is enthusiastic about automation, observability, and enhancing infrastructure reliability.

Key Responsibilities: Manage, monitor, and troubleshoot Kubernetes environments in production. Design, implement, and maintain CI/CD pipelines using tools like Jenkins, ArgoCD, and Ansible. Implement and maintain observability solutions (metrics, logs, traces). Automate infrastructure and operational tasks using scripting languages such as Python, Shell, Groovy, or Ansible. Support and optimize ML workflows, including platforms like MLflow and Kubeflow. Collaborate with cross-functional teams to ensure infrastructure scalability, availability, and performance.

Qualifications: Strong hands-on experience with Kubernetes and container orchestration. Solid understanding of DevOps tools and practices. Experience with observability platforms (e.g., Prometheus, Grafana, ELK, Datadog). Familiarity with MLflow and Kubeflow is a strong plus. CKS (Certified Kubernetes Security Specialist) certification is preferred. Exposure to Big Data environments is an added advantage. Proficiency in scripting with Python, Shell, Groovy, or Ansible. Hands-on experience with tools like Jenkins, Ansible, and ArgoCD.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer / Junior Data Scientist
Location: Bangalore / Pune
Experience: 0–5 Years
Employment Type: Full-Time
Salary: 5–15 LPA (Based on experience and skillset)

About The Role: We are looking for a passionate and driven AI/ML Engineer or Junior Data Scientist to join our growing analytics and product team. You'll work closely with senior data scientists, engineers, and business stakeholders to build scalable AI/ML solutions, extract insights from complex datasets, and develop models that improve real-world decision-making. Whether you're a fresher with solid projects or a professional with up to 5 years of experience, if you're enthusiastic about AI/ML and data science, we want to hear from you!

Key Responsibilities: Collect, clean, preprocess, and analyze structured and unstructured data from multiple sources. Design, implement, and evaluate machine learning models for classification, regression, clustering, NLP, or recommendation systems. Collaborate with data engineers to deploy models in production (using Python, APIs, or cloud services like AWS/GCP). Visualize results and present actionable insights through dashboards, reports, and presentations. Conduct experiments, hypothesis testing, and A/B tests to optimize models and business outcomes. Develop scripts and reusable tools for automation and scalability of ML pipelines. Stay updated with the latest research papers, open-source tools, and trends in AI/ML.

Required Skills & Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, Mathematics, Statistics, or related fields. Strong Python programming skills with experience in libraries like NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch. Proficiency in data analysis and visualization (using tools like Matplotlib, Seaborn, Plotly, or Power BI/Tableau). Solid understanding of ML algorithms (linear regression, decision trees, random forests, SVMs, neural networks). Experience with SQL and working with large datasets. Exposure to cloud platforms (AWS, GCP, or Azure) and APIs is a plus. Knowledge of NLP, computer vision, or generative AI models is desirable. Strong problem-solving skills, attention to detail, and ability to work in agile teams.

Good To Have (Bonus Points): Experience in the end-to-end ML model lifecycle (development to deployment). Experience with MLOps tools like MLflow, Docker, or CI/CD. Participation in Kaggle competitions or open-source contributions. Certifications in Data Science, AI/ML, or Cloud Platforms.

What We Offer: A dynamic and collaborative work environment. Opportunities to work on cutting-edge AI projects. Competitive salary and growth path. Training, mentorship, and access to tools and resources. Flexible work culture and supportive teams. (ref:hirist.tech)

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As the MLOps Engineering Director on the Horizontal Data Science Enablement Team within SSO Data Science, you will play a crucial role in managing the Databricks platform for the entire organization and leading best practices in MLOps. Your responsibilities will include overseeing the administration, configuration, and maintenance of Databricks clusters and workspaces. You will continuously monitor the clusters for high workloads or excessive usage costs, ensuring the overall health of the clusters and addressing any issues promptly. Implementing and managing security protocols to safeguard sensitive information and facilitating the integration of various data sources into Databricks will be key aspects of your role. Collaborating closely with data engineers, data scientists, and stakeholders, you will provide support for data processing and analytics needs. Maintaining comprehensive documentation of Databricks configurations, processes, and best practices, as well as leading participation in security and architecture reviews, will be part of your responsibilities. Additionally, you will bring MLOps expertise to the table in areas such as model monitoring, feature catalogs/stores, model lineage maintenance, and CI/CD pipelines.

To excel in this role, you should possess a Master's degree in computer science or a related field, along with strong experience in Databricks management, cloud technologies, and MLOps solutions such as MLflow. Your background should include hands-on experience with industry-standard CI/CD tools and data governance processes, and coding proficiency in languages such as Python, Java, and C++. A systematic problem-solving approach, excellent communication skills, and a sense of ownership and drive are essential qualities for success in this position. You can further set yourself apart with experience in SQL tuning, automation, data observability, and supporting highly scalable systems. Experience operating in a 24x7 environment, self-motivation, creativity in solving software problems, and the ability to ensure system availability across global time zones will further strengthen your profile.

In alignment with Mastercard's corporate security responsibility, you will be expected to adhere to security policies and practices, maintain the confidentiality and integrity of accessed information, report any security violations, and complete mandatory security trainings. By taking on this role, you will contribute to ensuring the efficiency and security of Mastercard's data science operations.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You are a highly experienced Senior Python & AI Engineer who will be responsible for leading the development of cutting-edge AI/ML solutions. Your role will involve architecting solutions, driving technical strategy, mentoring team members, and ensuring timely delivery of key projects. As a Technical Leader, you will architect, design, and implement scalable AI/ML systems and backend services using Python. You will also oversee the design and development of machine learning pipelines, APIs, and model deployment workflows. Your responsibilities will include reviewing code, establishing best practices, and driving technical quality across the team. In terms of Team Management, you will lead a team of data scientists, ML engineers, and Python developers. Providing mentorship, coaching, and performance evaluations will be vital. You will facilitate sprint planning, daily stand-ups, and retrospectives using Agile/Scrum practices. Additionally, coordinating with cross-functional teams such as product, QA, DevOps, and UI/UX will be necessary to deliver features on time. Your focus on AI/ML Development will involve developing and fine-tuning models for NLP, computer vision, or structured data analysis based on project requirements. Optimizing model performance and inference using frameworks like PyTorch, TensorFlow, or Hugging Face will be part of your responsibilities. Implementing model monitoring, drift detection, and retraining strategies will also be crucial. Project & Stakeholder Management will require you to work closely with product managers to translate business requirements into technical deliverables. You will own the end-to-end delivery of features, ensuring they meet performance and reliability goals. Providing timely updates to leadership and managing client communication if necessary will also be part of your role. 
Your required skills and experience include a minimum of 5 years of professional experience with Python and at least 2 years working on AI/ML projects. You should have a strong understanding of ML/DL concepts, algorithms, and data preprocessing, with experience in frameworks like PyTorch, TensorFlow, scikit-learn, and FastAPI/Django/Flask; deployment tools like Docker, Kubernetes, and MLflow; and cloud platforms such as AWS/GCP/Azure. On the leadership side, you should have at least 3 years of experience leading engineering or AI teams, along with excellent planning, estimation, and people-management skills. Strong communication and collaboration skills are also required. Preferred qualifications for this role include a Master's or PhD in Computer Science, Data Science, AI/ML, or related fields; exposure to MLOps practices; experience with RAG, LLMs, transformers, or vector databases; and prior experience in fast-paced startup or product environments. In return, you will have the opportunity to lead cutting-edge AI initiatives, enjoy task variety and challenging opportunities, benefit from high autonomy, a flat hierarchy, and fast decision-making, and receive competitive compensation and performance-based incentives.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role: The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business problems, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Responsibilities:

Analytics & Strategy: Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business. Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data. Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions. Utilize cloud setups to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time-series/cross-sectional data.

Operational Excellence: Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project. Structure hypotheses, build thoughtful analyses, develop underlying data models, and bring clarity to previously undefined problems. Partner with Data Engineering to build, design, and maintain core data infrastructure, pipelines, and data workflows to automate dashboards and analyses.

Stakeholder Engagement: Work collaboratively across multiple sets of stakeholders (business functions, Data Engineers, and Data Visualization experts) to deliver on project deliverables. Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats.

Job Requirements:

Education: Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.). Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.).

Relevant Experience: 3–4 years of relevant working experience in a data science/advanced analytics role.

Behavioural Skills: Delivery excellence. Business disposition. Social intelligence. Innovation and agility.

Knowledge: Functional analytics (supply chain analytics, marketing analytics, customer analytics). Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies. Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference). Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference. Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.). Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.). Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.). Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools. Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.). Microsoft Office applications (MS Excel, etc.).
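The A/B testing and experimental-design knowledge listed above often comes down to tests like the two-proportion z-test: did variant B convert significantly better than variant A? The sketch below implements it with only the standard library (scipy or statsmodels would normally be used), and the conversion counts are made up for illustration.

```python
# Stdlib-only two-proportion z-test, a sketch of the A/B testing skill
# the posting lists. Conversion counts are invented; real analyses would
# typically use scipy.stats or statsmodels.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for a difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions for A, 160/2400 for B.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=160, n_b=2400)
significant = p < 0.05  # reject H0 at the 5% level
```

For this toy data the test is significant at the 5% level, but a real experiment would also fix the sample size in advance (power analysis) rather than peeking at p-values mid-run.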

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

maharashtra

On-site

As a Subject Matter Expert in ML or LLM Ops at CitiusTech, you will play a crucial role in guiding the organization's MLOps and LLMOps strategy. Your responsibilities will include providing strategic direction, technical leadership, and overseeing the development of scalable infrastructure and processes to ensure alignment with business objectives. Your expertise will be instrumental in designing and implementing robust pipelines for ML and LLM models, focusing on efficient deployment and continuous optimization. You will collaborate with cross-functional teams to seamlessly integrate ML and LLM models into production workflows and communicate complex technical concepts clearly to both technical and non-technical audiences. The ideal candidate should have a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, or equivalent experience, with a minimum of 10-15 years of experience in MLOps or LLM Ops. Proficiency in MLOps tools and platforms such as Kubernetes, Docker, Jenkins, Git, MLflow, and LLM-specific tooling is essential. Experience with Cloud platforms like AWS, Azure, Google Cloud, and infrastructure as code principles is desired. In addition to technical skills, excellent communication, collaboration, and problem-solving abilities are crucial for this role. A passion for innovation, the ability to translate technical concepts into clear language, and a drive to optimize ML and LLM workflows are key attributes that we are looking for in potential candidates. CitiusTech offers a dynamic work environment focused on continuous learning, work-life balance, and a culture that values Passion, Respect, Openness, Unity, and Depth (PROUD) of knowledge. Rated as a Great Place to Work, we provide a comprehensive set of benefits to support your career growth and personal well-being. Our EVP, "Be You Be Awesome," reflects our commitment to creating a workplace where employees can thrive personally and professionally. 
Join CitiusTech to be part of a team that is solving healthcare challenges and positively impacting human lives. Experience faster growth, higher learning, and a stronger impact with us. To know more about CitiusTech and explore career opportunities, visit www.citiustech.com. Happy Applying!

Posted 1 week ago

Apply