Jobs
Interviews

1563 MLflow Jobs - Page 9

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

We are seeking a highly skilled AI/ML Engineer to join our team. You will design, implement, and optimize machine learning solutions spanning traditional models, deep learning architectures, and generative AI systems, collaborating with data engineers and cross-functional teams to create scalable, ethical, and high-performance AI/ML solutions that contribute to business growth.

Key responsibilities:
- Develop, implement, and optimize AI/ML models using both traditional machine learning and deep learning techniques
- Design and deploy generative AI models for innovative business applications
- Work closely with data engineers to establish and maintain high-quality data pipelines and preprocessing workflows
- Integrate responsible AI practices to ensure ethical, explainable, and unbiased model behavior
- Develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models
- Optimize large language models (LLMs) for efficient inference, memory usage, and performance
- Collaborate with product managers, data scientists, and engineering teams to integrate AI/ML into core business processes
- Rigorously test, validate, and benchmark models to ensure accuracy, reliability, and robustness

Required qualifications:
- Strong foundation in machine learning, deep learning, and statistical modeling techniques
- Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks
- Proficiency in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker
- Experience deploying generative AI solutions and an understanding of responsible AI concepts
- Solid experience with MLOps pipelines and with optimizing transformer models or LLMs for production workloads
- Familiarity with cloud services (AWS, GCP, Azure) and containerized deployments (Docker, Kubernetes)
- Excellent problem-solving and communication skills; ability to work collaboratively with cross-functional teams

Preferred qualifications:
- Experience with data versioning tools like DVC or LakeFS
- Exposure to vector databases and retrieval-augmented generation (RAG) pipelines
- Knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs
- Familiarity with Agile workflows and sprint-based delivery
- Contributions to open-source AI/ML projects or published papers in conferences/journals

Join our team at Lucent Innovation, an India-based IT solutions provider, and enjoy a work environment that promotes work-life balance. With a focus on employee well-being, we offer 5-day workweeks, flexible working hours, and a range of indoor/outdoor activities, employee trips, and celebratory events throughout the year. At Lucent Innovation, we value our employees' growth and success, providing in-house training as well as quarterly and yearly rewards and appreciation.

Perks:
- 5-day workweeks
- Flexible working hours
- No hidden policies
- Friendly working environment
- In-house training
- Quarterly and yearly rewards & appreciation

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Intelligent Image Management Inc. (IIMI) is an IT services company that reimagines and digitizes data through document automation using modern, cloud-native app development. One of the world's leading multinational IT services companies, IIMI has offices in the USA, Singapore, India, Sri Lanka, Bangladesh, Nepal, and Kenya, employs over 7,000 people worldwide, and has a mission to advance data process automation. US and European Fortune 500 companies are among our clients.

Become part of a team that puts its people first. Founded in 1996, Intelligent Image Management Inc. has always believed in its people. We strive to foster an environment where all feel welcome, supported, and empowered to be innovative and reach their full potential. Website: https://www.iimdirect.com/

About the Role:
We are looking for a highly experienced and driven Senior Data Scientist to join our advanced AI and Data Science team. You will play a key role in building and deploying machine learning models, especially in the areas of computer vision, document image processing, and large language models (LLMs). This role requires a combination of hands-on technical skills and the ability to design scalable ML solutions that solve real-world business problems.

Key Responsibilities:
- Design and develop end-to-end machine learning pipelines, from data preprocessing and feature engineering to model training, evaluation, and deployment
- Lead complex ML projects using deep learning, computer vision, and document analysis methods (e.g., object detection, image classification, segmentation, layout analysis)
- Build solutions for document image processing using tools like Google Cloud Vision, AWS Textract, and OCR libraries
- Apply large language models (LLMs), both open-source (e.g., LLaMA, Mistral, Falcon, GPT-NeoX) and closed-source (e.g., OpenAI GPT, Claude, Gemini), to automate text understanding, extraction, summarization, classification, and question-answering tasks
- Integrate LLMs into applications for intelligent document processing, NER, semantic search, embeddings, and chat-based interfaces
- Use Python (along with libraries such as OpenCV, PyTorch, TensorFlow, Hugging Face Transformers) for building scalable, multi-threaded data processing pipelines
- Implement and maintain MLOps practices using tools such as MLflow, AWS SageMaker, GCP AI Platform, and containerized deployments
- Collaborate with engineering and product teams to embed ML models into scalable production systems
- Stay up to date with emerging research and best practices in machine learning, LLMs, and document AI

Required Qualifications:
- Bachelor's or master's degree in Computer Science, Mathematics, Statistics, Engineering, or a related field
- Minimum 5 years of experience in machine learning, data science, or AI engineering roles
- Strong background in deep learning, computer vision, and document image processing
- Practical experience with LLMs (open and closed source), including fine-tuning, prompt engineering, and inference optimization
- Solid grasp of MLOps, model versioning, and model lifecycle management
- Expertise in Python, with strong knowledge of ML and CV libraries; experience with Java and multi-threading is a plus
- Familiarity with NLP tasks including named entity recognition, classification, embeddings, and text summarization
- Experience with cloud platforms (AWS/GCP) and their ML toolkits

Preferred Skills:
- Experience with retrieval-augmented generation (RAG), vector databases, and LLM evaluation tools
- Exposure to CI/CD for ML workflows and best practices in production ML
- Ability to mentor junior team members and lead cross-functional AI projects

Work Location: Work from Office
Send cover letter, complete resume, and references to: tech.jobs@iimdirect.com
Industry: Outsourcing/Offshoring
Employment Type: Full-time

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description
About Sutherland: Artificial intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they're our core expertise. We work with iconic brands worldwide, bringing them a unique value proposition through market-leading technology and business process excellence. We've created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless "as a service" model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships.

Sutherland: Unlocking digital performance. Delivering measurable results.

Job Description
We are looking for a proactive and detail-oriented AI Ops Engineer to support the deployment, monitoring, and maintenance of AI/ML models in production. Reporting to the AI Developer, this role will focus on MLOps practices including model versioning, CI/CD, observability, and performance optimization in cloud and hybrid environments.

Key Responsibilities:
- Build and manage CI/CD pipelines for ML models using platforms like MLflow, Kubeflow, or SageMaker
- Monitor model performance and health using observability tools and dashboards
- Ensure automated retraining, version control, rollback strategies, and audit logging for production models
- Support deployment of LLMs, RAG pipelines, and agentic AI systems in scalable, containerized environments
- Collaborate with AI Developers and Architects to ensure reliable and secure integration of models into enterprise systems
- Troubleshoot runtime issues, latency, and accuracy drift in model predictions and APIs
- Contribute to infrastructure automation using Terraform, Docker, Kubernetes, or similar technologies

Qualifications
Required Qualifications:
- 3-5 years of experience in DevOps, MLOps, or platform engineering roles with exposure to AI/ML workflows
- Hands-on experience with deployment tools like Jenkins, Argo, GitHub Actions, or Azure DevOps
- Strong scripting skills (Python, Bash) and familiarity with cloud environments (AWS, Azure, GCP)
- Understanding of containerization, service orchestration, and monitoring tools (Prometheus, Grafana, ELK)
- Bachelor's degree in Computer Science, IT, or a related field

Preferred Skills:
- Experience supporting GenAI or LLM applications in production
- Familiarity with vector databases, model registries, and feature stores
- Exposure to security and compliance standards in model lifecycle management
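The rollback strategies this posting refers to often reduce to a simple promotion gate in the deployment pipeline. A plain-Python sketch (the metric name and tolerance are invented for illustration): promote a candidate model only if it does not degrade the production metric beyond an allowed tolerance, otherwise keep the current version.

```python
# Hypothetical promotion/rollback gate for a model deployment pipeline:
# compare a candidate model's offline metric against production and decide.
def promote_or_rollback(prod_accuracy: float,
                        candidate_accuracy: float,
                        tolerance: float = 0.01) -> str:
    """Return 'promote' if the candidate is no worse than production
    minus the allowed tolerance; otherwise 'rollback'."""
    if candidate_accuracy >= prod_accuracy - tolerance:
        return "promote"
    return "rollback"

# Candidate is within 0.01 of the production metric, so the gate passes.
decision = promote_or_rollback(prod_accuracy=0.91, candidate_accuracy=0.905)
print(decision)  # prints "promote"
```

In practice the same decision is usually wired into a CI/CD step that flips a model-registry stage (e.g., staging to production) rather than returning a string.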

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are seeking a highly motivated AI/ML Engineer to design, develop, and deploy machine learning solutions that solve real-world problems. The ideal candidate should have strong foundations in machine learning algorithms and Python programming, and experience with model development, data pipelines, and production deployment in cloud or on-prem environments.

Key Responsibilities
- Design and implement machine learning models and AI solutions for business use cases
- Build and optimize data preprocessing pipelines for training and inference
- Train, evaluate, and fine-tune supervised, unsupervised, and deep learning models
- Collaborate with data engineers, product teams, and software developers
- Deploy ML models into production using APIs, Docker, or cloud-native tools
- Monitor model performance and retrain/update models as needed
- Document model architectures, experiments, and performance metrics
- Research and stay updated on new AI/ML trends and tools

Required Skills and Experience
- Strong programming skills in Python (NumPy, Pandas, scikit-learn, etc.)
- Experience with deep learning frameworks like TensorFlow, Keras, or PyTorch
- Solid understanding of machine learning algorithms, data structures, and statistics
- Experience with NLP, computer vision, or time series analysis is a plus
- Familiarity with tools like Jupyter, MLflow, or Weights & Biases
- Understanding of Docker, Git, and RESTful APIs
- Experience with cloud platforms such as AWS, GCP, or Azure
- Strong problem-solving and communication skills

Nice to Have
- Experience with MLOps tools and concepts (CI/CD for ML, model monitoring)
- Familiarity with big data tools (Spark, Hadoop)
- Knowledge of FastAPI, Flask, or Streamlit for ML API development
- Understanding of transformer models (e.g., BERT, GPT) or LLM integration

Education
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field
- Certifications in Machine Learning/AI (e.g., Google ML Engineer, AWS ML Specialty) are a plus
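The preprocessing-pipeline and training duties described above can be sketched with a scikit-learn `Pipeline` (the synthetic data and model choice here are hypothetical): the same fitted object handles scaling at both training and inference time, which avoids train/serve skew.

```python
# Hypothetical preprocessing + model pipeline: the scaler fitted during
# training is reapplied automatically whenever the pipeline predicts.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # preprocessing step
    ("clf", LogisticRegression(max_iter=500)),   # model step
])
pipe.fit(X_tr, y_tr)
score = pipe.score(X_te, y_te)
print(f"held-out accuracy: {score:.3f}")
```

Because the whole pipeline is one estimator, it can be serialized and deployed as a single artifact behind an API.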

Posted 1 week ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh

Remote

Role Summary:
The AI/ML Platform Engineering Lead is a pivotal leadership role responsible for managing the day-to-day operations and development of the AI/ML platform team. In this role, you will guide the team in designing, building, and maintaining scalable platforms, while collaborating with other engineering and data science teams to ensure successful model deployment and lifecycle management.

Key Responsibilities:
- Lead and manage a team of platform engineers in developing and maintaining robust AI/ML platforms
- Define and implement best practices for machine learning infrastructure, ensuring scalability, performance, and security
- Collaborate closely with data scientists and DevOps teams to optimize the ML lifecycle from model training to deployment
- Establish and enforce standards for platform automation, monitoring, and operational efficiency
- Serve as the primary liaison between engineering teams, product teams, and leadership
- Mentor and develop junior engineers, providing technical guidance and performance feedback
- Stay abreast of the latest advancements in AI/ML infrastructure and integrate new technologies where applicable

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of experience in Python and Node.js development and infrastructure
- Proven experience in leading engineering teams and driving large-scale projects
- Extensive expertise in cloud infrastructure (AWS, GCP, Azure), MLOps tools (e.g., Kubeflow, MLflow), and infrastructure as code (Terraform)
- Strong programming skills in Python and Node.js, with a proven track record of building scalable and maintainable systems that support AI/ML workflows
- Hands-on experience with monitoring and observability tools, such as Datadog, to ensure platform reliability and performance
- Strong leadership and communication skills with the ability to influence cross-functional teams
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment

Job Type: Full-time
Benefits: Commuter assistance, flexible schedule, health insurance, life insurance, paid sick time, paid time off, Provident Fund, work from home
Ability to commute/relocate: Noida, Uttar Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): What are your salary expectations? What is your notice period?
Location: Noida, Uttar Pradesh (Preferred)
Work Location: In person

Posted 1 week ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Work Mode: Hybrid. Work Location: Chennai / Hyderabad. Work Timing: 2 PM to 11 PM. Primary Skill: Data Scientist

We are seeking a skilled Data Scientist with strong expertise in Python programming and Amazon SageMaker to join our data team. The ideal candidate will have a solid foundation in machine learning, data analysis, and cloud-based model deployment. You will work closely with cross-functional teams to build, deploy, and optimize predictive models and data-driven solutions at scale.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
- 12+ years of experience in data science or machine learning roles
- Proficiency in Python and popular ML libraries (e.g., scikit-learn, pandas, NumPy)
- Hands-on experience with Amazon SageMaker for model training, tuning, and deployment
- Strong understanding of supervised and unsupervised learning techniques
- Experience working with large datasets and cloud platforms (AWS preferred)
- Excellent problem-solving and communication skills
- Experience with AWS services beyond SageMaker (e.g., S3, Lambda, Step Functions)
- Familiarity with deep learning frameworks like TensorFlow or PyTorch
- Exposure to MLOps practices and tools (e.g., CI/CD for ML, MLflow, Kubeflow)
- Knowledge of version control (e.g., Git) and agile development practices

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities:
- Design and manage CI/CD pipelines for both software applications and machine learning workflows
- Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar
- Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.)
- Build robust monitoring, logging, and alerting systems for AI applications
- Manage containerized services with Docker and orchestration platforms like Kubernetes
- Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment
- Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure)
- Ensure system reliability, scalability, and security across all environments

Requirements:
- 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads
- Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI)
- Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks
- Experience with ML pipelines, model versioning, and ML monitoring tools
- Scripting skills in Python, Bash, or similar for automation tasks
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.)
- Understanding of ML lifecycle management and reproducibility

Preferred Qualifications:
- Experience with Kubeflow, MLflow, DVC, or Triton Inference Server
- Exposure to data versioning, feature stores, and model registries
- Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus
- Background in software engineering, data engineering, or ML research is a bonus

What We Offer:
- Work on cutting-edge AI platforms and infrastructure
- Cross-functional collaboration with top ML, research, and product teams
- Competitive compensation package, no constraints for the right candidate

Send mail to: thasleema@qcentro.com
Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps and MLOps: 5 years (Required)
Work Location: In person

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

About this role:
Wells Fargo is seeking a Senior Software Engineer. In this role, you will:
- Lead complex technology initiatives, including those that are companywide with broad impact
- Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions for technology engineering disciplines
- Design, code, test, debug, and document for projects and programs
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors
- Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals
- Lead projects and teams, or serve as a peer mentor

Required Qualifications:
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Strong Python programming skills
- Expertise in RPA tools such as UiPath
- Expertise in workflow automation tools such as Power Platform
- Minimum 2 years of hands-on experience in AI/ML and Gen AI
- Proven experience with LLMs (Gemini, GPT, Llama, etc.)
- Extensive experience in prompt engineering and model fine-tuning
- AI/Gen AI certifications from a premier institution
- Hands-on experience in MLOps (MLflow, CI/CD pipelines)

Job Expectations:
- Design and develop AI-driven automation solutions
- Implement AI automation to enhance process automation
- Develop and maintain automation, bots, and AI-based workflows
- Integrate AI automation with existing applications, APIs, and databases
- Design, develop, and implement Gen AI applications using LLMs
- Build and optimize prompt engineering workflows
- Fine-tune and integrate pre-trained models for specific use cases
- Deploy models in production using robust MLOps practices

Posting End Date: 23 Jul 2025 *Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.
Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants with Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment and Hiring Requirements: a. Third-Party recordings are prohibited unless authorized by Wells Fargo. b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Posted 1 week ago

Apply

4.0 years

1 - 7 Lacs

Hyderābād

On-site

About this role:
Wells Fargo is seeking a Senior Software Engineer. In this role, you will:
- Lead moderately complex initiatives and deliverables within technical domain environments
- Contribute to large-scale planning of strategies
- Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
- Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
- Resolve moderately complex issues and lead a team to meet existing client needs or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
- Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
- Lead projects and act as an escalation point, providing guidance and direction to less experienced staff

Required Qualifications:
- 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Strong Python programming skills
- Expertise in RPA tools such as UiPath
- Expertise in workflow automation tools such as Power Platform
- Minimum 2 years of hands-on experience in AI/ML and Gen AI
- Proven experience with LLMs (Gemini, GPT, Llama, etc.)
- Extensive experience in prompt engineering and model fine-tuning
- AI/Gen AI certifications from a premier institution
- Hands-on experience in MLOps (MLflow, CI/CD pipelines)

Job Expectations:
- Design and develop AI-driven automation solutions
- Implement AI automation to enhance process automation
- Develop and maintain automation, bots, and AI-based workflows
- Integrate AI automation with existing applications, APIs, and databases
- Design, develop, and implement Gen AI applications using LLMs
- Build and optimize prompt engineering workflows
- Fine-tune and integrate pre-trained models for specific use cases
- Deploy models in production using robust MLOps practices

Posting End Date: 23 Jul 2025 *Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.
Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants with Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment and Hiring Requirements: a. Third-Party recordings are prohibited unless authorized by Wells Fargo. b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Posted 1 week ago

Apply

1.0 - 2.0 years

1 - 5 Lacs

Gurgaon

On-site

Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Associate ML Ops Analyst will be a key player on this team, helping to grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About the role
The incumbent will be responsible for implementing Azure data services to deliver scalable and sustainable solutions, and for building model deployment and monitoring pipelines to meet business needs.

Roles & Responsibilities

Development and Integration
- Collaborate with data scientists to deploy ML models into production environments
- Implement and maintain CI/CD pipelines for machine learning workflows
- Use version control tools (e.g., Git) and ML lifecycle management tools (e.g., MLflow) for model tracking, versioning, and management
- Design, build, and optimize application containerization and orchestration with Docker and Kubernetes on cloud platforms like AWS or Azure

Automation & Monitoring
- Automate pipelines using Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
- Implement model monitoring and alerting systems to track model performance, accuracy, and data drift in production environments

Collaboration and Communication
- Work closely with data scientists to ensure that models are production-ready
- Collaborate with Data Engineering and Tech teams to ensure infrastructure is optimized for scaling ML applications

Optimization and Scaling
- Optimize ML pipelines for performance and cost-effectiveness

Operational Excellence
- Help the Data teams leverage best practices to implement enterprise-level solutions
- Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project
- Help define common coding standards and model monitoring performance best practices
- Continuously evaluate the latest packages and frameworks in the ML ecosystem
- Build automated model deployment data engineering pipelines from plain Python/PySpark code

Stakeholder Engagement
- Collaborate with Data Scientists, Data Engineers, and cloud platform and application engineers to create and implement cloud policies and governance for the ML model life cycle

Job Requirements

Education & Relevant Experience
- Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)
- 1-2 years of relevant working experience in MLOps

Behavioural Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Core computer science concepts such as common data structures and algorithms, and OOP
- Programming languages (R, Python, PySpark, etc.)
- Big data technologies and frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
- Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and Data Engineering tools
- Exposure to ETL tools and version control
- Experience building and maintaining CI/CD pipelines for ML models
- Understanding of machine learning, information retrieval, or recommendation systems
- Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, GitLab)

#LI-DS1
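The data-drift monitoring this posting describes is commonly implemented with a population stability index (PSI) comparing a feature's training-time distribution against its production distribution. A plain-Python sketch (the bin count, sample data, and alert threshold are invented for illustration):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline ('expected') sample and
    a production ('actual') sample, using equal-width bins over the combined
    range. Higher values indicate more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(xs: list[float], b: int) -> float:
        # Fraction of xs in bin b; the last bin also includes the max value.
        left, right = lo + b * width, lo + (b + 1) * width
        n = sum(1 for x in xs if left <= x < right or (b == bins - 1 and x == hi))
        return max(n / len(xs), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

baseline = [0.1 * i for i in range(100)]       # training-time feature values
shifted = [0.1 * i + 3.0 for i in range(100)]  # drifted production values
print(f"PSI: {psi(baseline, shifted):.2f}")
```

A common rule of thumb treats PSI above roughly 0.25 as significant drift worth an alert; production systems would compute this per feature on a schedule and page when the threshold is crossed.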

Posted 1 week ago

Apply

3.0 - 4.0 years

2 - 6 Lacs

Gurgaon

On-site

Job Description Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has footprint across 31 countries and territories. Circle K India Data & Analytics team is an integral part of ACT’s Global Data & Analytics Team, and the Data Scientist/Senior Data Scientist will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. ___________________________________________________________________________________________________________ Department: Data & Analytics Location: Cyber Hub, Gurugram, Haryana (5 days in office) Job Type: Permanent, Full-Time (40 Hours) Reports To: Senior Manager Data Science & Analytics ____________________________________________________________________________________________________________ About the role The incumbent will be responsible for delivering advanced analytics projects that drive business results including interpreting business, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables. 
Roles & Responsibilities

Analytics & Strategy
- Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
- Utilize data mining, statistical, and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
- Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions
- Utilize the cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data

Operational Excellence
- Follow industry coding standards and the programming life cycle to ensure consistent practices across the project
- Structure hypotheses, build thoughtful analyses, develop underlying data models, and bring clarity to previously undefined problems
- Partner with Data Engineering to build, design, and maintain core data infrastructure, pipelines, and data workflows to automate dashboards and analyses

Stakeholder Engagement
- Work collaboratively across multiple sets of stakeholders (business functions, Data Engineers, Data Visualization experts) to deliver on project deliverables
- Articulate complex data science models to business teams and present insights in easily understandable and innovative formats

Job Requirements

Education
- Bachelor’s degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master’s degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)
Relevant Experience
- 3-4 years for Data Scientist
- Relevant working experience in a data science/advanced analytics role

Behavioural Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Functional analytics (supply chain analytics, marketing analytics, customer analytics, etc.)
- Statistical modelling using analytical tools (R, Python, KNIME, etc.)
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
- Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker)
- Strong coding proficiency in Python (Pandas, scikit-learn, PyTorch/TensorFlow, etc.)
- Big data technologies and frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
- Enterprise reporting systems; relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems; Data Engineering tools
- Business intelligence and reporting (Power BI, Tableau, Alteryx, etc.)
- Microsoft Office applications (MS Excel, etc.)

Posted 1 week ago

Apply

0 years

4 - 16 Lacs

Gurgaon

On-site

About the Role
We are seeking an experienced Senior DevOps/MLOps Engineer to lead and manage a high-performing engineering team. You will oversee the deployment and scaling of machine learning models and backend services using modern DevOps and MLOps practices. Proficiency in FastAPI, Docker, Kubernetes, and CI/CD is essential.

Key Responsibilities
- Team Leadership: Guide and manage a team of DevOps/MLOps engineers
- FastAPI Deployment: Optimize, containerize, and deploy FastAPI applications at scale
- Infrastructure as Code (IaC): Use tools like Terraform or Helm to manage infrastructure
- Kubernetes Management: Handle multi-environment Kubernetes clusters (GKE, EKS, AKS, or on-prem)
- Model Ops: Manage the ML model lifecycle: versioning, deployment, monitoring, and rollback
- CI/CD Pipelines: Design and maintain robust pipelines for model and application deployment
- Monitoring & Logging: Set up observability tools (Prometheus, Grafana, ELK, etc.)
- Security & Compliance: Ensure secure infrastructure and data pipelines

Required Skills
- FastAPI: Deep understanding of building, scaling, and securing APIs
- Docker & Kubernetes: Expert-level experience in containerization and orchestration
- CI/CD Tools: GitHub Actions, GitLab CI, Jenkins, ArgoCD, or similar
- Cloud Platforms: AWS/GCP/Azure
- Python: Strong scripting and automation skills
- ML Workflow Tools (preferred): MLflow, DVC, Kubeflow, or Seldon

Preferred Qualifications
- Experience managing hybrid cloud/on-premise deployments
- Strong communication and mentoring skills
- Understanding of data pipelines, feature stores, and model drift monitoring

Job Types: Full-time, Permanent
Pay: ₹426,830.06 - ₹1,653,904.80 per year
Work Location: In person
Speak with the employer: +91 9867786230

Posted 1 week ago

Apply

5.0 years

3 - 4 Lacs

Noida

On-site

ROLES & RESPONSIBILITIES

Qualifications and Skills:
- Master’s or Ph.D. in Computer Science, Statistics, Mathematics, Data Science, or a related field
- 5+ years of hands-on experience in data science or machine learning roles
- Strong proficiency in Python or R, with deep knowledge of libraries like scikit-learn, pandas, NumPy, TensorFlow, or PyTorch
- Proficient in SQL and working with relational databases
- Solid experience with Azure cloud platforms and data pipeline tools
- Strong grasp of statistical methods, machine learning algorithms, and model evaluation techniques
- Excellent communication and storytelling skills, with the ability to influence stakeholders
- Proven track record of delivering impactful data science solutions in a business setting

Preferred Qualifications:
- Experience working in industries such as [logistics, aerospace, marketing, etc.]
- Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow)
- Knowledge of data visualization tools (e.g., Tableau, Power BI, Plotly)

Responsibilities and Duties
- Model Development: Design, build, and deploy scalable machine learning models to solve key business challenges (e.g., customer churn, recommendation engines, pricing optimization)
- Data Analysis: Perform exploratory data analysis (EDA), statistical testing, and feature engineering to uncover trends and actionable insights
- Project Leadership: Lead end-to-end data science projects, including problem definition, data acquisition, modeling, and presentation of results to stakeholders
- Cross-functional Collaboration: Partner with engineering, product, marketing, and business teams to integrate models into products and processes
- Mentorship: Guide and mentor junior data scientists and analysts, helping them grow technically and professionally
- Innovation: Stay current with the latest data science techniques, tools, and best practices; evaluate and incorporate new technologies when appropriate
- Communication: Translate complex analyses and findings into clear, compelling narratives for non-technical stakeholders

EXPERIENCE: 8-11 Years
SKILLS
Primary Skill: Data Science
Sub Skill(s): Data Science
Additional Skill(s): Data Science

Posted 1 week ago

Apply

2.0 years

1 - 9 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for an enthusiastic and curious Junior Data Scientist to join the Cloud Nova team. This is an excellent opportunity for someone with 2-3 years of experience to work on exciting projects involving Generative AI (GenAI), Retrieval-Augmented Generation (RAG), and deep learning. You will support senior data scientists and engineers in building and deploying AI models that solve real-world problems.

Primary Responsibilities:
- Assist in developing and testing GenAI models using tools like LangChain and Hugging Face Transformers
- Support the creation of RAG pipelines and embedding-based search systems
- Help prepare datasets and perform exploratory data analysis
- Contribute to model evaluation and performance tracking
- Collaborate with team members to integrate models into applications
- Stay updated on the latest trends in AI and deep learning
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field
- 1+ years of experience in data science, machine learning, or AI projects (internships count)
- Basic understanding of NLP and deep learning concepts
- Willingness to learn and grow in a collaborative environment

Technical Skills:
- Programming: Python, SQL
- AI/ML: PyTorch or TensorFlow, scikit-learn, Hugging Face Transformers
- GenAI Tools: LangChain, LlamaIndex (basic familiarity preferred)
- Data Tools: Pandas, NumPy, Jupyter Notebooks
- Version Control: Git

Preferred Qualifications:
- Experience with vector databases (e.g., FAISS)
- Familiarity with MLOps tools like MLflow or Docker
- Exposure to cloud-based model deployment
- Cloud: Exposure to Azure or AWS

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
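The RAG pipeline and embedding-based search work this posting describes rests on one core operation: ranking documents by vector similarity to a query embedding. A minimal, dependency-free sketch of that retrieval step; the three-dimensional "embeddings" and document texts below are invented for illustration, and a real system would use model-generated embeddings and a vector database such as FAISS:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    # Rank documents by similarity to the query embedding and return
    # the top_k texts: the context handed to the LLM in a RAG pipeline.
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Toy 3-dimensional "embeddings"; real ones come from an embedding model.
corpus = [
    {"text": "refund policy", "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.1, 0.9, 0.1]},
    {"text": "return window", "vec": [0.8, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], corpus))  # ['refund policy', 'return window']
```

In a full RAG pipeline, the returned texts would be stitched into the prompt sent to the LLM rather than printed.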

Posted 1 week ago

Apply

5.0 years

2 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a versatile AI/ML Engineer to join our team, contributing to the design and deployment of scalable AI solutions across the full stack. This role blends machine learning engineering with frontend/backend development and cloud-native microservices. You’ll work closely with data scientists, MLOps engineers, and product teams to bring generative AI capabilities like RAG and LLM-based systems into production.

Primary Responsibility:
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor’s or Master’s in Computer Science, Engineering, or a related field
- 5+ years of experience in AI/ML engineering, full stack development, or MLOps
- Proven experience deploying AI models in production environments
- Solid understanding of microservices architecture and cloud-native development
- Familiarity with Agile/Scrum methodologies

Technical Skills:
- Languages & Frameworks: Python, JavaScript/TypeScript, SQL, Scala
- ML Tools: MLflow, TensorFlow, PyTorch, scikit-learn
- Frontend: React.js, Angular (preferred), HTML/CSS
- Backend: Node.js, Spring Boot, REST APIs
- Cloud: Azure (preferred), UAIS, AWS
- DevOps & MLOps: Git, Jenkins, Docker, Kubernetes, Azure DevOps
- Data Engineering: Apache Spark/Databricks, Kafka, ETL pipelines
- Monitoring: Prometheus, Grafana
- RAG/LLM: LangChain, LlamaIndex, embedding pipelines, prompt engineering

Preferred Qualifications:
- Experience with Spark, Hadoop
- Familiarity with Maven, Spring, XML, Tomcat
- Proficiency in Unix shell scripting and SQL Server

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

2.0 - 3.0 years

8 - 10 Lacs

Noida

On-site

At Trackier, we're revolutionizing the way businesses measure and optimize their marketing performance. As a leading Marketing Analytics & Attribution platform, we empower advertisers, agencies, and ad networks with powerful, real-time insights to drive growth and maximize ROI. In today's complex digital landscape, understanding every touchpoint of the customer journey is paramount. That's where Trackier comes in. Our robust platform provides comprehensive tracking, detailed analytics, and precise attribution models, ensuring you have a clear picture of what's working and why. From performance marketing campaigns to influencer collaborations and beyond, we give you the tools to make data-driven decisions that propel your business forward. We're passionate about transparency, efficiency, and delivering measurable results. Our commitment to innovation means we're constantly evolving our platform to meet the dynamic needs of the industry, helping our clients achieve their growth ambitions with confidence.

Position Summary:
We are seeking a driven and analytically minded AI/ML Engineer with 2-3 years of experience to join our growing team. In this role, you will play a crucial part in the end-to-end lifecycle of our AI solutions, from understanding and preparing complex datasets to developing, deploying, and optimizing robust machine learning models. You will leverage your strong data analysis skills to identify patterns, generate insights, and translate them into effective AI strategies that drive business value. You will work with LLM integrations where necessary to generate insights from data, and enhance chatbots using RAG and similar techniques.

Key Responsibilities:
- Data Understanding & Preparation: Collaborate with data stakeholders to understand business problems and data sources. Perform data loading, cleaning, and preparation, including handling missing values, data type conversions, and ensuring data integrity for large datasets
- Feature Engineering: Identify, extract, and transform relevant features from raw data to optimize model performance
- Model Development: Design, develop, train, and evaluate machine learning models (including deep learning, natural language processing, etc., as relevant to our domain) for various applications
- System Integration: Integrate AI models into existing production systems and applications, ensuring scalability and reliability
- Performance Optimization: Continuously monitor, analyze, and improve the performance, accuracy, and efficiency of AI models in production
- Insight Generation & Communication: Translate complex analytical findings and model outputs into clear, concise, and actionable business insights and recommendations for end users
- Research & Innovation: Stay abreast of the latest advancements in AI/ML research and actively explore new technologies and methodologies to enhance our capabilities
- Deployment & MLOps: Contribute to the development and implementation of MLOps practices, including model versioning, CI/CD for ML, and model monitoring
- Collaboration: Work closely with cross-functional teams, including product managers and software engineers, to define requirements and deliver high-quality AI solutions
- Documentation: Create clear and comprehensive documentation for models, code, and processes

Requirements
- Experience: 2-3 years of professional experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying ML solutions
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, Electrical Engineering, or a related quantitative field
- Programming: Strong proficiency in Python and experience with relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Keras)
- Data Manipulation & Analysis: Demonstrated strong skills in data loading, cleaning, manipulation, and preparation using Pandas and NumPy
- EDA & Visualization: Proven ability to conduct exploratory data analysis and create effective visualizations to communicate insights
- ML Fundamentals: Solid understanding of machine learning principles, algorithms (e.g., supervised, unsupervised, reinforcement learning), and statistical modeling
- Software Engineering: Strong software engineering fundamentals, including experience with version control (Git), testing, and code review practices
- Problem Solving: Excellent analytical and problem-solving skills, with keen attention to detail and the ability to derive actionable insights from data
- Communication: Strong written and verbal communication skills, with the ability to explain complex technical concepts and present data-driven recommendations to both technical and non-technical stakeholders

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure, GCP) and their AI/ML services
- Familiarity with containerization technologies (Docker, Kubernetes)
- Experience with MLOps tools and frameworks (e.g., MLflow, Kubeflow, SageMaker)
- Knowledge of distributed computing frameworks (e.g., Spark)
- Contributions to open-source projects or relevant publications
- Experience with agile development methodologies

Benefits
- Medical insurance
- 5-day working culture
- Best-in-industry salary structure
- Sponsored trips
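As a concrete illustration of the feature engineering responsibility this posting describes, here is a toy sketch deriving model-ready features from a raw marketing event. All field names and the crude magnitude bucket are invented for illustration, not taken from Trackier's platform:

```python
def engineer_features(event):
    # Derive model-ready features from one raw ad event (fields invented).
    return {
        # Click-through ratio, guarded against division by zero.
        "clicks_per_impression": event["clicks"] / max(event["impressions"], 1),
        # Categorical field encoded as a binary indicator.
        "is_mobile": 1 if event["device"] == "mobile" else 0,
        # Crude order-of-magnitude bucket: number of digits in the spend.
        "spend_magnitude": len(str(int(event["spend"]))),
    }

raw = {"clicks": 30, "impressions": 1200, "device": "mobile", "spend": 4500.0}
print(engineer_features(raw))
# {'clicks_per_impression': 0.025, 'is_mobile': 1, 'spend_magnitude': 4}
```

In practice such transforms would run inside a Pandas pipeline over the whole dataset; the point is only that each engineered feature is a deterministic function of the raw record.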

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for an enthusiastic Machine Learning Engineer to join our growing team. The hire will work in collaboration with other data scientists and engineers across the organization to develop production-quality models for a variety of problems across Razorpay. Some possible problems include: making recommendations to merchants from Razorpay’s suite of products, cost optimisation of transactions for merchants, automatic address disambiguation/correction to enable tracking customer purchases using advanced natural language processing techniques, computer vision techniques for auto-verifications, running large-scale bandit experiments to optimize Razorpay’s merchant-facing web pages at scale, and many more. In addition, we expect the MLE to be adept at productionising ML models using state-of-the-art systems. As part of the DS team @ Razorpay, you’ll work with some of the smartest engineers/architects/data scientists/product leaders in the industry and have the opportunity to solve complex and critical problems for Razorpay. As a Senior MLE, you will also have the opportunity to partner with and be mentored by senior engineers across the organization and lay the foundation for a world-class DS team here at Razorpay. Come with the right attitude, and fun and growth are guaranteed!
Required qualifications
- 5+ years of experience doing ML in a production environment and productionising ML models at scale
- Bachelor’s (required) or Master’s in a quantitative field such as computer science, operations research, statistics, mathematics, or physics
- Familiarity with basic machine learning techniques: regression, classification, clustering, model metrics and performance (AUC, ROC, precision, recall and their various flavors)
- Basic knowledge of advanced machine learning techniques: regression, clustering, recommender systems, ranking systems, and neural networks
- Expertise in coding in Python, good knowledge of at least one language from C, C++, and Java, and at least one scripting language (Perl, shell commands)
- Experience with big data tools like Spark and experience working with Databricks/DataRobot
- Experience with AWS’ suite of tools for production-quality ML work, or alternatively familiarity with Microsoft Azure/GCP
- Experience deploying complex ML algorithms to production in collaboration with engineers using Flask, MLflow, Seldon, etc.

Good to have:
- Excellent communication skills and the ability to keep stakeholders informed of progress/blockers
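The model metrics this posting lists (precision, recall and their flavors) reduce to counts over a binary confusion matrix. A small self-contained sketch with made-up labels:

```python
def precision_recall(y_true, y_pred):
    # Counts over the binary confusion matrix; the positive class is 1.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall

# Invented labels for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

AUC/ROC extends the same idea by sweeping the decision threshold and tracing the trade-off between the true-positive and false-positive rates.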

Posted 1 week ago

Apply

5.0 years

10 Lacs

Calcutta

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements of supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure, and is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. The role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading and mentoring junior team members and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment
- Ensure the data pipeline is optimized for the high-volume, high-velocity data required by ML models
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models

AI/ML Model Integration
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended)
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time

Data Architecture & Design
- Design and maintain scalable, efficient, and secure data pipelines and architectures
- Develop data models (both OLTP and OLAP)
- Create and maintain ETL/ELT processes

Data Pipeline Development
- Build automated pipelines to collect, transform, and load data from various sources (internal and external)
- Optimize data flow and collection for cross-functional teams

MLOps Support
- Develop CI/CD pipelines to deploy models into production environments
- Implement model monitoring, alerting, and logging for real-time model predictions

Data Quality & Governance
- Ensure high data quality, integrity, and availability
- Implement data validation, monitoring, and alerting mechanisms
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA)

Tooling & Infrastructure
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, and Airflow
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments

Team Collaboration & Mentorship
- Collaborate with data scientists, analysts, product managers, and other engineers
- Provide technical leadership and mentor junior data engineers

Core Competencies
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering or a related field
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems
- Experience with big data tools and cloud platforms

Soft Skills:
- Strong problem-solving and critical-thinking skills
- Excellent communication and collaboration abilities
- Leadership experience and the ability to guide technical decisions

How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
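The feature-store responsibility in this posting boils down to keyed storage and ordered retrieval of feature values for training and serving. A deliberately simplified in-memory sketch; production systems use tools like Feast or a cloud feature store, and the entity and feature names here are invented:

```python
from collections import defaultdict

class ToyFeatureStore:
    """In-memory feature store: feature values keyed by entity id and feature name."""

    def __init__(self):
        self._store = defaultdict(dict)

    def put(self, entity_id, name, value):
        # Upsert a single feature value for an entity.
        self._store[entity_id][name] = value

    def get_vector(self, entity_id, names):
        # Serve a feature vector in a fixed column order, with None for
        # features not yet computed -- the serving-time contract a model relies on.
        row = self._store.get(entity_id, {})
        return [row.get(n) for n in names]

fs = ToyFeatureStore()
fs.put("merchant_42", "avg_txn_value", 315.0)
fs.put("merchant_42", "txn_count_7d", 128)
print(fs.get_vector("merchant_42", ["avg_txn_value", "txn_count_7d", "chargeback_rate"]))
# [315.0, 128, None]
```

A real feature store adds what this toy omits: persistence, point-in-time correctness for training sets, and low-latency online serving.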

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsibilities
- Build and deploy production-grade ML pipelines for varied use cases across operations, manufacturing, supply chain, and more
- Work hands-on in designing, training, and fine-tuning models across traditional ML, deep learning, and GenAI (LLMs, diffusion models, etc.)
- Collaborate with data scientists to transform exploratory notebooks into scalable, maintainable, and monitored deployments
- Implement CI/CD pipelines, version control, and experiment tracking using tools like MLflow, DVC, or similar
- Perform shadow deployment and A/B testing of production models
- Partner with data engineers to build data pipelines that support real-time or batch model inference
- Ensure high availability, performance, and observability of deployed ML solutions using MLOps best practices
- Conduct code reviews and performance tuning, and contribute to ML infrastructure improvements
- Support the end-to-end lifecycle of ML products
- Contribute to knowledge sharing, reusable component development, and internal upskilling initiatives

Qualifications
- Bachelor's in Computer Science, Engineering, Data Science, or a related field; Master’s degree preferred
- 4-6 years of experience developing and deploying machine learning models, with significant exposure to MLOps practices
- Experience implementing and productionizing generative AI applications using LLMs (e.g., OpenAI, Hugging Face, LangChain, RAG architectures)
- Strong programming skills in Python; familiarity with ML libraries such as scikit-learn, TensorFlow, PyTorch
- Hands-on experience with tools like MLflow, Docker, Kubernetes, FastAPI/Flask, Airflow, Git, and cloud platforms (Azure/AWS)
- Solid understanding of software engineering fundamentals and DevOps/MLOps workflows
- Exposure to at least 2-3 industry domains (energy, manufacturing, finance, etc.) preferred
- Excellent problem-solving skills, an ownership mindset, and the ability to work in agile cross-functional teams
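Shadow deployment, mentioned in the responsibilities above, means serving the incumbent model's prediction while silently running the candidate on the same inputs and logging both for offline comparison. A minimal sketch with stand-in threshold models; all names and thresholds are illustrative:

```python
def shadow_serve(request, primary, candidate, log):
    # Serve the primary model's output; run the candidate in "shadow"
    # on the same input and record both so disagreement can be analyzed
    # offline before any traffic is switched over.
    served = primary(request)
    shadowed = candidate(request)
    log.append({"input": request, "primary": served,
                "candidate": shadowed, "agree": served == shadowed})
    return served

# Stand-in models: binary classifiers with different score cut-offs.
primary = lambda score: int(score > 0.5)
candidate = lambda score: int(score > 0.4)

log = []
for score in [0.3, 0.45, 0.9]:
    shadow_serve(score, primary, candidate, log)
print(sum(e["agree"] for e in log), "of", len(log), "predictions agree")
```

Because the candidate never affects what is served, this is a zero-risk way to gather production evidence before graduating to an A/B test.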

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Biz2X
Biz2X is the leading digital lending platform, enabling financial providers to power growth with a modern omni-channel experience, best-in-class risk management tools, and a comprehensive yet flexible servicing engine. The company partners with financial institutions to support their digital transformation efforts with Biz2X’s digital lending platform. Biz2X solutions not only reduce operational expenses but accelerate lending growth by significantly improving the client experience, reducing total turnaround time, and equipping relationship managers with powerful monitoring insights and alerts.
Read Our Latest Press Release: Press Release - Biz2X

About Biz2Credit
Biz2Credit is a digital-first provider of small business funding. Biz2Credit leverages data, cash flow insights, and the latest technology to give business owners an automated small business funding platform. Since its inception, Biz2Credit has been the best place for small businesses to get funding online. With over 750 employees globally, our team of top-notch engineers, marketers, and data scientists is building the next generation in business lending solutions.
Read Our Latest Press Release: Biz2Credit in the News - Biz2Credit
Learn More: www.biz2x.com & www.biz2credit.com

Role: Lead Engineer – AI, Machine Learning

Job Overview:
We are seeking a Lead Engineer to drive the development and deployment of sophisticated AI solutions in our fintech products. You will lead a team of engineers, oversee MLOps pipelines, and manage large language models (LLMs) to enhance our financial technology services.

Key Responsibilities:
- AI/ML Development: Design and implement advanced ML models for applications including fraud detection, credit scoring, and algorithmic trading
- MLOps: Develop and manage MLOps pipelines using tools such as MLflow, Kubeflow, and Airflow for CI/CD, model monitoring, and automation
- LLMOps: Optimize and operationalize LLMs (e.g., GPT-4, BERT) for fintech applications like automated customer support and sentiment analysis
- Technical Leadership: Mentor and lead a team of ML engineers and data scientists, conducting code reviews and ensuring best practices
- Collaboration: Work with product managers, data engineers, and business analysts to align technical solutions with business objectives
- Experience in building RAG pipelines

Qualifications:
- Experience: 7-9 years in AI, ML, MLOps, and LLMOps with a focus on fintech
- Technical Skills: Expertise in TensorFlow, PyTorch, scikit-learn, and MLOps tools (MLflow, Kubeflow)
- Proficiency in large language models (LLMs) and cloud platforms (AWS, GCP, Azure)
- Strong programming skills in Python, Java, or Scala

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Scientists for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards.
- Contract-based roles with project timelines of 2-12 months, or freelancing.
- Membership in an elite community of professionals who solve complex AI challenges.
- Work location: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
- Lead the design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
- Architect advanced machine learning models (deep learning, reinforcement learning, ensembles).
- Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
- Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
- Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models.

Required Skills:
- Expert in Python and its data science libraries (Pandas, NumPy, scikit-learn), plus R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
- Proficient in building scalable data pipelines (Apache Spark, Dask) and working with cloud platforms (AWS, GCP, Azure).
- Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
- Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
- Contributions to open-source projects or published research.

What are the next steps?
1. Register on the Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all screening rounds (assessments, interviews), you will be added to our Expert Community.
4. Profile matching and project allocation: be patient while we align your skills and preferences with available projects.

Skip the noise. Focus on opportunities built for you!
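The A/B testing this role asks for usually reduces to comparing two conversion rates. As a rough illustration only (the counts below are hypothetical, and a real analysis would use a library such as statsmodels), here is a two-proportion z-test in pure Python:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. control A at 100/1000.
z, p = two_proportion_z_test(100, 1000, 120, 1000)
print(f"z = {z:.3f}, p = {p:.3f}")
```

With these made-up numbers the p-value lands around 0.15, i.e. the lift would not be significant at the usual 5% level; model-validation A/B tests follow the same shape, with the "conversion" replaced by whatever online metric the model is meant to move.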

Posted 1 week ago

Apply

15.0 years

0 Lacs

India

On-site

Job Location: Hyderabad / Bangalore / Pune (immediate joiners or less than 30 days' notice)

About the Role
We are looking for a seasoned AI/ML Solutions Architect with deep expertise in designing and deploying scalable AI/ML and GenAI solutions on cloud platforms. The ideal candidate will have a strong track record in BFSI, leading end-to-end projects from use case discovery to productionization while ensuring governance, compliance, and performance at scale.

Key Responsibilities
- Lead the design and deployment of enterprise-scale AI/ML and GenAI architectures.
- Drive end-to-end AI/ML project delivery: discovery, prototyping, productionization.
- Architect solutions using leading cloud-native AI services (AWS, Azure, GCP).
- Implement MLOps/LLMOps pipelines for model lifecycle management and automation.
- Guide teams in selecting and integrating GenAI/LLM frameworks (OpenAI, Cohere, Hugging Face, LangChain, etc.).
- Ensure robust AI governance, model risk management, and compliance practices.
- Collaborate with senior business stakeholders and cross-functional engineering teams.

Required Skills & Experience
- 15+ years in AI/ML, cloud architecture, and data engineering.
- At least 10 end-to-end AI/ML project implementations.
- Hands-on expertise in one or more of the following:
  - ML frameworks: scikit-learn, XGBoost, TensorFlow, PyTorch
  - GenAI/LLM tools: OpenAI, Cohere, LangChain, Hugging Face, FAISS, Pinecone
  - Cloud platforms: AWS, Azure, GCP (AI/ML services)
  - MLOps: MLflow, SageMaker Pipelines, Kubeflow, Vertex AI
- Strong understanding of data privacy, model governance, and compliance frameworks in BFSI.
- Proven leadership of cross-functional technical teams and stakeholder engagement.
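The vector stores named in this posting (FAISS, Pinecone) exist to answer one question: which stored embeddings are nearest to a query embedding. As a sketch of that retrieval step only, with hypothetical 3-dimensional vectors standing in for model-generated embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, top_k=2):
    """Return the texts of the top_k documents most similar to the query."""
    scored = sorted(index, key=lambda d: cosine_similarity(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Hypothetical tiny index; a real system would hold model-generated embeddings
# with hundreds of dimensions, and FAISS/Pinecone would do this search at scale.
index = [
    {"text": "loan approval policy",  "vec": [0.9, 0.1, 0.0]},
    {"text": "fraud detection rules", "vec": [0.1, 0.9, 0.1]},
    {"text": "credit risk scoring",   "vec": [0.8, 0.3, 0.1]},
]
print(retrieve([1.0, 0.2, 0.0], index, top_k=2))
```

The brute-force sort here is O(n log n) per query; dedicated vector databases replace it with approximate nearest-neighbour indexes so the same lookup stays fast over millions of documents, which is the piece an architect chooses a FAISS or Pinecone for.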

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Thiruvananthapuram District, Kerala

On-site

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities:
- Design and manage CI/CD pipelines for both software applications and machine learning workflows.
- Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
- Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
- Build robust monitoring, logging, and alerting systems for AI applications.
- Manage containerized services with Docker and orchestration platforms like Kubernetes.
- Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
- Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
- Ensure system reliability, scalability, and security across all environments.

Requirements:
- 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
- Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
- Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
- Experience with ML pipelines, model versioning, and ML monitoring tools.
- Scripting skills in Python, Bash, or similar for automation tasks.
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications:
- Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
- Exposure to data versioning, feature stores, and model registries.
- Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
- Background in software engineering, data engineering, or ML research is a bonus.

What We Offer:
- Work on cutting-edge AI platforms and infrastructure.
- Cross-functional collaboration with top ML, research, and product teams.
- Competitive compensation package, with no constraints for the right candidate.

To apply, send your resume to thasleema@qcentro.com.

Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: reliably commute or plan to relocate before starting work (required).
Experience: DevOps and MLOps: 5 years (required).
Work Location: In person
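Model registries come up in both the responsibilities and the preferred qualifications above. To show what a registry tracks conceptually (this is a toy in-memory sketch, not the MLflow registry API; the model name and artifact paths are hypothetical):

```python
class ModelRegistry:
    """Toy in-memory model registry: versioned artifacts with stage promotion."""

    def __init__(self):
        self._models = {}  # name -> list of {"version", "artifact", "stage"}

    def register(self, name, artifact):
        """Record a new artifact; versions auto-increment, starting in staging."""
        versions = self._models.setdefault(name, [])
        entry = {"version": len(versions) + 1, "artifact": artifact,
                 "stage": "staging"}
        versions.append(entry)
        return entry["version"]

    def promote(self, name, version, stage="production"):
        """Move one version into a stage, archiving whatever held it before."""
        for entry in self._models[name]:
            if entry["version"] == version:
                entry["stage"] = stage
            elif entry["stage"] == stage:
                entry["stage"] = "archived"  # only one version per stage

    def get(self, name, stage="production"):
        """Fetch the entry currently serving in the given stage, if any."""
        for entry in self._models[name]:
            if entry["stage"] == stage:
                return entry
        return None

registry = ModelRegistry()
registry.register("churn-model", artifact="s3://models/churn/1")  # hypothetical path
v2 = registry.register("churn-model", artifact="s3://models/churn/2")
registry.promote("churn-model", v2)
print(registry.get("churn-model")["artifact"])
```

The point of the sketch is the invariant, not the storage: deployment pipelines resolve "the production model" through the registry rather than a hard-coded path, so promoting or rolling back a version never requires a code change.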

Posted 1 week ago

Apply

5.0 years

0 Lacs

Tamil Nadu, India

On-site

Role: Sr. AI/ML Engineer
Years of experience: 5+ years (with a minimum of 4 years of relevant experience)
Work mode: WFO, Chennai (mandatory)
Type: FTE
Notice period: immediate to 15 days ONLY
Key skills: Python, TensorFlow, Generative AI, Machine Learning, AWS, Agentic AI, OpenAI, Claude, FastAPI

JD:
- Experience in Gen AI, CI/CD pipelines, and scripting languages, with a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), and continuous integration/deployment tools (e.g., Jenkins); third-party integration experience is a plus.
- Familiarity with cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes, and Kafka.
- Experience building production-grade ML pipelines.
- Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch.
- Experience with cloud build, deployment, and orchestration tools.
- Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc.
- Experience in statistical modeling, machine learning, data mining, and unstructured data analytics.
- Understanding of the ML lifecycle and MLOps, with hands-on experience productionizing ML models.
- Detail-oriented, with the ability to work both independently and collaboratively.
- Ability to work successfully with multi-functional teams, principals, and architects, across organizational boundaries and geographies.
- Equal comfort driving low-level technical implementation and high-level architecture evolution.
- Experience working with data engineering pipelines.
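"Production-grade ML pipelines" in postings like this one mostly means composable fit/transform stages with the training-time statistics captured for reuse at inference time. A minimal pure-Python sketch of that pattern (a real project would reach for scikit-learn's Pipeline or an orchestrator such as Airflow):

```python
class Standardize:
    """Scale numbers to zero mean and unit variance, learned from training data."""

    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        var = sum((x - self.mean) ** 2 for x in xs) / len(xs)
        self.std = var ** 0.5 or 1.0  # guard against zero variance
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

class Pipeline:
    """Fit each step on the data it receives, then pass the transformed data on."""

    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, xs):
        for step in self.steps:
            xs = step.fit(xs).transform(xs)
        return xs

    def transform(self, xs):
        """Inference path: reuse the statistics learned during fit_transform."""
        for step in self.steps:
            xs = step.transform(xs)
        return xs

pipe = Pipeline([Standardize()])
print(pipe.fit_transform([1.0, 2.0, 3.0]))
```

The split between `fit_transform` (training) and `transform` (serving) is what prevents training/serving skew: the mean and standard deviation learned offline are the ones applied to live traffic.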

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work on a global stage, with the backing of a company built for the future and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements of supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure, and is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. It is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading and mentoring junior team members and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.

Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply:
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice:
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
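The ETL/ELT processes at the heart of this role follow one loop: extract raw records, validate and normalize them, and load what survives into the warehouse. A minimal pure-Python sketch, with hypothetical order records standing in for a real source and a list standing in for the target table:

```python
def extract():
    """Pull raw records from a source (hypothetical in-memory rows)."""
    return [
        {"order_id": 1, "amount": "250.00", "country": "in"},
        {"order_id": 2, "amount": "99.50",  "country": "IN"},
        {"order_id": 3, "amount": "oops",   "country": "us"},  # malformed row
    ]

def transform(rows):
    """Validate and normalize: drop unparseable amounts, upper-case countries."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # data-quality gate: skip malformed rows
        clean.append({**row, "amount": amount, "country": row["country"].upper()})
    return clean

def load(rows, warehouse):
    """Append clean rows to the target store; return how many were loaded."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["country"])  # 2 rows loaded, countries normalized
```

In production each function becomes an independently retryable task (an Airflow operator or Spark job), and the silently dropped rows would instead be routed to a quarantine table with an alert, which is exactly the data-quality and monitoring work the posting describes.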

Posted 1 week ago

Apply