Jobs
Interviews

1767 MLflow Jobs - Page 22

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 - 6.0 years

8 - 8 Lacs

Chennai

On-site

Responsibilities: Build and manage CI/CD pipelines for ML models. Deploy models to cloud/on-premise environments. Monitor model performance and automate retraining workflows. Implement model versioning and reproducibility. Collaborate with data scientists and engineers.

Requirements: Looking for an ML DevOps Engineer to streamline the deployment and monitoring of ML models. The role requires strong DevOps skills and knowledge of ML lifecycle management. Experience with Docker, Kubernetes, Jenkins, or similar tools. Familiarity with ML platforms like MLflow, Kubeflow, or SageMaker. Strong scripting skills in Python and Shell. Knowledge of cloud platforms (AWS, Azure, GCP). Understanding of MLOps best practices and the ML lifecycle.

Date Opened: 07/09/2025 | Job Type: Full time | Years of Experience: 3 - 6 Years | Domain: Chemicals | City: Chennai | State/Province: Tamil Nadu | Country: India | Zip/Postal Code: 600001
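For context on the model-versioning and retraining workflow this listing describes, here is a minimal, illustrative MLflow sketch. It assumes a registry-capable tracking backend (for example, an MLflow server backed by SQLite; a plain file store may not support the registry step), and the model name, toy dataset, and hyperparameters are placeholders rather than anything from the posting.

```python
# Illustrative sketch only: log one training run, then register the resulting
# model so each retraining produces a new, auditable model version.
# Assumes mlflow and scikit-learn are installed and a registry-capable backend.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging the model artifact makes the run reproducible.
    mlflow.sklearn.log_model(model, "model")

# Registering under the same name ("demo-classifier" is made up) creates a new
# model version each time, which is what an automated retraining job relies on.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "demo-classifier")
```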

Posted 2 weeks ago

Apply

2.0 - 3.0 years

5 - 7 Lacs

Vadodara

On-site

Responsibilities: Design and implement scalable machine learning models from data preprocessing to deployment. Lead feature engineering and model optimization to improve performance and accuracy. Build and manage end-to-end ML pipelines using MLOps practices. Deploy, monitor, and maintain models in production environments. Collaborate with data scientists and product teams to understand business requirements and translate them into ML solutions. Conduct advanced data analysis and build visualization dashboards to support insights. Maintain thorough documentation of models, experiments, and workflows. Mentor junior team members on best practices and technical skills.

Skills

Must-have: 2–3 years of experience in machine learning development, focusing on the end-to-end model lifecycle. Proficiency in Python with Pandas, NumPy, and Scikit-learn for advanced data handling and feature engineering. Strong hands-on expertise in TensorFlow or PyTorch for deep learning model development.

Good-to-have: Experience with MLOps tools such as MLflow or Kubeflow for model management and deployment. Familiarity with big data frameworks like Spark or Dask. Exposure to cloud ML services such as AWS SageMaker or GCP AI Platform.

Will be a plus: Working knowledge of Weights & Biases and DVC for experiment tracking and versioning. Experience using Ray or BentoML for distributed training and model serving.
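As a rough sketch of the end-to-end pipeline work this listing describes, the snippet below chains preprocessing and a model into a single scikit-learn Pipeline and evaluates it with cross-validation. The synthetic dataset is an assumption, and the estimator is a generic scikit-learn model chosen for brevity rather than the TensorFlow/PyTorch stack the posting emphasizes.

```python
# Minimal end-to-end pipeline sketch: preprocessing + model in one object,
# evaluated with cross-validation. Dataset and estimator are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

pipeline = Pipeline([
    ("scaler", StandardScaler()),  # feature scaling step
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping preprocessing inside the Pipeline is what makes the model reproducible at deployment time: the same transformations are applied at training and inference.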

Posted 2 weeks ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company: A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they’re seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview: As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities: Design and develop robust AI product features aligned with user and business needs. Maintain and enhance existing ML/AI systems. Build and manage ML pipelines for training, deployment, monitoring, and experimentation. Deploy scalable inference APIs and conduct A/B testing. Optimize GPU architectures and fine-tune transformer/LLM models. Build and deploy LLM applications tailored to real-world use cases. Implement DevOps/MLOps best practices with tools like Docker and Kubernetes.

Tech Stack & Tools
Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers, PyTorch, TensorFlow, Scikit-learn
LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse, MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
Cloud & Infrastructure: AWS, Azure, Kubernetes, Docker
Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
Languages: Python, SQL, JavaScript

What You’ll Do: Collaborate with product, research, and engineering teams to build scalable AI solutions. Implement advanced NLP and Generative AI models (e.g., RAG, Transformers). Monitor and optimize model performance and deployment pipelines. Build efficient, scalable data and feature pipelines. Stay updated on industry trends and contribute to internal innovation. Present key insights and ML solutions to technical and business stakeholders.

Requirements (Must-Have): 3–6 years of experience in Machine Learning and software/data engineering. Master’s degree (or equivalent) in ML, AI, or related technical fields. Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn. Familiarity with MLOps, model deployment, and production pipelines. Experience working with LLMs and modern NLP techniques. Ability to work collaboratively in a fast-paced, product-driven environment. Strong problem-solving and communication skills.

Bonus Certifications: AWS Machine Learning Specialty, AWS Solutions Architect – Professional, Azure Solutions Architect Expert.

Why Apply: Work directly with a high-caliber founding team. Help shape the future of AI in the insurance space. Gain ownership and visibility in a product-focused engineering role. Opportunity to innovate with state-of-the-art AI/LLM tech. Be part of a fast-moving team with real market traction.

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.
Skills: software/data engineering, machine learning, AI, SQL, computer vision, TensorFlow, MLOps, NLP, Kubernetes, MongoDB, LLMs and modern NLP techniques, Python, PostgreSQL, Docker, Azure, Scikit-learn, JavaScript, AWS, LLM technologies, PyTorch

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview As a key member of the team, you will be responsible for building and maintaining the infrastructure, tools, and workflows that enable the efficient, reliable, and secure deployment of LLMs in production environments. You will collaborate closely with data scientists, Data Engineers and product teams to ensure seamless integration of AI capabilities into our core systems. Responsibilities Design and implement scalable model deployment pipelines for LLMs, ensuring high availability and low latency. Build and maintain CI/CD workflows for model training, evaluation, and release. Monitor and optimize model performance, drift, and resource utilization in production. Manage cloud infrastructure (e.g., AWS, GCP, Azure) and container orchestration (e.g., Kubernetes, Docker) for AI workloads. Implement observability tools to track system health, token usage, and user feedback loops. Ensure security, compliance, and governance of AI systems, including access control and audit logging. Collaborate with cross-functional teams to align infrastructure with product goals and user needs. Stay current with the latest in MLOps and GenAI tooling and drive continuous improvement in deployment practices. Define and evolve the architecture for GenAI systems, ensuring alignment with business goals and scalability requirements Qualifications Bachelor’s or master’s degree in computer science, Software Engineering, Data Science, or a related technical field. 5 to 7 years of experience in software engineering, DevOps, and 3+ years in machine learning infrastructure roles. Hands-on experience deploying and maintaining machine learning models in production, ideally including LLMs or other deep learning models. Proven experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Strong programming skills in Python, with experience in ML libraries (e.g., TensorFlow, PyTorch, Hugging Face). Proficiency in CI/CD pipelines for ML workflows Experience with MLOps tools: MLflow, Kubeflow, DVC, Airflow, Weights & Biases. Knowledge of monitoring and observability tools
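The "scalable, low-latency deployment pipelines" this listing describes usually terminate in a thin web service in front of the model. Below is a hedged sketch using FastAPI with a stubbed predict function standing in for the actual LLM call; the endpoint name, schema, and latency log are assumptions for illustration, not details from the posting.

```python
# Hedged model-serving sketch with basic latency observability.
# predict() is a placeholder for a real model/LLM call.
# Run with: uvicorn serve:app --reload   (assuming this file is saved as serve.py)
import logging
import time

from fastapi import FastAPI
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    latency_ms: float

def predict(text: str) -> str:
    # Placeholder for the actual model inference call.
    return "positive" if "good" in text.lower() else "negative"

@app.post("/predict", response_model=PredictResponse)
def predict_endpoint(req: PredictRequest) -> PredictResponse:
    start = time.perf_counter()
    label = predict(req.text)
    latency_ms = (time.perf_counter() - start) * 1000
    # Emitting per-request latency is a minimal form of observability;
    # production systems would also track drift, token usage, and errors.
    logger.info("predict latency_ms=%.2f", latency_ms)
    return PredictResponse(label=label, latency_ms=latency_ms)
```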

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Designation: AI ML Engineer – Generative AI Solutions
Mode: Hybrid
Experience: 4+ years
Location: Mumbai
Salary: Up to 30 LPA

We are hiring for one of our top-tier client organizations, which is seeking a Senior AI ML Engineer with deep expertise in Generative AI, LLMs, OCR, RAG, and Azure-based deployments. If you're excited about working on cutting-edge AI systems that integrate LLMs (GPT, BERT, T5, LLaMA, Mistral) with real-world use cases across text, vision, and document AI, this role is for you.

What You’ll Be Working On: Building RAG-based solutions, optimizing with prompt engineering, and deploying multi-modal AI (text + vision). Working on OCR-based AI for document understanding and extraction. Designing and testing AI models using frameworks like LangChain, LlamaIndex, Hugging Face. Deploying scalable solutions on Azure (OpenAI, ML, Cognitive Services) with CI/CD, Docker, and Kubernetes. Collaborating across DevOps, Product, and Business to create intelligent systems.

Key Technologies:
LLMs: GPT, BERT, LLaMA, T5, Mistral
Frameworks: LangChain, LlamaIndex, Hugging Face, OpenAI, PyTorch, TensorFlow
Cloud & DevOps: Azure AI Services, Azure ML, Azure DevOps, Docker, Kubernetes
Vector DBs: FAISS, Pinecone, ChromaDB
Testing & MLOps: MLflow, Weights & Biases, custom QA sampling, BLEU, ROUGE

Ideal Experience: 5+ years in AI/ML development with a minimum of 2 years in Generative AI. Proven track record in deploying RAG and LLM-based solutions. Deep understanding of Transformer models and OCR. Hands-on with the Azure AI stack; bonus if you've dabbled in AWS/GCP.
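The RAG work this role centers on reduces to embedding documents, indexing them, and retrieving the nearest chunks to ground an LLM prompt. Below is a minimal retrieval-only sketch; to stay self-contained it uses sentence-transformers and FAISS rather than the LangChain/Azure stack named in the listing, the embedding model and sample documents are illustrative assumptions, and the generation step is omitted.

```python
# Retrieval-only RAG sketch: embed documents, index them, fetch the top-k
# matches for a query. Model name and documents are illustrative only.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoice total is due within 30 days of the billing date.",
    "The policy covers water damage but excludes flood events.",
    "Claims must be filed through the online customer portal.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used encoder
doc_vectors = model.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine here
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "How do I submit a claim?"
query_vec = model.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

for score, doc_id in zip(scores[0], ids[0]):
    # In a full RAG system these retrieved chunks would be inserted into the
    # LLM prompt as grounding context before generation.
    print(f"{score:.3f}  {documents[doc_id]}")
```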

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali district, India

On-site

🔍 Job Title: Python & AI Engineer – CRM Intelligence Systems 🕐 Urgently Hiring | Immediate Joiner Preferred 📍 Location: Mohali, Punjab 🕒 Type: Full-time On-site 💼 Experience: Minimum 3+ years in the same tech stack 📌 About the Role: We are looking for a skilled and experienced Python & AI Engineer to join our growing CRM product team. The ideal candidate will have 3+ years of hands-on experience in developing AI-powered features using Python and relevant ML/NLP tools. You’ll build intelligent modules like recommendation engines, lead scoring, document extraction, chatbot assistants, and predictive insights directly into our CRM. ⚡ Immediate joiners will be given priority. 🎯 Key Responsibilities: Design, build, and deploy AI/ML models and NLP systems for real-world CRM challenges. Develop Python-based microservices for smart automation and CRM intelligence. Implement AI modules for chatbot integration, document analysis, lead prediction, etc. Integrate LLMs and RAG systems for contextual search and automation workflows. Handle structured and unstructured data from MongoDB/PostgreSQL for model training. Optimize models for accuracy, performance, and scalability. Collaborate with product and engineering teams for seamless feature delivery. 🧠 Must-Have Skills: 3+ years of experience in Python, Machine Learning, and AI development. Proficiency with ML/NLP libraries like scikit-learn, spaCy, Transformers, LangChain. Working knowledge of LLMs , OpenAI APIs , RAG , chatbot architecture . Experience building REST APIs using FastAPI or Flask . Strong in database management using MongoDB and PostgreSQL . Version control (Git), Docker containers, API integration. 💡 Nice-to-Have: Exposure to vector databases (e.g., Pinecone, Weaviate, FAISS). Experience with MLOps tools like MLflow, Airflow, or Seldon. Worked on SaaS-based CRM or multi-tenant applications. Background in EdTech, ImmigrationTech, or SalesTech domains. 🎓 Education: Bachelor’s/Master’s in Computer Science, AI/ML, Data Science, or a related field.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview: We are seeking a Data Scientist with a strong foundation in machine learning and a passion for the travel industry. You will work with cross-functional teams to analyze customer behavior, forecast travel demand, optimize pricing models, and deploy AI-driven solutions to improve user experience and drive business growth.

Key Responsibilities: Engage in all stages of the project lifecycle, including data collection, labeling, and preprocessing, to ensure high-quality datasets for model training. Utilize advanced machine learning frameworks and pipelines for efficient model development, training execution, and deployment. Implement MLflow for tracking experiments, managing datasets, and facilitating model versioning to streamline collaboration. Oversee model deployment on cloud platforms, ensuring scalable and robust performance in real-world travel applications. Analyze large volumes of structured and unstructured travel data to identify trends, patterns, and actionable insights. Develop, test, and deploy predictive models and machine learning algorithms for fare prediction, demand forecasting, and customer segmentation. Create dashboards and reports to communicate insights effectively to stakeholders across the business. Collaborate with Engineering, Product, Marketing, and Finance teams to support strategic data initiatives. Build and maintain data pipelines for data ingestion, transformation, and modeling. Conduct statistical analysis, A/B testing, and hypothesis testing to guide product decisions. Automate processes and contribute to scalable, production-ready data science tools.

Technical Skills
Machine Learning Frameworks: PyTorch, TensorFlow, JAX, Keras, Keras-Core, Scikit-learn, Distributed Model Training
Programming & Development: Python, PySpark, Julia, MATLAB, Git, GitLab, Docker, MLOps, CI/CD Pipelines
Cloud & Deployment: AWS SageMaker, MLflow, Production Scaling
Data Science & Analytics: Statistical Analysis, Predictive Modeling, Feature Engineering, Data Preprocessing, Pandas, NumPy, PySpark
Computer Vision: CNN, RNN, OpenCV, Kornia, Object Detection, Image Processing, Video Analytics
Visualization Tools: Looker, Tableau, Power BI, Matplotlib, Seaborn
Databases & Querying: SQL, Snowflake, Databricks
Big Data & MLOps: Spark, Hadoop, Kubernetes, Model Monitoring

Nice To Have: Experience with deep learning, LLMs, NLP (Transformers), or recommendation systems in travel use cases. Knowledge of GDS APIs (Amadeus, Sabre), flight search optimization, and pricing models. Strong system design (HLD/LLD) and architecture experience for production-scale ML workflows.

Skills: data preprocessing, Docker, feature engineering, SQL, Python, predictive modeling, statistical analysis, Keras, data science, Spark, data scientist, AWS SageMaker, machine learning, MLflow, TensorFlow, PyTorch
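For the A/B testing and hypothesis-testing duties listed above, the core computation is usually a simple significance test on two samples. The sketch below runs Welch's two-sample t-test on synthetic conversion-style data; the metric, effect size, and 5% threshold are illustrative assumptions, not anything specified by the posting.

```python
# Illustrative two-sample test for an A/B experiment on a continuous metric
# (e.g., booking value per session). All data here is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=2000)    # variant A
treatment = rng.normal(loc=102.0, scale=15.0, size=2000)  # variant B

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()

print(f"lift={lift:.2f}, t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```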

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

🧭 Job Summary: We are seeking a results-driven Data Project Manager (PM) to lead data initiatives leveraging Databricks and Confluent Kafka in a regulated banking environment. The ideal candidate will have a strong background in data platforms, project governance, and financial services, and will be responsible for ensuring successful end-to-end delivery of complex data transformation initiatives aligned with business and regulatory requirements. Key Responsibilities: 🔹 Project Planning & Execution - Lead planning, execution, and delivery of enterprise data projects using Databricks and Confluent. - Develop detailed project plans, delivery roadmaps, and work breakdown structures. - Ensure resource allocation, budgeting, and adherence to timelines and quality standards. 🔹 Stakeholder & Team Management - Collaborate with data engineers, architects, business analysts, and platform teams to align on project goals. - Act as the primary liaison between business units, technology teams, and vendors. - Facilitate regular updates, steering committee meetings, and issue/risk escalations. 🔹 Technical Oversight - Oversee solution delivery on Databricks (for data processing, ML pipelines, analytics). - Manage real-time data streaming pipelines via Confluent Kafka. - Ensure alignment with data governance, security, and regulatory frameworks (e.g., GDPR, CBUAE, BCBS 239). 🔹 Risk & Compliance - Ensure all regulatory reporting data flows are compliant with local and international financial standards. - Manage controls and audit requirements in collaboration with Compliance and Risk teams. 💼 Required Skills & Experience: ✅ Must-Have: - 7+ years of experience in Project Management within the banking or financial services sector. - Proven experience leading data platform projects (especially Databricks and Confluent Kafka). - Strong understanding of data architecture, data pipelines, and streaming technologies. - Experience managing cross-functional teams (onshore/offshore). - Strong command of Agile/Scrum and Waterfall methodologies. ✅ Technical Exposure: - Databricks (Delta Lake, MLflow, Spark) - Confluent Kafka (Kafka Connect, kSQL, Schema Registry) - Azure or AWS Cloud Platforms (preferably Azure) - Integration tools (Informatica, Data Factory), CI/CD pipelines - Oracle ERP Implementation experience ✅ Preferred: - PMP / Prince2 / Scrum Master certification - Familiarity with regulatory frameworks: BCBS 239, GDPR, CBUAE regulations - Strong understanding of data governance principles (e.g., DAMA-DMBOK) 🎓 Education: Bachelor’s or Master’s in Computer Science, Information Systems, Engineering, or related field. 📈 KPIs: - On-time, on-budget delivery of data initiatives - Uptime and SLAs of data pipelines - User satisfaction and stakeholder feedback - Compliance with regulatory milestones

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Generative AI Experience: 5+ Years Contract Duration: 6 Months+ Job Purpose We are looking for a highly experienced Generative AI Engineer to join our AI/ML team. The ideal candidate will have a strong background in designing and deploying advanced Retrieval-Augmented Generation (RAG) and Graph-RAG systems in production. This is a critical role focused on building scalable and intelligent AI solutions using state-of-the-art LLMs, agent frameworks, and MLOps tools. Key Responsibilities: Design, develop, and deploy RAG and Graph-RAG systems at scale. Integrate and optimize vector and graph databases for efficient information retrieval. Work with Large Language Models (LLMs), embedding models, and retrieval pipelines. Leverage MLOps tools to automate and manage AI/ML workflows. Experiment with and implement Agentic AI systems using frameworks like LangChain Agents, AutoGPT, or CrewAI. Collaborate with cross-functional teams to deliver robust, production-ready solutions. Clearly articulate technical decisions, system architecture, and project outcomes to both technical and non-technical stakeholders. Required Skills & Experience: Proven 5+ yrs experience delivering RAG and Graph-RAG solutions in production. Strong proficiency in Python, vector DBs, and graph DB query languages (Cypher, Gremlin, SPARQL). Hands-on experience with LLMs, embedding models, and retrieval frameworks. Familiarity with MLOps tools (MLflow, Airflow, Docker, Kubernetes). Deep understanding of AI/ML/Data Science principles and practices. Conceptual knowledge of Agentic AI and autonomous agents (LangChain Agents, AutoGPT, CrewAI). Ability to clearly articulate past project experience, technical decisions, and outcomes.
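Graph-RAG retrieval, as described in this listing, typically means pulling a small neighborhood of related facts from a graph database and feeding it to the LLM as context. The sketch below issues a parameterized Cypher query through the official Neo4j Python driver; the connection details, node labels, and relationship types are invented for illustration and would differ in any real deployment.

```python
# Hedged Graph-RAG retrieval sketch: fetch documents linked to an entity in
# Neo4j so they can be injected into an LLM prompt. Schema is made up.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")    # placeholder credentials

CYPHER = """
MATCH (d:Document)-[:MENTIONS]->(e:Entity {name: $entity})
RETURN d.title AS title, d.summary AS summary
LIMIT $k
"""

def fetch_context(entity: str, k: int = 5):
    """Return (title, summary) pairs for documents mentioning the entity."""
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            result = session.run(CYPHER, entity=entity, k=k)
            return [(record["title"], record["summary"]) for record in result]
    finally:
        driver.close()

if __name__ == "__main__":
    for title, summary in fetch_context("interest rate"):
        # In a full Graph-RAG pipeline these snippets become prompt context.
        print(f"{title}: {summary}")
```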

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)
Location: Remote
Location Preference: We are specifically looking to hire talented individuals from Tier 2 and Tier 3 cities for this opportunity.
Job Type: Full-time

About the role: We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master’s degree in a quantitative field, 4–6 years of hands-on experience, and demonstrated expertise in building models from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.

Responsibilities: Analyze the data, identify patterns, and perform detailed exploratory data analysis (EDA). Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks. Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets. Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval. Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems. Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders. Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.

Required Qualifications: Master’s degree in Computer Science, Statistics, Applied Mathematics, or a related field. 4–6 years of industry experience in data science or machine learning roles. Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
Hands-on knowledge of:
> Programming languages: Python (primary), SQL, R (optional)
> Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers
> Distributed computing: PySpark, Dask
> Big Data and Cloud Platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML
> Data Engineering Tools: Apache Spark, Delta Lake, Airflow
> ML Workflow & Visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib
> Version control and collaboration: Git, GitHub, Jupyter, VS Code

Preferred Qualifications: Master’s or Ph.D. in a quantitative or technical field. Experience with deploying machine learning pipelines in production using CI/CD tools. Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads. Understanding of MLOps and model lifecycle management best practices. Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.

What We Offer: Competitive salary and performance bonuses. Flexible working hours and remote options. Opportunities for continued learning and research. Collaborative, high-impact team environment. Access to cutting-edge technology and compute resources.

To apply, send your resume to jobs@megovation.io to be part of a team pushing the boundaries of data-driven innovation.
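Since the role explicitly spans model families from logistic regression through XGBoost, here is a hedged baseline-comparison sketch on synthetic data; the features, hyperparameters, and AUC metric are illustrative choices, not requirements from the listing.

```python
# Compare a linear baseline against gradient boosting on the same split.
# Synthetic data; hyperparameters are illustrative defaults.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "xgboost": XGBClassifier(
        n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss"
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

Starting from a simple, interpretable baseline before moving to boosted trees or neural networks is the usual way to justify added model complexity.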

Posted 2 weeks ago

Apply

12.0 years

5 - 10 Lacs

Madurai

On-site

Job Location: Madurai | Job Experience: 12-20 Years | Model of Work: Work From Office | Technologies: Artificial Intelligence, Machine Learning | Functional Area: Software Development

Job Title: Technical Manager – AI/ML
Location: Madurai - Work from Office
Experience: 12+ Years
Employment Type: Full-time

About TechMango IT Services: TechMango is a digital transformation and IT solutions company delivering cutting-edge services in AI/ML, Data Analytics, Cloud, and Full Stack Development. We help businesses innovate, scale, and lead with robust, scalable, and future-proof technology solutions.

Job Summary: We are seeking a seasoned Technical Manager – AI/ML to lead our AI/ML solution design and delivery initiatives. The ideal candidate will have a strong background in machine learning and data science, combined with leadership in technical project execution and stakeholder engagement.

Key Responsibilities:
Technical Leadership & Architecture: Define AI/ML solution architecture across multiple client projects. Guide teams on model design, development, testing, deployment, and maintenance. Review algorithms for performance, scalability, and business fit. Evaluate new tools, technologies, and best practices in AI/ML.
Project & Delivery Management: Manage end-to-end delivery of AI/ML projects with quality, timeliness, and customer satisfaction. Collaborate with cross-functional teams including data engineering, DevOps, UI/UX, and business analysts. Ensure alignment with client goals and business KPIs. Conduct risk management, resource planning, and cost estimation.
Client & Stakeholder Engagement: Act as a technical point of contact for clients on AI/ML initiatives. Translate business requirements into technical solutions. Present architecture, PoCs, and demos to clients and internal stakeholders.
Team Management & Mentorship: Lead and mentor data scientists, ML engineers, and software developers. Build high-performing AI/ML teams and nurture talent. Conduct technical training, code reviews, and knowledge-sharing sessions.

Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 12+ years of IT experience with at least 5+ years in AI/ML leadership roles. Strong experience in ML frameworks: TensorFlow, PyTorch, Scikit-learn. Solid understanding of deep learning, NLP, computer vision, time-series forecasting, and generative AI. Proficient in Python, with experience in MLOps tools (e.g., MLflow, Kubeflow, SageMaker). Experience with cloud platforms (AWS, Azure, GCP). Ability to architect and scale AI/ML models in production environments. Excellent communication and leadership skills.

Preferred Qualifications: Certifications in AI/ML or cloud (AWS Certified ML Specialty, etc.). Experience in delivering AI projects in domains like Healthcare, Retail, BFSI, or Manufacturing. Exposure to LLMs, RAG systems, and Prompt Engineering.

About our Talent Acquisition Team: Arumugam Veera leads the Talent Acquisition function for both TechMango and Bautomate - SaaS Platform, driving our mission to build high-performing teams and connect top talent with exciting career opportunities. Feel free to connect with him on LinkedIn: https://www.linkedin.com/in/arumugamv/ Follow our official TechMango LinkedIn page for the latest job updates and career opportunities: https://www.linkedin.com/company/techmango-technology-services-private-limited/ Looking forward to connecting and helping you explore your next great opportunity with us!

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who We Are The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact. What We Offer Interact with C-level at our clients on regular basis to drive their business towards impactful change Lead your team in creating new business solutions Seize opportunities at the client and at Metyis in our entrepreneurial environment Become part of a fast growing international and diverse team What You Will Do Lead and manage the delivery of complex data science projects, ensuring quality and timelines. Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions. Design solution architectures and guide the technical approach across projects. Align technical deliverables with business goals, ensuring data products create measurable business value. Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences. Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle. Collaborate with cross functional teams to ensure smooth integration and deployment of solutions. Drive experimentation and innovation in AI/ML techniques, including newer fields - Generative AI. What You’ll Bring 6+ years of experience in delivering full-lifecycle data science projects. Proven ability to lead cross-functional teams and manage client interactions independently. Strong business understanding with the ability to connect data science outputs to strategic business outcomes. Experience with stakeholder management, translating business questions into data science solutions. Track record of mentoring junior team members and creating a collaborative learning environment. Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability. Experience managing project roadmaps, resourcing, and client communication. Tools & Technologies: Strong hands-on experience in Python/R and SQL. Good understanding and Experience with cloud platforms such as Azure, AWS, or GCP. Experience with data visualization tools in python like – Seaborn, Plotly. Good understanding of Git concepts. Good experience with data manipulation tools in python like Pandas and Numpy. Must have worked with scikit learn, NLTK, Spacy, transformers. Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations. Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications. Core Competencies: Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis. 
Good understanding of deep learning concepts, especially in NLP and Computer Vision applications. Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM. Exposure to tools like MLflow, and to model deployment, API integration, and CI/CD pipelines. Hands-on experience with MLOps and model governance best practices in production environments. Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.

Good to have: Generative AI experience with text and image data. Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face. Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).

In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Greetings from UST Who we are: Born digital, UST transforms lives through the power of technology. We walk alongside our clients and partners, embedding innovation and agility into everything they do. We help them create transformative experiences and human-centered solutions for a better world. UST is a mission-driven group of over 38,000+ practical problem solvers and creative thinkers in over 30+ countries. Our entrepreneurial teams are empowered to innovate, act nimbly, and create a lasting and sustainable impact for our clients, their customers, and the communities in which we live. With us, you’ll create a boundless impact that transforms your career—and the lives of people across the world. Job Summary: Notice Period: Immediate to 15 days Mandatory skills: Gen AI, Data Science, Machine Learning, ML Modal Building Experience: 8-12 years Work Location: Any UST location Work Mode: Hybrid (3 Days - Work from Office) Role Overview: We are seeking two highly skilled AI Engineers with strong proficiency in Python and a solid understanding of AI/ML concepts. You will be responsible for developing, training, and optimizing machine learning models, collaborating with cross-functional teams, and contributing to scalable AI solutions in production. This is a hands-on role requiring both theoretical knowledge and practical experience in building AI systems. Roles and Responsibilities: Design, develop, and optimize machine learning and AI models using Python and modern ML frameworks. Collaborate with data scientists, software engineers, and product managers to define AI solution requirements. Perform data preprocessing, cleaning, and feature engineering to prepare datasets for modeling. Conduct model training, validation, and performance evaluation using appropriate metrics. Tune hyperparameters and experiment with model architectures to enhance accuracy and efficiency. Document model workflows, methodologies, and code to ensure transparency and reproducibility. Assist in deploying AI models to production, monitoring performance, and refining them over time. Stay informed on the latest research papers, tools, and best practices in AI and machine learning. Participate in code reviews and maintain high standards for clean and maintainable code. Must-Have Skills: Strong proficiency in Python, with a focus on AI/ML development. Solid understanding of machine learning algorithms (e.g., regression, classification, clustering, deep learning). Experience with ML frameworks like TensorFlow, PyTorch, or Scikit-learn. Proficiency in data preprocessing and feature engineering techniques. Familiarity with model evaluation metrics (accuracy, precision, recall, AUC, etc.). Experience working with Jupyter Notebooks and collaborative development tools (Git, etc.). Strong problem-solving skills and ability to work in a fast-paced environment. Good-to-Have Skills: Experience with model deployment tools and platforms (e.g., Docker, Flask, FastAPI, AWS SageMaker). Knowledge of MLOps practices and model lifecycle management. Familiarity with NLP, computer vision, or time series modeling. Understanding of distributed computing frameworks (e.g., Spark, Dask). Experience in using experiment tracking tools like MLflow or Weights & Biases. Exposure to cloud services (AWS, Azure, or GCP) for AI model development and deployment. What We Believe We’re proud to embrace the same values that have shaped UST since the beginning. Since day one, we’ve been building enduring relationships and a culture of integrity. 
And today, it's those same values that are inspiring us to encourage innovation from everyone, to champion diversity and inclusion and to place people at the center of everything we do. Humility: We will listen, learn, be empathetic and help selflessly in our interactions with everyone. Humanity: Through business, we will better the lives of those less fortunate than ourselves. Integrity: We honor our commitments and act with responsibility in all our relationships. Equal Employment Opportunity Statement UST is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. UST reserves the right to periodically redefine your roles and responsibilities based on the requirements of the organization and/or your performance.
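One of the hands-on responsibilities in the listing above is tuning hyperparameters and experimenting with model architectures. Below is a minimal, generic grid-search sketch on synthetic data; the estimator, parameter grid, and scoring metric are illustrative assumptions rather than anything specified by UST.

```python
# Minimal hyperparameter tuning sketch with scikit-learn's GridSearchCV.
# Dataset and parameter grid are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation per candidate
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)

print("best params:", search.best_params_)
print(f"best CV AUC: {search.best_score_:.3f}")
```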

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role.

We are seeking individuals with advanced expertise in Machine Learning (ML) to join our dynamic team. As an Applied AI ML Lead within our Corporate Sector, you will play a pivotal role in developing machine learning and deep learning solutions, and experimenting with state-of-the-art models. You will contribute to our innovative projects and drive the future of machine learning at AI Technologies. You will use your knowledge of ML tools and algorithms to deliver the right solution. You will be a part of an innovative team, working closely with our product owners, data engineers, and software engineers to build new AI/ML solutions and productionize them. You will also mentor other AI engineers and scientists while fostering a culture of continuous learning and technical excellence. We are looking for someone with a passion for data, ML, and programming, who can build ML solutions at scale with a hands-on approach and detailed technical acumen.

Job responsibilities: Serve as a subject matter expert on a wide range of machine learning techniques and optimizations. Provide in-depth knowledge of machine learning algorithms, frameworks, and techniques. Enhance machine learning workflows through advanced proficiency in large language models (LLMs) and related techniques. Conduct experiments using the latest machine learning technologies, analyze results, and tune models. Engage in hands-on coding to transition experimental results into production solutions by collaborating with the engineering team, owning end-to-end code development in Python for both proof of concept/experimentation and production-ready solutions. Optimize system accuracy and performance by identifying and resolving inefficiencies and bottlenecks, collaborating with product and engineering teams to deliver tailored, science and technology-driven solutions. Integrate Generative AI within the machine learning platform using state-of-the-art techniques, driving decisions that influence product design, application functionality, and technical operations and processes.

Required Qualifications, Capabilities, And Skills: Formal training or certification on AI/ML concepts and 5+ years of applied experience. Hands-on experience in programming languages, particularly Python. Ability to apply data science and machine learning techniques to address business challenges. Strong background in Natural Language Processing (NLP) and Large Language Models (LLMs). Expertise in deep learning frameworks such as PyTorch or TensorFlow, and advanced applied ML areas like GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (Similarity Search). Ability to complete tasks and projects independently with minimal supervision, with a passion for detail and follow-through. Excellent communication skills, team player, and demonstrated leadership in collaborating effectively with engineers, product managers, and other ML practitioners.

Preferred Qualifications, Capabilities, And Skills: Exposure to Ray, MLflow, and/or other distributed training frameworks. MS and/or PhD in Computer Science, Machine Learning, or a related field. Understanding of Search/Ranking, Recommender systems, Graph techniques, and other advanced methodologies. Familiarity with Reinforcement Learning or Meta Learning. Understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and other related methods.
Exposure building and deploying ML models on cloud platforms such as AWS and AWS tools like Sagemaker, EKS, etc. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans About The Team Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description: Senior Architect

As a Senior Manager - GenAI Architect, the person will be responsible for leading and designing advanced AI architectures that drive strategic business outcomes. This role requires a deep understanding of data management, application development, AI technologies, cloud solutions, and user interface design. The person will work closely with cross-functional teams to deliver robust and scalable solutions that meet our business objectives.

Key Responsibilities:
Architectural Design: Develop and oversee data architectures and application frameworks that support AI initiatives. Ensure integration of data and applications with existing systems to create cohesive and efficient solutions.
AI Solutions: Design and implement AI-driven models and solutions, leveraging machine learning, natural language processing, and other AI technologies to solve complex business problems.
Cloud Integration: Architect cloud-based solutions that support scalability, security, and performance for AI applications. Collaborate with cloud providers and internal teams to ensure optimal cloud infrastructure.
User Interface Design: Work with UI/UX teams to ensure that AI solutions have user-friendly interfaces and deliver an exceptional user experience.
Leadership & Collaboration: Lead a team of architects and engineers, providing guidance and support in the development of AI solutions. Collaborate with stakeholders across departments to align technical strategies with business goals.
Strategic Planning: Develop and implement architectural strategies and roadmaps that align with the company’s vision and technology advancements.
Go-To-Market Strategy (GTM): Collaborate with onsite teams and senior architects in the team to define and execute go-to-market strategies for AI solutions. Provide architectural insights that align with clients’ needs and support successful solution development.
Innovation & Best Practices: Stay abreast of industry trends and emerging technologies to drive innovation and implement best practices in AI architecture and implementation.

Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, Mathematics, or a related field.
Experience: Minimum of 3 years of experience in a senior architectural role with a focus on AI, data management, application development, and cloud technologies.
Technical Skills: Hands-on experience in deploying AI/ML solutions on different cloud platforms like Azure, AWS, and/or Google Cloud. Experience in using and orchestrating LLM models on cloud platforms (e.g., Azure OpenAI, AWS Bedrock, GCP Vertex AI or Gemini). Experience in writing SQL and in data modelling. Experience in designing and implementing AI solutions using a microservice-based architecture. Understanding of machine learning, deep learning, NLP and GenAI. Strong programming skills in Python and/or PySpark. Proven experience in integrating authentication security measures within machine learning operations and applications. Excellent problem-solving skills and ability to connect AI capabilities to business value. Strong communication and presentation skills.
Proven experience in AI/ML solution deployment process on Kubernetes, Web Apps, Databricks or on similar platforms. Familiarity with MLOps concepts and tech stack. Good to know code versioning, MLFlow, batch prediction and real-time end point workflows. Familiarity with Azure DevOps / GitHub actions/ Jenkins / Terraform / AWS CFT etc. Leadership: Demonstrated ability to lead teams, manage complex projects, and work effectively with stakeholders at all levels. Problem-Solving: Strong analytical and problem-solving skills with the ability to design innovative solutions to complex challenges. Communication: Excellent verbal and written communication skills with the ability to convey technical concepts to non-technical stakeholders EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Role We are seeking a highly motivated AI Research Engineer with a strong background in Natural Language Processing (NLP) and Generative AI to join our growing team. The ideal candidate will be passionate about advancing open-source LLMs, embedding techniques, and vector databases, and will have hands-on experience building classification models. Experience in the customer support industry and knowledge of MLOps/LLMOps practices will be a strong plus. Requirements Required Skills and Qualifications Strong proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow) 4+ years of experience Solid understanding of NLP algorithms, transformer architectures, and language model training/fine-tuning Experience with open-source LLMs (e.g., LLaMA, Mistral, Falcon, etc.) Expertise in embedding techniques (e.g., OpenAI, HuggingFace, SentenceTransformers) and their applications in semantic search and RAG Experience with vector databases such as Weaviate, Pinecone, FAISS, or Milvus Hands-on experience building NLP classification models Familiarity with MLOps tools (e.g., MLflow, DVC, Kubeflow) and LLMOps platforms for managing LLM pipelines Excellent problem-solving skills, ability to work in a fast-paced environment Preferred Qualifications Prior experience in the customer support or conversational AI industry Knowledge of deploying AI applications in cloud environments (AWS, GCP, Azure) Contributions to open-source projects in the NLP/LLM space Experience with prompt engineering and fine-tuning for specific downstream tasks Benefits Hybrid setup Worker's insurance Paid Time Offs Other employee benefits to be discussed by our Talent Acquisition team in India Closing: Helpshift embraces diversity. We are proud to be an equal opportunity workplace and do not discriminate on the basis of sex, race, color, age, sexual orientation, gender identity, religion, national origin, citizenship, marital status, veteran status, or disability status. Privacy Notice By providing your information in this application, you understand that we will collect and process your information in accordance with our Applicant Privacy Notice. For more information, please see our Applicant Privacy Notice at https://www.keywordsstudios.com/en/applicant-privacy-notice.
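The listing asks for hands-on experience building NLP classification models. Here is a deliberately simple baseline sketch (TF-IDF features plus logistic regression); the tiny labeled sample is invented, and a production intent classifier would use a real labeled dataset and likely transformer embeddings, as the posting itself suggests.

```python
# Baseline NLP classification sketch: TF-IDF features + logistic regression.
# The handful of labeled examples here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "I can't log into my account",
    "How do I reset my password?",
    "I'd like a refund for my last order",
    "Please cancel my subscription and refund me",
    "The app crashes when I open settings",
    "Getting an error message on startup",
]
labels = ["account", "account", "billing", "billing", "bug", "bug"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word + bigram features
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["app shows an error and closes", "refund my payment please"]))
```

A baseline like this is useful as a sanity check before swapping in embedding-based or LLM-based classifiers.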

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role.

We have an exciting and rewarding opportunity for you to take your AI ML career to the next level. As a Senior Software Engineer in the Self Service Enablement team, you will lead the development and innovation of agentic applications using LLM technologies. You will work closely with a talented team to design scalable, resilient applications with strong observability, contributing to JP Morgan Chase's mission of delivering exceptional self-service solutions. Your role will be pivotal in driving innovation and enhancing the user experience, making a difference in the lives of our clients and the wider community.

Job Responsibilities: Develop solutions related to data architecture, ML platform and GenAI platform architecture; provide tactical solution and design support to the team, embedded with engineering on the execution and implementation of processes and procedures. Serve as a subject matter expert on a wide range of ML techniques and optimizations. Provide in-depth knowledge of distributed ML platform deployment, including training and serving. Create curative solutions using GenAI workflows through advanced proficiency in large language models (LLMs) and related techniques. Gain experience with creating a Generative AI evaluation and feedback loop for GenAI/ML pipelines. Get hands-on with code and design to bring experimental results into production solutions by collaborating with the engineering team. Own end-to-end code development in Python/Java for both proof of concept/experimentation and production-ready solutions. Optimize system accuracy and performance by identifying and resolving inefficiencies and bottlenecks, and collaborate with product and engineering teams to deliver tailored, science and technology-driven solutions. Drive decisions that influence product design, application functionality, and technical operations and processes.

Required Qualifications, Capabilities, And Skills: Formal training or certification on AI/ML concepts and 3+ years of applied experience. Solid understanding of ML techniques, especially in Natural Language Processing (NLP) and Large Language Models (LLMs). Hands-on experience with machine learning and deep learning methods. Good understanding of deep learning frameworks such as PyTorch or TensorFlow. Experience in advanced applied ML areas such as GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (Similarity Search). Deep understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and other related methods. Practical cloud-native experience, such as with AWS.

Preferred Qualifications, Capabilities, And Skills: Experience with Ray, MLflow, and/or other distributed training frameworks. In-depth understanding of embedding-based Search/Ranking, Recommender systems, Graph techniques, and other advanced methodologies. Experience with building and deploying ML models on cloud platforms such as AWS and AWS tools like SageMaker. Exposure to agentic frameworks such as LangChain, LangGraph, RASA, Parlant, Decagon.

About Us: JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands.
Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions - all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.

Posted 2 weeks ago

Apply

8.0 years

2 - 5 Lacs

Hyderābād

On-site

Position Overview: ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation. ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation. Job Responsibilities Architect, design, and optimize big data and AI/ML solutions on the Databricks platform. Develop and implement highly scalable ETL pipelines for processing large datasets. Lead the adoption of Apache Spark for distributed data processing and real-time analytics. Define and enforce data governance, security policies, and compliance standards. Optimize data lakehouse architectures for performance, scalability, and cost-efficiency. Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights. Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks. Automate data workflows using CI/CD pipelines and infrastructure-as-code practices. Ensure data integrity, quality, and reliability across all data processes. Basic Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. 8+ years of hands-on experience in data engineering, with at least 5+ years in Databricks Architect and Apache Spark. Proficiency in SQL, Python, or Scala for data processing and analytics. Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering. Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture. Hands-on experience with CI/CD tools and DevOps best practices. Familiarity with data security, compliance, and governance best practices. Strong problem-solving and analytical skills in a fast-paced environment. Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer). Hands-on experience with MLflow, Feature Store, or Databricks SQL. Exposure to Kubernetes, Docker, and Terraform. Experience with streaming data architectures (Kafka, Kinesis, etc.). Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker). Prior experience working with retail, e-commerce, or ad-tech data platforms. We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
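To illustrate the kind of scalable ETL the role describes on the Databricks lakehouse, here is a hedged PySpark sketch that reads raw data, applies basic cleaning, and writes a Delta table. The paths and column names are invented; on Databricks the SparkSession and Delta support are already available, while running this locally would require the delta-spark package and extra session configuration.

```python
# Illustrative Databricks-style ETL sketch: read raw data, clean it, and write
# a Delta table. Paths and column names are made up for this example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/orders.csv")          # hypothetical landing path
)

cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])        # basic data-quality step
    .filter(F.col("amount") > 0)
)

# Writing in Delta format provides ACID guarantees and time travel on the
# lakehouse, which is what makes downstream governance and auditing tractable.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/curated/orders")
)
```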

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site

Overview: PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tool integration, migrations, platform maintenance, and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning and administration, cloud network design, cloud security principles, and automation.

Responsibilities: The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tool adoption, cost optimization, and supporting new patterns/design solutions using the Databricks platform. Here’s a breakdown of typical responsibilities:

Core Technical Responsibilities
Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools.
Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming).
Create integration guidelines to configure and integrate Databricks with existing security tools relevant to data access control.
Implement data security and governance using Unity Catalog, access controls, and data classification techniques.
Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP.
Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.

Collaboration & Advisory
Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning.
Partner with architects and business stakeholders to align Databricks solutions with enterprise goals.
Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases.

Strategic & Leadership Contributions
Mentor junior engineers and promote knowledge sharing across teams.
Contribute to platform adoption strategies, including training, documentation, and internal evangelism.
Stay current with Databricks innovations and recommend enhancements to existing architectures.

Specialized Expertise (Optional but Valuable)
Machine learning & AI integration using MLflow, AutoML, or custom models.
Cost optimization and workload sizing for large-scale data processing.
Compliance and audit readiness for regulated industries.

Qualifications:
Bachelor’s degree in computer science.
At least 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 5 years in a platform admin role.
Strong understanding of data security principles and best practices.
Expertise in the Databricks platform, its security features, Unity Catalog, and data access control mechanisms.
Experience with data classification and masking techniques.
Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms.
Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets.
Proficient in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible.
Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
Certifications in Azure/AWS/Databricks platform administration, networking, and security are preferred.
Strong self-organization, time management, and prioritization skills.
A high level of attention to detail, excellent follow-through, and reliability.
Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
Strategic thinker focused on business-value results that utilize technical solutions.
Strong communication skills in writing, speaking, and presenting.
Able to work effectively in a multi-tasking environment.
Fluent in English.
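
As a hedged illustration of the Unity Catalog access-control work mentioned above, the sketch below grants read access to a governed table and a masked view from a Databricks notebook; the catalog, schema, table, and group names are hypothetical, not PepsiCo's.

```python
# Minimal sketch of Unity Catalog access control, run from a Databricks notebook.
# Catalog, schema, table, and group names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Let a data-engineering group browse and query a governed schema
spark.sql("GRANT USE CATALOG ON CATALOG sales TO `data-engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA sales.transactions TO `data-engineers`")
spark.sql("GRANT SELECT ON TABLE sales.transactions.orders TO `data-engineers`")

# Analysts get read access to a masked view instead of the raw table
spark.sql("""
    CREATE VIEW IF NOT EXISTS sales.transactions.orders_masked AS
    SELECT order_id, order_date, sha2(customer_email, 256) AS customer_hash, amount
    FROM sales.transactions.orders
""")
spark.sql("GRANT SELECT ON TABLE sales.transactions.orders_masked TO `analysts`")
```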

Posted 2 weeks ago

Apply

5.0 years

4 - 6 Lacs

Hyderābād

On-site

Position Overview: ShyftLabs is seeking a skilled Databricks Engineer to support the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities
Design, implement, and optimize big data pipelines in Databricks.
Develop scalable ETL workflows to process large datasets.
Leverage Apache Spark for distributed data processing and real-time analytics.
Implement data governance, security policies, and compliance standards.
Optimize data lakehouse architectures for performance and cost-efficiency.
Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
Ensure data integrity, quality, and reliability in all pipelines.

Basic Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
5+ years of hands-on experience with Databricks and Apache Spark.
Proficiency in SQL, Python, or Scala for data processing and analysis.
Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
Experience with CI/CD tools and DevOps best practices.
Familiarity with data security, compliance, and governance best practices.
Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.

Preferred Qualifications
Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
Hands-on experience with MLflow, Feature Store, or Databricks SQL.
Exposure to Kubernetes, Docker, and Terraform.
Experience with streaming data architectures (Kafka, Kinesis, etc.).
Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
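
To make the streaming-architecture requirement above concrete, here is a minimal, hypothetical Spark Structured Streaming sketch that ingests a Kafka topic into a Delta table; the broker address, topic, checkpoint path, and table name are placeholders, not part of the posting.

```python
# Minimal sketch of a streaming ingestion pipeline (Kafka -> Delta) with
# Spark Structured Streaming; on Databricks the Kafka connector is built in.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "clickstream")                 # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka values arrive as bytes; cast to string and stamp the ingestion time
parsed = (
    events.select(F.col("value").cast("string").alias("payload"))
          .withColumn("ingested_at", F.current_timestamp())
)

# Continuously append to a Delta table with checkpointing for fault tolerance
query = (
    parsed.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/clickstream")
          .outputMode("append")
          .toTable("bronze.clickstream")
)
query.awaitTermination()
```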

Posted 2 weeks ago

Apply

5.0 years

5 - 9 Lacs

Gurgaon

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
MLOps Engineering Director

Overview: The Horizontal Data Science Enablement Team within SSO Data Science is looking for an MLOps Engineering Director who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD and automation pipelines, and lead best practices.

Role and responsibilities:
Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces.
Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health.
Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards.
Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency.
Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs.
Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure.
Bring MLOps expertise to the table, namely within the scope of, but not limited to: model monitoring, feature catalog/store, model lineage maintenance, and CI/CD pipelines to gatekeep the model lifecycle from development to production.
Own and maintain MLOps solutions, either by leveraging open-source solutions or with a third-party vendor.
Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution.
Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
Manage a small team of MLOps engineers.

All about you:
Master’s degree in computer science, software engineering, or a similar field.
Strong experience with Databricks and its management of roles and resources.
Experience in cloud technologies and operations.
Experience supporting APIs and cloud technologies.
Experience with MLOps solutions like MLflow.
Experience with performing data analysis, data observability, data ingestion, and data integration.
7+ years of DevOps, SRE, or general systems engineering experience.
5+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, PII handling).
Strong coding ability in Python or other languages like Java and C++, plus a solid grasp of SQL fundamentals.
Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

What could set you apart:
SQL tuning experience.
Strong automation experience.
Strong data observability experience.
Operations experience in supporting highly scalable systems.
Ability to operate in a 24x7 environment encompassing global time zones.
Self-motivated; creatively solves software problems and effectively keeps the lights on for modeling systems.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
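
As a hedged illustration of the MLflow-based lifecycle gatekeeping this role oversees, the sketch below trains a toy model, logs its parameters and metrics to an experiment, and registers it in the model registry; the experiment and model names are hypothetical.

```python
# Minimal sketch of MLflow experiment tracking and model registration.
# The experiment name, model name, and sqlite backend are illustrative choices.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A registry-capable backend; a real setup would point at a tracking server.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("fraud-model-demo")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)

    # Log and register the model so a CI/CD pipeline can promote it later
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="fraud-model-demo",
    )
```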

Posted 2 weeks ago

Apply

8.0 years

7 - 9 Lacs

Chennai

Remote

Title: Senior Data Scientist
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai/Kovilpatti location for the initial three-month project training and execution period. After the three months, the candidate will be offered remote opportunities.

The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key responsibilities
Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring.
Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications.
Conduct statistical analysis and hypothesis testing for business experimentation.
Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI).
Collaborate with data engineering teams to improve data pipelines and quality.
Document methodologies, build reusable ML components, and publish technical artifacts.
Mentor junior data scientists and contribute to CoE-wide model governance.

Technical Skills
ML frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
Data visualization: Power BI, Tableau, Plotly, Seaborn
SQL and NoSQL (Cosmos DB, MongoDB)
Git, CI/CD tools, and model monitoring platforms

Qualification
Master’s in Data Science, Statistics, Mathematics, or Computer Science
Microsoft Certified: Azure Data Scientist Associate or equivalent
Proven success in delivering production-ready ML models with measurable business impact
Publications or patents in AI/ML will be considered a strong advantage

Job Types: Full-time, Permanent
Work Location: Hybrid remote in Chennai, Tamil Nadu
Expected Start Date: 12/07/2025
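
For illustration of the hyperparameter-tuning step listed in the responsibilities, here is a minimal scikit-learn sketch on synthetic data; the parameter grid and scoring metric are arbitrary choices, not part of the posting.

```python
# Minimal sketch of hyperparameter tuning for an ensemble classifier with
# scikit-learn's GridSearchCV; the data is synthetic and the grid is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2_000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out AUC:", search.score(X_test, y_test))
```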

Posted 2 weeks ago

Apply

5.0 years

6 - 9 Lacs

Chennai

On-site

If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! With competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere.

Who we are:
ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV) has applied innovation and user-designed, data-driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling and managing used vehicles, with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands includes ACV Auctions, ACV Transportation, ClearCar, MAX Digital and ACV Capital within its Marketplace Products, as well as True360 and Data Services.

ACV Auctions in Chennai, India is looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive!

At ACV we focus on the Health, Physical, Financial, Social and Emotional Wellness of our Teammates, and to support this we offer industry-leading benefits and wellness programs.

What you will do:
ACV’s Machine Learning (ML) team is looking to grow its MLOps team. Multiple ACV operations and product teams rely on the ML team’s solutions. Current deployments drive opportunities in the marketplace, in operations, and in sales, to name a few. As ACV has experienced hyper-growth over the past few years, the volume, variety, and velocity of these deployments have grown considerably, and the training, deployment, and monitoring needs of the ML team have grown as we’ve gained traction. MLOps is a critical function that helps us continue to deliver value to our partners and our customers.

Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and have the ability to support the development and implementation of end-to-end ML-enabled software solutions to meet the needs of their stakeholders. Those who will excel in this role will be those who listen with an ear to the overarching goal, not just the immediate concern that started the query. They will be able to show that their recommendations are contextually grounded in an understanding of the practical problem, the data, and theory, as well as what product and software solutions are feasible and desirable.

The core responsibilities of this role are:
Working with fellow machine learning engineers to build, automate, deploy, and monitor ML applications.
Developing data pipelines that feed ML models.
Deploying new ML models into production.
Building REST APIs to serve ML model predictions.
Monitoring the performance of models in production.

Required Qualifications:
Graduate education in a computationally intensive domain or equivalent work experience.
5+ years of prior relevant work or lab experience in ML projects/research.
Advanced proficiency with Python, SQL, etc.
Experience with building and deploying REST APIs (FastAPI, Flask).
Experience with distributed caching technologies (Redis).
Experience with real-time data streaming and processing (Kafka).
Experience with cloud services (AWS/GCP), Kubernetes, Docker, and CI/CD.

Preferred Qualifications:
Experience with MLOps-specific tooling like Vertex AI, Ray, Feast, Kubeflow, or MLflow is a plus.
Experience with building data pipelines.
Experience with training ML models.

Our Values
Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling

At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know.

Data Processing Consent
When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for the purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third-party service provider to help manage its recruitment and hiring process. For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
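
Since the role centers on serving ML model predictions over REST APIs, here is a minimal, hypothetical FastAPI sketch; the model file, feature names, and endpoint are placeholders and not ACV's actual service.

```python
# Minimal sketch of serving a pre-trained scikit-learn model over REST with FastAPI.
# The model file and feature names are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="price-model")      # hypothetical service name
model = joblib.load("model.joblib")     # assumes a trained model saved to disk


class VehicleFeatures(BaseModel):
    mileage: float
    age_years: float
    condition_grade: float


@app.post("/predict")
def predict(features: VehicleFeatures) -> dict:
    row = [[features.mileage, features.age_years, features.condition_grade]]
    prediction = float(model.predict(row)[0])
    return {"prediction": prediction}

# Run locally with:  uvicorn main:app --reload
```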

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Company Description
Aestriks is a full-service software development company headquartered in Delhi NCR, India. We specialize in building scalable, reliable web, mobile, and backend systems for startups, enterprises, and side hustlers.

Role Description
Experience: 1–5 years of AI/ML engineering (freshers may also apply)
This is a full-time on-site role for an Artificial Intelligence Engineer, located in New Delhi. The AI Engineer will be responsible for designing, developing, and implementing AI-based solutions.

Core Technologies:
Python, SQL
PyTorch or TensorFlow
LangChain or LlamaIndex
Hugging Face ecosystem (Transformers, PEFT, Datasets)
Vector databases (Pinecone, Qdrant, Chroma)
Cloud platforms (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure OpenAI)

LLM/GenAI Stack:
Fine-tuning techniques (LoRA, QLoRA)
RAG implementation and optimization
LLM APIs (OpenAI, Anthropic, Google Gemini)
Embedding models and similarity search
Evaluation frameworks (DeepEval, LLM-as-a-Judge)
MLOps tools (MLflow, Weights & Biases)

Qualifications
Proficiency in pattern recognition and neural networks
Strong background in computer science and software development
Experience with Natural Language Processing (NLP) technologies
Excellent problem-solving and analytical skills
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
Understanding of data structures and algorithms
Ability to work collaboratively in a team environment
Experience in the software development lifecycle is a plus
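
To ground the "RAG implementation" and "embedding models and similarity search" items above, here is a minimal retrieval sketch using a public sentence-transformers model and plain cosine similarity; the documents and model choice are illustrative, and a production system would use one of the vector databases listed above.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: embed documents,
# embed a query, and rank by cosine similarity. Documents are made up.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within five business days.",
    "Refund requests require the original order number.",
    "Support is available Monday through Friday, 9am to 6pm IST.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumes this public model is available
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query = "How long does invoice processing take?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors
scores = doc_vecs @ query_vec
best = int(np.argmax(scores))
print(f"Top passage ({scores[best]:.2f}): {docs[best]}")

# The retrieved passage would then be inserted into the LLM prompt as context.
```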

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior Artificial Intelligence Developer
Location: Pune
Experience: 3–8 Years
Company: Asmadiya Technologies Pvt. Ltd.

About the Role
Asmadiya Technologies is seeking a Senior AI Developer to lead the design and deployment of advanced AI solutions across enterprise-grade applications. You will architect intelligent systems, mentor junior engineers, and drive innovation in the areas of machine learning, deep learning, computer vision, and large language models. If you're ready to turn AI research into impactful production systems, we want to work with you.

Key Responsibilities
Lead end-to-end design, development, and deployment of scalable AI/ML solutions in production environments.
Architect AI pipelines and integrate models with enterprise systems and APIs.
Collaborate cross-functionally with product managers, data engineers, and software teams to align AI initiatives with business goals.
Optimize models for performance, scalability, and interpretability using MLOps practices.
Conduct deep research and experimentation with the latest AI techniques (e.g., Transformers, Reinforcement Learning, GenAI).
Review code, mentor team members, and set technical direction for AI projects.
Own model governance, ethical AI considerations, and post-deployment monitoring.

Required Skills & Qualifications
Bachelor’s/Master’s in Computer Science, Artificial Intelligence, Data Science, or a related field.
3–8 years of hands-on experience in AI/ML, including production model deployment.
Advanced Python skills and deep expertise in libraries such as TensorFlow, PyTorch, Hugging Face, and Scikit-learn.
Proven experience in deploying models to production (REST APIs, containers, cloud ML services).
Deep understanding of ML algorithms, optimization, statistical modeling, and deep learning.
Familiarity with tools like MLflow, Docker, Kubernetes, Airflow, and CI/CD pipelines for ML.
Experience with cloud AI/ML services (AWS SageMaker, GCP Vertex AI, or Azure ML).

Preferred Skills
Hands-on with LLMs and GenAI tools (OpenAI, LangChain, RAG architecture, vector DBs).
Experience in NLP, computer vision, or recommendation systems at scale.
Knowledge of model explainability (SHAP, LIME), bias detection, and AI ethics.
Strong understanding of software engineering best practices, microservices, and API architecture.

What We Offer
✅ Leadership role in cutting-edge AI product development
✅ Influence on AI strategy and technical roadmap
✅ Exposure to enterprise and global AI projects
✅ Fast-paced, growth-focused work environment
✅ Flexible work hours, supportive leadership, and a collaborative team

Apply now by sending your resume to: careers@asmadiya.com
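
As a small, hedged illustration of the PyTorch fundamentals this role assumes, here is a self-contained training loop on synthetic data; the architecture, dimensions, and hyperparameters are arbitrary.

```python
# Minimal sketch of a PyTorch training loop on synthetic binary-classification data.
import torch
from torch import nn

# Synthetic data: label is positive when the feature sum is positive
X = torch.randn(512, 16)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"final loss={loss.item():.3f}  train accuracy={accuracy.item():.2f}")
```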

Posted 2 weeks ago

Apply