
1696 MLflow Jobs - Page 40

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

5 - 7 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces. Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health. Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards. Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency. Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders. Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs. Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure. Bring MLOps expertise to the table, including but not limited to: model monitoring; feature catalog/store; model lineage maintenance; and CI/CD pipelines to gatekeep the model lifecycle from development to production. Own and maintain MLOps solutions, either by leveraging open-source solutions or through a third-party vendor. Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution. Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.

What experience you need: Master's degree in computer science, software engineering, or a similar field. Strong experience with Databricks and its management of roles and resources. Experience in cloud technologies and operations. Experience supporting APIs and cloud technologies. Experience with MLOps solutions like MLflow. Experience with performing data analysis, data observability, data ingestion, and data integration. 5+ years of DevOps, SRE, or general systems engineering experience. 2+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef. Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, and PII handling). Strong coding ability in Python or other languages like Java and C++, plus a solid grasp of SQL fundamentals. Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Hyderabad, Telangana

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description: AI Operations Engineer. This role requires working from our local Hyderabad office 2-3x a week. Location: Hyderabad, Telangana, India

ABOUT THE TEAM: The AI Operations team serves as the backbone of our GenAI product development, ensuring seamless deployment, monitoring, and optimization of AI systems. As a Junior AI Operations Engineer, you will accelerate innovation by handling the operational tasks that enable our senior engineers to focus on complex problem-solving. You'll gain hands-on experience with cutting-edge AI infrastructure while contributing to experiments, deployments, and automation that power our fitness technology platform. At ABC, we love entrepreneurs because we are entrepreneurs. We roll our sleeves up, we act fast, and we learn together.

WHAT YOU'LL DO: Execute and monitor AI experiments, tracking logs, retries, and performance metrics to ensure reliable model behavior. Manage API keys, data flow configurations, and agent deployments across development and production environments. Develop automation scripts using Python to streamline repetitive operational tasks and reduce manual intervention. Support evaluation pipelines and deployment processes, collaborating with ML engineers to maintain system reliability. Fill operational gaps by quickly learning new tools and technologies, enabling faster iteration cycles for the team. Troubleshoot deployment issues and maintain documentation for operational procedures and best practices.

WHAT YOU'LL NEED: 1–3 years of engineering experience with strong fundamentals in Python programming and REST API integration. Proficiency with version control systems (Git) and experience with deployment tools and automation frameworks. Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes). Understanding of CI/CD pipelines and monitoring tools for maintaining system health and performance. Strong problem-solving mindset with eagerness to learn AI/ML operations and grow within a technical team. Excellent communication skills and ability to collaborate effectively in fast-paced, agile environments.

AND IT'S GREAT TO HAVE: Exposure to MLOps tools and practices (MLflow, Weights & Biases, or similar platforms). Experience with infrastructure as code (Terraform, CloudFormation) and configuration management. Bachelor's degree in Computer Science, Engineering, or a related technical field.

WHAT'S IN IT FOR YOU: Purpose-led company with a values-focused culture – Best Life, One Team, Growth Mindset. Time Off – competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year, plus 11 holidays and 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling. Life Insurance and Personal Accident Insurance. Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement. Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16. Support for working women with financial aid towards crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers.
We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise, or independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a portfolio company of Thoma Bravo, a private equity firm focused on investing in software and technology companies (thomabravo.com). #LI-HYBRID

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Posted 1 month ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

Remote

About Vibrant Brands: Vibrant Brands is a leading Australian multi-branded tech company, delivering innovative solutions across diverse industries. We're passionate about pushing boundaries and are seeking a talented AI Engineer to join our global, remote team to drive cutting-edge AI initiatives.

Role Overview: As an AI Engineer, you'll design and deploy advanced AI/ML models to enhance Vibrant Brands' diverse product portfolio. You'll work remotely with cross-functional teams to solve complex challenges and deliver scalable, impactful solutions.

Key Responsibilities: Build and deploy machine learning models (e.g., NLP, computer vision, or predictive analytics) for various brand applications. Develop scalable AI pipelines and integrate models into production. Collaborate with data engineers to ensure high-quality data pipelines. Optimize models for performance and cost in cloud environments. Propose innovative AI applications tailored to Vibrant Brands' multi-brand ecosystem. Communicate technical solutions clearly to global stakeholders.

Required Qualifications: Bachelor's or Master's in Computer Science, Machine Learning, or a related field. 3+ years of experience deploying AI/ML models in production. Expertise in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn). Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of MLOps tools (e.g., MLflow, Kubeflow) and CI/CD pipelines. Strong problem-solving and remote collaboration skills.

Preferred Qualifications: Experience in tech-driven industries (e.g., SaaS, e-commerce, or digital platforms). Familiarity with generative AI, LLMs, or multi-modal models. Contributions to open-source AI projects. Comfort working in APAC time zones (e.g., AEST overlap).

What We Offer: Competitive salary and equity options. Fully remote work with flexible hours. Health and wellness benefits tailored to your region. Professional development budget for AI conferences and courses. Opportunity to shape AI strategy for a multi-branded tech leader.

How to Apply: Submit your resume, cover letter, and GitHub/portfolio links on the next page. Join us in redefining tech innovation at Vibrant Brands!

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Hello, FCM part of FTCG is one of the world’s largest travel management companies and a trusted partner for national and multinational companies. With a 24/7 reach in 97 countries, FCM’s flexible technology anticipates and solves client needs, supported by experts who provide in-depth local knowledge and duty of care as part of the ultimate personalised business travel experience. As part of the ASX-listed Flight Centre Travel Group, FCM delivers the best market-wide rates, unique added-value benefits, and exclusive solutions. Winner of the World's Leading Travel Management Company Award at the WTM for nine consecutive years (2011-2019), FCM is constantly transforming the business of travel through its empowered and accountable people who deliver 24/7 service and are available online and offline. FCM has won the coveted Great Place to Work certification for the fifth time! FCM Travel India is one of India’s Top 100 Great Mid-size Workplaces 2024 and the Best in Professional Services. A leader in the travel tech space, FCM has proprietary client solutions. FCM provides specialist services via FCM Consulting and FCM Meetings & Events.

Key Responsibilities: Design and develop AI solutions that address real-world business challenges, ensuring alignment with strategic objectives and measurable outcomes. Work with large-scale structured and unstructured datasets, leveraging modern data frameworks, tools, and platforms. Establish and maintain robust standards for data security, privacy, and regulatory compliance across all AI and data workflows. Collaborate closely with cross-functional teams to gather requirements, share insights, and deliver high-impact solutions. Monitor and maintain production AI systems to ensure continued accuracy, scalability, and reliability over time. Stay up to date with the latest advancements in AI, machine learning, and data engineering, and apply them where relevant. Write clean, well-documented, and maintainable code, and actively contribute to team best practices and technical documentation.

You'll be perfect for the role if you have: Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field. Strong programming skills in Python (preferred) and experience with AI/ML libraries such as TensorFlow, PyTorch, scikit-learn, or Hugging Face. Experience designing and deploying machine learning models and AI systems in production environments. Familiarity with modern data platforms and cloud services (e.g., Azure, AWS, GCP), including AutoML and MLflow. Proficiency with data processing tools and frameworks (e.g., Spark, Pandas, SQL) and working with both structured and unstructured data. Experience with Generative AI technologies, including prompt engineering, vector databases, and RAG (Retrieval-Augmented Generation) pipelines. Solid understanding of data security, privacy, and compliance principles, with experience implementing these in real-world projects. Strong problem-solving skills and ability to translate complex business problems into technical solutions. Excellent communication and collaboration skills, with the ability to work effectively across technical and non-technical teams. Experience with version control (e.g., Git) and agile development practices. Enthusiasm for learning and applying emerging technologies in AI and machine learning.

Work Perks! - What’s in it for you: FCTG is renowned internationally for having amazing perks and an even better culture. We understand that our people are our most valuable asset. It is the passion and dedication of our teams that keep the company on top of the industry ladder. It’s also why we offer some great employee benefits and perks outside of the norm. You will be rewarded with a competitive market salary. You will also be equipped with relevant training courses and tools to set you up for success, with endless career advancement and job opportunities all over the world. Market-aligned remuneration structure and a highly competitive salary. Fun and energetic culture: at the heart of everything we do at FCM is a desire to have fun and be yourself. Work-life balance: we believe in “No Leave = No Life”, so have your own travel adventures with paid annual leave. Great place to work: recognized as a top workplace for 5 consecutive years, a testament to our commitment to our people. Wellbeing focus: we take care of our employees with comprehensive medical coverage, accidental insurance, and term insurance for the well-being of our people. Paternity leave: we ensure that you can spend quality time with your growing family. Travel perks: you'll have access to plenty of industry discounts to ensure you continue to broaden your horizons. A career, not a job: we believe in our people's brightness of future. As a high-growth company, you will have the opportunity to advance your career in any direction you choose, whether that is locally or globally. Reward & recognition: celebrate the success of yourself and others at our regular Buzz Nights and at the annual Global Gathering - you'll have to experience it to believe it! #FCMIN Love for travel: we were founded by people who wanted to travel and want others to do the same. That passion is something you can’t miss in our people or service.

We value you... Flight Centre Travel Group is committed to creating an inclusive and diverse workplace that supports your unique identity to create better, safer experiences for everyone. We encourage you to come as you are; to foster inclusivity and collaboration. We celebrate you.

Who We Are... Since our beginning, our vision has always been to open up the world for those who want to see. As a global travel retailer, our people come from all different backgrounds, and our connections spread to the far reaches of the globe - 20+ countries and counting! Together, we are a family (we call ourselves Flighties). We offer genuine opportunities for people to grow and evolve. We embrace new experiences, we celebrate the wins, seize all opportunities, and empower all of our people to find their Brightness of Future. We encourage you to DREAM BIG through collaboration and innovation, and make sure you are supported to make incredible ideas a reality. Together, we deliver quality, innovative solutions that delight our customers and achieve our strategic priorities. Irreverence. Ownership. Egalitarianism

Posted 1 month ago

Apply

9.0 - 12.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience, such as with GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.

Posted 1 month ago

Apply

9.0 years

0 Lacs

India

Remote

We are looking for a visionary and technically adept AI Technical Architect to lead the design, development, and deployment of scalable AI/ML solutions across the enterprise. This role blends deep technical expertise with strategic leadership to deliver innovative, secure, and ethical AI systems. You will work closely with cross-functional teams to architect intelligent platforms that align with organizational goals and drive meaningful business impact. Location: Hyderabad/ Remote Experience: 9+ Years Key Responsibilities Architect and build cloud-native AI/ML platforms using AWS (SageMaker, Bedrock), Azure (ML, OpenAI), or GCP (Vertex AI, BigQuery, LangChain). Lead the end-to-end development of cutting-edge AI solutions, including Retrieval-Augmented Generation (RAG) pipelines, summarization tools, and virtual assistants. Design and implement robust MLOps and LLMOps frameworks to support CI/CD, model versioning, retraining, monitoring, and production observability. Ensure all AI solutions adhere to Responsible AI practices, focusing on explainability, fairness, bias mitigation, auditability, and compliance with regulations (e.g., GDPR, HIPAA). Integrate AI models seamlessly with enterprise data pipelines, APIs, and business applications to support real-time and batch inference workflows. Oversee the full ML lifecycle—from data preparation and feature engineering to model training, tuning, deployment, and monitoring. Collaborate with cross-functional teams including data engineers, product managers, and business stakeholders to ensure alignment with strategic initiatives. Leverage deep learning architectures (CNNs, RNNs, Transformers) to address use cases in NLP, computer vision, and forecasting. Define AI governance frameworks, including audit trails, bias detection protocols, and model transparency mechanisms. Provide mentorship and technical leadership to data scientists and AI engineers. Continuously evaluate new AI technologies, research advancements, and industry best practices to evolve architectural standards and drive innovation. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, AI/ML, or a related field. 9+ years of overall software development experience, with at least 3 years in AI/ML architecture or technical leadership. Expertise in Python and ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Proven experience deploying AI models at scale in production environments. Strong grasp of modern data architectures including data lakes, data warehouses, and ETL/ELT pipelines. Proficient with containerization (Docker), orchestration (Kubernetes), and cloud-based ML platforms (AWS Sagemaker, Azure ML, GCP Vertex AI). Hands-on experience with MLOps tools such as MLflow, Kubeflow, and Airflow. Familiarity with LLMs, prompt engineering, and language model fine-tuning is a plus. Exceptional communication skills with the ability to influence and collaborate across teams. Demonstrated experience mentoring engineers and leading technical initiatives. Candidates with prior experience in Healthcare or Telecom domains will be strongly preferred , especially those who have delivered domain-specific AI solutions aligned to regulatory, operational, or customer engagement needs. Preferred Qualifications Industry certifications in AI/ML or cloud architecture (e.g., AWS Machine Learning Specialty, Google Cloud ML Engineer). Experience with RAG pipelines and vector databases like Pinecone, FAISS, or Weaviate. 
Deep understanding of ethical AI principles and regulatory compliance requirements. Prior involvement in architectural reviews, technical steering committees, or enterprise-wide AI initiatives.

Why Veltris? AI-First Company: Veltris is built on the foundation of AI. We enable clients to build advanced products using the latest technologies including Machine Learning (ML), Deep Learning (DL – CV, NLP), and MLOps. Proprietary AI Framework: We've developed a full-stack AI framework, Insight.AI, to accelerate clients' ML model development and deployment lifecycle. NVIDIA Partnership: We are a part of the NVIDIA Partner Network as a Professional Services Partner, delivering solutions powered by NVIDIA’s GPU and software ecosystem. Cutting-Edge Research: We collaborate with academic institutions and research organizations to address complex problems in domains like medical imaging, biopharma, life sciences, legal, retail, and agriculture. Empowered Work Environment: Every team member, including junior engineers, works on impactful features in complex domains, contributing to real-world client success. Culture: Open communication, flat hierarchy, and a strong culture of ownership and innovation.

Posted 1 month ago

Apply

5.0 - 8.0 years

22 - 32 Lacs

Hyderabad

Work from Office

Product Engineer (Onsite, Hyderabad) Experience: 5 - 8 Years Exp Salary : INR 30-32 Lacs per annum Preferred Notice Period : Within 30 Days Shift : 9:00AM to 6:00PM IST Opportunity Type: Onsite (Hyderabad) Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : Python, FastAPI, Django, MLFlow, feast, Kubeflow, Numpy, Pandas, Big Data Good to have skills : Banking, Fintech, Product Engineering background IF (One of Uplers' Clients) is Looking for: Product Engineer (Onsite, Hyderabad) who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Product Engineer Location: Narsingi, Hyderabad 5 days of work from the Office Client is a Payment gateway processing company Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We share the profile with the client, and they take one/two interviews About the Project We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus. Role and Responsibilities Design, build, and optimize components of the ML engineering pipeline for scalability and performance. Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models. Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow. Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure. Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations. Continuously improve model deployment pipelines for reliability, monitoring, and automation. Requirements 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization. Solid knowledge of ML engineering principles and deployment best practices. Experience with Feast, Kubeflow, MLflow, or similar tools. Deep understanding of NumPy, Pandas, and data processing workflows. Exposure to big data environments and a good grasp of data science model workflows. Strong analytical and problem-solving skills with attention to detail. Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration. Excellent communication and collaboration skills. Nice to Have Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure). Familiarity with containerization technologies like Docker and orchestration using Kubernetes. Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks. Understanding of data protection, governance, or security aspects in ML pipelines. Experience Required: 5+ years How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! 
About Our Client: We foster business expansion through our innovative products and services, facilitating the seamless adoption of cloud-native technologies by companies. Our expertise lies in the revitalization of applications and infrastructure, harnessing the power of cloud-native solutions for enhanced resilience and scalability. As pioneering Kubernetes partners, we have been dedicated contributors to the open-source cloud-native community, consistently achieving nearly 100% growth over the past few years. We take pride in spearheading local chapters of Serverless & Kubernetes Meetup, actively participating in the development of a vibrant community dedicated to cutting-edge technologies within the Cloud and DevOps domains. About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and AirBnB trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

About Markovate: At Markovate, we build next-gen AI solutions that help businesses scale intelligently and stay future-ready. From cutting-edge Generative AI applications to computer vision and intelligent automation, our team transforms AI research into real-world products. We're now expanding our AI/ML leadership and looking for a driven, hands-on Lead AI/ML Engineer to guide the next wave of innovation.

Why This Role is Exciting: You'll lead mission-critical AI initiatives from concept to deployment. Architect intelligent systems that directly impact global clients. Work with cross-functional teams to bring ML solutions to production at scale. Stay at the forefront of GenAI, MLOps, and LLM integration. Join a high-growth, innovation-first culture with real ownership.

What You'll Do: Design and deploy scalable ML and GenAI models for diverse applications, including forecasting, recommendation, NLP, and computer vision. Lead a team of AI/ML and software engineers, providing architectural direction, code reviews, and mentorship. Translate complex business needs into actionable AI solutions. Own full ML pipelines: data ingestion, feature engineering, training, deployment, monitoring. Collaborate with product, data, and engineering teams to deliver impactful solutions. Apply best practices in MLOps, model versioning, retraining, and performance tracking. Stay current on the latest in LLMs, Transformers, Vision Models, and Foundation Models. Drive AI integration into full-stack applications and APIs.

What We're Looking For: 5+ years of experience in AI/ML or data science roles, with at least 1–2 years in a technical leadership capacity. Deep hands-on experience with Python, TensorFlow/PyTorch, and GenAI frameworks (e.g., Hugging Face Transformers, LangChain, OpenAI API). Experience in computer vision (OpenCV, YOLO, segmentation models) or agentic AI. Experience deploying ML models in production using Docker, FastAPI, AWS/GCP/Azure. Exposure to MLOps tools (MLflow, Airflow, CI/CD pipelines). Solid understanding of model performance tuning, experimentation, and lifecycle management. Background in full-stack development (JavaScript/Node/React) is a plus. Strong problem-solving mindset, clear communication, and collaborative leadership style.

Posted 1 month ago

Apply

4.0 - 9.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Role Overview We are seeking a Senior Associate – AI Engineer / MLOps / LLMOps with a passion for building resilient, cloud-native AI systems. In this role, you’ll collaborate with data scientists, researchers, and product teams to build infrastructure, automate pipelines, and deploy models that power intelligent applications at scale. If you enjoy solving real-world engineering challenges at the convergence of AI and software systems, this role is for you. Key Responsibilities Architect and implement AI/ML/GenAI pipelines, automating end-to-end workflows from data ingestion to model deployment and monitoring. Develop scalable, production-grade APIs and services using FastAPI, Flask, or similar frameworks for AI/LLM model inference. Design and maintain containerized AI applications using Docker and Kubernetes. Operationalize Large Language Models (LLMs) and other GenAI models via cloud-native deployment (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Manage and monitor model performance post-deployment, applying concepts of MLOps and LLMOps including model versioning, A/B testing, and drift detection. Build and maintain CI/CD pipelines for rapid and secure deployment of AI solutions using tools such as GitHub Actions, Azure DevOps, GitLab CI. Implement security, governance, and compliance standards in AI pipelines. Optimize model serving infrastructure for speed, scalability, and cost-efficiency. 
Collaborate with AI researchers to translate prototypes into robust production-ready solutions.

Required Skills & Experience: 4 to 9 years of hands-on experience in AI/ML engineering, MLOps, or DevOps for data science products. Bachelor's degree in Computer Science, Engineering, or a related technical field (BE/BTech/MCA). Strong software engineering foundation with hands-on experience in Python and shell scripting, and familiarity with ML libraries (scikit-learn, transformers, etc.). Experience deploying and maintaining LLM-based applications, including prompt orchestration, fine-tuned models, and agentic workflows. Deep understanding of containerization and orchestration (Docker, Kubernetes, Helm). Experience with CI/CD pipelines, infrastructure-as-code tools (Terraform, CloudFormation), and automated deployment practices. Proficiency in cloud platforms: Azure (preferred), AWS, or GCP, including AI/ML services (e.g., Azure ML, AWS SageMaker, GCP Vertex AI). Experience managing and monitoring the ML lifecycle (training, validation, deployment, feedback loops). Solid understanding of APIs, microservices, and event-driven architecture. Experience with model monitoring/orchestration tools (e.g., Kubeflow, MLflow). Exposure to LLMOps-specific orchestration tools such as LangChain, LangGraph, Haystack, or PromptLayer. Experience with serverless deployments (AWS Lambda, Azure Functions) and GPU-enabled compute instances. Knowledge of data pipelines using tools like Apache Airflow, Prefect, or Azure Data Factory. Exposure to logging and observability tools like the ELK stack, Azure Monitor, or Datadog.

Good to Have: Experience implementing multi-model architecture, serving GenAI models alongside traditional ML models. Knowledge of data versioning tools like DVC, Delta Lake, or LakeFS. Familiarity with distributed systems and optimizing inference pipelines for throughput and latency. Experience with infrastructure cost monitoring and optimization strategies for large-scale AI workloads. Exposure to full-stack ML/DL is a plus.

Soft Skills & Team Expectations: Strong communication and documentation skills; ability to clearly articulate technical concepts to both technical and non-technical audiences. Demonstrated ability to work independently as well as collaboratively in a fast-paced environment. A builder's mindset with a strong desire to innovate, automate, and scale. Comfortable in an agile, iterative development environment. Willingness to mentor junior engineers and contribute to team knowledge growth. Proactive in identifying tech stack improvements, security enhancements, and performance bottlenecks.

Posted 1 month ago

Apply

4.0 - 9.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems.

The Opportunity: When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you’ll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You’ll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the Data Science team you will design and deliver scalable AI applications that drive business transformation. As a Senior Associate you will analyze complex problems, mentor junior team members, and build meaningful client connections while navigating the evolving landscape of AI and machine learning. This role offers the chance to work on innovative technologies, collaborate with cross-functional teams, and contribute to creative solutions that shape the future of the industry.

Responsibilities: Design and implement scalable AI applications to facilitate business transformation. Analyze intricate problems and propose practical solutions. Mentor junior team members to enhance their skills and knowledge. Establish and nurture meaningful relationships with clients. Navigate the dynamic landscape of AI and machine learning. Collaborate with cross-functional teams to drive innovative solutions. Utilize advanced technologies to improve project outcomes. Contribute to the overall strategy of the Data Science team.

What You Must Have: Bachelor's Degree in Computer Science, Engineering, or an equivalent technical discipline. 4-9 years of experience in Data Science/ML/AI roles. Oral and written proficiency in English required.

What Sets You Apart: Proficiency in Python and data science libraries. Hands-on experience with Generative AI and prompt engineering. Familiarity with cloud platforms like Azure, AWS, GCP. Understanding of production-level AI systems and CI/CD. Experience with Docker, Kubernetes for ML workloads. Knowledge of MLOps tooling and pipelines. Demonstrated track record of delivering AI-driven solutions.

Preferred Knowledge/Skills: Please reference the skill categories below for job description details.

About PwC CTIO – AI Engineering: PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.
Role Overview: We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team’s innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities: Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases. Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments. Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar, and develop applications in Streamlit/Chainlit for demo purposes. Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs. Fine-tune and pre-train LLMs (Hugging Face and similar libraries) to align with business objectives. Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions. Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics. Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team’s work.

Required Skills & Experience: 4 to 9 years of experience in Data Science/ML/AI roles. Bachelor’s degree in Computer Science, Engineering, or an equivalent technical discipline (BE/BTech/MCA). Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc. Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment. Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows. Strong knowledge of NLP techniques, transformer architectures, and text analysis. Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered). Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture (need not develop from scratch). Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc. Exposure to deploying AI models via APIs and integration into larger data ecosystems. Strong understanding of model operationalization and lifecycle management. Experience with Docker, Kubernetes, and containerized deployments for ML workloads. Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.). Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js). Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Key Responsibilities: Build robust document data extraction pipelines using NLP and OCR techniques. Develop and optimize end-to-end workflows for parsing scanned/image-based documents (PDFs, JPGs, TIFFs) and structured files (MS Excel, MS Word). Leverage LLMs (OpenAI GPT, Claude, Gemini, etc.) for advanced entity extraction, summarization, and classification tasks. Design and implement Python-based scripts for parsing, cleaning, and transforming data. Integrate with Azure services for document storage, compute, and secured API hosting (e.g., Azure Blob, Azure Functions, Key Vault, Azure Cognitive Services). Deploy and orchestrate workflows in Azure Databricks (including Spark and ML pipelines). Build and manage API calls for model integration, rate-limiting, and token control using AI gateways. Automate results export into SQL/Oracle databases and enable downstream access for analytics/reporting. Handle diverse metadata requirements, and create reusable, modular code for different document types. Optionally visualize and report data using Power BI and export data into Excel for stakeholder review.

Required Technical Skills & Qualifications: Strong programming skills in Python (Pandas, Regex, Pytesseract, spaCy, LangChain, Transformers, etc.). Experience with Azure Cloud (Blob Storage, Function Apps, Key Vaults, Logic Apps). Hands-on with Azure Databricks (PySpark, Delta Lake, MLflow). Familiarity with OCR tools like Tesseract, Azure OCR, AWS Textract, or Google Vision API. Proficient in SQL and experience with Oracle Database integration (using cx_Oracle, SQLAlchemy, etc.). Experience working with LLM APIs (OpenAI, Anthropic, Google, or Hugging Face models). Knowledge of API development and integration (REST, JSON, API rate limits, authentication handling). Excel data manipulation using Python (e.g., openpyxl, pandas, xlrd). Understanding of Power BI dashboards and integration with structured data sources.

Nice To Have: Experience with LangChain, LlamaIndex, or similar frameworks for document Q&A and retrieval-augmented generation (RAG). Background in data science or machine learning. CI/CD and version control (Git, Azure DevOps). Familiarity with Data Governance and PII handling in document processing.

Soft Skills: Strong problem-solving skills and an analytical mindset. Attention to detail and ability to work with messy/unstructured data. Excellent communication skills to interact with technical and non-technical stakeholders. Ability to work independently and manage priorities in a fast-paced environment.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps Engineer
Location: Gurugram (On-site)
Experience Required: 2–6 years
Work Schedule: Monday to Friday, 10:30 AM – 8:00 PM (1st and 3rd Saturdays off)
About Darwix AI
Darwix AI is a next-generation Generative AI platform built for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI infrastructure processes multimodal data such as voice calls, emails, chat logs, and CCTV streams to deliver real-time contextual nudges, performance analytics, and AI-assisted coaching.
Our product suite includes:
Transform+: Real-time conversational intelligence for contact centers and field sales
Sherpa.ai: Multilingual GenAI assistant offering live coaching, call summaries, and objection handling
Store Intel: A computer vision solution converting retail CCTV feeds into actionable insights
Darwix AI is trusted by leading organizations including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty. We are backed by top institutional investors and are expanding rapidly across India, the Middle East, and Southeast Asia.
Key Responsibilities
Design, implement, and manage scalable cloud infrastructure using AWS services such as EC2, S3, IAM, Lambda, SageMaker, and EKS
Build and maintain secure, automated CI/CD pipelines using GitHub Actions, Docker, and Terraform
Manage machine learning model deployment workflows and lifecycle using tools such as MLflow or DVC
Deploy and monitor Kubernetes-based workloads in Amazon EKS (both managed and self-managed node groups)
Implement best practices for configuration management, containerization, secrets handling, and infrastructure security
Ensure system availability, performance monitoring, and failover automation for critical ML services
Collaborate with data scientists and software engineers to operationalize model training, inference, and version control
Contribute to Agile ceremonies and ensure DevOps alignment with sprint cycles and delivery milestones
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field
2–6 years of experience in DevOps, MLOps, or related roles
Proficiency in AWS services including EC2, S3, IAM, Lambda, SageMaker, and EKS
Strong understanding of Kubernetes architecture and workload orchestration in EKS environments
Hands-on experience with CI/CD pipelines and GitHub Actions, including secure credential management using GitHub Secrets
Strong scripting and automation skills (Python, shell scripting)
Familiarity with model versioning tools such as MLflow or DVC, and artifact storage strategies using AWS S3
Solid understanding of Agile software development practices and QA/testing workflows
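For context, a minimal illustrative sketch of the model-lifecycle task this posting mentions (logging and registering a model with MLflow); the experiment and model names are hypothetical, and a registry-capable MLflow tracking server is assumed:

```python
# Minimal MLflow logging/registration sketch (illustrative only).
# Assumes an MLflow tracking server is configured via MLFLOW_TRACKING_URI.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("demo-deployment-pipeline")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model makes it visible to downstream deployment/CI jobs.
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="demo-classifier"
    )
```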

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Hello, FCM, part of FCTG, is one of the world's largest travel management companies and a trusted partner for national and multinational companies. With a 24/7 reach in 97 countries, FCM's flexible technology anticipates and solves client needs, supported by experts who provide in-depth local knowledge and duty of care as part of the ultimate personalised business travel experience. As part of the ASX-listed Flight Centre Travel Group, FCM delivers the best market-wide rates, unique added-value benefits, and exclusive solutions. Winner of the World's Leading Travel Management Company Award at the WTM for nine consecutive years (2011–2019), FCM is constantly transforming the business of travel through its empowered and accountable people who deliver 24/7 service and are available online and offline. FCM has won the coveted Great Place to Work certification for the fifth time! FCM Travel India is one of India's Top 100 Great Mid-size Workplaces 2024 and the Best in Professional Services. A leader in the travel tech space, FCM has proprietary client solutions. FCM provides specialist services via FCM Consulting and FCM Meetings & Events.
Key Responsibilities
Design and develop AI solutions that address real-world business challenges, ensuring alignment with strategic objectives and measurable outcomes.
Work with large-scale structured and unstructured datasets, leveraging modern data frameworks, tools, and platforms.
Establish and maintain robust standards for data security, privacy, and regulatory compliance across all AI and data workflows.
Collaborate closely with cross-functional teams to gather requirements, share insights, and deliver high-impact solutions.
Monitor and maintain production AI systems to ensure continued accuracy, scalability, and reliability over time.
Stay up to date with the latest advancements in AI, machine learning, and data engineering, and apply them where relevant.
Write clean, well-documented, and maintainable code, and actively contribute to team best practices and technical documentation.
You'll Be Perfect For The Role If You Have
Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Strong programming skills in Python (preferred) and experience with AI/ML libraries such as TensorFlow, PyTorch, scikit-learn, or Hugging Face
Experience designing and deploying machine learning models and AI systems in production environments
Familiarity with modern data platforms and cloud services (e.g., Azure, AWS, GCP), including AutoML and MLflow
Proficiency with data processing tools and frameworks (e.g., Spark, Pandas, SQL) and working with both structured and unstructured data
Experience with Generative AI technologies, including prompt engineering, vector databases, and RAG (Retrieval-Augmented Generation) pipelines
Solid understanding of data security, privacy, and compliance principles, with experience implementing these in real-world projects
Strong problem-solving skills and ability to translate complex business problems into technical solutions
Excellent communication and collaboration skills, with the ability to work effectively across technical and non-technical teams
Experience with version control (e.g., Git) and agile development practices
Enthusiasm for learning and applying emerging technologies in AI and machine learning
Work Perks! - What's in it for you:
FCTG is renowned internationally for having amazing perks and an even better culture. We understand that our people are our most valuable asset.
It is the passion and dedication of our teams that keeps the company on top of the industry ladder. It's also why we offer some great employee benefits and perks outside of the norm. You will be rewarded with a competitive market salary and equipped with relevant training courses and tools to set you up for success, with endless career advancement and job opportunities all over the world.
Market-aligned remuneration structure and a highly competitive salary
Fun and energetic culture: At the heart of everything we do at FCM is a desire to have fun and be yourself
Work-life balance: We believe in "No Leave = No Life", so have your own travel adventures with paid annual leave
Great place to work: Recognized as a top workplace for 5 consecutive years, a testimonial of our commitment to our people
Wellbeing focus: We take care of our employees with comprehensive medical coverage, accidental insurance, and term insurance for their well-being
Paternity leave: We ensure that you can spend quality time with your growing family
Travel perks: You'll have access to plenty of industry discounts to ensure you continue to broaden your horizons
A career, not a job: We believe in our people's brightness of future. As a high-growth company, you will have the opportunity to advance your career in any direction you choose, whether that is locally or globally
Reward & recognition: Celebrate the success of yourself and others at our regular Buzz Nights and at the annual Global Gathering - you'll have to experience it to believe it!
Love for travel: We were founded by people who wanted to travel and want others to do the same. That passion is something you can't miss in our people or service.
We value you... Flight Centre Travel Group is committed to creating an inclusive and diverse workplace that supports your unique identity to create better, safer experiences for everyone. We encourage you to come as you are; to foster inclusivity and collaboration. We celebrate you.
Who We Are... Since our beginning, our vision has always been to open up the world for those who want to see. As a global travel retailer, our people come from all different backgrounds, and our connections spread to the far reaches of the globe - 20+ countries and counting! Together, we are a family (we call ourselves Flighties). We offer genuine opportunities for people to grow and evolve. We embrace new experiences, celebrate the wins, seize all opportunities, and empower all of our people to find their Brightness of Future. We encourage you to DREAM BIG through collaboration and innovation, and make sure you are supported to make incredible ideas a reality. Together, we deliver quality, innovative solutions that delight our customers and achieve our strategic priorities. Irreverence. Ownership. Egalitarianism.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Job Title – Senior Data Scientist
Candidate Specification – 10+ years, Notice Period – Immediate to 30 days, Hybrid.
Job Summary
We are seeking a highly skilled and experienced Senior Data Scientist to join our advanced analytics team. The ideal candidate will possess strong statistical and machine learning expertise, hands-on programming skills, and the ability to transform data into actionable business insights. This role also requires domain understanding to align data science efforts with business objectives in industries such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Equipment.
Primary Responsibilities
Lead the design, development, and deployment of advanced machine learning and statistical models
Analyze large, complex datasets to uncover trends, patterns, and actionable insights
Collaborate cross-functionally with business, engineering, and domain teams to define analytical problems and deliver impactful solutions
Apply deep understanding of business objectives to drive the application of data science in decision-making
Ensure the quality, integrity, and governance of data used for modeling and analytics
Guide junior data scientists and review code and models for scalability and accuracy
Core Competencies (Primary Skills)
Statistical Analysis & Mathematics: Strong foundation in probability, statistics, linear algebra, and calculus; experience with hypothesis testing, A/B testing, and regression models
Machine Learning & Deep Learning: Proficient in supervised/unsupervised learning and ensemble techniques; hands-on experience with neural networks, NLP, and computer vision
Business Acumen & Domain Knowledge: Proven ability to translate business needs into data science solutions; exposure to domains such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Pumps/Motors
Technical Proficiency
Programming Languages: Python, R, SQL
Libraries & Tools: Pandas, NumPy, scikit-learn, TensorFlow, PyTorch
Data Visualization: Matplotlib, Seaborn, Plotly, Tableau, Power BI
MLOps & Deployment: Docker, Kubernetes, MLflow, Airflow
Cloud & Big Data (Preferred): AWS, GCP, Azure, Spark, Hadoop, Hive, Presto
Secondary Skills (Preferred)
Generative AI: GPT-based models, fine-tuning, open-source LLMs, agentic AI frameworks
Project Management: Agile methodologies, sprint planning, stakeholder communication
Other Information
Role: Senior Data Scientist - Contract Hiring
Industry Type: IT / Computers - Software
Functional Area:
Required Education: Bachelor's Degree
Employment Type: Full Time, Permanent
Key Skills: Deep Learning, Machine Learning, Python, Statistical Analysis
Job Code: GO/JC/375/2025
Recruiter Name: Christopher
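For context, a minimal illustrative sketch of the hypothesis-testing/A/B-testing competency listed above; the conversion counts and sample sizes are made up for the example:

```python
# Minimal A/B test sketch (illustrative only): two-proportion z-test
# on made-up conversion counts for a control group (A) and a variant (B).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([420, 480])      # hypothetical successes in A and B
visitors = np.array([10_000, 10_000])   # hypothetical sample sizes

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: conversion rates differ at the 5% level.")
else:
    print("Fail to reject H0 at the 5% level.")
```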

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
Build robust ML pipelines and automate model training, evaluation, and deployment.
Optimize and tune models for financial time series, pricing engines, and fraud detection.
Collaborate with data scientists and data engineers to deploy scalable and secure ML models.
Monitor model drift and data drift, and ensure models are retrained and updated as per regulatory norms.
Implement CI/CD for ML and integrate with enterprise applications.
Tech Stack
Languages: Python
ML Platforms: MLflow, Kubeflow
MLOps Tools: Airflow, MLReef, Seldon
Libraries: scikit-learn, XGBoost, LightGBM
Cloud: GCP AI Platform
Containerization: Docker, Kubernetes
Job Category: AI/ML Engineer
Job Type: Full Time
Job Location: Mumbai
Experience Level: 3 to 5 Years
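For context, a minimal illustrative sketch of the data-drift monitoring this role describes: a Kolmogorov–Smirnov test comparing a training-time feature distribution with recent production data. The data here is synthetic and the alert threshold is an assumption:

```python
# Minimal data-drift check sketch (illustrative only): compare the
# distribution of one feature at training time vs. in recent production data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # synthetic baseline
prod_feature = rng.normal(loc=0.3, scale=1.1, size=1_000)   # synthetic drifted data

stat, p_value = ks_2samp(train_feature, prod_feature)
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.01:  # hypothetical alert threshold
    print("Drift suspected: flag feature for review / trigger retraining workflow.")
```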

Posted 1 month ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we'd love to connect.
What You'll Do (Responsibilities):
Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline.
Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
Collaborate with ML engineers and data engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
Participate in incident response, performance tuning, and continuous system improvement.
Good to Have:
Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
Previous involvement in production-grade AI/ML projects or data-intensive systems
Startup or high-growth tech company experience
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of hands-on experience in a DevOps, SRE, or cloud infrastructure role.
Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
Strong communication and collaboration skills to work across engineering, data science, and product teams.
Benefits:
Competitive salary
Support for continual learning (free books and online courses)
Leveling-up opportunities
Diverse team environment
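For context, a minimal illustrative sketch of the kind of Python automation this role calls for: flagging running EC2 instances that are missing a cost-allocation tag. The tag key is an assumption for the example, and AWS credentials/region are assumed to be configured in the environment:

```python
# Minimal cost/tag hygiene sketch (illustrative only).
# Lists running EC2 instances missing a required cost-allocation tag.
import boto3

REQUIRED_TAG = "cost-center"  # hypothetical tag key

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"Untagged instance: {instance['InstanceId']}")
```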

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Dynamic Yield, a Mastercard company, is dedicated to powering an inclusive, digital economy that benefits everyone, everywhere. Our SSO Data Science team, specifically the Horizontal Data Science Enablement Team, is looking for an MLOps Engineering Manager. This critical leadership role involves solving complex MLOps challenges, overseeing the entire organization's Databricks platform, building robust CI/CD and automation pipelines, and championing MLOps best practices. You'll lead the charge in optimizing the machine learning lifecycle, ensuring platform stability, and collaborating closely with data engineers, data scientists, and other key stakeholders to support their data processing and analytics needs.
All About You
As an MLOps Engineering Manager, you will:
Databricks Platform Leadership: Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces for the entire organization. Continuously monitor Databricks clusters for high workloads or excessive usage costs, proactively alerting relevant stakeholders to address issues impacting overall cluster health. Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards. Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency. Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
MLOps Solution Ownership: Bring deep MLOps expertise to the table, specifically within the scope of, but not limited to: model monitoring, feature catalog/store, model lineage maintenance, and CI/CD pipelines to gatekeep the model lifecycle from development to production. Own and maintain MLOps solutions, either by leveraging open-source options or through third-party vendors. Build LLMOps pipelines using open-source solutions, recommend alternatives, and onboard new products to the solution.
Operational Excellence & Collaboration: Maintain services once they are live by measuring and monitoring availability, latency, and overall system health. Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs. Maintain comprehensive documentation of Databricks configurations, processes, and best practices. Lead participation in security and architecture reviews of the infrastructure.
What Experience You Need
Education: Master's degree in computer science, software engineering, or a similar field.
Databricks Expertise: Strong experience with Databricks and its management of roles and resources.
Cloud & APIs: Experience in cloud technologies and operations, and experience supporting APIs and cloud technologies.
MLOps Solutions: Experience with MLOps solutions like MLflow.
Data Skills: Experience with performing data analysis, data observability, data ingestion, and data integration.
DevOps/SRE Background: 5+ years of DevOps, SRE, or general systems engineering experience.
CI/CD Proficiency: 2+ years of hands-on experience in industry-standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory, and Chef.
Data Governance: Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, PII handling).
Programming: Strong coding ability in Python or other languages like Java and C++, plus a solid grasp of SQL fundamentals.
Problem-Solving & Ownership: A systematic problem-solving approach, coupled with strong communication skills and a strong sense of ownership and drive.
What Could Set You Apart
SQL Tuning: Experience with SQL tuning.
Automation: Strong automation experience.
Data Observability: Strong data observability experience.
Operations: Experience supporting highly scalable systems.
Global Operations: Ability to operate in a 24x7 environment encompassing global time zones.
Self-Motivation: Self-motivated; creatively solves software problems while effectively keeping modeling systems operational.
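For context, a minimal illustrative sketch of the cluster-monitoring task described above, calling the Databricks clusters REST endpoint through plain requests. The workspace URL and token environment variables, the uptime threshold, and the exact response fields are assumptions and may differ by workspace/API version:

```python
# Minimal Databricks cluster-monitoring sketch (illustrative only).
# Flags long-running clusters so stakeholders can review cost and health.
# Assumes DATABRICKS_HOST and DATABRICKS_TOKEN are set in the environment.
import os
import time
import requests

host = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]
MAX_UPTIME_HOURS = 12                  # hypothetical alert threshold

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

now_ms = time.time() * 1000
for cluster in resp.json().get("clusters", []):
    if cluster.get("state") == "RUNNING":
        uptime_h = (now_ms - cluster.get("start_time", now_ms)) / 3_600_000
        if uptime_h > MAX_UPTIME_HOURS:
            print(f"Review cluster {cluster['cluster_name']}: up {uptime_h:.1f}h")
```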

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Azure Data Engineer with Databricks
Experience: 5–10 years
Job Level: Senior Engineer / Lead / Architect
Notice Period: Immediate Joiner
Role Overview
Join our dynamic team at Team Geek Solutions, where we specialize in innovative data solutions and cutting-edge technology implementations to empower businesses across various sectors. We are looking for a skilled Azure Data Engineer with expertise in Databricks to join our high-performing data and AI team for a critical client engagement. The ideal candidate will have strong hands-on experience in building scalable data pipelines, data transformation, and real-time data processing using Azure Data Services and Databricks.
Key Responsibilities
Design, develop, and deploy end-to-end data pipelines using Azure Databricks, Azure Data Factory, and Azure Synapse Analytics.
Perform data ingestion, data wrangling, and ETL/ELT processes from various structured and unstructured data sources (e.g., APIs, on-prem databases, flat files).
Optimize and tune Spark-based jobs and Databricks notebooks for performance and scalability.
Implement best practices for CI/CD, code versioning, and testing in a Databricks environment using DevOps pipelines.
Design data lake and data warehouse solutions using Delta Lake and Synapse Analytics.
Ensure data security, governance, and compliance using Azure-native tools (e.g., Azure Purview, Key Vault, RBAC).
Collaborate with data scientists to enable feature engineering and model training within Databricks.
Write efficient SQL and PySpark code for data transformation and analytics.
Monitor and maintain existing data pipelines and troubleshoot issues in a production environment.
Document technical solutions, architecture diagrams, and data lineage as part of delivery.
Mandatory Skills & Technologies
Azure Cloud Services: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake Storage (Gen2), Azure Key Vault, Azure Functions, Azure Monitor
Databricks Platform: Delta Lake, Databricks Notebooks, Job Clusters, MLflow (optional), Unity Catalog
Programming Languages: PySpark, SQL, Python
Data Pipelines: ETL/ELT pipeline design and orchestration
Version Control & DevOps: Git, Azure DevOps, CI/CD pipelines
Data Modeling: Star/Snowflake schema, dimensional modeling
Performance Tuning: Spark job optimization, data partitioning strategies
Data Governance & Security: Azure Purview, RBAC, data masking
Nice To Have
Experience with Kafka, Event Hub, or other real-time streaming platforms
Exposure to Power BI or other visualization tools
Knowledge of Terraform or ARM templates for infrastructure as code
Experience in MLOps and integration with MLflow for model lifecycle management
Certifications (Good To Have)
Microsoft Certified: Azure Data Engineer Associate
Databricks Certified Data Engineer Associate / Professional
DP-203: Data Engineering on Microsoft Azure
Soft Skills
Strong communication and client interaction skills
Analytical thinking and problem-solving
Agile mindset with familiarity in Scrum/Kanban
Team player with mentoring ability for junior engineers
Skills: data partitioning strategies, azure functions, data analytics, unity catalog, rbac, databricks, elt, devops, azure data factory, delta lake, data factory, spark job optimization, job clusters, azure devops, etl/elt pipeline design and orchestration, data masking, azure key vault, azure databricks, azure data engineer, azure synapse, star/snowflake schema, azure data lake storage (gen2), git, sql, etl, snowflake, azure, python, azure cloud services, azure purview, pyspark, mlflow, ci/cd pipelines, dimensional modeling, sql server, big data technologies, azure monitor, azure synapse analytics, databricks notebooks
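For context, a minimal illustrative sketch of the kind of PySpark/Delta Lake transformation step this role describes; the table paths and column names are hypothetical, and a Databricks or Delta-enabled Spark session is assumed:

```python
# Minimal PySpark + Delta Lake sketch (illustrative only).
# Reads raw CSV, applies a simple transformation, and writes a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-demo").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/orders.csv")  # hypothetical path
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")          # partitioning strategy for query pruning
    .save("/mnt/curated/orders_delta")  # hypothetical Delta location
)
```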

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights.
Velsera provides software and professional services for:
AI-powered multimodal data harmonization and analytics for drug discovery and development
IVD development, validation, and regulatory approval
Clinical NGS interpretation, reporting, and adoption
With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!
What will you do?
Train, fine-tune, and deploy Large Language Models (LLMs) to solve real-world problems effectively
Design, implement, and optimize AI/ML pipelines to support model development, evaluation, and deployment
Collaborate with architects, software engineers, and product teams to integrate AI solutions into applications
Ensure model performance, scalability, and efficiency through continuous experimentation and improvements
Work on LLM optimization techniques, including Retrieval-Augmented Generation (RAG), prompt tuning, etc.
Manage and automate the infrastructure necessary for AI/ML workloads while keeping the focus on model development
Work with DevOps teams to ensure smooth deployment and monitoring of AI models in production
Stay updated on the latest advancements in AI, LLMs, and deep learning to drive innovation
What do you bring to the table?
Strong experience in training, fine-tuning, and deploying LLMs using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers
Hands-on experience in developing and optimizing AI/ML pipelines, from data preprocessing to model inference
Solid programming skills in Python and familiarity with libraries like NumPy, Pandas, and scikit-learn
Strong understanding of tokenization, embeddings, and prompt engineering for LLM-based applications
Hands-on experience in building and optimizing RAG pipelines using vector databases (FAISS, Pinecone, Weaviate, or ChromaDB)
Experience with cloud-based AI infrastructure (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes)
Experience in model monitoring, A/B testing, and performance optimization in a production environment
Familiarity with MLOps best practices and tools (Kubeflow, MLflow, or similar)
Ability to balance hands-on AI development with necessary infrastructure management
Strong problem-solving skills, teamwork, and a passion for building AI-driven solutions
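For context, a minimal illustrative sketch of the retrieval step in the RAG pipelines mentioned above, using sentence-transformers embeddings with a FAISS index; the documents and embedding model name are placeholders, not from the posting:

```python
# Minimal RAG retrieval sketch (illustrative only): embed a few documents,
# index them with FAISS, and fetch the nearest neighbours for a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Variant classification guidelines for clinical NGS reporting.",
    "Harmonizing multimodal data for drug discovery analytics.",
    "Model monitoring checklist for production ML services.",
]  # placeholder corpus

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
emb = np.asarray(model.encode(docs), dtype="float32")

index = faiss.IndexFlatL2(emb.shape[1])
index.add(emb)

query = np.asarray(model.encode(["How are NGS variants reported?"]), dtype="float32")
_, neighbors = index.search(query, k=2)
for rank, doc_id in enumerate(neighbors[0], start=1):
    print(f"{rank}. {docs[doc_id]}")  # retrieved context would be passed to the LLM
```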

Posted 1 month ago

Apply

6.0 years

0 Lacs

Erode, Tamil Nadu, India

Remote

Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)
Location: Remote
Job Type: Full-time
About the role
We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master's degree in a quantitative field, 4–6 years of hands-on experience, and demonstrated expertise in building models from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.
Responsibilities
Analyze data, identify patterns, and perform detailed exploratory data analysis (EDA).
Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks.
Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets.
Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval.
Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems.
Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders.
Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.
Required Qualifications
Master's degree in Computer Science, Statistics, Applied Mathematics, or a related field.
4–6 years of industry experience in data science or machine learning roles.
Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
Hands-on knowledge of:
> Programming languages: Python (primary), SQL, R (optional)
> Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers
> Distributed computing: PySpark, Dask
> Big data and cloud platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML
> Data engineering tools: Apache Spark, Delta Lake, Airflow
> ML workflow & visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib
> Version control and collaboration: Git, GitHub, Jupyter, VS Code
Preferred Qualifications
Master's or Ph.D. in a quantitative or technical field.
Experience deploying machine learning pipelines in production using CI/CD tools.
Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads.
Understanding of MLOps and model lifecycle management best practices.
Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.
What We Offer
Competitive salary and performance bonuses
Flexible working hours and remote options
Opportunities for continued learning and research
Collaborative, high-impact team environment
Access to cutting-edge technology and compute resources
To apply, send your resume to jobs@megovation.io to be part of a team pushing the boundaries of data-driven innovation.
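For context, a minimal illustrative sketch of the predictive-modeling work described above: training an XGBoost classifier on a synthetic dataset and checking hold-out performance. The data and hyperparameters are placeholders:

```python
# Minimal predictive-modeling sketch (illustrative only):
# XGBoost classifier on synthetic data with a hold-out evaluation.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=7
)

model = XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)

pred = model.predict_proba(X_test)[:, 1]
print(f"Hold-out ROC AUC: {roc_auc_score(y_test, pred):.3f}")
```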

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

Client Type: US Client
Location: Remote
About the Role
We're creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.
Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
Be available during U.S. business hours to support project milestones, reviews, and content feedback.
This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:
Google Cloud Platform (GCP): Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment); Cloud Functions, Cloud Run (for inference endpoints); BigQuery and Cloud Storage (for handling large image-text datasets); AI Platform Notebooks or Colab Pro
Google DeepMind Technologies: JAX and Haiku (for neural network modeling and research-grade experimentation); DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations); RLax or TF-Agents (for building and modifying RL pipelines)
AI/ML & Multimodal Tooling: Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting); TensorFlow 2.x and PyTorch (for model interoperability); Label Studio, Cloud Vision API (for annotation and image-text preprocessing)
Data Science & MLOps: DVC or MLflow (for dataset and model versioning); Apache Beam or Dataflow (for processing multimodal input streams); TensorBoard or Weights & Biases (for visualization)
Content Authoring & Collaboration: GitHub or Cloud Source Repositories; Google Docs, Sheets, Slides; screen recording tools like Loom or OBS Studio
Required skills and experience:
Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
Strong programming experience in Python and experience deploying machine learning pipelines.
Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.
Preferred:
Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
Prior contributions to open-source AI projects or technical community engagement.
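For context, a minimal illustrative sketch of calling a Gemini model with an image+text prompt; the SDK surface and model name shown here (google-generativeai, gemini-1.5-flash) are assumptions and may differ from the course's final tooling or the current API version:

```python
# Minimal Gemini multimodal call sketch (illustrative only).
# Assumes the google-generativeai package and a GOOGLE_API_KEY in the environment;
# the model name is an assumption and may change between SDK releases.
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

image = Image.open("chart.png")  # hypothetical local image
response = model.generate_content(
    ["Summarize what this chart shows in two sentences.", image]
)
print(response.text)
```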

Posted 1 month ago

Apply

15.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Job description
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time
Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional machine learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.
Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.
Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficient in REST API design using FastAPI and securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.
Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.
What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
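For context, a minimal illustrative sketch of the FastAPI model-serving pattern this posting mentions; the service name, feature schema, and placeholder model are assumptions for the example:

```python
# Minimal FastAPI model-serving sketch (illustrative only).
# Run with: uvicorn app:app --reload  (assumes this file is saved as app.py)
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder model trained on synthetic data; a real service would load a
# versioned artifact (e.g., from an MLflow registry) at startup instead.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = FastAPI(title="demo-scoring-service")

class Features(BaseModel):
    values: list[float]  # this toy example expects exactly 4 feature values

@app.post("/predict")
def predict(features: Features) -> dict:
    proba = model.predict_proba([features.values])[0, 1]
    return {"probability": round(float(proba), 4)}
```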

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies