
1696 MLflow Jobs - Page 36

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

6 - 8 Lacs

Kolkata

On-site

Kolkata, West Bengal, India
Job ID 768921

Join our Team

About this opportunity:
We are seeking a highly motivated and skilled Data Engineer to join our cross-functional team of Data Architects and Data Scientists. This role offers an exciting opportunity to work on large-scale data infrastructure and AI/ML pipelines, driving intelligent insights and scalable solutions across the organization.

What you will do:
- Build, optimize, and maintain robust ETL/ELT pipelines to support AI/ML and analytics workloads.
- Collaborate closely with Data Scientists to productionize ML models, ensuring scalable deployment and monitoring.
- Design and implement cloud-based data lake and data warehouse architectures.
- Ensure high data quality, governance, security, and observability across data platforms.
- Develop and manage real-time and batch data workflows using tools like Apache Spark, Airflow, and Kafka.
- Support CI/CD and MLOps workflows using tools like GitHub Actions, Docker, Kubernetes, and MLflow.

The skills you bring:
- Languages: Python, SQL, Bash
- Data Tools: Apache Spark, Airflow, Kafka, dbt, Pandas
- Cloud Platforms: AWS (preferred), Azure, or GCP
- Databases: Snowflake, Redshift, BigQuery, PostgreSQL, NoSQL (MongoDB/DynamoDB)
- DevOps/MLOps: Docker, Kubernetes, MLflow, CI/CD (e.g., GitHub Actions, Jenkins)
- Data Modeling: OLAP/OLTP, Star/Snowflake schema, Data Vault

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
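The ETL/ELT pipeline work described above can be sketched as composable steps. In a real deployment each function would become an orchestrator task (for example an Airflow PythonOperator); the function names, field names, and data below are purely illustrative:

```python
# Minimal ETL sketch: each stage is a pure function so it can be
# unit-tested in isolation and later wrapped in an orchestrator task.
# Field names and the in-memory "warehouse" are illustrative stand-ins.

def extract(rows):
    """Pull raw records (a list stands in for a source table)."""
    return [dict(r) for r in rows]

def transform(records):
    """Normalize field names and drop records that fail validation."""
    cleaned = []
    for r in records:
        if r.get("amount") is None:
            continue  # data-quality rule: skip incomplete rows
        cleaned.append({"id": r["id"], "amount_inr": float(r["amount"])})
    return cleaned

def load(records, sink):
    """Append validated records to the destination table."""
    sink.extend(records)
    return len(records)

warehouse = []
raw = [{"id": 1, "amount": "250.5"}, {"id": 2, "amount": None}]
loaded = load(transform(extract(raw)), warehouse)
```

Keeping each stage a pure function is what makes the pipeline easy to test and to migrate between orchestrators.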

Posted 1 month ago

Apply

3.0 years

3 - 6 Lacs

Jaipur

On-site

Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3-5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting

Computer Vision Development
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production

MLOps & Deployment
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
- LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference Serving: Triton Inference Server; FastAPI or Flask
- Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & Observability: Prometheus; Grafana
- Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
- Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3-5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You'll Solve
- LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
- Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has helped build solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in: https://www.aurigait.com/
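The rate-limiting concern called out for inference APIs is often handled with a token bucket in front of the model endpoint. A minimal sketch, assuming a per-second refill rate and a burst capacity (both parameters illustrative):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind an inference API might
    place in front of a model endpoint. Parameters are illustrative."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=2)
burst = [bucket.allow() for _ in range(3)]  # third call exceeds the burst
```

In a FastAPI or Flask service this check would run per request, returning HTTP 429 when `allow()` is False.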

Posted 1 month ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

ABOUT US:
From the start, the vision has been to create a state-of-the-art workplace, equipped with all the tools employees and clients need; that is what makes Bytes Technolab a growth hacker. This has helped the dev team adapt to existing and upcoming technologies and platforms to create top-notch software solutions for businesses, startups, and enterprises. Our core values are 100% integrity in communication, workflow, and methodology, and flexible collaboration. With a client-first approach, we offer flexible engagement models that help our clients in the best way possible. Bytes Technolab is confident that this approach helps us develop user-centric, applicable, advanced, secure, and scalable software solutions. Our team is fully committed to adding value at every stage of your journey with us, from initial engagement to delivery and beyond.

Role Description:
- 3+ years of professional experience in Machine Learning and Artificial Intelligence.
- Strong proficiency in Python programming and its libraries for ML and AI (NumPy, Pandas, scikit-learn, etc.).
- Hands-on experience with ML/AI frameworks like PyTorch, TensorFlow, Keras, FaceNet, OpenCV, and other relevant libraries.
- Proven ability to work with GPU acceleration for deep learning model development and optimization (using CUDA, cuDNN).
- Strong understanding of neural networks, computer vision, and other AI technologies.
- Solid experience working with Large Language Models (LLMs) such as GPT, BERT, and LLaMA, including fine-tuning, prompt engineering, and embedding-based retrieval (RAG).
- Working knowledge of agentic architectures, including designing and implementing LLM-powered agents with planning, memory, and tool-use capabilities.
- Familiarity with frameworks like LangChain, AutoGPT, BabyAGI, and custom agent orchestration pipelines.
- Solid problem-solving skills and the ability to translate business requirements into ML/AI/LLM solutions.
- Experience in deploying ML/AI models on cloud platforms (AWS SageMaker, Azure ML, Google AI Platform).
- Proficiency in building and managing ETL pipelines, data preprocessing, and feature engineering.
- Experience with MLOps tools and frameworks such as MLflow, Kubeflow, or TensorFlow Extended (TFX).
- Expertise in optimizing ML/AI models for performance and scalability across diverse hardware architectures.
- Experience with Natural Language Processing (NLP) and foundational knowledge of Reinforcement Learning.
- Familiarity with data versioning tools like DVC or Delta Lake.
- Skilled in containerization and orchestration tools such as Docker and Kubernetes for scalable deployments.
- Proficient in model evaluation, A/B testing, and establishing continuous training pipelines.
- Experience working in Agile/Scrum environments with cross-functional teams.
- Strong understanding of ethical AI principles, model fairness, and bias mitigation techniques.
- Familiarity with CI/CD pipelines for machine learning workflows.
- Ability to effectively communicate complex ML, AI, and LLM/agentic concepts to both technical and non-technical stakeholders.

We are hiring professionals with 3+ years of experience in IT Services. Kindly share your updated CV at freny.darji@bytestechnolab.com
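The embedding-based retrieval (RAG) skill mentioned above reduces, at its core, to nearest-neighbour search over embedding vectors. A toy sketch with hand-made 3-dimensional embeddings (real systems use learned embeddings and a vector store; all names and vectors here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=1):
    """Rank documents by cosine similarity of their embeddings; the
    top results would be stuffed into the LLM prompt as context."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:k]]

corpus = {"refund policy": [0.9, 0.1, 0.0],
          "shipping times": [0.1, 0.9, 0.1]}
best = top_k([0.8, 0.2, 0.0], corpus, k=1)
```

Swapping the dict for Pinecone, Weaviate, or Milvus changes the storage and indexing, not the ranking idea.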

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Position
We are looking for a dynamic and experienced Data Scientist to join our growing AI & Data Science team. In this role, you will lead projects and collaborate with cross-functional teams to build and deploy cutting-edge machine learning solutions that drive business value.

Technical and Professional Requirements
- Minimum 5+ years of experience in data science, with at least 2 years in a leadership or team management role.
- Proficiency in Natural Language Processing (NLP) techniques such as LDA, embeddings, and Retrieval-Augmented Generation (RAG).
- Experience in time series forecasting, statistical modeling, and predictive analytics.
- Hands-on experience with Databricks, the Azure ML stack, Python, and Django.
- Solid understanding of end-to-end data science workflows, from data engineering to model deployment.
- Ability to collaborate across diverse teams and communicate technical concepts clearly to stakeholders.

Job Responsibilities
- Build and deploy end-to-end ML pipelines, including data preprocessing, model training, and production deployment.
- Implement supervised, unsupervised, and deep learning models using frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Collaborate with data scientists and engineers to integrate ML models into production systems.
- Use MLOps tools like MLflow or Kubeflow for CI/CD and model lifecycle management.
- Monitor and optimize deployed models for performance and scalability.
- Stay updated on ML advancements and contribute to innovation within the team.

Educational Requirements: Minimum 60% in any two of the following: Secondary, Higher Secondary, Graduation.
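The time series forecasting requirement above can be illustrated with the simplest classical method, simple exponential smoothing, where the smoothed level blends each new observation with the running level. A sketch with made-up data and an illustrative smoothing factor:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing:
    level = alpha * observation + (1 - alpha) * previous level.
    The final level is used as the next-period forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Toy demand series; alpha controls how fast old observations decay.
forecast = exp_smooth_forecast([10.0, 12.0, 11.0, 13.0], alpha=0.5)
```

Production work would reach for statsmodels or Prophet, but the recursion is the same idea those libraries build on.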

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Title: Senior Machine Learning Engineer
Location: Gurgaon, IN
Type: Hybrid, In-Office

Job Description

Who We Are:
Fareportal is a travel technology company powering a next-generation travel concierge service. Utilizing its innovative technology and company-owned and operated global contact centres, Fareportal has built strong industry partnerships providing customers access to over 500 airlines, a million lodgings, and hundreds of car rental companies around the globe. With a portfolio of consumer travel brands including CheapOair and OneTravel, Fareportal enables consumers to book online, on mobile apps for iOS and Android, by phone, or via live chat. Fareportal provides its airline partners with access to a broad customer base that books high-yielding international travel and add-on ancillaries.

HIGHLIGHTS:
- Fareportal is the number 1 privately held online travel company in flight volume.
- Fareportal partners with over 500 airlines, 1 million lodgings, and hundreds of car rental companies worldwide.
- 2019 annual sales exceeded $5 billion.
- Fareportal sees over 150 million unique visitors annually to our desktop and mobile sites.
- Fareportal, with its global workforce of over 2,600 employees, is strategically positioned with 9 offices in 6 countries and headquartered in New York City.

What We Do:
Our Machine Learning team is at the forefront of developing state-of-the-art models and solutions that drive key business decisions, improve customer experiences, and streamline operations. We work on diverse projects, from personalized recommendations and predictive modelling to call analytics and business forecasting, using cutting-edge technology and data-driven insights. Our department is unique in its end-to-end approach to problem-solving, focusing on innovation, collaboration, and impactful results. As a Senior Machine Learning Engineer, you will play a crucial role in the complete lifecycle of our Machine Learning projects.

You will:
- Engage with stakeholders to gather and refine requirements.
- Perform exploratory data analysis (EDA) and manipulate data using SQL and NoSQL databases.
- Engineer and select features, ensuring optimal model performance.
- Train, test, and fine-tune models using advanced algorithms and methodologies.
- Deploy models into production environments using frameworks like Docker, Kubernetes, and REST APIs.
- Monitor and evaluate model performance, including A/B testing and continuous improvement.
- Collaborate with cross-functional teams, ensuring seamless integration and communication throughout the project lifecycle.

In this Role You Will Get To:
- Work within a dynamic and talented team of engineers and data scientists to build scalable Machine Learning solutions that impact thousands of users daily.
- Develop, deploy, and maintain Machine Learning models that address complex business challenges and enhance customer engagement.
- Participate in brainstorming sessions and technical design discussions, and collaborate with product managers, data engineers, and other stakeholders to ensure alignment and success.
- Explore and implement new technologies, including cloud-based (AWS, Azure) and on-prem systems, to optimize and enhance ML model performance.
- Conduct rigorous testing and monitoring to ensure the reliability and accuracy of models.

Who You Are

Must-Haves:
- Strong communication skills, capable of translating technical concepts into actionable business insights.
- Proven experience (2-4 years) as a Data Scientist, Machine Learning Engineer, or in a similar role.
- Proficient in SQL querying for data extraction and manipulation.
- Experience with high-volume structured data, including data manipulation, exploratory data analysis (EDA), and data modelling.
- Strong proficiency in Python programming and familiarity with REST API frameworks for seamless model integration.
- Demonstrated ability to build and deploy at least 3-5 ML models in production environments.
- Hands-on experience with regression, classification, recommendation algorithms, and neural networks.
- Expertise in relevant ML libraries such as CatBoost, XGBoost, LightGBM, Scikit-learn, TensorFlow, and PyTorch.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes) and virtual machines (VMs).
- Familiarity with cloud (AWS/Azure) and on-prem data systems.
- Bachelor's in Computer Science or a related field; a Master's degree is a plus.

Good-to-Haves:
- Experience working with distributed data processing frameworks like PySpark.
- Experience working with Natural Language Processing (NLP) and Large Language Models (LLMs).
- Familiarity with MLOps platforms such as MLflow and Kubeflow.
- Prior experience with A/B testing methodologies.
- Advanced skills in data visualization tools like Tableau or Power BI.

Disclaimer: This job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee. Fareportal reserves the right to change the job duties, responsibilities, expectations, or requirements posted here at any time at the Company's sole discretion, with or without notice.
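The A/B testing experience asked for above usually comes down to deciding whether two conversion rates differ significantly. A minimal sketch of a pooled two-proportion z-test, with made-up conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing two conversion rates using the pooled
    proportion for the standard error (standard A/B test setup)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 200/1000 converted; variant B: 260/1000 (illustrative).
z = two_proportion_z(200, 1000, 260, 1000)
significant = abs(z) > 1.96  # approx. 5% two-sided threshold
```

Libraries like statsmodels wrap this (with exact and continuity-corrected variants), but the pooled z-test is the baseline they implement.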

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Gurugram, Haryana

On-site

Senior Data Scientist (Deep Learning and Artificial Intelligence)

Job Description
We aim to bring about a new paradigm in medical image diagnostics, providing intelligent, holistic, ethical, explainable, and patient-centric care. We are looking for innovative problem solvers who love solving problems. We want people who can empathize with the consumer, understand business problems, and design and deliver intelligent products; people who are looking to extend artificial intelligence into unexplored areas. Your primary focus will be applying deep learning and artificial intelligence techniques to the domain of medical image analysis.

Responsibilities
- Select features and build and optimize classifier engines using deep learning techniques.
- Understand the problem and apply suitable image processing techniques.
- Use techniques from artificial intelligence/deep learning to solve supervised and unsupervised learning problems.
- Understand and design solutions for complex problems related to medical image analysis using deep learning, object detection, and image segmentation.
- Recommend and implement best practices around the application of statistical modeling.
- Create, train, test, and deploy various neural networks to solve complex problems.
- Develop and implement solutions to fit business problems, which may include applying algorithms from a standard statistical tool, deep learning, or custom algorithm development.
- Understand requirements and design solutions and architecture in accordance with them.
- Participate in code reviews, sprint planning, and Agile ceremonies to drive high-quality deliverables.
- Design and implement scalable data science architectures for training, inference, and deployment pipelines.
- Ensure code quality, readability, and maintainability by enforcing software engineering best practices within the data science team.
- Optimize models for production, including quantization, pruning, and latency reduction for real-time inference.
- Drive the adoption of versioning strategies for models, datasets, and experiments (e.g., using MLflow, DVC).
- Contribute to the architectural design of data platforms to support large-scale experimentation and production workloads.

Skills and Qualifications
- Strong software engineering skills in Python (or other languages used in data science), with emphasis on clean code, modularity, and testability.
- Excellent understanding of, and hands-on experience with, deep learning techniques such as ANNs, CNNs, RNNs, LSTMs, Transformers, VAEs, etc.
- Experience with the TensorFlow or PyTorch framework in building, training, testing, and deploying neural networks (required).
- Experience solving problems in the domain of computer vision.
- Knowledge of data, data augmentation, data curation, and synthetic data generation.
- Ability to understand the complete problem and design solutions that best fit all constraints.
- Knowledge of common data science and deep learning libraries and toolkits such as Keras, Pandas, Scikit-learn, NumPy, SciPy, OpenCV, etc.
- Good applied statistical skills, such as distributions, statistical testing, regression, etc.
- Exposure to Agile/Scrum methodologies and collaborative development practices.
- Experience with the development of RESTful APIs; knowledge of libraries like FastAPI and the ability to apply them to deep learning architectures is essential.
- Excellent analytical and problem-solving skills, with a good attitude and keenness to adapt to evolving technologies.
- Experience with medical image analysis is an advantage.
- Experience designing and building ML architecture components (e.g., feature stores, model registries, inference servers).
- Solid understanding of software design patterns, microservices, and cloud-native architectures.
- Expertise in model optimization techniques (e.g., ONNX conversion, TensorRT, model distillation).

Education: BE/B.Tech; MS/M.Tech will be a bonus
Experience: 3+ Years
Job Type: Full-time
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or plan to relocate before starting work (Required)
Application Question(s):
- Do you have experience leading teams in AI development?
- Do you have experience creating software architecture for production environments in AI applications?
Experience:
- Deep learning: 3 years (Required)
- Computer vision: 3 years (Required)
- PyTorch: 3 years (Required)
Work Location: In person
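The quantization work mentioned above (shrinking models for real-time inference) can be illustrated with symmetric post-training quantization of a weight tensor: floats are mapped to int8 through a per-tensor scale. The weights below are made up; real toolchains (TensorRT, ONNX Runtime) do this per layer with calibration:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization sketch: map floats to the
    int8 range [-127, 127] using a single per-tensor scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inspection or fallback."""
    return [x * scale for x in q]

q, scale = quantize_int8([0.5, -1.27, 0.0])
restored = dequantize(q, scale)
```

The quantization error is bounded by half the scale per weight, which is why the technique works well when weight distributions are not dominated by outliers.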

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role:
We are seeking an experienced MLOps Engineer to lead the deployment, scaling, and performance optimization of open-source Generative AI models on cloud infrastructure. You'll work at the intersection of machine learning, DevOps, and cloud engineering to help productize and operationalize large-scale LLM and diffusion models.

Key Responsibilities:
- Design and implement scalable deployment pipelines for open-source Gen AI models (LLMs, diffusion models, etc.).
- Fine-tune and optimize models using techniques like LoRA, quantization, and distillation.
- Manage inference workloads, latency optimization, and GPU utilization.
- Build CI/CD pipelines for model training, validation, and deployment.
- Integrate observability, logging, and alerting for model and infrastructure monitoring.
- Automate resource provisioning using Terraform, Helm, or similar tools on GCP/AWS/Azure.
- Ensure model versioning, reproducibility, and rollback using tools like MLflow, DVC, or Weights & Biases.
- Collaborate with data scientists, backend engineers, and DevOps teams to ensure smooth production rollouts.

Required Skills & Qualifications:
- 5+ years of total experience in software engineering or cloud infrastructure.
- 3+ years in MLOps with direct experience in deploying large Gen AI models.
- Hands-on experience with open-source models (e.g., LLaMA, Mistral, Stable Diffusion, Falcon).
- Strong knowledge of Docker, Kubernetes, and cloud compute orchestration.
- Proficiency in Python and familiarity with model-serving frameworks (e.g., FastAPI, Triton Inference Server, Hugging Face Accelerate, vLLM).
- Experience with cloud platforms (GCP preferred; AWS or Azure acceptable).
- Familiarity with distributed training, checkpointing, and model parallelism.

Good to Have:
- Experience with low-latency inference systems and token-streaming architectures.
- Familiarity with cost optimization and scaling strategies for GPU-based workloads.
- Exposure to LLMOps tools (LangChain, BentoML, Ray Serve, etc.).

Why Join Us:
- Opportunity to work on cutting-edge Gen AI applications across industries.
- Collaborative team with deep expertise in AI, cloud, and enterprise software.
- Flexible work environment with a focus on innovation and impact.
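The model versioning and rollback responsibility above is what registries like MLflow's Model Registry provide. The structure can be sketched in plain Python; the version names and storage paths below are purely illustrative:

```python
class ModelRegistry:
    """Minimal version-registry sketch: register model artifacts,
    promote one to production, and roll back to the previously
    promoted version. Real tools (MLflow, W&B) add stages, metadata,
    and access control on top of this shape."""

    def __init__(self):
        self.versions = {}   # version -> artifact reference
        self.history = []    # production promotions, newest last

    def register(self, version, artifact):
        self.versions[version] = artifact

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(version)
        self.history.append(version)

    def rollback(self):
        """Drop the current production version and restore the previous one."""
        if len(self.history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self.history.pop()
        return self.history[-1]

    @property
    def production(self):
        return self.history[-1] if self.history else None

reg = ModelRegistry()
reg.register("v1", "s3://models/llm-v1")  # illustrative artifact paths
reg.register("v2", "s3://models/llm-v2")
reg.promote("v1")
reg.promote("v2")
previous = reg.rollback()  # v2 misbehaves in production; restore v1
```

Keeping promotion history separate from the artifact store is the key design point: rollback is then a metadata operation, not a redeployment of bytes.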

Posted 1 month ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Kidde Global Solutions:
Kidde Global Solutions is a world leader in fire & life safety solutions, from complex commercial facilities to homes. Through iconic, industry-defining brands including Kidde, Kidde Commercial, Edwards, GST, Badger, Gloria, and Aritech, we provide residential and commercial customers with advanced solutions and services to protect people and property in a wide range of applications, all around the globe.

Role: AI and Automation Architect
Location: Hyderabad
Employment Type: Full-time
Experience: 12-18 years

Position Overview:
We are seeking an innovative and experienced AI and Automation Architect to lead the design and development of intelligent AI and automation solutions by integrating RPA (Robotic Process Automation) bots with AI-driven technologies. The Architect will work closely with cross-functional teams to identify opportunities for automation, design scalable solutions, and drive business efficiency through cutting-edge AI-powered bots.

Key Responsibilities:
- Automation strategy and design
- AI and bot integration
- Solution development and deployment using MLOps on the AWS cloud platform
- Stakeholder collaboration
- Governance and compliance
- Innovation and continuous improvement

Skills and Competencies:
- Develop and implement enterprise-wide intelligent automation strategies by integrating RPA with AI capabilities (e.g., natural language processing, machine learning).
- Analyze business processes to identify automation opportunities and recommend solutions.
- Define and establish automation frameworks and best practices.
- Architect and deploy AI-enabled bots that integrate with enterprise platforms like ServiceNow, Workday, etc.
- Collaborate with AI engineers and data scientists to leverage machine learning models in automation workflows.
- Oversee the development and deployment of bots, ensuring they adhere to quality and security standards.
- Partner with business teams, IT departments, and process owners to understand automation needs and deliver impactful solutions.
- Establish MLOps/AIOps frameworks to ensure compliance with organizational and regulatory standards.
- Ensure automated solutions are secure, auditable, and aligned with data privacy laws (e.g., GDPR, CCPA).
- Stay updated on emerging technologies in AI and automation and assess their applicability to the organization.
- Drive continuous improvement in automation processes, leveraging feedback and analytics for optimization.

Education: Bachelor's or Master's degree in Computer Science or Information Technology, with a specialization in Data Science, ML, or AI.
Experience: 12+ years of experience in automation, including 3+ years in an architect or leadership role.

Platform Development and Evangelism:
- Build scalable AI platforms that are customer-facing.
- Evangelize the platform with customers and internal stakeholders.
- Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
- Design ML pipelines for experiment management, model management, feature management, and model retraining.
- Implement A/B testing of models.
- Design APIs for model inferencing at scale.
- Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
- Serve as an SME in LLM serving paradigms.
- Possess deep knowledge of GPU architectures.
- Expertise in distributed training and serving of large language models.
- Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
- Demonstrated expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results.
- Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
- Extensive knowledge of different LLM models, with insight into the applicability of each model to specific use cases.
- Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.

DevOps and LLMOps Proficiency:
- Proven expertise in DevOps and LLMOps practices.
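The "A/B testing of models" item above is often implemented as deterministic traffic splitting: hash each user id into a bucket and route a fixed slice to the candidate model, so a given user always sees the same variant. A sketch (function name and percentages are illustrative):

```python
import hashlib

def route_to_canary(user_id, canary_pct):
    """Deterministic traffic split: hash the user id into [0, 100)
    and send that slice to the candidate model. Stable per user,
    so experiment cohorts do not flap between requests."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_pct

# At 100% the candidate serves everyone; at 0% it serves no one.
full_rollout = route_to_canary("user-42", 100)
disabled = route_to_canary("user-42", 0)

# Over many users the observed share approaches the configured split.
share = sum(route_to_canary(f"user-{i}", 10) for i in range(1000)) / 1000
```

Hash-based routing also gives free rollback: dropping `canary_pct` to 0 instantly routes all traffic back to the incumbent model.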

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hi connections!! We are hiring for an ML Engineer.

Senior Machine Learning Engineer | Platform Team | 5+ years | Full-Time | Bangalore

Role Overview
We are seeking a Senior Machine Learning Engineer with strong expertise in Python, EDA, and ML model development, and hands-on experience in LLMOps using frameworks like LangChain and LangGraph. You will be instrumental in building and operationalizing scalable ML solutions on our AI-driven platform.

Location: Bangalore (preferred) | Hybrid options may be considered.
Experience: 5+ years in Machine Learning, Data Science, or related fields.

Key Responsibilities
- Design and develop ML models using supervised and unsupervised learning techniques for real-world business applications.
- Build, deploy, and manage end-to-end ML pipelines.
- Create robust data processing pipelines and APIs (preferably using FastAPI).
- Manage deployment and monitoring of ML/DL models, ensuring model reproducibility, concept/data drift monitoring, and versioning of code, models, and data.
- Handle data wrangling and feature engineering for both text and image datasets using libraries like Pandas, NumPy, OpenCV, PIL, spaCy, and Hugging Face Transformers.
- Utilize ML frameworks such as Scikit-learn, PyTorch, TensorFlow, or Keras.
- Work with LLMOps frameworks like LangChain, LangGraph, or similar to operationalize large language models (LLMs).
- Implement prompt engineering and fine-tuning techniques for LLMs, including using APIs such as OpenAI, Mistral, and Nova.
- Collaborate closely with DevOps to integrate ML workflows with CI/CD pipelines, Docker, Kubernetes, and cloud platforms like AWS, GCP, or Azure.
- Contribute to scaling and maintaining ML systems leveraging MLflow, GitHub Actions, and monitoring stacks like ELK.

Preferred Skills
- Deep understanding of cloud-native ML practices and MLOps tools.
- Strong programming and statistical analysis skills.
- Hands-on experience with deep learning models and computer vision tasks.
- Strong verbal and written communication skills.
- Highly developed attention to detail and structured problem-solving abilities.
- Ability to thrive in collaborative team environments.
- Strong presentation skills to articulate technical solutions to non-technical stakeholders.

Good to Have
- Exposure to other LLM ecosystems (e.g., Hugging Face, open-source LLMs).
- Contributions to open-source ML/LLM frameworks.
- Knowledge of advanced topics like Reinforcement Learning, GANs, or Diffusion Models.

If you are interested, kindly revert to supriya.kataram@codersbrain.com

#MachineLearning #MLJobs #LLMOps #LangChain #LangGraph #DataScience #AIJobs #SeniorMLEngineer #PythonDevelopers #FastAPI #MLOps #DeepLearning #NLP #OpenAI #HuggingFace #BangaloreJobs #NowHiring #TechCareers #JoinOurTeam
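The concept/data drift monitoring called out above can start from something as simple as comparing a live feature window against a reference window. A minimal sketch using a standardized mean shift (the data and the alert threshold are illustrative; production systems use richer statistics like PSI or KS tests):

```python
import statistics

def drift_score(reference, live):
    """Standardized mean shift between a reference window and a live
    window of one feature; a crude but common first drift signal."""
    mu_ref, mu_live = statistics.mean(reference), statistics.mean(live)
    sd = statistics.pstdev(reference) or 1.0  # guard against zero spread
    return abs(mu_live - mu_ref) / sd

ref = [0.0, 1.0, 0.0, 1.0]                      # training-time feature values
stable = drift_score(ref, [1.0, 0.0, 1.0, 0.0])  # same distribution
shifted = drift_score(ref, [5.0, 6.0, 5.0, 6.0]) # clearly drifted
alert = shifted > 3.0  # illustrative threshold for a retraining trigger
```

In a pipeline, crossing the threshold would emit a metric (e.g., to an ELK or Prometheus stack) and optionally trigger a retraining job.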

Posted 1 month ago

Apply


0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Location: Bangalore - Karnataka, India - EOIZ Industrial Area
Job Family: Engineering
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T3(B)
Job ID: R-44637-2025

Description & Requirements

Introduction: A Career at HARMAN Digital Transformation Solutions (DTS)
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions, combining the physical and digital to make technology a more dynamic force that serves humanity's needs.

Java Microservices: Java Developer with experience in microservices deployment, automation, and system lifecycle management (security and infrastructure management).

Required Skills:
Java, Hibernate
SAML/OpenSAML
REST APIs
Docker
PostgreSQL (PSQL)
Familiarity with GitHub workflows

Good to Have:
Go (for automation and bootstrapping)
RAFT consensus algorithm
HashiCorp Vault

Key Responsibilities:
Service Configuration & Automation: Configure and bootstrap services using the Go CLI. Develop and maintain Go workflow templates for automating Java-based microservices.
Deployment & Upgrade Management: Manage service upgrade workflows and apply Docker-based patches. Implement and manage OS-level patches as part of the system lifecycle. Enable controlled deployments and rollbacks to minimize downtime.
Network & Security Configuration: Configure and update FQDN, proxy settings, and SSL/TLS certificates. Set up and manage syslog servers for logging and monitoring. Manage appliance users, including root and SSH users, ensuring security compliance.
Scalability & Performance Optimization: Implement scale-up and scale-down mechanisms for resource optimization. Ensure high availability and performance through efficient resource management.
Lifecycle & Workflow Automation: Develop automated workflows to support service deployment, patching, and rollback.
Ensure end-to-end lifecycle management of services and infrastructure.

What You Will Do:
Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement.
Develop and implement models using both classical machine learning techniques and modern deep learning approaches.
Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection.
Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements.
Optimize models for performance and latency, including the implementation of caching strategies where appropriate.
Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions.

What You Need to Be Successful:
Experience applying statistical techniques to derive important insights and trends.
Proven experience in machine learning model development and analysis using classical and neural-network-based approaches.
Strong understanding of LLM architecture, usage, and fine-tuning techniques.
Solid understanding of statistics, data preprocessing, and feature engineering.
Proficiency in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.).
Strong debugging and optimization skills for both training and inference pipelines.
Familiarity with data formats and processing tools (Pandas, Spark, Dask).
Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.

Bonus Points if You Have:
Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar).
Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics).
Familiarity with cloud platforms (SageMaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection.
Exposure to distributed training and model parallelism techniques.
Prior experience in A/B testing ML models in production.

What Makes You Eligible:
Bachelor's or master's degree in Computer Science, Artificial Intelligence, or a related field.
5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting.

What We Offer:
Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location
Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.)
Extensive training opportunities through our own HARMAN University
Competitive wellness benefits
Tuition reimbursement
"Be Brilliant" employee recognition and rewards program
An inclusive and diverse work environment that fosters and encourages professional and personal development

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you, all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences.
Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!

HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status.

Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.

HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
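One of the ML responsibilities above calls for optimizing inference latency with caching strategies. A minimal sketch using stdlib memoization; `predict` here is a hypothetical stand-in for an expensive model call, not a real inference API:

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts how many times the underlying "model" actually runs

@lru_cache(maxsize=1024)
def predict(features: tuple) -> float:
    """Hypothetical expensive model call; results cached per input tuple."""
    CALLS["n"] += 1
    return sum(features) / len(features)  # stand-in for real inference

predict((1.0, 2.0, 3.0))
predict((1.0, 2.0, 3.0))  # repeated input is served from the cache
```

Production systems typically use an external cache (e.g., Redis) keyed on a hash of the request, but the trade-off is the same: repeated inputs skip the model entirely.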

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Overview
We're looking for a Python-based AI/ML Developer who brings solid hands-on experience in building machine learning models and deploying them into scalable, production-ready APIs using FastAPI or Django. The ideal candidate is both analytical and implementation-savvy, capable of transforming models into live services and integrating them with real-world systems.

Key Responsibilities:
Design, train, and evaluate machine learning models (classification, regression, clustering, etc.)
Build and deploy scalable REST APIs for model serving using FastAPI or Django
Collaborate with data scientists, backend developers, and DevOps to integrate models into production systems
Develop clean, modular, and optimized Python code using best practices
Perform data preprocessing, feature engineering, and data visualization using Pandas, NumPy, Matplotlib, and Seaborn
Implement model serialization techniques (Pickle, Joblib, ONNX) and deploy models using containers (Docker)
Manage API security with JWT and OAuth mechanisms
Participate in Agile development with code reviews, Git workflows, and CI/CD pipelines

Must-Have Skills:
Python & Development: Proficient in Python 3.x, OOP, and clean code principles; experience with Git, Docker, debugging, and unit testing
AI/ML: Good grasp of supervised/unsupervised learning, model evaluation, and data wrangling; hands-on with Scikit-learn, XGBoost, LightGBM
Web Frameworks: FastAPI (API routes, async programming, Pydantic, JWT); Django (REST Framework, ORM, Admin panel, Middleware)
DevOps & Cloud: Experience with containerized deployment using Docker; exposure to cloud platforms (AWS, Azure, or GCP); CI/CD with GitHub Actions, Jenkins, or GitLab CI
Databases: SQL (PostgreSQL, MySQL); NoSQL (MongoDB, Redis); ORM (Django ORM)

Good-to-Have Skills:
Model tracking/versioning tools (MLflow, DVC)
Knowledge of LLMs, transformers, vector DBs (Pinecone, Faiss)
Airflow, Prefect, or other workflow automation tools
Basic frontend skills (HTML, JavaScript, React)

Requirements:
Education: B.E./B.Tech or M.E./M.Tech in Computer Science, Data Science, or related fields
Experience: 3-6 years of industry experience in ML development and backend API integration
Strong communication skills and ability to work with cross-functional teams
(ref:hirist.tech)
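The serialization bullet above mentions Pickle, Joblib, and ONNX. As a minimal sketch of the pickle round-trip an API worker performs at startup, using a toy parameter dict rather than a real trained estimator:

```python
import pickle

# Toy "model": parameters in a plain dict (a real service would pickle a
# fitted estimator or export it to ONNX for cross-runtime serving).
model = {"threshold": 0.5, "weights": [0.2, 0.8]}

blob = pickle.dumps(model)      # bytes written at training time
restored = pickle.loads(blob)   # what an API worker loads at startup

def predict(m, xs):
    """Score inputs against the restored parameters."""
    return [1 if x >= m["threshold"] else 0 for x in xs]

preds = predict(restored, [0.2, 0.9])
```

Note that pickle is only safe for artifacts you produced yourself; never unpickle untrusted uploads in a public API.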

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

About KPMG in India
KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

>> Role and Responsibilities:
Support model validation for various supervised and unsupervised AI/ML models pertaining to financial crime compliance.
Validate data quality, feature engineering and preprocessing steps.
Conduct robustness, sensitivity and stability testing.
Evaluate model explainability using tools such as SHAP and LIME.
Review model documentation, development code, and model risk assessments.
Assist in developing and testing statistical and machine learning models for risk, fraud and business analytics.

>> Key Skills and Tools:
Programming: Python (must-have), R, SQL, ML libraries
Tools: Jupyter, Git, MLflow, Excel, Tableau/Power BI (for visualization), Dataiku
Good technical writing and stakeholder communication
Understanding of model risk governance, MRM policies, and ethical AI principles

Equal employment opportunity information
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

>> Qualification:
Bachelor's/Master's in Computer Science, Data Science, Statistics, Applied Math, or a related quantitative discipline.
1-3 years of experience in AI/ML model validation, development, or risk analytics.
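The stability testing mentioned in the responsibilities above is commonly done with the Population Stability Index (PSI), comparing a baseline score distribution against a current one. A sketch under common conventions (the 1e-6 floor and the 0.1/0.25 alert bands are practitioner rules of thumb, not regulatory values):

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a current sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin proportions to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # scores at validation time
shifted = rng.normal(0.5, 1.0, 5000)    # scores after a population shift
```

Typical reading: PSI below 0.1 suggests a stable population, 0.1 to 0.25 a moderate shift worth investigating, and above 0.25 a significant shift.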

Posted 1 month ago

Apply

5.0 - 7.0 years

30 - 32 Lacs

Hyderabad

Work from Office

Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Hyderabad)
Placement Type: Full Time Permanent position (payroll and compliance to be managed by InfraCloud Technologies Pvt Ltd)
(*Note: This is a requirement for one of Uplers' clients - IF)

What do you need for this opportunity?
Must-have skills required: Banking, Fintech, Product Engineering background, Python, FastAPI, Django, MLflow, Feast, Kubeflow, NumPy, Pandas, Big Data

IF is looking for: Product Engineer
Location: Narsingi, Hyderabad; 5 days of work from the office. The client is a payment gateway processing company.
Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We share the profile with the client, and they take one or two interviews.

About the Project
We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus.

Role and Responsibilities:
Design, build, and optimize components of the ML engineering pipeline for scalability and performance.
Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models.
Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow.
Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure.
Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations.
Continuously improve model deployment pipelines for reliability, monitoring, and automation.

Requirements:
5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization.
Solid knowledge of ML engineering principles and deployment best practices.
Experience with Feast, Kubeflow, MLflow, or similar tools.
Deep understanding of NumPy, Pandas, and data processing workflows.
Exposure to big data environments and a good grasp of data science model workflows.
Strong analytical and problem-solving skills with attention to detail.
Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration.
Excellent communication and collaboration skills.

Nice to Have:
Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure).
Familiarity with containerization technologies like Docker and orchestration using Kubernetes.
Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks.
Understanding of data protection, governance, or security aspects in ML pipelines.

Experience Required: 5+ years
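The performance-tuning emphasis above usually comes down to replacing per-element Python loops with vectorized NumPy operations. A tiny sketch showing the two equivalent forms (the speedup comes from pushing the loop into a single compiled ufunc call):

```python
import numpy as np

def scale_loop(values, factor):
    """Pure-Python baseline: one interpreted multiply per element."""
    return [v * factor for v in values]

def scale_vectorized(values, factor):
    """Same computation as a single NumPy ufunc call over the whole array."""
    return np.asarray(values, dtype=float) * factor

data = list(range(1_000))
# Both paths produce identical results; only the execution model differs.
assert scale_loop(data, 2.0) == scale_vectorized(data, 2.0).tolist()
```

On large arrays the vectorized form is typically one to two orders of magnitude faster, which is why profiling Pandas/NumPy pipelines usually starts by hunting down hidden Python-level loops.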

Posted 1 month ago

Apply

1.0 - 2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space with over 16,700 stores. It has footprints across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Associate ML Ops Analyst will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role
The incumbent will be responsible for implementing Azure data services to deliver scalable and sustainable solutions, and for building model deployment and monitoring pipelines to meet business needs.

Roles & Responsibilities:

Development and Integration:
Collaborate with data scientists to deploy ML models into production environments
Implement and maintain CI/CD pipelines for machine learning workflows
Use version control tools (e.g., Git) and ML lifecycle management tools (e.g., MLflow) for model tracking, versioning, and management
Design, build, and optimize application containerization and orchestration with Docker and Kubernetes on cloud platforms like AWS or Azure

Automation & Monitoring:
Automate pipelines using Apache Spark and ETL tools such as Informatica PowerCenter, Informatica BDM/DEI, StreamSets, and Apache Airflow
Implement model monitoring and alerting systems to track model performance, accuracy, and data drift in production environments

Collaboration and Communication:
Work closely with data scientists to ensure that models are production-ready
Collaborate with Data Engineering and Tech teams to ensure infrastructure is optimized for scaling ML applications

Optimization and Scaling:
Optimize ML pipelines for performance and cost-effectiveness

Operational Excellence:
Help the Data teams leverage best practices to implement enterprise-level solutions
Follow industry coding standards and the programming life cycle to ensure standard practices across the project
Help define common coding standards and model-monitoring performance best practices
Continuously evaluate the latest packages and frameworks in the ML ecosystem
Build automated model deployment data engineering pipelines from plain Python/PySpark code

Stakeholder Engagement:
Collaborate with Data Scientists, Data Engineers, cloud platform and application engineers to create and implement cloud policies and governance for the ML model life cycle

Job Requirements

Education & Relevant Experience:
Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)
1-2 years of relevant working experience in MLOps

Behavioural Skills:
Delivery excellence
Business disposition
Social intelligence
Innovation and agility

Knowledge:
Core computer science concepts such as common data structures and algorithms, OOPs
Programming languages (R, Python, PySpark, etc.)
Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and Data Engineering tools
Exposure to ETL tools and version control
Experience in building and maintaining CI/CD pipelines for ML models
Understanding of machine learning, information retrieval, or recommendation systems
Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, GitLab)
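The data-drift monitoring responsibility above is often implemented by comparing a feature's live distribution against its training distribution. One simple approach is the two-sample Kolmogorov-Smirnov statistic, sketched here from scratch in NumPy (the alert threshold would be tuned per feature in practice):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between the ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 2000)   # distribution seen at training
live_drifted = rng.normal(1.0, 1.0, 2000)    # same feature after drift
```

A monitoring job would compute this per feature on a schedule and raise an alert when the statistic crosses the configured threshold, prompting retraining or investigation.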

Posted 1 month ago

Apply

2.0 - 7.0 years

15 - 25 Lacs

Pune

Work from Office

Experience: 2+ years
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Pune)
Placement Type: Full Time Permanent position
Must-have skills required: Airflow, LLMs, NLP, Statistical Modeling, Predictive Analysis, Forecasting, Python, SQL, MLflow, Pandas, scikit-learn, XGBoost

As an ML / Data Science Engineer at Anervea, you'll work on designing, training, deploying, and maintaining machine learning models across multiple products. You'll build models that predict clinical trial outcomes, extract insights from structured and unstructured healthcare data, and support real-time scoring for sales or market access use cases. You'll collaborate closely with AI engineers, backend developers, and product owners to translate data into product features that are explainable, reliable, and impactful.

Key Responsibilities:
Develop and optimize predictive models using algorithms such as XGBoost, Random Forest, Logistic Regression, and ensemble methods
Engineer features from real-world healthcare data (clinical trials, treatment adoption, medical events, digital behavior)
Analyze datasets from sources like ClinicalTrials.gov, PubMed, Komodo, Apollo.io, and internal survey pipelines
Build end-to-end ML pipelines for inference and batch scoring
Collaborate with AI engineers to integrate LLM-generated features with traditional models
Ensure explainability and robustness of models using SHAP, LIME, or custom logic
Validate models against real-world outcomes and client feedback
Prepare clean, structured datasets using SQL and Pandas
Communicate insights clearly to product, business, and domain teams
Document all processes, assumptions, and model outputs thoroughly

Technical Skills Required:
Strong programming skills in Python (NumPy, Pandas, scikit-learn, XGBoost, LightGBM)
Experience with statistical modeling and classification algorithms
Solid understanding of feature engineering, model evaluation, and validation techniques
Exposure to real-world healthcare, trial, or patient data (strong bonus)
Comfortable working with unstructured data and data cleaning techniques
Knowledge of SQL and NoSQL databases
Familiarity with ML lifecycle tools (MLflow, Airflow, or similar)
Bonus: experience working alongside LLMs or incorporating generative features into ML
Bonus: knowledge of NLP preprocessing, embeddings, or vector similarity methods

Personal Attributes:
Strong analytical and problem-solving mindset
Ability to convert abstract questions into measurable models
Attention to detail and high standards for model quality
Willingness to learn life sciences concepts relevant to each use case
Clear communicator who can simplify complexity for product and business teams
Independent learner who actively follows new trends in ML and data science
Reliable, accountable, and driven by outcomes, not just code

Bonus Qualities:
Experience building models for healthcare, pharma, or biotech
Published work or open-source contributions in data science
Strong business intuition on how to turn models into product decisions
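The explainability responsibility above allows "SHAP, LIME, or custom logic"; one common piece of custom logic is permutation importance, which measures how much accuracy drops when a feature column is shuffled. A self-contained sketch with a toy rule-based "model" standing in for a trained classifier:

```python
import numpy as np

def permutation_importance(predict, X, y, col, rng):
    """Accuracy drop when one feature column is shuffled."""
    base = float(np.mean(predict(X) == y))
    Xp = X.copy()
    rng.shuffle(Xp[:, col])          # destroy the information in this column
    return base - float(np.mean(predict(Xp) == y))

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)                  # label driven only by feature 0
predict = lambda M: (M[:, 0] > 0).astype(int)  # toy "model" mirroring that rule

imp_used = permutation_importance(predict, X, y, col=0, rng=rng)
imp_unused = permutation_importance(predict, X, y, col=1, rng=rng)
```

Shuffling the feature the model actually uses costs a large chunk of accuracy, while shuffling the unused feature costs nothing; ranking features by this drop gives a model-agnostic importance score.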

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills required: Machine Learning, ML architectures and lifecycle, Airflow, Kubeflow, MLflow, Spark, Kubernetes, Docker, Python, SQL, machine learning platforms, BigQuery, GCS, Dataproc, AI Platform, Search Ranking, Deep Learning, Deep Learning Frameworks, PyTorch, TensorFlow

About the job
Candidates for this position are preferred to be based in Bangalore, India and will be expected to comply with their team's hybrid work schedule requirements.

Who We Are
Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art machine learning techniques.

What you'll do:
Provide technical leadership in the development of an automated and intelligent advertising system by advancing the state-of-the-art in machine learning techniques to support recommendations for Ads campaigns and other optimizations.
Design, build, deploy and refine extensible, reusable, large-scale, and real-world platforms that optimize our ads experience.
Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped machine learning solutions.
Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
Research new developments in advertising, sorting, and recommendation systems, as well as open-source packages, and incorporate them into our internal packages and systems.
Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

We Are a Match Because You Have:
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
6-9 years of industry experience in advanced machine learning and statistical modeling, including hands-on designing and building production models at scale.
Strong theoretical understanding of statistical models such as regression and clustering, and machine learning algorithms such as decision trees, neural networks, etc.
Familiarity with machine learning model development frameworks, machine learning orchestration and pipelines, with experience in Airflow, Kubeflow, or MLflow, as well as Spark, Kubernetes, Docker, Python, and SQL.
Proficiency in Python or one other high-level programming language.
Solid hands-on expertise deploying machine learning solutions into production.
Strong written and verbal communication skills, ability to synthesize conclusions for non-experts, and an overall bias towards simplicity.

Nice to have:
Familiarity with machine learning platforms offered by Google Cloud and how to implement them at large scale (e.g., BigQuery, GCS, Dataproc, AI Notebooks).
Experience in computational advertising, bidding algorithms, or search ranking.
Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
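The computational-advertising experience mentioned above often involves explore/exploit trade-offs when deciding which ad to serve. As an illustrative sketch (not Wayfair's actual system; the click-through rates below are hypothetical), an epsilon-greedy bandit learns per-ad CTR estimates from simulated clicks and concentrates traffic on the best ad:

```python
import random

def epsilon_greedy(estimates, epsilon, rng):
    """Pick an ad: explore uniformly with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

rng = random.Random(0)
true_ctr = [0.02, 0.10, 0.04]   # hypothetical per-ad click-through rates
estimates = [0.0] * 3
counts = [0] * 3

for _ in range(20_000):
    arm = epsilon_greedy(estimates, epsilon=0.1, rng=rng)
    click = 1.0 if rng.random() < true_ctr[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (click - estimates[arm]) / counts[arm]  # running mean
```

After enough impressions the best ad (index 1) receives the bulk of the traffic, while the 10% exploration budget keeps the other estimates from going stale.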

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About PhonePe Group:
PhonePe is India's leading digital payments company with 50 crore (500 million) registered users and 3.7 crore (37 million) merchants, covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store, which is India's first localized app store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.

Culture
At PhonePe, we take extra care to make sure you give your best at work, every day! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us!

Job Description
We are seeking a motivated and skilled Data Scientist with 3 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in machine learning, with a focus on implementing algorithms at scale. Additionally, knowledge of computer vision and natural language processing is ideal.

Key Responsibilities:
Develop and implement machine learning models, including offline batch models as well as real-time online and edge-compute models
Analyze complex datasets and extract meaningful insights to drive business decisions
Collaborate with cross-functional teams to identify and solve business problems using data-driven approaches
Communicate findings and recommendations to stakeholders effectively

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
3+ years of experience in a Data Scientist role
Strong proficiency in Python and SQL
Solid understanding of machine learning algorithms and statistical modeling techniques
Knowledge of Natural Language Processing (NLP) and Computer Vision (CV) concepts and algorithms
Hands-on experience implementing and deploying machine learning algorithms
Experience with data visualization tools and techniques
Strong analytical and problem-solving skills
Excellent communication skills, both written and verbal

Preferred Qualifications:
Experience with PySpark and other big data processing frameworks
Knowledge of deep learning frameworks (e.g., TensorFlow, PyTorch)

Technical Skills:
Programming Languages: Python (required), SQL (required), Java (basic knowledge preferred)
Machine Learning: Strong foundation in traditional ML algorithms, and a working knowledge of NLP and Computer Vision
Big Data: Deep knowledge of PySpark
Data Storage and Retrieval: Familiarity with databases and MLflow preferred
Mathematics: Strong background in statistics, linear algebra, and probability theory
Version Control: Git

Soft Skills:
Excellent communication skills to facilitate interactions with stakeholders
Ability to explain complex technical concepts to non-technical audiences
Strong problem-solving and analytical thinking
Self-motivated and able to work independently as well as in a team environment
Curiosity and eagerness to learn new technologies and methodologies

We're looking for a motivated individual who is passionate about data science and eager to take on challenging tasks. If you thrive in a fast-paced environment and are excited about leveraging cutting-edge technologies in machine learning to solve real-world problems, we encourage you to apply!

PhonePe Full-Time Employee Benefits (not applicable for intern or contract roles):
Insurance Benefits: Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
Wellness Program: Employee Assistance Program, Onsite Medical Center, Emergency Support System
Parental Support: Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
Mobility Benefits: Relocation benefits, Transfer Support Policy, Travel Policy
Retirement Benefits: Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
Other Benefits: Higher Education Assistance, Car Lease, Salary Advance Policy

Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe on our blog.

Life at PhonePe
PhonePe in the news
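The NLP concepts listed above build on simple text-similarity primitives. As a minimal sketch, a bag-of-words cosine similarity in pure Python (the example strings are hypothetical support queries; production systems would use learned embeddings instead of raw word counts):

```python
from collections import Counter
from math import sqrt

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

similar = cosine_sim("upi payment failed", "payment failed on upi")
unrelated = cosine_sim("upi payment failed", "order delivered today")
```

Texts sharing most of their vocabulary score near 1.0, while texts with no overlap score exactly 0.0; this is the same geometry that TF-IDF and embedding-based retrieval refine.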

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Role: We are looking for a forward-thinking LLMOps Engineer to join our team and help build the next generation of secure, scalable, and responsible Generative AI (GenAI) platforms. This role will focus on establishing governance, security, and operational best practices while enabling development teams to build high-performing GenAI applications. You will also work closely with GenAI agents and integrate LLMs from multiple providers to support diverse use cases.

Key Responsibilities:
- Design and implement governance frameworks for GenAI platforms, ensuring compliance with internal policies and external regulations (e.g., GDPR, AI Act).
- Define and enforce responsible AI practices, including fairness, transparency, explainability, and auditability.
- Implement robust security protocols, including IAM, data encryption, secure API access, and model sandboxing.
- Collaborate with security teams to conduct risk assessments and ensure secure deployment of LLMs.
- Build and maintain scalable LLMOps pipelines for model training, fine-tuning, evaluation, deployment, and monitoring.
- Automate model lifecycle management with CI/CD, versioning, rollback, and observability.
- Develop and manage GenAI agents capable of reasoning, planning, and tool use.
- Integrate and orchestrate LLMs from multiple providers (e.g., OpenAI, Anthropic, Cohere, Google, Azure OpenAI) to support hybrid and fallback strategies.
- Optimize prompt engineering, context management, and agent memory for production use.
- Ensure high availability, low latency, and cost-efficiency of GenAI workloads across cloud and hybrid environments.
- Implement monitoring and alerting for model drift, hallucinations, and performance degradation.
- Partner with GenAI developers to embed best practices and reusable components (SDKs, templates, APIs).
- Provide technical guidance and documentation to accelerate development and ensure platform consistency.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in MLOps, DevOps, or platform engineering, with 1-2 years in LLM/GenAI environments.
- Deep understanding of LLMs, GenAI agents, prompt engineering, and inference optimization.
- Experience with LangChain, LlamaIndex, LangGraph, or similar agent frameworks.
- Hands-on experience with MLflow or equivalent tools.
- Proficiency in Python, containerization (Docker), and cloud platforms (AWS/GCP/Azure).
- Familiarity with AI governance frameworks and responsible AI principles.
- Experience with vector databases (e.g., FAISS, Pinecone), RAG pipelines, and model evaluation frameworks.
- Knowledge of Responsible AI, red-teaming, and OWASP security principles.
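The multi-provider "hybrid and fallback" orchestration this role describes can be sketched in plain Python. This is a minimal illustration, not any company's implementation: the provider names and toy `call` functions are hypothetical stand-ins for real vendor SDK wrappers, and production code would catch provider-specific errors rather than a blanket `Exception`.

```python
# Illustrative sketch of a provider-fallback strategy for LLM calls.
# Provider names and the toy call functions are hypothetical stand-ins;
# real code would wrap vendor SDKs (OpenAI, Anthropic, etc.).
from typing import Callable, List, Tuple

def call_with_fallback(prompt: str,
                       providers: List[Tuple[str, Callable[[str], str]]]) -> Tuple[str, str]:
    """Try each provider in order; return (provider_name, response)."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in production, catch provider-specific errors
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")

# Toy providers: the first always fails, the second echoes the prompt.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("rate limited")

def stable_secondary(prompt: str) -> str:
    return f"echo: {prompt}"

name, reply = call_with_fallback("hello", [("primary", flaky_primary),
                                           ("secondary", stable_secondary)])
```

In practice, ordering providers by cost or latency and adding per-provider retry budgets are common refinements of the same pattern.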

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Years of experience: 10-15 yrs
Location: Noida

Join us as Cloud Engineer Lead at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as Cloud Engineer you should have experience with:
- Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, and access management
- Deep expertise in one or more cloud platforms: AWS, Azure, GCP
- Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
- Proficiency in scripting languages (Python, Bash, PowerShell)
- Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML
- Strong understanding of DevOps principles applied to ML workflows

Key Responsibilities may include:
- Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
- Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
- Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
- Ensure version control, repeatability, and compliance across all infrastructure components.
- Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
- Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.

Posted 1 month ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities:
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills:
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge or experience working with self-hosted or managed LLMs.
3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPCs, IAM, and TLS/SSL for secure communication.
15. Knowledge of API gateways, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience
Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person

Posted 1 month ago

Apply

2.5 - 5.0 years

5 - 11 Lacs

India

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning from Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2.5-5 years
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
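The model-drift monitoring mentioned in this posting can be illustrated with a toy check in plain Python. This is a sketch only: the score definition (shift of the live mean measured in reference standard deviations) and the alert threshold are illustrative assumptions, not any team's actual method.

```python
# Toy model-drift check: compare a live window of model scores against a
# reference window. The score and the 3-sigma threshold are illustrative.
from statistics import mean, pstdev

def drift_score(reference: list, live: list) -> float:
    """Absolute shift of the live mean, in reference standard deviations."""
    sd = pstdev(reference) or 1.0  # guard against a zero-variance reference
    return abs(mean(live) - mean(reference)) / sd

# Made-up score windows: the live distribution has clearly shifted upward.
reference = [0.50, 0.52, 0.48, 0.51, 0.49]
live = [0.70, 0.72, 0.68, 0.71, 0.69]

score = drift_score(reference, live)
drifted = score > 3.0  # alert threshold is an assumption
```

A production system would typically use a distribution-level test (e.g., population stability index or KS test) over much larger windows, but the shape is the same: compute a statistic, compare against a threshold, alert.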

Posted 1 month ago

Apply

3.0 - 4.0 years

15 - 20 Lacs

India

On-site

Job Summary: We are looking for a highly skilled and experienced AI/ML Developer with 3-4 years of hands-on experience to join our technology team. You will be responsible for designing, developing, and optimizing machine learning models that drive intelligent business solutions. The role involves close collaboration with cross-functional teams to deploy scalable AI systems and staying abreast of evolving trends in artificial intelligence and machine learning.

Key Responsibilities:

Develop and Implement AI/ML Models
- Design, build, and implement AI/ML models tailored to solve specific business challenges, including but not limited to natural language processing (NLP), image recognition, recommendation systems, and predictive analytics.

Model Optimization and Evaluation
- Continuously improve existing models for performance, accuracy, and scalability.

Data Preprocessing and Feature Engineering
- Collect, clean, and preprocess structured and unstructured data from various sources.
- Engineer relevant features to improve model performance and interpretability.

Collaboration and Communication
- Collaborate closely with data scientists, backend engineers, product managers, and stakeholders to align model development with business goals.
- Communicate technical insights clearly to both technical and non-technical stakeholders.

Model Deployment and Monitoring
- Deploy models to production using MLOps practices and tools (e.g., MLflow, Docker, Kubernetes).
- Monitor live model performance, diagnose issues, and implement improvements as needed.

Staying Current with AI/ML Advancements
- Stay informed of current research, tools, and trends in AI and machine learning.
- Evaluate and recommend emerging technologies to maintain innovation within the team.

Code Reviews and Best Practices
- Participate in code reviews to ensure code quality, scalability, and adherence to best practices.
- Promote knowledge sharing and mentoring within the development team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 3-4 years of experience in machine learning, artificial intelligence, or applied data science roles.

Required Skills:
- Strong programming skills in Python (preferred) and/or R.
- Proficiency in ML libraries and frameworks, including scikit-learn, XGBoost, LightGBM, TensorFlow or Keras, and PyTorch.
- Skilled in data preprocessing and feature engineering using pandas, numpy, and sklearn.preprocessing.
- Practical experience in deploying ML models into production environments using REST APIs and containers.
- Familiarity with version control systems (e.g., Git) and containerization tools (e.g., Docker).
- Experience working with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure.
- Understanding of software development methodologies, especially Agile/Scrum.
- Strong analytical thinking, debugging, and problem-solving skills in real-world AI/ML applications.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift, Monday to Friday, Morning shift, Weekend availability
Supplemental Pay: Performance bonus
Work Location: In person

Posted 1 month ago

Apply

6.0 years

0 Lacs

Telangana, India

On-site

Overview: We are hiring a Senior Data Engineer (6 to 10 years) with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.

Work Location: Hyderabad, Bangalore, and Chennai
Work Mode: Work from Office (5 days)
Notice Period: Immediate to 15 days

Key Responsibilities:

Data Engineering & Architecture:
- Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
- Build and manage scalable data ingestion frameworks for batch and real-time data processing.
- Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
- Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

Cloud-Based Data Solutions:
- Architect and implement modern data lakehouses, combining the best of data lakes and data warehouses.
- Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
- Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.

ETL/ELT Development:
- Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
- Perform data transformations, cleansing, and validation to prepare datasets for analysis.
- Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.

Performance Optimization:
- Optimize Spark jobs and SQL queries for large-scale data processing.
- Implement partitioning, caching, and indexing strategies to improve performance and scalability of big data workloads.
- Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Collaboration & Stakeholder Management:
- Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Participate in cross-functional design sessions to translate business needs into technical specifications.
- Provide thought leadership on best practices in data engineering and cloud computing.

Documentation & Knowledge Sharing:
- Create detailed documentation for data workflows, pipelines, and architectural decisions.
- Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:

Experience:
- 7+ years of experience in data engineering, big data, or cloud-based data solutions.
- Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.

Technical Skills:
- Strong hands-on experience with Apache Spark and distributed data processing frameworks.
- Advanced proficiency in Python and SQL for data manipulation and pipeline development.
- Deep understanding of data modeling for OLAP, OLTP, and dimensional data models.
- Experience with ETL/ELT tools like Azure Data Factory or Informatica.
- Familiarity with Azure DevOps for CI/CD pipelines and version control.

Big Data Ecosystem:
- Familiarity with Delta Lake for managing big data in Azure.
- Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.

Cloud Expertise:
- Strong understanding of Azure cloud architecture, including storage, compute, and networking.
- Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have):
- Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning.
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports.
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC).
- Exposure to NoSQL databases like Cosmos DB or MongoDB.
- Experience with data governance tools like Azure Purview.

Posted 1 month ago

Apply

5.0 years

5 - 6 Lacs

Bengaluru

On-site

Company Description: At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description

Responsibilities:
- Research, design, develop, implement, and test econometric, statistical, optimization, and machine learning models.
- Design, write, and test modules for Nielsen analytics platforms using Python, R, SQL, and/or Spark.
- Utilize advanced computational/statistics libraries, including Spark MLlib, Scikit-learn, SciPy, and StatsModels.
- Collaborate with cross-functional Data Science, Product, and Technology teams to integrate best practices from across the organization.
- Provide leadership and guidance for the team in the adoption of new tools and technologies to improve our core capabilities.
- Execute and refine the roadmap to upgrade the modeling/forecasting/control functions of the team to improve upon the core service KPIs.
- Ensure product quality, stability, and scalability by facilitating code reviews and driving best practices like modular code, unit tests, and incorporating CI/CD workflows.
- Explain complex data science (e.g. model-related) concepts in simple terms to non-technical internal and external audiences.

Qualifications

Key Skills:
- 5+ years of professional work experience in Statistics, Data Science, and/or related disciplines, with a focus on delivering analytics software solutions in a production environment.
- Strong programming skills in Python with experience in NumPy, Pandas, SciPy, and Scikit-learn.
- Hands-on experience with deep learning frameworks (PyTorch, TensorFlow, Keras).
- Solid understanding of machine learning domains such as Computer Vision, Natural Language Processing, and classical Machine Learning.
- Proficiency in SQL and NoSQL databases for large-scale data manipulation.
- Experience with cloud-based ML services (AWS SageMaker, Databricks, GCP AI, Azure ML).
- Knowledge of model deployment (FastAPI, Flask, TensorRT, ONNX), MLOps tools (MLflow, Kubeflow, Airflow), and containerization.

Preferred Skills:
- Understanding of LLM fine-tuning, tokenization, embeddings, and multimodal learning.
- Familiarity with vector databases (FAISS, Pinecone) and retrieval-augmented generation (RAG).
- Familiarity with advertising intelligence, recommender systems, and ranking models.
- Knowledge of CI/CD for ML workflows and software development best practices.

Additional Information: Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
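The retrieval-augmented generation (RAG) skill several of these postings mention rests on a simple mechanism: rank stored documents by similarity of their embedding vectors to a query embedding. A minimal sketch in plain Python follows; real systems use a vector database (e.g., FAISS) and learned embeddings, whereas the three-dimensional vectors and document names here are made up for illustration.

```python
# Minimal retrieval sketch underlying RAG: rank documents by cosine
# similarity of toy embedding vectors. Vectors and doc names are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.9, 0.1, 0.0],
}
query = [1.0, 0.0, 0.0]

# Pick the document whose embedding is most similar to the query.
best = max(docs, key=lambda d: cosine(query, docs[d]))
```

The retrieved document's text would then be placed into the LLM prompt as context, which is the "augmented" part of RAG.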

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
