6.0 - 10.0 years
30 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced Generative AI Engineer to design, develop, and optimize AI models for text, image, video, and audio generation. The ideal candidate should have expertise in deep learning, natural language processing (NLP), transformer models (GPT, BERT, LLaMA, etc.), and multimodal AI. This role involves working with large-scale datasets, fine-tuning AI models, and deploying scalable AI solutions.

Key Responsibilities:
- Design, develop, and fine-tune Generative AI models for text, image, video, and audio synthesis.
- Work with transformer architectures such as GPT, BERT, T5, Stable Diffusion, and CLIP.
- Implement and optimize large language models (LLMs) using Hugging Face, OpenAI, or custom architectures.
- Develop AI-powered chatbots, virtual assistants, and content generation tools.
- Work with diffusion models, GANs (Generative Adversarial Networks), and VAEs (Variational Autoencoders) for creative AI applications.
- Optimize AI models for performance, inference speed, and cost efficiency in cloud or edge environments.
- Deploy AI models using TensorFlow, PyTorch, ONNX, and MLflow on AWS, Azure, or GCP.
- Work with vector databases (FAISS, Pinecone, Weaviate) and embedding-based search techniques.
- Fine-tune models using RLHF (Reinforcement Learning from Human Feedback) for better alignment.
- Collaborate with data scientists, ML engineers, and product teams to integrate AI capabilities into applications.
- Ensure AI model security, bias mitigation, and ethical AI practices.
- Stay updated with the latest advancements in Generative AI, foundation models, and prompt engineering.

Required Skills & Qualifications:
- 6+ years of experience in AI, machine learning, and deep learning.
- Strong expertise in Generative AI models and transformer architectures.
- Proficiency in Python, TensorFlow, PyTorch, and Hugging Face libraries.
- Experience with NLP, text embeddings, and retrieval-augmented generation (RAG).
- Knowledge of vector databases, embeddings, and scalable model serving (FastAPI, Triton, Ray Serve).
- Experience with GPU acceleration (CUDA, TensorRT, ONNX optimization) for AI workloads.
- Familiarity with cloud-based AI services such as AWS Bedrock, Azure OpenAI, or Google Vertex AI.
- Strong understanding of data preprocessing, annotation, and model evaluation metrics.
- Experience working with large-scale datasets and distributed training techniques.
- Strong problem-solving skills and ability to work in Agile/DevOps environments.

Preferred Qualifications:
- Experience with multimodal AI (text-to-image, text-to-video, speech synthesis).
- Knowledge of RLHF, prompt engineering, and AI-assisted code generation.
- Certifications in Machine Learning, AI, or Cloud AI services.
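Retrieval-augmented generation hinges on the embedding-based search step this listing names. As an illustrative sketch (not the FAISS/Pinecone stack itself), here is the ranking logic a vector store performs, reduced to pure Python with toy 3-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, corpus, top_k=2):
    # Rank (doc_id, embedding) pairs by similarity to the query vector.
    scored = sorted(corpus, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
corpus = [
    ("pricing_faq", [0.9, 0.1, 0.0]),
    ("api_docs",    [0.1, 0.9, 0.2]),
    ("changelog",   [0.0, 0.2, 0.9]),
]
print(retrieve([0.8, 0.2, 0.1], corpus, top_k=1))  # ['pricing_faq']
```

In a production RAG system the embeddings come from a model and have hundreds of dimensions, and the exhaustive sort is replaced by an approximate nearest-neighbour index; the ranking idea is the same.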
Posted 2 weeks ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Marketing Data Foundation - Engineer
Location: Bangalore, Karnataka, 560048 | Category: Engineering | Job Type: Full time | Job Id: 1189604

This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office.

Who We Are:
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description:
Our HPE marketing teams are focused on making our brand easy to understand and easy to buy. We’re devising a data-driven brand strategy, from architecture and naming to why we exist, what we do, who we are, and how we look, feel, and sound. As a team, we are retaining and attracting customers and accelerating our business strategy by making our brand stronger, more relevant, differentiated, and authentic.

Responsibilities:
- Combine technical depth in big data and cloud technologies with business acumen and solution-architect skills to design, implement, and operationalize data solutions that deliver business value.
- Act as a trusted advisor, bridging the gap between stakeholders and technical teams while fostering community engagement and continuous innovation.

Education and Experience Required:
- Master's degree in Statistics, Operations Research, Computer Science, or equivalent preferred; or a Bachelor's degree in these areas.
- At least 5-8 years of relevant experience.
Knowledge and Skills:
- Architect big data solutions that span data engineering, data science, machine learning, and SQL analytics workflows.
- Deep expertise in areas such as streaming, performance tuning, data lake technologies, or industry-specific data solutions.
- Design scalable, secure, and optimized data solutions using the Databricks Lakehouse Platform.
- Deep understanding of the Databricks Lakehouse architecture, including Delta Lake, MLflow, and Databricks SQL for data management, machine learning, and analytics.
- Proficiency in deploying and managing Databricks solutions on Azure.
- Strong skills in building and optimizing data pipelines using ETL/ELT processes, data modeling, schema design, and handling large-scale data processing with Apache Spark.
- Ability to design scalable and secure Databricks solutions tailored to business needs, including multi-hop data pipelines (Bronze, Silver, Gold layers).
- Implement data governance using Unity Catalog, role-based access control (RBAC), and entity permissions, ensuring compliance and data security.

Additional Skills: Accountability, Action Planning, Active Learning, Active Listening, Agile Methodology, Agile Scrum Development, Analytical Thinking, Bias, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Data Analysis Management, Data Collection Management, Data Controls, Design, Design Thinking, Empathy, Follow-Through, Group Problem Solving, Growth Mindset, Intellectual Curiosity, Long Term Planning, Managing Ambiguity {+ 5 more}

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are.
We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.

Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #marketing

Job: Engineering
Job Level: TCP_05

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
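The multi-hop (Bronze, Silver, Gold) layering named in this listing's Knowledge and Skills can be summarized with a deliberately simplified, pure-Python stand-in. A real implementation would use Delta Lake tables and Spark jobs; the record shapes below are invented for illustration:

```python
# Bronze: raw ingested records, kept as-is (including malformed rows).
bronze = [
    {"order_id": "1", "amount": "100.0", "region": "EMEA"},
    {"order_id": "2", "amount": "oops",  "region": "APAC"},  # bad amount
    {"order_id": "3", "amount": "250.5", "region": "EMEA"},
]

def to_silver(rows):
    # Silver: validated, typed records; rows that fail parsing are dropped.
    silver = []
    for row in rows:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue
    return silver

def to_gold(rows):
    # Gold: a business-level aggregate (here, revenue per region).
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(to_gold(to_silver(bronze)))  # {'EMEA': 350.5}
```

The point of the layering is that each hop only ever reads from the previous one, so raw data is never lost and every downstream table can be rebuilt.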
Posted 2 weeks ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Lead Engineer - Data & AI
Career Level: E

Introduction To Role
Are you ready to redefine an industry and change lives? AstraZeneca is seeking a seasoned AI and Data Engineering manager to join our Data Analytics and AI (DA&AI) organization. In this pivotal role, you'll be instrumental in shaping and delivering next-generation data platforms, data mesh, and AI capabilities that drive our digital transformation. Your expertise will be crucial in building data infrastructure that supports enterprise-scale data platforms and AI analytics deployment, fueling intelligent operations across the business.

Accountabilities

Technical & AI Leadership
- Lead and mentor a multi-functional team of data and AI engineers to deliver scalable, AI-ready data products and pipelines.
- Define and enforce standard processes for data engineering, data pipeline orchestration, and ELT/ETL development lifecycle management.
- Guide the development of solutions that integrate data engineering with machine learning, foundational models, and semantic enrichment.

AI-Driven Data Engineering
- Architect and develop data pipelines using tools such as DBT, Apache Airflow, and Snowflake, optimized to support both analytics and AI/ML workloads.
- Design infrastructures that facilitate automated feature engineering, metadata tracking, and real-time model inference.
- Enable large-scale data ingestion, preparation, and transformation to support AI use cases such as forecasting, natural language querying/processing (NLQ/P), and intelligent automation.

Governance and Metadata Management
- Adhere to data governance and compliance practices that ensure trust, transparency, and explainability in AI outputs.
- Manage and scale enterprise metadata frameworks using tools like Collibra, aligning with FAIR data principles and AI ethics guidelines.
- Establish traceability across data lineage, model lineage, and business outcomes.

Stakeholder Engagement
- Act as a trusted technical advisor to business leaders across enabling functions (e.g., Finance, M&A, GBS), helping translate strategic goals into AI-driven data solutions.
- Lead delivery across multiple workstreams, ensuring measurable KPIs and adoption of both data and AI capabilities.

Essential Skills/Experience
- 12+ years of hands-on experience in data engineering and AI-enabling infrastructure, with expertise in DBT, Apache Airflow, Snowflake, PostgreSQL, and Amazon Redshift.
- 2+ years working with or supporting AI/ML teams in building production-ready pipelines and infrastructure.
- Strong communication skills with a demonstrated ability to influence both technical and non-technical collaborators.
- Experience in implementing data products by applying data mesh principles.
- Experience working across enabling business units such as Finance, HR, and M&A.

Academic Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, with relevant industry experience.

Desirable Skills/Experience
- Proficiency in Python, especially libraries like Pandas, NumPy, and Scikit-learn for data and ML workflows.
- Exposure to ML lifecycle tools such as SageMaker, MLflow, Azure ML, or Databricks.
- Exposure to foundational AI models (e.g., LLMs), vector databases, and retrieval-augmented generation (RAG) methodologies.
- Knowledge of data cataloguing tools such as Collibra, semantic data models, ontologies, and business glossary tools.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca, your work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining innovative science with leading digital technology platforms. With a passion for impacting lives through data, analytics, AI, machine learning, and more, we are at a crucial stage of our journey to become a digital and data-led enterprise. Here you can innovate, take ownership, experiment with groundbreaking technology, and tackle challenges that have never been addressed before. Our dynamic environment offers countless opportunities to learn and grow while contributing to something far bigger. Ready to make a meaningful impact? Apply now to join our team! Date Posted 27-Jun-2025 Closing Date 03-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
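The DBT/Airflow pipeline work this listing describes boils down to running tasks in dependency order. A minimal, hypothetical sketch of that orchestration idea (task names are invented; a real Airflow DAG adds scheduling, retries, and cycle detection):

```python
def run_dag(tasks, deps):
    """Run tasks in dependency order (a tiny stand-in for an Airflow DAG).

    tasks: {name: callable}; deps: {name: [upstream task names]}.
    Assumes the dependency graph is acyclic.
    """
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # make sure upstream tasks finish first
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "load":      lambda: log.append("load"),
    "transform": lambda: log.append("transform"),
    "extract":   lambda: log.append("extract"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
print(run_dag(tasks, deps))  # ['extract', 'transform', 'load']
```

Declaring only the edges and letting the orchestrator derive the execution order is exactly what makes tools like Airflow and DBT composable across teams.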
Posted 2 weeks ago
0.0 - 2.0 years
5 - 8 Lacs
Gurugram
Work from Office
Role: Backend Developer

At Clipwise.ai, we're building the future of AI-powered video creation. If you are excited by backend challenges, machine learning integrations, and solving real problems that shape how users interact with AI-driven tools, this is your calling.

Key Responsibilities:
- Develop and deploy robust backend features using Django, PostgreSQL, and Redis.
- Integrate machine learning models and pipelines into real-time production environments.
- Work closely with product and ML teams to ship impactful features.
- Solve challenging backend problems, optimize performance, and debug complex issues.
- Collaborate in a high-ownership, fast-paced team environment.
- Use AI tools (like ChatGPT, Claude, Cursor) to enhance productivity and automation.

What We're Looking For:
- Strong backend fundamentals and solid debugging skills.
- Experience using or experimenting with AI tools (e.g., ChatGPT, Claude, Cursor).
- Passion for learning new technologies; no DSPy or MLflow experience required; we'll guide you.
- Hands-on with personal or collaborative projects, hackathons, or open-source contributions.
- A curious mindset and the ability to thrive in high-ownership environments.

Tech Stack You'll Work With:
- Languages & Frameworks: Python, Django
- Databases: PostgreSQL, Redis
- AI/ML Tools: DSPy, MLflow
- Cloud & Infra: GCP (Google Cloud Platform)
- Others: Git, GitHub, REST APIs, async processing

Bonus Points (Nice to Have):
- Prior experience deploying ML models or integrating with AI APIs.
- Familiarity with Docker, Kubernetes, or cloud deployment workflows.
- Exposure to performance tuning and scalable backend systems.

Why Join Us?
- Work directly with experienced founders and AI practitioners.
- Contribute to a cutting-edge AI product with direct user impact.
- Accelerated learning and rapid growth through intense ownership.
- Not a typical 9-to-5 job: expect flexibility, ownership, and energy, especially around key launches.

Interview Process:
1. Short coding task
2. Final interview (technical + culture fit)
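For the async processing named in this stack, a minimal illustrative example (clip IDs and delays are invented) of awaiting several concurrent backend jobs together, the pattern a video backend uses to fan out work to slow external services:

```python
import asyncio

async def process_clip(clip_id, delay):
    # Stand-in for an async backend job, e.g. waiting on an ML inference service.
    await asyncio.sleep(delay)
    return f"clip-{clip_id}:done"

async def main():
    # gather() runs the jobs concurrently, so total wall time is roughly
    # the slowest job, not the sum; results keep argument order.
    return await asyncio.gather(
        process_clip(1, 0.02),
        process_clip(2, 0.01),
    )

results = asyncio.run(main())
print(results)  # ['clip-1:done', 'clip-2:done']
```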
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We are hiring a contract-based Computer Vision Engineer in Hyderabad, India to lead deep learning model development using PyTorch. The ideal candidate will design and deploy scalable computer vision pipelines focused on image and video analytics. This is a 6-month engagement to support real-time model deployment, optimization, and automation across cloud and edge platforms.

Key Responsibilities
1. Deep Learning Model Development
• Build and train state-of-the-art CV models using PyTorch for classification, detection (YOLO, Faster R-CNN), and segmentation (UNet, DeepLab)
• Optimize data pipelines and preprocessing strategies for high-resolution image and video feeds
• Fine-tune pre-trained models and manage custom model development based on project needs
2. Model Optimization & Deployment
• Optimize models using ONNX, quantization, or TensorRT for cloud and edge deployment
• Deploy real-time inference endpoints using containers (Docker/Kubernetes) and cloud services (Azure, AWS, GCP)
• Maintain experiment tracking, model versioning, and deployment automation workflows
3. Data Engineering & Integration
• Work with data engineers to build scalable data pipelines for ingestion and preprocessing
• Integrate CV models into production systems and IoT environments (e.g., Jetson, Azure IoT)
4. Governance & Performance
• Ensure AI workflows are secure, auditable, and production-ready
• Apply model compression and tuning for performance at scale
5. Cross-functional Collaboration
• Collaborate with DevOps, product teams, and ML engineers for seamless delivery
• Document architectures and ensure knowledge transfer at project milestones

Required Qualifications
• Bachelor’s or Master’s in Computer Science, AI/ML, or a related technical field
• 5+ years of experience in AI/ML, with at least 2 years deploying PyTorch-based CV models
• Expertise in PyTorch, OpenCV, Python, Git, and deep learning model deployment
• Familiarity with cloud platforms (Azure preferred), Docker/Kubernetes
• Hands-on experience with model lifecycle tools (MLflow, W&B, DVC, etc.)

Preferred Qualifications
• Experience with edge AI platforms (Jetson, Coral, Azure Percept)
• Knowledge of ONNX, TensorRT, or other optimization tools
• Exposure to enterprise-grade security practices and MLOps workflows

Contract Details
• Duration: 6 months
• Location: Hyderabad (hybrid preferred; remote may be considered for exceptional candidates)
• Compensation: Competitive, based on experience and expertise

How to Apply
Send your resume, portfolio (if available), and GitHub/LinkedIn profiles to: Info@primeverse.in
Subject: Computer Vision Engineer – Hyderabad
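Detection models such as YOLO or Faster R-CNN emit overlapping candidate boxes that a deployment pipeline must de-duplicate. A plain-Python sketch of the standard non-maximum suppression step, with toy boxes and scores (real pipelines run a vectorized version of this on the GPU):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_threshold=0.5):
    # detections: list of (box, score). Keep the highest-scoring boxes,
    # suppressing any box that overlaps an already-kept one too much.
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [
    ((0, 0, 10, 10), 0.9),
    ((1, 1, 11, 11), 0.8),    # heavy overlap with the first -> suppressed
    ((50, 50, 60, 60), 0.7),  # far away -> kept
]
print(nms(dets))
```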
Posted 2 weeks ago
3.0 years
0 Lacs
India
On-site
Key Responsibilities
- Design, develop, and deploy deep learning models for image classification, object detection, segmentation, pose estimation, OCR, and related tasks.
- Work with large-scale datasets (images, videos, annotations), including data cleaning, augmentation, and preprocessing pipelines.
- Evaluate and fine-tune models using metrics like IoU, mAP, F1 score, and accuracy.
- Conduct research and experimentation with state-of-the-art architectures such as CNNs, Transformers (ViT, DETR), GANs, and self-supervised learning.
- Collaborate with cross-functional teams to integrate models into production pipelines (cloud/on-prem).
- Stay current with the latest advancements in computer vision and contribute to the company’s innovation roadmap.
- Develop tools for model explainability and performance monitoring in production environments.

Required Qualifications
- B.Tech, Master’s, or Ph.D. in Computer Science, Electrical Engineering, Applied Mathematics, or a related field.
- 3+ years of experience in developing and deploying deep learning models for computer vision tasks.
- Strong proficiency in Python and deep learning frameworks like PyTorch and TensorFlow.
- Hands-on experience with libraries such as OpenCV, Albumentations, MMDetection, Detectron2, or YOLOv5/v8.
- Experience training and optimizing models on GPU clusters using distributed training (e.g., PyTorch Lightning, DDP).
- Familiarity with model deployment (ONNX, TensorRT, TorchScript) and serving (FastAPI, Flask, Triton Inference Server).
- Experience with annotation tools (e.g., CVAT, Labelbox) and data versioning/experiment tracking tools (e.g., DVC, Weights & Biases).
- Strong understanding of computer vision metrics and evaluation protocols.
- Strong grounding in mathematical algorithms and in the explainability of deep learning models and frameworks.

Preferred Skills
- Knowledge of 3D vision, SLAM, multi-view geometry, and YOLO.
- Experience working with video datasets and spatio-temporal models.
- Background in self-supervised or semi-supervised learning.
- Familiarity with MLOps pipelines and tools like MLflow, Kubeflow, or SageMaker.
- Experience in a domain-specific application such as medical imaging, aerial imagery, or autonomous vehicles.

Why Join Us
- Work on impactful AI products at the cutting edge of computer vision.
- Collaborate with a world-class team of researchers and engineers.
- Access to state-of-the-art GPU infrastructure and training platforms.
- Flexible work environment with competitive compensation and benefits.
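Of the metrics named in this listing (IoU, mAP, F1, accuracy), F1 is the simplest to show end to end. A small illustrative implementation for the binary case, with made-up predictions:

```python
def f1_score(predictions, labels):
    # Binary F1 from parallel lists of 0/1 predictions and ground-truth labels.
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    # Harmonic mean: punishes models that trade one of precision/recall away.
    return 2 * precision * recall / (precision + recall)

preds  = [1, 1, 0, 1, 0]
labels = [1, 0, 0, 1, 1]
print(f1_score(preds, labels))  # 2 TP, 1 FP, 1 FN -> precision = recall = 2/3
```

mAP builds on the same precision/recall counts, computed per class and averaged over score thresholds, with IoU deciding whether a detection counts as a true positive.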
Posted 2 weeks ago
3.0 years
8 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities:
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with Data Scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-Have:
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance tuning strategies.
Preferred:
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture:
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse
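Partitioning, mentioned under both the responsibilities and skills here, is what lets a query skip irrelevant data entirely. A toy pure-Python analogue of a date-partitioned table layout (the event records are invented; Delta Lake does the same grouping at the file level):

```python
from collections import defaultdict

def partition_by(records, key):
    # Group records into partitions, mimicking a partitioned table layout
    # where each key value maps to its own directory of files.
    partitions = defaultdict(list)
    for record in records:
        partitions[record[key]].append(record)
    return dict(partitions)

events = [
    {"event_date": "2024-01-01", "user": "a"},
    {"event_date": "2024-01-02", "user": "b"},
    {"event_date": "2024-01-01", "user": "c"},
]
parts = partition_by(events, "event_date")

# A query filtered on event_date only scans its own partition,
# never touching the rest of the table (partition pruning):
print(len(parts["2024-01-01"]))  # 2
```

The tuning skill the listing asks for is largely about choosing partition keys whose cardinality matches the query patterns: too few partitions and nothing is pruned, too many and the file metadata overhead dominates.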
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Data Engineer – Machine Learning & Data Engineering
Location: Gurgaon (IND)
Department: Data Engineering / Data Science
Employment Type: Full-Time
Years of Experience: 5-10

About the Role:
We are looking for a Senior Data Engineer with a strong background in machine learning infrastructure, data pipeline development, and collaboration with data scientists to drive the deployment and scalability of advanced analytics and AI solutions. You will play a pivotal role in building and optimizing the data systems that power ML models, dashboards, and strategic insights across the company.

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines and ETL/ELT processes to support ML workflows and analytics.
- Collaborate with data scientists to operationalize machine learning models in production environments (batch, real-time).
- Build and maintain data lakes, data warehouses, and feature stores using modern cloud technologies (e.g., AWS/GCP/Azure, Snowflake, Databricks).
- Implement and maintain ML infrastructure, including model versioning, CI/CD for ML, and monitoring tools (MLflow, Airflow, Kubeflow, etc.).
- Develop and enforce data quality, governance, and security standards.
- Troubleshoot data issues and support the lifecycle from model development to deployment.
- Partner with software engineers and DevOps teams to ensure data systems are robust, scalable, and secure.
- Mentor junior engineers and provide technical leadership on data and ML infrastructure.

Qualifications

Required:
- 5+ years of experience in data engineering, ML infrastructure, or a related field.
- Proficient in Python, SQL, and big data processing frameworks (Spark, Flink, or similar).
- Experience with orchestration tools like Apache Airflow, Prefect, or Luigi.
- Hands-on experience deploying and managing machine learning models in production.
- Deep knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).
- Familiarity with CI/CD tools for data and ML pipelines.
- Experience with version control, testing, and reproducibility in data workflows.

Preferred:
- Experience with feature stores (e.g., Feast), ML experiment tracking (e.g., MLflow), and monitoring solutions.
- Background in supporting NLP, computer vision, or time-series ML models.
- Strong communication skills and the ability to work cross-functionally with data scientists, analysts, and engineers.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
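Experiment tracking with a tool like MLflow, named in this listing, amounts to logging each run's parameters and metrics and then querying for the best. A tiny hypothetical stand-in (not the MLflow API; class and field names are invented):

```python
class RunTracker:
    # A minimal stand-in for MLflow-style experiment tracking.
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # Record one training run's hyperparameters and resulting metrics.
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric, maximize=True):
        # Return the run that scored best on the given metric.
        sign = 1 if maximize else -1
        return max(self.runs, key=lambda r: sign * r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"lr": 0.1},   {"auc": 0.81})
tracker.log_run({"lr": 0.01},  {"auc": 0.87})
tracker.log_run({"lr": 0.001}, {"auc": 0.84})
print(tracker.best_run("auc")["params"])  # {'lr': 0.01}
```

The real tool adds what this sketch omits and the role actually requires: persistent storage, artifact/model versioning, and a UI for comparing runs across a team.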
Posted 2 weeks ago
3.0 years
8 - 30 Lacs
Hyderabad, Telangana, India
On-site
Azure Databricks Engineer

Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale.

Role & Responsibilities:
- Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake.
- Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources.
- Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response.
- Collaborate with Data Scientists to prepare feature stores and accelerate model training and inference.
- Automate deployment with Azure DevOps, ARM/Bicep, and the Databricks CLI for secure, repeatable releases.
- Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality.

Skills & Qualifications
Must-Have:
- 3+ years building large-scale Spark or Databricks workloads in production.
- Expert hands-on with PySpark/Scala, Delta Lake, and SQL optimisation.
- Deep knowledge of Azure services: Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub.
- Proficiency in CI/CD, Git, and automated testing for data engineering.
- Understanding of data modelling, partitioning, and performance tuning strategies.
Preferred:
- Exposure to MLflow, feature store design, or predictive model serving.
- Experience implementing role-based access controls and GDPR/PCI compliance on Azure.
- Certification: Microsoft DP-203 or Databricks Data Engineer Professional.

Benefits & Culture:
- Work on cutting-edge Azure Databricks projects with Fortune 500 clients.
- Flat, learning-centric culture that funds certifications and conference passes.
- Hybrid leave policy, comprehensive health cover, and performance bonuses.
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Platform Development and Evangelism:
- Build scalable, customer-facing AI platforms.
- Evangelize the platform with customers and internal stakeholders.
- Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
- Design ML pipelines for experiment management, model management, feature management, and model retraining.
- Implement A/B testing of models and design APIs for model inferencing at scale.
- Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
- Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures.
- Expertise in distributed training and serving of large language models.
- Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
- Proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results.
- Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
- Extensive knowledge of different LLM models, with insights on the applicability of each model to a given use case.
- Proven experience in delivering end-to-end solutions, from engineering to production, for specific customer use cases.

DevOps and LLMOps Proficiency:
- Proven expertise in DevOps and LLMOps practices; knowledgeable in Kubernetes, Docker, and container orchestration.
- Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix
- LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLMOps: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- Databases/Data warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud: AWS/Azure/GCP
- DevOps (knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus
- Cloud certifications (bonus): AWS Certified Solutions Architect – Professional, AWS Machine Learning Specialty, Azure Solutions Architect Expert
- Programming: Proficient in Python, SQL, JavaScript

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
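A/B testing of models, called out under pipeline design above, typically relies on deterministic user bucketing so a given user always hits the same model variant. An illustrative sketch (the experiment salt and variant names are invented):

```python
import hashlib

def assign_variant(user_id, variants=("model_a", "model_b"), salt="exp-01"):
    # Deterministic bucketing: hashing (salt, user_id) means the same user
    # always gets the same variant, and changing the salt reshuffles everyone.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Stable across calls for the same user:
assert assign_variant("user-42") == assign_variant("user-42")

# Roughly uniform across a population of users:
counts = {}
for i in range(1000):
    variant = assign_variant(f"user-{i}")
    counts[variant] = counts.get(variant, 0) + 1
print(counts)
```

Keeping the assignment stateless (no lookup table to serve at inference time) is what makes this pattern practical for high-throughput model serving.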
Posted 2 weeks ago
9.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
About Us
Systango Technologies Limited (NSE: SYSTANGO) is a digital engineering company that offers enterprise-class IT and product engineering services to organizations of different sizes. At Systango, we have a culture of efficiency: we use best-in-breed technologies to commit quality at speed and world-class support to address critical business challenges. We leverage Gen AI, AI/machine learning, and blockchain to unlock the next stage of digitalization for traditional businesses. Our handpicked team is adept at web & enterprise development, mobile apps, QA, and DevOps. Ulster University, Sila, Cuentas, Youtility, Porsche, MGM Grand, Deloitte, Grindr, and Tawk.to are some of the top clients that have entrusted us to enhance their digital capabilities and build disruptive innovations. We believe in making the impossible possible, and we do it literally.

About The Role
We are looking for a highly skilled and experienced AI/ML Lead with deep technical expertise in machine learning, deep learning, and Generative AI. The ideal candidate will have a strong programming foundation in Python and hands-on experience with modern ML/DL frameworks, version control systems, and data pipeline tools. This role requires both individual contribution and leadership in driving AI initiatives, while effectively communicating with cross-functional teams and stakeholders.

Key Responsibilities
- Lead the design, development, and deployment of ML/DL models for real-world applications.
- Apply advanced techniques such as ensemble learning, transformers, GANs, LSTMs, and reinforcement learning.
- Work across diverse domains like NLP, computer vision, or recommendation systems based on project needs.
- Build scalable APIs and services using Flask, Django, or FastAPI.
- Collaborate with data engineering teams to ensure data readiness for model training and evaluation.
- Evaluate and fine-tune models using techniques like cross-validation, hyperparameter tuning, and performance metrics.
- Drive GenAI adoption by leveraging LLM APIs for inference, and contribute to LLM training and deployment where applicable.
- Document solution architecture, workflows, and technical implementation details clearly.
- Mentor junior engineers and collaborate with product managers, data scientists, and other technical teams.

Required Skills & Qualifications
- 6–9 years of hands-on experience in AI/ML and deep learning.
- Strong programming skills in Python and a good understanding of object-oriented programming.
- Deep knowledge of neural networks, including GANs, transformers, LSTMs, etc.
- Proficient in scikit-learn, pandas, and NumPy, and in frameworks like TensorFlow/Keras or PyTorch.
- Experience with version control tools such as Git and GitHub.
- Solid skills in data visualization tools like Matplotlib, Seaborn, or similar.
- Familiarity with SQL and NoSQL databases.
- Experience building RESTful APIs using Django, Flask, or FastAPI.
- Hands-on experience with GenAI models, especially using LLM APIs for inference.
- Good understanding of model evaluation techniques and performance metrics.
- Outstanding verbal and written communication skills, with the ability to clearly articulate complex technical topics.

Preferred Qualifications
- Exposure to LLM training and deployment workflows.
- Experience with Big Data technologies such as Hadoop and Spark.
- Certifications in AI/ML or cloud-based AI platforms (AWS, GCP, Azure).
- Experience with MLOps tools like MLflow, Kubeflow, or Weights & Biases (nice to have).

Why Join Us?
- Be part of cutting-edge AI and GenAI product development.
- Work in a collaborative and innovation-driven environment.
- Lead high-impact projects with global exposure.
- Excellent growth opportunities and a performance-driven culture.
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. This role requires expertise in data processing, data modeling, and big data technologies. Key Responsibilities: Design and develop data pipelines to collect, transform, and load data into data lakes and data warehouses. Optimize ETL workflows to ensure data accuracy, reliability, and scalability. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Implement and manage cloud-based data platforms (e.g., AWS, Azure, or Google Cloud Platform). Develop data models to support analytics and reporting. Monitor and troubleshoot data systems to ensure high performance and minimal downtime. Ensure data quality and security through governance best practices. Document workflows, processes, and architecture to facilitate collaboration and scalability. Stay updated with emerging data engineering technologies and trends. Required Skills and Qualifications: Strong proficiency in SQL and Python for data processing and transformation. Hands-on experience with big data technologies like Apache Spark, Hadoop, or Kafka. Knowledge of data warehousing concepts and tools such as Snowflake, BigQuery, or Redshift. Experience with workflow orchestration tools like Apache Airflow or Prefect. Familiarity with cloud platforms (AWS, Azure, GCP) and their data services. Understanding of data governance, security, and compliance best practices. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Preferred Qualifications Certification in cloud platforms (AWS, Azure, or GCP). Experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB. 
Familiarity with DevOps practices and tools like Docker, Kubernetes, and Terraform. Exposure to machine learning pipelines and tools like MLflow or Kubeflow. Knowledge of data visualization tools like Power BI, Tableau, or Looker.
Posted 2 weeks ago
3.0 years
8 - 30 Lacs
Pune, Maharashtra, India
On-site
Azure Databricks Engineer Industry & Sector: We are a fast-growing cloud data and analytics consultancy serving global enterprises across finance, retail, and manufacturing. Our teams design high-throughput lakehouse platforms, predictive analytics, and AI services on Microsoft Azure, unlocking data-driven decisions at scale. Role & Responsibilities Design, develop, and optimise end-to-end data pipelines on Azure Databricks using PySpark/Scala and Delta Lake. Build scalable ETL workflows to ingest structured and semi-structured data from Azure Data Lake, SQL, and API sources. Implement lakehouse architectures, partitioning, and performance tuning to ensure sub-second query response. Collaborate with Data Scientists to prepare feature stores and accelerate model training and inference. Automate deployment with Azure DevOps, ARM/Bicep, and Databricks CLI for secure, repeatable releases. Monitor pipeline health, cost, and governance, applying best practices for security, lineage, and data quality. Skills & Qualifications Must-Have 3+ years building large-scale Spark or Databricks workloads in production. Expert hands-on with PySpark/Scala, Delta Lake, and SQL optimisation. Deep knowledge of Azure services—Data Lake Storage Gen2, Data Factory/Synapse, Key Vault, and Event Hub. Proficiency in CI/CD, Git, and automated testing for data engineering. Understanding of data modelling, partitioning, and performance tuning strategies. Preferred Exposure to MLflow, feature store design, or predictive model serving. Experience implementing role-based access controls and GDPR/PCI compliance on Azure. Certification: Microsoft DP-203 or Databricks Data Engineer Professional. Benefits & Culture Work on cutting-edge Azure Databricks projects with Fortune 500 clients. Flat, learning-centric culture that funds certifications and conference passes. Hybrid leave policy, comprehensive health cover, and performance bonuses. 
Skills: performance tuning, PySpark, Event Hub, SQL, CI/CD, Data Factory, automated testing, Key Vault, Azure Data Lake Storage Gen2, data modelling, Azure Databricks, Git, Delta Lake, Scala, DevOps, SQL optimisation, Spark, Synapse
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🔍 We’re Hiring: ML Ops Engineer (5+ Years Experience) 📍 Preferred Location: Gurgaon (Open to Pune & Bangalore if needed) 💼 Employment Type: Full-Time Are you passionate about deploying and automating ML pipelines at scale? We’re looking for a hands-on ML Ops Engineer to join our growing team and help operationalize machine learning in production environments. ✅ Key Responsibilities: • Develop, deploy, and automate end-to-end ML pipelines. • Work closely with Data Science and Engineering teams to productionize ML models. • Ensure robustness, scalability, and automation in ML workflows. 🧠 Technical Requirements: Cloud & DevOps (AWS): • Deep experience with AWS core services • Hands-on with EKS, ECS, ECR, SageMaker (Jobs, Model Registry, Batch Transform, HP Tuning) • Familiar with Step Functions, EventBridge, SNS/SQS ML Concepts & MLOps Tools: • Strong grasp of how ML code is deployed in production environments • Experience with MLflow for tracking and model management • Solid understanding of CI/CD for ML (build, test, and deploy ML pipelines efficiently) 📍 Location Flexibility: • Preferred: Gurgaon • Open to: Pune & Bangalore for the right candidate ⸻ If you’re ready to scale ML in real-world applications, apply now or refer a friend who fits the bill! Dan@therxcloud.com
Posted 2 weeks ago
7.0 years
7 - 10 Lacs
Hyderābād
On-site
About this role: Wells Fargo is seeking a Senior Lead Business Execution Consultant. The Shared Services Operations - Operational Excellence team is seeking an Applied Data Scientist with a strong data science background to deliver traditional AI and Generative AI solutions that drive operational efficiencies and improve internal and external customer experiences through the use of AI. The candidate needs to possess a mix of technical expertise, creative problem-solving skills, and the ability to align AI solutions with business needs. In this role, you will: Lead complex initiatives including creation, implementation, documentation, validation, articulation, and defense of highly advanced AI and Generative AI solutions. Deliver solutions for short and long-term objectives and provide analytical support for a wide array of business initiatives. Utilize neural network architectures, including transformer-based models such as GPT, BERT, and others, to drive operational efficiencies. Familiarity with diffusion models or multimodal AI, depending on the use case. Hands-on experience with fine-tuning, training, and deploying large language models (LLMs) in on-premises, cloud, or hybrid environments. Strong grasp of tokenization, embeddings, and model evaluation metrics. Present results of analysis, solution recommendations, and AI-driven strategies for a variety of business initiatives. Required Qualifications: 7+ years of Business Execution, Implementation, or Strategic Planning experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education Desired Qualifications: 7+ years of Data Science experience, or equivalent demonstrated through one or a combination of the following: work experience, training, research and education. Bachelor's/Master's degree in a discipline such as Data Science, Computer Science, Statistics, or Mathematics. 
Expertise in Python and major frameworks like TensorFlow, PyTorch, and Hugging Face. Experience in using MLOps tools (e.g., MLflow, Kubeflow) for scaling and deploying models in production. Experience working on Google Cloud Platform and expertise in using GCP services. Expertise in Scala for processing large-scale datasets. Proficiency in Java and JavaScript (Node.js) for back-end integrations and for building interactive AI-based web applications or APIs. Experience with SQL and NoSQL languages for managing structured, unstructured, and semi-structured data for AI and Generative AI applications. Critical thinking and strong problem-solving skills. Ability to learn the latest technologies, keep up with trends in the Gen-AI space, and apply them to business problems quickly. Ability to multi-task and prioritize between projects, and able to work independently and as part of a team. Graduate degree from a top-tier university (e.g., IIT, ISI, IIIT, IIM) is preferred. Job Expectations: Required to work individually or as part of a team on multiple AI and Generative AI projects and work closely with business partners across the organization. Mentor and coach budding Data Scientists on developing and implementing AI solutions. Perform various complex activities related to neural networks and transformer-based models. Provide analytical support for developing, evaluating, implementing, monitoring, and executing models across business verticals using emerging technologies. Expert knowledge of working with large datasets using SQL or NoSQL, and the ability to present conclusions to key stakeholders. Establish a consistent and collaborative framework with the business and act as a primary point of contact in delivering solutions. Experience in building quick prototypes to check feasibility and value to the business. Expert in developing and maintaining a modular codebase for reusability. 
Review and validate models and help improve model performance under the purview of regulatory requirements. Work closely with technology teams to deploy the models to production. Prepare detailed documentation for projects for both internal and external use that complies with regulatory and internal audit requirements. Posting End Date: 2 Jul 2025 *Job posting may come down early due to volume of applicants. We Value Equal Opportunity Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements. Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. 
Applicants with Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment and Hiring Requirements: a. Third-Party recordings are prohibited unless authorized by Wells Fargo. b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
Posted 2 weeks ago
0 years
6 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibility: Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Graduation Degree Experience with cloud platforms, particularly AWS AI services (Bedrock, SageMaker) and/or Azure OpenAI Service Proven experience in developing and deploying LLM-powered applications in production Experience with foundation models and generative AI platforms (e.g. OpenAI, Anthropic, open-source models) Experience in building RAG solutions with vector databases (e.g. pgvector, Pinecone, OpenSearch) Familiarity with MLOps practices (deployment, monitoring, model lifecycle) Good understanding of modern NLP techniques (e.g. 
transformers, embeddings, prompt engineering) Solid understanding of machine learning frameworks and libraries (e.g., PyTorch, scikit-learn, TensorFlow, MLflow, Keras, XGBoost) Proven solid programming skills in Python and PySpark Proven exposure to AI orchestration frameworks such as LangChain, LangGraph and others Proven excellent communication and collaboration skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Full-time Career Site Team: Technology Company Description NielsenIQ is a consumer intelligence company that delivers the Full View™, the world’s most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage Machine Learning and Artificial Intelligence to provide insights into consumer behavior and market trends. This position offers the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects. Job Description We are looking for a Research Scientist with a data-centric mindset to join our applied research and innovation team. The ideal candidate will have a strong background in machine learning, deep learning, operationalization of AI/ML and process automation. You will be responsible for analyzing data, researching the most appropriate techniques, and the development, testing, support and delivery of proofs of concept to resolve challenging, large-scale real-world problems. Job Responsibilities Develop and apply machine learning innovations with minimal technical supervision. Understand the requirements from stakeholders and be able to communicate results and conclusions in a way that is accurate, clear and winsome. Perform feasibility studies and analyse data to determine the most appropriate solution. Work on many different data challenges, always ensuring a combination of simplicity, scalability, reproducibility and maintainability within the ML solutions and source code. Both data and software must be developed and maintained with high-quality standards and minimal defects. 
Collaborate with other technical fellows on the integration and deployment of ML solutions. Work as a member of a team, encouraging team building, motivation and effective team relations. Qualifications Essential Requirements Bachelor's degree in Computer Science or an equivalent numerate discipline Demonstrated senior experience in Machine Learning, Deep Learning & other AI fields Experience working with large datasets, production-grade code & operationalization of ML solutions EDA & practical hands-on experience with datasets, ML models (PyTorch or TensorFlow) & evaluations Able to understand scientific papers & develop the ideas into executable code Analytical mindset, problem solving & logical thinking capabilities Proactive attitude, constructiveness, intellectual curiosity & persistence to find answers to questions A high level of interpersonal & communication skills in English & strong ability to meet deadlines Python, PyTorch, Git, pandas, dask, polars, sklearn, huggingface, docker, databricks Desired Skills Master's degree and/or specialization courses in AI/ML. A PhD in science is an added value Experience in MLOps (MLflow, Prefect) & deployment of AI/ML solutions to the cloud (Azure preferred) Understanding & practice of LLMs & Generative AI (prompt engineering, RAG). Experience with Robotic Process Automation, Time Series Forecasting & Predictive modeling A practical grasp of databases (SQL, ElasticSearch, Pinecone, Faiss) Previous experience in retail, consumer, ecommerce, business, FMCG products (NielsenIQ portfolio) Additional Information With @NielsenIQ, we’re now an even more diverse team of 40,000 people – each with their own stories. Our increasingly diverse workforce empowers us to better reflect the diversity of the markets we measure. 
Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 2 weeks ago
4.0 - 8.0 years
3 - 5 Lacs
Chennai
On-site
Date: 27 Jun 2025 Company: Qualitest Group Country/Region: IN Key Responsibilities Design, develop, and deploy ML models and AI solutions across various domains such as NLP, computer vision, recommendation systems, time-series forecasting, etc. Perform data preprocessing, feature engineering, and model training using frameworks like TensorFlow, PyTorch, Scikit-learn, or similar. Collaborate with cross-functional teams to understand business problems and translate them into AI/ML solutions. Optimize models for performance, scalability, and reliability in production environments. Integrate ML pipelines with production systems using tools like MLflow, Airflow, Docker, or Kubernetes. Conduct rigorous model evaluation using metrics and validation techniques. Stay up-to-date with state-of-the-art AI/ML research and apply findings to enhance existing systems. Mentor junior engineers and contribute to best practices in ML engineering. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 4–8 years of hands-on experience in machine learning, deep learning, or applied AI. Proficiency in Python and ML libraries/frameworks (e.g., Scikit-learn, TensorFlow, PyTorch, XGBoost). Experience with data wrangling tools (Pandas, NumPy) and SQL/NoSQL databases. Familiarity with cloud platforms (AWS, GCP, or Azure) and ML tools (SageMaker, Vertex AI, etc.). Solid understanding of model deployment, monitoring, and CI/CD pipelines. Strong problem-solving skills and the ability to communicate technical concepts clearly.
Posted 2 weeks ago
1.0 years
0 Lacs
Noida
On-site
Lead Assistant Manager EXL/LAM/1402761 Digital Solutions, Noida Posted On 26 Jun 2025 End Date 10 Aug 2025 Required Experience 1 - 4 Years Basic Section Number Of Positions 4 Band B2 Band Name Lead Assistant Manager Cost Code D012603 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 250000.0000 - 1050000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group EXL Digital Sub Group Digital Solutions Organization Digital Solutions LOB Product Practice Market SBU GenAI CoE Country India City Noida Center Noida - Centre 59 Skills Skill SQL PYTHON DOCUMENT PREPARATION STAKEHOLDER COMMUNICATION Minimum Qualification B.TECH/B.E Certification No data available Job Description Key Responsibilities: Monitor and maintain the health of production AI models (GenAI and traditional ML). Troubleshoot data/model/infra issues across model pipelines, APIs, embeddings, and prompt systems. Collaborate with Engineering and Data Science teams to deploy new versions and manage rollback if needed. Implement automated logging, alerting, and retraining pipelines. Handle prompt performance drift, input/output anomalies, latency issues, and quality regressions. Analyze feedback and real-world performance to propose model or prompt enhancements. Conduct A/B testing, manage baseline versioning and monitor model outputs over time. Document runbooks, RCA reports, model lineage and operational dashboards. Support GenAI adoption by assisting in evaluations, hallucination detection, and prompt optimization. Must-have Skills: 1+ year of experience in Data Science, ML, or MLOps. Good grasp of the ML lifecycle, model versioning, and basic monitoring principles. Strong Python skills with exposure to ML frameworks (scikit-learn, pandas, etc.). Basic familiarity with LLMs and interest in GenAI (OpenAI, Claude, etc.). Exposure to AWS/GCP/Azure or any MLOps tooling. 
Comfortable reading logs, parsing metrics, and triaging issues across the stack. Eagerness to work in a production support environment with proactive ownership. Nice-to-Have Skills: Prompt engineering knowledge (system prompts, temperature, tokens, etc.). Hands-on with vector stores, embedding models, or LangChain/LlamaIndex. Experience with tools like MLflow, Prometheus, Grafana, Datadog, or equivalent. Basic understanding of retrieval pipelines or RAG architectures. Familiarity with CI/CD and containerization (Docker, GitHub Actions). Ideal Candidate Profile: A strong starter who wants to go beyond notebooks and see AI in action. Obsessed with observability, explainability, and zero-downtime AI. Wants to build a foundation in GenAI while leveraging their traditional ML skills. A great communicator who enjoys cross-functional collaboration. Workflow Workflow Type Digital Solution Center
Posted 2 weeks ago
5.0 - 8.0 years
4 - 8 Lacs
Noida
On-site
Manager EXL/M/1383329 Insurance Platform Services, Noida Posted On 11 Jun 2025 End Date 26 Jul 2025 Required Experience 5 - 8 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code G100131 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 2000000.0000 - 3000000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Insurance Sub Group Insurance Organization Insurance Platform Services LOB EXL OSI SBU Insurance Products & Platforms Country India City Noida Center Noida - Centre 59 Skills Skill DATA SCIENCE MACHINE LEARNING SKILLS Minimum Qualification B.TECH/B.E Certification No data available Job Description L2/L3 Gen AI Data Scientist Job Requirements Conduct research and analyze state-of-the-art Gen AI techniques to improve prompt generation models and algorithms. Should be able to utilize open-source LLMs and perform hyper-parameter tuning to configure prompts as per business needs. Develop and optimize high-quality prompts for applications. Conduct experiments, analyze data, and provide insights to improve the performance and effectiveness of prompt generation models. Beyond the foundational skills in problem-solving, Python coding, and the ability to navigate through challenges, this role requires expertise in deploying models into production. Leadership skills are also important. 
As a Senior Data Scientist, you will review the work of junior team members to ensure top-notch quality and provide mentorship as and where needed. Advanced understanding of statistical analysis and experience in developing machine learning models. Preferred: experience in the Insurance domain. Skillset Required Experience in creating prompts with OpenAI and Anthropic models. 6 - 8 years of experience as an NLP and Python developer, with hands-on experience in prompt engineering. Fluency with advanced machine learning algorithms, Excel, and relational databases (SQL/PLSQL). Data gathering, research, and analytical abilities to develop insightful conclusions and generate solutions. Experience with AWS services, e.g., EC2, VPC, S3, ELB, RDS, CloudWatch, CloudFront, etc., is a plus. Familiarity with Databricks and MLflow is desired. Good to have: working knowledge of GitHub and Jira. Workflow Workflow Type Digital Solution Center
Posted 2 weeks ago
0 years
3 Lacs
India
On-site
Design and implement machine learning models and AI algorithms to solve real-world problems. Build and maintain scalable data pipelines and model deployment infrastructure. Collaborate with product managers and software engineers to integrate AI solutions into our products. Monitor, evaluate, and continuously improve model performance in production. Stay current with the latest AI research and apply relevant findings to our products. Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field. Proficiency in Python and experience with ML frameworks like TensorFlow, PyTorch, or Scikit-learn. Strong understanding of data structures, algorithms, and software engineering principles. Experience with LLMs, NLP, or computer vision. Knowledge of vector databases and retrieval-augmented generation (RAG). Familiarity with MLOps tools (e.g., MLflow, DVC) and model lifecycle management. Experience with cloud platforms (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes). Exposure to CI/CD pipelines and DevOps practices. Job Types: Full-time, Permanent Pay: From ₹25,000.00 per month Benefits: Health insurance Provident Fund Supplemental Pay: Yearly bonus Work Location: In person
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Company Description Betterdata is a leading provider of an AI Programmable Synthetic Data Platform that helps data & AI teams share data safely and compliantly. By anonymizing sensitive real data into privacy-preserving synthetic data, our platform enables instant and secure data sharing globally while prioritizing data privacy. Visit our job openings at https://tinyurl.com/betterdatajobs. Role Description This is a full-time remote role for an Entry-level / Intern MLOps Engineer at Betterdata. The MLOps Engineer will be responsible for day-to-day tasks involving deploying machine learning models, managing pipelines, monitoring model performance, and ensuring smooth integration of data and ML systems. Key Responsibilities: Deploy and manage Kubernetes clusters and Docker containers for ML workloads Build and maintain automation using Jenkins, Argo Workflows, Argo Events, and Argo CD Write Bash and Python scripts to automate infrastructure and ML pipelines Work across cloud platforms (AWS, Azure, GCP) to configure and manage cloud-native components like: Kubernetes (AKS, EKS, GKE) Storage (e.g., S3, Blob, GCS) Compute (e.g., EC2, Azure VMs, GCE) Networking (e.g., VPC, Load Balancers) IAM and Secrets Management Monitor and troubleshoot infrastructure for ML pipelines Collaborate with the MLOps lead to explore and integrate new tools and practices Qualifications: Bash scripting Docker & Kubernetes Jenkins or similar CI/CD tools (e.g., GitHub Actions) Python scripting One or more cloud platforms (AWS, Azure, GCP) Basic networking concepts Interest or experience in the Argo ecosystem (Workflows, Events, CD) Exposure to MLOps tools like MLflow, Airflow, etc. Experience deploying ML models or pipelines in cloud or on-premise air-gapped environments Degree in Computer Science, Machine Learning, or related field. Exceptional analytical and problem-solving skills. 
- Fast learner with a proactive mindset and eagerness to grow
- Clear communicator and effective collaborator
Why Join Us:
- Join a team poised to make significant impacts in the synthetic data technology space, where your contributions will shape the future of the field.
- Gain real-world exposure to MLOps challenges and solutions.
- Our start-up environment offers the excitement of innovation and problem-solving, competitive compensation, the opportunity for conversion to a full-time role, and the chance to challenge and expand your skills at the intersection of data privacy and utility.
Posted 2 weeks ago
6.0 - 9.0 years
5 - 9 Lacs
Indore
On-site
About Us: Systango Technologies Limited (NSE: SYSTANGO) is a digital engineering company that offers enterprise-class IT and product engineering services to organizations of all sizes. At Systango, we have a culture of efficiency: we use best-in-breed technologies to deliver quality at speed, backed by world-class support, to address critical business challenges. We leverage Gen AI, AI/Machine Learning, and Blockchain to unlock the next stage of digitalization for traditional businesses. Our handpicked team is adept at web & enterprise development, mobile apps, QA, and DevOps. Ulster University, Sila, Cuentas, Youtility, Porsche, MGM Grand, Deloitte, Grindr, and Tawk.to are some of the top clients that have entrusted us to enhance their digital capabilities and build disruptive innovations. We believe in making the impossible possible, and we do it literally.
About the Role: We are looking for a highly skilled and experienced AI/ML Lead with deep technical expertise in machine learning, deep learning, and Generative AI. The ideal candidate will have a strong programming foundation in Python and hands-on experience with modern ML/DL frameworks, version control systems, and data pipeline tools. This role requires both individual contribution and leadership in driving AI initiatives, while effectively communicating with cross-functional teams and stakeholders.
Key Responsibilities:
- Lead the design, development, and deployment of ML/DL models for real-world applications.
- Apply advanced techniques such as ensemble learning, transformers, GANs, LSTMs, and reinforcement learning.
- Work across diverse domains such as NLP, computer vision, or recommendation systems, based on project needs.
- Build scalable APIs and services using Flask, Django, or FastAPI.
- Collaborate with data engineering teams to ensure data readiness for model training and evaluation.
- Evaluate and fine-tune models using techniques like cross-validation, hyperparameter tuning, and performance metrics.
- Drive GenAI adoption by leveraging LLM APIs for inference, and contribute to LLM training and deployment (if applicable).
- Document solution architecture, workflows, and technical implementation details clearly.
- Mentor junior engineers and collaborate with product managers, data scientists, and other technical teams.
Required Skills & Qualifications:
- 6–9 years of hands-on experience in AI/ML and deep learning.
- Strong programming skills in Python and a good understanding of object-oriented programming.
- Deep knowledge of neural networks, including GANs, transformers, LSTMs, etc.
- Proficient in scikit-learn, pandas, and NumPy, and frameworks like TensorFlow/Keras or PyTorch.
- Experience with version control tools such as Git and GitHub.
- Solid skills in data visualization tools like Matplotlib, Seaborn, or similar.
- Familiarity with SQL and NoSQL databases.
- Experience building RESTful APIs using Django, Flask, or FastAPI.
- Hands-on experience with GenAI models, especially using LLM APIs for inference.
- Good understanding of model evaluation techniques and performance metrics.
- Outstanding verbal and written communication skills, with the ability to clearly articulate complex technical topics.
Preferred Qualifications:
- Exposure to LLM training and deployment workflows.
- Experience with Big Data technologies such as Hadoop, Spark, etc.
- Certifications in AI/ML or cloud-based AI platforms (AWS, GCP, Azure).
- Experience with MLOps tools like MLflow, Kubeflow, or Weights & Biases (nice to have).
Why Join Us?
- Be part of cutting-edge AI and GenAI product development.
- Work in a collaborative and innovation-driven environment.
- Lead high-impact projects with global exposure.
- Excellent growth opportunities and a performance-driven culture.
Posted 2 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from Fresh Gravity!
About Fresh Gravity: Founded in 2015, Fresh Gravity helps businesses make data-driven decisions. We are driven by data and its potential as an asset to drive business growth and efficiency. Our consultants are passionate innovators who solve clients' business problems by applying best-in-class data and analytics solutions. We provide a range of consulting and systems integration services and solutions to our clients in the areas of Data Management, Analytics and Machine Learning, and Artificial Intelligence. In the last 10 years, we have put together an exceptional team and delivered 200+ projects for over 80 clients, ranging from startups to several Fortune 500 companies. We are on a mission to solve some of the most complex business problems for our clients using some of the most exciting new technologies, providing the best of learning opportunities for our team. We are focused and intentional about building a strong corporate culture in which individuals feel valued, supported, and cared for. We foster an environment where creativity thrives, paving the way for groundbreaking solutions and personal growth. Our open, collaborative, and empowering work culture is the main reason for our growth and success. To know more about our culture and employee benefits, visit our website: https://www.freshgravity.com/employee-benefits/. We promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. We are data-driven. We are passionate. We are innovators. We are Fresh Gravity.
Requirements:
- Strong foundation in machine learning theory and the full model development lifecycle
- Proficient in Python and libraries such as scikit-learn, pandas, NumPy, and transformers
- Hands-on with TensorFlow and PyTorch for deep learning
- Experience using MLflow for tracking, versioning, and deployment
- Working knowledge of ETL, SQL, and data modeling
- Practical experience with Azure services, including:
  - Azure Databricks
  - Azure Machine Learning / MLflow
  - Azure Data Factory
  - Azure Data Lake / Blob Storage
  - Azure OpenAI
- Familiar with MLOps, version control, and pipeline automation
- Strong communication skills and experience in Agile, cross-functional teams
Benefits: In addition to a competitive package, we promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. In keeping with Fresh Gravity's challenger ethos, we have developed the 5 Dimensions (5D) benefits program. This program recognizes the multiple dimensions within each of us and seeks to provide opportunities for deep development across these dimensions: Enrich Myself; Enhance My Client; Build My Company; Nurture My Family; and Better Humanity.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role: We’re looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You’ll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.
Key Responsibilities:
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval-Augmented Generation), prompt engineering, model evaluation, and LLM integration.
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.
Must-Have Skills:
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
- Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
- Proven experience in batch data pipelines and training/inference orchestration.
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
Posted 2 weeks ago