7.0 - 12.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Design, develop, and deploy AI/ML models; build scalable, low-latency ML infrastructure; run experiments; optimize algorithms; collaborate with data scientists, engineers, and architects; integrate models into production to drive business value.
Required Candidate profile: 5–10 yrs in AI/ML, strong in model development, optimization, and deployment. Skilled in Azure, ML pipelines, data science tools, and collaboration with cross-functional teams.
Posted 3 days ago
9.0 - 12.0 years
16 - 25 Lacs
Hyderabad
Work from Office
- Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch
- Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI
- LLM experience, such as GPT
- Hands-on experience with data wrangling, feature engineering, and model optimization; experienced in developing model wrappers
- Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning
- Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow
Posted 6 days ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
- Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
- Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI.
- LLM experience, such as GPT.
- Hands-on experience with data wrangling, feature engineering, and model optimization; experienced in developing model wrappers.
- Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning.
- Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.
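The "data wrangling and feature engineering" requirement above usually starts with simple transforms. As a minimal illustrative sketch (not part of the posting, values invented), z-score standardization in plain Python, the transform behind scikit-learn's StandardScaler:

```python
import math

def standardize(values):
    """Z-score standardization: rescale a feature column to zero mean
    and unit variance (population standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

scaled = standardize([10.0, 20.0, 30.0])  # symmetric around 0 after scaling
```

In a real pipeline the mean and std would be fit on training data only and reused at inference time, which is why libraries separate `fit` from `transform`.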
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
ZS's Beyond Healthcare Analytics (BHCA) team is shaping one of the key growth areas for ZS: Beyond Healthcare engagements, comprising clients from industries such as quick-service restaurants, technology, food & beverage, hospitality, travel, insurance, and consumer packaged goods across North America, Europe, and South East Asia. The BHCA India team currently has a presence across the New Delhi, Pune, and Bengaluru offices and continues to expand at a great pace. The BHCA India team works with colleagues across clients and geographies to create and deliver pragmatic, real-world solutions leveraging AI SaaS products and platforms, Generative AI applications, and other advanced analytics solutions at scale.
What You'll Do:
- Build, refine, and use ML engineering platforms and components.
- Scale machine learning algorithms to work on massive data sets under strict SLAs.
- Build and orchestrate model pipelines including feature engineering, inferencing, and continuous model training.
- Implement MLOps including model KPI measurement, tracking, model drift detection, and model feedback loops.
- Collaborate with client-facing teams to understand business context at a high level and contribute to technical requirement gathering.
- Implement basic features aligning with technical requirements.
- Write production-ready code that is easily testable, understood by other developers, and accounts for edge cases and errors.
- Ensure the highest quality of deliverables by following architecture/design guidelines, coding best practices, and periodic design/code reviews.
- Write unit tests as well as higher-level tests that handle expected edge cases and errors gracefully, as well as happy paths.
- Use bug tracking, code review, version control, and other tools to organize and deliver work.
- Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues, and dependencies.
- Consistently contribute to researching and evaluating the latest architecture patterns/technologies through rapid learning, conducting proof-of-concepts, and creating prototype solutions.
What You'll Bring:
- A master's or bachelor's degree in Computer Science or a related field from a top university.
- 4+ years of hands-on experience in ML development.
- Good understanding of the fundamentals of machine learning.
- Strong programming expertise in Python and PySpark/Scala.
- Expertise in crafting ML models for high performance and scalability.
- Experience implementing feature engineering, inferencing pipelines, and real-time model predictions.
- Experience in MLOps to measure and track model performance; experience working with MLflow.
- Experience with Spark or other distributed computing frameworks.
- Experience with ML platforms like SageMaker and Kubeflow.
- Experience with pipeline orchestration tools such as Airflow.
- Experience deploying models to cloud services like AWS, Azure, GCP, and Azure ML.
- Expertise in SQL and SQL databases.
- Knowledge of core CS concepts such as common data structures and algorithms.
- Ability to collaborate well with teams with different backgrounds, expertise, and functions.
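The MLOps duties above mention tracking model drift. One common drift metric is the Population Stability Index (PSI); a minimal sketch in plain Python, with invented bin proportions:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned score
    distributions (each a list of proportions summing to 1).
    A common rule of thumb: PSI > 0.2 signals significant drift."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (max(a, eps) - max(e, eps)) * math.log(max(a, eps) / max(e, eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score distribution
today = [0.40, 0.30, 0.20, 0.10]     # invented production distribution
drift = psi(baseline, today)
```

In a feedback loop, a PSI breach would trigger an alert or a retraining job rather than just a log line.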
Posted 1 week ago
8.0 - 10.0 years
11 - 18 Lacs
Pune
Work from Office
Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable.
Key Responsibilities:
- Lead and contribute to internal optimization-focused data science projects from design to deployment.
- Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making.
- Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms.
- Collaborate with business stakeholders to gather requirements and translate them into data science use cases.
- Build robust data pipelines and use statistical and machine learning methods to drive insights.
- Communicate complex technical findings clearly and concisely to both technical and non-technical audiences.
- Mentor junior team members and contribute to knowledge sharing and best practices within the team.
Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field.
- Minimum 8 years of relevant experience in data science, with a strong focus on optimization.
- Expertise in Python (NumPy, Pandas, SciPy, scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX.
- Experience with the end-to-end lifecycle of internal optimization projects.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Preferred Qualifications:
- Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction.
- Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker.
- Familiarity with dashboards and visualization tools like Power BI or Tableau.
- Prior experience in consulting or internal centers of excellence (CoE) is a plus.
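Roles like this would use solvers such as PuLP or Gurobi in practice. As a toy illustration of the underlying allocation problem (all project names and numbers invented), a brute-force version of budget-constrained project selection:

```python
from itertools import product

def best_allocation(projects, budget):
    """Exhaustively pick the subset of projects that maximizes total
    value within the budget: the tiny version of what an integer
    program (e.g. written in PuLP) solves at scale.
    projects: list of (name, cost, value) tuples."""
    best_names, best_value = [], 0.0
    for choice in product([0, 1], repeat=len(projects)):
        cost = sum(c for pick, (_n, c, _v) in zip(choice, projects) if pick)
        value = sum(v for pick, (_n, _c, v) in zip(choice, projects) if pick)
        if cost <= budget and value > best_value:
            best_names = [n for pick, (n, _c, _v) in zip(choice, projects) if pick]
            best_value = value
    return best_names, best_value

projects = [("A", 4, 10.0), ("B", 3, 7.0), ("C", 5, 9.0)]
chosen, value = best_allocation(projects, budget=7)
```

Brute force is exponential in the number of projects, which is exactly why real engagements hand this formulation to an MIP solver instead.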
Posted 1 week ago
8.0 - 10.0 years
11 - 18 Lacs
Mumbai
Work from Office
Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable.
Key Responsibilities:
- Lead and contribute to internal optimization-focused data science projects from design to deployment.
- Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making.
- Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms.
- Collaborate with business stakeholders to gather requirements and translate them into data science use cases.
- Build robust data pipelines and use statistical and machine learning methods to drive insights.
- Communicate complex technical findings clearly and concisely to both technical and non-technical audiences.
- Mentor junior team members and contribute to knowledge sharing and best practices within the team.
Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field.
- Minimum 8 years of relevant experience in data science, with a strong focus on optimization.
- Expertise in Python (NumPy, Pandas, SciPy, scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX.
- Experience with the end-to-end lifecycle of internal optimization projects.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Preferred Qualifications:
- Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction.
- Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker.
- Familiarity with dashboards and visualization tools like Power BI or Tableau.
- Prior experience in consulting or internal centers of excellence (CoE) is a plus.
Posted 1 week ago
10.0 - 20.0 years
15 - 30 Lacs
Chennai
Work from Office
We are seeking a highly experienced and technically adept Lead AI/ML Engineer to spearhead the development and deployment of cutting-edge AI solutions, with a focus on Generative AI and Natural Language Processing (NLP). The ideal candidate will be responsible for leading a high-performing team, architecting scalable ML systems, and driving innovation across AI/ML projects using modern toolchains and cloud-native technologies.
Key Responsibilities:
- Team Leadership: Lead, mentor, and manage a team of data scientists and ML engineers; drive technical excellence and foster a culture of innovation.
- AI/ML Solution Development: Design and deploy end-to-end machine learning and AI solutions, including Generative AI and NLP applications.
- Conversational AI: Build LLM-based chatbots and document intelligence tools using frameworks like LangChain, Azure OpenAI, and Hugging Face.
- MLOps Execution: Implement and manage the full ML lifecycle using tools such as MLflow, DVC, and Kubeflow to ensure reproducibility, scalability, and efficient CI/CD of ML models.
- Cross-functional Collaboration: Partner with business and engineering stakeholders to translate requirements into impactful AI solutions.
- Visualization & Insights: Develop interactive dashboards and data visualizations using Streamlit, Tableau, or Power BI for presenting model results and insights.
- Project Management: Own delivery of projects with clear milestones, timelines, and communication of progress and risks to stakeholders.
Required Skills & Qualifications:
- Languages & Frameworks: Proficient in Python and frameworks like TensorFlow, PyTorch, Keras, FastAPI, and Django.
- NLP & Generative AI: Hands-on experience with BERT, LLaMA, spaCy, LangChain, Hugging Face, and other LLM-based technologies.
- MLOps Tools: Experience with MLflow, Kubeflow, DVC, and ClearML for managing ML pipelines and experiment tracking.
- Visualization: Strong in building visualizations and apps using Power BI, Tableau, and Streamlit.
- Cloud & DevOps: Expertise with Azure ML, Azure OpenAI, Docker, Jenkins, and GitHub Actions.
- Databases & Data Engineering: Proficient with SQL/NoSQL databases and handling large-scale datasets efficiently.
Preferred Qualifications:
- Master's or PhD in Computer Science, AI/ML, Data Science, or a related field.
- Experience working in agile product development environments.
- Strong communication and presentation skills with technical and non-technical stakeholders.
Posted 1 week ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Systems Engineer, you will research, design, develop, simulate, and/or validate systems-level software, hardware, architecture, algorithms, and solutions that enable the development of cutting-edge technology. Qualcomm Systems Engineers collaborate across functional teams to meet and exceed system-level requirements and standards.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 8+ years of Systems Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 7+ years of Systems Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field and 6+ years of Systems Engineering or related work experience.
Principal Engineer, Machine Learning: We are looking for a Principal AI/ML Engineer with expertise in model inference, optimization, debugging, and hardware acceleration. This role will focus on building efficient AI inference systems, debugging deep learning models, optimizing AI workloads for low latency, and accelerating deployment across diverse hardware platforms. In addition to hands-on engineering, this role involves cutting-edge research in efficient deep learning, model compression, quantization, and AI hardware-aware optimization techniques. You will explore and implement state-of-the-art AI acceleration methods while collaborating with researchers, industry experts, and open-source communities to push the boundaries of AI performance.
This is an exciting opportunity for someone passionate about both applied AI development and AI research, with a strong focus on real-world deployment, model interpretability, and high-performance inference.
Education & Experience: 20+ years of experience in AI/ML development, with at least 5 years in model inference, optimization, debugging, and Python-based AI deployment. Master's or Ph.D. in Computer Science, Machine Learning, or AI.
Leadership & Collaboration:
- Lead a team of AI engineers in Python-based AI inference development.
- Collaborate with ML researchers, software engineers, and DevOps teams to deploy optimized AI solutions.
- Define and enforce best practices for debugging and optimizing AI models.
Key Responsibilities:
Model Optimization & Quantization:
- Optimize deep learning models using quantization (INT8, INT4, mixed precision, etc.), pruning, and knowledge distillation.
- Implement Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) for deployment.
- Familiarity with TensorRT, ONNX Runtime, OpenVINO, and TVM.
AI Hardware Acceleration & Deployment:
- Optimize AI workloads for Qualcomm Hexagon DSP, GPUs (CUDA, Tensor Cores), TPUs, NPUs, FPGAs, Habana Gaudi, and Apple Neural Engine.
- Leverage Python APIs for hardware-specific acceleration, including cuDNN, XLA, and MLIR.
- Benchmark models on AI hardware architectures and debug performance issues.
AI Research & Innovation:
- Conduct state-of-the-art research on AI inference efficiency, model compression, low-bit precision, sparse computing, and algorithmic acceleration.
- Explore new deep learning architectures (Sparse Transformers, Mixture of Experts, Flash Attention) for better inference performance.
- Contribute to open-source AI projects and publish findings in top-tier ML conferences (NeurIPS, ICML, CVPR).
- Collaborate with hardware vendors and AI research teams to optimize deep learning models for next-gen AI accelerators.
Details of Expertise:
- Experience optimizing LLMs, LVMs, and LMMs for inference.
- Experience with deep learning frameworks: TensorFlow, PyTorch, JAX, ONNX.
- Advanced skills in model quantization, pruning, and compression.
- Proficiency in CUDA programming and Python GPU acceleration using CuPy, Numba, and TensorRT.
- Hands-on experience with ML inference runtimes (TensorRT, TVM, ONNX Runtime, OpenVINO).
- Experience working with runtime delegates (TFLite, ONNX, Qualcomm).
- Strong expertise in Python programming, writing optimized and scalable AI code.
- Experience with debugging AI models, including examining computation graphs using Netron Viewer, TensorBoard, and ONNX Runtime Debugger.
- Strong debugging skills using profiling tools (PyTorch Profiler, TensorFlow Profiler, cProfile, Nsight Systems, perf, py-spy).
- Expertise in cloud-based AI inference (AWS Inferentia, Azure ML, GCP AI Platform, Habana Gaudi).
- Knowledge of hardware-aware optimizations (oneDNN, XLA, cuDNN, ROCm, MLIR, SparseML).
- Contributions to the open-source community.
- Publications in international forums, conferences, and journals.
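To illustrate the symmetric INT8 post-training quantization this posting mentions, a minimal pure-Python sketch (weights are invented; real toolchains like TensorRT or ONNX Runtime do this per-tensor or per-channel with calibration data):

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map floats in
    [-max|w|, max|w|] onto integers in [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the INT8 codes."""
    return [v * scale for v in q]

weights = [0.4, -1.0, 0.2, 0.9]   # invented FP32 weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)   # each entry within scale/2 of the original
```

The rounding error is bounded by half the scale, which is why outlier weights (which inflate the scale) hurt INT8 accuracy and motivate per-channel scales and QAT.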
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
New Delhi, Chennai, Bengaluru
Work from Office
Your day at NTT DATA - Cloud AI/GenAI Engineer (ServiceNow): We are seeking a talented AI/GenAI Engineer to join our team in delivering cutting-edge AI solutions to clients. The successful candidate will be responsible for implementing, developing, and deploying AI/GenAI models and solutions on cloud platforms. This role requires knowledge of ServiceNow modules like CSM and virtual agent development. The candidate should have strong technical aptitude, problem-solving skills, and the ability to work effectively with clients and internal teams.
What you'll be doing - Key Responsibilities:
- Cloud AI Implementation: Implement and deploy AI/GenAI models and solutions using various cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and frameworks (e.g., TensorFlow, PyTorch, LangChain, Vellum).
- Build Virtual Agents in ServiceNow: Design, develop, and deploy virtual agents using the ServiceNow agent builder.
- Integrate ServiceNow: Design and develop seamless integration of ServiceNow with other external AI systems.
- Agentic AI: Assist in developing agentic AI systems on cloud platforms, enabling autonomous decision-making and action-taking capabilities in AI solutions.
- Cloud-Based Vector Databases: Implement cloud-native vector databases (e.g., Pinecone, Weaviate, Milvus) or cloud-managed services for efficient similarity search and retrieval in AI applications.
- Model Evaluation and Fine-tuning: Evaluate and optimize cloud-deployed generative models using metrics like perplexity, BLEU score, and ROUGE score, and fine-tune models using techniques like prompt engineering, instruction tuning, and transfer learning.
- Security for Cloud LLMs: Apply security practices for cloud-based LLMs, including data encryption, IAM policies, and network security configurations.
- Client Support: Support client engagements by implementing AI requirements and contributing to solution delivery.
- Cloud Solution Implementation: Build scalable and efficient cloud-based AI/GenAI solutions according to architectural guidelines.
- Cloud Model Development: Develop and fine-tune AI/GenAI models using cloud services for specific use cases, such as natural language processing, computer vision, or predictive analytics.
- Testing and Validation: Conduct testing and validation of cloud-deployed AI/GenAI models, including performance evaluation and bias detection.
- Deployment and Maintenance: Deploy AI/GenAI models in production environments, ensuring seamless integration with existing systems and infrastructure.
- Cloud Deployment: Deploy AI/GenAI models in cloud production environments and integrate with existing systems.
Education: Bachelor's/Master's in Computer Science, AI, ML, or related fields.
Experience: 3-5 years of experience in engineering solutions, with a track record of delivering Cloud AI solutions. Should have at least 2 years of experience with ServiceNow and the ServiceNow agent builder.
Technical Skills:
- Proficiency in cloud AI/GenAI services and technologies across major cloud providers (AWS, Azure, GCP).
- Experience with cloud-native vector databases and managed similarity search services.
- Experience with ServiceNow modules like CSM and the virtual agent builder.
- Experience with security measures for cloud-based LLMs, including data encryption, access controls, and compliance requirements.
Programming Skills: Strong programming skills in languages like Python or R.
Cloud Platform Knowledge: Strong understanding of cloud platforms, their AI services, and best practices for deploying ML models in the cloud.
Communication: Excellent communication and interpersonal skills, with the ability to work effectively with clients and internal teams.
Problem-Solving: Strong problem-solving skills, with the ability to analyse complex problems and develop creative solutions.
Nice to have: Experience with serverless architectures for AI workloads.
Nice to have: Experience with ReactJS for rapid prototyping of cloud AI solution frontends.
Location: Delhi or Bangalore (with remote work options).
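The vector-database bullet above boils down to nearest-neighbour retrieval. A brute-force cosine-similarity sketch (document names and embeddings are invented; services like Pinecone or Milvus approximate the same query at scale with ANN indexes):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k):
    """Brute-force nearest-neighbour retrieval by cosine similarity --
    the primitive a vector database exposes as a similarity query."""
    ranked = sorted(index.items(), key=lambda item: cosine(query, item[1]), reverse=True)
    return [name for name, _vec in ranked[:k]]

index = {  # invented document embeddings
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "returns how-to": [0.8, 0.2, 0.1],
}
hits = top_k([1.0, 0.0, 0.0], index, k=2)
```

In a RAG-style virtual agent, the query vector would come from an embedding model and the top-k documents would be stuffed into the LLM prompt.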
Posted 2 weeks ago
5.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Develop and deploy ML pipelines using MLOps tools, build FastAPI-based APIs, support LLMOps and real-time inferencing, collaborate with DS/DevOps teams, and ensure performance and CI/CD compliance in AI infrastructure projects.
Required Candidate profile: Experienced Python developer with 4–8 years in MLOps, FastAPI, and AI/ML system deployment. Exposure to LLMOps, GenAI models, and containerized environments, with strong collaboration across the ML lifecycle.
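In the real system FastAPI would own routing and validation; as a framework-free sketch of the request/response contract such an inference API implements (model coefficients and field names are invented):

```python
import json

MODEL = {"bias": 0.5, "x1": 2.0, "x2": -1.0}  # stand-in for a trained model

def predict(features):
    """Linear score from the stand-in model's coefficients."""
    return MODEL["bias"] + sum(
        MODEL[name] * value for name, value in features.items() if name in MODEL
    )

def handle_predict(body):
    """What a POST /predict handler does: parse JSON, validate the
    payload, run inference, and serialize the result."""
    try:
        payload = json.loads(body)
        score = predict(payload["features"])
    except (KeyError, TypeError, AttributeError, json.JSONDecodeError):
        return json.dumps({"error": 'expected {"features": {...}}'})
    return json.dumps({"prediction": score})

resp = handle_predict('{"features": {"x1": 1.0, "x2": 2.0}}')
```

FastAPI replaces the manual try/except with Pydantic models, which is largely why it is the stack named in the posting.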
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Gurugram
Work from Office
Must-have skills: Marketing Analytics, Data Driven Merchandizing (Pricing/Promotions/Assortment Optimization), Statistical Timeseries Models, Store Clustering Algorithms, Descriptive Analytics, State Space Modeling, Mixed Effect Regression, NLP Techniques, Large Language Models, Azure ML Tech Stack, SQL, R, Python, AI/ML Model Development, Cloud Platform Experience (Azure/AWS/GCP), Data Pipelines, Client Management, Insights Communication
Good-to-have skills: Non-linear Optimization, Resource Optimization, Cloud Capability Migration, Scalable Machine Learning Architecture Design Patterns, Econometric Modeling, AI Capability Building, Industry Knowledge: CPG, Retail
Job Summary: As part of our Data & AI practice, you will join a worldwide network of smart and driven colleagues experienced in leading statistical tools, methods, and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.
Roles & Responsibilities:
- Work through the phases of the project.
- Define data requirements for the Data Driven Growth Analytics capability.
- Clean, aggregate, analyze, and interpret data, and carry out data quality analysis.
- Knowledge of market sizing and lift-ratio estimation.
- Experience working with non-linear optimization techniques.
- Proficiency in statistical timeseries models, store clustering algorithms, and descriptive analytics to support the merch AI capability.
- Hands-on experience in state space modeling and mixed effect regression.
- Develop AI/ML models in the Azure ML tech stack.
- Develop and manage data pipelines.
- Awareness of common design patterns for scalable machine learning architectures, as well as tools for deploying and maintaining machine learning models in production.
- Knowledge of cloud platforms and their use for pipelining, deploying, and scaling elasticity models.
- Working knowledge of resource optimization.
- Working knowledge of NLP techniques and large language models.
- Manage client relationships and expectations, and communicate insights and recommendations effectively.
- Capability building and thought leadership.
Logical Thinking: Able to think analytically, using a systematic and logical approach to analyze data, problems, and situations; notices discrepancies and inconsistencies in information and materials.
Task Management: Advanced level of task management knowledge and experience; should be able to plan own tasks, discuss and work on priorities, and track and report progress.
Client Relationship Development: Manage client expectations and develop trusted relationships; maintain strong communication with key stakeholders; act as a strategic advisor to clients on their data-driven marketing decisions.
Professional & Technical Skills:
- Must have at least 4+ years of work experience in marketing analytics with a reputed organization.
- 3+ years of experience in Data Driven Merchandizing, involving work experience in Pricing/Promotions/Assortment Optimization capabilities across retail clients.
- Strong understanding of econometric/statistical modeling: regression analysis, hypothesis testing, multivariate analysis, time series, optimization.
- Expertise in Azure ML, SQL, R, Python, and PySpark.
- Proficiency in non-linear optimization and resource optimization.
- Familiarity with design patterns for deploying and maintaining ML models in production.
- Strong command of marketing data and business processes in Retail and CPG.
- Hands-on with tools like Excel, Word, and PowerPoint for communication and documentation.
Additional Information:
- Bachelor's/Master's degree in Statistics, Economics, Mathematics, Computer Science, or related disciplines with an excellent academic record.
- Knowledge of the CPG and Retail industries.
- Proficient in Excel, MS Word, PowerPoint, etc.
- Strong client communication.
Qualification Experience: Must have at least 4+ years of work experience in marketing analytics with a reputed organization, including 3+ years of experience in Data Driven Merchandizing involving Pricing/Promotions/Assortment Optimization capabilities across retail clients. Educational Qualification: Bachelor's/Master's degree in Statistics, Economics, Mathematics, Computer Science, or related disciplines. Preferred advanced degrees include M.Tech or M.Phil/Ph.D. in Statistics, Econometrics, or a related field from reputed institutions.
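Pricing work like the above commonly estimates price elasticity by regressing log demand on log price. A minimal closed-form OLS sketch on synthetic data generated with a known elasticity of -1.5 (all numbers invented; a real engagement would use mixed-effect or state-space models on panel data):

```python
import math

def ols_slope(xs, ys):
    """Slope of a simple least-squares regression: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic weekly observations: demand = 1000 * price^-1.5
prices = [2.0, 2.5, 3.0, 4.0]
units = [1000.0 * p ** -1.5 for p in prices]

# Regress log(units) on log(price); the slope is the price elasticity.
elasticity = ols_slope([math.log(p) for p in prices], [math.log(u) for u in units])
```

An elasticity below -1 means revenue falls when price rises, which is the kind of insight the promotion-optimization capability feeds on.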
Posted 3 weeks ago
4.0 - 8.0 years
9 - 12 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Your Responsibilities:
- Build and automate ML pipelines (Kubeflow, MLflow, Airflow)
- Develop and optimize CI/CD workflows
- Deploy ML models at scale with Docker, Kubernetes, and cloud ML platforms
- Implement monitoring and governance frameworks
Must-Have Skills:
- Python, TensorFlow, PyTorch
- AWS, GCP, Azure ML expertise
- CI/CD, monitoring, and data versioning knowledge
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
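Orchestrators like Airflow and Kubeflow, named above, schedule pipeline steps by dependency order. A minimal sketch of that core idea (task names invented; assumes an acyclic dependency graph, since no cycle detection is done):

```python
def run_pipeline(tasks, deps):
    """Run pipeline steps in dependency order via depth-first
    traversal -- the scheduling core of a DAG orchestrator.
    tasks: name -> callable; deps: name -> list of upstream names."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)       # run all upstream steps first
        tasks[name]()           # then execute this step
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "train": lambda: log.append("train"),
    "extract": lambda: log.append("extract"),
    "features": lambda: log.append("features"),
}
deps = {"features": ["extract"], "train": ["features"]}
order = run_pipeline(tasks, deps)
```

Real orchestrators layer retries, parallelism, and persistence on top of exactly this topological ordering.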
Posted 3 weeks ago
7.0 - 12.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Build and deploy scalable ML models and MLOps pipelines in collaboration with data scientists.
Required Candidate profile: 6–12 yrs in ML development, Python, model tuning, and enterprise AI deployment.
Posted 1 month ago
5 - 10 years
25 - 30 Lacs
Noida
Hybrid
Job purpose: The AI Platform Lead is responsible for overseeing and driving the evolution and operational excellence of the AI Platform within the company. This role will manage key systems such as Dataiku, Azure ML, and Azure AI Services, ensuring they deliver scalable, robust, and innovative solutions to support enterprise initiatives. The successful candidate will combine technical expertise with effective leadership to foster an environment of continuous improvement and collaboration.
Reporting lines and interactions: Operationally reports to: Head of Data and Platforms. Main stakeholders and interactions: Head of AI & Automation Hub, BI/AI Platforms Manager, Business Users and requestors, Data & Solution architects.
Job scope - Key Responsibilities:
- Platform Management: Supervise platform operations and maintenance of Dataiku and Azure components. Ensure high availability, performance, and scalability of the different AI services. Suggest architecture evolutions to enhance system efficiency and accommodate the business strategy.
- Cloud Infrastructure Supervision: Oversee the maintenance and optimization of the Azure cloud infrastructure, ensuring robust security, reliability, and scalability. Supervise the AI platform's performance, implementing proactive measures to prevent downtime in the services and ensure seamless operation. Collaborate with cross-functional teams to align infrastructure improvements with business goals and technological advancements.
- Operational Excellence: Establish and enforce best practices, governance policies, and security standards across all AI operations. Monitor platform usage, performance metrics, and cost-effectiveness to drive continuous improvement.
- Evaluation of Emerging AI Services: Continuously review and assess new AI services and technologies that may be required for future projects. Ensure the integration of innovative solutions to enhance the platform's capabilities and support evolving business needs.
- Team Leadership & Collaboration: Guide a diverse team comprising DevOps, Dataiku administrators, MLOps specialists, analysts, and the Run team. Cultivate a cooperative atmosphere that promotes innovation and the sharing of ideas.
- Budget and Cost Management: Handle budgeting and cost management for the AI Platform, ensuring efficient resource allocation and adherence to budget constraints.
- Reporting: Provide regular updates to the Head of Data and Platforms on platform status, risks, and resource needs.
Profile - Requirements:
- Proven track record with more than 5 years of experience in AI/ML platform management or similar roles
- Strong knowledge of machine learning, data analytics, and cloud computing
- Extensive hands-on experience with technologies such as Dataiku, Azure ML, Azure AI Foundry, and related systems
- Well-versed in agile methodologies, DevOps practices, and CI/CD pipeline management
- Ability to manage multiple priorities in a fast-paced, dynamic environment
- Skilled in leading cross-functional teams and executing large-scale technology initiatives
- Skills and experience in Agile methodologies are a plus
Additional attributes:
- A proactive mindset with a focus on leveraging AI to drive business value
- Experience with budget oversight and resource allocation
- Knowledge of the software development lifecycle and agile methodologies
Work experience: Minimum of 5 years in AI/ML platform management or related fields.
Minimum education level: Advanced degree (Master's or PhD preferred) in Computer Science, Data Science, Engineering, or a related field.
Posted 1 month ago
4 - 6 years
18 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: MLOps Engineer
LOCATION: Bangalore (Hybrid)
Work timings: 12 pm - 9 pm
Budget: Maximum 20 LPA

ROLE OBJECTIVE
The MLOps Engineer will support various segments by enhancing and optimizing the deployment and operationalization of machine learning models. The primary objective is to collaborate with data scientists, data engineers, and business stakeholders to ensure efficient, scalable, and reliable ML model deployment and monitoring. The role involves integrating ML models into production systems, automating workflows, and maintaining robust CI/CD pipelines.

RESPONSIBILITIES
- Model Deployment and Operationalization: Implement, manage, and optimize the deployment of machine learning models into production environments.
- CI/CD Pipelines: Develop and maintain continuous integration and continuous deployment pipelines to streamline the deployment of ML models.
- Infrastructure Management: Design and manage scalable, reliable, and secure cloud infrastructure for ML workloads on platforms such as AWS and Azure.
- Monitoring and Logging: Implement monitoring, logging, and alerting mechanisms to ensure the performance and reliability of deployed models.
- Automation: Automate ML workflows, including data preprocessing, model training, validation, and deployment, using tools like Kubeflow, MLflow, and Airflow.
- Collaboration: Work closely with data scientists, data engineers, and business stakeholders to understand requirements and deliver solutions.
- Security and Compliance: Ensure that ML models and data workflows comply with security, privacy, and regulatory requirements.
- Performance Optimization: Optimize the performance of ML models and the underlying infrastructure for speed and cost-efficiency.

EXPERIENCE
- Years of Experience: 4-6 years in ML model deployment and operationalization.
- Technical Expertise: Proficiency in Python, Azure ML, AWS SageMaker, and other ML tools and frameworks.
- Cloud Platforms: Extensive experience with cloud platforms such as AWS and Azure.
- Containerization and Orchestration: Hands-on experience with Docker and Kubernetes for containerization and orchestration of ML workloads.

EDUCATION/KNOWLEDGE
- Educational Qualification: Master's degree (preferably in Computer Science) or B.Tech / B.E.
- Domain Knowledge: Familiarity with EMEA business operations is a plus.

OTHER IMPORTANT NOTES
- Flexible Shifts: Must be willing to work flexible shifts.
- Team Collaboration: Experience with team collaboration and cloud tools.
- Algorithm Building and Deployment: Proficiency in building and deploying algorithms on Azure/AWS platforms.

If you are interested in the opportunity, please share the following details along with your most recent resume to geeta.negi@compunnel.com:
- Total experience
- Relevant experience
- Current CTC
- Expected CTC
- Notice period (last working day if you are serving the notice period)
- Current location
- Skill 1 - rating out of 5 (mention the skill)
- Skill 2 - rating out of 5 (mention the skill)
- Skill 3 - rating out of 5 (mention the skill)
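A core piece of the CI/CD responsibility described above is an automated promotion gate: a newly trained candidate model reaches production only if it clears a quality bar and beats the current production model on a holdout metric. A minimal sketch in plain Python; the function name, metric, thresholds, and tolerance are illustrative assumptions, not part of the posting:

```python
def should_promote(candidate_auc: float, production_auc: float,
                   min_auc: float = 0.75, tolerance: float = 0.005) -> bool:
    """Illustrative CI/CD gate: promote the candidate model only if it
    clears an absolute quality floor AND beats production by a margin."""
    if candidate_auc < min_auc:
        # Reject outright: candidate fails the absolute quality bar
        return False
    # Require a minimum improvement over the live model to avoid churn
    return candidate_auc >= production_auc + tolerance

# A pipeline step would call this after offline evaluation, e.g.:
promoted = should_promote(candidate_auc=0.82, production_auc=0.80)  # True
```

In a real pipeline this check would typically run as a stage after model evaluation, with the metrics pulled from a tracking server such as MLflow rather than passed in by hand.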
Posted 1 month ago
10 - 15 years
20 - 25 Lacs
Kolkata
Work from Office
We are looking for a skilled Solution Architect with 10 to 15 years of experience to join our team in Bengaluru. The role involves designing and implementing scalable, reliable, and high-performing data architecture solutions.

### Roles and Responsibility
- Design and develop data architecture solutions that meet business requirements.
- Collaborate with stakeholders to identify needs and translate them into technical data solutions.
- Provide technical leadership and support to software development teams.
- Define and implement data management policies, procedures, and standards.
- Ensure data quality and integrity through data cleansing and validation.
- Develop and implement data security and privacy policies, ensuring compliance with regulations such as GDPR and HIPAA.
- Design and implement data migration plans from legacy systems to the cloud.
- Build data pipelines and workflows using Azure services such as Azure Data Factory, Azure Databricks, and Azure Stream Analytics.
- Develop and maintain data models and database schemas aligned with business requirements.
- Evaluate and select appropriate data storage technologies, including Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Troubleshoot data-related issues and provide technical support to data users.
- Stay updated on the latest trends and developments in data architecture and recommend improvements.
- Coordinate and interact with multiple teams for smooth operations.

### Job Requirements
- Proven experience as a Technical/Data Architect with over 10 years of product/solutions development experience.
- Hands-on experience with software/product architecture, design, development, testing, and implementation.
- Excellent communication skills, problem-solving aptitude, and organizational and leadership skills.
- Experience with Agile development methodology and strategic development/deployment methodologies.
- Understanding of source control (Git/VSTS), continuous integration/continuous deployment, and information security.
- Hands-on experience with cloud-based (Azure) product/platform development and implementation.
- Good experience designing and working with Data Lakes, Data Warehouses, and Azure-based ETL tools.
- Expertise in Azure Data Analytics with a thorough understanding of the Azure Data Platform tools.
- Hands-on experience with Azure services such as Data Factory, Databricks, Synapse, Data Lake Storage Gen2, Stream Analytics, Azure Spark, Azure ML, SQL Server, and Cosmos DB.
- Hands-on experience in information management and business intelligence projects, handling large client data sets across transfer, ingestion, processing, analysis, and visualization.
- Ability to work effectively in a team environment.
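The data-quality responsibility above (cleansing and validation before data lands in a lake or warehouse) usually takes the shape of a pre-load check that splits incoming rows into accepted and rejected sets, with a reason attached to each rejection. A minimal sketch in plain Python; the field names and rules are illustrative assumptions, not taken from the role description:

```python
def validate_records(records, required_fields, non_null_fields):
    """Toy pre-load validation step: return (valid_rows, rejected_rows),
    where each rejected row carries the reasons it failed."""
    valid, rejected = [], []
    for row in records:
        # Schema check: every required field must be present
        missing = [f for f in required_fields if f not in row]
        # Completeness check: listed fields must not be null/empty
        nulls = [f for f in non_null_fields if row.get(f) in (None, "")]
        if missing or nulls:
            rejected.append({"row": row, "missing": missing, "null": nulls})
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "country": "IN", "revenue": 100},
    {"id": 2, "country": None, "revenue": 50},   # fails the non-null check
    {"id": 3, "revenue": 75},                    # missing the "country" field
]
ok, bad = validate_records(rows, ["id", "country", "revenue"], ["country"])
```

In an Azure pipeline the same pattern would typically run inside a Databricks notebook or Data Factory data-flow activity, with rejected rows routed to a quarantine table rather than dropped.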
Posted 1 month ago
12 - 22 years
50 - 55 Lacs
Hyderabad, Gurugram
Work from Office
Job Summary
Director, Collection Platforms and AI

As a Director, you will be essential to driving customer satisfaction by delivering tangible business results to customers. You will work in the Enterprise Data Organization as an advocate and problem solver for the customers in your portfolio, as part of the Collection Platforms and AI team. You will use communication and problem-solving skills to support customers on their automation journey, applying emerging automation tools to build and deliver end-to-end automation solutions for them.

Team: Collection Platforms and AI
The Enterprise Data Organization's objective is to drive growth across S&P divisions, enhance speed and productivity in our operations, and prepare our data estate for the future, benefiting our customers. Automation therefore represents a massive opportunity to improve quality and efficiency, to expand into new markets and products, and to create customer and shareholder value. Agentic automation is the next frontier in intelligent process evolution, combining AI agents, orchestration layers, and cloud-native infrastructure to enable autonomous decision-making and task execution. To leverage the advancements in automation tools, it is imperative not only to invest in the technologies but also to democratize them, build literacy, and empower the workforce. The Collection Platforms and AI team's mission is to drive this automation strategy across S&P Global and help create a truly digital workplace. We are responsible for creating, planning, and delivering transformational projects for the company using state-of-the-art technologies and data science methods, developed either in house or in partnership with vendors. We are transforming the way we collect the essential intelligence our customers need to make decisions with conviction, delivering it faster and at scale while maintaining the highest quality standards.

What we're looking for
You will lead the design, development, and scaling of AI-driven agentic pipelines to transform workflows across S&P Global. This role requires a strategic leader who can architect end-to-end automation solutions using agentic frameworks, cloud infrastructure, and orchestration tools while managing senior stakeholders and driving adoption at scale.
- A visionary technical leader with knowledge of designing agentic pipelines and deploying AI applications in production environments.
- Understanding of cloud infrastructure (AWS/Azure/GCP), orchestration tools (e.g., Airflow, Kubeflow), and agentic frameworks (e.g., LangChain, AutoGen).
- Proven ability to translate business workflows into automation solutions, with emphasis on financial/data services use cases.
- An independent, proactive person who is innovative, adaptable, creative, and detail-oriented, with high energy and a positive attitude.
- Exceptional skills in listening to clients and articulating ideas and complex information clearly and concisely.
- Proven record of creating and maintaining strong relationships with senior members of client organizations, addressing their needs, and maintaining a high level of client satisfaction.
- Ability to identify the right solution for each type of problem, understanding the ultimate value of each project.
- Ability to operationalize this technology across S&P Global, delivering scalable solutions that enhance efficiency, reduce latency, and unlock new capabilities for internal and external clients.
- Exceptional communication skills, with experience presenting to C-level executives.

Responsibilities
- Engage with multiple client areas (external and internal), truly understand their problems, and then deliver and support solutions that fit their needs.
- Understand the existing S&P Global products and leverage them as necessary to deliver a seamless end-to-end solution to the client.
- Evangelize agentic capabilities through workshops, demos, and executive briefings.
- Educate and spread awareness within the external client base about automation capabilities to increase usage and idea generation.
- Increase automation adoption by focusing on distinct users and distinct processes.
- Deliver exceptional communication to multiple layers of client management.
- Provide automation training, coaching, and assistance specific to a user's role.
- Demonstrate strong working knowledge of automation features to meet evolving client needs.
- Maintain extensive knowledge of the suite of products and services offered, including ongoing enhancements and new offerings, and how they fulfill customer needs.
- Establish monitoring frameworks for agent performance, drift detection, and self-healing mechanisms.
- Develop governance models for ethical AI agent deployment and compliance.

Preferred Qualifications
- 12+ years of work experience, with 5+ years in the Automation/AI space.
- Knowledge of cloud platforms (AWS SageMaker, Azure ML, etc.), orchestration tools (Prefect, Airflow, etc.), and agentic toolkits (LangChain, LlamaIndex, AutoGen).
- Experience productionizing AI applications.
- Strong programming skills in Python and common AI frameworks.
- Experience with multi-modal LLMs and integrating vision and text for autonomous agents.
- Excellent written and oral communication in English.
- Excellent presentation skills, with a high degree of comfort speaking with senior executives, IT management, and developers.
- Hands-on ability to build quick prototypes/visuals to assist with high-level product concepts and capabilities.
- Experience deploying and managing applications on cloud-based infrastructure.
- A desire to work in a fast-paced and challenging work environment.
- Ability to work in cross-functional, multi-geographic teams.
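The agentic pipeline described above has a recognizable core: an orchestration loop in which an agent routes each task to a tool, executes it, and records every step in an audit trail that later feeds performance monitoring and drift detection. A toy sketch in plain Python; the tool registry, routing rule, and task format are illustrative assumptions (a production system would use a framework such as LangChain or AutoGen, with an LLM doing the routing):

```python
# Illustrative tool registry: name -> callable that handles a payload
TOOLS = {
    "extract": lambda payload: {"fields": payload.split(",")},
    "summarize": lambda payload: {"summary": payload[:20] + "..."},
}

def run_agent(tasks):
    """Toy orchestration loop: route each task to a tool, execute it,
    and keep an audit trail for monitoring and drift analysis."""
    trail = []
    for task in tasks:
        tool = TOOLS.get(task["tool"])
        if tool is None:
            # Unroutable tasks are logged, not silently dropped
            trail.append({"task": task, "status": "unroutable"})
            continue
        result = tool(task["payload"])
        trail.append({"task": task, "status": "ok", "result": result})
    return trail

trail = run_agent([
    {"tool": "extract", "payload": "name,price,date"},
    {"tool": "translate", "payload": "bonjour"},  # no such tool registered
])
```

The audit trail is what makes the governance and self-healing responsibilities tractable: rising rates of unroutable or failed tasks are exactly the signals a monitoring framework would alert on.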
Posted 1 month ago
7 - 10 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities
We are seeking an experienced and technically strong Machine Learning Engineer to design, implement, and operationalize ML models across Google Cloud Platform (GCP) and Microsoft Azure. The ideal candidate will have a robust foundation in machine learning algorithms and MLOps practices, plus experience deploying models into scalable cloud environments.

Responsibilities:
- Design, develop, and deploy machine learning solutions for use cases in prediction, classification, recommendation, NLP, and time series forecasting.
- Translate data science prototypes into production-grade, scalable models and pipelines.
- Implement and manage end-to-end ML pipelines using Azure ML (Designer, SDK, Pipelines), Data Factory, and Azure Databricks; and Vertex AI (Pipelines, Workbench), BigQuery ML, and Dataflow.
- Build and maintain robust MLOps workflows for versioning, retraining, monitoring, and CI/CD using tools like MLflow, Azure DevOps, and GCP Cloud Build.
- Optimize model performance and inference using techniques such as hyperparameter tuning, feature selection, model ensembling, and model distillation.
- Use and maintain model registries and feature stores, and ensure reproducibility and governance.
- Collaborate with cloud architects and software engineers to deliver ML-based solutions.
- Maintain and monitor model performance in production using Azure Monitor, Prometheus, Vertex AI Model Monitoring, etc.
- Document ML workflows, APIs, and system design for reusability and scalability.

Primary skills required (must-have experience):
- 5-7 years of experience in machine learning engineering or applied ML roles.
- Advanced proficiency in Python, with strong knowledge of libraries such as scikit-learn, Pandas, NumPy, XGBoost, LightGBM, TensorFlow, and PyTorch.
- Solid understanding of core ML concepts: supervised/unsupervised learning, cross-validation, bias-variance tradeoff, and evaluation metrics (ROC-AUC, F1, MSE, etc.).
- Hands-on experience deploying ML models using Azure ML (Endpoints, SDK), AKS, and ACI; and Vertex AI (Endpoints, Workbench), Cloud Run, and GKE.
- Familiarity with cloud-native tools for storage, compute, and orchestration: Azure Blob Storage, ADLS Gen2, and Azure Functions; GCP Storage, BigQuery, and Cloud Functions.
- Experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Strong understanding of CI/CD for ML, model testing, reproducibility, and rollback strategies.
- Experience implementing drift detection, model explainability (SHAP, LIME), and responsible AI practices.
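Several of the skills listed above (cross-validation, hyperparameter tuning, ROC-AUC evaluation) come together in one standard scikit-learn pattern. A minimal sketch; the synthetic dataset and the parameter grid are illustrative assumptions standing in for a real feature table:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data in place of a real feature table
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Cross-validated hyperparameter search scored by ROC-AUC
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)

best_auc = search.best_score_  # mean ROC-AUC across folds for the best C
best_c = search.best_params_["C"]
```

The same pattern scales up with richer grids (or `RandomizedSearchCV`), and the winning estimator in `search.best_estimator_` is what a pipeline would register and deploy to an endpoint.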
Posted 1 month ago