
360 AWS SageMaker Jobs - Page 13


6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Key Skills: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Focus Areas: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift, model drift); experiment tracking; MLOps architecture; REST API publishing

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.
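Drift detection, one of the focus areas above, typically reduces to a statistical comparison between the training-time feature distribution and live traffic. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the synthetic data and significance threshold are illustrative assumptions, not part of this posting:

```python
# Minimal data-drift check: flag drift when live data's distribution
# differs significantly from the training-time reference sample.
import numpy as np
from scipy.stats import ks_2samp

def detect_data_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sample KS test; returns True when drift is detected."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training distribution
live = rng.normal(loc=0.4, scale=1.0, size=5_000)       # shifted production traffic
print("drift detected:", detect_data_drift(reference, live))
```

In production such a check would run on a schedule (e.g., an Airflow task) and feed the retraining pipeline when drift is flagged.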

Posted 3 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: AIML Data Scientist
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Job Description:
1. Be a hands-on problem solver with a consultative approach, who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations best solve the problem.
b. Improve model accuracy to deliver greater business impact.
c. Estimate the business impact of deploying the model.
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge.
3. Work with tools and scripts for pre-processing data and feature engineering for model development (Python / R / SQL / cloud data pipelines).
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch.
5. Experience using Deep Learning models with text, speech, image, and video data:
a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
c. Knowledge of state-of-the-art Deep Learning algorithms.
6. Optimize and tune Deep Learning models for the best possible accuracy.
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., Power BI / Tableau.
8. Work with application teams to deploy models on the cloud as a service or on-prem:
a. Deploy models in a test/control framework for tracking.
b. Build CI/CD pipelines for ML model deployment.
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies.
10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AIML work and its impact.

Technology/Subject Matter Expertise:
- Sufficient expertise in machine learning and mathematical and statistical sciences
- Use of versioning and collaboration tools like Git / GitHub
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
- Ability to develop prototype-level ideas into solutions that scale to industrial-grade strength
- Ability to quantify and estimate the impact of ML models

Soft Skills Profile:
- Curiosity to think in fresh and unique ways with the intent of breaking new ground.
- Ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control.
- Ability to think ahead and anticipate the needs of the problem.
- Ability to communicate key messages effectively and articulate strong opinions in large forums.

Desirable Experience:
- Keen contributor to open-source communities and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development and application of Reinforcement Learning
- Knowledge of optimization/genetic algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy
- Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus

Posted 3 months ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Certified AWS Consultant with hands-on experience in AI platform development projects:
- Experience in setting up, maintaining, and developing cloud infrastructure
- Proficiency with Infrastructure as Code tools such as CloudFormation and/or Terraform
- Strong knowledge of AWS services including SageMaker, S3, EC2, etc.
- In-depth proficiency in at least one high-level programming language (Python, Java, etc.)
- Good understanding of data analytics use cases and AI/ML technologies

Primary Skills: SageMaker, S3, EC2; CloudFormation / Terraform; Java / Python; AI/ML
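Day-to-day SageMaker work in a role like this often means calling deployed endpoints from application code. A minimal boto3 sketch; the endpoint name, region, and payload schema are hypothetical placeholders:

```python
# Invoking a deployed SageMaker endpoint with boto3 and printing its prediction.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")
response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",        # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": [[0.1, 0.2, 0.3]]}),
)
print(json.loads(response["Body"].read()))
```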

Posted 3 months ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Udupi, Karnataka, India

On-site

As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort.

Responsibilities:
- Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions.
- Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis.
- Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions.
- Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses.
- Continuously optimize models based on user feedback and data-driven insights to improve performance.
- Integrate IVA/chat solutions with internal systems such as CRM and backend databases.
- Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards.
- Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- 5+ years of experience developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services.
- Expertise with Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect; AWS Bedrock experience, along with SageMaker, is an added advantage.
- Solid understanding of API integration and experience working with RESTful services.
- Strong problem-solving skills, attention to detail, and ability to work independently and in a team.
- Excellent communication skills, both written and verbal.
- Experience in financial services or fintech projects.
- Knowledge of data security best practices and compliance requirements in the financial sector.

This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
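For context on the Lex work above: sending user text to a Lex V2 bot is a single runtime call. A minimal boto3 sketch; the bot, alias, and session identifiers are hypothetical placeholders:

```python
# Sending one user utterance to an Amazon Lex V2 bot and printing its replies.
import boto3

lex = boto3.client("lexv2-runtime", region_name="us-east-1")
response = lex.recognize_text(
    botId="EXAMPLEBOTID",        # hypothetical bot id
    botAliasId="EXAMPLEALIAS",   # hypothetical alias id
    localeId="en_US",
    sessionId="demo-session-1",
    text="What is my account balance?",
)
for message in response.get("messages", []):
    print(message["content"])
```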

Posted 3 months ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Navi Mumbai, Maharashtra, India

On-site

As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort.

Responsibilities:
- Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions.
- Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis.
- Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions.
- Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses.
- Continuously optimize models based on user feedback and data-driven insights to improve performance.
- Integrate IVA/chat solutions with internal systems such as CRM and backend databases.
- Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards.
- Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- 5+ years of experience developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services.
- Expertise with Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect; AWS Bedrock experience, along with SageMaker, is an added advantage.
- Solid understanding of API integration and experience working with RESTful services.
- Strong problem-solving skills, attention to detail, and ability to work independently and in a team.
- Excellent communication skills, both written and verbal.
- Experience in financial services or fintech projects.
- Knowledge of data security best practices and compliance requirements in the financial sector.

This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.

Posted 3 months ago

Apply

0.0 - 4.0 years

0 - 4 Lacs

Navi Mumbai, Maharashtra, India

On-site

As an MLOps Engineer, you will be responsible for building and optimizing our machine learning infrastructure. You will leverage AWS services, containerization, and automation to streamline the deployment and monitoring of ML models. Your expertise in MLOps best practices, combined with your experience in managing large ML operations, will ensure our models are effectively deployed, managed, and maintained in production environments.

Responsibilities:

Machine Learning Operations (MLOps) & Deployment:
- Build, deploy, and manage ML models in production using AWS SageMaker, AWS Lambda, and other relevant AWS services.
- Develop automated pipelines for model training, validation, deployment, and monitoring to ensure high availability and low latency.
- Implement best practices for CI/CD in ML model deployment and manage versioning for seamless updates.

Infrastructure Development & Optimization:
- Design and maintain scalable, efficient, and secure infrastructure for machine learning operations using AWS services (e.g., EC2, S3, SageMaker, ECR, ECS/EKS).
- Leverage containerization (Docker, Kubernetes) to deploy models as microservices, optimizing for scalability and resilience.
- Manage infrastructure as code (IaC) using tools like Terraform, AWS CloudFormation, or similar, ensuring reliable and reproducible environments.

Model Monitoring & Maintenance:
- Set up monitoring, logging, and alerting for deployed models to track model performance, detect anomalies, and ensure uptime.
- Implement feedback loops to enable automated model retraining based on new data, ensuring models remain accurate and relevant over time.
- Troubleshoot and resolve issues in the ML pipeline and infrastructure to maintain seamless operations.

AWS Connect & Integration:
- Integrate machine learning models with Amazon Connect or similar services for customer interaction workflows, providing real-time insights and automation.
- Work closely with cross-functional teams to ensure models can be easily accessed and utilized by various applications and stakeholders.

Collaboration & Stakeholder Engagement:
- Collaborate with data scientists, engineers, and DevOps teams to ensure alignment on project goals, data requirements, and model deployment standards.
- Provide technical guidance on MLOps best practices and educate team members on efficient ML deployment and monitoring processes.
- Actively participate in project planning, architecture decisions, and roadmapping sessions to improve our ML infrastructure.

Security & Compliance:
- Implement data security and compliance measures, ensuring all deployed models meet organizational and regulatory standards.
- Apply appropriate data encryption and manage access controls to safeguard sensitive information used in ML models.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as an MLOps Engineer, DevOps Engineer, or similar role focused on machine learning deployment and operations.
- Strong expertise in AWS services, particularly SageMaker, EC2, S3, Lambda, and ECR/ECS/EKS.
- Proficiency in Python, including ML-focused libraries like scikit-learn and data manipulation libraries like pandas.
- Hands-on experience with containerization tools such as Docker and Kubernetes.
- Familiarity with infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation.
- Experience with CI/CD pipelines, Git, and version control for ML model deployment.
- MLOps & Model Management: Proven experience in managing large ML projects, including model deployment, monitoring, and maintenance.
- AWS Connect & Integration: Understanding of Amazon Connect for customer interactions and integration with ML models.
- Soft Skills: Strong communication and collaboration skills, with the ability to explain technical concepts to non-technical stakeholders.
- Experience with data streaming and message queues (e.g., Kafka, AWS Kinesis).
- Familiarity with monitoring tools like Prometheus, Grafana, or CloudWatch for tracking model performance.
- Knowledge of data governance, security, and compliance requirements related to ML data handling.
- Certification in AWS or relevant cloud platforms.

Work Schedule: This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
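The monitoring and alerting responsibilities above usually start with publishing custom metrics. A minimal sketch that pushes a model-latency metric to CloudWatch with boto3; the namespace, metric name, and endpoint name are hypothetical:

```python
# Publishing a custom model-latency metric to CloudWatch for alarms/dashboards.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_data(
    Namespace="MLOps/ModelMonitoring",  # hypothetical namespace
    MetricData=[{
        "MetricName": "PredictionLatencyMs",
        "Value": 42.0,
        "Unit": "Milliseconds",
        "Dimensions": [{"Name": "EndpointName", "Value": "my-model-endpoint"}],
    }],
)
```

A CloudWatch alarm on this metric can then page the on-call engineer or trigger an automated rollback.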

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and develop machine learning and deep learning models for tasks such as text classification, entity recognition, sentiment analysis, and document intelligence.
- Build and optimize NLP pipelines using models like BERT, GPT, LayoutLM, and Transformer architectures.
- Implement and experiment with Generative AI techniques using frameworks like Hugging Face, OpenAI APIs, and PyTorch/TensorFlow.
- Perform data collection, web scraping, data cleaning, and feature engineering for structured and unstructured data sources.
- Deploy ML models using Docker and Kubernetes, and implement CI/CD pipelines for scalable and automated workflows.
- Use cloud services (e.g., GCP, Azure AI) for model hosting, data storage, and compute resources.
- Collaborate with cross-functional teams to integrate ML models into production-grade applications.
- Apply MLOps practices including model versioning, monitoring, retraining pipelines, and reproducibility.

Technical Skills:
- Languages & Libraries: Python, Pandas, NumPy, scikit-learn, TensorFlow, Keras, PyTorch, OpenCV, Seaborn, XGBoost, NLTK, Hugging Face, BeautifulSoup, Selenium, Scrapy
- Modeling & NLP: Logistic Regression, Random Forest, SVM, CNN, RNN, Transformers, BERT, GPT, LLMs
- Tools & Platforms: Git, Docker, Kubernetes, CI/CD, Azure AI Services, GCP, Vertex AI
- Concepts: Machine Learning, Deep Learning, MLOps, Generative AI, Text Analytics, Predictive Analytics
- Databases & Querying: Basics of SQL
- Other Skills: Data Visualization (Matplotlib, Seaborn), Model Optimization, Version Control
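As a concrete instance of the NLP pipelines above, Hugging Face's pipeline API wraps a pretrained Transformer behind a single call. A minimal sentiment-analysis sketch; the model id is an illustrative public checkpoint, not one named in the posting:

```python
# Text classification with a pretrained Transformer via the Hugging Face pipeline API.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative checkpoint
)
print(classifier("The quarterly report exceeded expectations."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```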

Posted 3 months ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Role Overview
Join our Pune AI Center of Excellence to drive software and product development in the AI space. As an AI/ML Engineer, you'll build and ship core components of our AI products, owning end-to-end RAG pipelines, persona-driven fine-tuning, and scalable inference systems that power next-generation user experiences.

Key Responsibilities

Model Fine-Tuning & Persona Design:
- Adapt and fine-tune open-source large language models (LLMs) (e.g., CodeLlama, StarCoder) to specific product domains.
- Define and implement "personas" (tone, knowledge scope, guardrails) at inference time to align with product requirements.

RAG Architecture & Vector Search:
- Build retrieval-augmented generation systems: ingest documents, compute embeddings, and serve with FAISS, Pinecone, or ChromaDB.
- Design semantic chunking strategies and optimize context-window management for product scalability.

Software Pipeline & Product Integration:
- Develop production-grade Python data pipelines (ETL) for real-time vector indexing and updates.
- Containerize model services in Docker/Kubernetes and integrate into CI/CD workflows for rapid iteration.

Inference Optimization & Monitoring:
- Quantize and benchmark models for CPU/GPU efficiency; implement dynamic batching and caching to meet product SLAs.
- Instrument monitoring dashboards (Prometheus/Grafana) to track latency, throughput, error rates, and cost.

Prompt Engineering & UX Evaluation:
- Craft, test, and iterate prompts for chatbots, summarization, and content extraction within the product UI.
- Define and track evaluation metrics (ROUGE, BLEU, human feedback) to continuously improve the product's AI outputs.

Must-Have Skills:
- ML/AI Experience: 3-4 years in machine learning and generative AI, including 18 months on LLM-based products.
- Programming & Frameworks: Python, PyTorch (or TensorFlow), Hugging Face Transformers.
- RAG & Embeddings: Hands-on with FAISS, Pinecone, or ChromaDB and semantic chunking.
- Fine-Tuning & Quantization: Experience with LoRA/QLoRA, 4-bit/8-bit quantization, and Model Context Protocol (MCP).
- Prompt & Persona Engineering: Deep expertise in prompt-tuning and persona specification for product use cases.
- Deployment & Orchestration: Docker, Kubernetes fundamentals, CI/CD pipelines, and GPU setup.

Nice-to-Have:
- Multi-modal AI combining text, images, or tabular data.
- Agentic AI systems with reasoning and planning loops.
- Knowledge-graph integration for enhanced retrieval.
- Cloud AI services (AWS SageMaker, GCP Vertex AI, or Azure Machine Learning).
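The retrieval half of the RAG work above boils down to embedding documents and querying a vector index. A minimal sketch with sentence-transformers and FAISS; the embedding model, corpus, and result count are illustrative assumptions:

```python
# Embed a tiny corpus, index it in FAISS, and retrieve the best matches for a query.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
docs = [
    "SageMaker hosts models behind managed HTTPS endpoints.",
    "FAISS performs fast similarity search over dense vectors.",
    "LoRA fine-tunes large models by training low-rank adapters.",
]
embeddings = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine after normalization
index.add(embeddings)

query = model.encode(["How do I search embeddings quickly?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[doc_id]}")
```

In a full RAG pipeline the retrieved chunks would be packed into the LLM's context window, which is where the chunking and context-management work described above comes in.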

Posted 3 months ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Description

About Oracle APAC ISV Business
The Oracle APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that operates to serve a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward adopting cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors to SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our Cloud Engineering team works on pitch-to-production scenarios of bringing ISV solutions onto Oracle Cloud (#oci), aiming to provide a cloud platform for running their business that performs better, is more flexible and more secure, is compliant with open-source technologies, and offers multiple innovation options while remaining the most cost-effective. The team walks the path alongside our customers and is regarded by them as a trusted techno-business advisor.

Required Skills/Experience
Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AI services on OCI and/or other cloud platforms (AWS, Azure, Google Cloud).
- 8+ years of professional work experience.
- Proven experience with end-to-end AI solution implementation, from data integration to model deployment and optimization.
- Experience in design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG workflows.
- Proficiency in frameworks such as TensorFlow, PyTorch, scikit-learn, and Keras, and programming languages such as Python, R, or SQL. Experience with data wrangling, data pipelines, and data integration tools.
- Hands-on experience with LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Knowledge of containerization technologies such as Docker and orchestration tools like Kubernetes to scale AI models.
- Expertise in analytics platforms like Power BI, Tableau, or other business intelligence tools.
- Experience working with cloud platforms, particularly for AI and analytics workloads. Familiarity with cloud-based AI services like OCI AI, AWS SageMaker, etc.
- Experience building and optimizing data pipelines for large-scale AI/ML applications using tools like Apache Kafka, Apache Spark, Apache Airflow, or similar.
- Excellent communication skills, with the ability to clearly explain complex AI and analytics concepts to non-technical stakeholders.
- Proven ability to work with diverse teams and manage client expectations.
- Solid experience managing multiple implementation projects simultaneously while maintaining high-quality standards.
- Ability to develop and manage project timelines, resources, and budgets.

Career Level: IC4

Responsibilities

What You'll Do
As a solution specialist, you will work closely with our cloud architects and key stakeholders of ISVs to propagate awareness and drive implementation of OCI-native as well as open-source cloud-native technologies by ISV customers.
- Design, implement, and optimize AI and analytics solutions using OCI AI & Analytics Services that enable advanced analytics and AI use cases.
- Assist clients to architect and deploy AI systems that integrate seamlessly with existing client infrastructure, ensuring scalability, performance, and security.
- Support the deployment of machine learning models, including model training, testing, and fine-tuning; ensure scalability, robustness, and performance of AI models in production environments.
- Design, build, and deploy end-to-end AI solutions with a focus on LLMs and agentic AI workflows (including proactive, reactive, RAG, etc.).
- Help customers migrate from other cloud vendors' AI platforms or bring their own AI/ML models, leveraging OCI AI services and the Data Science platform.
- Design, propose, and implement solutions on OCI that help customers move seamlessly when adopting OCI for their AI requirements.
- Provide direction and specialist knowledge to clients in developing AI chatbots using ODA (Oracle Digital Assistant), OIC (Oracle Integration Cloud), and OCI GenAI services.
- Configure, integrate, and customize analytics platforms and dashboards on OCI.
- Implement data pipelines and ensure seamless integration with existing IT infrastructure.
- Drive discussions on OCI GenAI and the AI Platform across the region and accelerate implementation of OCI AI services into production.

Posted 3 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Key Skills: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Focus Areas: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift, model drift); experiment tracking; MLOps architecture; REST API publishing

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.

Posted 3 months ago

Apply

2.0 - 7.0 years

11 - 21 Lacs

Pune

Hybrid

Rapid7, a global cybersecurity company, is expanding its AI Centre of Excellence in India. We seek a Senior AI Engineer (MLOps) to build and manage MLOps infrastructure, deploy ML models, and support AI-powered threat detection systems.

Work Location: Amar Tech Park, Balewadi - Hinjawadi Rd, Patil Nagar, Balewadi, Pune, Maharashtra 411045

Key Responsibilities:
- Build and deploy ML/LLM models in AWS using SageMaker and Terraform.
- Develop APIs/interfaces using Python, TypeScript, and FastAPI/Flask.
- Manage data pipelines, the model lifecycle, observability, and guardrails.
- Collaborate with cross-functional teams; follow agile and DevOps best practices.

Requirements:
- 5+ years in software engineering, with 3-5 years in ML deployment (AWS).
- Proficient in Python, TypeScript, Docker, Kubernetes, and CI/CD.
- Experience with LLMs, GPU resources, and ML monitoring.

Nice to Have: NLP, model risk management, scalable ML systems.

Rapid7 values innovation, diversity, and ethical AI, making this role ideal for engineers seeking impact in cybersecurity.
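A minimal sketch of the API work above: a FastAPI service exposing a prediction endpoint. The request schema and stand-in scoring logic are illustrative placeholders, not Rapid7's actual stack:

```python
# A tiny FastAPI inference service; swap the stub scoring for a real model call.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: List[float]

@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    # Stub: replace with a real model (e.g., a SageMaker endpoint invocation).
    score = sum(request.features) / max(len(request.features), 1)
    return {"score": score}

# Run locally with: uvicorn main:app --reload
```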

Posted 3 months ago

Apply

15 - 24 years

20 - 35 Lacs

Kochi, Chennai, Thiruvananthapuram

Work from Office

Roles and Responsibilities:

Architecture & Infrastructure Design:
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership:
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment:
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI.
- Enhance model performance through tuning and efficient resource utilization.

Security & Compliance:
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration:
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance:
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.

Posted 3 months ago

Apply

11 - 14 years

35 - 50 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: PAN India

Key Skills: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Focus Areas: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift, model drift); experiment tracking; MLOps architecture; REST API publishing

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.

Posted 3 months ago

Apply

4 - 6 years

18 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

POSITION: MLOps Engineer
LOCATION: Bangalore (Hybrid)
Work timings: 12 pm - 9 pm
Budget: Maximum 20 LPA

ROLE OBJECTIVE
The MLOps Engineer will support various segments by enhancing and optimizing the deployment and operationalization of machine learning models. The primary objective is to collaborate with data scientists, data engineers, and business stakeholders to ensure efficient, scalable, and reliable ML model deployment and monitoring. The role involves integrating ML models into production systems, automating workflows, and maintaining robust CI/CD pipelines.

RESPONSIBILITIES
- Model Deployment and Operationalization: Implement, manage, and optimize the deployment of machine learning models into production environments.
- CI/CD Pipelines: Develop and maintain continuous integration and continuous deployment pipelines to streamline the deployment process of ML models.
- Infrastructure Management: Design and manage scalable, reliable, and secure cloud infrastructure for ML workloads using platforms like AWS and Azure.
- Monitoring and Logging: Implement monitoring, logging, and alerting mechanisms to ensure the performance and reliability of deployed models.
- Automation: Automate ML workflows, including data preprocessing, model training, validation, and deployment, using tools like Kubeflow, MLflow, and Airflow.
- Collaboration: Work closely with data scientists, data engineers, and business stakeholders to understand requirements and deliver solutions.
- Security and Compliance: Ensure that ML models and data workflows comply with security, privacy, and regulatory requirements.
- Performance Optimization: Optimize the performance of ML models and the underlying infrastructure for speed and cost-efficiency.

EXPERIENCE
- 4-6 years of experience in ML model deployment and operationalization.
- Technical Expertise: Proficiency in Python, Azure ML, AWS SageMaker, and other ML tools and frameworks.
- Cloud Platforms: Extensive experience with cloud platforms such as AWS and Azure.
- Containerization and Orchestration: Hands-on experience with Docker and Kubernetes for containerization and orchestration of ML workloads.

EDUCATION/KNOWLEDGE
- Educational Qualification: Master's degree (preferably in Computer Science) or B.Tech / B.E.
- Domain Knowledge: Familiarity with EMEA business operations is a plus.

OTHER IMPORTANT NOTES
- Flexible Shifts: Must be willing to work flexible shifts.
- Team Collaboration: Experience with team collaboration and cloud tools.
- Algorithm Building and Deployment: Proficiency in building and deploying algorithms on Azure/AWS platforms.

If you are interested in the opportunity, please share the following details along with your most recent resume to geeta.negi@compunnel.com: total experience, relevant experience, current CTC, expected CTC, notice period (last working day if you are serving notice), current location, and a rating out of 5 for each of your top three skills (mention the skill).
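Experiment tracking with MLflow, named in the automation bullet above, takes only a few calls. A minimal sketch; the experiment name, parameter, and metric are illustrative:

```python
# Logging one training run's parameters and metrics to MLflow.
import mlflow

mlflow.set_experiment("demo-churn-model")  # illustrative experiment name
with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", 0.91)
# Inspect logged runs locally with: mlflow ui
```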

Posted 3 months ago

Apply

5 - 10 years

25 - 30 Lacs

Mumbai, Navi Mumbai, Chennai

Work from Office

We are looking for an AI Engineer (Senior Software Engineer). Interested candidates, email your resume to mayura.joshi@lionbridge.com or WhatsApp 9987538863.

Responsibilities:
- Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks.
- Implement and fine-tune models to improve response relevance and contextual accuracy.
- Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding.
- Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities.
- Integrate AI models with enterprise applications and APIs.
- Optimize model inference for performance and scalability.
- Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives.
- Ensure ethical AI implementation, addressing bias, explainability, and data security.
- Stay updated on the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements:
- 8+ years of experience in software development according to development standards.
- Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain.
- Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies.
- Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate.
- Solid understanding of NLP, deep learning, and transformer architectures.
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.).
- Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
- Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments.
- Strong problem-solving and debugging skills.
- Excellent communication and teamwork abilities.
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.

Posted 3 months ago

Apply

12 - 22 years

50 - 55 Lacs

Hyderabad, Gurugram

Work from Office

Job Summary
Director, Collection Platforms and AI

As a Director, you will be essential to driving customer satisfaction by delivering tangible business results to customers. You will work in the Enterprise Data Organization and be an advocate and problem solver for the customers in your portfolio as part of the Collection Platforms and AI team. You will use communication and problem-solving skills to support customers on their automation journey with emerging automation tools, building and delivering end-to-end automation solutions for them.

Team: Collection Platforms and AI
The Enterprise Data Organization's objective is to drive growth across S&P divisions, enhance speed and productivity in our operations, and prepare our data estate for the future, benefiting our customers. Automation therefore represents a massive opportunity to improve quality and efficiency, to expand into new markets and products, and to create customer and shareholder value. Agentic automation is the next frontier in intelligent process evolution, combining AI agents, orchestration layers, and cloud-native infrastructure to enable autonomous decision-making and task execution. To leverage the advancements in automation tools, it's imperative not only to invest in the technologies but also to democratize them, build literacy, and empower the workforce. The Collection Platforms and AI team's mission is to drive this automation strategy across S&P Global and help create a truly digital workplace. We are responsible for creating, planning, and delivering transformational projects for the company using state-of-the-art technologies and data science methods, developed either in house or in partnership with vendors. We are transforming the way we collect the essential intelligence our customers need to make decisions with conviction, delivering it faster and at scale while maintaining the highest quality standards.

What we're looking for
You will lead the design, development, and scaling of AI-driven agentic pipelines to transform workflows across S&P Global. This role requires a strategic leader who can architect end-to-end automation solutions using agentic frameworks, cloud infrastructure, and orchestration tools while managing senior stakeholders and driving adoption at scale.
- A visionary technical leader with knowledge of designing agentic pipelines and deploying AI applications in production environments.
- Understanding of cloud infrastructure (AWS/Azure/GCP), orchestration tools (e.g., Airflow, Kubeflow), and agentic frameworks (e.g., LangChain, AutoGen).
- Proven ability to translate business workflows into automation solutions, with emphasis on financial/data services use cases.
- An independent, proactive person who is innovative, adaptable, creative, and detail-oriented, with high energy and a positive attitude.
- Exceptional skills in listening to clients and articulating ideas and complex information clearly and concisely.
- Proven record of creating and maintaining strong relationships with senior members of client organizations, addressing their needs and maintaining a high level of client satisfaction.
- Ability to identify the right solution for each type of problem, understanding the ultimate value of each project.
- Ability to operationalize this technology across S&P Global, delivering scalable solutions that enhance efficiency, reduce latency, and unlock new capabilities for internal and external clients.
- Exceptional communication skills, with experience presenting to C-level executives.

Responsibilities
- Engage with multiple client areas (external and internal), truly understand their problems, and then deliver and support solutions that fit their needs.
- Understand existing S&P Global products and leverage them as necessary to deliver a seamless end-to-end solution to the client.
- Evangelize agentic capabilities through workshops, demos, and executive briefings.
- Educate and spread awareness within the external client base about automation capabilities to increase usage and idea generation.
- Increase automation adoption by focusing on distinct users and distinct processes.
- Deliver exceptional communication to multiple layers of management for the client.
- Provide automation training, coaching, and assistance specific to a user's role.
- Demonstrate strong working knowledge of automation features to meet evolving client needs.
- Maintain extensive knowledge and literacy of the suite of products and services offered, including ongoing enhancements and new offerings, and how they fulfill customer needs.
- Establish monitoring frameworks for agent performance, drift detection, and self-healing mechanisms.
- Develop governance models for ethical AI agent deployment and compliance.

Preferred Qualifications
- 12+ years of work experience, with 5+ years in the automation/AI space.
- Knowledge of: cloud platforms (AWS SageMaker, Azure ML, etc.), orchestration tools (Prefect, Airflow, etc.), and agentic toolkits (LangChain, LlamaIndex, AutoGen).
- Experience productionizing AI applications.
- Strong programming skills in Python and common AI frameworks.
- Experience with multi-modal LLMs and integrating vision and text for autonomous agents.
- Excellent written and oral communication in English.
- Excellent presentation skills, with a high degree of comfort speaking with senior executives, IT management, and developers.
- Hands-on ability to build quick prototypes/visuals to assist with high-level product concepts and capabilities.
- Experience in deployment and management of applications utilizing cloud-based infrastructure.
- A desire to work in a fast-paced and challenging work environment.
- Ability to work in cross-functional, multi-geography teams.
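To make "agentic pipeline" concrete: the core pattern is a loop in which a planner selects tools until the task is done. A framework-free Python sketch of that loop; the stub tools and keyword routing stand in for the LLM-based planner a LangChain- or AutoGen-style system would use:

```python
# Skeleton of an agent loop: route a task to a tool, execute it, return the result.
# The keyword router below is a stand-in for an LLM-based planner.
from typing import Callable, Dict

def lookup_price(ticker: str) -> str:
    return f"{ticker}: 101.25"  # stubbed data tool

def summarize(text: str) -> str:
    return text[:80]  # stubbed summarization tool

TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_price": lookup_price,
    "summarize": summarize,
}

def run_agent(task: str) -> str:
    # A production agent would ask an LLM which tool to call and with what input.
    if "price" in task.lower():
        return TOOLS["lookup_price"](task.split()[-1])
    return TOOLS["summarize"](task)

print(run_agent("What is the price of SPGI"))
```

The monitoring frameworks mentioned above would wrap each tool call with logging, drift checks, and fallback behavior.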

Posted 3 months ago

Apply

3 - 7 years

4 - 7 Lacs

Hyderabad

Work from Office

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Collaborate and communicate effectively with product teams.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree with 4-6 years of experience in Computer Science, IT, or a related field; OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field; OR Diploma with 10-12 years of experience in Computer Science, IT, or a related field.

Functional Skills:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning of big data processing.
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and standard methodologies.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, OMOP.

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments).
- Certified Data Scientist (preferred on Databricks or cloud environments).
- Machine Learning certification (preferred on Databricks or cloud environments).
- SAFe for Teams certification (preferred).

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

Shift Information: This position requires you to work a later shift and may be assigned a second- or third-shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 months ago

Apply

3 - 6 years

6 - 10 Lacs

Chennai

Work from Office

Role Summary
As part of our AI-first strategy at Creatrix Campus, you'll play a critical role in deploying, optimizing, and maintaining Large Language Models (LLMs) like LLaMA, Mistral, and CodeS across our SaaS platform. This role is not limited to experimentation; it is about operationalizing AI at scale. You'll ensure our AI services are reliable, secure, cost-effective, and product-ready for higher education institutions in 25+ countries. You'll work across infrastructure (cloud and on-prem), MLOps, and performance optimization while collaborating with software engineers, AI developers, and product teams to embed LLMs into real-world applications like accreditation automation, intelligent student forms, and predictive academic advising.

Key Responsibilities

LLM Deployment & Optimization:
- Deploy, fine-tune, and optimize open-source LLMs (e.g., LLaMA, Mistral, CodeS, DeepSeek).
- Implement quantization (e.g., 4-bit, 8-bit) and pruning for efficient inference on commodity hardware.
- Build and manage inference APIs (REST/gRPC) for production use.

Infrastructure Management:
- Set up and manage on-premise GPU servers and VM-based deployments.
- Build scalable cloud-based LLM infrastructure using AWS (SageMaker, EC2), Azure ML, or GCP Vertex AI.
- Ensure cost efficiency by choosing appropriate hardware and job-scheduling strategies.

MLOps & Reliability Engineering:
- Develop CI/CD pipelines for model training, testing, evaluation, and deployment.
- Integrate version control for models, data, and hyperparameters.
- Set up logging, tracing, and monitoring tools (e.g., MLflow, Prometheus, Grafana) for model performance and failure detection.

Security, Compliance & Performance:
- Ensure data privacy (FERPA/GDPR) and enforce security best practices across deployments.
- Apply secure coding standards and implement RBAC, encryption, and network hardening for cloud/on-prem.

Cross-functional Integration:
- Work closely with AI solution engineers, backend developers, and product owners to integrate LLM services into the platform.
- Support performance benchmarking and A/B testing of AI features across modules.

Documentation & Internal Enablement:
- Document LLM pipelines, configuration steps, and infrastructure setup in internal playbooks.
- Create guides and reusable templates for future deployments and models.

Required Qualifications

Education: Bachelor's or Master's in Computer Science, AI/ML, Data Engineering, or a related field.

Technical Skills:
- Strong Python experience with ML libraries (e.g., PyTorch, Hugging Face Transformers).
- Familiarity with LangChain, LlamaIndex, or other RAG frameworks.
- Experience with Docker, Kubernetes, and API gateways (e.g., Kong, NGINX).
- Working knowledge of vector databases (FAISS, Pinecone, Qdrant).
- Familiarity with GPU deployment tools (CUDA, Triton Inference Server, Hugging Face Accelerate).

Experience:
- 3+ years in an AI/MLOps role, including experience in LLM fine-tuning and deployment.
- Hands-on work with model inference in production environments (both cloud and on-prem).
- Exposure to SaaS and modular product environments is a plus.
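The 4-bit quantization mentioned above is commonly applied at load time with bitsandbytes via Hugging Face Transformers. A minimal sketch; the model id is an illustrative open checkpoint and a CUDA GPU is assumed:

```python
# Loading an open LLM with 4-bit quantization to cut inference memory roughly 4x.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative checkpoint
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)
inputs = tokenizer("Summarize MLOps in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The quantized weights trade a small accuracy loss for the ability to serve a 7B-parameter model on a single commodity GPU, which is exactly the efficiency goal named in the responsibilities above.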

Posted 3 months ago

Apply

10 - 14 years

12 - 16 Lacs

Mumbai

Work from Office

Skill required: Delivery - Advanced Analytics
Designation: I&F Decision Sci Practitioner Assoc Mgr
Qualifications: Master of Engineering / Master's in Business Economics
Years of Experience: 10 to 14 years

What would you do?
Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility, and high-impact team, innovating and transforming the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.

What are we looking for?
- Extensive experience in leading Data Science and Advanced Analytics delivery teams.
- Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker (preferred), Azure, or GCP.
- Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc.
- Solid knowledge of more than two of the following: supervised and unsupervised learning, classification, regression, clustering, neural networks, ensemble modeling (random forest, boosted trees, etc.).
- Experience working with pricing models is a plus.
- Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain.
- Extensive experience in client engagement and business development.
- Ability to work in a global collaborative team environment.
- Quick learner who can independently deliver results.

Qualifications: Master's / Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.

Roles and Responsibilities:
- Lead a team of data scientists to build and deploy data science models that uncover deeper insights, predict future outcomes, and optimize business processes for clients.
- Refine and improve data science models based on feedback, new data, and evolving business needs.
- Analyze available data to identify opportunities for enhancing brand equity, improving retail margins, achieving profitable growth, and expanding market share for clients.
- Follow the multiple approaches Data Scientists in Operations use for project execution: adapting existing assets to Operations use cases, exploring third-party and open-source solutions for speed of execution and specific use cases, and engaging in fundamental research to develop novel solutions.
- Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
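As a concrete instance of the ensemble-modeling techniques listed above, a random-forest classifier trained and evaluated on synthetic data; the dataset and hyperparameters are illustrative:

```python
# Train and evaluate a random-forest classifier on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```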

Posted 3 months ago

Apply

7 - 9 years

19 - 25 Lacs

Bengaluru

Work from Office

About The Role Job Title: Industry & Function AI Decision Science Manager + S&C GN Management Level:07 - Manager Location: Primary Bengaluru, Secondary Gurugram Must-Have Skills: Consumer Goods & Services domain expertise , AI & ML, Proficiency in Python, R, PySpark, SQL , Experience in cloud platforms (Azure, AWS, GCP) , Expertise in Revenue Growth Management, Pricing Analytics, Promotion Analytics, PPA/Portfolio Optimization, Trade Investment Optimization. Good-to-Have Skills: Experience with Large Language Models (LLMs) like ChatGPT, Llama 2, or Claude 2 , Familiarity with optimization methods, advanced visualization tools (Power BI, Tableau), and Time Series Forecasting Job Summary :As a Decision Science Manager , you will lead the design and delivery of AI solutions in the Consumer Goods & Services domain. This role involves working closely with clients to provide advanced analytics and AI-driven strategies that deliver measurable business outcomes. Your expertise in analytics, problem-solving, and team leadership will help drive innovation and value for the organization. Roles & Responsibilities: Analyze extensive datasets and derive actionable insights for Consumer Goods data sources (e.g., Nielsen, IRI, EPOS, TPM). Evaluate AI and analytics maturity in the Consumer Goods sector and develop data-driven solutions. Design and implement AI-based strategies to deliver significant client benefits. Employ structured problem-solving methodologies to address complex business challenges. Lead data science initiatives, mentor team members, and contribute to thought leadership. Foster strong client relationships and act as a key liaison for project delivery. Build and deploy advanced analytics solutions using Accenture's platforms and tools. Apply technical proficiency in Python, Pyspark, R, SQL, and cloud technologies for solution deployment. Develop compelling data-driven narratives for stakeholder engagement. Collaborate with internal teams to innovate, drive sales, and build new capabilities. Drive insights in critical Consumer Goods domains such as: Revenue Growth Management Pricing Analytics and Pricing Optimization Promotion Analytics and Promotion Optimization SKU Rationalization/ Portfolio Optimization Price Pack Architecture Decomposition Models Time Series Forecasting Professional & Technical Skills: Proficiency in AI and analytics solutions (descriptive, diagnostic, predictive, prescriptive, generative). Expertise in delivering large scale projects/programs for Consumer Goods clients on Revenue Growth Management - Pricing Analytics, Promotion Analytics, Portfolio Optimization, etc. Deep and clear understanding of typical data sources used in RGM programs POS, Syndicated, Shipment, Finance, Promotion Calendar, etc. Strong programming skills in Python, R, PySpark, SQL, and experience with cloud platforms (Azure, AWS, GCP) and proficient in using services like Databricks and Sagemaker. Deep knowledge of traditional and advanced machine learning techniques, including deep learning. Experience with optimization techniques (linear, nonlinear, evolutionary methods). Familiarity with visualization tools like Power BI, Tableau. Experience with Large Language Models (LLMs) like ChatGPT, Llama 2. Certifications in Data Science or related fields. Additional Information: The ideal candidate has a strong educational background in data science and a proven track record in delivering impactful AI solutions in the Consumer Goods sector. 
This position offers opportunities to lead innovative projects and collaborate with global teams. Join Accenture to leverage cutting-edge technologies and deliver transformative business outcomes.
Qualifications
Experience: Minimum 7-9 years of experience in data science, particularly in the Consumer Goods sector.
Educational Qualification: Bachelor's or Master's degree in Statistics, Economics, Mathematics, Computer Science, or MBA (Data Science specialization preferred).
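
For illustration only, a minimal sketch of the pricing-analytics work this role describes: estimating own-price elasticity from weekly POS data with a log-log regression. The column names (units_sold, price, promo_flag) and toy numbers are hypothetical, not part of the role description.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy weekly POS observations; real work would use Nielsen/IRI/EPOS feeds.
pos = pd.DataFrame({
    "units_sold": [120, 95, 150, 80, 200, 110],
    "price": [2.5, 3.0, 2.2, 3.4, 1.9, 2.8],
    "promo_flag": [0, 0, 1, 0, 1, 0],
})
pos["log_units"] = np.log(pos["units_sold"])
pos["log_price"] = np.log(pos["price"])

# In a log-log model the log_price coefficient is the own-price elasticity.
fit = smf.ols("log_units ~ log_price + promo_flag", data=pos).fit()
print(fit.params["log_price"])

The log-log specification is a common starting point in RGM work precisely because the price coefficient reads directly as an elasticity; production models would add seasonality, distribution, and cross-price terms.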

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: AWS Architecture
Good-to-have skills: Amazon Web Services (AWS)
Minimum 3 year(s) of experience is required
Job Title: AWS Data Engineer
About The Role: We are seeking a skilled AWS Data Engineer with expertise in AWS services such as Glue, Lambda, SageMaker, CloudWatch, and S3, coupled with strong Python/PySpark development skills. The ideal candidate will have a solid grasp of ETL concepts, be proficient in writing complex SQL queries, and be capable of handling client interactions independently. They should demonstrate a track record of efficiently resolving tickets, tasks, bugs, and enhancements within stipulated timelines. Good communication skills are essential, and basic knowledge of databases is preferred.
Must-Have Skills: AWS Glue; AWS Architecture; Python/PySpark development; advanced SQL; independent client handling; problem-solving; communication.
Responsibilities:
Develop and maintain AWS-based data solutions using services such as Glue, Lambda, SageMaker, CloudWatch, DynamoDB, and S3.
Implement ETL processes within Glue jobs and PySpark scripts, ensuring optimal performance and reliability (see the sketch after this listing).
Write and optimize complex SQL queries to extract, transform, and load data from various sources.
Independently handle client interactions: understand requirements, provide technical guidance, and ensure client satisfaction.
Resolve tickets, tasks, bugs, and enhancements promptly, meeting defined resolution timeframes.
Communicate effectively with team members, stakeholders, and clients, providing updates, reports, and insights as required.
Maintain a working understanding of databases, supporting data-related activities and troubleshooting when necessary.
Stay updated on industry trends, AWS advancements, and best practices, contributing to continuous improvement initiatives within the team.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience working with AWS services, particularly Glue, Lambda, SageMaker, CloudWatch, and S3.
Strong proficiency in Python and/or PySpark development for data processing and analysis.
Solid understanding of ETL concepts, databases, and data warehousing principles.
Excellent problem-solving skills and ability to work independently or within a team.
Outstanding verbal and written communication skills, with the ability to interact professionally with clients and colleagues.
Ability to manage multiple tasks concurrently and prioritize effectively in a dynamic work environment.
Good to have: Basic knowledge of relational databases such as MySQL, PostgreSQL, or SQL Server.
Educational Qualification: Bachelor's degree; 15 years of full-time education.
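
For illustration only, a hedged skeleton of the kind of Glue ETL job named in the responsibilities above. The database, table, bucket, and column names are placeholders, and the script assumes the AWS Glue runtime, where the awsglue libraries are available.

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a crawler-cataloged source (hypothetical database/table names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Simple transform: drop rows with null order ids, keep a few columns.
cleaned = (
    raw.toDF()
    .dropna(subset=["order_id"])
    .select("order_id", "amount", "order_date")
)

# Land the curated output in S3 as Parquet (placeholder bucket).
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()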

Posted 3 months ago

Apply

5 - 10 years

27 - 30 Lacs

Kochi, Thiruvananthapuram

Work from Office

We are seeking a highly skilled and independent Senior Machine Learning Engineer (Contractor) to design, develop, and deploy advanced ML pipelines in an AWS environment.
Key Responsibilities:
Design, develop, and deploy robust and scalable machine learning models.
Build and maintain ML pipelines for data preprocessing, model training, evaluation, and deployment.
Collaborate with data scientists, data engineers, and product teams to identify ML use cases and develop prototypes.
Optimize models for performance, accuracy, and scalability in real-time or batch systems.
Monitor and troubleshoot deployed models to ensure ongoing performance.
Location - Kochi, Trivandrum, Remote.
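
For illustration only, a minimal sketch of the deployment step in such a pipeline: standing up a real-time endpoint for a trained scikit-learn model with the SageMaker Python SDK. The S3 artifact path, IAM role ARN, and inference.py entry point are placeholders, not project specifics.

import sagemaker
from sagemaker.sklearn import SKLearnModel

session = sagemaker.Session()

model = SKLearnModel(
    model_data="s3://example-bucket/models/model.tar.gz",  # placeholder artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role ARN
    entry_point="inference.py",  # script defining model_fn/predict_fn handlers
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Stand up a real-time HTTPS endpoint; the instance type is an example choice.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
print(predictor.endpoint_name)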

Posted 3 months ago

Apply

3 - 5 years

0 - 0 Lacs

Kochi

Work from Office

Job Summary: We are seeking a highly skilled Senior Python Developer with expertise in Machine Learning (ML), Large Language Models (LLMs), and cloud technologies. The ideal candidate will be responsible for end-to-end execution, from requirement analysis and discovery to the design, development, and implementation of ML-driven solutions. The role demands both technical excellence and strong communication skills to work directly with clients, delivering POCs, MVPs, and scalable production systems.
Key Responsibilities:
Collaborate with clients to understand business needs and identify ML-driven opportunities.
Independently design and develop robust ML models, time series models, deep learning solutions, and LLM-based systems.
Deliver Proof of Concepts (POCs) and Minimum Viable Products (MVPs) with agility and innovation.
Architect and optimize Python-based ML applications focusing on performance and scalability.
Utilize GitHub for version control, collaboration, and CI/CD automation.
Deploy ML models on cloud platforms such as AWS, Azure, or GCP.
Follow best practices in software development, including clean code, automated testing, and thorough documentation.
Stay updated on evolving trends in ML, LLMs, and the cloud ecosystem.
Work collaboratively with Data Scientists, DevOps engineers, and Business Analysts.
Must-Have Skills:
Strong programming experience in Python and frameworks such as FastAPI, Flask, or Django.
Solid hands-on expertise in ML using Scikit-learn, TensorFlow, PyTorch, Prophet, etc.
Experience with LLMs (e.g., OpenAI, LangChain, Hugging Face, vector search).
Proficiency in cloud services such as AWS (S3, Lambda, SageMaker), Azure ML, or GCP Vertex AI.
Strong grasp of software engineering concepts: OOP, design patterns, data structures.
Experience with version control systems (Git/GitHub/GitLab) and setting up CI/CD pipelines.
Ability to work independently and solve complex problems with minimal supervision.
Excellent communication and client interaction skills.
Required Skills: Python, Machine Learning, Machine Learning Models
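
For illustration only, a minimal sketch of the Python/FastAPI serving pattern this listing implies: a scikit-learn model exposed behind a prediction endpoint. The model file name and flat feature vector are assumptions for the example.

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumption: a pre-trained estimator saved to disk

class Features(BaseModel):
    values: list[float]  # a flat feature vector for one prediction

@app.post("/predict")
def predict(features: Features):
    # scikit-learn expects a 2-D array: one row per sample.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

Assuming the file is saved as main.py, it can be run with uvicorn main:app and exercised by POSTing a JSON body like {"values": [1.0, 2.0, 3.0]} to /predict.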

Posted 3 months ago

Apply

5 - 10 years

10 - 20 Lacs

Pune

Hybrid

Experienced AI Ops Engineer: the role focuses on deploying, monitoring, and scaling AI/GenAI models using MLOps, CI/CD, cloud platforms (AWS/Azure/GCP), Python, Kubernetes, MLflow, security, and automation.
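
For illustration only, a minimal sketch of experiment tracking with MLflow, one of the tools this listing names. The experiment name, parameters, and toy dataset are placeholders, not a prescribed setup.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data stands in for a real training set.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("model-ops-demo")  # placeholder experiment name
with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, clf.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(clf, "model")  # stores the model artifact with the run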

Posted 3 months ago

Apply

5 - 10 years

8 - 18 Lacs

Pune

Hybrid

Experienced AI Engineer with 4+ years of deploying scalable ML solutions on cloud platforms such as AWS, Azure, and GCP; skilled in Python, SQL, Kubernetes, and MLOps practices including CI/CD and model monitoring.
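
For illustration only, a minimal sketch of one facet of the model monitoring this listing mentions: a data-drift check comparing a live feature sample against the training distribution with a two-sample Kolmogorov-Smirnov test. The 0.05 threshold and synthetic data are assumptions, not a prescribed policy.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=1000)  # reference (training) window
live_feature = rng.normal(0.4, 1.0, size=1000)   # simulated shifted live data

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}); consider retraining")
else:
    print("No significant drift detected")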

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies