4.0 - 8.0 years
20 - 22 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design and implement AI models for document processing, OCR, and data extraction from financial documents
- Develop machine learning pipelines for invoice classification, vendor matching, and payment prediction
- Build AI-powered reconciliation algorithms that can identify discrepancies and suggest corrections
- Prototype and iterate on natural language processing solutions for financial workflow automation
- Collaborate on end-to-end AI system architecture, from data ingestion to model deployment
- Experiment with large language models and fine-tuning for financial domain applications
- Implement MLOps practices for model versioning, monitoring, and continuous improvement
Required Qualifications:
- 4+ years of experience in machine learning and AI development
- Strong proficiency in Python, with experience in ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Experience with computer vision and OCR technologies (OpenCV, Tesseract, cloud OCR APIs)
- Knowledge of NLP and experience with transformers, BERT, or similar architectures
- Familiarity with cloud ML platforms (AWS SageMaker, Google AI Platform, Azure ML)
- Experience with data processing frameworks (Pandas, NumPy, Apache Spark)
- Understanding of financial processes (invoicing, payments, reconciliation) preferred
- Comfortable with rapid prototyping and iterative development approaches
Preferred Qualifications:
- Experience with document AI and intelligent document processing
- Knowledge of financial regulations and compliance requirements
- Experience with vector databases and retrieval-augmented generation (RAG)
- Familiarity with API development and microservices architecture
- Experience with containerization (Docker, Kubernetes)
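For illustration only, a minimal sketch of the kind of OCR-and-classify workflow this role describes, assuming pytesseract (with the Tesseract binary installed), Pillow, and scikit-learn are available. The file name, example texts, and labels are hypothetical.

```python
# Illustrative sketch: OCR a scanned invoice and classify the extracted text.
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Extract raw text from a scanned document (hypothetical file name).
text = pytesseract.image_to_string(Image.open("invoice_001.png"))

# Tiny labelled corpus standing in for historical financial documents.
train_texts = [
    "Invoice #123 due 30 days net",
    "Purchase order for office supplies",
    "Credit note issued for returned goods",
]
train_labels = ["invoice", "purchase_order", "credit_note"]

# Simple text classification pipeline: TF-IDF features + logistic regression.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print(clf.predict([text]))  # predicted document class for the scanned page
```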
Posted 1 month ago
4.0 - 7.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled AWS Data Engineer with 4 to 7 years of experience in data engineering, preferably in the employment firm or recruitment services industry. The ideal candidate should have a strong background in computer science, information systems, or computer engineering. Roles and Responsibilities: Design and develop solutions based on technical specifications. Translate functional and technical requirements into detailed designs. Work with partners for regular updates, requirement understanding, and design discussions. Lead a team, providing technical/functional support, conducting code reviews, and optimizing code/workflows. Collaborate with cross-functional teams to achieve project goals. Develop and maintain large-scale data pipelines using the AWS Cloud platform services stack. Requirements: Strong knowledge of Python/PySpark programming languages. Experience with AWS Cloud platform services such as S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc. Basic SQL knowledge and exposure to data warehousing concepts like Data Warehouse, Data Lake, Dimensions, etc. Excellent communication skills and ability to work in a fast-paced environment. Ability to lead a team and provide technical/functional support. Strong problem-solving skills and attention to detail. A B.E./Master's degree in Computer Science, Information Systems, or Computer Engineering is required. The company offers a dynamic and supportive work environment, with opportunities for professional growth and development. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
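As an illustrative sketch of the kind of S3-based PySpark pipeline this role describes, assuming Spark is configured with S3 credentials and the hadoop-aws connector; bucket names and columns are hypothetical.

```python
# Minimal PySpark ETL sketch: read raw CSV from S3, clean it, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw data landed in an S3 "raw" zone (hypothetical bucket and schema).
raw = spark.read.csv("s3a://raw-bucket/orders/", header=True, inferSchema=True)

# Basic cleaning: de-duplicate, type the date column, drop invalid amounts.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount") > 0)
)

# Write to a curated zone, partitioned for downstream analytics.
curated.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-bucket/orders/")
spark.stop()
```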
Posted 1 month ago
3.0 - 8.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled MLOps professional with 3 to 11 years of experience to join our team in Hyderabad. The ideal candidate will have a strong background in Machine Learning, Artificial Intelligence, and Computer Vision. Roles and Responsibilities: Design, build, and maintain efficient, reusable, and tested code in Python and other applicable languages and library tools. Understand stakeholder needs and convey them to developers. Work on automating and improving development and release processes. Deploy Machine Learning (ML) models to large production environments. Drive continuous learning in AI and computer vision. Test and examine code written by others and analyze results. Identify technical problems and develop software updates and fixes. Collaborate with software developers and engineers to ensure development follows established processes and works as intended. Plan out projects and participate in project management decisions. Requirements: Minimum 3 years of hands-on experience with AWS services and products (Batch, SageMaker, Step Functions, CloudFormation/CDK). Strong Python experience. Minimum 3 years of experience with Machine Learning/AI or Computer Vision development/engineering. Ability to provide technical leadership to developers for designing and securing solutions. Understanding of Linux utilities and Bash. Familiarity with containerization using Docker. Experience with data pipeline frameworks, such as Metaflow, is preferred. Experience with Lambda, SQS, ALB/NLBs, SNS, and S3 is preferred. Practical experience deploying Computer Vision/Machine Learning solutions at scale into production. Exposure to technologies/tools such as Keras, Pandas, TensorFlow, PyTorch, Caffe, NumPy, DVC/CML.
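A hedged sketch of launching a SageMaker training job with the SageMaker Python SDK, the kind of step such a role would automate inside Step Functions or CI/CD. The container image URI, IAM role ARN, and S3 paths below are placeholders, not real resources.

```python
# Illustrative only: start a SageMaker training job from a custom training container.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<ecr-training-image-uri>",                                # placeholder ECR image
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",        # hypothetical role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",                                # hypothetical bucket
    sagemaker_session=session,
)

# Kick off training against data staged in S3; blocks until the job completes.
estimator.fit({"train": "s3://my-bucket/train/"})
```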
Posted 1 month ago
2.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
We're Hiring: Machine Learning Engineer - Data Science Focus (5-7+ Years Exp)
Location: Hybrid (Bangalore-based candidates preferred)
Type: Contract | Experience: 5-7+ years
Are you a data-driven problem solver passionate about turning raw data into powerful machine learning models? We're on the lookout for a Machine Learning Engineer to join our team and drive end-to-end ML solutions that power high-impact business decisions. This is not a software engineering or backend development role. This is a pure data-science-focused position, ideal for someone who thrives on experimentation, modeling, and real-world impact.
What You'll Do:
- Design & deploy ML models for classification, regression, NLP, forecasting, clustering & more
- Own the full model lifecycle: EDA, feature engineering, data prep, model training, and evaluation
- Work with both structured & unstructured datasets across diverse domains
- Evaluate models using business-critical and statistical metrics
- Collaborate with stakeholders to translate business problems into ML solutions
- Document experiments and findings, and ensure reproducibility
- Stay updated on the latest ML trends, tools, and frameworks
What You'll Bring:
- 5-7+ years of hands-on ML/Data Science experience
- Strong proficiency in Python (scikit-learn, pandas, NumPy, XGBoost, etc.)
- Deep knowledge of ML theory, statistics & model evaluation techniques
- Solid SQL skills and experience with ML workflows/tools (Jupyter, Git, MLflow)
- Clear communicator able to explain complex models to non-technical teams
Bonus Points For:
- Experience with deep learning (TensorFlow, PyTorch), NLP, CV, or time series
- Exposure to cloud ML platforms (SageMaker, Azure ML, GCP Vertex AI)
- Familiarity with responsible AI, explainability (SHAP, LIME), and bias mitigation
- Experience working with large-scale data or distributed systems (Spark, Dask)
Why Work With Us:
- High-impact role focused on solving real-world problems with ML
- Flexibility: hybrid/remote work options
- Agile, collaborative, and research-driven environment
- Opportunities to explore new tools, experiment, and grow your ML toolkit
Ready to take your data science skills to the next level? Apply now or DM us for more details. Let's build something intelligent together.
#hiring #machinelearning #datascience #MLengineer #remotework #Python #AIcareers #contractrole
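For illustration, a minimal end-to-end modelling loop of the kind this role centres on: split, cross-validate, train, and evaluate with both statistical and classification metrics. The data here is synthetic.

```python
# Illustrative model lifecycle sketch with scikit-learn on synthetic, imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a business dataset with a rare positive class.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)

# Cross-validated estimate before committing to a final fit.
print("CV ROC-AUC:", cross_val_score(model, X_train, y_train, scoring="roc_auc", cv=5).mean())

model.fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print("Test ROC-AUC:", roc_auc_score(y_test, proba))
print(classification_report(y_test, model.predict(X_test)))
```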
Posted 1 month ago
3.0 - 6.0 years
7 - 12 Lacs
Chennai
Work from Office
3+ years of AWS DevOps experience. Skilled in IaC tools, Docker, and GitHub Actions/Jenkins. Experience with GPU instances and AI pipelines (SageMaker, Deep Learning AMIs). Strong in Linux, networking, and cloud security.
Posted 1 month ago
10.0 - 17.0 years
9 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Dear Candidate, Please find the job description below. Role: MLOps + ML Engineer. Job Description: Role Overview: We are looking for a highly experienced MLOps and ML Engineer to lead the design, deployment, and optimization of machine learning systems at scale. This role requires deep expertise in MLOps practices, CI/CD automation, and AWS SageMaker, with a strong foundation in machine learning engineering and cloud-native development. Key Responsibilities: Architect and implement robust MLOps pipelines for model development, deployment, monitoring, and governance. Lead the operationalization of ML models using AWS SageMaker and other AWS services. Build and maintain CI/CD pipelines for ML workflows using tools like GitHub Actions, Jenkins, or AWS CodePipeline. Automate model lifecycle management including retraining, versioning, and rollback. Collaborate with data scientists, ML engineers, and DevOps teams to ensure seamless integration and scalability. Monitor production models for performance, drift, and reliability. Establish best practices for reproducibility, security, and compliance in ML systems. Required Skills: 10+ years of experience in ML Engineering, MLOps, or related fields. Deep hands-on experience with AWS SageMaker, Lambda, S3, CloudWatch, and related AWS services. Strong programming skills in Python and experience with Docker, Kubernetes, and Terraform. Expertise in CI/CD tools and infrastructure-as-code. Familiarity with model monitoring tools (e.g., Evidently, Prometheus, Grafana). Solid understanding of ML algorithms, data pipelines, and production-grade systems. Preferred Qualifications: AWS Certified Machine Learning Specialty or DevOps Engineer certification. Experience with feature stores, model registries, and real-time inference systems. Leadership experience in cross-functional ML/AI teams. Primary Skills: MLOps, ML Engineering, AWS services (SageMaker/S3/CloudWatch). Regards, Divya Grover, +91 8448403677
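As a simplified illustration of the drift monitoring responsibility above: a bare-bones per-feature drift check using a two-sample Kolmogorov-Smirnov test. Production setups would typically use a dedicated tool such as Evidently; the data below is synthetic.

```python
# Simplified drift check a scheduled monitoring job might run against production data.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def drift_report(reference: pd.DataFrame, current: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """Flag numeric features whose current distribution differs from the reference window."""
    rows = []
    for col in reference.select_dtypes(include=np.number).columns:
        stat, p_value = ks_2samp(reference[col].dropna(), current[col].dropna())
        rows.append({"feature": col, "ks_stat": stat, "p_value": p_value, "drifted": p_value < alpha})
    return pd.DataFrame(rows)

# Synthetic example: the 'amount' feature has shifted in production, 'age' has not.
ref = pd.DataFrame({"amount": np.random.normal(100, 10, 5000), "age": np.random.normal(40, 5, 5000)})
cur = pd.DataFrame({"amount": np.random.normal(120, 10, 5000), "age": np.random.normal(40, 5, 5000)})
print(drift_report(ref, cur))
```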
Posted 2 months ago
4.0 - 8.0 years
8 - 12 Lacs
Noida
Work from Office
Required Skills & Qualifications:
- 5-7 years of industry experience building and deploying machine learning models.
- Strong proficiency with machine learning algorithms including XGBoost, linear regression, and classification models.
- Hands-on experience with AWS SageMaker for model development, training, and deployment.
- Solid programming skills in Python (and relevant libraries such as scikit-learn, pandas, NumPy, etc.).
- Strong understanding of model evaluation metrics, cross-validation, hyperparameter tuning, and performance optimization.
- Experience in working with structured and unstructured datasets.
- Knowledge of best practices in model deployment and monitoring in a production environment (MLOps desirable).
- Familiarity with tools like Docker, Git, CI/CD pipelines, and AWS ML services.
- Excellent problem-solving skills, critical thinking, and attention to detail.
- Strong communication and collaboration skills.
Nice to Have:
- Experience with additional AWS services like Lambda, S3, Step Functions, CloudWatch.
- Exposure to deep learning frameworks like TensorFlow or PyTorch.
- Familiarity with DataOps practices and agile methodologies.
Mandatory Competencies: Data Science - Machine Learning; Python - NumPy; Data Science - Python; Python - Pandas; Data Science - AWS SageMaker
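For illustration, a hedged sketch of the XGBoost-plus-hyperparameter-tuning workflow described above, on synthetic data.

```python
# Illustrative XGBoost training with cross-validated hyperparameter search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=15, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

param_grid = {"max_depth": [3, 5], "n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}

search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out ROC-AUC:", search.score(X_test, y_test))
```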
Posted 2 months ago
7.0 - 12.0 years
8 - 12 Lacs
Gurugram
Work from Office
Experience in AWS SageMaker development, pipelines, and real-time and batch transform jobs. Expertise in AWS and Terraform/CloudFormation for IaC. Experience with AWS networking concepts. Strong coding skills in Python with TensorFlow, PyTorch, or scikit-learn.
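A hedged sketch of the real-time side of SageMaker serving: calling an already-deployed endpoint with boto3. The endpoint name and payload format are placeholders and depend entirely on how the model was packaged; batch transform jobs would be configured separately.

```python
# Illustrative only: invoke an existing SageMaker real-time endpoint.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",   # hypothetical endpoint name
    ContentType="text/csv",
    Body="42,0.73,1,0\n",              # one feature row in whatever format the model expects
)

# The response body is a stream containing the model's prediction payload.
print(response["Body"].read().decode("utf-8"))
```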
Posted 2 months ago
6.0 - 8.0 years
16 - 31 Lacs
Noida, Chennai, Bengaluru
Hybrid
Design, develop, and deploy AI/ML-powered applications using AWS services such as SageMaker, Bedrock, Lambda, Comprehend, Rekognition, and Lex. Collaborate with business and technical stakeholders to identify impactful AI use cases and translate them into scalable technical solutions. Build and maintain robust workflows to support AI model training, testing, and deployment. Integrate AI/ML capabilities with existing enterprise systems and applications. Prototype and evaluate AI models using pre-trained services or by developing custom models. Ensure AI applications meet performance, reliability, security, and compliance standards. Stay current with industry trends, tools, and best practices in AI and cloud-native application development. Mentor internal teams and contribute to knowledge-sharing around AI tools, frameworks, and methodologies.
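For illustration, a hedged sketch of calling two of the managed AWS AI services named above through boto3. The region, bucket, and object key are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
# Illustrative use of pre-trained AWS AI services: text sentiment and image labels.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
sentiment = comprehend.detect_sentiment(
    Text="The onboarding process was quick and painless.",
    LanguageCode="en",
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])

rekognition = boto3.client("rekognition", region_name="us-east-1")
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-images-bucket", "Name": "warehouse/photo.jpg"}},  # placeholders
    MaxLabels=5,
)
print([label["Name"] for label in labels["Labels"]])
```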
Posted 2 months ago
2.0 - 3.0 years
5 - 7 Lacs
Hyderabad
Remote
We are looking for a highly skilled Machine Learning Engineer with strong expertise in SQL, Python, data modeling, and machine learning, along with hands-on experience working with Snowflake and AWS services like SageMaker. The ideal candidate will also be proficient in data visualization tools and familiar with big data frameworks. Key Responsibilities: Design, build, and deploy scalable machine learning models using Python and SQL. Work extensively with Snowflake's advanced features for data storage, processing, and ML integrations. Utilize AWS tools, especially SageMaker, for model training, deployment, and monitoring. Develop robust data pipelines and workflows, ensuring accuracy and efficiency. Create dashboards and visualizations to communicate data-driven insights. Collaborate with cross-functional teams including data engineers, product managers, and business analysts. Requirements: Proficiency in SQL, Python, data modeling, and machine learning algorithms. Hands-on experience with Snowflake and AWS SageMaker (or similar cloud-based ML services). Experience with data visualization tools like Tableau, Power BI, or equivalent. Familiarity with big data technologies such as Spark, Hadoop, or Kafka is a plus. Experience with other ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) is welcome. Strong problem-solving skills and the ability to work independently in a remote environment.
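As an illustrative sketch of the first step of the Snowflake-to-SageMaker workflow described above: pulling a training table from Snowflake into pandas with the Snowflake connector. Connection parameters, table, and columns are placeholders; fetch_pandas_all requires the pandas extra of snowflake-connector-python.

```python
# Illustrative only: read a feature table from Snowflake into a pandas DataFrame.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ML_USER",              # placeholder credentials and account
    password="***",
    account="myorg-myaccount",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT customer_id, tenure, monthly_spend, churned FROM churn_features")
    df = cur.fetch_pandas_all()  # training data ready for feature engineering / SageMaker
    print(df.shape)
finally:
    conn.close()
```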
Posted 2 months ago
9.0 - 14.0 years
20 - 22 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Role Overview: We are looking for a highly experienced MLOps and ML Engineer to lead the design, deployment, and optimization of machine learning systems at scale. This role requires deep expertise in MLOps practices, CI/CD automation, and AWS SageMaker, with a strong foundation in machine learning engineering and cloud-native development. Key Responsibilities: Architect and implement robust MLOps pipelines for model development, deployment, monitoring, and governance. Lead the operationalization of ML models using AWS SageMaker and other AWS services. Build and maintain CI/CD pipelines for ML workflows using tools like GitHub Actions, Jenkins, or AWS CodePipeline. Automate model lifecycle management including retraining, versioning, and rollback. Collaborate with data scientists, ML engineers, and DevOps teams to ensure seamless integration and scalability. Monitor production models for performance, drift, and reliability. Establish best practices for reproducibility, security, and compliance in ML systems. Required Skills: 10+ years of experience in ML Engineering, MLOps, or related fields. Deep hands-on experience with AWS SageMaker, Lambda, S3, CloudWatch, and related AWS services. Strong programming skills in Python and experience with Docker, Kubernetes, and Terraform. Expertise in CI/CD tools and infrastructure-as-code. Familiarity with model monitoring tools (e.g., Evidently, Prometheus, Grafana). Solid understanding of ML algorithms, data pipelines, and production-grade systems. Preferred Qualifications: AWS Certified Machine Learning Specialty or DevOps Engineer certification. Experience with feature stores, model registries, and real-time inference systems. Leadership experience in cross-functional ML/AI teams. Primary Skills: MLOps, ML Engineering, AWS related services (SageMaker/S3/CloudWatch)
Posted 2 months ago
1.0 - 4.0 years
5 - 9 Lacs
Hyderabad
Work from Office
AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore
Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)
Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience in A/B testing, canary deployments, serverless ML
Next Steps:
- Register on Soul AI's website
- Get shortlisted & complete screening rounds
- Join our Expert Community and get matched with top AI projects
Don't just find a job. Build your future in AI with Soul AI!
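For illustration, a minimal MLflow experiment-tracking sketch of the kind of workflow automation this role mentions. The experiment name is hypothetical; by default runs are logged to a local ./mlruns directory.

```python
# Illustrative MLflow tracking: log parameters, metrics, and a versioned model artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
mlflow.set_experiment("demo-iris")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stored artifact can later feed a registry/deployment
```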
Posted 2 months ago
1.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore
Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)
Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience in A/B testing, canary deployments, serverless ML
Next Steps:
- Register on Soul AI's website
- Get shortlisted & complete screening rounds
- Join our Expert Community and get matched with top AI projects
Don't just find a job. Build your future in AI with Soul AI!
Posted 2 months ago
1.0 - 4.0 years
5 - 9 Lacs
Mumbai
Work from Office
AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore
Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)
Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience in A/B testing, canary deployments, serverless ML
Next Steps:
- Register on Soul AI's website
- Get shortlisted & complete screening rounds
- Join our Expert Community and get matched with top AI projects
Don't just find a job. Build your future in AI with Soul AI!
Posted 2 months ago
1.0 - 4.0 years
5 - 9 Lacs
Kolkata
Work from Office
AI Opportunities with Soul AI's Expert Community!
Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore
Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)
Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience in A/B testing, canary deployments, serverless ML
Next Steps:
- Register on Soul AI's website
- Get shortlisted & complete screening rounds
- Join our Expert Community and get matched with top AI projects
Don't just find a job. Build your future in AI with Soul AI!
Posted 2 months ago
5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior AI Cloud Operations Engineer
Seniority: 4-5, Offshore
Profile Summary: We're looking for a Senior AI Cloud Operations Engineer to start building a new AI Cloud Operations team, starting with this strategic position. We are searching for an experienced Senior AI Cloud Operations Engineer with deep expertise in AI technologies to lead our cloud-based AI infrastructure management. This role is integral to ensuring our AI systems' scalability, reliability, and performance, enabling us to deliver cutting-edge solutions. The ideal candidate will have a robust understanding of machine learning frameworks, cloud services architecture, and operations management.
Key Responsibilities:
- Cloud Architecture Design: Design, architect, and manage scalable cloud infrastructure tailored for AI workloads, leveraging platforms like AWS, Azure, or Google Cloud.
- System Monitoring and Optimization: Implement comprehensive monitoring solutions to ensure high availability and swift performance, utilizing tools like Prometheus, Grafana, or CloudWatch.
- Collaboration and Model Deployment: Work closely with data scientists to operationalize AI models, ensuring seamless integration with existing systems and workflows. Familiarity with tools such as MLflow or TensorFlow Serving can be beneficial.
- Automation and Orchestration: Develop automated deployment pipelines using orchestration tools like Kubernetes and Terraform to streamline operations and reduce manual intervention.
- Security and Compliance: Ensure that all cloud operations adhere to security best practices and compliance standards, including data privacy regulations like GDPR or HIPAA.
- Documentation and Reporting: Create and maintain detailed documentation of cloud configurations, procedures, and operational metrics to foster transparency and continuous improvement.
- Performance Tuning: Conduct regular performance assessments and implement strategies to optimize cloud resource utilization and reduce costs without compromising system effectiveness.
- Issue Resolution: Rapidly identify, diagnose, and resolve technical issues, minimizing downtime and ensuring maximum uptime.
Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- Professional Experience: 5+ years of extensive experience in cloud operations, particularly within AI environments. Demonstrated expertise in deploying and managing complex AI systems in cloud settings.
- Technical Expertise: Deep knowledge of cloud platforms (AWS, Azure, Google Cloud), including their AI-specific services such as AWS SageMaker or Google AI Platform.
- AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn, along with experience in ML model lifecycle management.
- Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes.
- Containerization and Microservices: Expertise in managing containerized applications using Docker and orchestrating services with Kubernetes.
- Soft Skills: Strong analytical, problem-solving, and communication skills, with the ability to work effectively both independently and in collaboration with cross-functional teams.
Preferred Qualifications:
- Advanced certifications in cloud services, such as AWS Certified Solutions Architect or Google Cloud Professional Data Engineer.
- Experience in advanced AI techniques such as deep learning or reinforcement learning.
- Knowledge of emerging AI technologies and trends to drive innovation within existing infrastructure.
List of Used Tools:
- Cloud Provider: Azure, AWS, or Google Cloud
- Performance & Monitoring: Prometheus, Grafana, or CloudWatch
- Collaboration and Model Deployment: MLflow or TensorFlow Serving
- Automation and Orchestration: Kubernetes and Terraform
- Security and Compliance: Data privacy regulations like GDPR or HIPAA
Posted 2 months ago
1.0 - 6.0 years
18 - 33 Lacs
Gurugram
Hybrid
RESPONSIBILITIES: Develop, productionize, and deploy scalable, resilient software solutions for operationalizing AI & ML. Deploy Machine Learning (ML) models and Large Language Models (LLMs) securely and efficiently, both in the cloud and on-premises, using state-of-the-art platforms, tools, and techniques. Provide effective model observability, monitoring, and metrics by instrumenting logging, dashboards, alerts, etc. In collaboration with Data Engineers, design and build pipelines for extraction, transformation, and loading of data from a variety of data sources for AI & ML models as well as RAG architectures for LLMs. Enable Data Scientists to work more efficiently by providing tools for experiment tracking and test automation. Ensure scalability of built solutions by developing and running rigorous load tests. Facilitate integration of AI & ML capabilities into the user experience by building APIs, UIs, etc. Stay current on new developments in AI & ML frameworks, tools, techniques, and architectures available for solution development, both proprietary and open source. Coach data scientists and data engineers on software development best practices to write scalable, maintainable, well-designed code.
Agile Project Work: Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate AI and MLOps solutions. Act as a positive champion for the broader organization to develop a stronger understanding of software design patterns that deliver scalable, maintainable, well-designed analytics solutions. Advocate for security and responsibility best practices and tools. Act as an expert on complex technical topics that require cross-functional consultation. Perform other duties as required.
QUALIFICATIONS: Experience applying continuous integration/continuous delivery best practices, including Version Control, Trunk-Based Development, Release Management, and Test-Driven Development. Experience with popular MLOps tools (e.g., Domino Data Lab, Dataiku, MLflow, AzureML, SageMaker) and frameworks (e.g., TensorFlow, Keras, Theano, PyTorch, Caffe, etc.). Experience with LLM platforms (OpenAI, Bedrock, NVAIE) and frameworks (LangChain, Langfuse, vLLM, etc.). Experience in programming languages common to data science such as Python, SQL, etc. Understanding of LLMs and supporting concepts (tokenization, guardrails, chunking, Retrieval-Augmented Generation, etc.). Knowledge of the ML lifecycle (wrangling data, model selection, model training, model validation, and deployment at scale) and experience working with data scientists. Familiar with at least one major cloud provider (Azure, AWS, GCP), including resource provisioning, connectivity, security, autoscaling, and IaC. Familiar with cloud data warehousing solutions such as Snowflake, Fabric, etc. Experience with Agile and DevOps software development principles/methodologies and working on teams focused on delivering business value. Experience influencing and building mindshare convincingly with any audience. Confident and experienced in public speaking. Ability to communicate complex ideas in a concise way. Fluent with popular diagramming and presentation software. Demonstrated experience in teaching and/or mentoring professionals.
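For illustration, a bare-bones sketch of the retrieval step in a RAG architecture like the one mentioned above: embed documents, embed a query, and return the closest chunks to place in an LLM prompt. TF-IDF is used here as a runnable stand-in for a real embedding model, and the documents and query are made up.

```python
# Minimal RAG-style retrieval: rank stored text chunks by similarity to a query.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Refunds are processed within 5 business days of approval.",
    "Invoices older than 90 days require manager sign-off.",
    "The VPN must be used when accessing production data remotely.",
]
vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = retrieve(question)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: " + question
print(prompt)  # this prompt would then be sent to the LLM of choice
```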
Posted 2 months ago
10.0 - 15.0 years
20 - 35 Lacs
Noida, Gurugram, Greater Noida
Hybrid
Role & responsibilities Machine Learning, Data Science, Model Customization [4+ Years] Exp with performing above on cloud services e.g AWS SageMaker and other tools AI/ Gen AI skills: [1 or 2 years] MCP, RAG pipelines, A2A, Agentic / AI Agents Framework Auto Gen, Lang graph, Lang chain, codeless workflow builders etc. Preferred candidate profile Build working POC and prototypes rapidly. Build / integrate AI driven solutions to solve the identified opportunities, challenges. Lead cross functional teams in identifying and prioritizing key business areas in which AI solutions can result benefits. Proposals to executives and business leaders on broad range of technology, strategy and standard, governance for AI. Work on functional design, process design (flow mapping), prototyping, testing, defining support model in collaboration with Engineering and business leaders. Articulate and document the solutions architecture and lessons learned for each exploration and accelerated incubation. Relevant IT Experience: - 10+ years of relevant IT experience in given technology
Posted 2 months ago
5.0 - 10.0 years
25 - 37 Lacs
Bengaluru
Remote
Role Purpose: As a Software Development Engineer in the MLOps team, you will help design and build scalable, high-performance infrastructure for deploying and serving machine learning models. Role Value: As an SDE III, you'll architect robust ML infrastructure to support efficient model deployment, serving, and optimization. You'll develop CI/CD pipelines for ML workflows, utilize compiler and hardware-based optimizations to improve inference performance, and drive cost efficiency. You will also influence development processes and tooling for continuous improvement. Key Responsibilities: Design and optimize model serving infrastructure with a focus on low latency and cost efficiency. Build scalable inference pipelines across different hardware acceleration options. Implement monitoring and observability solutions for ML systems. Collaborate with ML Engineers to define best practices for deployment. Develop enterprise-grade, cost-efficient ML solutions. Work closely with MLEs, QA, and DevOps teams in a distributed environment. Evaluate new technologies and contribute to system architecture decisions. Drive continuous improvements in ML infrastructure. Required Experience & Skills: 5+ years of experience in software engineering using Python. Hands-on experience with ML frameworks (especially PyTorch). Experience optimizing ML models using hardware accelerators (e.g., AWS Neuron, ONNX, TensorRT). Familiarity with AWS ML services and hardware-accelerated compute (e.g., SageMaker, Inferentia, Trainium). Proven ability to build and maintain serverless architectures on AWS. Strong understanding of event-driven patterns (SQS/SNS) and caching strategies. Proficiency with Docker and container orchestration tools. Solid grasp of RESTful API design and implementation. Focus on secure, high-quality code with experience using static code analysis tools. Strong problem-solving, algorithmic thinking, and communication skills. Nice to Have: Experience with model compilation, quantization, and inference benchmarking. Exposure to regulated environments with compliance-heavy requirements for cloud-based solutions.
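As an illustrative sketch of one common route to the inference optimizations this role mentions: export a small PyTorch model to ONNX and run it through ONNX Runtime, timing a batch. Model shapes and file names are arbitrary examples, not a production setup.

```python
# Illustrative PyTorch-to-ONNX export plus a simple ONNX Runtime latency measurement.
import time
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Toy model standing in for a real network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
dummy = torch.randn(1, 128)

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
)

session = ort.InferenceSession("model.onnx")
batch = np.random.randn(32, 128).astype(np.float32)

start = time.perf_counter()
outputs = session.run(None, {"input": batch})
print("Output shape:", outputs[0].shape)
print("ONNX Runtime latency: %.3f ms" % ((time.perf_counter() - start) * 1000))
```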
Posted 2 months ago
7.0 - 9.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Decision Science Manager + S&C GN
Management Level: 07 - Manager
Location: Primary Bengaluru, secondary Gurugram
Must-Have Skills: Consumer Goods & Services domain expertise, AI & ML, proficiency in Python, R, PySpark, SQL, experience with cloud platforms (Azure, AWS, GCP), expertise in Revenue Growth Management, Pricing Analytics, Promotion Analytics, PPA/Portfolio Optimization, and Trade Investment Optimization.
Good-to-Have Skills: Experience with Large Language Models (LLMs) like ChatGPT, Llama 2, or Claude 2; familiarity with optimization methods, advanced visualization tools (Power BI, Tableau), and Time Series Forecasting.
Job Summary: As a Decision Science Manager, you will lead the design and delivery of AI solutions in the Consumer Goods & Services domain. This role involves working closely with clients to provide advanced analytics and AI-driven strategies that deliver measurable business outcomes. Your expertise in analytics, problem-solving, and team leadership will help drive innovation and value for the organization.
Roles & Responsibilities: Analyze extensive datasets and derive actionable insights from Consumer Goods data sources (e.g., Nielsen, IRI, EPOS, TPM). Evaluate AI and analytics maturity in the Consumer Goods sector and develop data-driven solutions. Design and implement AI-based strategies to deliver significant client benefits. Employ structured problem-solving methodologies to address complex business challenges. Lead data science initiatives, mentor team members, and contribute to thought leadership. Foster strong client relationships and act as a key liaison for project delivery. Build and deploy advanced analytics solutions using Accenture's platforms and tools. Apply technical proficiency in Python, PySpark, R, SQL, and cloud technologies for solution deployment. Develop compelling data-driven narratives for stakeholder engagement. Collaborate with internal teams to innovate, drive sales, and build new capabilities. Drive insights in critical Consumer Goods domains such as Revenue Growth Management, Pricing Analytics and Pricing Optimization, Promotion Analytics and Promotion Optimization, SKU Rationalization/Portfolio Optimization, Price Pack Architecture, Decomposition Models, and Time Series Forecasting.
Professional & Technical Skills: Proficiency in AI and analytics solutions (descriptive, diagnostic, predictive, prescriptive, generative). Expertise in delivering large-scale projects/programs for Consumer Goods clients on Revenue Growth Management - Pricing Analytics, Promotion Analytics, Portfolio Optimization, etc. Deep and clear understanding of typical data sources used in RGM programs: POS, syndicated, shipment, finance, promotion calendar, etc. Strong programming skills in Python, R, PySpark, and SQL, experience with cloud platforms (Azure, AWS, GCP), and proficiency with services like Databricks and SageMaker. Deep knowledge of traditional and advanced machine learning techniques, including deep learning. Experience with optimization techniques (linear, nonlinear, evolutionary methods). Familiarity with visualization tools like Power BI and Tableau. Experience with Large Language Models (LLMs) like ChatGPT and Llama 2. Certifications in Data Science or related fields.
Additional Information: The ideal candidate has a strong educational background in data science and a proven track record in delivering impactful AI solutions in the Consumer Goods sector. This position offers opportunities to lead innovative projects and collaborate with global teams. Join Accenture to leverage cutting-edge technologies and deliver transformative business outcomes. About Our Company | Accenture.
Qualification: Experience: Minimum 7-9 years of experience in data science, particularly in the Consumer Goods sector. Educational Qualification: Bachelor's or Master's degree in Statistics, Economics, Mathematics, Computer Science, or MBA (Data Science specialization preferred).
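For illustration, a small log-log price-elasticity fit of the kind used in the Revenue Growth Management and pricing analytics work described above. The weekly sales data is simulated, so this is a sketch of the technique rather than a client deliverable.

```python
# Illustrative own-price elasticity estimate from a log-log regression on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
price = rng.uniform(2.0, 5.0, size=104)                        # two years of weekly prices
true_elasticity = -1.8
volume = 1000 * price ** true_elasticity * rng.lognormal(0, 0.1, size=104)  # noisy demand

# In a log-log model, the slope is the price elasticity of demand.
model = LinearRegression().fit(np.log(price).reshape(-1, 1), np.log(volume))
print("Estimated own-price elasticity: %.2f" % model.coef_[0])  # should recover roughly -1.8
```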
Posted 2 months ago
7.0 - 11.0 years
4 - 7 Lacs
Mumbai
Work from Office
Skill required: Delivery - Advanced Analytics. Designation: I&F Decision Sci Practitioner Specialist. Qualifications: Master of Engineering / Master's in Business Economics. Years of Experience: 7 to 11 years.
About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com.
What would you do? Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility and high-impact team, to innovate and transform the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.
What are we looking for? Extensive experience in leading Data Science and Advanced Analytics delivery teams. Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker (preferred), Azure, or GCP. Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc. Solid knowledge in more than two of the following: Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc.). Experience in working with pricing models is a plus. Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain. Extensive experience in client engagement and business development. Ability to work in a global collaborative team environment. Quick learner, able to deliver results independently. Qualifications: Master's/Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.
Roles and Responsibilities: Building data science models to uncover deeper insights, predict future outcomes, and optimize business processes for clients. Utilizing advanced statistical and machine learning techniques to develop models that can assist in decision-making and strategic planning. Refining and improving data science models based on feedback, new data, and evolving business needs. Data Scientists in Operations follow multiple approaches for project execution, from adapting existing assets to Operations use cases, to exploring third-party and open-source solutions for speed to execution and for specific use cases, to engaging in fundamental research to develop novel solutions. Data Scientists are expected to collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
Qualification: Master of Engineering / Master's in Business Economics
Posted 2 months ago
10.0 - 14.0 years
3 - 7 Lacs
Mumbai
Work from Office
Skill required: Delivery - Advanced Analytics. Designation: I&F Decision Sci Practitioner Assoc Mgr. Qualifications: Master of Engineering / Master's in Business Economics. Years of Experience: 10 to 14 years.
About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com.
What would you do? Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility and high-impact team, to innovate and transform the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.
What are we looking for? Extensive experience in leading Data Science and Advanced Analytics delivery teams. Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker (preferred), Azure, or GCP. Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc. Solid knowledge in more than two of the following: Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc.). Experience in working with pricing models is a plus. Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain. Extensive experience in client engagement and business development. Ability to work in a global collaborative team environment. Quick learner, able to deliver results independently. Qualifications: Master's/Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.
Roles and Responsibilities: Leading a team of data scientists to build and deploy data science models to uncover deeper insights, predict future outcomes, and optimize business processes for clients. Refining and improving data science models based on feedback, new data, and evolving business needs. Analyze available data to identify opportunities for enhancing brand equity, improving retail margins, achieving profitable growth, and expanding market share for clients. Data Scientists in Operations follow multiple approaches for project execution, from adapting existing assets to Operations use cases, to exploring third-party and open-source solutions for speed to execution and for specific use cases, to engaging in fundamental research to develop novel solutions. Data Scientists are expected to collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
Qualification: Master of Engineering / Master's in Business Economics
Posted 2 months ago
5.0 - 8.0 years
4 - 7 Lacs
Mumbai
Work from Office
Skill required: Delivery - Advanced Analytics. Designation: I&F Decision Sci Practitioner Sr Analyst. Qualifications: Any Graduation. Years of Experience: 5 - 8 years.
About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com.
What would you do? Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility and high-impact team, to innovate and transform the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.
What are we looking for? Extensive experience in leading Data Science and Advanced Analytics delivery teams. Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker (preferred), Azure, or GCP. Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc. Solid knowledge in more than two of the following: Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc.). Experience in working with pricing models is a plus. Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain. Extensive experience in client engagement and business development. Ability to work in a global collaborative team environment. Quick learner, able to deliver results independently. Qualifications: Master's/Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.
Qualification: Any Graduation
Posted 2 months ago
4.0 - 7.0 years
15 - 22 Lacs
Bengaluru
Work from Office
We are looking for a top-notch Senior Software Engineer who is passionate about writing clean, scalable, and secure code. If you take pride in building sustainable applications that meet customer needs and thrive in a collaborative, agile environment, this role is for you. You’ll work with experienced engineers across the enterprise and gain exposure to a variety of automation and cloud technologies. As a Python developer, you will contribute to complex assignments involving cloud-native architectures, automation pipelines, serverless computing, and object-oriented programming. Technical Skills: Proficiency in Python and cloud platforms (AWS, Azure) Experience with MLFlow, Kubernetes, Terraform, AWS SageMaker, Lambda, Step Functions Familiarity with configuration management tools (Terraform, Ansible, CloudFormation) Experience with CI/CD pipelines (e.g., Jenkins, Groovy scripts) Containerization and orchestration (Docker, Kubernetes, ECS, ECR) Understanding of serverless architecture and cloud-native application design Knowledge of infrastructure as code (IaC), IaaS, PaaS, and SaaS models Exposure to AI/ML technologies and model management is a plus Strong verbal and written communication skills Qualifications: Bachelor’s degree in Computer Science, Information Systems, or a related field 4+ years of experience in architecting, designing, and implementing cloud solutions on AWS and/or Azure Proven experience with both relational and non-relational database systems Experience leading data architecture or cloud transformation initiatives Strong troubleshooting and analytical skills Relevant certifications in AWS or Azure preferred Roles and Responsibilities Analyze and translate business requirements into scalable and resilient designs Own and continuously improve parts of the application in an agile environment Develop high-quality, maintainable products using best engineering practices Collaborate with other developers and share design philosophies across the team Work in cross-functional teams including DevOps, Data, UX, and QA Build and manage fully automated build/test/deployment environments Ensure high availability and provide rapid response to production issues Contribute to the design of useful, usable, and desirable products Adapt to new programming languages, platforms, and frameworks as needed
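For illustration, a minimal serverless-style sketch for the Lambda and API work this role touches: an AWS Lambda handler that validates an incoming event, calls a prediction helper, and returns an API Gateway-style response. The predict function is a hypothetical placeholder (in practice it might call a SageMaker endpoint), and the local smoke test uses a fake event.

```python
# Illustrative AWS Lambda handler for a simple prediction API.
import json

def predict(features: list[float]) -> float:
    """Placeholder for a real model call (e.g., a SageMaker endpoint invocation)."""
    return sum(features) / max(len(features), 1)

def lambda_handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
        score = predict(body.get("features", []))
        return {"statusCode": 200, "body": json.dumps({"score": score})}
    except (ValueError, TypeError) as exc:
        return {"statusCode": 400, "body": json.dumps({"error": str(exc)})}

if __name__ == "__main__":
    # Local smoke test with a fake API Gateway event.
    print(lambda_handler({"body": json.dumps({"features": [1.0, 2.0, 3.0]})}, None))
```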
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, We are seeking a Machine Learning Engineer to develop predictive models and deploy them into production. Ideal for professionals passionate about AI and data science. Key Responsibilities: Develop and train machine learning models Preprocess and analyze large datasets Deploy models using scalable infrastructure Collaborate with product teams to integrate ML solutions Required Skills & Qualifications: Strong knowledge of Python and ML libraries (scikit-learn, TensorFlow, PyTorch) Experience with data preprocessing and feature engineering Familiarity with model deployment techniques Bonus: Experience with cloud ML services (AWS SageMaker, Google AI Platform) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
Posted 2 months ago