8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior Artificial Intelligence Developer
Location: Pune
Experience: 3–8 Years
Company: Asmadiya Technologies Pvt. Ltd.

About the Role
Asmadiya Technologies is seeking a Senior AI Developer to lead the design and deployment of advanced AI solutions across enterprise-grade applications. You will architect intelligent systems, mentor junior engineers, and drive innovation in machine learning, deep learning, computer vision, and large language models. If you're ready to turn AI research into impactful production systems, we want to work with you.

Key Responsibilities
- Lead end-to-end design, development, and deployment of scalable AI/ML solutions in production environments.
- Architect AI pipelines and integrate models with enterprise systems and APIs.
- Collaborate cross-functionally with product managers, data engineers, and software teams to align AI initiatives with business goals.
- Optimize models for performance, scalability, and interpretability using MLOps practices.
- Conduct deep research and experimentation with the latest AI techniques (e.g., Transformers, Reinforcement Learning, GenAI).
- Review code, mentor team members, and set technical direction for AI projects.
- Own model governance, ethical AI considerations, and post-deployment monitoring.

Required Skills & Qualifications
- Bachelor's/Master's in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3–8 years of hands-on experience in AI/ML, including production model deployment.
- Advanced Python skills and deep expertise in libraries such as TensorFlow, PyTorch, Hugging Face, and Scikit-learn.
- Proven experience deploying models to production (REST APIs, containers, cloud ML services).
- Deep understanding of ML algorithms, optimization, statistical modeling, and deep learning.
- Familiarity with tools like MLflow, Docker, Kubernetes, Airflow, and CI/CD pipelines for ML.
- Experience with cloud AI/ML services (AWS SageMaker, GCP Vertex AI, or Azure ML).

Preferred Skills
- Hands-on with LLMs and GenAI tools (OpenAI, LangChain, RAG architecture, vector DBs).
- Experience in NLP, computer vision, or recommendation systems at scale.
- Knowledge of model explainability (SHAP, LIME), bias detection, and AI ethics.
- Strong understanding of software engineering best practices, microservices, and API architecture.

What We Offer
✅ Leadership role in cutting-edge AI product development
✅ Influence on AI strategy and technical roadmap
✅ Exposure to enterprise and global AI projects
✅ Fast-paced, growth-focused work environment
✅ Flexible work hours, supportive leadership, and a collaborative team

Apply now by sending your resume to: careers@asmadiya.com
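The posting emphasizes deploying models behind REST APIs. As a rough, framework-free sketch of that serving pattern (the model, its weights, and the request shape are entirely hypothetical; a real service would put this handler behind FastAPI or Flask and load a trained TensorFlow/PyTorch/scikit-learn model instead):

```python
import json

class SentimentModel:
    """Stand-in for a trained model; a toy linear scorer with illustrative weights."""
    def predict(self, features):
        weights = [0.4, -0.2, 0.1]
        score = sum(w * x for w, x in zip(weights, features))
        return "positive" if score >= 0 else "negative"

# Load the model once at service start-up, not per request.
MODEL = SentimentModel()

def predict_handler(request_body: str) -> str:
    """What a POST /predict endpoint typically does: parse JSON,
    validate the input, run inference, and return a JSON response."""
    payload = json.loads(request_body)
    features = payload["features"]
    if len(features) != 3:
        return json.dumps({"error": "expected 3 features"})
    return json.dumps({"label": MODEL.predict(features)})

print(predict_handler('{"features": [1.0, 0.5, 2.0]}'))
```

Containerizing this (Docker) and fronting it with an API gateway or cloud ML service is then an operational concern rather than a code change.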
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AI/ML Engineer
Experience: 5–8 Years
Location: Hybrid – Pune / Bangalore / Hyderabad (1–2 days/week onsite)

About The Role
We are looking for an experienced AI/ML Engineer to join our growing team. This role demands a strong foundation in Generative AI, LLMs, and RAG pipelines, coupled with robust software engineering and backend development skills. You will be responsible for building scalable AI solutions, contributing to system architecture, and driving end-to-end project ownership from design to deployment.

Key Responsibilities
- Design, develop, and deploy Generative AI applications using LLMs, LangChain, LangGraph, and RAG pipelines.
- Build scalable backend services and APIs using FastAPI in a microservices architecture.
- Collaborate with cross-functional teams to translate business requirements into robust AI/ML solutions.
- Own end-to-end technical delivery of AI projects, including architectural decisions, implementation, and deployment.
- Apply strong ML/NLP fundamentals to build and fine-tune models that solve real-world problems.
- Write clean, modular, and scalable Python code with a strong focus on performance and reliability.
- Mentor junior engineers and contribute to engineering best practices.

Required Skills & Qualifications
- 5–8 years of experience in AI/ML engineering or software development with a focus on machine learning and backend systems.
- Strong hands-on experience with Generative AI technologies: LLMs, LangChain, LangGraph, and RAG.
- Proficiency in FastAPI and building production-ready microservices.
- Deep understanding of ML/NLP concepts, data preprocessing, and model deployment.
- Excellent Python programming skills and a solid grasp of algorithmic problem-solving.
- Proven ability to take technical ownership and clearly explain project architecture and components.
- Experience with version control (Git), CI/CD pipelines, and cloud-based development (AWS/GCP/Azure) is a plus.

Preferred Qualifications
- Prior experience deploying AI/ML solutions in enterprise environments.
- Familiarity with vector databases, embeddings, and model monitoring tools.
- Exposure to MLOps tools and frameworks (e.g., MLflow, Weights & Biases).

(ref:hirist.tech)
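The RAG pipelines this role centers on hinge on a retrieval step: rank stored document chunks by embedding similarity to the query, then feed the top hits to the LLM as context. A minimal sketch of just that step, with hard-coded toy embeddings standing in for a real embedding model and vector database:

```python
import math

# Toy document store with hypothetical pre-computed embeddings. In production
# these vectors would come from an embedding model and live in a vector DB.
DOCS = [
    ("refund policy",  [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.2]),
    ("warranty terms", [0.5, 0.5, 0.0]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_embedding, k=2):
    """Retrieval step of RAG: rank stored chunks by similarity to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_embedding, d[1]), reverse=True)
    return [title for title, _ in ranked[:k]]

# The retrieved chunks would then be inserted into the LLM prompt as context.
print(retrieve([0.85, 0.15, 0.05]))
```

Frameworks like LangChain wrap this loop (plus prompt assembly and the LLM call) behind their own abstractions, but the ranking logic is the same idea.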
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
haryana
On-site
We are looking for a highly skilled AI/ML Engineer with expertise in developing machine learning solutions, utilizing graph databases like Neo4j, and constructing scalable production systems. As our ideal candidate, you should have a strong passion for applying artificial intelligence, machine learning, and data science techniques to solve real-world problems, along with experience in complex rules, logic, and reasoning systems.

Your responsibilities will include designing, developing, and deploying machine learning models and algorithms for production environments, ensuring their scalability and robustness. You will utilize graph databases, particularly Neo4j, to model, query, and analyze data relationships in large-scale connected data systems. Building and optimizing ML pipelines so that they are production-ready and capable of handling real-time data volumes will be a crucial aspect of your role.

In addition, you will develop rule-based systems and collaborate with data scientists, software engineers, and product teams to integrate ML solutions into existing products and platforms. Implementing algorithms for entity resolution, recommendation engines, fraud detection, and other graph-related tasks using graph-based ML techniques will also be part of your responsibilities. You will work with large datasets and perform exploratory data analysis, feature engineering, and model evaluation. Post-deployment, you will monitor, test, and iterate on ML models to ensure continuous improvement in model performance and adaptability.

Furthermore, you will participate in architectural decisions to ensure the efficient use of graph databases and ML models, while staying up to date with the latest advancements in AI/ML research, particularly in graph-based machine learning, reasoning systems, and logical AI.

Requirements:
- Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field.
- 1+ years of experience in AI/ML engineering or a related field, with hands-on experience building and deploying ML models in production environments, including personal projects using graph databases.
- Proficiency in Neo4j or other graph databases, with a deep understanding of the Cypher query language and graph theory concepts.
- Strong programming skills in Python, Java, or Scala, along with experience using ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with machine learning pipelines and tools like Airflow, Kubeflow, or MLflow for model tracking and deployment.
- Hands-on experience with rule engines or logic programming systems (e.g., Drools, Prolog).
- Experience with cloud platforms such as AWS, GCP, or Azure for ML deployments.
- Familiarity with containerization and orchestration technologies like Docker and Kubernetes.
- Experience working with large datasets, SQL/NoSQL databases, and handling data preprocessing at scale.
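One common graph-based entity-resolution heuristic the posting alludes to: two nodes that share most of their neighbours in a relationship graph are candidate duplicates. A toy sketch in plain Python (in the Neo4j setting the neighbourhoods would come from a Cypher query such as `MATCH (n)--(m) RETURN n, m`; the records and threshold here are illustrative only):

```python
# Each entity maps to the set of its neighbours in the relationship graph
# (employer, city, phone, email...). Hard-coded toy data for illustration.
GRAPH = {
    "J. Smith":   {"Acme Corp", "Pune", "+91-555-0101"},
    "John Smith": {"Acme Corp", "Pune", "+91-555-0101", "jsmith@acme.example"},
    "A. Patel":   {"Globex", "Mumbai"},
}

def jaccard(a, b):
    """Jaccard similarity of two neighbour sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def duplicate_candidates(graph, threshold=0.6):
    """Return node pairs whose neighbourhoods overlap above the threshold."""
    names = sorted(graph)
    pairs = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if jaccard(graph[x], graph[y]) >= threshold:
                pairs.append((x, y))
    return pairs

print(duplicate_candidates(GRAPH))
```

Production systems replace the quadratic pair loop with blocking/indexing and often learn the similarity function, but the neighbourhood-overlap signal is the same.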
Posted 2 weeks ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
MLOps Engineering Director

Overview
The Horizontal Data Science Enablement Team within SSO Data Science is looking for an MLOps Engineering Director who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD and automation pipelines, and lead best practices.

Role and Responsibilities
- Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces.
- Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health.
- Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards.
- Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency.
- Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
- Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs.
- Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure.
- Bring MLOps expertise to the table, within the scope of (but not limited to): model monitoring, feature catalog/store, model lineage maintenance, and CI/CD pipelines that gatekeep the model lifecycle from development to production.
- Own and maintain MLOps solutions, either by leveraging open-source solutions or with a third-party vendor.
- Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
- Manage a small team of MLOps engineers.

All About You
- Master's degree in computer science, software engineering, or a similar field.
- Strong experience with Databricks and its management of roles and resources.
- Experience in cloud technologies and operations.
- Experience supporting APIs and cloud technologies.
- Experience with MLOps solutions like MLflow.
- Experience performing data analysis, data observability, data ingestion, and data integration.
- 7+ years of DevOps, SRE, or general systems engineering experience.
- 5+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
- Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, and PII handling).
- Strong coding ability in Python or other languages like Java and C++, plus a solid grasp of SQL fundamentals.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

What Could Set You Apart
- SQL tuning experience.
- Strong automation experience.
- Strong data observability experience.
- Operations experience supporting highly scalable systems.
- Ability to operate in a 24x7 environment encompassing global time zones.
- Self-motivated; creatively solves software problems and effectively keeps the lights on for modeling systems.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-252407
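Model monitoring, one of the MLOps duties this role names, is often implemented by comparing a feature's or score's production distribution against its training-time baseline. A common metric for that is the Population Stability Index (PSI); a minimal sketch (the distributions are made up, and the 0.2 alert threshold is a widely used rule of thumb, not a Mastercard standard):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of bin proportions). Rule of thumb: PSI > 0.2 signals
    meaningful drift worth alerting on."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)   # clamp to avoid log(0) / division by zero
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical score distributions: training baseline vs. production traffic.
baseline = [0.25, 0.25, 0.25, 0.25]
no_drift = [0.24, 0.26, 0.25, 0.25]
drifted  = [0.05, 0.15, 0.30, 0.50]

print(round(psi(baseline, no_drift), 4))
print(round(psi(baseline, drifted), 4))
```

In a Databricks setting this check would run as a scheduled job over freshly scored data, pushing the PSI value to a dashboard or alerting channel.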
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At ZoomInfo, we encourage creativity, value innovation, demand teamwork, expect accountability and cherish results. We value your take-charge, take-initiative, get-stuff-done attitude and will help you unlock your growth potential. One great choice can change everything. Thrive with us at ZoomInfo. If you are collaborative, take initiative, and get stuff done, we want to talk to you! We have high aspirations for the company and are looking for the right people to help fulfill the dream. We strive to continually improve every aspect of the company and use cutting-edge technologies and processes to delight our customers and rapidly increase revenue.

About The Role:
The Enterprise Apps AI Center of Excellence (CoE) is a newly established team that operates with the agility of a startup to generate substantial value for Finance through Artificial Intelligence. Going beyond simple efficiency improvements, we focus on building innovative AI solutions that unlock new capabilities and insights for the Enterprise Applications organization. Our mission is to transform business processes, enhance strategic decision-making, and drive significant value using advanced AI applications. The team will be responsible for identifying, analyzing, developing, and deploying high-impact use cases. The ultimate goal is to empower enterprise business application users with cutting-edge AI, enabling more strategic and effective operations.

As the Senior Engineer, you will lead our team in conceptualizing, developing, and deploying impactful AI/ML solutions that revolutionize finance processes. You will be accountable for the entire AI solution lifecycle, from ideation and requirements gathering to development, implementation, and ongoing support. This role involves technically leading and supporting a geographically diverse team located in the US and India. You will also identify and prototype (POC) efficiency opportunities for software development, software optimization / tech debt reduction, QA, UAT, unit testing, and regression testing.

What You'll Do:
- Hands-on development, debugging, training, evaluation, deployment at scale, optimization, and fine-tuning of state-of-the-art AI models, especially in NLP and Generative AI, using Python, PyTorch, TensorFlow, Keras, ML algorithms (supervised, unsupervised, reinforcement learning), and GenAI frameworks (GPT-4, Llama, Claude, Gemini, Hugging Face, LangChain).
- Apply deep expertise in neural networks, deep learning, conversational AI (chatbots, IVA, agent assist), NLP (LLMs, Transformers, RNNs, semantic search), and prompt engineering.
- Apply AIOps, MLOps, and DataOps practices to build production-grade AI/ML pipelines with Kubeflow, MLflow, and AutoML on cloud platforms (AWS, Azure, GCP).
- Build AI-powered search (vector DBs, semantic search) and microservices using Java, Python, Spring Boot, and NoSQL.
- Apply AI/ML advancements such as multimodal AI, advanced development with AutoML, agentic AI, AI integration, AI cybersecurity, and AI-powered automation.
- Apply data modelling and NLP-based data processing techniques in data science to extract insights.
- Work with data pipelines, data warehousing, and data governance for building and deploying ML models.
- Proactively research advancements and new industry trends in the AI/ML space; lead AI/ML and data initiatives to identify business opportunities and critical AI use cases, delivering end-to-end AI/ML solutions in finance business applications; articulate the potential ROI and benefits of proposed AI solutions.
- Collaborate with technology, product, and data teams to seamlessly integrate validated AI solutions into existing finance workflows, clearly communicating technical details.
- Provide technical leadership and mentorship to the engineering team, fostering a culture of innovation and continuous learning.
- Utilize advanced statistical analysis and deep learning algorithms to address finance-related challenges and improve internal processes and strategies.

What You Bring:
- Minimum of 8 years of hands-on experience driving data science and/or AI/ML use cases, with a primary focus on the finance and accounting domain.
- Strong understanding of the software development lifecycle (SDLC) and standard methodologies for coding, design, testing, and deployment.
- Bachelor's or Master's degree in Computer Science, Software Engineering, Statistics, Applied Mathematics, or an equivalent quantitative field.
- Excellent technical, problem-solving, and communication skills.
- Ability to analyze and resolve difficult technical issues quickly in a high-pressure environment.
- Experience working with and presenting proposals to executives in a clear and compelling way.
- Strong proficiency in modern programming languages (e.g., Java, Python) and frameworks (e.g., React, Node.js) to build AI/ML solutions in finance applications.
- Exposure to integration platforms such as Boomi.
- Strong Excel skills (required). Experience with SQL, Tableau, or other BI tools for extracting and visualizing data is a plus.

About us:
ZoomInfo (NASDAQ: GTM) is the Go-To-Market Intelligence Platform that empowers businesses to grow faster with AI-ready insights, trusted data, and advanced automation. Its solutions provide more than 35,000 companies worldwide with a complete view of their customers, making every seller their best seller. ZoomInfo may use a software-based assessment as part of the recruitment process. More information about this tool, including the results of the most recent bias audit, is available here.

ZoomInfo is proud to be an equal opportunity employer, hiring based on qualifications, merit, and business needs, and does not discriminate based on protected status. We welcome all applicants and are committed to providing equal employment opportunities regardless of sex, race, age, color, national origin, sexual orientation, gender identity, marital status, disability status, religion, protected military or veteran status, medical condition, or any other characteristic protected by applicable law. We also consider qualified candidates with criminal histories in accordance with legal requirements.

For Massachusetts Applicants: It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. ZoomInfo does not administer lie detector tests to applicants in any location.
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities:
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform
- Develop and implement highly scalable ETL pipelines for processing large datasets
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics
- Define and enforce data governance, security policies, and compliance standards
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices
- Ensure data integrity, quality, and reliability across all data processes

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 8+ years of hands-on experience in data engineering, with at least 5 years as a Databricks architect working with Apache Spark
- Proficiency in SQL, Python, or Scala for data processing and analytics
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture
- Hands-on experience with CI/CD tools and DevOps best practices
- Familiarity with data security, compliance, and governance best practices
- Strong problem-solving and analytical skills in a fast-paced environment

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer)
- Hands-on experience with MLflow, Feature Store, or Databricks SQL
- Exposure to Kubernetes, Docker, and Terraform
- Experience with streaming data architectures (Kafka, Kinesis, etc.)
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker)
- Prior experience working with retail, e-commerce, or ad-tech data platforms

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
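A central Delta Lake operation in the ETL pipelines this role describes is the upsert (SQL `MERGE INTO`): update rows that match on a key, insert the rest. The semantics, reduced to plain Python dicts so the logic is visible (in Databricks this would be a `DeltaTable.merge(...)` call on a real table; the table rows here are illustrative only):

```python
def merge_upsert(target, updates, key="id"):
    """Upsert semantics of a Delta Lake MERGE: update matching rows by key,
    insert the rest -- the core of an incremental ETL load."""
    by_key = {row[key]: dict(row) for row in target}  # copy rows, index by key
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # WHEN MATCHED THEN UPDATE
        else:
            by_key[row[key]] = dict(row)   # WHEN NOT MATCHED THEN INSERT
    return sorted(by_key.values(), key=lambda r: r[key])

current  = [{"id": 1, "price": 10}, {"id": 2, "price": 20}]
incoming = [{"id": 2, "price": 25}, {"id": 3, "price": 30}]
print(merge_upsert(current, incoming))
```

Delta adds ACID transaction logs and time travel on top of this; the merge logic itself is what distinguishes an incremental lakehouse load from a blind append.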
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview:
ShyftLabs is seeking a skilled Databricks Engineer to support designing, developing, and optimizing big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities:
- Design, implement, and optimize big data pipelines in Databricks
- Develop scalable ETL workflows to process large datasets
- Leverage Apache Spark for distributed data processing and real-time analytics
- Implement data governance, security policies, and compliance standards
- Optimize data lakehouse architectures for performance and cost-efficiency
- Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows
- Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks
- Automate workflows using CI/CD pipelines and infrastructure-as-code practices
- Ensure data integrity, quality, and reliability in all pipelines

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 5+ years of hands-on experience with Databricks and Apache Spark
- Proficiency in SQL, Python, or Scala for data processing and analysis
- Experience with cloud platforms (AWS, Azure, or GCP) for data engineering
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture
- Experience with CI/CD tools and DevOps best practices
- Familiarity with data security, compliance, and governance best practices
- Strong problem-solving and analytical skills with an ability to work in a fast-paced environment

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer)
- Hands-on experience with MLflow, Feature Store, or Databricks SQL
- Exposure to Kubernetes, Docker, and Terraform
- Experience with streaming data architectures (Kafka, Kinesis, etc.)
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker)
- Prior experience working with retail, e-commerce, or ad-tech data platforms

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
🚀 We’re Hiring! MLOps Engineer – Pune, India | Asmadiya Technologies Pvt. Ltd.

Are you passionate about deploying machine learning models at scale? Want to be part of a fast-growing tech company building intelligent digital solutions? Asmadiya Technologies is looking for an experienced MLOps Engineer (3–6 years) to join our AI/ML team in Pune.

What You'll Do
- Build and manage scalable ML pipelines (training → deployment → monitoring)
- Automate model deployment using CI/CD tools (Jenkins, GitHub Actions, etc.)
- Work with Docker, Kubernetes, and AWS (EKS, S3, SageMaker)
- Ensure ML model governance, performance tracking, and system reliability
- Collaborate closely with data scientists, DevOps, and product teams

What You Bring
- 3–6 years of hands-on experience in software engineering, DevOps, or MLOps
- Strong Python + ML stack (scikit-learn, TensorFlow/PyTorch)
- Deep knowledge of cloud platforms (preferably AWS)
- Experience with ML monitoring, model registries, and orchestration tools
- Bonus: knowledge of LLMs, MLflow, Kubeflow, or Airflow

Why Join Us?
- Work on cutting-edge AI/ML projects that make real-world impact
- A collaborative, innovative, and fast-paced work culture
- Growth opportunities and a flexible work environment

Ready to take your MLOps career to the next level? Apply now at careers@asmadiya.com or DM us to learn more!
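The "automate model deployment using CI/CD tools" duty above could be sketched as a hypothetical GitHub Actions workflow; the job name, file paths, image registry, and deploy script are all illustrative, not an actual Asmadiya pipeline:

```yaml
name: ml-model-ci
on:
  push:
    paths: ["models/**", "training/**"]
jobs:
  train-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python training/train.py        # retrain and serialize the model
      - run: python training/evaluate.py     # fail the build if metrics regress
      - run: docker build -t registry.example/model:${{ github.sha }} .
      - run: ./scripts/deploy.sh             # e.g. push image and roll out to EKS
```

The evaluation gate before the Docker build is the MLOps-specific piece: a deploy only happens when the freshly trained model clears its metric thresholds.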
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! Competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere. Who We Are ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV), has applied innovation and user-designed, data driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands include ACV Auctions, ACV Transportation, ClearCar, MAX Digital and ACV Capital within its Marketplace Products, as well as, True360 and Data Services. ACV Auctions in Chennai, India are looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive! 
At ACV we focus on the Health, Physical, Financial, Social and Emotional Wellness of our Teammates and to support this we offer industry leading benefits and wellness programs. What You Will Do ACV’s Machine Learning (ML) team is looking to grow its MLOps team. Multiple ACV operations and product teams rely on the ML team’s solutions. Current deployments drive opportunities in the marketplace, in operations, and sales, to name a few. As ACV has experienced hyper growth over the past few years, the volume, variety, and velocity of these deployments has grown considerably. Thus, the training, deployment, and monitoring needs of the ML team has grown as we’ve gained traction. MLOps is a critical function to help ourselves continue to deliver value to our partners and our customers. Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and have the ability to support the development and implementation of end-to-end ML-enabled software solutions to meet the needs of their stakeholders. Those who will excel in this role will be those who listen with an ear to the overarching goal, not just the immediate concern that started the query. They will be able to show their recommendations are contextually grounded in an understanding of the practical problem, the data, and theory as well as what product and software solutions are feasible and desirable. The Core Responsibilities Of This Role Are Working with fellow machine learning engineers to build, automate, deploy, and monitor ML applications. Developing data pipelines that feed ML models. Deploy new ML models into production. Building REST APIs to serve ML models predictions. Monitoring performance of models in production. Required Qualifications Graduate education in a computationally intensive domain or equivalent work experience. 5+ years of prior relevant work or lab experience in ML projects/research Advanced proficiency with Python, SQL etc. 
Experience with building and deploying REST APIs (FastAPI, Flask) Experience with distributed caching technologies (Redis) Experience with real-time data streaming and processing (Kafka) Experience with cloud services (AWS / GCP) and kubernetes, docker, CI/CD. Preferred Qualifications Experience with MLOps-specific tooling like Vertex AI, Ray, Feast, Kubeflow, or MLFlow, etc. are a plus. Experience with building data pipelines Experience with training ML models Our Values Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know. Data Processing Consent When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third party service provider to help manage its recruitment and hiring process. 
For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
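The ACV listing above asks for experience with distributed caching technologies such as Redis. As a toy illustration of the expiry semantics such a cache provides (this is an in-process stand-in, not Redis, and the key name and TTL are made up), a minimal TTL cache might look like:

```python
import time

class TTLCache:
    """In-process stand-in for Redis-style expiry semantics (illustration only)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, absolute expiry time)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value

# Cache a hypothetical model prediction for 50 ms, then let it expire.
cache = TTLCache(ttl_seconds=0.05)
cache.set("prediction:vehicle-123", 0.87)
fresh = cache.get("prediction:vehicle-123")
time.sleep(0.06)
stale = cache.get("prediction:vehicle-123")
```

In production, a shared store such as Redis replaces this so that many API workers see the same cached predictions and expiry is enforced server-side.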
Posted 2 weeks ago
0 years
0 Lacs
West Bengal, India
Remote
SETV HEALTHCARE TECHNOLOGY PRIVATE LIMITED

Position: Artificial Intelligence (AI) Intern
Location: Hybrid
Employment Type: Internship (leading to Part-Time Opportunity)
Duration: 3 months unpaid; performance-based conversion to part time with stipend from the 4th month (₹5,000 – ₹15,000/month)
Working Hours: 5 hours/day + 1 hour lunch/dinner break
Shift Options: 7:00 PM – 1:00 AM (IST)

Company Description: SETV.W, a flagship division of SETV Global, is transforming healthcare through the power of Artificial Intelligence. We develop real-world AI solutions that empower clinicians, enhance diagnostics, and bridge gaps in access and quality of care. We are seeking curious, driven, and technically strong individuals to help build the next generation of medical AI systems.

Duties and Responsibilities
Research, design, and implement machine learning and deep learning models for image, video, text, and multi-modal healthcare data.
Work on real-world healthcare datasets including medical imaging (CT, MRI, ultrasound), tabular clinical data, and electronic health records.
Preprocess, clean, and annotate data using tools like NumPy, OpenCV, and Pandas.
Train, fine-tune, and evaluate models using TensorFlow, PyTorch, Hugging Face Transformers, or ONNX.
Work with LLMs and agents.
Collaborate with web, DevOps, and UI/UX teams for seamless AI model integration.
Build and document modular AI pipelines using tools like MLflow, FastAPI, or Flask.
Contribute to performance benchmarking, error analysis, and model explainability (e.g., Grad-CAM, SHAP, LIME).
Conduct literature reviews and propose innovative enhancements to existing models.
Support deployment of models in production and edge environments (via TorchScript, TFLite, ONNX, etc.).
Maintain version-controlled notebooks and adhere to reproducible ML practices.

Requirements
Strong understanding of ML/DL algorithms (CNNs, RNNs, Transformers, etc.).
Experience with at least one deep learning framework (PyTorch, TensorFlow, or Keras).
Solid Python programming skills and hands-on knowledge of data science libraries (NumPy, Pandas, Scikit-learn).
Familiarity with real-world datasets and domain challenges in computer vision or NLP.
Good grasp of evaluation metrics (Accuracy, F1-score, AUC, IoU, Dice, etc.).
Exposure to model deployment, containerization (Docker), or API integration is a plus.
Strong problem-solving skills and ability to work independently on research problems.

Preferred Skills (Bonus):
Experience with healthcare-specific AI tasks (segmentation, detection, classification, triage).
Familiarity with DICOM/NIfTI data formats and libraries like MONAI or SimpleITK.
Knowledge of Hugging Face, LangChain, or OpenAI API.
Understanding of edge deployment strategies (NVIDIA Jetson, Coral, etc.).
Research exposure (published papers, blog posts, or GitHub contributions).

Qualifications:
Education: Currently pursuing or recently completed a Bachelor's, Master's, or PhD degree in Computer Science, Data Science, AI, Biomedical Engineering, or a related field.
Portfolio: Strong GitHub profile, Kaggle experience, or AI project portfolio required.
Domain Knowledge: Awareness of medical AI standards (HIPAA, FDA guidelines) is an advantage.

Selection Process (3 Rounds):
Resume Screening – Based on portfolio, skills, and relevance.
Technical Interview – Problem-solving, ML concepts, and project discussion.
Founding Office Interview (Optional) – Strategic thinking, innovation mindset, and cultural alignment.

What We Offer:
Real-World Impact: Work on AI models that improve healthcare outcomes.
Mentorship: Get guidance from leading experts in AI and medical tech.
Career Growth: Opportunity for conversion to a paid part-time role and optional full-time opportunities.
Flexible Remote Work: Structured hours that support learning and productivity.
Recognition: Performance-based incentives and loyalty rewards.
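The internship listing above asks for a good grasp of segmentation metrics such as IoU and Dice. As a quick refresher, here is a minimal, framework-free sketch over binary masks represented as sets of pixel indices (the example masks are made up):

```python
def iou(pred, target):
    """Intersection-over-Union for binary masks given as sets of pixel indices."""
    pred, target = set(pred), set(target)
    union = len(pred | target)
    # Two empty masks agree perfectly by convention.
    return len(pred & target) / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    pred, target = set(pred), set(target)
    denom = len(pred) + len(target)
    return 2 * len(pred & target) / denom if denom else 1.0

# Toy masks: 4 predicted pixels, 4 ground-truth pixels, 2 overlapping.
p = {1, 2, 3, 4}
t = {3, 4, 5, 6}
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two numbers should not be compared across papers interchangeably.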
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Intelligent Image Management Inc (IIMI) is an IT services company that reimagines and digitizes data through document automation using modern, cloud-native app development. It is one of the world's leading multinational IT services companies, with offices in the USA, Singapore, India, Sri Lanka, Bangladesh, Nepal and Kenya. Over 7,000 people are employed by IIMI worldwide, and their mission is to advance data process automation. US and European Fortune 500 companies are among our clients. Become part of a team that puts its people first. Founded in 1996, Intelligent Image Management Inc. has always believed in its people. We strive to foster an environment where all feel welcome, supported, and empowered to be innovative and reach their full potential. Website: https://www.iimdirect.com/

About the Role: We are looking for a highly experienced and driven Senior Data Scientist to join our advanced AI and Data Science team. You will play a key role in building and deploying machine learning models, especially in the areas of computer vision, document image processing, and large language models (LLMs). This role requires a combination of hands-on technical skills and the ability to design scalable ML solutions that solve real-world business problems.

Key Responsibilities:
Design and develop end-to-end machine learning pipelines, from data preprocessing and feature engineering to model training, evaluation, and deployment.
Lead complex ML projects using deep learning, computer vision, and document analysis methods (e.g., object detection, image classification, segmentation, layout analysis).
Build solutions for document image processing using tools like Google Cloud Vision, AWS Textract, and OCR libraries.
Apply LLMs (Large Language Models), both open-source (e.g., LLaMA, Mistral, Falcon, GPT-NeoX) and closed-source (e.g., OpenAI GPT, Claude, Gemini), to automate text understanding, extraction, summarization, classification, and question-answering tasks.
Integrate LLMs into applications for intelligent document processing, NER, semantic search, embeddings, and chat-based interfaces.
Use Python (along with libraries such as OpenCV, PyTorch, TensorFlow, and Hugging Face Transformers) for building scalable, multi-threaded data processing pipelines.
Implement and maintain MLOps practices using tools such as MLflow, AWS SageMaker, GCP AI Platform, and containerized deployments.
Collaborate with engineering and product teams to embed ML models into scalable production systems.
Stay up to date with emerging research and best practices in machine learning, LLMs, and document AI.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, Engineering, or a related field.
Minimum 5 years of experience in machine learning, data science, or AI engineering roles.
Strong background in deep learning, computer vision, and document image processing.
Practical experience with LLMs (open and closed source), including fine-tuning, prompt engineering, and inference optimization.
Solid grasp of MLOps, model versioning, and model lifecycle management.
Expertise in Python, with strong knowledge of ML and CV libraries.
Experience with Java and multi-threading is a plus.
Familiarity with NLP tasks including Named Entity Recognition, classification, embeddings, and text summarization.
Experience with cloud platforms (AWS/GCP) and their ML toolkits.

Preferred Skills:
• Experience with retrieval-augmented generation (RAG), vector databases, and LLM evaluation tools.
• Exposure to CI/CD for ML workflows and best practices in production ML.
• Ability to mentor junior team members and lead cross-functional AI projects.

Work Location: Work from Office
Send cover letter, complete resume, and references to email: tech.jobs@iimdirect.com
Industry: Outsourcing/Offshoring
Employment Type: Full-time
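Document-AI and semantic-search work of the kind this listing describes typically begins by splitting long documents into overlapping chunks before computing embeddings, so that no passage is cut off from its context at a chunk boundary. A minimal, framework-free sketch (the chunk size and overlap are illustrative, not from the listing):

```python
def chunk_words(text, chunk_size=50, overlap=10):
    """Split text into word-count chunks with overlap, a common preprocessing
    step before embedding document passages for semantic search."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the tail is fully covered; avoid a redundant final chunk
    return chunks

# Synthetic 120-word document: w0 w1 ... w119.
doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_words(doc, chunk_size=50, overlap=10)
```

The overlap means the last 10 words of each chunk reappear at the start of the next, which keeps sentences that straddle a boundary retrievable from at least one chunk.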
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.

Role Overview: We are seeking a skilled AI/ML Engineer to join our client's team (a top-tier consulting firm) and operationalize our machine learning workflows. You will work closely with data scientists, engineers, and product teams to design, deploy, monitor, and maintain scalable ML pipelines in production. The ideal candidate has a strong foundation in ML systems, DevOps principles, and cloud-native technologies.

Key Responsibilities:
Collaborate with cross-functional teams to define ML problem statements and translate them into technical tasks.
Design and implement robust data pipelines for collecting, cleaning, and validating large datasets.
Develop, train, and evaluate machine learning models using appropriate algorithms and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
Package and deploy ML models as scalable services or APIs, ensuring performance, security, and reliability.
Monitor and maintain models in production, including retraining and performance tuning.
Implement best practices in MLOps: experiment tracking, versioning, CI/CD pipelines, and model monitoring.
Document methodologies, workflows, and technical decisions clearly for both technical and non-technical audiences.
Stay up to date with industry trends and contribute to evaluating and adopting new tools or frameworks where relevant.

Required Skills:
Strong programming skills in Python and experience with ML libraries (scikit-learn, PyTorch, TensorFlow).
Solid understanding of machine learning fundamentals, including data preprocessing, feature engineering, model selection, and evaluation.
Hands-on experience with SQL and data manipulation using tools like pandas.
Experience deploying models in production environments, including serving models via REST APIs.
Familiarity with containerization (Docker) and orchestration (Kubernetes is a plus).
Knowledge of cloud platforms (AWS, GCP, or Azure) and experience with relevant ML tools (e.g., SageMaker, Vertex AI) is a plus.
Good understanding of software engineering best practices: version control, testing, code reviews.

Nice-to-Have:
Experience with big data frameworks (e.g., Spark) for processing large datasets.
Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases).
Familiarity with MLOps workflows and tools for monitoring data/model drift.
Domain-specific expertise in NLP, Computer Vision, or Time Series Analysis.

Educational Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
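The preprocessing and feature-engineering fundamentals this role asks for include one classic pitfall: scaling features with statistics computed on the full dataset leaks test-set information into training. A small stdlib-only sketch of the correct order (hypothetical data; real pipelines would use a library scaler):

```python
import random
import statistics

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle deterministically, then hold out the tail as the test set."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def standardize(train_col, test_col):
    """Fit mean/stdev on the training column ONLY, then apply to both.
    Fitting on the combined data would leak test statistics into training."""
    mu = statistics.fmean(train_col)
    sigma = statistics.pstdev(train_col) or 1.0  # guard against constant columns

    def scale(xs):
        return [(x - mu) / sigma for x in xs]

    return scale(train_col), scale(test_col)

data = list(range(100))  # stand-in for one numeric feature column
train, test = train_test_split(data)
train_z, test_z = standardize(train, test)
```

After scaling, the training column has mean 0 and unit variance by construction; the test column will be close but not exact, which is expected and correct.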
Posted 2 weeks ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

PwC US - Acceleration Center is seeking a highly skilled MLOps/LLMOps Engineer who will play a critical role in the deployment, scaling, and maintenance of Generative AI models. This position involves close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure seamless integration and operation of GenAI models within production environments at PwC as well as our clients. The ideal candidate will have a strong background in MLOps practices, along with experience and interest in Generative AI technologies.

Years of Experience: Candidates with 4+ years of hands-on experience.

Core Qualifications
3+ years of hands-on experience developing and deploying AI models in production environments, with 1 year of experience in developing proofs of concept and prototypes.
Strong background in software development, with experience in building and maintaining scalable, distributed systems.
Strong programming skills in languages like Python and familiarity with ML frameworks and libraries (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration tools like Docker and Kubernetes.
Familiarity with cloud platforms (AWS, GCP, Azure) and their ML/AI service offerings.
Experience with continuous integration and delivery tools such as Jenkins, GitLab CI/CD, or CircleCI.
Experience with infrastructure-as-code tools like Terraform or CloudFormation.
Technical Skills

Must Have:
Proficiency with MLOps tools such as MLflow, Kubeflow, Airflow, or similar for managing machine learning workflows and lifecycle.
Practical understanding of generative AI frameworks (e.g., Hugging Face Transformers, OpenAI GPT, DALL-E).
Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes for scalable model deployment.
Expertise in MLOps and LLMOps practices, including CI/CD for ML models.
Strong knowledge of one or more cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).

Nice To Have
Experience with advanced GenAI applications such as natural language generation, image synthesis, and creative AI.
Familiarity with experiment tracking and model registry tools.
Knowledge of high-performance computing and parallel processing techniques.
Contributions to open-source MLOps or GenAI projects.

Key Responsibilities
Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability.
Design and manage CI/CD pipelines specialized for ML workflows, including the deployment of generative models such as GANs, VAEs, and Transformers.
Monitor and optimize the performance of AI models in production, employing tools and techniques for continuous validation, retraining, and A/B testing.
Collaborate with data scientists and ML researchers to understand model requirements and translate them into scalable operational frameworks.
Implement best practices for version control, containerization, infrastructure automation, and orchestration using industry-standard tools (e.g., Docker, Kubernetes).
Ensure compliance with data privacy regulations and company policies during model deployment and operation.
Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.
Stay up-to-date with the latest developments in MLOps and Generative AI, bringing innovative solutions to enhance our AI capabilities.
Project Delivery
Design and implement scalable and reliable deployment pipelines for ML/GenAI models to move them from development to production environments.
Ensure models are deployed with appropriate versioning and rollback mechanisms to maintain stability and ease of updates.
Oversee cloud infrastructure setup and automated data ingestion pipelines, ensuring they meet the needs of GenAI workloads in terms of computation power, storage, and network requirements.
Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures to ensure transparency and ease of maintenance.
Actively participate in retrospectives to identify areas for improvement in the deployment process.

Client Engagement
Collaborate with clients to understand their business needs, goals, and specific requirements for Generative AI solutions.
Collaborate with solution architects to design ML/LLMOps solutions that meet client needs.
Present technical approaches and results to both technical and non-technical stakeholders.
Conduct training sessions and workshops for client teams to help them understand, operate, and maintain the deployed AI models.
Create comprehensive documentation and user guides to assist clients in managing and leveraging the Generative AI solutions effectively.

Innovation And Knowledge Sharing
Stay updated with the latest trends, research, and advancements in MLOps/LLMOps and Generative AI, and apply this knowledge to improve existing systems and processes.
Develop internal tools and frameworks to accelerate ML/GenAI model development and deployment.
Mentor junior team members on MLOps/LLMOps best practices.
Contribute to technical blog posts and whitepapers on MLOps/LLMOps.

Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
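The MLOps responsibilities in this listing mention A/B testing of models in production. A common building block is deterministic traffic splitting: hash a stable user identifier together with an experiment name so each user always lands in the same variant across requests. A minimal stdlib-only sketch (the experiment name and 10% treatment share are made up for illustration):

```python
import hashlib

def assign_variant(user_id, experiment, treatment_share=0.1):
    """Deterministically route a user to 'treatment' or 'control' by hashing
    the (experiment, user) pair, so assignment is stable across requests
    and independent between experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "treatment" if bucket < treatment_share else "control"

# Simulate assignment of 10,000 users to a hypothetical canary experiment.
assignments = [assign_variant(f"user-{i}", "model-v2-canary") for i in range(10_000)]
share = assignments.count("treatment") / len(assignments)
```

Because the split is a pure function of the identifiers, no assignment table has to be stored, and changing the experiment name reshuffles users independently of any other running experiment.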
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Title: Senior Data Scientist
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai/Kovilpatti location for the initial three-month project training and execution period. After the three months, the candidate will be offered remote opportunities.

The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model-building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key responsibilities
· Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring
· Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications
· Conduct statistical analysis and hypothesis testing for business experimentation
· Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI)
· Collaborate with data engineering teams to improve data pipelines and quality
· Document methodologies, build reusable ML components, and publish technical artifacts
· Mentor junior data scientists and contribute to CoE-wide model governance

Technical Skills
· ML Frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
· Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
· NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
· Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
· Data visualization: Power BI, Tableau, Plotly, Seaborn
· SQL and NoSQL (CosmosDB, MongoDB)
· Git, CI/CD tools, and model monitoring platforms

Qualification
· Master's in Data Science, Statistics, Mathematics, or Computer Science
· Microsoft Certified: Azure Data Scientist Associate or equivalent
· Proven success in delivering production-ready ML models with measurable business impact
· Publications or patents in AI/ML will be considered a strong advantage

Job Types: Full-time, Permanent
Work Location: Hybrid remote in Chennai, Tamil Nadu
Expected Start Date: 12/07/2025
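The hyperparameter tuning this listing mentions is, at its simplest, an exhaustive search over a parameter grid scored on a validation set. A stdlib-only sketch with a synthetic stand-in for the scoring function (the parameter names and the score surface are made up; a real run would train and validate a model per combination):

```python
from itertools import product

def validation_score(params):
    """Stand-in for a real train/validate run. This synthetic surface
    peaks at max_depth=4, learning_rate=0.1 so the search has a target."""
    depth, lr = params["max_depth"], params["learning_rate"]
    return -((depth - 4) ** 2) - 100 * (lr - 0.1) ** 2

grid = {"max_depth": [2, 4, 6, 8], "learning_rate": [0.01, 0.1, 0.3]}

best_params, best_score = None, float("-inf")
for combo in product(*grid.values()):  # Cartesian product: 4 x 3 = 12 runs
    params = dict(zip(grid.keys(), combo))
    score = validation_score(params)
    if score > best_score:
        best_params, best_score = params, score
```

Grid search is exhaustive and easy to reason about, but its cost grows multiplicatively with each added parameter, which is why random search or Bayesian optimisation usually replaces it beyond a handful of dimensions.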
Posted 2 weeks ago
12.0 years
4 - 9 Lacs
Gurgaon
On-site
We are looking for a Principal Technical Consultant – Data Engineering & AI who can lead modern data and AI initiatives end-to-end, from enterprise data strategy to scalable AI/ML solutions and emerging Agentic AI systems. This role demands deep expertise in cloud-native data architectures, advanced machine learning, and AI solution delivery, while also staying at the frontier of technologies like LLMs, RAG pipelines, and AI agents. You'll work with C-level clients to translate AI opportunities into engineered outcomes.

Roles and Responsibilities

AI Solution Architecture & Delivery:
Design and implement production-grade AI/ML systems, including predictive modeling, NLP, computer vision, and time-series forecasting.
Architect and operationalize end-to-end ML pipelines using MLflow, SageMaker, Vertex AI, or Azure ML, covering feature engineering, training, monitoring, and CI/CD.
Deliver retrieval-augmented generation (RAG) solutions combining LLMs with structured and unstructured data for high-context enterprise use cases.

Data Platform & Engineering Leadership:
Build scalable data platforms with modern lakehouse patterns using:
Ingestion: Kafka, Azure Event Hubs, Kinesis
Storage & Processing: Delta Lake, Iceberg, Snowflake, BigQuery, Spark, dbt
Workflow Orchestration: Airflow, Dagster, Prefect
Infrastructure: Terraform, Kubernetes, Docker, CI/CD pipelines
Implement observability and reliability features into data pipelines and ML systems.

Agentic AI & Autonomous Workflows (Emerging Focus):
Explore and implement LLM-powered agents using frameworks like LangChain, Semantic Kernel, AutoGen, or CrewAI.
Develop prototypes of task-oriented AI agents capable of planning, tool use, and inter-agent collaboration for domains such as operations, customer service, or analytics automation.
Integrate agents with enterprise tools, vector databases (e.g., Pinecone, Weaviate), and function-calling APIs to enable context-rich decision making.
Governance, Security, and Responsible AI:
Establish best practices in data governance, access controls, metadata management, and auditability.
Ensure compliance with security and regulatory requirements (GDPR, HIPAA, SOC2).
Champion Responsible AI principles including fairness, transparency, and safety.

Consulting, Leadership & Practice Growth:
Lead large, cross-functional delivery teams (10–30+ FTEs) across data, ML, and platform domains.
Serve as a trusted advisor to clients' senior stakeholders (CDOs, CTOs, Heads of AI).
Mentor internal teams and contribute to the development of accelerators, reusable components, and thought leadership.

Key Skills
12+ years of experience across data platforms, AI/ML systems, and enterprise solutioning
Cloud-native design experience on Azure, AWS, or GCP
Expert in Python, SQL, Spark, and ML frameworks (scikit-learn, PyTorch, TensorFlow)
Deep understanding of MLOps, orchestration, and cloud AI tooling
Hands-on with LLMs, vector DBs, RAG pipelines, and foundational GenAI principles
Strong consulting acumen: client engagement, technical storytelling, stakeholder alignment

Qualifications
Master's or PhD in Computer Science, Data Science, or AI/ML
Certifications: Azure AI-102, AWS ML Specialty, GCP ML Engineer, or equivalent
Exposure to agentic architectures, LLM fine-tuning, or multi-agent collaboration frameworks
Experience with open-source contributions, conference talks, or whitepapers in AI/Data
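The RAG pipelines this listing emphasizes hinge on one retrieval step: rank stored chunk embeddings by cosine similarity to the query embedding and pass the top-k chunks to the LLM as context. A dependency-free sketch with tiny made-up vectors standing in for real embeddings (a production system would use an embedding model and a vector database such as those named above):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, k=2):
    """Return the k chunk names most similar to the query embedding --
    the core of the retrieval step in a RAG pipeline."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" for three document chunks.
index = {
    "invoice terms": [0.9, 0.1, 0.0],
    "refund policy": [0.8, 0.2, 0.1],
    "holiday schedule": [0.0, 0.1, 0.9],
}
top = retrieve([1.0, 0.0, 0.0], index, k=2)
```

Vector databases replace the linear scan in `retrieve` with approximate nearest-neighbour indexes, but the similarity metric and the top-k contract are the same.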
Posted 2 weeks ago
3.0 - 5.0 years
1 - 6 Lacs
Gurgaon
On-site
About Adsparkx: Adsparkx is a leading global performance marketing agency headquartered in India. We have been empowering brands since 2014, helping them acquire high-quality, engaged users globally via data-driven decisions. We are innovators, hustlers and ad-tech moguls/experts who function with the belief of catalyzing a disruptive change in the industry by providing empowered and customized digital experiences to consumers and brands. Adsparkx unlocks the full potential of your business with its diligent workforce, catering to worldwide clients in their time zones. We operate globally and have offices in Gurgaon, Chandigarh, Singapore and the US. We value partnerships and have maintained sustainable relationships with reputed brands, shaping their success stories through services like Affiliate Marketing, Branding, E-commerce, Lead Generation, and Programmatic Media Buying. We have helped navigate over 200 brands to success. Our clientele includes names like Assurance IQ, Inc, Booking.com, Groupon, etc. If you wish to change the game of your brand, visit us here: https://adsparkx.com

Job Title: AI Engineer
Location: Gurugram, Haryana
Employment Type: Full-Time
Experience Required: 3-5 Years

Objective of the Role: We are seeking a highly skilled AI Engineer who will be responsible for building, testing, and maintaining robust, scalable, and secure web applications. The ideal candidate will have strong expertise in Python and Django, with additional exposure to machine learning, generative AI frameworks, and modern deep learning architectures. This role involves optimizing performance, ensuring security, working with APIs, and collaborating closely with cross-functional teams to deliver high-quality backend solutions.

Key Responsibilities:
3-6 years of hands-on experience in Python.
Design, develop, and maintain backend services and RESTful APIs using Django or Django REST Framework.
Work with third-party APIs and external services to ensure smooth data integration.
Optimize application performance and implement robust security practices.
Design scalable and efficient data models; work with relational and NoSQL databases.
Implement and maintain CI/CD pipelines using tools like Docker, Git, Jenkins, or GitHub Actions.
Collaborate with front-end developers, DevOps engineers, and product managers to deliver end-to-end solutions.
Integrate and deploy ML models and AI features in production environments (a strong plus).
Write clean, modular, and testable code following best practices.
Troubleshoot, debug, and upgrade existing systems.

Required Skills and Qualifications:
Strong proficiency in Python and the Django framework.
Experience with PostgreSQL, MongoDB, or MySQL.
Familiarity with Docker, Gunicorn, Nginx, and CI/CD pipelines.
Experience with machine learning and deep learning concepts.
Exposure to Generative AI, Transformers, Agentic Frameworks, and fine-tuning techniques.
Hands-on experience with PyTorch or TensorFlow (PyTorch preferred).
Ability to translate ML/AI solutions into production-ready APIs or services.
Strong problem-solving and debugging skills.

Nice to Have:
Knowledge of FastAPI or Flask.
Experience deploying models via TorchServe or ONNX.
Familiarity with MLOps practices and tools like MLflow, DVC, or SageMaker.

If you're passionate about backend development and excited to work at the intersection of software engineering and AI innovation, we'd love to hear from you.
Posted 2 weeks ago
16.0 years
2 - 6 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

WHAT

Business Knowledge:
Capable of understanding the requirements for the entire project (not just own features)
Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements
Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them

Design:
Can design and implement machine learning models and algorithms
Can articulate and evaluate pros/cons of different AI/ML approaches
Can generate cost estimates for model training and deployment

Coding/Testing:
Builds and optimizes machine learning pipelines.
Knows and brings in external ML frameworks and libraries.
Consistently avoids common pitfalls in model development and deployment.
HOW

Quality:
Solves cross-functional problems using data-driven approaches
Identifies impacts/side effects of models outside of the immediate scope of work
Identifies cross-module issues related to data integration and model performance
Identifies problems predictively using data analysis

Productivity:
Capable of working on multiple AI/ML projects simultaneously and context switching between them

Process:
Enforces process standards for model development and deployment

Independence:
Acts independently to determine methods and procedures on new or special assignments
Prioritizes large tasks and projects effectively

Agility:
Release Planning:
Works with the PO to do high-level release commitment and estimation
Works with the PO on defining stories of appropriate size for model development
Agile Maturity:
Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration
Shows Agile leadership qualities and leads by example

WITH

Team Work:
Capable of working with development teams and identifying the right division of technical responsibility based on skill sets
Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs

Initiative:
Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution
Capable of thinking outside the box to view the system as it should be rather than only how it is
Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense
Takes initiative to learn how AI/ML technology is evolving outside the organization
Takes initiative to learn how the system can be improved for the customer
Makes problems open new doors for innovations

Communication:
Communicates complex AI/ML concepts internally with ease

Accountability:
Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play

Leadership:
Disagrees without being disagreeable
Uses conflict as a way to drill deeper and arrive at better decisions
Provides frequent mentorship
Builds ad-hoc cross-department teams for specific projects or problems
Can achieve broad scope 'buy-in' across project teams and across departments
Takes calculated risks

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
8+ years of experience working on multiple layers of technology
Experience deploying and maintaining ML models in production
Experience in Agile teams
Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI)
Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
Experience designing, implementing, and maintaining CI/CD pipelines for MLOps and DevOps functions
Familiarity with traditional software monitoring, scaling, and quality management systems (QMS)
Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrated hands-on knowledge of open-source adoption and use cases
Good understanding of data/information security
Proficient in data structures, ML algorithms, and the ML lifecycle

Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Model versioning tools: MLflow, etc.
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.

Demonstrates good knowledge of engineering practices
Demonstrates excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
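The workflow orchestration frameworks this posting names (Airflow, Kubeflow) schedule pipeline tasks as dependency graphs. A minimal, framework-free sketch of that idea using only the standard library; the task names are hypothetical, and a real orchestrator would actually invoke each task:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ML pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "ingest": set(),
    "validate": {"ingest"},
    "train": {"validate"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def run_pipeline(dag):
    """Execute tasks in dependency order, returning the execution log."""
    order = list(TopologicalSorter(dag).static_order())
    log = []
    for task in order:
        log.append(task)  # an orchestrator would run the task's code here
    return log

print(run_pipeline(pipeline))  # dependencies always run before dependents
```

Airflow and Kubeflow add scheduling, retries, and distributed execution on top of exactly this topological-ordering core.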
Posted 2 weeks ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Artificial Intelligence Engineer Location: Noida (In-Office) Employment Type: Full-time, Permanent Job ID: AI-007
About Us: We are a cybersecurity product company building next-generation platforms for threat detection, malware sandboxing, SIEM, and real-time telemetry analytics. As we scale the intelligence layer across our ecosystem, we are looking for an AI Developer to lead the design, development, and integration of AI-driven services into our products.
Key Responsibilities: Design and develop AI/ML-based services for cybersecurity applications such as threat detection, malware classification, behavior analysis, log correlation, and anomaly detection. Seamlessly integrate AI solutions into existing platforms including our sandbox system, SIEM, and T-Sense telemetry engine. Build, deploy, and maintain AI microservices/APIs using tools like FastAPI or Flask for scalable inference and automation. Fine-tune or train models using real-world security datasets; optimize performance for production environments. Leverage LLMs, vector search, or prompt-engineered workflows where applicable to enhance product capabilities. Collaborate closely with product managers, backend developers, and security researchers to map real use-cases into applied AI features. Stay current with advancements in AI/ML, especially in the context of security and adversarial defense.
Required Skills: Proficiency in Python and core AI/ML libraries: TensorFlow, PyTorch, scikit-learn, HuggingFace, etc. Experience building, training, and serving machine learning models in production. Solid understanding of cybersecurity concepts, logs, behavioral analysis, or threat classification. Hands-on experience with LLM APIs, embeddings, and integration of vector databases (e.g., FAISS, Weaviate, Pinecone). Ability to build AI solutions with Docker and REST APIs and deploy them in real-time environments. Comfortable converting AI prototypes into modular, testable, and scalable backend services.
Nice-to-Have: Past experience developing custom AI models for niche or production use-cases. Familiarity with SIEM platforms, malware datasets, log pipelines, or SOC workflows. Experience with MLOps tools like MLflow, DVC, or Kubeflow. Open-source contributions or public AI/security projects are a big plus.
Preferred Experience: 2+ years in applied AI/ML development, preferably with exposure to cybersecurity products. Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. Portfolio, GitHub, or case studies demonstrating applied AI projects or systems.
Why Join Us: Work on cutting-edge AI systems in a cybersecurity-first environment. Opportunity to influence and own AI components across multiple products. Solve complex, real-world threats using AI at scale. In-office collaboration with a focused product-engineering team based in Noida. Competitive salary, learning budget, product ownership, and growth opportunities.
How to Apply: Fill in the application form: https://forms.gle/yFSoycUj46SDFjuq8
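The embeddings-plus-vector-database workflow this role mentions (FAISS, Weaviate, Pinecone) reduces to nearest-neighbour search by cosine similarity. A stdlib-only toy version; the 3-dimensional vectors below are invented stand-ins for real embedding output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "embeddings" of previously indexed log events.
index = {
    "failed login burst": [0.9, 0.1, 0.0],
    "routine heartbeat":  [0.0, 1.0, 0.1],
    "port scan detected": [0.8, 0.0, 0.3],
}

def search(query_vec, k=2):
    """Return the k indexed items most similar to the query vector."""
    ranked = sorted(index, key=lambda key: cosine(query_vec, index[key]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.1]))  # most similar entries first
```

Production vector databases replace this exhaustive scan with approximate-nearest-neighbour indexes so the same lookup scales to millions of embeddings.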
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Chennai
Work from Office
Job Summary: We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products.
Key Responsibilities: Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment. Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs. Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders. Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes. Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models. Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors.
Required Qualifications & Skills: Master's or PhD in Computer Science, Statistics, Mathematics, or a related field. 5+ years of practical experience in data science, including deploying models to production. Expertise in Python and SQL. Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Competence in data visualization tools like Tableau, Power BI, and matplotlib. Comprehensive knowledge of statistics, machine learning principles, and experimental design. Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control. Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD).
Familiarity with NLP, time series forecasting, or recommendation systems is a plus. Knowledge of big data technologies (Spark, Hive, Presto) is desirable. Timings: 1:00 pm to 10:00 pm (IST). Work Mode: WFO (Mon-Fri).
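The A/B-testing work this posting describes usually comes down to comparing two conversion rates with a two-proportion z-test, which needs nothing beyond the standard library. A sketch with invented counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: the two underlying conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 200/2000 control conversions vs 260/2000 variant.
z = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2))  # |z| > 1.96 rejects H0 at the 5% significance level
```

For the counts above, z comes out near 2.97, so the variant's lift would be statistically significant at the 5% level.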
Posted 2 weeks ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Key Responsibilities Develop and manage end-to-end ML pipelines from training to production. Automate model training, validation, and deployment using CI/CD. Ensure scalability and reliability of AI systems with Docker & Kubernetes. Optimize ML model performance for low latency and high availability. Design and maintain cloud-based/hybrid ML infrastructure. Implement monitoring, logging, and alerting for deployed models. Ensure security, compliance, and governance in AI model deployments. Collaborate with Data Scientists, Engineers, and Product Managers. Define and enforce MLOps best practices (versioning, reproducibility, rollback). Maintain model registry and conduct periodic pipeline reviews. Experience & Skills 5+ years in MLOps, DevOps, or Cloud Engineering. Strong experience in ML model deployment, automation, and monitoring. Proficiency in Kubernetes, Docker, Terraform, and cloud platforms (AWS, Azure, GCP). Hands-on with CI/CD tools (GitHub Actions, Jenkins). Expertise in ML frameworks (TensorFlow, PyTorch, MLflow, Kubeflow). Understanding of APIs, microservices, and infrastructure-as-code. Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog). Strong analytical and debugging skills. Preferred: Cloud or MLOps certifications, real-time ML, ETL, and AI ethics knowledge. Tools & Technologies Cloud & Infrastructure: AWS, GCP, Azure, Terraform, Kubernetes, Docker. MLOps & Model Management: MLflow, Kubeflow, TFX, SageMaker. CI/CD & Automation: GitHub Actions, Jenkins, ArgoCD, Airflow. Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog. Collaboration & Documentation: Slack, Confluence, JIRA, Notion. Why Join Yavar? Join us at an exciting growth phase, working with cutting-edge AI technology and talented teams. This role offers competitive compensation, equity participation, and the opportunity to shape a world-class engineering organization. 
Your leadership will be crucial in turning our innovative vision into reality, creating products that reshape how enterprises harness artificial intelligence. *At Yavar, talent knows no boundaries. While experience matters, we value your drive for excellence and ability to execute. Ready to build the future of enterprise AI? We're eager to start a conversation.* To apply, please contact: digital@yavar.ai Location: Chennai / Coimbatore.
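The "versioning, reproducibility, rollback" practices this role enforces are what a model registry such as MLflow's provides. A deliberately tiny in-memory stand-in for the core idea; everything here is illustrative, not MLflow's actual API:

```python
class ModelRegistry:
    """Toy registry: versioned artifacts plus a promote/rollback pointer."""

    def __init__(self):
        self.versions = []      # append-only history of model artifacts
        self.production = None  # index of the version currently serving

    def register(self, artifact):
        """Store a new artifact and return its 1-based version number."""
        self.versions.append(artifact)
        return len(self.versions)

    def promote(self, version):
        """Point production traffic at the given version."""
        self.production = version - 1

    def rollback(self):
        """Fall back to the previous version, if one exists."""
        if self.production is not None and self.production > 0:
            self.production -= 1

    def live(self):
        """Return the artifact currently in production."""
        return self.versions[self.production]

reg = ModelRegistry()
reg.register("model-v1.bin")
v2 = reg.register("model-v2.bin")
reg.promote(v2)
reg.rollback()              # v2 misbehaves in production: fall back to v1
print(reg.live())
```

Real registries add stage labels, approval workflows, and artifact storage, but the append-only history plus movable production pointer is the mechanism that makes rollback cheap and auditable.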
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
FEQ326R365 As a Manager, Field Engineering (Strategic Accounts, Data & AI), you will lead a team of Solutions Architects (Data & AI) focused on large enterprise and strategic customers across Europe, with a presence in India. This is a unique role in which you will help build and lead our technical pre-sales team in India, dedicated to supporting customers headquartered in Europe and the UK. Leading a team in India requires significant collaboration and partnership with teams in the UK, France, Germany and the rest of Europe. Your experience partnering with the sales organisation will help close revenue on opportunities of $1M+ ARR, while you coach new sales and pre-sales team members to work together. You will guide and get involved to enhance your team's effectiveness; be an expert at positioning and articulating business-value-focused solutions to our customers and prospects; support various stages of the sales cycle; and build relationships with key stakeholders in large corporations.
The Impact You Will Have: Manage hiring and build the pre-sales team of Solutions Architects in the Data & AI domain. Rapidly scale the designated Field Engineering segment organisation without sacrificing calibre. Build a collaborative culture within a rapid-growth team. Embody and promote Databricks' customer-obsessed, team-oriented and diverse culture. Increase the return on investment of SA involvement in sales cycles by 2-3x over 18 months. Promote a solution- and value-based-selling field-engineering organisation. Coach and mentor the Solutions Architect team to understand our customers' business needs and identify revenue potential in their accounts. Interface with leadership and C-suite stakeholders at strategic customers in the assigned region to position the strength of Databricks and the comprehensive solutions strategy, and to build trust and credibility in the account.
Build Databricks' brand in India in partnership with the Marketing and Sales teams. Bring the experience, priorities, and takeaways of the field engineering team into the organisation's planning and strategy roadmap.
What We Look For: Proven experience in successfully building and managing a pre-sales team. Technical or consulting background in Data Engineering, database technologies or Data Science. Proven experience in driving strategic planning and accurately forecasting sales trends in a consumption-driven business. Ability to partner and collaborate with Sales and other cross-functional leaders. Success in instituting processes that let technical field members drive efficiency and data-driven innovation. Passion for the data and AI market and cloud software models, with the ability to deliver a strong POV on the value of Databricks solutions.
About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
GAQ326R278 We are looking for an experienced Cash Application Analyst to manage and optimize the cash application process at our tech company, which operates on a usage-based billing model. The ideal candidate will ensure the accurate and timely application of customer payments to invoices, contribute to efficient cash flow management, and collaborate with key stakeholders to enhance processes. This role requires a meticulous professional with strong analytical skills, experience in high-volume cash application environments, and the ability to navigate a fast-paced environment. This position caters to the US (EST) timezone.
The Impact You Will Have: Oversee the daily cash application process, ensuring customer payments are accurately and promptly applied to invoices. Monitor and reconcile incoming payments across multiple payment channels, including ACH, wire transfers, credit card transactions, and checks. Collaborate closely with the billing and collections teams to resolve discrepancies and support seamless end-to-end cash management. Develop and implement cash application policies and procedures that align with the unique aspects of a usage-based billing system. Identify and address payment discrepancies, customer account issues, and unapplied cash, facilitating timely resolutions. Maintain and update comprehensive documentation of cash application processes and customer payment records. Generate reports on cash application metrics, providing actionable insights to senior finance leadership. Participate in system enhancements and software implementations to improve cash application automation and efficiency. Liaise with the customer support team to handle customer inquiries related to payments and account reconciliations. Drive continuous process improvements and leverage technology to enhance accuracy, reduce processing times, and streamline operations. Ensure compliance with company policies and relevant financial regulations.
What We Look For: Bachelor's degree in Finance, Accounting, Business Administration, or a related field preferred. Minimum of 4 years of experience in cash application, accounts receivable, or related financial operations. Strong understanding of usage-based billing models and associated financial processes. In-depth NetSuite experience and advanced Excel skills required. SaaS cash application experience preferred. Excellent attention to detail and problem-solving skills. Proficiency with ERP systems and payment processing platforms. Strong analytical skills with the ability to interpret data and generate reports. Effective communication and interpersonal skills for collaboration with cross-functional teams. Ability to operate in a fast-paced environment with tight deadlines. Experience in handling complex reconciliations and certifications. This role will be in EST hours - 6pm IST onwards.
Posted 2 weeks ago
5.0 years
15 - 20 Lacs
Thiruvananthapuram Taluk, India
Remote
Are you passionate about building AI systems that create real-world impact? We are hiring a Senior AI Engineer with 5+ years of experience to design, develop, and deploy cutting-edge AI/ML solutions. 📍 Location: [Trivandrum / Kochi / Remote – customize based on your need] 💼 Experience: 5+ years 💰 Salary: ₹15–20 LPA 🚀 Immediate Joiners Preferred 🔧 What You’ll Do Design and implement ML/DL models for real business problems Build data pipelines and perform preprocessing for large datasets Use advanced techniques like NLP, computer vision, reinforcement learning Deploy AI models using MLOps best practices Collaborate with data scientists, developers & product teams Stay ahead of the curve with the latest research and tools ✅ What We’re Looking For 5+ years of hands-on AI/ML development experience Strong in Python, with experience in TensorFlow, PyTorch, Scikit-learn, Hugging Face Knowledge of NLP, CV, DL architectures (CNNs, RNNs, Transformers) Experience with cloud platforms (AWS/GCP/Azure) and AI services Solid grasp of MLOps, model versioning, deployment, monitoring Strong problem-solving, communication, and mentoring skills 💻 Tech Stack You’ll Work With Languages: Python, SQL Libraries: TensorFlow, PyTorch, Keras, Transformers, Scikit-learn Tools: Git, Docker, Kubernetes, MLflow, Airflow Platforms: AWS, GCP, Azure, Vertex AI, SageMaker Skills: cloud platforms (aws, gcp, azure),docker,computer vision,git,pytorch,airflow,hugging face,nlp,ml,ai,deep learning,kubernetes,mlflow,mlops,tensorflow,scikit-learn,python,machine learning
Posted 2 weeks ago
4.0 - 9.0 years
0 - 1 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Hi, please find the JD below and send me your updated resume.
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 3+ years of experience in MLOps, DevOps, or ML Engineering roles. Strong experience with containerization (Docker) and orchestration (Kubernetes). Proficiency in Python and experience working with ML libraries like TensorFlow, PyTorch, or scikit-learn. Familiarity with ML pipeline tools such as MLflow, Kubeflow, TFX, Airflow, or SageMaker Pipelines. Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code tools (Terraform, CloudFormation). Solid understanding of CI/CD principles, especially as applied to machine learning workflows.
Nice-to-Have: Experience with feature stores, model registries, and metadata tracking. Familiarity with data versioning tools like DVC or LakeFS. Exposure to data observability and monitoring tools. Knowledge of responsible AI practices including fairness, bias detection, and explainability.
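Data-versioning tools like the DVC and LakeFS named above track datasets by content hash rather than by filename, so the same bytes always resolve to the same version. A rough stdlib sketch of that core idea; the snapshot contents are made up, and real DVC stores its hashes in `.dvc` pointer files:

```python
import hashlib

def dataset_version(data: bytes) -> str:
    """Content-addressed version id (truncated sha256 digest)."""
    return hashlib.sha256(data).hexdigest()[:12]

snapshot_a = b"id,label\n1,malware\n2,benign\n"
snapshot_b = b"id,label\n1,malware\n2,benign\n3,benign\n"

# Identical bytes always map to the same version; any edit yields a new one.
print(dataset_version(snapshot_a) == dataset_version(snapshot_a))  # True
print(dataset_version(snapshot_a) != dataset_version(snapshot_b))  # True
```

Content addressing is what makes experiments reproducible: a training run can record the exact dataset version it consumed instead of a mutable path.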
Posted 2 weeks ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: Python Developer (AI/ML Projects – 5–7 Years Experience) Location: Onsite – Indore, India Job Type: Full-time Experience Required: 5 to 7 years Notice Period: Immediate Joiners Preferred
About the Role: We are seeking an experienced Python Developer with 5–7 years of professional experience, including hands-on project work in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate should have strong backend development skills along with a solid foundation in AI/ML, capable of designing scalable solutions and deploying intelligent systems.
Key Responsibilities: Design, develop, and maintain backend applications using Python. Build and integrate RESTful APIs and third-party services. Work on AI/ML projects including model development, training, deployment, and performance tuning. Collaborate with Data Scientists and ML Engineers to implement and productionize machine learning models. Manage data pipelines and model lifecycle using tools like MLflow or similar. Write clean, testable, and efficient code using Python best practices. Work with relational and NoSQL databases such as PostgreSQL, MySQL, MongoDB, etc. Participate in code reviews, architecture discussions, and agile ceremonies.
Required Skills & Experience: 5–7 years of hands-on Python development experience. Strong experience with frameworks such as Django, Flask, or FastAPI. Proven track record of working on AI/ML projects (end-to-end model lifecycle). Good understanding of machine learning libraries like Scikit-learn, TensorFlow, Keras, PyTorch, etc. Experience with data preprocessing, model training, evaluation, and deployment. Familiarity with data handling tools: Pandas, NumPy, etc. Working knowledge of REST API development and integration. Experience with Docker, Git, CI/CD, and cloud platforms (AWS/GCP/Azure). Familiarity with databases – SQL and NoSQL. Experience with model tracking tools like MLflow or DVC is a plus.
Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Engineering, or a related field. Experience with cloud-based AI/ML services (AWS SageMaker, Azure ML, GCP AI Platform). Exposure to MLOps practices and tools is highly desirable. Understanding of NLP, Computer Vision, or Generative AI concepts is a plus.
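The model-tracking tools this posting mentions (MLflow, DVC) revolve around logging each training run's parameters and metrics, then selecting the best run. A stdlib-only caricature of that loop; the field names and numbers are invented, not MLflow's schema:

```python
runs = []

def log_run(params, metric):
    """Record one training run's hyperparameters and its eval metric."""
    runs.append({"params": params, "metric": metric})

def best_run():
    """Return the run with the highest metric (e.g. validation accuracy)."""
    return max(runs, key=lambda r: r["metric"])

# Three hypothetical runs of the same model with different learning rates.
log_run({"lr": 0.1}, 0.81)
log_run({"lr": 0.01}, 0.88)
log_run({"lr": 0.001}, 0.85)

print(best_run()["params"])
```

MLflow adds persistent storage, artifact logging, and a UI around this pattern, but run comparison is the heart of the model lifecycle work described above.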
Posted 2 weeks ago