4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Team
Data is at the core of Outreach's strategy. It drives us and our customers to the highest levels of success. We use it for everything from customer health scores and revenue dashboards to operational metrics of our AWS infrastructure, to helping increase product engagement and user productivity through natural language understanding, to predictive analytics and causal inference via experimentation. As our customer base continues to grow, we are looking toward new ways of leveraging our data to understand our customers' needs more deeply and to deliver new products and features that continuously improve their customer engagement workflows. The mission of the Data Science team is to enable such continuous optimization by reconstructing customer engagement workflows from data, developing metrics to measure the success and efficiency of these workflows, and providing tools to support their optimization. As a member of the team, you will work closely with other data scientists, machine learning engineers, and application engineers to define and implement our strategy for delivering on this mission.

Your Daily Adventures Will Include
• Designing, implementing, and improving machine learning systems.
• Contributing to machine learning applications end to end, i.e., from research to prototype to production.
• Working with product managers, designers, and customers to define the vision and strategy for a given product.

Our Vision Of You
• A hybrid data science engineer who can navigate both sides with little help from others.
• You understand the typical lifecycle of machine learning product development, from inception to production. Experience in Gen AI application/agent development is a plus.
• You have strong programming skills in at least one programming language (Python, Golang, etc.). Experience with frameworks such as LangChain or the OpenAI Agents SDK is a plus.
• You have experience building microservices. Experience with Golang is a plus.
• You have substantial experience building and managing infrastructure for deploying and running ML models in production.
• You have experience working with distributed data processing frameworks such as Spark (see the sketch after this posting). Experience with Spark's MLlib, AWS, Databricks, or MLflow is a plus.
• You have knowledge of statistics and machine learning, and practical experience applying it to solve real-world problems.
• You are hands-on, able to quickly pick up new tools and languages, and excited about building things and experimenting.
• You go above and beyond to help your team.
• You are able to work alongside experienced engineers, designers, and product managers to help deliver new customer-facing features and products.
• You have a degree in Computer Science, Data Science, or a related field, and 4-6 years of industry or equivalent experience.
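As an illustration of the distributed data processing skills this posting mentions, here is a minimal PySpark MLlib pipeline sketch; the column names and the tiny in-memory dataset are hypothetical stand-ins for real engagement data:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("engagement-model").getOrCreate()

# Hypothetical engagement data: two feature columns and a binary label.
df = spark.createDataFrame(
    [(1.0, 3.0, 0), (2.0, 1.0, 1), (0.5, 4.0, 0), (3.0, 0.5, 1)],
    ["emails_sent", "reply_latency_days", "converted"],
)

# Assemble raw columns into the single vector column MLlib expects.
assembler = VectorAssembler(
    inputCols=["emails_sent", "reply_latency_days"], outputCol="features"
)
lr = LogisticRegression(featuresCol="features", labelCol="converted")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("converted", "prediction").show()
```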
Posted 5 days ago
5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary:
We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.

Key Responsibilities:
• Design and implement machine learning models and pipelines using AWS SageMaker and related services (see the sketch after this posting).
• Develop and maintain robust data pipelines for training and inference workflows.
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
• Implement MLOps best practices, including CI/CD for ML, model versioning, monitoring, and retraining strategies.
• Optimize model performance and ensure scalability and reliability in production environments.
• Monitor deployed models for drift, performance degradation, and anomalies.
• Document processes, architectures, and workflows for reproducibility and compliance.

Required Skills & Qualifications:
• Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of machine learning algorithms, model evaluation, and tuning.
• Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
• Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
• Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
• Familiarity with monitoring tools and logging frameworks for ML systems.
• Excellent problem-solving and communication skills.

Preferred Qualifications:
• AWS Certification (e.g., AWS Certified Machine Learning - Specialty).
• Experience with real-time inference and streaming data.
• Knowledge of data governance, security, and compliance in ML systems.
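For a sense of the SageMaker work involved, here is a minimal sketch of invoking an already-deployed real-time endpoint with boto3; the endpoint name, region, and payload are hypothetical:

```python
import boto3

# Assumes a model has already been deployed to a real-time endpoint.
runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

response = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",   # hypothetical endpoint name
    ContentType="text/csv",
    Body="42.0,7,1,0.35",              # one feature row, CSV-encoded
)

prediction = response["Body"].read().decode("utf-8")
print(prediction)
```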
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: MLOps Support Engineer

*Job description:*
We are looking for a skilled MLOps Support Engineer to join our team. This role involves monitoring and managing ML model operational pipelines in AzureML and MLflow, with an emphasis on automation, integration validation, and CI/CD pipeline management. The ideal candidate will be technically sound in Python, Azure CLI, and MLOps tools, and capable of ensuring stability and reliability in model deployment lifecycles.

*Objectives of the role:*
Support and monitor MLOps pipelines in AzureML and MLflow
Manage CI/CD pipelines for model deployment and updates
Handle model registry processes, ensuring best practices for versioning and tracking (see the sketch after this posting)
Perform testing and validation of integrated endpoints to ensure non-functional stability
Automate monitoring and upkeep of ML pipelines to relieve the data science team
Troubleshoot and resolve pipeline and integration-related issues

*Responsibilities:*
Support production ML pipelines using AzureML and MLflow
Configure and manage model versioning and registry lifecycle
Automate alerts, monitoring tasks, and routine pipeline operations
Validate REST API endpoints for ML models
Implement CI/CD workflows for ML deployments
Document and troubleshoot operational issues related to ML services
Collaborate with data scientists and platform teams to ensure delivery continuity

*Required Skills & Qualifications:*
Proficiency in AzureML, MLflow, and Databricks
Strong command of Python
Experience with Azure CLI and scripting
Good understanding of CI/CD practices in MLOps
Knowledge of model registry management and deployment validation
3-5 years of relevant experience in MLOps environments

*Good to have, but not mandatory:*
Exposure to monitoring tools (e.g., Azure Monitor, Prometheus)
Experience with REST API testing (e.g., Postman)
Familiarity with Docker/Kubernetes in ML deployments
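A minimal sketch of the model-registry workflow this role centers on, using the MLflow client API; the run ID, model name, and stage are illustrative, and the tracking URI is assumed to be configured:

```python
import mlflow
from mlflow.tracking import MlflowClient

# Assumes MLFLOW_TRACKING_URI points at the team's tracking server.
client = MlflowClient()

# Register the model artifact from a finished run (run ID is hypothetical).
result = mlflow.register_model(
    model_uri="runs:/abc123def456/model",
    name="risk-scoring-model",
)

# Promote the new version after endpoint validation passes.
client.transition_model_version_stage(
    name="risk-scoring-model",
    version=result.version,
    stage="Production",
    archive_existing_versions=True,
)
```

Newer MLflow releases favor model aliases over stage transitions, but the register-validate-promote flow is the same.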
Posted 5 days ago
4.0 years
25 - 35 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 2500000 - Rs 3500000 (i.e., INR 25-35 LPA)
Min Experience: 4 years
Location: Bengaluru
JobType: full-time

We are looking for a highly skilled and motivated Machine Learning Engineer with 4-6 years of experience to join our growing team. As a core member of our AI/ML division, you will be responsible for designing, developing, deploying, and maintaining machine learning solutions that power real-world products and systems. You'll collaborate with data scientists, software engineers, and product teams to bring cutting-edge ML models from research into production. The ideal candidate will have a strong foundation in machine learning algorithms, statistical modeling, and data preprocessing, along with proven experience deploying models at scale. This role offers the opportunity to work on a variety of projects involving predictive modeling, recommendation systems, NLP, computer vision, and more.

Requirements

Key Responsibilities:
Design and develop robust, scalable, and efficient machine learning models for various business applications.
Collaborate with data scientists and analysts to understand project objectives and translate them into ML solutions.
Perform data cleaning, feature engineering, and exploratory data analysis on large structured and unstructured datasets.
Evaluate, fine-tune, and optimize model performance using techniques such as cross-validation, hyperparameter tuning, and ensembling (illustrated in the sketch after this posting).
Deploy models into production using tools and frameworks such as Docker, MLflow, Airflow, or Kubernetes.
Continuously monitor model performance and retrain/update models as required to maintain accuracy and relevance.
Conduct code reviews, maintain proper documentation, and contribute to best practices in ML development and deployment.
Work with stakeholders to identify opportunities for leveraging ML to drive business decisions.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
4-6 years of hands-on experience designing and implementing machine learning models in real-world applications.
Strong understanding of classical ML algorithms (e.g., regression, classification, clustering, ensemble methods); experience with deep learning techniques (CNNs, RNNs, transformers) is a plus.
Proficiency in Python (preferred) and experience with ML libraries/frameworks such as scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
Experience in data preprocessing, feature selection, and pipeline automation.
Familiarity with version control systems (e.g., Git) and collaborative development environments.
Ability to interpret model results and communicate findings to technical and non-technical stakeholders.
Strong problem-solving skills and a passion for innovation and continuous learning.

Preferred Qualifications:
Experience with cloud-based ML platforms (AWS SageMaker, Azure ML, or Google Cloud AI Platform).
Exposure to big data technologies (e.g., Spark, Hadoop) and data pipeline tools (e.g., Airflow).
Prior experience in model deployment and MLOps practices.
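To illustrate the cross-validation and hyperparameter-tuning work described above, a small scikit-learn sketch on synthetic data; the parameter grid and scoring choice are just examples:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Grid-search a small hyperparameter space with 5-fold cross-validation.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [2, 3]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out AUC:", search.score(X_test, y_test))
```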
Posted 5 days ago
0.0 - 3.0 years
0 - 1 Lacs
Chepauk, Chennai, Tamil Nadu
On-site
Position Title: AI Specialist - Impact-Based Forecasting

Due to the operational nature of this role, preference will be given to applicants who are currently based in Chennai, India and possess valid work authorization. RIMES is committed to diversity and equal opportunity in employment.

Open Period: 11 July 2025 - 10 August 2025

Background:
The Regional Integrated Multi-Hazard Early Warning System for Africa and Asia (RIMES) is an international and intergovernmental institution, owned and governed by its Member States, for the generation, application, and communication of multi-hazard early warning information. RIMES was formed in the aftermath of the 2004 Indian Ocean tsunami as a collective response by countries in Africa and Asia to establish a regional early warning system within a multi-hazard framework, to strengthen preparedness and response to trans-boundary hazards. RIMES was formally established on 30 April 2009 and registered with the United Nations on 1 July 2009. It operates from its regional early warning center located at the Asian Institute of Technology (AIT) campus in Pathumthani, Thailand.

Position Description:
The AI Specialist - Impact-Based Forecasting designs and implements AI-based solutions to support predictive analytics and intelligent decision support across sectors (e.g., climate services, disaster management). The AI Specialist will play a central role in building robust data pipelines, integrating multi-source datasets, and enabling real-time, data-driven decision-making by stakeholders. The role involves drawing from and contributing to multi-disciplinary datasets and working closely with a multi-disciplinary team within RIMES to generate impact-based forecasting (IBF) decision support systems (DSS), develop contingency plans, automate monitoring systems, contribute to Post-Disaster Needs Assessments (PDNA), and apply AI/ML techniques for risk reduction. This position requires a strong understanding of meteorological, hydrological, vulnerability, and exposure patterns, and the ability to translate data into actionable insights for disaster preparedness and resilience planning. The position reports to the Meteorology and Disaster Risk Modeling Specialist and the India Regional Program Adviser, who oversee the AI Specialist's work, or as assigned by RIMES' institutional structure, in close coordination with the Systems Research and Development Specialist and the Project Manager.

Duty station: RIMES Project Office Chennai, India (or other locations as per project requirements).
Type of Contract: Full-time Project-Based contract

Skills and Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
Experience: Minimum of 3 years of experience in data engineering, analytics, or IT systems for disaster management, meteorology, or climate services. Experience in multi-stakeholder projects and facilitating capacity-building programs.

Knowledge, Skills and Abilities:
Machine Learning Fundamentals: Deep understanding of various ML algorithms, including supervised, unsupervised, and reinforcement learning; this includes regression, classification, clustering, time series analysis, anomaly detection, etc. (a brief anomaly-detection sketch follows the duties section below).
Deep Learning: Proficiency with deep learning architectures (e.g., CNNs, RNNs, LSTMs, Transformers) and frameworks (TensorFlow, PyTorch, Keras). Ability to design, train, and optimize complex neural networks.
Strong programming skills in Python, with its extensive libraries (NumPy, Pandas, SciPy, Scikit-learn, Matplotlib, Seaborn, GeoPandas).
Familiarity with AI tools such as PyTorch, TensorFlow, Keras, MLflow, etc.
Data Visualization: Ability to create clear, compelling visualizations to communicate complex data and model outputs.
Familiarity with early warning systems, disaster risk frameworks, and sector-specific IBF requirements is a strong plus.
Proficiency in technical documentation and user training.

Personal Qualities:
Excellent interpersonal skills; team-oriented work style; pleasant personality.
Strong desire to learn and undertake new challenges.
Creative problem-solver; willing to work hard.
Analytical thinker with problem-solving skills.
Strong attention to detail and ability to work under pressure.
Self-motivated, adaptable, and capable of working in multicultural and multidisciplinary environments.
Strong communication skills and the ability to coordinate with stakeholders.

Major Duties and Responsibilities:

Impact-Based Forecasting:
Collaborate with other members of the IT team, meteorologists, hydrologists, GIS specialists, and disaster risk management experts within RIMES to ensure the development of the IBF DSS.
Develop AI models (e.g., NLP, computer vision, reinforcement learning) and integrate them into applications and dashboards.
Ensure model explainability and ethical compliance.
Assist the RIMES team in applying AI/ML models to forecast hazards and project likely impacts based on exposure and vulnerability indices.
Work with forecasters and domain experts to automate the generation of impact-based products.
Ensure data security, backup, and compliance with data governance and interoperability standards.
Train national counterparts on the use and management of the AI tools, including analytics dashboards.
Collaborate with GIS experts, hydromet agencies, and emergency response teams for integrated service delivery.
Prepare technical documentation on data architecture, models, and systems.

Capacity Building and Stakeholder Engagement:
Facilitate training programs for team members and stakeholders, focusing on RIMES policies, regulations, and the use of forecasting tools.
Develop and implement a self-training plan to enhance personal expertise, obtaining a trainer certificate as required.
Prepare and implement training programs to enhance team capacity and submit training outcome reports.

Reporting:
Prepare technical reports, progress updates, and outreach materials for stakeholders.
Maintain comprehensive project documentation, including strategies, milestones, and outcomes, as well as capacity-building workshop materials and training reports.

Other Responsibilities:
Utilize AI skills to assist in system implementation plans and decision support system (DSS) development.
Assist in 24/7 operational readiness for client early warning systems such as SOCs, with backup support from RIMES Headquarters.
Undertake additional tasks as assigned by the immediate supervisor or HR manager based on recommendations from RIMES technical team members and organizational needs. The above responsibilities are illustrative and not exhaustive; undertake any other relevant tasks that may be needed from time to time.

Contract Duration:
The contract will initially be for one year and may be extended based on the satisfactory completion of a 180-day probationary period and subsequent annual performance reviews.
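As a small illustration of the anomaly-detection skills listed under Knowledge, Skills and Abilities, a sketch using scikit-learn's IsolationForest on hypothetical rainfall readings; the data and thresholds are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical daily rainfall (mm) with a few extreme events injected.
rainfall = rng.gamma(shape=2.0, scale=10.0, size=(365, 1))
rainfall[[40, 200, 310]] = [[180.0], [220.0], [150.0]]

# Flag roughly the most unusual 1% of days as anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(rainfall)  # -1 = anomaly, 1 = normal

anomalous_days = np.where(labels == -1)[0]
print("days flagged for review:", anomalous_days)
```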
How to Apply:
Interested candidates should send an application letter, resume, salary expectation, and two references to rimeshra@rimes.int by midnight of 10 August 2025, Bangkok time. Please state "AI Specialist - Impact-Based Forecasting: Your Name" in the subject line of the email. Only short-listed applicants will be contacted.

Ms. Dusadee Padungkul
Head, Department of Operational Support
Regional Integrated Multi-Hazard Early Warning System
AIT Campus, 58 Moo 9 Paholyothin Rd., Klong 1, Klong Luang, Pathumthani 12120 Thailand.

RIMES promotes diversity and inclusion in the workplace. Well-qualified applicants, particularly women, are encouraged to apply.

Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Schedule: Monday to Friday
Ability to commute/relocate: Chepauk, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s):
Kindly specify your salary expectation per month.
Do you have any experience or interest in working with international or non-profit organizations? Please explain.

Education: Bachelor's (Required)
Experience:
Working with an international organization: 1 year (Preferred)
Data engineering: 3 years (Required)
Data analytics: 3 years (Required)
Disaster management: 3 years (Preferred)
Language: English (Required)
Location: Chepauk, Chennai, Tamil Nadu (Required)
Posted 5 days ago
4.0 years
0 Lacs
Hyderabad
On-site
Line of Service: Advisory
Industry/Sector: GPS X-Sector
Specialism: Operations
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In business analysis at PwC, you will focus on analysing and interpreting data to provide strategic insights and recommendations for improving business performance. Your work will involve strong analytical skills and the ability to effectively communicate findings to stakeholders.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

At PwC, our purpose is to build trust in society and solve important problems. We're a network of firms in 157 countries with more than 300,000 people who are committed to delivering quality in Assurance, Advisory and Tax services. Within Advisory, PwC has a large team that focuses on transformation in Government through Digital inclusion. The open position is for a candidate who desires to work with government clients and bring about a change in society. A successful candidate will be expected to work proactively and effectively on multiple client engagements over a period of time and take ownership of the entire delivery of the projects entrusted to them.
Responsibilities:
· Lead the design and implementation of AI/ML models, particularly in the areas of Generative AI and advanced data analytics
· Develop and fine-tune large language models (LLMs) and transformer-based architectures for specific use cases
· Collaborate with data engineers, product owners and business stakeholders to translate requirements into intelligent solutions
· Deploy ML models in production using MLOps practices, ensuring scalability and performance
· Drive experimentation with cutting-edge Gen AI techniques, e.g., text generation, summarization, image synthesis (a summarization sketch follows this posting)
· Conduct data exploration, feature engineering and statistical modeling to support various business needs
· Mentor junior team members and guide them in model development and evaluation best practices
· Stay up to date with the latest research and industry trends in AI, ML and Gen AI
· Document methodologies, models and workflows for knowledge sharing and reuse

Mandatory skill sets:
· 4+ years of experience in AI/ML and data science, with proven experience in Generative AI
· Hands-on expertise with Python and relevant ML/AI libraries (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn)
· Strong understanding of LLMs (e.g., GPT, BERT, T5), transformers and prompt engineering
· Experience with NLP techniques such as text classification, summarization, entity recognition and conversational AI
· Ability to build and evaluate supervised and unsupervised learning models
· Proficiency in data wrangling, exploratory data analysis and statistical techniques
· Familiarity with model deployment tools and platforms (Docker, FastAPI, MLflow, AWS/GCP/Azure ML services)
· Excellent problem-solving, analytical thinking and communication skills

Preferred skill sets:
· Experience fine-tuning open-source LLMs using domain-specific data
· Exposure to reinforcement learning (RLHF), diffusion models or multimodal AI
· Familiarity with vector databases (e.g., FAISS, Pinecone) and retrieval-augmented generation (RAG) architectures
· Experience with ML pipelines and automation (CI/CD for ML, Kubeflow, Airflow)
· Background in conversational AI, chatbots or virtual assistants
· Knowledge of data privacy, ethical AI and explainable AI principles
· Publications, Kaggle participation or open-source contributions in the AI/ML space

Years of experience required: 4+ years
Education qualification: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, Statistics or a related field
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: AI Programming
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Administration, Business Analysis, Business Case Development, Business Data Analytics, Business Process Analysis, Business Process Modeling, Business Process Re-Engineering (BPR), Business Requirements Analysis, Business Systems, Communication, Competitive Analysis, Creativity, Embracing Change, Emotional Regulation, Empathy, Feasibility Studies, Functional Specification, Inclusion, Intellectual Curiosity, IT Project Lifecycle, Learning Agility {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
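A minimal sketch of the kind of Gen AI summarization work described above, using the Hugging Face transformers pipeline API; the model choice and the sample text are illustrative:

```python
from transformers import pipeline

# Load a general-purpose summarization model; a domain-tuned checkpoint
# would be swapped in for production use.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "Citizens in the pilot district submitted 4,200 service requests last "
    "quarter. Road maintenance accounted for 38% of requests, water supply "
    "for 27%, and sanitation for 19%. Median resolution time fell from 14 "
    "days to 9 days after the new triage workflow was introduced."
)

summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```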
Posted 5 days ago
8.0 years
0 Lacs
Ahmedabad
On-site
Job Description:
We are looking for a highly skilled AI/ML Engineer who can design, implement, and optimize machine learning solutions, including traditional models, deep learning architectures, and generative AI systems. The ideal candidate will have strong hands-on experience with MLOps, data pipelines, and LLM optimization. You will collaborate with data engineers and cross-functional teams to develop scalable, ethical, and high-performance AI/ML solutions that drive business impact.

Requirements

Key Responsibilities:
Develop, implement, and optimize AI/ML models using both traditional machine learning and deep learning techniques.
Design and deploy generative AI models for innovative business applications.
Collaborate with data engineers to build and maintain high-quality data pipelines and preprocessing workflows.
Integrate responsible AI practices to ensure ethical, explainable, and unbiased model behavior.
Develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models.
Optimize large language models (LLMs) for efficient inference, memory usage, and performance (see the sketch after this posting).
Work closely with product managers, data scientists, and engineering teams to integrate AI/ML into core business processes.
Conduct rigorous testing, validation, and benchmarking of models to ensure accuracy, reliability, and robustness.
Stay abreast of the latest research and advancements in AI/ML, LLMs, MLOps, and generative models.

Required Skills & Qualifications:
Strong foundation in machine learning, deep learning, and statistical modeling techniques.
Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks.
Proficiency in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker.
Experience deploying generative AI solutions, including text, image, or audio generation.
Understanding of responsible AI concepts, including fairness, accountability, and transparency in model building.
Solid experience with MLOps pipelines and continuous delivery of ML models.
Proficiency in optimizing transformer models or LLMs for production workloads.
Familiarity with cloud services (AWS, GCP, Azure) and containerized deployments (Docker, Kubernetes).
Excellent problem-solving and communication skills.
Ability to work collaboratively with cross-functional teams.

Preferred Qualifications:
Experience with data versioning tools like DVC or LakeFS.
Exposure to vector databases and retrieval-augmented generation (RAG) pipelines.
Knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs.
Familiarity with Agile workflows and sprint-based delivery.
Contributions to open-source AI/ML projects or published papers in conferences/journals.

About Company / Benefits:
Lucent Innovation, an 8-year-old company, is a premier Indian IT solutions provider offering web and web application development services to global clients. We are a Shopify Expert and Shopify Plus Partner, with our registered office in India. Lucent Innovation has a highly skilled team of IT professionals. We ensure that our employees have a work-life balance: we follow a five-day work week with no night shifts, and employees are encouraged to report to the office on time and leave on time. The company organizes several indoor/outdoor activities throughout the year, as well as trips for employees. Celebrations are an integral part of our work culture. We celebrate all major festivals like Diwali, Holi, Lohri, Christmas, Navratri (Dandiya), Makar Sankranti, etc. We also enjoy several other important occasions like New Year, Independence Day, Republic Day, Women's Day, Mother's Day, etc.

Perks:
5-day work week
Flexible working hours
No hidden policies
Friendly working environment
In-house training
Quarterly and yearly rewards and appreciation
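A minimal sketch of one LLM-optimization technique named in this posting, post-training dynamic quantization in PyTorch; the toy model here stands in for a real loaded transformer:

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer feed-forward block; in practice this
# would be a loaded pretrained language model.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)
model.eval()

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly, cutting memory and speeding up CPU inference
# at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 512])
```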
Posted 5 days ago
3.0 years
16 - 20 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud)
(Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company - Procurement Analytics)

Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done - we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app: no more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards - just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems - think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use (see the retrieval sketch after this posting).
Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills:
3-5 years of software engineering or ML experience in production environments.
Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices.
Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills:
Product mindset: translate ambiguous requirements into clear deliverables and user value.
Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security.
Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves:
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
Familiarity with Palantir Foundry.
Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
Prior work on conversational UX, prompt marketplaces, or agent simulators.
Contributions to open-source AI projects or published research.

Why Join Us?
Direct impact on products used by Fortune 500 teams.
Work with cutting-edge models and shape best practices for enterprise AI agents.
Collaborative culture that values experimentation, continuous learning, and work-life balance.
Competitive salary, equity, remote-first flexibility, and professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
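A minimal sketch of the retrieval step in a RAG pipeline, using sentence-transformers embeddings with a FAISS index; the model name is a common public checkpoint, and the documents are invented for the example:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative knowledge base; in production these would be chunks of
# enterprise documents (contracts, supplier records, etc.).
docs = [
    "Supplier Acme renewed its contract in March at a 4% discount.",
    "Freight costs rose 12% quarter over quarter on the EU lanes.",
    "The procurement policy requires three quotes above $50,000.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = encoder.encode(["What does policy say about large purchases?"],
                       normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

# The top-k passages would be stuffed into the LLM prompt as context.
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")
```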
Posted 5 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Role:
We are looking for a passionate and results-driven Data Scientist with 2-3 years of experience to join our data science team. This role involves building robust machine learning and deep learning models for high-impact financial use cases such as fraud detection, risk scoring, personalization, and automation. You should have strong Python programming skills and hands-on experience with end-to-end ML/DL model development and MLOps deployment. Experience building and exposing APIs for model interaction and integration with production systems is essential.

Key Responsibilities:
• Design, develop, and deploy ML/DL models for FinTech use cases (e.g., fraud detection, customer risk classification, churn prediction).
• Handle and process large, highly imbalanced datasets using advanced resampling, cost-sensitive learning, or anomaly detection techniques.
• Implement and automate MLOps pipelines for training, testing, monitoring, and deploying models to production (e.g., using MLflow or Kubeflow).
• Build APIs and backend interfaces for seamless model consumption in production applications (see the sketch after this posting).
• Collaborate closely with Data Engineers, Product Managers, and Frontend Developers to operationalize ML solutions.
• Document model assumptions, performance metrics, and testing methodology for audit and compliance readiness.
• Contribute to continuous model monitoring and retraining pipelines to ensure production accuracy and relevance.
• Stay current on emerging ML and GenAI techniques (exposure to GenAI is a plus but not mandatory).

Key Requirements:
• 2-3 years of hands-on experience in a data science/machine learning/deep learning developer role.
• Domain experience in FinTech, payments, or financial services (mandatory).
• Proficiency in Python and popular ML/DL libraries: scikit-learn, XGBoost, TensorFlow, PyTorch.
• Experience with model deployment, Docker, FastAPI/Flask, and building APIs.
• Experience with MLOps tools (e.g., MLflow, DVC).
• Strong knowledge of data preprocessing, feature engineering, and model evaluation.
• Familiarity with version control (Git), CI/CD workflows, and agile practices.
• Strong communication and documentation skills.
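A minimal sketch of exposing a trained model through an API, as this posting requires, using FastAPI; the model path, feature names, and route are hypothetical:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="fraud-scoring-api")

# Hypothetical pre-trained scikit-learn pipeline persisted with joblib.
model = joblib.load("models/fraud_model.joblib")

class Transaction(BaseModel):
    amount: float
    merchant_risk: float
    tx_per_day: int

@app.post("/score")
def score(tx: Transaction) -> dict:
    # Feature order must match the training pipeline.
    features = [[tx.amount, tx.merchant_risk, tx.tx_per_day]]
    prob = model.predict_proba(features)[0][1]
    return {"fraud_probability": round(float(prob), 4)}

# Run with: uvicorn app:app --reload
```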
Posted 5 days ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior/Lead DevOps Engineer - Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities

1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.

2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.

3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.

4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.

5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.

6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines (a drift-check sketch follows this posting).

7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
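As an illustration of the drift detection called for in section 6, a small, self-contained Population Stability Index (PSI) check in plain Python/NumPy; the thresholds in the comment are common rules of thumb, not a Databricks API:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(7)
training_scores = rng.normal(0.30, 0.10, 10_000)  # baseline model scores
live_scores = rng.normal(0.38, 0.12, 2_000)       # shifted live traffic

value = psi(training_scores, live_scores)
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 retrain.
print(f"PSI = {value:.3f}")
```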
Posted 5 days ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of the Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks notebooks, jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized, repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
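A minimal sketch of the MLflow-based deployment flow described in section 6: log a trained model, register it, and promote the new version. The experiment path, model name, and alias below are illustrative assumptions, not part of the posting, and assume an MLflow tracking server (for example, a Databricks workspace) is already configured.

```python
# Minimal MLflow sketch: log, register, and promote a model version.
# Experiment path, model name, and alias are illustrative placeholders.
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=200).fit(X, y)

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical workspace path
with mlflow.start_run() as run:
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model and mark the new version as the champion.
model_uri = f"runs:/{run.info.run_id}/model"
version = mlflow.register_model(model_uri, "demo-classifier")
MlflowClient().set_registered_model_alias("demo-classifier", "champion", version.version)
```

In a CI/CD pipeline of the kind this role owns, a step like this would typically run after automated tests pass, with the alias flip acting as the controlled promotion (and rollback) point.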
Posted 5 days ago
4.0 years
12 - 20 Lacs
Pune, Maharashtra, India
On-site
About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders, and carefully chosen peers like you!

People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
• Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
• Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
• Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
• Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We're looking for a Data Scientist in Pune to drive insights for pharma clients using advanced ML, Gen AI, and LLMs on complex healthcare data. You'll optimize pharma commercial strategies (forecasting, marketing, SFE) and improve patient outcomes (journey mapping, adherence, RWE).

Key Responsibilities
Data Exploration & Problem Framing
• Proactively engage with client/business stakeholders (e.g., Sales, Marketing, Market Access, Commercial Operations, Medical Affairs, Patient Advocacy teams) to deeply understand their challenges and strategic objectives.
• Explore, clean, and prepare large, complex, and sometimes messy datasets from various sources, including but not limited to: sales data, prescription data, claims data, Electronic Health Records (EHRs), patient support program data, CRM data, and real-world evidence (RWE) datasets.
• Translate ambiguous business problems into well-defined data science questions and develop appropriate analytical frameworks.
Advanced Analytics & Model Development
• Design, develop, validate, and deploy robust statistical models and machine learning algorithms (e.g., predictive models, classification, clustering, time series analysis, causal inference, natural language processing).
• Develop models for sales forecasting, marketing mix optimization, customer segmentation (HCPs, payers, pharmacies), sales force effectiveness (SFE) analysis, incentive compensation modelling, and market access analytics (e.g., payer landscape, formulary impact).
• Analyze promotional effectiveness and patient persistency/adherence.
• Build models for patient journey mapping, patient segmentation for personalized interventions, treatment adherence prediction, disease progression modelling, and identifying drivers of patient outcomes from RWE.
• Contribute to understanding patient behavior, unmet needs, and the impact of interventions on patient health.
Generative AI & LLM Solutions
• Extract insights from unstructured text data (e.g., clinical notes, scientific literature, sales call transcripts, patient forum discussions).
• Summarize complex medical or commercial documents.
• Automate content generation for internal use (e.g., draft reports, competitive intelligence summaries).
• Enhance data augmentation or synthetic data generation for model training.
• Develop intelligent search or Q&A systems for commercial or medical inquiries.
• Apply techniques like prompt engineering, fine-tuning of LLMs, and retrieval-augmented generation (RAG); a minimal RAG sketch follows this posting.
Insight Generation & Storytelling
• Transform complex analytical findings into clear, concise, and compelling narratives and actionable recommendations for both technical and non-technical audiences.
• Create impactful data visualizations, dashboards, and presentations using tools like Tableau, Power BI, or Python/R/Alteryx visualization libraries.
Collaboration & Project Lifecycle Management
• Collaborate effectively with cross-functional teams including product managers, data engineers, software developers, and other data scientists.
• Support the entire data science lifecycle, from conceptualization and data acquisition to model development, deployment (MLOps), and ongoing monitoring in production environments.

Qualifications
• Master's or Ph.D. in Data Science, Statistics, Computer Science, Applied Mathematics, Economics, Bioinformatics, Epidemiology, or a related quantitative field.
• 4+ years of progressive experience as a Data Scientist, with demonstrated success in applying advanced analytics to solve business problems, preferably within the healthcare, pharmaceutical, or life sciences industry, using pharma datasets extensively (e.g., sales data from IQVIA, Symphony, Komodo, etc.; CRM data from Veeva, OCE, etc.).
• Must-have: Solid understanding of pharmaceutical commercial operations (e.g., sales force effectiveness, marketing, market access, CRM).
• Must-have: Experience working with real-world patient data (e.g., claims, EHR, pharmacy data, patient registries) and understanding of patient journeys.
• Strong programming skills in Python (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and/or R for data manipulation, statistical analysis, and machine learning.
• Expertise in SQL for data extraction, manipulation, and analysis from relational databases.
• Experience with machine learning frameworks and libraries.
• Proficiency in data visualization tools (e.g., Tableau, Power BI) and/or visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
• Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is a significant advantage.
• Specific experience with Natural Language Processing (NLP) techniques, Generative AI models (e.g., Transformers, diffusion models), Large Language Models (LLMs), and prompt engineering is highly desirable.
• Experience with fine-tuning LLMs, working with models from Hugging Face, or utilizing major LLM APIs (e.g., OpenAI, Anthropic, Google).
• Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Docker, Kubernetes).
• Knowledge of pharmaceutical or biotech industry regulations and compliance requirements like HIPAA, CCPA, SOC, etc.
• Excellent communication, presentation, and interpersonal skills, with the ability to effectively interact with both technical and non-technical stakeholders at all levels.
• Attention to detail, with a bias for quality and client centricity.
• Ability to work independently and as part of a cross-functional team.
• Strong leadership, mentoring, and coaching skills.

Benefits
• Competitive salary and benefits package.
• Opportunity to work on cutting-edge analytics projects transforming the life sciences industry.
• Collaborative and supportive work environment.
• Opportunities for professional development and growth.

Skills: data manipulation, analytics, LLM, generative AI, commercial pharma, MLOps, SQL, Python, natural language processing, data visualization, models, R, machine learning, statistical analysis, GenAI, data, patient outcomes
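As a concrete illustration of the RAG technique referenced under Generative AI & LLM Solutions, here is a minimal retrieval-plus-prompt-assembly sketch. The embedding model, the toy pharma documents, and the prompt template are illustrative assumptions; a production pipeline would add chunking, a vector store, and an actual LLM call.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to a query,
# and assemble a grounded prompt. Model name and documents are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Brand X persistency drops sharply after the third refill.",
    "Payer Y moved Brand X to tier 3 in January.",
    "HCP call activity correlates with new-to-brand prescriptions.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

query = "Why is Brand X adherence falling?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```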
Posted 5 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are looking for a skilled and passionate AI/ML Engineer to join our team and contribute to designing, developing, and deploying scalable machine learning models and AI solutions. The ideal candidate will have hands-on experience with data preprocessing, model building, evaluation, and deployment, with a strong foundation in mathematics, statistics, and software development.

Key Responsibilities
• Design and implement machine learning models to solve business problems.
• Collect, preprocess, and analyze large datasets from various sources.
• Build, test, and optimize models using frameworks like TensorFlow, PyTorch, or Scikit-learn.
• Deploy ML models using cloud services (AWS, Azure, GCP) or edge platforms.
• Collaborate with data engineers, data scientists, and product teams.
• Monitor model performance and retrain models as necessary.
• Stay up to date with the latest research and advancements in AI/ML.
• Create documentation and reports to communicate findings and model results.

Skills & Qualifications
• Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field.
• 2+ years of hands-on experience in building and deploying ML models.
• Proficiency in Python (preferred), R, or similar languages.
• Experience with ML/DL frameworks such as TensorFlow, PyTorch, Scikit-learn, XGBoost.
• Strong grasp of statistics, probability, and algorithms.
• Familiarity with data engineering tools (e.g., Pandas, Spark, SQL).
• Experience in model deployment (Docker, Flask, FastAPI, MLflow, etc.).
• Knowledge of cloud-based ML services (AWS SageMaker, Azure ML, GCP AI Platform).
(ref:hirist.tech)
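To ground the model-deployment requirement above, a minimal sketch of serving a scikit-learn model behind a FastAPI endpoint. The model file name and request schema are illustrative assumptions.

```python
# Minimal model-serving sketch with FastAPI. The model path and feature
# schema are illustrative; run with: uvicorn app:app --reload
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # a previously trained scikit-learn model

class Features(BaseModel):
    values: list[float]  # one flat feature vector per request

@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.array(features.values).reshape(1, -1)  # single-row prediction
    return {"prediction": model.predict(X).tolist()[0]}
```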
Posted 6 days ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title: AI/ML Engineer

Job Summary
We are seeking a talented and passionate AI/ML Engineer with at least 3 years of experience to join our growing data science and machine learning team. The ideal candidate will have hands-on experience in building and deploying machine learning models, data preprocessing, and working with real-world datasets. You will collaborate with cross-functional teams to develop intelligent systems that drive business value.

Key Responsibilities
• Design, develop, and deploy machine learning models for various business use cases.
• Analyze large and complex datasets to extract meaningful insights.
• Implement data preprocessing, feature engineering, and model evaluation pipelines.
• Work with product and engineering teams to integrate ML models into production environments.
• Conduct research to stay up to date with the latest ML and AI trends and technologies.
• Monitor and improve model performance over time.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
• Minimum 3 years of hands-on experience in building and deploying machine learning models.
• Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost.
• Experience with training, fine-tuning, and evaluating ML models in real-world applications.
• Proficiency in Large Language Models (LLMs), including experience using or fine-tuning models like BERT, GPT, LLaMA, or open-source transformers.
• Experience with model deployment, serving ML models via REST APIs or microservices using frameworks like FastAPI, Flask, or TorchServe.
• Familiarity with model lifecycle management tools such as MLflow, Weights & Biases, or Kubeflow.
• Understanding of cloud-based ML infrastructure (AWS SageMaker, Google Vertex AI, Azure ML, etc.).
• Ability to work with large-scale datasets, perform feature engineering, and optimize model performance.
• Strong communication skills and the ability to work collaboratively in cross-functional teams.
(ref:hirist.tech)
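As a sketch of the fine-tuning experience this role asks for, the snippet below fine-tunes a small BERT-family checkpoint for sentence classification with the Hugging Face Trainer. The checkpoint, dataset, and hyperparameters are illustrative assumptions; the IMDB dataset stands in for a real labeled corpus.

```python
# Minimal fine-tuning sketch with Hugging Face transformers + datasets.
# Checkpoint, dataset, and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a real labeled corpus
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    # A small subsample keeps this sketch quick to run end to end.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```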
Posted 6 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms.

Key Responsibilities: How You Will Contribute And What You Will Learn
• Design end-to-end AI architecture tailored to telecom services business functions (e.g., service assurance, orchestration, and fulfilment).
• Define data strategy and AI workflows, including inventory model, ETL, model training, deployment, and monitoring.
• Evaluate and select AI platforms, tools, and frameworks suited to telecom-scale workloads for the development and testing of inventory services solutions.
• Work closely with telecom network experts and architects to align AI initiatives with business goals.
• Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments.
• Mentor AI developers.

Key Skills And Experience
You have:
• 10+ years' experience in AI/ML design and deployment, with a graduate or equivalent degree.
• Practical experience with AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL.
• Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow), and cloud-native deployment (AWS SageMaker, Azure ML).
• Proficiency in programming (Python, Java) and DevOps/MLOps best practices.
It will be nice if you had:
• Worked with any of the LLM models (Llama family) and LLM agent frameworks like LangChain / CrewAI / AutoGen.
• Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts.
• Excellent communication and stakeholder management skills.

About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered.

Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality:
• One of the World's Most Ethical Companies by Ethisphere
• Gender-Equality Index by Bloomberg
• Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.

About The Team
As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
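A hedged illustration of the Kafka-to-Spark ETL workflow named under the data-strategy responsibility: a minimal Structured Streaming job that parses telemetry events and writes them to a Delta table. The broker address, topic name, schema, and paths are illustrative assumptions.

```python
# Minimal PySpark Structured Streaming sketch: Kafka in, Delta out.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("telemetry-etl").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "network-telemetry")
          .load()
          # Kafka values arrive as bytes; decode and parse the JSON payload.
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream.format("delta")
 .option("checkpointLocation", "/chk/telemetry")  # enables exactly-once recovery
 .start("/delta/telemetry"))
```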
Posted 6 days ago
3.0 years
0 Lacs
Panaji, Goa, India
On-site
About the Project
We are seeking a highly skilled and pragmatic AI/ML Engineer to join the team building "a Stealth Prop-tech startup," a groundbreaking digital real estate platform in Dubai. This is a complex initiative to build a comprehensive ecosystem integrating long-term sales, short-term stays, and advanced technologies including AI/ML, data analytics, Web3/blockchain, and conversational AI. You will be responsible for operationalizing the machine learning models that power our most innovative features, ensuring they are scalable, reliable, and performant. This is a crucial engineering role in a high-impact project, offering the chance to build the production infrastructure for cutting-edge AI in the PropTech space.

Job Summary
As an AI/ML Engineer, you will bridge the gap between data science and software engineering. You will be responsible for taking the models developed by our data scientists and deploying them into our production environment. Your work will involve building robust data pipelines, creating scalable training and inference systems, and developing the MLOps infrastructure to monitor and maintain our models. You will collaborate closely with data scientists, backend developers, and product managers to ensure our AI-driven features are delivered efficiently and reliably to our users.

Key Responsibilities
• Design, build, and maintain scalable infrastructure for training and deploying machine learning models at scale.
• Operationalize ML models, including the "TruValue UAE" AVM and the property recommendation engine, by creating robust, low-latency APIs for production use.
• Develop and manage data pipelines (ETL) to feed our machine learning models with clean, reliable data for both training and real-time inference.
• Implement and manage the MLOps lifecycle, including CI/CD for models, versioning, monitoring for model drift, and automated retraining.
• Optimize the performance of machine learning models for speed and cost-efficiency in a cloud environment.
• Collaborate with backend engineers to seamlessly integrate ML services with the core platform architecture.
• Work with data scientists to understand model requirements and provide engineering expertise to improve model efficacy and feasibility.
• Build the technical backend for the AI-powered chatbot, integrating it with NLP services and the core platform data.

Required Skills and Experience
• 3-5+ years of experience in a Software Engineering, Machine Learning Engineering, or related role.
• A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
• Strong software engineering fundamentals with expert proficiency in Python.
• Proven experience deploying machine learning models into a production environment on a major cloud platform (AWS, Google Cloud, or Azure).
• Hands-on experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
• Experience building and managing data pipelines using tools like Apache Airflow, Kubeflow Pipelines, or cloud-native solutions.
• Ability to collaborate with cross-functional teams to integrate AI solutions into products.
• Experience with cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration (Kubernetes).

Preferred Qualifications
• Experience in the PropTech (Property Technology) or FinTech sectors is highly desirable.
• Direct experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, AWS SageMaker, Google AI Platform).
• Familiarity with big data technologies (e.g., Spark, BigQuery, Redshift).
• Experience building real-time machine learning inference systems.
• Strong understanding of microservices architecture.
• Experience working in a collaborative environment with data scientists.
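A minimal sketch, assuming Airflow is the orchestrator, of the ETL-plus-retraining pipeline this role would build; the DAG id, schedule, task names, and callables are illustrative placeholders.

```python
# Minimal Airflow DAG sketch: extract listings, build features, retrain model.
# DAG id, schedule, and callables are illustrative placeholders.
# Note: `schedule` is the Airflow 2.4+ name; older versions use `schedule_interval`.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw listing data")          # e.g., from a warehouse or API

def build_features():
    print("clean data and build features")  # e.g., write a feature table

def retrain():
    print("retrain and register the model") # e.g., log a new model version

with DAG(
    dag_id="listing_model_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="features", python_callable=build_features)
    t3 = PythonOperator(task_id="retrain", python_callable=retrain)
    t1 >> t2 >> t3  # linear dependency: extract -> features -> retrain
```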
Posted 6 days ago
40.0 years
2 - 6 Lacs
Hyderābād
On-site
India - Hyderabad
JOB ID: R-218849
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jul. 08, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
We are seeking an experienced MDM Engineer with 8–12 years of experience to lead development and operations of our Master Data Management (MDM) platforms, with hands-on data engineering experience. This role will involve handling the backend data engineering solution within the MDM team. This is a technical role that will require hands-on work. To succeed in this role, the candidate must have strong data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations.

Roles & Responsibilities:
• Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data.
• Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM logic.
• Implement match/merge algorithms and survivorship rules using Informatica MDM or Reltio APIs (a survivorship sketch follows this posting).
• Build and maintain Delta Lake tables with schema evolution and versioning for master data domains.
• Use AWS services like S3, Glue, Lambda, and Step Functions for orchestrating MDM workflows.
• Automate data quality checks using IDQ or custom PySpark validators with rule-based profiling.
• Integrate external enrichment sources (e.g., D&B, LexisNexis) via REST APIs and batch pipelines.
• Design and deploy CI/CD pipelines using GitHub Actions or Jenkins for Databricks notebooks and jobs.
• Monitor pipeline health using the Databricks Jobs API, CloudWatch, and custom logging frameworks.
• Implement fine-grained access control using Unity Catalog and attribute-based policies for MDM datasets.
• Use MLflow for tracking model-based entity resolution experiments if ML-based matching is applied.
• Collaborate with data stewards to expose curated MDM views via REST endpoints or Delta Sharing.

Basic Qualifications and Experience:
• 8 to 13 years of experience in Business, Engineering, IT or a related field.

Functional Skills:
Must-Have Skills:
• Advanced proficiency in PySpark for distributed data processing and transformation.
• Strong SQL skills for complex data modeling, cleansing, and aggregation logic.
• Hands-on experience with Databricks, including Delta Lake, notebooks, and job orchestration.
• Deep understanding of MDM concepts, including match/merge, survivorship, and golden record creation.
• Experience with MDM platforms like Informatica MDM or Reltio, including REST API integration.
• Proficiency in AWS services such as S3, Glue, Lambda, Step Functions, and IAM.
• Familiarity with data quality frameworks and tools like Informatica IDQ or custom rule engines.
• Experience building CI/CD pipelines for data workflows using GitHub Actions, Jenkins, or similar.
• Knowledge of schema evolution, versioning, and metadata management in data lakes.
• Ability to implement lineage and observability using Unity Catalog or third-party tools.
• Comfort with Unix shell scripting or Python for orchestration and automation.
• Hands-on experience with RESTful APIs for ingesting external data sources and enrichment feeds.

Good-to-Have Skills:
• Experience with Tableau or Power BI for reporting MDM insights.
• Exposure to Agile practices and tools (JIRA, Confluence).
• Prior experience in Pharma/Life Sciences.
• Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
• Any MDM certification (e.g., Informatica, Reltio).
• Any data analysis certification (SQL, Python, PySpark, Databricks).
• Any cloud certification (AWS or Azure).

Soft Skills:
• Strong analytical abilities to assess and improve master data processes and solutions.
• Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
• Effective problem-solving skills to address data-related issues and implement scalable solutions.
• Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

GCF Level 05A
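As an illustration of the survivorship/golden-record logic referenced above, a minimal PySpark sketch that ranks duplicate records per entity by source trust and recency and keeps one golden record. The column names, sources, and the ranking rule are illustrative assumptions, not Amgen's actual survivorship rules.

```python
# Minimal survivorship sketch in PySpark: keep one golden record per entity,
# preferring the more trusted source, then the most recent update.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("survivorship-demo").getOrCreate()

records = spark.createDataFrame(
    [("E1", "CRM", "2024-01-05", "Dr. A Rao"),
     ("E1", "CLAIMS", "2024-03-01", "Dr. Anil Rao"),
     ("E2", "CRM", "2023-11-20", "Dr. B Shah")],
    ["entity_id", "source", "updated_at", "name"],
)

priority = F.when(F.col("source") == "CRM", 1).otherwise(2)  # trust CRM first
w = Window.partitionBy("entity_id").orderBy(priority, F.col("updated_at").desc())

golden = (records.withColumn("rank", F.row_number().over(w))
          .filter("rank = 1")   # survivor: best source, then most recent
          .drop("rank"))
golden.show()
```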
Posted 6 days ago
3.0 years
16 - 20 Lacs
India
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:

Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done — we're redefining it. At Sensecloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions.

If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
• Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
• Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
• Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
• Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies.
• Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
• Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
• Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
• 3–5 years software engineering or ML experience in production environments.
• Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go a bonus.
• Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
• Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models (see the FAISS sketch after this posting).
• Experience building and securing REST/GraphQL APIs and microservices.
• Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
• Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar).
• Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills
• Product mindset: translate ambiguous requirements into clear deliverables and user value.
• Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
• Collaboration & ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
• Bias for action: experiment quickly, measure, iterate — without sacrificing quality or security.
• Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
• Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
• Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
• Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
• Familiarity with Palantir/Foundry.
• Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
• Prior work on conversational UX, prompt marketplaces, or agent simulators.
• Contributions to open-source AI projects or published research.

Why Join Us?
• Direct impact on products used by Fortune 500 teams.
• Work with cutting-edge models and shape best practices for enterprise AI agents.
• Collaborative culture that values experimentation, continuous learning, and work–life balance.
• Competitive salary, equity, remote-first flexibility, and professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances to get shortlisted & meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
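To make the vector-database requirement concrete, a minimal FAISS similarity-search sketch; the embedding dimension and the random vectors are stand-ins for real embedding-model output.

```python
# Minimal FAISS sketch: index embedding vectors and run a k-NN query.
# Dimension and vectors are random stand-ins for real embeddings.
import faiss
import numpy as np

dim = 384                                # e.g., a MiniLM-sized embedding
rng = np.random.default_rng(0)
doc_vecs = rng.random((100, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)           # exact L2 search; IVF/HNSW scale further
index.add(doc_vecs)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)  # 5 nearest neighbors
print(ids[0], distances[0])              # row indices of the closest documents
```

In an agent pipeline, the returned ids would map back to document chunks that get injected into the LLM's context, which is the retrieval half of a RAG system.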
Posted 6 days ago
6.0 - 8.0 years
10 - 16 Lacs
Pune
Hybrid
We are hiring for an AI/ML engineer with 6-8 years of experience.
Job Location: Pune
Immediate joiners - 30 days

AI/ML Engineer
Job Purpose:
To leverage expertise in machine learning operations (ML Ops) to develop and maintain robust, scalable, and efficient ML infrastructure, enhancing a financial crime product and ensuring the seamless deployment and monitoring of ML models.

Job Description:
As an ML Ops Engineer - Senior Consultant, you will play a crucial role in a project aimed at overhauling and enhancing a financial crime product. This includes upgrading back-end components, data structures, and reporting capabilities. You will be responsible for developing and maintaining the ML infrastructure, ensuring the seamless deployment, monitoring, and management of ML models. This role involves collaborating with cross-functional teams to gather requirements, ensuring data accuracy, and providing actionable insights to support strategic decision-making.

Key Responsibilities:
• Develop and maintain robust, scalable, and efficient ML infrastructure.
• Ensure seamless deployment, monitoring, and management of ML models.
• Collaborate with cross-functional teams to gather and analyze requirements.
• Ensure data accuracy and integrity across all ML Ops solutions.
• Provide actionable insights to support strategic decision-making.
• Contribute to the overhaul and enhancement of back-end components, data structures, and reporting capabilities.
• Support compliance and regulatory reporting needs.
• Troubleshoot and resolve ML Ops-related issues in a timely manner.
• Stay updated with the latest trends and best practices in ML Ops.
• Mentor junior team members and provide guidance on best practices in ML Ops.

Skills Required:
• Proficiency in ML Ops tools and frameworks (e.g., MLflow, Kubeflow, TensorFlow Extended)
• Strong programming skills (e.g., Python, Bash)
• Experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitLab CI)
• Excellent analytical and problem-solving abilities
• Effective communication and collaboration skills
• Attention to detail and commitment to data accuracy

Skills Desired:
• Experience with cloud platforms (e.g., AWS, Azure, GCP)
• Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes)
• Familiarity with big data technologies (e.g., Hadoop, Spark)
• Ability to work in an agile development environment
• Experience in the financial crime domain
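Model monitoring is central to this role; below is a minimal sketch of one common drift check, the Population Stability Index (PSI), computed between training-time and live score distributions. The bin count and the 0.2 alert threshold are conventional choices, not requirements from the posting.

```python
# Minimal drift-check sketch: Population Stability Index (PSI) between a
# training (reference) distribution and live scoring data. Pure NumPy.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range live values
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    live_pct = np.histogram(live, edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)     # avoid log(0)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(1)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.3, 1.1, 10_000)     # shifted: simulated drift

value = psi(train_scores, live_scores)
print(f"PSI={value:.3f}", "-> investigate" if value > 0.2 else "-> stable")
```

In production, a check like this would run on a schedule inside the CI/CD or orchestration layer and trigger the automated retraining workflow when the threshold is breached.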
Posted 6 days ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Education
AI/ML Engineer - Required Skills & Qualifications:
Bachelor's or Master's in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field

Technical Skills
• Proficient in Python and ML libraries: scikit-learn, XGBoost, pandas, NumPy, matplotlib, seaborn, etc.
• Strong understanding of machine learning algorithms and deep learning architectures (CNNs, RNNs, Transformers, etc.)
• Hands-on with TensorFlow, PyTorch, or Keras
• Experience in data preprocessing, feature selection, EDA, and model interpretability
• Comfortable with API development and deploying models using Flask, FastAPI, or similar
• Experience with MLOps tools like MLflow, Kubeflow, DVC, Airflow, etc.
• Familiarity with cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML
• Strong understanding of version control (Git), CI/CD, and containerization (Docker)

Bonus Skills (Good To Have)
• NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers)
• Computer vision experience using OpenCV or YOLO/Detectron
• Knowledge of Reinforcement Learning or Generative AI (GANs, LLMs)
• Experience with vector databases (e.g., Pinecone, Weaviate) and LangChain for AI agent building
• Familiarity with data labeling platforms and annotation workflows

Soft Skills
• Analytical mindset with problem-solving skills
• Strong communication and collaboration abilities
• Ability to work independently in a fast-paced, agile environment
• Passion for AI/ML and eagerness to stay updated with the latest developments

Skills: fastapi, pandas, ML libraries, Pinecone, Docker, LangChain, matplotlib, YOLO, NumPy, YOLO/Detectron, NLTK, machine learning, spaCy, Vertex AI, feature selection, Lambda, GANs, TensorFlow, Airflow, Weaviate, data labeling, NLP frameworks, seaborn, Python, Git, AI, CNNs, XGBoost, model interpretability, deep learning architectures, DVC, generative AI, OpenCV, S3, Detectron, AWS, data preprocessing, API development, CI/CD, GCP, scikit-learn, Transformers, vector databases, MLOps tools, MLflow, SageMaker, machine learning algorithms, Hugging Face Transformers, Kubeflow, EDA, annotation workflows, LLMs, containerization, reinforcement learning, ML, PyTorch, Flask, RNNs, Azure ML, Keras
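To illustrate the preprocessing and feature-selection skills listed above, a minimal scikit-learn pipeline sketch; the synthetic data and the specific scaler/selector/model choices are illustrative.

```python
# Minimal sketch: preprocessing + feature selection + model in one Pipeline.
# Synthetic data; the scaler, selector, and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),               # preprocessing
    ("select", SelectKBest(f_classif, k=8)),   # keep the 8 strongest features
    ("model", LogisticRegression(max_iter=500)),
])
pipe.fit(X_train, y_train)
print("test accuracy:", round(pipe.score(X_test, y_test), 3))
```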
Posted 6 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
• Languages: Python, SQL, Bash
• ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
• GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
• Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
• Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
• Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
• MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
• Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
• Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
• Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
• Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use (a toy sketch of this pattern follows the posting).
• Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
• Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
• Manage the technical roadmap and sprint cadence for a team of 3–5 AI engineers; coach on best practices.
• Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
• Evaluate emerging GenAI & agentic tools; run proofs-of-concept and guide build-vs-buy decisions.

Qualifications:
• 10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
• Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
• Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
• Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
• Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
• Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
• Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
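A toy sketch of the tool-use pattern behind the agentic workflows mentioned above: a loop in which a planner chooses registered tools until it can answer. The rule-based plan() function stands in for an LLM call (which a framework like LangGraph would provide), and the tool names and stub logic are illustrative.

```python
# Toy agent loop: a planner picks tools from a registry until it can answer.
# A real system would replace plan() with an LLM call via an agent framework.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search_kb": lambda q: "Contract 42 renews on 2026-01-01.",  # stub tool
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def plan(question: str, observations: list[str]) -> tuple[str, str]:
    # Stand-in for the LLM planner: pick the next (tool, input) or finish.
    if not observations:
        return "search_kb", question
    return "finish", f"Answer based on: {observations[-1]}"

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):            # bounded loop guards against runaway agents
        tool, arg = plan(question, observations)
        if tool == "finish":
            return arg
        observations.append(TOOLS[tool](arg))  # act, then record the observation
    return "Gave up after max_steps."

print(run_agent("When does contract 42 renew?"))
```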
Posted 6 days ago
4.0 years
0 Lacs
India
Remote
Job Description
As an Azure Databricks Data Engineer, your responsibilities include:
• Technical requirements gathering and development of functional specifications.
• Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services.
• Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
• Develop and integrate custom machine learning models using Azure Machine Learning, MLflow, and other relevant libraries.
• Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.
• Conduct troubleshooting on data models.
• Work with agile multicultural teams in Asia, the EU, Canada, and the USA.

Profile Requirements
For this position of Azure Databricks Data Engineer, we are looking for someone with:
• (Required) At least 4 years of experience in developing and maintaining data pipelines using Azure Databricks, Azure Data Factory, and Spark.
• (Required) Fluent English communication and soft skills.
• (Required) Knowledge of and experience with CI/CD tooling such as Terraform, ARM, and Bicep scripts.
• (Required) Solid technical skills in Python and SQL.
• (Required) Familiarity with machine learning concepts, tools, and libraries (e.g., TensorFlow, PyTorch, Scikit-learn, MLflow).
• (Required) Strong problem-solving, communication, and analytical skills.
• Willingness to learn and expand technical skills in other areas.

Adastra Culture Manifesto

Servant Leadership
Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager up to the CEO is visible and reachable for a chat regardless of their title. Decisions are taken with consent in an agile manner and executed efficiently in no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.

Performance Driven Compensation
We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no "anniversary/annual" bonus; we distribute bonuses on a monthly recurring basis as instant gratification for performance, and this bonus is practically unlimited. There is no "annual indexation" of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skillset, and recognition by clients.

Work-Life Integration
We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy looks at our lives as a single whole where we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours where you arrange your day. This means you are free when you have little work, but this also means extra effort if you are behind schedule. Working on a Western project also means nobody bothers you during the whole day, but you may have to jump on a scrum call in the evening to talk to your team overseas. We appreciate time and we minimize time spent on Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remote from home whenever possible. This means saving time and money on commute, staying home with elderly and little ones, and not missing the special moments in life. This also means you can work from any of our other offices in Europe, North America or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.

Global Diversity
Adastra Thailand is an international organization. We hire globally and our biggest partners and clients are in Europe, North America and Australia. We work on teams with individuals from different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who doesn't pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.

Lifelong Learning
On annual average we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature more than 400 courses on our Training Repo and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic; we learn the principles of data management and apply them to different use cases and technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home as others did before you.

More About Adastra: Visit Adastra (adastracorp.com) and/or contact us at HRIN@adastragrp.com
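A minimal batch-ETL sketch in PySpark of the pipeline work described at the top of this posting: read raw CSV, apply cleaning transformations, and write a partitioned Delta table. The paths, columns, and the dedup rule are illustrative assumptions.

```python
# Minimal batch ETL sketch on Databricks: CSV in, cleaned Delta table out.
# Paths, columns, and the dedup/filter rules are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read.option("header", True)
       .csv("/mnt/raw/orders/"))                    # hypothetical landing zone

cleaned = (raw.withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_date"))
           .dropDuplicates(["order_id"])            # one row per order
           .filter(F.col("amount") > 0))            # drop invalid amounts

(cleaned.write.format("delta")
 .mode("overwrite")
 .partitionBy("order_date")                          # prune by date at read time
 .save("/mnt/curated/orders/"))
```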
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
• Build and manage CI/CD pipelines for ML models.
• Deploy models to cloud/on-premise environments.
• Monitor model performance and automate retraining workflows.
• Implement model versioning and reproducibility.
• Collaborate with data scientists and engineers.

Requirements
Looking for an ML DevOps Engineer to streamline the deployment and monitoring of ML models. The role requires strong DevOps skills with knowledge of ML lifecycle management.
• Experience with Docker, Kubernetes, Jenkins, or similar tools.
• Familiarity with ML platforms like MLflow, Kubeflow, or SageMaker.
• Strong scripting skills in Python and Shell.
• Knowledge of cloud platforms (AWS, Azure, GCP).
• Understanding of MLOps best practices and the ML lifecycle.

Date Opened: 07/09/2025
Job Type: Full time
Years of Experience: 3 - 6 Years
Domain: Chemicals
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600001
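As a sketch of the model versioning and reproducibility responsibility, one lightweight approach is content-addressed versioning: hash the training data and parameters so identical inputs always map to the same version tag. This is an illustrative pattern, not a prescribed tool; dedicated platforms like MLflow or DVC provide the same guarantees with richer tracking.

```python
# Minimal reproducibility sketch: version a model artifact by hashing its
# training data and params, so identical inputs yield identical versions.
import hashlib
import json
import joblib
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = rng.random((200, 5)), rng.random(200)
params = {"alpha": 0.5}

digest = hashlib.sha256(
    X.tobytes() + y.tobytes() + json.dumps(params, sort_keys=True).encode()
).hexdigest()[:12]

model = Ridge(**params).fit(X, y)
joblib.dump(model, f"model-{digest}.joblib")   # e.g., model-3f2c1a9b....joblib
with open(f"model-{digest}.meta.json", "w") as f:
    json.dump({"params": params, "data_hash": digest}, f)
print("saved version", digest)
```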
Posted 1 week ago