Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Lowe’s Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 17 million customer transactions a week in the U.S. With total fiscal year 2022 sales of over $97 billion, approximately $92 billion of sales were generated in the U.S., where Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts. About Lowe’s India At Lowe's India, we are the enablers who help create an engaging customer experience for our $97 billion home improvement business at Lowe's. Our 4000+ associates work across technology, analytics, business operations, finance & accounting, product management, and shared services. We leverage new technologies and find innovative methods to ensure that Lowe's has a competitive edge in the market. About the Team The pricing Analytics team supports pricing managers and merchants in defining and optimizing the pricing strategies for various product categories across the channels .The team leverages advance analytics to forecast/measure the impact of pricing actions , develop strategic price zones, recommend price changes and identify sales/margin opportunities to achieve company targets . Job Summary: The primary purpose of this role is to develop and maintain descriptive and predictive analytics models and tools that support Lowe's pricing strategy. Collaborating closely with the Pricing team, the analyst will help translate pricing goals and objectives into data and analytics requirements. Utilizing both open source and commercial data science tools, the analyst will gather and wrangle data to deliver data driven insights, trends, and identify anomalies . The analyst will apply the most suitable statistical and machine learning techniques to answer relevant questions and provide retail recommendations . The analyst will actively collaborate with product and business team, incorporating feedback through out the development to drive continuous improvement and ensure a best-in-class position in the pricing space. Roles & Responsibilities: Core Responsibilities: Translate pricing strategy and business objectives into analytics requirements. Develop and implement processes for collecting, exploring, structuring, enhancing, and cleaning large datasets from both internal and external sources. Conduct data validation, detect outliers, and perform root cause analysis to prepare data for statistical and machine learning models. Research, design, and implement relevant statistical and machine learning models to solve specific business problems. Ensure the accuracy of data science and machine learning model results and build trust in their reliability. Apply machine learning model outcomes to relevant business use cases. Assist in designing and executing A/B tests, multivariate experiments, and randomized controlled trials (RCTs) to evaluate the effects of price changes. Perform advanced statistical analyses (e.g., causal inference, Bayesian analysis, regression modeling) to extract actionable insights from experimentation data. Collaborate with teams such as Pricing Strategy & Execution, Analytics COE, Merchandising, IT, and others to define, prioritize, and develop innovative solutions. Keep up to date with the latest developments in data science, statistics, and experimentation techniques. Automate routine manual processes to improve efficiency. Years of Experience: 3-6 years of relevant experience Education Qualification & Certifications (optional) Required Minimum Qualifications : Bachelor’s or Masters in Engineering/business analytics/Data Science/Statistics/economics/math Skill Set Required Primary Skills (must have) 3+ Years of experience in advance quantitative analysis , statistical modeling and Machine Learning. Ability to perform various analytical concepts like Regression, Sampling techniques, hypothesis, Segmentation, Time Series Analysis, Multivariate Statistical Analysis, Predictive Modelling. 3+ years’ experience in corporate Data Science, Analytics, Pricing & Promotions, Merchandising, or Revenue Management . 3+ years’ experience working with common analytics and data science software and technologies such as SQL, Python, R, or SAS. 3+ years’ experience working with Enterprise level databases ( e.g., Hadoop, Teradata, Oracle, DB2 ) 3+ years’ experience using enterprise-grade data visualization tools ( e.g., Power BI , Tableau ) 3+ years’ experience working with cloud platforms ( e.g., GCP, Azure ,AWS ) Secondary Skills (desired) Technical expertise in Alteryx, Knime.
Posted 2 weeks ago
1.0 years
3 - 9 Lacs
Hyderābād
Remote
Data Scientist II Hyderabad, Telangana, India Date posted Jul 07, 2025 Job number 1828565 Work site Up to 50% work from home Travel 0-25 % Role type Individual Contributor Profession Research, Applied, & Data Sciences Discipline Data Science Employment type Full-Time Overview At OneNote, we are driven by a bold vision: "To help activate a second brain for everyone to realize their full potential." We are embarking on the next chapter of our evolution via Copilot Notebooks: notebooks designed for an AI powered future. We're building solutions that make capturing ideas seamless, understanding complex information intuitive, and taking informed action Whether it’s brainstorming the next big idea, organizing life’s intricate details, or simply finding clarity amid complexity, OneNote Copilot Notebooks is here. Join us as we reshape the future of AI by turning possibilities into realities — and help millions of users across the globe activate their second brain. We are looking for a Data Scientist II to join our team and help us shape the future of OneNote. In this role, you will partner with product, design, and engineering teams to deliver actionable insights, build experimentation frameworks, and identify growth opportunities. Your work will directly influence product development and user engagement strategies across millions of users. Our culture thrives on innovation, inclusion, growth mindset, and a strong sense of purpose. If you’re passionate about using data to drive decisions and want to work on a high-impact product at the cutting edge of productivity and AI, we’d love to hear from you. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Qualifications Required Qualifications: Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field OR Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical tec OR Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 2+ years data-science experience (e.g., managing structured and unstructured data, applying statistical tec OR equivalent experience. 1+ year(s) customer-facing, project-delivery experience, professional services, and/or consulting experience. Proficiency in SQL and at least one programming language such as Python or R. Experience with business intelligence tools (e.g., Power BI, Tableau). Strong statistical knowledge and experience with A/B testing, causal inference, or other experimentation methodologies. Experience working with large datasets and big data technologies (e.g., Azure Data Lake, Synapse, Databricks, Spark, or equivalent). Ability to work independently and collaboratively in a fast-paced, ambiguous environment. Candidate must be comfortable manipulating and analyzing complex, high dimensional data from varying sources to solve difficult problems. Candidate must be able to communicate complex ideas and concepts to leadership and deliver results. Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Preferred Qualifications: Experience in product analytics, growth strategy, or user engagement optimization. Familiarity with Microsoft Office ecosystem or productivity tools is a plus. Experience with Copilot/LLM-related user scenarios or AI-driven products. Strong storytelling and communication skills, with the ability to turn complex data into clear, actionable narratives for executives and product teams. Passion for building delightful and impactful user experiences with measurable outcomes. Responsibilities You will understand each customer’s business goals and learn best practices for identifying growth opportunities. You’ll also examine projects through a customer-oriented focus and manage customer expectations regarding project progress. Collaborate with cross-functional teams to define metrics, design experiments, and uncover user behaviors that influence product adoption and growth. Use statistical analysis, data mining, and machine learning techniques to generate insights from large-scale structured and unstructured data. Acquire the data necessary for your project plan and use querying, visualization, and reporting techniques to describe that data. You’ll also explore data for key attributes and collaborate with others to perform data science experiments using established methodologies. Model techniques, select the correct tool and approach to complete objectives, and evaluate the output for statistical and business significance. You’ll also analyze model performance and incorporate customer feedback into its evaluation. Build dashboards and reports that enable product teams to track key performance indicators and make data-informed decisions. Develop and iterate on models to identify high-value scenarios, recommend features, and improve user retention and engagement. Communicate insights clearly and effectively to both technical and non-technical stakeholders, influencing product strategy and priorities. Contribute to a culture of data excellence by championing best practices in experimentation, measurement, and data governance. Understanding the current state of the industry, including current trends, so that you can contribute to thought leadership best practices. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work. Industry leading healthcare Educational resources Discounts on products and services Savings and investments Maternity and paternity leave Generous time away Giving programs Opportunities to network and connect Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 weeks ago
0 years
8 - 16 Lacs
Saket
On-site
We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI, building scalable models and intelligent systems that power our next-generation sports tech platform. Requirements: Strong command of Python and familiarity with C/C++ Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e. g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems An undergraduate degree (Master's or PhD preferred) in Computer Science, Artificial Intelligence, or a related discipline is preferred. A strong academic background is a plus. Responsibilities: Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding. Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4 etc. ) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base. Job Type: Full-time Pay: ₹800,000.00 - ₹1,600,000.00 per year Schedule: Day shift Ability to commute/relocate: Saket, Delhi, Delhi: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Work Location: In person
Posted 2 weeks ago
0 years
2 - 10 Lacs
Bengaluru
On-site
Proud to share LSEG in the India is Great Place to Work certified (Jun ’25 – Jun ’26). Learn more about life and purpose of our company directly from India colleagues’ video: Bengaluru, India | Where We Work | LSEG Team Overview Join the Innovation & Intelligence Team within the Data & Analytics Operations function at the London Stock Exchange Group (LSEG). We are a dynamic group of data scientists, engineers, and analysts who deliver impactful automation and AI solutions that enhance operational performance across the organisation. We work in an agile, collaborative environment, partnering closely with Engineering and D&A Operations teams to shape the technology roadmap and drive innovation. Our mission is to accelerate value delivery while building scalable, future-ready capabilities. What You’ll Do As an ML Engineer, you will: Design and implement solutions using AI/ML and other Automation technologies to solve real-world business challenges. Collaborate with cross-functional teams to gather requirements and translate them into working prototypes and production-ready tools. Build and test Proof of Concepts (POCs) and Minimum Viable Products (MVPs) to validate new ideas and approaches. Develop and maintain data pipelines, inference workflows, and other automation components. Continuously learn and apply emerging technologies to enhance solution effectiveness. Contribute to a culture of innovation, experimentation, and continuous improvement. What We’re Looking For Proven experience in AI/ML solution development, ideally in a fast-paced or enterprise environment. Strong programming skills in Python and demonstrated ability to use SQL for complex data tasks. Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP). Ability to think analytically and solve complex problems with creativity and rigour. Excellent communication skills to articulate technical concepts to diverse audiences. A proactive mindset and eagerness to learn new tools and techniques. Inclusion and Accessibility LSEG is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. If you require any accommodations during the recruitment process, please let us know—we’re here to support you. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject . If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Machine Learning Engineer – Applied AI & Scalable Model Deployment Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 2–6 years in machine learning engineering or applied data science roles Apply at : careers@darwix.ai Subject Line : Application – Machine Learning Engineer – [Your Name] About Darwix AI Darwix AI is India’s leading GenAI SaaS platform powering real-time sales enablement and conversational intelligence for large enterprise teams. Our products— Transform+ , Sherpa.ai , and Store Intel —support revenue teams in BFSI, retail, real estate, and healthcare by delivering multilingual voice analysis, real-time AI nudges, agent coaching, and in-store behavioral analytics. Darwix AI is redefining how large-scale human interactions drive revenue outcomes. As we expand rapidly across India, MENA, and Southeast Asia, we are strengthening our core ML engineering team to accelerate new feature development and production deployments. Role Overview As a Machine Learning Engineer , you will design, build, and operationalize robust ML models for real-time and batch processing workflows across Darwix AI’s product suite. Your work will span conversational intelligence, voice and text analytics, predictive scoring, and decision-support systems. You will collaborate closely with AI research engineers, backend teams, and product managers to translate business problems into scalable and maintainable ML pipelines. This is a hands-on, impact-first role focused on turning advanced ML models into production systems used by large enterprise teams daily. Key ResponsibilitiesModel Development & Training Design, build, and optimize models for tasks such as classification, scoring, topic detection, and conversation summarization. Work on feature engineering pipelines, data preprocessing, and large-scale training on structured and unstructured datasets. Evaluate model performance using robust metrics (accuracy, recall, precision, WER for voice tasks). Deployment & Productionization Package and deploy models as scalable APIs and microservices integrated with core product workflows. Optimize inference pipelines for latency, throughput, and cost in production environments. Work closely with DevOps and backend engineers to ensure robust CI/CD, monitoring, and auto-recovery workflows. Data & Pipeline Engineering Develop and maintain data pipelines to ingest, clean, transform, and label large volumes of voice and text data. Implement logging, data versioning, and audit trails to ensure traceable and reproducible experiments. Monitoring & Continuous Improvement Build automated evaluation frameworks to detect model drift and performance degradation. Analyze live production data to identify opportunities for iterative improvements and fine-tuning. Contribute to A/B testing design for model-driven features to validate business impact. Collaboration & Documentation Work with cross-functional teams to gather requirements, define success criteria, and drive end-to-end feature implementation. Maintain clear technical documentation for data flows, model architectures, and deployment processes. Mentor junior engineers on best practices in ML system design and operationalization. Required Skills & Qualifications 2–6 years of experience in ML engineering, applied ML, or data science with a strong focus on production systems. Proficiency in Python , including experience with ML libraries such as PyTorch, TensorFlow, Scikit-learn, or Hugging Face. Solid understanding of data preprocessing, feature engineering, and ML model lifecycle management. Experience deploying models as REST APIs or microservices in cloud or containerized environments. Strong knowledge of relational and NoSQL databases, and familiarity with data pipeline tools. Good understanding of MLOps concepts, including CI/CD for ML, model monitoring, and A/B testing. Preferred Qualifications Exposure to speech or voice analytics , including speech-to-text systems and audio signal processing. Familiarity with large language models (LLMs), embeddings, or retrieval-augmented generation (RAG) pipelines. Experience with distributed training, GPU optimization, or large-scale batch inference. Knowledge of vector databases (FAISS, Pinecone) and real-time recommendation systems. Prior experience in SaaS product environments targeting enterprise clients. Success in This Role Means Models integrated into production systems delivering measurable improvements to business KPIs. High availability, low-latency inference pipelines powering real-time features for large enterprise users. Rapid iteration cycles from model conception to production deployment. Strong, well-documented, and reusable ML infrastructure supporting ongoing product and feature launches. You Will Excel in This Role If You Are passionate about building ML systems that create real business impact, not just offline experiments. Enjoy working with noisy, multilingual, and large-scale datasets in high-stakes settings. Love solving engineering challenges involved in scaling AI solutions to thousands of enterprise users. Thrive in a fast-paced, ownership-driven environment where ideas translate quickly to live features. Value documentation, reproducibility, and collaboration as much as technical depth. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – Machine Learning Engineer – [Your Name] (Optional): Include links to your GitHub, published papers, blog posts, or a short note on a real-world ML system you helped deploy and what challenges you overcame. This is a unique opportunity to join the core engineering team at one of India’s most innovative GenAI startups and shape how enterprise teams leverage AI for real-time decision-making and revenue growth. If you are ready to build AI at scale, Darwix AI wants to hear from you.
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
BrandAstra is an AI-powered emotional intelligence platform designed to revolutionize how brands connect with and understand their consumers.By leveraging emotional intelligence, BrandAstra offers real-time social listening and sentiment analysis to help brands assess, predict, and enhance consumer sentiment across multiple digital touchpoints that includes internal and external platforms. Job overview We are seeking a talented Generative AI Engineer with 3–4 years of experience to join our innovative team. In this role, you will develop and implement cutting-edge generative AI models to generate for content generation, automation and personalized experiences. It will help solve complex problems and drive innovation in our marketing campaigns. Your work will directly influence how brands connect with and understand their audiences. Key responsibilities: Design, train, and fine-tune LLMs and generative models (e.g., GPT, LLaMA, BERT variants) Build modular pipelines using LangChain or similar to integrate models into marketing intelligence workflows Work with the product and data teams to convert raw user interaction and social signals into training-ready datasets Implement inference optimizations and retrieval-augmented generation (RAG) systems for real-time outputs Evaluate outputs against performance benchmarks, and iterate based on live feedback Stay ahead of the latest releases in open-source model families, vector stores, and fine-tuning techniques Must Haves: Proficiency in Python Strong experience with PyTorch and TensorFlow Familiarity with Transformers (HuggingFace library) Hands-on experience with LangChain , LlamaIndex , and Hugging Face Hub Working knowledge of LLaMA models, RAG workflows, and vector DBs (e.g., FAISS, Pinecone) Experience deploying models via Hugging Face Spaces , FastAPI , or similar tools Solid understanding of prompt engineering, tokenization, and memory management for LLMs Qualification 3–4 years of hands-on experience in building and deploying AI/ML systems Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or equivalent Experience working in agile product teams with engineers and PMs Familiarity with real-world use cases involving unstructured customer data Bonus: Prior work in marketing tech, NLP for business intelligence, or AI SaaS products What's in it for you? Opportunity to work on a category-defining product at the edge of AI and marketing A sharp, collaborative team of builders and thinkers Flexible work schedule and autonomy to shape your roadmap Competitive compensation. If you're passionate about using Generative AI to help brands understand people better, this is your place. Apply now and help build the brain behind the world’s smartest marketing teams.
Posted 2 weeks ago
6.0 years
60 - 65 Lacs
Greater Hyderabad Area
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: Java, Node, Deployment, Image Processing, AWS, Computer Vision, object detection, FastAPI Crop.Photo is Looking for: We’re looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI — while integrating visual AI pipelines built by ML engineers. You’ll be both a builder and a leader: writing clean Python, Java and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won’t be working in a silo — this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined. (Note: This role requires both technical mastery and leadership skills - we're looking for someone who can write production code, make architectural decisions, and lead a team to success.) What You’ll Do Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS EC2. Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2 Contribute to React/TypeScript frontend when needed to accelerate product delivery Work closely with the founder, product, and UX team to translate business needs into working product Make architecture and infrastructure decisions — from media processing to task queues to storage Own the performance, reliability, and cost-efficiency of our core services Hire and mentor junior/mid engineers over time Drive technical planning, sprint prioritization, and trade-off decisions A customer-centric approach — you think about how your work affects end users and product experience, not just model performance A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket Skills & Experience We Expect Core Engineering Experience 6–8 years of professional software engineering experience in production environments 2–3 years of experience leading engineering teams of 5+ engineers Cloud Infrastructure & AWS Expertise (5+ years) Deep experience with AWS Lambda, ECS, and container orchestration tools Familiarity with API Gateway and microservices architecture best practices Proficient with S3, DynamoDB, and other AWS-native data services CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems Strong grasp of IAM, roles, and security best practices in cloud environments Backend Development (5–7 years) Java: Advanced concurrency, scalability, and microservice design Python: Experience with FastAPI, building production-grade MLops pipelines Node.js & TypeScript: Strong backend engineering and API development Deep understanding of RESTful API design and implementation Docker: 3+ years of containerization experience for building/deploying services Hands-on experience deploying ML inference pipelines (built by ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2) — 2+ years System Optimization & Middleware (3–5 years) Application performance optimization and AWS cloud cost optimization Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions) Media/image processing using tools like Sharp, PIL, Imagemagick, or OpenCV Database design and optimization for low-latency and high-availability systems Frontend Development (2–3 years) Hands-on experience with React and TypeScript in modern web apps Familiarity with Redux, Context API, and modern state management patterns Comfortable with modern build tools, CI/CD, and frontend deployment practices System Design & Architecture (4–6 years) Designing and implementing microservices-based systems Experience with event-driven architectures using queues or pub/sub Implementing caching strategies (e.g., Redis, CDN edge caching) Architecting high-performance image/media pipelines Leadership & Communication (2–3 years) Proven ability to lead engineering teams and drive project delivery Skilled at writing clear and concise technical documentation Experience mentoring engineers, conducting code reviews, and fostering growth Track record of shipping high-impact products in fast-paced environments Strong customer-centric and growth-oriented mindset, especially in startup settings — able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer – Voice, NLP, and GenAI Systems Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 2–6 years in AI/ML, NLP, or applied machine learning engineering Apply at : careers@darwix.ai Subject Line : Application – AI Engineer – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite— Transform+ , Sherpa.ai , and Store Intel —powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement. Role Overview As an AI Engineer , you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams. Key ResponsibilitiesAI & NLP System Development Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization. Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data. Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations. LLM & Prompt Engineering Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases. Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules. Develop internal tools for model evaluation and prompt performance tracking. Productionization & Integration Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI). Optimize inference time and resource utilization for real-time and batch processing needs. Implement monitoring and logging for production ML systems to track drift and failure cases. Data & Evaluation Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance. Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks. Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency. Collaboration & Research Work closely with product managers to translate business problems into model design requirements. Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI. Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders. Required Skills & Qualifications 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications. Strong programming skills in Python , including libraries like PyTorch, TensorFlow, Hugging Face Transformers. Experience with speech processing , audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines , embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – AI Engineer – [Your Name] (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India’s fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams—Darwix AI wants to hear from you.
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role name: Automation Test Lead (AI/ML) Years of exp: 5 - 8 yrs About Dailoqa Dailoqa’s mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry. Working at Dailoqa will be hard work, our environment is fluid and fast-moving and you'll be part of a community that values innovation, collaboration, and relentless curiosity. We’re looking at people who: Are proactive, curious adaptable, and patient Shape the company's vision and will have a direct impact on its success. Have the opportunity for fast career growth. Have the opportunity to participate in the upside of an ultra-growth venture. Have fun 🙂 Don’t apply if: You want to work on a single layer of the application. You prefer to work on well-defined problems. You need clear, pre-defined processes. You prefer a relaxed and slow paced environment. Role Overview As an Automation Test Lead at Dailoqa, you’ll architect and implement robust testing frameworks for both software and AI/ML systems. You’ll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance. Key Responsibilities Test Automation Strategy & Framework Design Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions. Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy. Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization). Continuous Testing & CI/CD Integration Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD. Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing). AI/ML Model Validation Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow. Validate model drift and retraining pipelines to ensure consistent performance in production. Quality Metrics & Reporting Define and track KPIs: Test coverage (code, data, scenarios) Defect leakage rate Automation ROI (time saved vs. maintenance effort) Model accuracy thresholds Report risks and quality trends to stakeholders in sprint reviews. Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation). Technical Requirements Must-Have 5–8 years in test automation, with 2+ years validating AI/ML systems. Expertise in: Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter CI/CD: Jenkins, GitHub Actions, GitLab AI/ML testing: Model validation, drift detection, GenAI output evaluation Languages: Python, Java, or JavaScript Certifications: ISTQB Advanced, CAST, or equivalent. Experience with MLOps tools: MLflow, Kubeflow, TFX Familiarity with vector databases (Pinecone, Milvus) and RAG workflows. Strong programming/scripting experience in JavaScript, Python, Java, or similar Experience with API testing, UI testing, and automated pipelines Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias Familiarity with MLOps pipelines and automated validation of model performance in production Exposure to Agile/Scrum methodology and tools like Azure Boards Soft Skills Strong problem-solving skills for balancing speed and quality in fast-paced AI development. Ability to communicate technical risks to non-technical stakeholders. Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
Posted 2 weeks ago
6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Description Job Title : Senior Data Scientist - 6+years Location : Mumbai, India Notice period : Max 45 days' notice period we can consider Must needed Team Handling experience . · Apply advanced machine learning/statistical algorithms scalable to huge data sets to: • Determine the most meaningful ad, served to the right user at the optimal time, and the best price • Identify behaviors, interests and segments of web users across billions of transactions to find the most optimal audience for a given advertising activity • Eliminate suspicious / non-human traffic • Maintenance our products, support customers in analyzing the reasons behind the decisions made by our algorithms and look for new improvements in the process of striving for their excellence · Work closely with other Data Scientists, development, and product teams to implement algorithms into production-level software. Mentor less experienced colleagues. · Contribute to identify opportunities for leveraging company data to drive business solutions · Be an active technical challenger in the team for the purpose of mutual improvement and broadening of team and company horizons · Design solutions and lead cross-functional technical projects from ideation to deployment . · Excellent mathematical and statistical skills (statistical inference), experience in working with large datasets. Knowledge of data pipelines, ETL processes. · Very good knowledge of multiply supervised and unsupervised machine learning techniques with math background and hands-on experience · Great problem-solving and analytical skills. · Ability to structure a large business problem into tractable and reasonable components, design and deploy scalable machine learning solutions · Proficiency in Python, SQL · Experience with big data tools (e.g. Spark, Hadoop) Interested candidate can share there Resume on komal.aher@thesearchfield.com
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system. To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be. While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported. The mission of the Platform Product Group engineers is to build a trusted, scalable and compliant platform to operate with speed, efficiency and quality. Our teams build and maintain the platforms critical to the existence of Coinbase. There are many teams that make up this group which include Product Foundations (i.e. Identity, Payment, Risk, Proofing & Regulatory, Finhub), Machine Learning, Customer Experience, and Infrastructure. As a Staff Machine Learning Platform Engineer at Coinbase, you will play a pivotal role in building an open financial system. The team builds the foundational components for training and serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and to analyze blockchains. We are a lean team, so you will get the opportunity to apply your software engineering skills across all aspects of building ML at scale, including stream processing, distributed training, and highly available online services. What you’ll be doing (ie. job duties): Form a deep understanding of our Machine Learning Engineers’ needs and our current capabilities and gaps. Mentor our talented junior engineers on how to build high quality software, and take their skills to the next level. Continually raise our engineering standards to maintain high-availability and low-latency for our ML inference infrastructure that runs both predictive ML models and LLMs. Optimize low latency streaming pipelines to give our ML models the freshest and highest quality data. Evangelize state-of-the-art practices on building high-performance distributed training jobs that process large volumes of data. Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance. What we look for in you (ie. job requirements): 10+ yrs of industry experience as a Software Engineer. You have a strong understanding of distributed systems. You lead by example through high quality code and excellent communication skills. You have a great sense of design, and can bring clarity to complex technical requirements. You treat other engineers as a customer, and have an obsessive focus on delivering them a seamless experience. You have a mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level. Nice to Have: Experience building ML models and working with ML systems. Experience working on a platform team, and building developer tooling. Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB). Job #: GPBE06IN *Answers to crypto-related questions may be used to evaluate your onchain experience Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying. Commitment to Equal Opportunity Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here . Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here) . Global Data Privacy Notice for Job Candidates and Applicants Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Software Architect – Artificial Intelligence Experience: 5+ years Location: Hybrid (India) | Bengaluru, NCR, or Hyderabad The Role Our client is building the infrastructure layer for AI in hospitals. As Software Architect , you’ll lead the technical design and evolution of our AI platform—one that enables reasoning agents, handles population-scale data, and powers clinical workflows with intelligence and speed. This role combines hands-on engineering with strategic influence. You’ll define core abstractions, scale systems across environments, and enable a world-class team of engineers to build on top of the foundation you lay. What You’ll Own Architect scalable AI systems—from data ingestion and orchestration to model inference and observability Design and evolve the platform’s agentic core: integrate models, tools, reasoning engines, and feedback loops Build clean, reusable APIs and frameworks that product and clinical teams can rely on Work across the product lifecycle: from user needs and hospital feedback to iteration and deployment Mentor engineers, enforce high standards, and navigate tough technical trade-offs in a startup environment What You Bring 5+ years leading the design of complex, mission-critical systems at scale Strong experience with LLM-based or agent-driven architectures , especially in secure, compliance-bound environments Proven ability to set technical direction while staying hands-on with architecture, code, and reviews Depth in backend infrastructure: cloud-native systems, data pipelines, deployment workflows, monitoring Excellent communication, decision-making, and mentoring skills A degree in computer science or a related field Bonus: Experience training or fine-tuning custom AI models Who This Is For You’re a system-level thinker who sees complexity as a challenge, not a blocker. You want to build with purpose—and you’re comfortable shaping both the codebase and the engineering culture that defines how it grows. If you’re looking to lead from the front, solve messy real-world problems, and work alongside world-class builders—this is your kind of role. Reach out via DM or write to sophia.d@thecheckmatepartners, aditi.p@thecheckmatepartners.com
Posted 2 weeks ago
0 years
0 Lacs
Madurai, Tamil Nadu, India
On-site
Role : AIML Engineer Location : Madurai/ Chennai Language: Python DBs : SQL Core Libraries: Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet SOTA ML : ML Models, Boosting & Ensemble models etc. Explainability : Shap / Lime Required skills: Deep Learning: PyTorch, PyTorch Forecasting, Data Processing: Pandas, NumPy, Polars (optional), PySpark Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning Deployment & MLOps: Batch & Realtime with API endpoints, MLFlow Serving: TorchServe, Sagemaker endpoints / batch Containerization: Docker Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines AWS Services: SageMaker (Training, Inference, Tuning) S3 (Data Storage) CloudWatch (Monitoring) Lambda (Trigger-based Inference) ECR, ECS or Fargate (Container Hosting)
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the company Were hiring for a deep tech startup at the cutting edge of AI + Quantum Computing , offering a fully integrated stack from superconducting quantum hardware to AI powered software platforms for solving extremely complex business and scientific challenges across industries like drug discovery, logistics, chip design, energy, and climate modeling. The companys 25‑qubit superconducting quantum system is India’s first full stack build, backed by the National Quantum Mission and enterprise partnerships What You'll Do Design and build multi‑agent LLM systems for complex reasoning workflows Design domain specific scaffolds (prompts, datasets) and robust evaluation frameworks to guide AI systems in specialized environments Implement reinforcement learning enhancements (RLHF, DPO, GRPO, SFT) for agent optimization Fine tune and deploy small reasoning models; perform post‑training domain adaptation Engineer scalable training/inference pipelines on multi node GPU clusters with containerized infrastructure Collaborate with product and vertical teams to transition research into real world applications Contribute to publications, open source work, and internal learning across the company Who We're Looking For MS or PhD in CS, ML, AI or a related technical field 4+ years in applied AI research Proficient in Python + PyTorch or JAX, including scaled or custom implementations Practical experience with RL techniques like RLHF, DPO, GRPO, SFT Exposure to chain of thought prompting and dataset creation Demonstrated experience building and deploying multi‑agent systems Experience with distributed infrastructure: Docker/Kubernetes, Ray, Hydra, MLflow Bonus if you have publications (NeurIPS, ICLR, ICML, AAMAS) or domain experience in drug discovery, materials, quantum control, chip design Open source contributions to AI research or tooling Experience optimizing large scale models for inference and deployment. What We Offer Work on world class AI Quantum systems that push scientific and industrial boundaries Access to one of the country's most advanced quantum systems + top tier GPU/HPC infrastructure Ownership and flexibility, define your research direction and release real world impact Competitive package with performance bonus and ESOPs Collaborative, knowledge sharing culture with industry leading experts If you fit the qualifications and are passionate about pushing the frontier of AI + Quantum, we’d love to hear from you! Know someone perfect for this? Tag them or share this post in your network we’re building something extraordinary together
Posted 2 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As a Principal Software Engineer for Data, the person will lead the design and implementation of scalable, secure, and high-performance data pipelines across that involve healthcare clinical data, using modern big data and cloud technologies (Azure, Databricks, and Spark), ensuring alignment with UnitedHealth Group’s data governance standards. This role requires a hands-on leader who can write and review code, mentor teams, and collaborate across business and technical stakeholders to drive data strategy and innovation. The person needs to be ready to take up AI and AIOps as part of their work and support the data science teams with ideas and reviews their work. Primary Responsibilities Design and lead the implementation of robust, scalable, and secure data architectures for clinical and healthcare data for batch and real time pipelines Architect end-to-end data pipelines using big data and cloud-native technologies (e.g., Spark, Databricks, Azure Data Factory) Ensure data solutions meet performance, scalability, and compliance requirements, including HIPAA and internal governance policies Build and optimize data ingestion, transformation, and storage pipelines for structured and unstructured clinical data. Guide teams that are doing it and ensure support for incremental data processing Ensure data quality, lineage is embedded in all solutions Lead code reviews, proof-of-concepts, and performance tuning for large-scale data systems Collaborate with data governance teams to ensure adherence to UHG and healthcare data standards, lineage, certification, Data use rights, and data privacy Contribute to the maturity of data governance domains and participate in governance councils and working groups Design, Build and monitor MLOps pipelines, model inference and robust piplelines for running AI operations on data Secondary Responsibilities Mentor data engineers and analysts, fostering a culture of technical excellence and continuous learning Collaborate with product managers, data scientists, and business stakeholders to translate requirements into data solutions Influence architectural decisions across teams and contribute to enterprise-wide data strategy Stay current with emerging technologies in cloud, big data, and AI/ML, and evaluate their applicability to healthcare data Promote the use of generative AI tools (e.g., GitHub Copilot) to enhance development productivity and innovation Drive adoption of DevOps and DataOps practices, including CI/CD, IaC, and automated testing for data pipelines Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Cloud Platforms: Solid experience with Azure (preferred), AWS, or GCP Experience with designing and managing semantic data elements (metadata, configuration, master data). Come up with automated pipelines to keep them up-to-date from upstream sources Good experience with designing, evolving and reviewing database schema. Experience with schema management for unstructured data, structured data, relational, star schema Data Modelling: Deep understanding of dimensional modeling, canonical models, and healthcare data standards (e.g., HL7, FHIR) DevOps/DataOps: Familiarity with CI/CD, IaC (Terraform, ARM) Data Engineering: Expertise in building ETL/ELT pipelines, data lakes, and real-time streaming architectures using python, scala or other comparable technologies Big Data Technologies: Proficient in Apache Spark, Databricks, Delta Lake, and distributed data processing Programming: Proficiency in Python, SQL, and optionally Scala or Java Proven track record of designing and delivering large-scale data solutions in cloud environments Proven solid leadership, communication, and stakeholder management skills Proven ability to mentor and influence across teams and levels Proven strategic thinker with a passion for data-driven innovation Proven ability to get into details whenever required and spend time in understanding and solving problems Preferred Qualifications 10+ years of experience in data architecture, data engineering, or related roles, with a focus on healthcare or clinical data Experience with healthcare data interoperability standards (FHIR, HL7, CCD) Familiarity with MLOps and integrating data pipelines with ML workflows Contributions to open-source projects or publications in data architecture or healthcare analytics At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
3.0 years
0 Lacs
Karnal, Haryana, India
Remote
🏢 Company Description Live Eye Surveillance is a U.S.-focused AI surveillance and remote monitoring company, headquartered in Seattle with technology operations in India. We specialize in real-time, proactive security solutions for retail, QSRs, warehouses, and commercial spaces. Our in-house AI-powered Video Management Software (VMS) integrates advanced IP camera systems, intelligent video analytics, live audio deterrence, and 24/7 human monitoring to deter crime before it happens. With enterprise-grade infrastructure and a commitment to reducing shrinkage, liability, and manpower costs, Live Eye is redefining modern surveillance for multi-location businesses. ⸻ 💼 Role: AI/ML Lead – Facial Recognition & Video Intelligence Location: Karnal, Haryana Employment Type: Full-Time Experience: 3+ Years in AI/ML, Computer Vision, and Team Leadership ⸻ 🧠 Role Description We are looking for an AI/ML Lead to spearhead the development of cutting-edge Facial Recognition, Object Detection, and Video Intelligence features for our proprietary VMS platform. You will lead the research, development, and optimization of AI models deployed across real-time IP camera feeds. You’ll also manage a small team of AI/ML engineers and work closely with our backend, frontend, and mobile teams to build scalable, production-ready AI modules that directly impact global security operations. ⸻ 🧪 Responsibilities • Build and deploy production-grade models for facial recognition, object/person detection, and activity recognition • Optimize AI pipelines for real-time performance and edge device compatibility • Lead, mentor, and manage a small AI/ML engineering team • Collaborate with product managers, software engineers, and cloud architects to integrate AI modules into the VMS platform • Stay on top of the latest developments in deep learning and computer vision research • Ensure model accuracy, efficiency, and scalability across diverse real-world environments ⸻ 🔧 Required Qualifications • 3+ years of hands-on experience in Machine Learning and Deep Learning (preferably in Computer Vision) • Proficiency in Python, TensorFlow/PyTorch, OpenCV, and relevant DL libraries • Strong background in building models for facial recognition (FaceNet, ArcFace, etc.) and object detection (YOLO, SSD, Faster R-CNN) • Experience with ONNX, TensorRT, or other optimization tools for edge inference • Experience integrating AI models with IP camera feeds (RTSP/ONVIF protocols preferred) • Solid understanding of data preprocessing, model evaluation, and tuning • Strong communication, problem-solving, and team leadership skills • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field ⸻ 🌐 Nice to Have • Experience with Jetson Nano/Xavier, Edge TPUs, or other AI hardware • Background in surveillance systems, security platforms, or VMS architecture • Familiarity with cloud platforms like AWS, Azure, or GCP • Experience with Docker, Git, and CI/CD for deploying ML models ⸻ 🌟 What We Offer • Leadership opportunity in a rapidly growing AI security tech company • Hands-on role in building core IP for our next-gen surveillance platform • Flexible hybrid work setup with direct impact on global deployments • Competitive compensation and opportunity for fast growth ⸻ 📩 To apply: Send your resume to careers@myliveeye.com 🌐 Visit: www.myliveeye.com
Posted 2 weeks ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role Overview We are hiring a Technical Lead – AI Security to join our CISO team in Mumbai. This is a critical, hands-on role — ensuring the trustworthiness, resilience, and compliance of AI/ML systems, including large language models (LLMs). You will work at the intersection of cybersecurity and AI, shaping secure testing, understanding secure MLOps/LLMOps workflows, and leading technical implementation of defenses against emerging AI threats. This role requires both strategic vision and strong engineering depth. Key Responsibilities · Lead and operationalize the AI/ML and LLM security roadmap across training, validation, deployment, and runtime to enable AI Security Platform Approach. · Design and implement defenses against threats like adversarial attacks, data poisoning, model inversion, prompt injection, and fine-tuning exploits using industry leading open source and commercial tools. · Build hardened workflows for model security, integrity verification, and auditability in production AI environments. · Leverage AI security tools for scanning, fuzzing, and penetration testing models. · Apply best practices from OWASP Top 10 for ML/LLMs, MITRE ATLAS, NIST AI RMF, and ISO/IEC 42001 to test AI/ML assets. · Ensure AI model security testing framework aligns with internal policy, national regulatory requirements, and global best practices. · Plan and execute security tests for AI/LLM systems, including jailbreaking, RAG hardening, and bias/toxicity validation. Required Skills & Experience · 8+ years in cybersecurity, with at least 3+ years hands-on in AI/ML security or secure MLOps/LLMOps · Proficient in Python, TensorFlow/PyTorch, HuggingFace, LangChain, and common data science libraries · Deep understanding of adversarial ML/LLM, model evaluation under threat conditions, and inference/training-time attack vectors · Experience securing cloud-based AI workloads (AWS, Azure, or GCP) · Familiarity with secure DevOps and CI/CD practices · Strong understanding of AI-specific threat models (MITRE ATLAS) and security benchmarks (OWASP Top 10 for ML/LLMs) · Ability to communicate technical risk clearly to non-technical stakeholders · Ability to guide developers and data scientists to solve the AI Security risks. · Certifications: CISSP, OSCP, GCP ML Security, or relevant AI/ML certificates · Experience with AI security tools or platforms (e.g., model registries, lineage tracking, policy enforcement) · Experience with RAG, LLM-based agents, or agentic workflows · Experience in regulated sectors (finance, public sector)
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
AdZeta is a B2B technology company that leverages AI-powered smart bidding technology to drive high LTV and profitability for D2C e-commerce brands. We turn first-party data into predictive, value-based bidding and personalised customer journeys. To accelerate our roadmap, we’re hiring a Senior Data Engineer to architect the data pipelines, services, and admin tools that power it all. What You’ll Own Secure Data Ingestion Design & implement high-throughput, encrypted pipelines that pull data from Shopify, GA4, CRMs, and ad platforms. Enforce token rotation, rate limiting, and schema validation. Data Storage Foundations Stand up the initial data-lake / warehouse layer (MySQL + object storage; moving to BigQuery or Snowflake). Define partitioning, indexing, and lifecycle policies for multi-TB datasets. Prediction API Build REST/JSON endpoints that expose LTV and propensities from our ML models. Optimise for low-latency inference and auto-scale under load. Admin Panel Development Ship the first-generation admin portal in PHP + MySQL (Laravel or similar) Implement RBAC, audit logging, and health dashboards for internal teams. Infrastructure & DevOps Provision and harden servers (AWS or GCP) using Terraform / CloudFormation. Own CI/CD, container orchestration (Docker, ECS or Kubernetes), monitoring (Grafana/Prometheus), and incident response run-books. Required Skills & Experience 5+ years building scalable backend systems (PHP, Python, or Node preferred). Strong database chops—MySQL/PostgreSQL schema design, query optimisation, and replication. Experience with message queues / streaming (Kafka, Pub/Sub, or SQS). Comfortable in cloud infrastructure (AWS or GCP), IaC (Terraform, Pulumi), and containerisation. Solid understanding of security best practices: TLS, IAM, secrets management, OWASP. Proven track record integrating third-party APIs and handling large data volumes. Bonus: exposure to ML inference pipelines, Looker/BI tools, or server-side tagging. Why AdZeta Remote-first & async-friendly culture with flexible PTO. Ownership: competitive salary + equity option pool. Annual learning stipend for certs, conferences, or AI experimentation. Direct line of sight to C-suite; your insights shape product and go-to-market road-maps. We move fast, targeting a two-week turnaround from application to offer.
Posted 2 weeks ago
2.0 - 6.0 years
1 - 3 Lacs
Hyderābād
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 2 weeks ago
2.0 - 6.0 years
1 - 3 Lacs
Gurgaon
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 2 weeks ago
4.0 - 10.0 years
5 - 10 Lacs
Noida
On-site
Lead Assistant Manager EXL/LAM/1412764 Data And Analytics ServicesNoida Posted On 05 Jul 2025 End Date 19 Aug 2025 Required Experience 4 - 10 Years Basic Section Number Of Positions 4 Band B2 Band Name Lead Assistant Manager Cost Code D014377 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 2400000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Healthcare Organization Data And Analytics Services LOB Analytics SBU Services Country India City Noida Center Noida-SEZ BPO Solutions Skills Skill GCP GCP/AWS/CI - CD/DEVOPS AI PYTHON SQL Minimum Qualification B.TECH/B.E MCA Certification No data available Job Description Cloud AI Engineer We're looking for a highly skilled and experienced Cloud AI Engineer to join our dynamic team. In this role, you'll be instrumental in designing, developing, and deploying cutting-edge artificial intelligence and machine learning solutions leveraging the full suite of Google Cloud Platform (GCP) services. Objectives of this role Lead the end-to-end development cycle of AI applications, from conceptualization and prototyping to deployment and optimization, with a core focus on LLM-driven solutions. Architect and implement highly performant and scalable AI services, effectively integrating with GCP's comprehensive AI/ML ecosystem. Collaborate closely with product managers, data scientists, and MLOps engineers to translate complex business requirements into tangible, AI-powered features. Continuously research and apply the latest advancements in LLM technology, prompt engineering, and AI frameworks to enhance application capabilities and performance. ## Responsibilities Develop and deploy production-grade AI applications and microservices primarily using Python and FastAPI, ensuring robust API design, security, and scalability. Design and implement end-to-end LLM pipelines, encompassing data ingestion, processing, model inference, and output generation. Utilize Google Cloud Platform (GCP) services extensively, including Vertex AI (Generative AI, Model Garden, Workbench), Cloud Functions, Cloud Run, Cloud Storage, and BigQuery, to build, train, and deploy LLMs and AI models. Expertly apply prompt engineering techniques and strategies to optimize LLM responses, manage context windows, and reduce hallucinations. Implement and manage embeddings and vector stores for efficient information retrieval and Retrieval-Augmented Generation (RAG) patterns. Work with advanced LLM orchestration frameworks such as LangChain, LangGraph, Google ADK, and CrewAI to build sophisticated multi-agent systems and complex AI workflows. Integrate AI solutions with other enterprise systems and databases, ensuring seamless data flow and interoperability. Participate in code reviews, establish best practices for AI application development, and contribute to a culture of technical excellence. Keep abreast of the latest advancements in GCP AI/ML services and broader AI/ML technologies, evaluating and recommending new tools and approaches. ## Required skills and qualifications Two or more years of hands-on experience as an AI Engineer with a focus on building and deploying AI applications, particularly those involving Large Language Models (LLMs). Strong programming proficiency in Python, with significant experience in developing web APIs using FastAPI. Demonstrable expertise with Google Cloud Platform (GCP), specifically with services like Vertex AI (Generative AI, AI Platform), Cloud Run/Functions, and Cloud Storage. Proven experience in prompt engineering, including advanced techniques like few-shot learning, chain-of-thought prompting, and instruction tuning. Practical knowledge and application of embeddings and vector stores for semantic search and RAG architectures. Hands-on experience with at least one major LLM orchestration framework (e.g., LangChain, LangGraph, CrewAI). Solid understanding of software engineering principles, including API design, data structures, algorithms, and testing methodologies. Experience with version control systems (Git) and CI/CD pipelines. Preferred skills and qualifications Bachelor’s or Master's degree in Computer Science Good to have: Experience with MLOps practices for deploying, monitoring, and maintaining AI models in production. Understanding of distributed computing and data processing technologies. Contributions to open-source AI projects or a strong portfolio showcasing relevant AI/LLM applications. Excellent analytical and problem-solving skills with a keen attention to detail. Strong communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders. Workflow Workflow Type L&S-DA-Consulting
Posted 2 weeks ago
2.0 - 6.0 years
1 - 3 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role: As our Agentic System Architect, you will define and own the end-to-end architecture of our Python-based autonomous agent platform. Leveraging cutting-edge frameworks—LangChain, LangGraph, RAG pipelines, and more—you’ll ensure our multi-agent workflows are resilient, scalable, and aligned with business objectives Key Responsibilities Architectural Strategy & Standards Define system topology: microservices, agent clusters, RAG retrieval layers, and knowledge-graph integrations. Establish architectural patterns for chain-based vs. graph-based vs. retrieval-augmented workflows. Component & Interface Design Specify Python modules for LLM integration, RAG connectors (Haystack, LlamaIndex), vector store adapters, and policy engines. Design REST/gRPC and message-queue interfaces compatible with Kafka/RabbitMQ, Semantic Kernel, and external APIs. Scalability & Reliability Architect auto-scaling of Python agents on Kubernetes/EKS (including GPU-enabled inference pods). Define fault-tolerance patterns (circuit breakers, retries, bulkheads) and lead chaos-testing of agent clusters. Security & Governance Embed authentication/authorization in agent flows (OIDC, OAuth2) and secure data retrieval (encrypted vector stores). Implement governance: prompt auditing, model-version control, drift detection, and usage quotas. Performance & Cost Optimization Specify profiling/tracing requirements (OpenTelemetry in Python) across chain, graph, and RAG pipelines. Architect caching layers and GPU/CPU resource policies to minimize inference latency and cost. Cross-Functional Leadership Collaborate with AI research, DevOps, and product teams to align roadmaps with strategic goals. Review and enforce best practices in Python code, CI/CD (GitHub Actions), and IaC (Terraform). 7. Documentation & Evangelism Produce architecture diagrams, decision records, and runbooks illustrating agentic designs (ReAct, CoT, RAG). Mentor engineers on agentic patterns—chain-of-thought, graph traversals, retrieval loops—and Python best practices. Preferred Qualifications Bachelor’s Degree in Computer Science, Information Technology, or related fields (e.g., B.Tech, B.E., B.Sc. in Computer Science) Preferred/Ideal Educational Qualification: Master’s Degree (optional but highly valued) in one of the following: M.Tech or M.E. in Computer Science / AI / Data Science M.Sc. in Artificial Intelligence or Machine Learning Integrated M.Tech programs in AI/ML from top-tier institutions like IITs, IIIT-H, IISc Bonus or Value-Add Qualifications: Ph.D. or Research Experience in NLP, Information Retrieval, or Agentic AI (especially relevant if applying to R&D-heavy teams like Microsoft Research, TCS Research, or AI startups) Certifications or online credentials in: LangChain, RAG architectures (DeepLearning.AI, Cohere, etc.) Advanced Python (Coursera/edX/Springboard/NPTEL) Cloud-based ML operations (AWS/Azure/GCP) Additional Skill Set: Hands-on with agentic frameworks: LangChain, LangGraph, Microsoft AutoGen Experience building RAG pipelines with Haystack, LlamaIndex, or custom retrieval modules Familiarity with vector databases (FAISS, Pinecone, Chroma) and knowledge-graph stores (Neo4j) Expertise in observability stacks (Prometheus, Grafana, OpenTelemetry) Background in LLM SDKs (OpenAI, Anthropic) and function-calling paradigms Core Skills & Competencies System Thinking: Decompose complex business goals into modular, maintainable components Python Mastery: Idiomatic Python, async/await, package management (Poetry/venv) Distributed Design: Microservices, agent clusters, RAG retrieval loops, event streams Security-First: Embed authentication, authorization, and auditabilitys Leadership: Communicate complex system designs clearly to both technical and non-technical stakeholders We are looking for someone with a proven track record in leveraging cuing-edge agentic frameworks and protocols. This includes hands-on experience with technologies such as Agent-to-Agent (A2A) communication protocols, LangGraph, LangChain, CrewAI, and other similar multi-agent orchestration tools. Your expertise will be crucial in transforming traditional, reactive AI applications into proactive, goal-driven intelligent agents that can signicantly enhance operational eciency, decision-making, and customer engagement in high-stakes domains. We envision this role as instrumental in driving innovation, translating cuing-edge academic research into deployable solutions, and contributing to the development of robust, scalable, and ethical AI agentic systems.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru North, Karnataka, India
Remote
Job Description GalaxEye Space, is a deep-tech Space start-up spun off from IIT-Madras and is currently based in Bengaluru, Karnataka. We are dedicated to advancing the frontiers of space exploration. Our mission is to develop cutting-edge solutions that address the challenges of the modern space industry by specialising in developing a constellation of miniaturised, multi-sensor SAR+EO satellites. Our new age technology enables all-time, all-weather imaging, this with leveraging advanced processing and AI capabilities, we ensure near real-time data delivery and are glad to highlight that we have successfully demonstrated these imaging capabilities, the first of its kind in the world, across various platforms such as Drones as well as HAPS (High-Altitude Pseudo Satellites). Responsibilities Architect and maintain the build pipeline that converts R&D Python notebooks into immutable, versioned executables and libraries Optimize the Python codes for extracting maximum GPU performance Define and enforce coding standards, branching strategy, semantic release tags, and artifact-signing process Lead a team of full-stack developers to integrate Python inference services with the React-Electron UI via gRPC/REST contracts Stand-up and maintain an offline replica environment (VM or bare- metal) that mirrors the forward-deployed system; gate releases through this environment in CI Own automated test suites: unit, contract, regression, performance, and security scanning Coordinate multi-iteration hand-offs with forward engineers; triage returned diffs, merge approved changes, and publish patched releases Mentor the team, conduct code & design reviews, and drivecontinuous-delivery best practices in an air-gap-constrained context Requirements 5+ yrs in software engineering with at least 2 yrs technical-lead experience Deep Python expertise (packaging, virtualenv/venv, dependency pinning) and solid JavaScript/TypeScript skills for React-Electron CI/CD mastery (GitHub Actions, Jenkins, GitLab CI) with artifact repositories (Artifactory/Nexus) and infrastructure-as-code (Packer, Terraform, Ansible) Strong grasp of cryptographic signing, checksum verification, and secure supply-chain principles Experience releasing software to constrained or disconnected environments Additional Skills Knowledge of containerization (Docker/Podman) and offline image distribution Prior work on remote-sensing or geospatial analytics products Benefits Acquire valuable opportunities for learning and development through close collaboration with the founding team. Contribute to impactful projects and initiatives that drive meaningful change. We provide a competitive salary package that aligns with your expertise and experience. Enjoy comprehensive health benefits, including medical, dental, and vision coverage, ensuring the well-being of you and your family. Work in a dynamic and innovative environment alongside a dedicated and passionate team. check(event) ; career-website-detail-template-2 => apply(record.id,meta)" mousedown="lyte-button => check(event)" final-style="background-color:#5BBD6E;border-color:#5BBD6E;color:white;" final-class="lyte-button lyteBackgroundColorBtn lyteSuccess" lyte-rendered="">
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 2 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
31458 Jobs | Dublin
Wipro
16542 Jobs | Bengaluru
EY
10788 Jobs | London
Accenture in India
10711 Jobs | Dublin 2
Amazon
8660 Jobs | Seattle,WA
Uplers
8559 Jobs | Ahmedabad
IBM
7988 Jobs | Armonk
Oracle
7535 Jobs | Redwood City
Muthoot FinCorp (MFL)
6170 Jobs | New Delhi
Capgemini
6091 Jobs | Paris,France