0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
TrueFan uses proprietary AI technology to connect fans and celebrities and is now focused on revolutionizing customer-business interactions with AI-powered personalized video solutions. Our platform enables brands to create unique, engaging video experiences that drive customer loyalty and deeper connections.

Job Description: DevOps with MLOps Engineer

Company Overview
We are a cutting-edge AI company focused on developing advanced lip-syncing technology using deep neural networks. Our solutions enable seamless synchronisation of speech with facial movements in videos, creating hyper-realistic content for industries such as entertainment, marketing, and more.

Position: MLOps Engineer
We are looking for a talented and motivated MLOps Engineer to join our team. The ideal candidate will play a crucial role in managing and scaling our machine learning models and infrastructure, enabling seamless deployment and automation of our lip-sync video generation systems.

Key Responsibilities
Model training/deployment pipelines and monitoring: Design, implement, and maintain scalable, automated pipelines for deploying deep neural network models. Monitor and manage production models, ensuring high availability, low latency, and smooth performance. Automate workflows for data preprocessing (face alignment, feature extraction, audio analysis), model retraining, and video generation. Implement logging, tracking, and monitoring systems to ensure data integrity and visibility into the model lifecycle.
Infrastructure management: Build and manage cloud-based infrastructure (AWS, GCP, or Azure) for efficient model training, deployment, and data storage. Collaborate with DevOps to manage containerization (Docker, Kubernetes) and ensure robust CI/CD pipelines for model delivery using GitHub and Jenkins. Monitor resources for GPU/CPU-intensive tasks such as video processing, model inference, and training using Prometheus, Grafana, Alertmanager, and the ELK stack.
Collaboration: Work closely with ML engineers to integrate models into production pipelines. Provide tools and frameworks for rapid experimentation and model versioning.

Required Skills
Basic Python. Strong experience with cloud platforms (AWS, GCP, Azure) and cloud-based machine learning services. Expert knowledge of containerization technologies (Docker, Kubernetes) and infrastructure-as-code (Terraform, CloudFormation). Understanding of deploying both synchronous and asynchronous APIs using Flask, Django, Celery, Redis, RabbitMQ, or Kafka. Experience deploying and scaling AI/ML systems in production. Familiarity with deep learning frameworks (TensorFlow, PyTorch). Familiarity with video processing tools such as FFmpeg and Dlib for handling dynamic frame data. Basic understanding of ML models.

Preferred Qualifications
Experience in image and video-based deep learning tasks. Familiarity with media streaming and video processing pipelines for real-time generation. Experience with real-time inference and deploying models in latency-sensitive environments. Strong problem-solving skills with a focus on optimising machine learning model infrastructure for scalability and performance.
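The requirements above name Flask, Celery, and Redis for serving both synchronous and asynchronous APIs. As a rough, hedged illustration of that pattern only (not TrueFan's actual stack), the sketch below enqueues a placeholder generate_video task on a Redis-backed Celery worker and exposes a submit endpoint plus a polling endpoint; the routes, broker URLs, and task body are all assumptions.

```python
# Minimal sketch: asynchronous inference API with Flask + Celery (Redis broker).
# All names (generate_video, /jobs routes, broker URLs) are illustrative placeholders.
from celery import Celery
from flask import Flask, jsonify, request

celery_app = Celery(
    "lipsync_jobs",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@celery_app.task
def generate_video(payload: dict) -> dict:
    # Stand-in for the long-running lip-sync inference and rendering step.
    return {"status": "rendered", "requested_frames": payload.get("frames", 0)}

app = Flask(__name__)

@app.post("/jobs")
def submit_job():
    # Asynchronous path: enqueue the request and return a task id immediately.
    task = generate_video.delay(request.get_json(force=True))
    return jsonify({"task_id": task.id}), 202

@app.get("/jobs/<task_id>")
def job_status(task_id: str):
    # Synchronous path: read task state (and result, if finished) from the backend.
    result = celery_app.AsyncResult(task_id)
    return jsonify({"state": result.state, "result": result.result if result.ready() else None})
```

Video generation is long-running, so the submit endpoint returns 202 with a task id rather than blocking; clients poll the status endpoint until the worker finishes.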
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: DevOps Engineer Location: Mumbai (Immediate Joiners Preferred and On-site) Company: OnFinance AI Role Overview: We're urgently looking for a skilled DevOps Engineer to join our dynamic team in Mumbai. The ideal candidate will have a strong background in Terraform, Cloud Architecture, diagramming process & data flows, and efficient LLM inferencing specifically for the India region. Key Responsibilities Manage and optimize cloud infrastructure using Terraform. Architect, deploy, and maintain cloud solutions primarily on AWS and Azure. Design clear and efficient process and data flow diagrams. Enhance and maintain efficient LLM inference processes optimized for the Indian region. Required Skills And Experience Proven expertise with Terraform. In-depth knowledge of AWS and Azure cloud architecture. Proficiency in designing detailed process and data flow diagrams. Hands-on experience optimizing Large Language Model (LLM) inferencing, specifically tuned for performance in India. Hiring Process (Fast-track) Step 1: Two take-home assignments, each followed by a technical interview with our CTO and tech team. Step 2: Final culture-fit interview. Why Join OnFinance AI? Rapid hiring process—position expected to close within one week. Opportunity to work in an innovative AI-driven environment. Collaborative and vibrant work culture. How To Apply Submit your application immediately via this link: https://forms.gle/1g7JfpXUgJdy6xsJ7 We look forward to welcoming you aboard OnFinance AI!
Posted 3 weeks ago
2.0 years
0 Lacs
India
Remote
Senior Machine Learning Engineer (AI-Powered Software Platform for Hidden Physical-Threat Detection & Real-Time Intelligence)

About the Company: Aerobotics7 (A7) is a mission-driven deep-tech startup focused on developing a UAV-based next-gen sensing and advanced AI platform to detect, identify, and mitigate hidden threats like landmines, UXOs, and IEDs in real-time. We are embarking on a rapid development phase, creating innovative solutions leveraging cutting-edge technologies. Our dynamic team is committed to building impactful products through continuous learning and close cross-collaboration.

Position Overview: We are seeking a Senior Machine Learning Engineer with a strong research orientation to join our team. This role will focus on developing and refining proprietary machine learning models for drone-based landmine detection and mitigation. The ideal candidate will design, develop, and optimize advanced ML workflows with an emphasis on rigorous research, novel model development, and experimental validation in deep learning, multi-modal/sensor fusion, and computer vision applications.

Key Responsibilities: Lead the end-to-end AI model development process, including research, experimentation, design, and implementation. Architect, train, and deploy deep learning models on cloud (GCP) and edge devices, ensuring real-time performance. Develop and optimize multi-modal ML/DL models integrating multiple sensor inputs. Implement and fine-tune CNNs, Vision Transformers (ViTs), and other deep-learning architectures. Design and improve sensor fusion techniques for enhanced perception and decision-making. Optimize AI inference for low-latency, high-efficiency deployment in production. Collaborate with software and hardware teams to integrate AI solutions into mission-critical applications. Develop scalable pipelines for model training, validation, and continuous improvement. Ensure robustness, interpretability, and security of AI models in deployment.

Required Skills:
• Strong expertise in deep learning frameworks (TensorFlow, PyTorch).
• Experience with CNNs, ViTs, and other DL architectures.
• Hands-on experience in multi-modal ML and sensor fusion techniques.
• Proficiency in cloud-based AI model deployment (GCP experience preferred).
• Experience with edge AI optimization (NVIDIA Jetson, TensorRT, OpenVINO).
• Strong knowledge of data preprocessing, augmentation, and synthetic data generation.
• Proficiency in model quantization, pruning, and optimization for real-time applications.
• Familiarity with computer vision, object detection, and real-time inference techniques.
• Ability to work with limited datasets, including generating synthetic data (VAEs or similar), data annotation, and augmentation strategies.
• Strong coding skills in Python and C++ with experience in high-performance computing.

Preferred Qualifications:
• Experience: 2-4+ years.
• Experience with MLOps, including CI/CD pipelines, model versioning, and monitoring.
• Knowledge of reinforcement learning techniques.
• Experience working in fast-paced startup environments.
• Prior experience working on AI-driven autonomous systems, robotics, or UAVs.
• Understanding of embedded systems and hardware acceleration for AI workloads.

Benefits: NOTE: THIS ROLE IS UNDER AEROBOTICS7 INVENTIONS PVT. LTD., AN INDIAN ENTITY. IT IS A REMOTE INDIA-BASED ROLE WITH COMPENSATION ALIGNED TO INDIAN MARKET STANDARDS. WHILE OUR PARENT COMPANY IS US-BASED, THIS POSITION IS FOR CANDIDATES RESIDING AND WORKING IN INDIA.
Competitive startup-level salary and comprehensive benefits package. Future opportunity for equity options in the company. Opportunity to work on impactful, cutting-edge technology in a collaborative startup environment. Professional growth with extensive learning and career development opportunities. Direct contribution to tangible, real-world impact. How to Apply: Interested candidates are encouraged to submit their resume along with an (optional) cover letter highlighting their relevant experience and passion for working in a dynamic startup environment. For any questions or further information, feel free to reach out to us directly by emailing us at careers@aerobotics7.com.
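The requirements above mention edge AI optimization (TensorRT, OpenVINO) and model quantization. A common first step in that direction is exporting a PyTorch model to ONNX and sanity-checking the exported graph with ONNX Runtime before handing it to a vendor toolchain. The sketch below is illustrative only, with an off-the-shelf ResNet-18 standing in for the team's proprietary detectors; file names and shapes are assumptions.

```python
# Hedged sketch: PyTorch -> ONNX export, then a quick ONNX Runtime sanity check.
# The model, file name, and input shape are placeholders, not the actual pipeline.
import numpy as np
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "backbone.onnx",
    input_names=["image"], output_names=["logits"], opset_version=17,
)

# Verify the exported graph produces an output of the expected shape on CPU.
session = ort.InferenceSession("backbone.onnx", providers=["CPUExecutionProvider"])
(logits,) = session.run(["logits"], {"image": dummy.numpy()})
print("ONNX output shape:", np.asarray(logits).shape)  # expect (1, 1000)
```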
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Junglee Games: With over 140 million users, Junglee Games is a leader in the online skill gaming space. Founded in San Francisco in 2012 and part of the Flutter Entertainment Group, we are revolutionizing how people play games. Our notable games include Howzat, Junglee Rummy, and Junglee Poker. Our team comprises over 1000 talented individuals who have worked on internationally acclaimed AAA titles like Transformers and Star Wars: The Old Republic and contributed to Hollywood hits such as Avatar. Junglee’s mission is to build entertainment for millions of people around the world and connect them through games. Junglee Games is not just a gaming company but a blend of innovation, data science, cutting-edge tech, and, most importantly, a values-driven culture that is creating the next set of conscious leaders.

Job overview: As our Data Scientist III, you will be a key driver of strategic and tactical data science projects. You’ll partner closely with Product, Marketing, and Engineering teams to influence decisions and optimize outcomes using data, models, and experimentation.

Job Location: Gurgaon

Key Responsibilities: Lead complex analytics projects end-to-end: problem scoping, data wrangling, modeling, validation, and communication. Develop and deploy statistical models and machine learning solutions to improve core metrics (e.g., retention, monetization, user segmentation). Own experimentation and causal inference for product features (A/B testing, uplift modeling, etc.). Translate business problems into quantitative solutions and drive the product and marketing roadmap with data insights. Collaborate with Data Engineering to design scalable data pipelines and ML deployment workflows.

Qualifications & skills required: 5–7 years of experience in data science, preferably in B2C, gaming, consumer tech, or related industries. Strong foundation in statistics, probability, and machine learning (classification, regression, clustering, time-series, etc.). Hands-on experience with Python/R and SQL. Experience with tools like Airflow, Spark, DBT, or similar is a plus. Experience designing and analyzing experiments (A/B testing, DoE). Strong problem-solving mindset with a focus on impact and ownership. Excellent communication skills; ability to simplify complex findings for non-technical stakeholders.

Be a part of Junglee Games to:
Value Customers & Data - Prioritize customers, use data-driven decisions, master KPIs, and leverage ideation and A/B testing to drive impactful outcomes.
Inspire Extreme Ownership - We embrace ownership, collaborate effectively, and take pride in every detail to ensure every game becomes a smashing success.
Lead with Love - We reject micromanagement and fear, fostering open dialogue, mutual growth, and a fearless yet responsible work ethic.
Embrace Change - Change drives progress, and our strength lies in adapting swiftly and recognizing when to evolve to stay ahead.
Play the Big Game - We think big, challenge norms, and innovate boldly, driving impactful results through fresh ideas and inventive problem-solving.

Avail a comprehensive benefits package that includes paid gift coupons, fitness plans, gadget allowances, fuel costs, family healthcare, and much more.

Know more about us: Explore the world of Junglee Games through our website, www.jungleegames.com. Get a glimpse of what Life at Junglee Games looks like on LinkedIn. Here is a quick snippet of the Junglee Games Offsite’24. Liked what you saw so far? Be A Junglee.
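Since the role owns experimentation and A/B testing, a minimal readout of a two-arm conversion test can be expressed as a two-proportion z-test. The sketch below is illustrative only; the conversion counts are invented and do not reflect any Junglee metric.

```python
# Hedged sketch: basic two-proportion z-test for an A/B experiment readout.
# The counts and exposures below are made-up illustration data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]      # control, variant
exposures = [10_000, 10_000]  # users in each arm
z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]
print(f"absolute lift = {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
```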
Posted 3 weeks ago
175.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role? Responsible for contacting clients with overdue accounts to secure the settlement of the account. They also do preventive work to avoid future overdues with accounts that have a high exposure.

Launched in 2012, Amex Offers is a digital advertising platform that connects merchants and brands with the tens of millions of American Express Card Members across the globe. The Amex Offers team develops strategic marketing partnerships that deliver outstanding value for Merchants and Advertisers to reach these high-spending Card Members in the digital channels where they engage with American Express, delivering deep insights and improving results. Through this complete marketing solution, we can help advertisers get laser-focused on who our mutual customers are, what they want, and how we can meet their needs.

We are seeking a commercially minded Director of International Advertiser Analytics & Optimization to lead the strategic analytics function supporting our merchant partnerships and the scaling of proven Amex Offers strategies. In this highly strategic role, you will work together with Business Development, Customer Success, and Product to develop compelling data-driven insights that unlock growth opportunities for merchants, optimize campaign performance, and shape our CLO value proposition. The role will interface with stakeholders in Australia, Canada, the United Kingdom, and the United States. It requires a strong focus on defining efficient growth opportunities and critical market- and advertiser-specific nuances, plus an ability to lead analytics capacity prioritization and support global top-advertiser prioritization. Core to this is stakeholder management, clear communication, and data-driven perspectives. Ideal candidates will have analytics or data consulting backgrounds, experience working directly with transaction-based data, and familiarity with CLO platforms or bank loyalty ecosystems.

Key Responsibilities
Act as the analytics lead in strategic merchant/advertiser conversations, providing pre-sales insights, performance forecasts, and ROI modelling in our top markets. Deliver post-campaign analysis and insights that quantify lift, incrementality, and customer behavior change. Ensure clarity and integrity in data interpretation presented to merchants and internal stakeholders. Partner closely with Business Development, Customer Success, Product, and broader analytics teams to refine and evolve the Amex Offers & Digital Media value proposition, and deliver insights more broadly. Influence go-to-market strategies by identifying vertical-specific trends and merchant priorities. Act as a key partner to the broader Analytics, Product, Commercial & Strategy teams, ensuring measurement frameworks are embedded across campaign planning and performance workflows. Lead and mentor a 12+ member India-based analytics team while fostering a strong technical culture of rigor, transparency, and intellectual curiosity.
Minimum Qualifications
8+ years in media analytics, reporting, or performance measurement, ideally in ad tech, martech, or digital media. Strong understanding of digital media KPIs, CLO campaign dynamics, and marketing measurement techniques. Fluency in SQL and proficiency in Python; experience with BI tools (e.g., Tableau and Power BI). Excellent communication and storytelling skills, comfortable presenting to C-level merchant/advertiser clients. Strong foundation in statistical modelling, experimentation, and causal inference. Strong leadership skills and experience leading a high-output analytics or insights team. Excellent communication and stakeholder management abilities. Self-starter with the ability to drive insights from the data, provide actionable steps, and drive results.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
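As a deliberately simplified illustration of the lift and incrementality language above, a holdout-style readout reduces to comparing spend in exposed versus holdout groups. Every figure in the sketch below is hypothetical and unrelated to any actual Amex campaign.

```python
# Hedged sketch: back-of-envelope incrementality and ROI from a holdout test.
# All spend and response figures are invented for illustration.
test_spend_per_member = 52.0      # average spend, members shown the offer
control_spend_per_member = 47.5   # average spend, holdout members
members_exposed = 200_000
campaign_cost = 350_000.0

incremental_spend = (test_spend_per_member - control_spend_per_member) * members_exposed
roi = incremental_spend / campaign_cost
print(f"incremental spend = {incremental_spend:,.0f}, ROI = {roi:.2f}x")
```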
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description - AI Data Scientist Location: Remote Department: Data & AI Engineering Employment Type: Full-time Experience Level: Mid-level About the Role: We are seeking an experienced AI Data Engineer to design, build, and deploy data pipelines and ML infrastructure to power scalable AI/ML solutions. This role involves working at the intersection of data engineering, MLOps, and model deployment—supporting the end-to-end lifecycle from data ingestion to model production. Key Responsibilities: Data Engineering & Development Design, develop, and train AI models to solve complex business problems and enable intelligent automation. Design, develop, and maintain scalable data pipelines and workflows for AI/ML applications. Ingest, clean, and transform large volumes of structured and unstructured data from diverse sources (APIs, streaming, databases, flat files). Build and manage data lakes, data warehouses, and feature stores. Prepare training datasets and implement data preprocessing logic. Perform data quality checks, validation, lineage tracking, and schema versioning. Model Deployment & MLOps Package and deploy AI/ML models to production using CI/CD workflows. Implement model inference pipelines (batch or real-time) using containerized environments (Docker, Kubernetes). Use MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI) for model tracking, versioning, and deployment. Monitor deployed models for performance, drift, and reliability. Integrate deployed models into applications and APIs (e.g., REST endpoints). Platform & Cloud Engineering Manage cloud-based infrastructure (AWS, GCP, or Azure) for data storage, compute, and ML services. Automate infrastructure provisioning using tools like Terraform or CloudFormation. Optimize pipeline performance and resource utilization for cost-effectiveness. Requirements: Must-Have Skills Bachelor's/Master’s in Computer Science, Engineering, or related field. 2+ years of experience in data engineering, ML engineering, or backend infrastructure. Proficient in Python, SQL, and data processing frameworks (e.g., Spark, Pandas). Experience with cloud platforms (AWS/GCP/Azure) and services like S3, BigQuery, Lambda, or Databricks. Hands-on experience with CI/CD, Docker, and container orchestration (Kubernetes, ECS, EKS). Preferred Skills Experience deploying ML models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Familiarity with API development (Flask/FastAPI) for serving models. Experience with Airflow, Prefect, or Dagster for orchestrating pipelines. Understanding of DevOps and MLOps best practices. Soft Skills: Strong communication and collaboration with cross-functional teams. Proactive problem-solving attitude and ownership mindset. Ability to document and communicate technical concepts clearly.
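The MLOps responsibilities above list MLflow among the tracking and versioning tools. A minimal, hedged sketch of that workflow, using a toy scikit-learn model rather than any real pipeline, might look like the following; run names and parameters are placeholders.

```python
# Hedged sketch: minimal MLflow experiment tracking and model logging.
# Toy dataset and model; names are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```

Logged runs can then be compared in the MLflow UI and the chosen model promoted through whatever registry or CI/CD gate the team uses.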
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: GenAI Developer
Key Skills: GenAI, Python, GenAI platforms, LLM, MLOps, cloud platforms, Docker, Kubernetes, CI/CD pipelines, GenAI frameworks, LangChain, AutoGen, LlamaIndex, CrewAI
Job Locations: Hyderabad, Bangalore
Experience: 3 - 8 Years
Budget: 7 - 15 LPA
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Interview Mode: 2 rounds of technical interviews, including a client round

Job Description: We are seeking a highly skilled and motivated GenAI Infrastructure Architect and Developer to lead the design, development, and deployment of scalable GenAI platforms. This role will focus on building robust infrastructure to support agent-based systems, LLM orchestration, and real-time AI-driven automation across enterprise environments.

Key Responsibilities: Architect and implement GenAI infrastructure using cloud-native technologies (AWS, Azure, GCP). Design and deploy scalable, secure, and resilient GenAI pipelines for model training, inference, and monitoring. Collaborate with cross-functional teams to integrate GenAI services into enterprise workflows (e.g., automation, observability, data pipelines). Optimize performance, cost, and reliability of GenAI workloads. Ensure compliance with security, governance, and data privacy standards.

Required Skills: Strong experience with cloud platforms (AWS Bedrock, Azure AI, GCP Vertex AI). Proficiency in containerization (Docker, Kubernetes) and CI/CD pipelines. Familiarity with GenAI frameworks like LangChain, AutoGen, LlamaIndex, or CrewAI. Hands-on experience with observability, logging, and monitoring tools. Understanding of LLM lifecycle management, prompt engineering, and fine-tuning. Knowledge of data engineering and MLOps practices.

Interested candidates, please share your CV to pnomula@people-prime.com
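The orchestration frameworks named above (LangChain, AutoGen, LlamaIndex, CrewAI) all wrap variations of a retrieval-augmented prompting loop. The framework-agnostic sketch below shows only the shape of that loop; the toy retriever, prompt template, and llm_call stub are assumptions standing in for a real vector store and a hosted model endpoint (e.g., Bedrock, Azure AI, or Vertex AI).

```python
# Framework-agnostic sketch of retrieval-augmented prompting; all helpers are hypothetical.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy lexical retriever: rank documents by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{joined}\n\nQuestion: {query}"

def llm_call(prompt: str) -> str:
    # Placeholder for a hosted-model call; a real system would hit a managed LLM endpoint.
    return f"[model response to a {len(prompt)}-character prompt]"

docs = ["Invoices are processed nightly.", "Refunds require manager approval."]
query = "How are invoices handled?"
print(llm_call(build_prompt(query, retrieve(query, docs))))
```

A production pipeline would swap the toy retriever for embeddings plus a vector index and add the observability and guardrail layers the posting mentions, but the control flow stays the same.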
Posted 3 weeks ago
3.0 years
0 Lacs
India
Remote
If you haven't built and maintained AI/LLM systems in production, developed full-stack applications with complex backend architectures, or debugged distributed systems under pressure, we kindly ask that you don't apply. We need a hands-on developer with strong support engineering experience who codes daily while ensuring system reliability. About Us We're an AI-first startup revolutionizing speech-to-text technology through cutting-edge LLM integration and machine learning pipelines. Our platform combines advanced AI models with real-time processing capabilities, serving enterprise clients who demand both accuracy and reliability. As we scale our AI infrastructure globally, we need a technical leader who can both build and support our systems. Role Overview We're seeking a Senior Support Engineer who is primarily a hands-on full-stack developer with deep AI/LLM infrastructure experience and production support expertise. This isn't a traditional support role - you'll spend 70% of your time coding and building systems, 30% on support and reliability engineering. You'll architect and implement AI/LLM integrations, develop full-stack applications, optimize backend performance, and maintain production systems. This role requires someone who can write production code, debug complex distributed systems, and take ownership of both development and operational excellence. Key Responsibilities 1. AI/LLM Infrastructure Development - Design and implement LLM integration pipelines (OpenAI, Anthropic, local models) - Build AI model inference systems with real-time processing capabilities - Develop prompt engineering frameworks and model optimization systems - Create AI/ML monitoring and evaluation frameworks - Implement vector databases and semantic search capabilities - Build automated model training and deployment pipelines 2. Full Stack Development - Backend Focus - Develop scalable backend APIs using Python/Node.js/Go - Design and optimize database architectures (PostgreSQL, MongoDB, Redis) - Build microservices architectures with proper service communication - Implement authentication, authorization, and security frameworks - Create data processing pipelines for audio/text transcription workflows - Develop real-time WebSocket and event-driven systems 3. Production Support & System Reliability - Monitor and maintain production AI/LLM systems with 99.9% uptime - Respond to critical incidents and perform root cause analysis - Debug complex distributed system issues across the full stack - Implement comprehensive monitoring, alerting, and observability systems - Maintain CI/CD pipelines and automated deployment processes - Create technical documentation and incident response procedures Technical Requirements 1. AI/LLM Infrastructure Experience - 3+ years hands-on experience with LLM APIs (OpenAI, Anthropic, Hugging Face) - Production experience with AI model deployment and inference systems - Knowledge of vector databases (Pinecone, Weaviate, Chroma) and embeddings - Experience with ML frameworks (PyTorch, TensorFlow, Transformers) - Understanding of prompt engineering, RAG systems, and AI evaluation metrics 2. Backend Development Expertise - 5+ years full-stack development with strong backend focus - Expert-level Python, Node.js, or Go for backend services - Advanced database optimization (PostgreSQL, MongoDB, Redis) - Microservices architecture and API design patterns - Experience with message queues (RabbitMQ, Apache Kafka) - Cloud infrastructure expertise (AWS, GCP, Azure) 3. 
Production Support Experience - 3+ years maintaining production systems under high load - Incident response and on-call rotation experience - Proficiency with monitoring tools (Datadog, New Relic, Grafana) - Experience with containerization (Docker, Kubernetes) - Knowledge of CI/CD pipelines and Infrastructure as Code 4. Full Stack Capabilities - Frontend development with React, Vue.js, or Angular - Understanding of modern web technologies and performance optimization - Experience with real-time applications and WebSocket implementation - Mobile development experience (React Native, Flutter) preferred Preferred Qualifications - Experience with speech-to-text, NLP, or audio processing systems - Background in fintech, healthcare, or regulated industries - Contributions to open-source AI/ML projects - Experience with startup environments and rapid scaling - DevOps and infrastructure automation experience What You'll Build - AI-powered transcription services with multi-model inference - Real-time audio processing pipelines with LLM integration - Scalable backend APIs serving millions of requests - Monitoring dashboards for AI model performance and system health - Automated deployment systems for AI/ML models - Full-stack applications for enterprise clients Technical Environment - AI/ML Stack: OpenAI GPT-4, Anthropic Claude, Hugging Face models, PyTorch - Backend: Python/FastAPI, Node.js, PostgreSQL, Redis, Docker, Kubernetes - Cloud: AWS (Lambda, ECS, RDS, S3), infrastructure automation with Terraform - Monitoring: Datadog, Grafana, ELK stack, custom AI model monitoring - Frontend: React, TypeScript, modern web frameworks Working Arrangements - 100% remote, full-time position with rotating shift schedule for global engineering support coverage - Engineering support coverage across multiple time zones (building toward 24/7 coverage as the team grows) - Collaborative environment with structured handoffs between regional support teams - Reasonable on-call responsibilities with fair rotation - Modern collaboration tools, comprehensive documentation systems, and remote-first culture What We Offer - Competitive compensation package - Opportunity to work with cutting-edge AI technology and solve complex technical challenges at scale - Supportive team culture despite global support requirements - Clear career growth path toward senior technical leadership, specialized expertise, and architectural roles How to Apply Submit your resume with a cover letter addressing: - Your hands-on experience building AI/LLM systems in production - Specific examples of full-stack applications you've developed - Your approach to maintaining production systems under pressure - Experience with both development and support engineering responsibilities - Examples of complex backend optimization or distributed system debugging Include GitHub profile or portfolio demonstrating: - AI/ML projects with real-world applications - Full-stack development capabilities - Production system monitoring and reliability engineering We're looking for a technical leader who can build our AI infrastructure while ensuring operational excellence. If you're passionate about both creating and maintaining cutting-edge AI systems, we'd love to hear from you.
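Given the stack described above (Python/FastAPI backends serving speech-to-text with LLM post-processing), here is a minimal sketch of the service shape only; the route, response model, and transcribe() stub are hypothetical and not this startup's actual API.

```python
# Hedged sketch: minimal FastAPI endpoint accepting an audio upload and returning a transcript.
# The transcribe() stub stands in for the real model/LLM pipeline; requires python-multipart.
from fastapi import FastAPI, UploadFile
from pydantic import BaseModel

app = FastAPI()

class Transcript(BaseModel):
    text: str
    duration_ms: int

async def transcribe(audio_bytes: bytes) -> Transcript:
    # Placeholder for speech-to-text inference plus LLM post-processing.
    return Transcript(text="(stub transcript)", duration_ms=len(audio_bytes) // 32)

@app.post("/v1/transcriptions", response_model=Transcript)
async def create_transcription(file: UploadFile) -> Transcript:
    audio = await file.read()
    return await transcribe(audio)
```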
Posted 3 weeks ago
0.0 - 12.0 years
0 Lacs
Gurugram, Haryana
On-site
Associate Director, ML Engineering
Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India
Information Technology
Job ID: 317386

Job Description

About The Role: Grade Level (for internal use): 12

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative.

What’s in it for you: Be part of a global company and build solutions at enterprise scale. Lead and grow a technically strong ML engineering function. Collaborate on and solve high-complexity, high-impact problems. Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations).

Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming). Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms. Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions. Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback. Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems. Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement. Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs.

What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus. Expert in Python and Apache for large-scale data processing. Deep experience deploying and operating ML pipelines on AWS or GCP. Hands-on proficiency with container/orchestration tooling. Solid understanding of the full ML model lifecycle and CI/CD principles. Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow). Strong OOP design patterns, Test-Driven Development, and enterprise system architecture. Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets. Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes. Excellent problem-solving, debugging, and performance-tuning abilities. Ability to communicate technical change clearly to non-technical audiences.

Nice to have: Redis, Celery, SQS, and Lambda-based event-driven pipelines. Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale. Experience with Apache Avro and Apache. Familiarity with Java and/or .NET Core (C#).

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH103.2 - Middle Management Tier II (EEO Job Group)
Job ID: 317386
Posted On: 2025-06-30
Location: Gurgaon, Haryana, India
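Returning to the technical requirements earlier in this posting (streaming and batch ETL design with tools such as Airflow), a skeletal DAG for an extract, train, and validate sequence might look like the sketch below; the task bodies, DAG id, and schedule are placeholders, not S&P Global's actual pipelines.

```python
# Hedged sketch: minimal Airflow DAG skeleton for a batch train-and-validate pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw documents from source systems")

def train(**_):
    print("retrain the model on the refreshed dataset")

def validate(**_):
    print("run evaluation gates before promotion")

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_extract >> t_train >> t_validate  # simple linear dependency chain
```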
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India
Job Type: Full Time

About the Role
We are seeking a highly skilled and visionary Senior Embedded Systems Architect to lead the design and implementation of next-generation AI-powered embedded platforms. This role demands deep technical proficiency across embedded systems, AI model deployment, hardware–software co-design, and media-centric inference pipelines. You will architect full-stack embedded AI solutions using custom AI accelerators such as Google Coral (Edge TPU), Hailo, BlackHole (Torrent), and Kendryte, delivering real-time performance in vision, audio, and multi-sensor edge deployments. The ideal candidate brings a combination of system-level thinking, hands-on prototyping, and experience in optimizing AI workloads for edge inference. This is a high-impact role where you will influence product architecture, ML tooling, hardware integration, and platform scalability for a range of IoT and intelligent device applications.

Requirements

Key Responsibilities

System Architecture & Design
Define and architect complete embedded systems for AI workloads, from sensor acquisition to real-time inference and actuation. Design multi-stage pipelines for vision/audio inference, e.g., ISP preprocessing → CNN inference → postprocessing. Evaluate and benchmark hardware platforms with AI accelerators (TPU/NPU/DSP) for latency, power, and throughput.

Edge AI & Accelerator Integration
Work with Coral, Hailo, Kendryte, Movidius, and Torrent accelerators using their native SDKs (EdgeTPU Compiler, HailoRT, etc.). Translate ML models (TensorFlow, PyTorch, ONNX) for inference on edge devices using cross-compilation, quantization, and toolchain optimization. Lead efforts in compiler flows such as TVM, XLA, Glow, and custom runtime engines.

Media & Sensor Processing Pipelines
Architect pipelines involving camera input, ISP tuning, video codecs, audio preprocessors, or sensor fusion stacks. Integrate media frameworks such as V4L2, GStreamer, and OpenCV into real-time embedded systems. Optimize for frame latency, buffering, memory reuse, and bandwidth constraints in edge deployments.

Embedded Firmware & Platform Leadership
Lead board bring-up, firmware development (RTOS/Linux), peripheral interface integration, and low-power system design. Work with engineers across embedded, AI/ML, and cloud to build robust, secure, and production-ready systems. Review schematics and assist with hardware–software trade-offs, especially around compute, thermal, and memory design.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Electrical, Computer Engineering, Embedded Systems, or related fields.
Experience: Minimum 5+ years of experience in embedded systems design. Minimum 3 years of hands-on experience with AI accelerators and ML model deployment at the edge.

Technical Skills Required

Embedded System Design
Strong C/C++, embedded Linux, and RTOS-based development experience. Experience with SoCs and MCUs such as STM32, ESP32, NXP, RK3566/3588, TI Sitara, etc. Cross-architecture familiarity: ARM Cortex-A/M, RISC-V, DSP cores.

ML & Accelerator Toolchains
Proficiency with ML compilers and deployment toolchains: ONNX, TFLite, HailoRT, EdgeTPU compiler, TVM, XLA. Experience with quantization, model pruning, compiler graphs, and hardware-aware profiling.

Media & Peripherals
Integration experience with camera modules, audio codecs, IMUs, and other digital/analog sensors.
Experience with V4L2, GStreamer, OpenCV, MIPI CSI, and ISP tuning is highly desirable.

System Optimization
Deep understanding of compute budgeting, thermal constraints, memory management, DMA, and low-latency pipelines. Familiarity with debugging tools: JTAG, SWD, logic analyzers, oscilloscopes, perf counters, and profiling tools.

Preferred (Bonus) Skills
Experience with Secure Boot, TPM, Encrypted Model Execution, or Post-Quantum Cryptography (PQC). Familiarity with safety standards like IEC 61508, ISO 26262, UL 60730. Contributions to open-source ML frameworks or embedded model inference libraries.

Why Join Us?
At EURTH TECHTRONICS PVT LTD, you won't just be optimizing firmware — you will architect full-stack intelligent systems that push the boundary of what's possible in embedded AI. Work on production-grade, AI-powered devices for industrial, consumer, defense, and medical applications. Collaborate with a high-performance R&D team that builds edge-first, low-power, secure, and scalable systems. Drive core architecture and set the technology direction for a fast-growing, innovation-focused organization.

How to Apply
Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com

About the Company
About EURTH TECHTRONICS PVT LTD
EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions.

Our Core Capabilities
Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware.
IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications.
Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA).
Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production.
AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security.
Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates.

Our Industry Impact
✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure.
✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence.
✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions.
✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators.

Our Vision
We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions.
We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
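The vision pipelines described in this architect posting follow a capture, preprocess, inference, postprocess loop. The Python sketch below shows only the shape of that loop with OpenCV; the camera index, input size, and run_inference() stub are assumptions, and production code here would typically sit closer to C/C++ and a vendor SDK.

```python
# Hedged sketch: capture -> preprocess -> inference -> postprocess loop with OpenCV.
# run_inference() is a stub standing in for an accelerator call (Edge TPU, Hailo, etc.).
import cv2
import numpy as np

def run_inference(tensor: np.ndarray) -> np.ndarray:
    # Placeholder: a real pipeline would hand this tensor to the accelerator runtime.
    return np.zeros((1, 10), dtype=np.float32)

cap = cv2.VideoCapture(0)  # camera index is illustrative
try:
    for _ in range(100):
        ok, frame = cap.read()
        if not ok:
            break
        resized = cv2.resize(frame, (224, 224))
        tensor = resized.astype(np.float32)[None, ...] / 255.0  # NHWC batch, normalized
        scores = run_inference(tensor)
        print("top class index:", int(scores.argmax()))
finally:
    cap.release()
```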
Posted 3 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India
Job Type: Full Time

About the Role
We are looking for a hands-on and technically proficient Embedded Software Team Lead to drive the development of intelligent edge systems that combine embedded firmware, machine learning inference, and hardware acceleration. This role is perfect for someone who thrives at the intersection of real-time firmware design, AI model deployment, and hardware-software co-optimization. You will lead a team delivering modular, scalable, and efficient firmware pipelines that run quantized ML models on accelerators like Hailo, Coral, Torrent (BlackHole), Kendryte, and other emerging chipsets. Your focus will include model runtime integration, low-latency sensor processing, OTA-ready firmware stacks, and CI/CD pipelines for embedded products at scale.

Requirements

Key Responsibilities

Technical Leadership & Planning
Own the firmware lifecycle across multiple AI-based embedded product lines. Define system and software architecture in collaboration with hardware, ML, and cloud teams. Lead sprint planning, code reviews, performance debugging, and mentor junior engineers.

ML Model Deployment & Runtime Integration
Collaborate with ML engineers to port, quantize, and deploy models using TFLite, ONNX, or HailoRT. Build runtime pipelines that connect model inference with real-time sensor data (vision, IMU, acoustic). Optimize memory and compute flows for edge model execution under power/bandwidth constraints.

Firmware Development & Validation
Build production-grade embedded stacks using RTOS (FreeRTOS/Zephyr) or embedded Linux. Implement secure bootloaders, OTA update mechanisms, and encrypted firmware interfaces. Interface with a variety of peripherals including cameras, IMUs, analog sensors, and radios (BLE/Wi-Fi/LoRa).

CI/CD, DevOps & Tooling for Embedded
Set up and manage CI/CD pipelines for firmware builds, static analysis, and validation. Integrate Docker-based toolchains, hardware-in-loop (HIL) testing setups, and simulators/emulators. Ensure codebase quality, maintainability, and test coverage across the embedded stack.

Required Qualifications
Education: BE/B.Tech/M.Tech in Embedded Systems, Electronics, Computer Engineering, or related fields.
Experience: Minimum 4+ years of embedded systems experience. Minimum 2 years in a technical lead or architect role. Hands-on experience in ML model runtime optimization and embedded system integration.

Technical Skills Required

Embedded Development & Tools
Expert-level C/C++, hands-on with RTOS and Yocto-based Linux. Proficient with toolchains like GCC/Clang, OpenOCD, JTAG/SWD, and logic analyzers. Familiarity with OTA, bootloaders, and memory management (heap/stack analysis, linker scripts).

ML Model Integration
Proficiency in TFLite, ONNX Runtime, HailoRT, or EdgeTPU runtimes. Experience with model conversion, quantization (INT8, FP16), and runtime optimization. Ability to read/modify model graphs and connect to inference APIs.

Connectivity & Peripherals
Working knowledge of BLE, Wi-Fi, LoRa, RS485, USB, and CAN protocols. Integration of camera modules, MIPI CSI, IMUs, and custom analog sensors.

DevOps for Embedded
Hands-on with GitLab/GitHub CI, Docker, and containerized embedded builds. Build system expertise: CMake, Make, Bazel, or Yocto preferred. Experience in automated firmware testing (HIL, unit, integration).

Preferred (Bonus) Skills
Familiarity with machine vision pipelines, ISP tuning, or video/audio codec integration.
Prior work on battery-operated devices , energy-aware scheduling , or deep sleep optimization . Contributions to embedded ML open-source projects or model deployment tools. Why Join Us? At EURTH TECHTRONICS PVT LTD , we go beyond firmware—we’re designing and deploying embedded intelligence on every device, from industrial gateways to smart consumer wearables. Build and lead teams working on cutting-edge real-time firmware + ML integration . Work on full-stack embedded ML systems using the latest AI accelerators and embedded chipsets . Drive product-ready, scalable software platforms that power IoT, defense, medical , and consumer electronics . How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
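For illustration only: the responsibilities above center on running quantized models (TFLite/ONNX/HailoRT) against live sensor data. Below is a minimal sketch of the TFLite inference step, assuming a hypothetical model_int8.tflite file and a synthetic input frame in place of a real camera or IMU feed; it is not EURTH's actual runtime.

```python
import numpy as np
import tensorflow as tf  # on-device builds often use the lighter tflite_runtime package instead

# Load the (hypothetical) quantized model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for one preprocessed sensor frame; a real pipeline would feed
# camera/IMU data shaped and quantized to match input_details[0].
frame = np.random.randint(-128, 128, size=input_details[0]["shape"], dtype=np.int8)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("raw model output:", scores)
```

On a production target the same loop would typically sit behind an RTOS task or a vendor runtime such as HailoRT rather than full TensorFlow.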
Posted 3 weeks ago
0.0 - 4.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India Job Type Full Time
About the Role
We are seeking a passionate and skilled Embedded ML Engineer to work on cutting-edge ML inference pipelines for low-power, real-time embedded platforms. You will help design and deploy highly efficient ML models on custom hardware accelerators like Hailo, Coral (Edge TPU), Kendryte K210, and Torrent/BlackHole in real-world IoT systems. This role combines model optimization, embedded firmware development, and toolchain management. You will be responsible for translating large ML models into efficient quantized versions, benchmarking them on custom hardware, and integrating them with embedded firmware pipelines that interact with real-world sensors and peripherals.
Key Responsibilities
ML Model Optimization & Conversion: Convert, quantize, and compile models built in TensorFlow, PyTorch, or ONNX to hardware-specific formats. Work with compilers and deployment frameworks like TFLite, HailoRT, EdgeTPU Compiler, TVM, or ONNX Runtime. Use techniques such as post-training quantization, pruning, distillation, and model slicing.
Embedded Integration & Inference Deployment: Integrate ML runtimes in C/C++ or Python into firmware stacks built on RTOS or embedded Linux. Handle real-time sensor inputs (camera, accelerometer, microphone) and pass them through inference engines. Manage memory, DMA transfers, inference buffers, and timing loops for deterministic behavior.
Benchmarking & Performance Tuning: Profile and optimize models for latency, memory usage, compute load, and power draw. Work with runtime logs, inference profilers, and vendor SDKs to squeeze maximum throughput on edge hardware. Conduct accuracy vs. performance trade-off studies for different model variants.
Testing & Validation: Design unit, integration, and hardware-in-loop (HIL) tests to validate model execution on actual devices. Collaborate with hardware and firmware teams to debug runtime crashes, inference failures, and edge cases. Build reproducible benchmarking scripts and test data pipelines.
Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Embedded Systems, Computer Science, or related disciplines.
Experience: 2–4 years in embedded ML, edge AI, or firmware development with ML inference integration.
Technical Skills Required
Embedded Firmware & Runtime: Strong experience in C/C++ and basic Python scripting. Experience with RTOS (FreeRTOS, Zephyr) or embedded Linux. Understanding of memory-mapped I/O, ring buffers, circular queues, and real-time execution cycles.
ML Model Toolchains: Experience with TensorFlow Lite, ONNX Runtime, HailoRT, EdgeTPU, uTensor, or TinyML. Knowledge of quantization-aware training or post-training quantization techniques. Familiarity with model conversion pipelines and hardware-aware model profiling.
Media & Sensor Stack: Ability to work with input/output streams from cameras, IMUs, microphones, etc. Experience integrating inference with V4L2, GStreamer, or custom ISP preprocessors is a plus.
Tooling & Debugging: Git, Docker, cross-compilation toolchains (Yocto, CMake). Debugging with SWD/JTAG, GDB, or serial console-based logging. Profiling with memory maps, timing charts, and inference logs.
Preferred (Bonus) Skills
Previous work with low-power vision devices, audio keyword spotting, or sensor fusion ML. Familiarity with edge security (encrypted models, secure firmware pipelines). Hands-on with simulators/emulators for ML testing (Edge Impulse, Hailo's HEF emulator, etc.). Participation in TinyML forums, open-source ML toolkits, or ML benchmarking communities.
Why Join Us?
At EURTH TECHTRONICS PVT LTD, we're not just building IoT firmware: we're deploying machine learning intelligence on ultra-constrained edge platforms, powering real-time decisions at the edge. Get exposure to full-stack embedded ML pipelines, from model quantization to runtime integration. Work with a world-class team focused on ML efficiency, power optimization, and embedded system scalability. Contribute to mission-critical products used in industrial automation, medical wearables, smart infrastructure, and more.
How to Apply
Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com
About the Company
EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services, from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions.
Our Core Capabilities
Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates.
Our Industry Impact
✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators.
Our Vision
We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
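As context for the post-training quantization work named above, here is a minimal sketch of TensorFlow Lite's INT8 conversion path; the saved-model directory and the random representative dataset are placeholders, not a real project setup.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration samples drive the INT8 ranges; a real project would yield
    # preprocessed sensor frames, not random noise.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

Accelerator-specific toolchains (EdgeTPU Compiler, HailoRT's compiler, TVM) would then take a model like this as their input.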
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're a fast-moving, early-stage startup working at the cutting edge of AI to solve real-world problems in the future-of-work space. As part of the broader AI ecosystem, we're not just following trends; we're building the next wave of vertical AI infrastructure. We're seeking a driven Senior Backend Engineer to join our team. This role is perfect for an engineer who thrives in chaos, ships confidently, and can own DevOps, backend infrastructure, and AI-powered feature delivery like a pro. You must be available to start in 1 to 2 weeks!
What You'll Own: Architect, scale, and secure our platform on AWS (EC2, Lambda, RDS, etc.). Automate deployments, logging, monitoring, and backups. Optimize and expand our FastAPI backend and Next.js platform to support new workflows, smart inference, and user-triggered pipelines. Troubleshoot issues across AI APIs, improve prompt strategy, and guide the integration of ML models and data-driven components into the backend. Implement security best practices across APIs, databases, auth flows, and user data. Be the grown-up in the room when it comes to system design. Unblock the team. Ship high-impact features weekly. Handle what the full-stack lead can't get to. Be the difference between "2 weeks" and "2 days."
You're a Fit If You: Have 7+ years of experience in backend/platform/DevOps roles, ideally within a startup or SaaS environment. Are fluent in AWS, FastAPI, and CI/CD pipelines. Have built and scaled APIs in production and know how to handle rate limits, timeouts, retries, and error handling. Can debug and optimize AI-driven workflows, APIs, and prompt-based interfaces. Understand data security, auth, encryption, and compliance. Enjoy working with founders directly and thrive in high-ownership, low-structure environments.
Bonus: Experience with Hugging Face, Pandas, Supabase, Postgres, or building AI-first apps.
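As a point of reference for the "rate limits, timeouts, retries" expectation, here is a minimal FastAPI sketch (not this startup's actual codebase); the upstream URL, environment variable, route name, and retry budget are all assumptions.

```python
import asyncio
import os

import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
AI_API_URL = os.getenv("AI_API_URL", "https://example.com/v1/generate")  # placeholder upstream

@app.post("/summarize")
async def summarize(payload: dict):
    # Call the external AI API with a hard timeout and bounded retries.
    async with httpx.AsyncClient(timeout=10.0) as client:
        for attempt in range(3):
            try:
                resp = await client.post(AI_API_URL, json=payload)
                resp.raise_for_status()
                return resp.json()
            except (httpx.TimeoutException, httpx.HTTPStatusError):
                await asyncio.sleep(2 ** attempt)  # simple exponential backoff
    raise HTTPException(status_code=502, detail="Upstream AI API unavailable")
```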
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Company Description
Triple I is a leading provider of AI-powered tools for ESG reporting automation. The company offers solutions that handle the entire ESG process, from real-time data integration to audit-ready reports aligned with industry regulations. Trusted by teams across various industries, Triple I simplifies ESG reporting to help enterprises move faster, stay compliant, and reduce workloads.
Role Description
We're looking for a skilled AI Engineer to build a powerful AI-driven system that can analyze, transform, and standardize raw datasets into a predefined destination schema, with full language normalization, schema mapping, and intelligent data validation. This role is perfect for someone with deep expertise in data pipelines, NLP, and intelligent schema inference who thrives on creating scalable, adaptable solutions that go far beyond hardcoded logic.
What You'll Be Doing
Develop a generalizable AI algorithm that transforms raw, unstructured (or semi-structured) source datasets into a standardized schema. Automate schema mapping, data enrichment, PK/FK handling, language translation, and duplicate detection. Build logic to flag unresolved data, generate an UnresolvedData_Report, and explain confidence or failure reasons. Ensure all outputs are generated in English only, regardless of input language. Experiment with 2–3 AI/ML approaches (e.g., NLP models, rule-based logic, transformers, clustering) and document tradeoffs. Deliver all outputs (destination tables) in clean, validated formats (CSV/XLSX). Maintain detailed documentation of preprocessing, validation, and accuracy logic.
Key Responsibilities
Design AI logic to dynamically extract, map, and organize data into 10+ destination tables. Handle primary key/foreign key relationships across interconnected tables. Apply GHG Protocol logic to assign Scope 1, 2, or 3 emissions automatically based on activity type. Build multilingual support: auto-translate non-English input and ensure the destination is 100% English. Handle duplicate and conflicting records with intelligent merging or flagging logic. Generate automated validation logs for transparency and edge case handling.
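To make the schema-mapping and unresolved-data ideas concrete, here is a toy pandas sketch of alias-based column mapping with an UnresolvedData_Report; the destination schema, aliases, and sample data are invented for illustration and are far simpler than the model-driven mapping the role describes.

```python
import pandas as pd

# Destination columns and the source aliases they might appear under (illustrative).
DEST_SCHEMA = {
    "facility_name": ["site", "plant", "facility"],
    "fuel_type": ["fuel", "energy_source"],
    "quantity_kwh": ["kwh", "consumption"],
    "emission_scope": ["scope"],
}

def map_columns(df: pd.DataFrame):
    mapped, unresolved = {}, []
    for dest, aliases in DEST_SCHEMA.items():
        match = next((c for c in df.columns if c.lower() in aliases), None)
        if match is not None:
            mapped[dest] = df[match]
        else:
            unresolved.append({"column": dest, "reason": "no source column matched"})
    return pd.DataFrame(mapped), pd.DataFrame(unresolved)

raw = pd.DataFrame({"Site": ["Plant A"], "Fuel": ["diesel"], "kWh": [1200]})
destination, report = map_columns(raw)
destination.to_csv("destination_table.csv", index=False)
report.to_csv("UnresolvedData_Report.csv", index=False)  # here: emission_scope is flagged
```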
Posted 3 weeks ago
0 years
0 Lacs
Delhi, India
On-site
We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI, building scalable models and intelligent systems that power our next-generation sports tech platform.
Responsibilities
Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding. Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4, etc.) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base.
Requirements
Strong command of Python and familiarity with C/C++. Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e.g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems. An undergraduate degree (Master's or PhD preferred) in Computer Science, Artificial Intelligence, or a related discipline is preferred. A strong academic background is a plus. This job was posted by Siddhartha Dutta from Tech At Play.
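For reference, the detection-on-video work above can be prototyped in a few lines with the Ultralytics YOLO package and OpenCV; the weights file, video path, and confidence threshold below are placeholders rather than this team's production setup.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained detector (placeholder weights)
cap = cv2.VideoCapture("match.mp4")  # could also be an RTSP URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.4, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])          # corner coordinates
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Production pipelines would typically swap the display loop for a GStreamer/RTSP sink and run TensorRT-optimized weights on Jetson or T4 hardware.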
Posted 3 weeks ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
As a Senior Machine Learning Engineer, you will be responsible for designing, developing, and deploying cutting-edge models for end-to-end content generation, including AI-driven image/video generation, lip syncing, and multimodal AI systems. You will work on the latest advancements in deep generative modeling to create highly realistic and controllable AI-generated media.
Responsibilities
Research and Develop: Design and implement state-of-the-art generative models, including Diffusion Models, 3D VAEs, and GANs for AI-powered media synthesis. End-to-End Content Generation: Build and optimize AI pipelines for high-fidelity image/video generation and lip syncing using diffusion and autoencoder models. Speech and Video Synchronization: Develop advanced lip-syncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Real-Time AI Systems: Implement and optimize models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques. Scaling and Production Deployment: Work closely with software engineers to deploy models efficiently on cloud-based architectures (AWS, GCP, or Azure). Collaboration and Research: Stay ahead of the latest trends in deep generative models, diffusion models, and transformer-based vision systems to enhance AI-generated content quality. Experimentation and Validation: Design and conduct experiments to evaluate model performance, improve fidelity, realism, and computational efficiency, and refine model architectures. Code Quality and Best Practices: Participate in code reviews, improve model efficiency, and document research findings to enhance team knowledge-sharing and product development.
Requirements
Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field. 3+ years of experience working with deep generative models, including Diffusion Models, 3D VAEs, GANs, and autoregressive models. Strong proficiency in Python and deep learning frameworks such as PyTorch. Expertise in multi-modal AI, text-to-image and image-to-video generation, and audio-to-lip-sync. Strong understanding of machine learning principles and statistical methods. Good to have: experience in real-time inference optimization, cloud deployment, and distributed training. Strong problem-solving abilities and a research-oriented mindset to stay updated with the latest AI advancements. Familiarity with generative adversarial techniques, reinforcement learning for generative models, and large-scale AI model training.
Preferred Qualifications
Experience with transformers and vision-language models (e.g., CLIP, BLIP, GPT-4V). Background in text-to-video generation, lip-sync generation, and real-time synthetic media applications. Experience in cloud-based AI pipelines (AWS, Google Cloud, or Azure) and model compression techniques (quantization, pruning, distillation). Contributions to open-source projects or published research in AI-generated content, speech synthesis, or video synthesis. This job was posted by Meghna Sidda from TrueFan.
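For orientation only, text-to-image generation with a diffusion model can be exercised in a few lines via the Hugging Face diffusers library; the checkpoint, prompt, and GPU assumption below are illustrative and do not represent TrueFan's production pipeline.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; real systems would use a fine-tuned or proprietary model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a studio portrait of a smiling presenter, soft lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("generated_frame.png")
```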
Posted 3 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities
Ship Micro-services - Build FastAPI services that handle 800 req/s today and will triple within a year (sub-200 ms p95).
Power Real-Time Learning - Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
Design for Scale & Safety - Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
Deploy Globally - Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
Automate Releases - GitLab CI/CD + blue-green/canary = multiple safe prod deploys each week.
Own Reliability - Instrument with Prometheus/Grafana, chase 99.9% uptime, trim infra spend.
Expose Gen-AI at Scale - Publish LLM inference and vector-search endpoints in partnership with the AI team.
Ship Fast, Learn Fast - Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod.
Requirements
2+ yrs Python back-end experience (FastAPI/Flask). Strong with Docker and container orchestration basics. Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS), or GCP (GKE/Compute) in production. SQL/NoSQL (Postgres, MongoDB). You've built systems from scratch and have solid system-design fundamentals. k8s at scale, Terraform. Experience with AI/ML inference services (LLMs, vector DBs). Go/Rust for high-perf services. Observability: Prometheus, Grafana, OpenTelemetry. This job was posted by Rimjhim Tripathi from CareerNinja.
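As a small sketch of the "Instrument with Prometheus/Grafana" item (not CareerNinja's real service), here is a FastAPI route exposing a latency histogram; the metric name, route, and scoring logic are made up for illustration.

```python
import time

from fastapi import FastAPI
from prometheus_client import Histogram, make_asgi_app

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # scraped by Prometheus, charted in Grafana

REQUEST_LATENCY = Histogram(
    "quiz_score_latency_seconds", "Latency of quiz scoring requests"
)

@app.post("/score")
async def score(payload: dict):
    start = time.perf_counter()
    # Placeholder scoring logic: count answers marked correct by the client.
    result = {"score": sum(1 for a in payload.get("answers", []) if a.get("correct"))}
    REQUEST_LATENCY.observe(time.perf_counter() - start)
    return result
```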
Posted 3 weeks ago
5.0 years
50 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience: 5.00+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (Payroll and Compliance to be managed by: Precanto)
(*Note: This is a requirement for one of Uplers' clients: a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)
What do you need for this opportunity?
Must-have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL
A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams is looking for:
We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description: Full-time. Team: Data & ML Engineering. We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).
What You Will Do
Build and optimize machine learning models, from regression to time-series forecasting. Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker. Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn. Design and deploy LLM-powered features and workflows. Collaborate closely with product managers to turn ideas into experiments and production-ready solutions. Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform.
Basic Skills
Proven ability to work creatively and analytically in a problem-solving environment. Excellent communication (written and oral) and interpersonal skills. Strong understanding of supervised learning and time-series modeling. Experience deploying ML models and building automated training/inference pipelines. Ability to work cross-functionally in a collaborative and fast-paced environment. Comfortable wearing many hats and owning projects end-to-end. Write clean, tested, and scalable Python and SQL code. Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing.
Advanced Skills
Familiarity with MLOps best practices. Prior experience with LLM-based features or production-level NLP. Experience with LLMs, vector stores, or prompt engineering. Contributions to open-source ML or data tools.
Tech Stack
Languages: Python, SQL. Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter. Infra: Docker, Airflow, S3, asyncio, Pydantic.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances to get shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
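For context on the forecasting-plus-tracking workflow listed in this role, here is a minimal sketch combining Prophet and MLflow from the stated tech stack; the CSV name, forecast horizon, and logged metric are placeholders.

```python
import pandas as pd
import mlflow
from prophet import Prophet

df = pd.read_csv("monthly_spend.csv")  # placeholder file with columns: ds (date), y (value)

with mlflow.start_run(run_name="prophet_baseline"):
    model = Prophet(weekly_seasonality=False)
    model.fit(df)
    future = model.make_future_dataframe(periods=6, freq="MS")  # 6 months ahead
    forecast = model.predict(future)

    # Log a simple in-sample error so runs are comparable in the MLflow UI.
    mae = (forecast["yhat"][: len(df)] - df["y"]).abs().mean()
    mlflow.log_metric("in_sample_mae", float(mae))

    forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].to_csv("forecast.csv", index=False)
    mlflow.log_artifact("forecast.csv")
```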
Posted 3 weeks ago
5.0 years
0 Lacs
Delhi, India
On-site
About This Role
As a Staff AI Engineer you will get to play with petabytes of data gathered from a multitude of sources, including Balbix proprietary sensors and third-party threat feeds. You will leverage a variety of AI techniques including deep learning, probabilistic graphical models, graph learning, recommendation systems, reinforcement learning, NLP, etc. And of course, you will be part of a team building a world-class product addressing one of the grand challenges in the technology industry.
DATA SCIENCE AT BALBIX
At Balbix we believe in using the right algorithms and tools to ensure correctness and performance and deliver an excellent user experience. We draw boldly from the latest in AI/ML research but are unafraid to go beyond Bayesian inference and statistical models if the situation demands it. We are generalists, caring as much about storytelling with data as about bleeding-edge techniques, scalable model training, and deployment. We are building a data science culture with equal emphasis on knowing our data, grokking security first principles, caring about customer needs, explaining our model predictions, deploying at scale, communicating our work, and adapting the latest advances. We look out for each other, enjoy each other's company, and keep an open channel of communication about all things data and non-data.
You Will
Design and develop an ensemble of classical and deep learning algorithms for modeling complex interactions between people, software, infrastructure, and policies in an enterprise environment. Design and implement algorithms for statistical modeling of enterprise cybersecurity risk. Apply data-mining, AI, and graph analysis techniques to address a variety of problems including modeling, relevance, and recommendation. Build production-quality solutions that balance complexity and performance. Participate in the engineering life-cycle at Balbix, including designing high-quality ML infrastructure and data pipelines, writing production code, conducting code reviews, and working alongside our infrastructure and reliability teams. Drive the architecture and usage of open-source libraries for numerical computation such as TensorFlow, PyTorch, and scikit-learn.
You Are
Able to take on very complex problems, learn quickly, iterate, and persevere towards a robust solution. Product-focused and passionate about building truly usable systems. Collaborative and comfortable working across teams including data engineering, front end, product management, and DevOps. Responsible, and you like to take ownership of challenging problems. A good communicator who facilitates teamwork via good documentation practices. Comfortable with ambiguity, thriving when designing algorithms for evolving needs. Intuitive in using the right type of models to address different product needs. Curious about the world and your profession; a constant learner.
You Have
A Ph.D./M.S. in Computer Science or Electrical Engineering with hands-on software engineering experience. 5+ years of experience in the field of Machine Learning and programming in Python. Expertise in programming concepts and building large-scale systems. Knowledge of state-of-the-art algorithms combined with expertise in statistical analysis and modeling. Robust understanding of NLP, Probabilistic Graphical Models, Deep Learning with graph structures, model explainability, etc. Foundational knowledge of probability, statistics, and linear algebra.
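Purely as an illustration of combining classical learners (not Balbix's actual risk models), here is a compact scikit-learn ensemble on synthetic data; dataset, estimators, and metric are arbitrary choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular risk dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="soft",  # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```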
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Who you are
You're someone who's already shipped GenAI stuff, even if it was small: a chatbot, a RAG tool, or an agent prototype. You live in Python, LangChain, LlamaIndex, Hugging Face, and vector DBs like FAISS or Milvus. You know your way around prompts: noisy chains, rerankers, retrievals. You've deployed models or services on Azure/AWS/GCP, wrapped them into FastAPI endpoints, and maybe even wired a bit of Terraform/ARM. You're not building from spreadsheets; you're iterating with real data, debugging hallucinations, and swapping out embeddings in production. You can read blog posts and paper intros, follow new methods like QLoRA, and build on them. You're fine with ambiguity and startup chaos: no strict specs, no roadmap, just a mission. You work in async Slack, ask quick questions, push code that works, and help teammates stay afloat. You're not satisfied with just getting things done; you want GenAI to feel reliable, usable, and maybe even fun.
What you'll actually do
You'll build real GenAI features: agentic chatbots for document lookup, conversation assistants, or knowledge workflows. You'll design and implement RAG systems: data ingestion, embeddings, vector indexing, retrievals, and prompt pipelines. You'll write inference APIs in FastAPI that work with vector stores and cloud LLM endpoints. You'll containerize services with Docker, push to Azure/AWS/GCP, wire basic CI/CD, monitor latency and faulty responses, and iterate fast. You'll experiment with LoRA/QLoRA fine-tuning on small LLMs, test prompt variants, and measure output quality. You'll collaborate with DevOps to ensure deployment reliability, QA to make tests more robust, and frontend folks to shape UX. You'll share your work in quick "demo & dish" sessions: what's working, what's broken, what you're trying next. You'll tweak embeddings, watch logs, and improve pipelines one experiment at a time. You'll help write internal docs or "how-tos" so others can reuse your work.
Skills and knowledge
Solid experience in Python backend development (FastAPI/Django). Experienced with LLM frameworks: LangChain, LlamaIndex, CrewAI, or similar. Comfortable with vector databases: FAISS, Pinecone, Milvus. Able to fine-tune models using PEFT/LoRA/QLoRA. Knowledge of embeddings, retrieval systems, RAG pipelines, and prompt engineering. Familiar with cloud deployment and infra-as-code (Azure, AWS, GCP with Docker/K8s, Terraform/ARM). Good understanding of monitoring and observability: tracking response latency, hallucinations, and costs. Able to read current research, try prototypes, and apply them pragmatically. Works well in minimal-structure startups; self-driven, team-minded, proactive communicator.
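To ground the RAG wording above, here is a small retrieval sketch using sentence-transformers and FAISS; the embedding model, documents, and query are placeholders, and a real system would add ingestion, chunking, and a prompt/LLM step on top.

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium users can export reports as PDF.",
    "Password resets are sent to the registered email.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
emb = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on normalized vectors
index.add(emb)

query = encoder.encode(
    ["how long do refunds take?"], normalize_embeddings=True
).astype("float32")
scores, ids = index.search(query, 2)

context = "\n".join(docs[i] for i in ids[0])
print(context)  # this retrieved context would be stuffed into the LLM prompt
```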
Posted 3 weeks ago
2.0 years
0 Lacs
Delhi, India
On-site
What is Hunch?
Hunch is a dating app that helps you land a date without swiping like a junkie. Designed for people tired of mindless swiping and commodified matchmaking, Hunch leverages a powerful AI engine to help users find meaningful connections by focusing on personality over just looks. With 2M+ downloads and a 4.4-star rating, Hunch is going viral in the US by challenging the swipe-left/right norm of traditional apps. Hunch is a Series A funded ($23 Million) startup building the future of social discovery in a post-AI world. Link to our fundraising announcement
Key Offerings Of Hunch
Swipe Less, Vibe More: Curated profiles, cutting the clutter of endless swiping. Personality Matters: Opinion-based, belief-based, and thought-based compatibility rather than just focusing on looks. Every Match, Verified: No bots, no catfishing; just real, trustworthy connections. Match Scores: Our AI shows compatibility percentages, helping users identify their "100% vibe match."
We're looking for a highly motivated and skilled Data Engineer. You'll design, build, and optimize our robust data infrastructure. You'll also develop scalable data pipelines, ensure data quality, and collaborate closely with our machine learning teams. We're looking for someone passionate about data who thrives in a dynamic environment. If you enjoy tackling complex challenges with cutting-edge technologies, we encourage you to apply.
What You'll Do:
Architect & Optimize Data Infrastructure: Design, implement, and maintain highly scalable data infrastructure. This includes processes for auto-scaling and easy maintainability of our data pipelines. Develop & Deploy Data Pipelines: Lead the design, implementation, testing, and deployment of resilient data pipelines. These pipelines will ingest, transform, and process large datasets efficiently. Empower ML Workflows: Partner with Machine Learning Engineers to understand their specific data needs. This includes providing high-quality data for model training and ensuring low-latency data delivery for real-time inference. Ensure seamless data flow and efficient integration with ML models. Ensure Data Integrity: Establish and enforce robust systems and processes. These will ensure comprehensive data quality assurance, validation, and reliability across the entire data lifecycle.
What You'll Bring:
Experience: A minimum of 2+ years of professional experience in data engineering. You should have a proven track record of delivering solutions in a production environment. Data Storage Expertise: Hands-on experience with relational databases (e.g., PostgreSQL, MySQL, Redshift) and cloud object storage (e.g., S3) is required. Experience with distributed file systems (e.g., HDFS) and NoSQL databases is a plus. Big Data Processing: Demonstrated proficiency with big data processing platforms and frameworks. Examples include Hadoop, Spark, Hive, Presto, and Trino. Pipeline Orchestration & Messaging: Practical experience with key data pipeline tools. This includes message queues (e.g., Kafka, Kinesis), workflow orchestrators (e.g., dbt, Airflow), change data capture (e.g., Debezium), and ETL services (e.g., AWS Glue ETL). Programming Prowess: Strong programming skills in Python and SQL are essential. Proficiency in at least one JVM-based language (e.g., Java, Scala) is also required. ML Acumen: A solid understanding of machine learning workflows. This includes data preparation and feature engineering concepts. Innovation & Agility: You should be a creative problem-solver with a proactive approach to experimenting with new technologies.
What we have to offer
Competitive financial rewards + annual PLI (Performance Linked Incentives). Meritocracy-driven, candid, and diverse culture. Employee benefits like medical insurance. One annual, all-expenses-paid company trip for all employees to bond. Although we work from our office in New Delhi, we are flexible in our style and approach.
Life @Hunch
Work Culture: At Hunch we take our work seriously but don't take ourselves too seriously. Everyone is encouraged to think as owners and not renters, and we prefer to let builders build, empowering people to pursue independent ideas. Impact: Your work will shape the future of social engagement and connect people around the world. Collaboration: Join a diverse team of creative minds and be part of a supportive community. Growth: We invest in your development and provide opportunities for continuous learning. Backed by Global Investors: Hunch is a Series A funded startup, backed by Hashed, AlphaWave, Brevan Howard, and Polygon Studios. Experienced Leadership: Hunch is founded by a trio of industry veterans, Ish Goel (CEO), Nitika Goel (CTO), and Kartic Rakhra (CMO), serial entrepreneurs whose last exit was Nexus Mutual, a web3 consumer-tech startup.
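As a generic illustration of the orchestration tools named in this posting (not Hunch's actual pipeline), here is a bare-bones Airflow DAG with two Python tasks; the DAG ID, schedule, and task bodies are placeholders, and `schedule=` is the Airflow 2.4+ spelling.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events():
    print("pulling yesterday's events from the app database")  # placeholder logic

def load_to_warehouse():
    print("writing cleaned events to the warehouse")  # placeholder logic

with DAG(
    dag_id="daily_event_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # run extraction before the load step
```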
Posted 3 weeks ago
8.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements – viewing them through the lens of business acumen, and converting this knowledge into a data problem. You'll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses. You are an analytical professional who uses statistical methods, machine learning, and programming skills to extract insights and knowledge from data; your primary goal is to solve complex business problems, make predictions, and drive strategic decision-making by uncovering patterns and trends within large datasets.
In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modeling techniques, you'll have the opportunity to create models that defy convention – models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally. Additionally, you're not just building and validating models – you're deploying them as code to applications and processes, ensuring that the models you've selected sustain their business value throughout their lifecycle. Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset – ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation.
Key Responsibilities:
Problem Framing: Collaborating with stakeholders to understand business problems and translate them into data-driven questions. Data Collection and Cleaning: Sourcing, collecting, and cleaning large, often messy, datasets from various sources, preparing them for analysis. Exploratory Data Analysis (EDA): Performing initial investigations on data to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations. Model Development: Building, training, and validating machine learning models (e.g., regression, classification, clustering, deep learning) to predict outcomes or identify relationships. Statistical Analysis: Applying statistical tests and methodologies to draw robust conclusions from data and quantify uncertainty. Feature Engineering: Creating new variables or transforming existing ones to improve model performance and provide deeper insights. Model Deployment: Working with engineering teams to deploy models into production environments, making them operational for real-time predictions or insights. Communication and Storytelling: Presenting complex findings and recommendations clearly and concisely to both technical and non-technical audiences, often through visualizations and narratives. Monitoring and Maintenance: Tracking model performance in production and updating models as data patterns evolve or new data becomes available.
If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Expertise
8–10 years of experience as a Data Scientist. Programming Languages: Strong proficiency in Python and/or R, with libraries for data manipulation (e.g., Pandas, dplyr), scientific computing (e.g., NumPy), and machine learning (e.g., Scikit-learn, TensorFlow, PyTorch). Statistics and Probability: A solid understanding of statistical inference, hypothesis testing, probability distributions, and experimental design. Machine Learning: Deep knowledge of various machine learning algorithms, their underlying principles, and when to apply them. Database Querying: Proficiency in SQL for extracting and manipulating data from relational databases. Data Visualization: Ability to create compelling and informative visualizations using tools like Matplotlib, Seaborn, Plotly, or Tableau. Big Data Concepts: Familiarity with concepts and tools for handling large datasets, though often relying on Data Engineers for infrastructure. Domain Knowledge: Understanding of the specific industry or business domain to contextualize data and insights.
Preferred Technical and Professional Experience
Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
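For a concrete picture of the model-development-and-validation loop described in this posting, here is a generic scikit-learn sketch on a built-in dataset; it is illustrative only and not tied to any Kyndryl engagement.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a business dataset

# Preprocessing and model bundled together so evaluation reflects the full pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```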
Posted 3 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business and the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data-analytic, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives, and introduce customizations for routing and matching for products and use cases that go beyond the core Uber capabilities.
What the Candidate Will Do
Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business. Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionize, scale, and deploy models world-wide. Use statistical techniques to measure success and develop north-star metrics and KPIs to help provide a more rigorous data-driven approach in close partnership with Product and other subject areas such as engineering, operations, and marketing. Design experiments and interpret the results to draw detailed and impactful conclusions. Collaborate with data scientists and engineers to build and improve on the availability, integrity, accuracy, and reliability of data logging and data pipelines. Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives. Present findings to senior leadership to drive business decisions.
Basic Qualifications
Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields. 4+ years of experience as a Data Scientist, Machine Learning Engineer, or in other data science-focused functions. Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics. Hands-on experience building and deploying ML models. Ability to use a language like Python or R to work efficiently at scale with large data sets. Significant experience in setting up and evaluating complex experiments. Experience with exploratory data analysis, statistical analysis and testing, and model development. Knowledge of modern machine learning techniques applicable to marketplaces and platforms. Proficiency in one or more of the following technologies: SQL, Spark, Hadoop.
Preferred Qualifications
Advanced SQL expertise. Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways. Proven aptitude for data storytelling and root cause analysis using data. Advanced understanding of statistics, causal inference, and machine learning. Experience designing and analyzing large-scale online experiments. Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail. Ability to work in a self-guided manner. Ability to mentor, coach, and develop junior team members. Superb communication and organisation skills.
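As a minimal example of the experiment-readout step mentioned above, here is a Welch two-sample t-test on synthetic control/treatment samples; the metric, sample sizes, and effect size are invented, and real marketplace experiments would layer on variance reduction and multiple-comparison handling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic per-user metric values for two experiment arms.
control = rng.normal(loc=0.250, scale=0.05, size=10_000)
treatment = rng.normal(loc=0.254, scale=0.05, size=10_000)

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
lift = treatment.mean() - control.mean()
print(f"lift={lift:.4f}, t={t_stat:.2f}, p={p_value:.4f}")
```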
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.
Key Responsibilities
Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks. Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring. Leverage transfer learning, foundation models, or self-supervised approaches where suitable. Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure. Continuously monitor model performance and implement retraining workflows to ensure accuracy over time. Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems. Write clean, well-documented, and reusable code to support agile experimentation and long-term platform sustainability.
Requirements
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 5-8+ years of experience in AI/ML engineering, with at least 3 years in applied deep learning.
Skills:
Languages: Expert in Python; good knowledge of R or Java is a plus. ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX. Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe). Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.). Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data. NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred. Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML. Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference. CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.
Soft Skills & Competencies
Strong analytical and systems thinking; able to break down business problems into ML components. Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders. Proven ability to work cross-functionally with designers, engineers, product managers, and analysts. Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
Benefits
Competitive compensation and full-time benefits. Opportunities for certification and professional growth. Flexible work hours and remote work options. Inclusive, innovative, and supportive team culture. (ref:hirist.tech)
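For reference, the ASR piece of the audio work above can be prototyped with the Hugging Face pipeline API; the Whisper checkpoint and audio file below are placeholders, and local ffmpeg support is assumed for decoding the file.

```python
from transformers import pipeline

# Placeholder checkpoint; larger or fine-tuned models would be used in production.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = asr("customer_call.wav", return_timestamps=True)  # placeholder audio file
print(result["text"])
```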
Posted 3 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
31458 Jobs | Dublin
Wipro
16542 Jobs | Bengaluru
EY
10788 Jobs | London
Accenture in India
10711 Jobs | Dublin 2
Amazon
8660 Jobs | Seattle,WA
Uplers
8559 Jobs | Ahmedabad
IBM
7988 Jobs | Armonk
Oracle
7535 Jobs | Redwood City
Muthoot FinCorp (MFL)
6170 Jobs | New Delhi
Capgemini
6091 Jobs | Paris,France