0 years
4 - 9 Lacs
Hyderābād
On-site
Overview: WHAT YOU DO AT AMD CHANGES EVERYTHING We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_ Responsibilities: Senior Software Architect – GPU Kernel Optimization & Distributed AI Systems THE TEAM: Join AMD’s high-impact team at the heart of innovation in AI, ML, and high-performance computing (HPC). We’re a collaborative group of software architects and GPU engineers focused on pushing the boundaries of AI model performance across distributed, GPU-accelerated platforms. Our work drives the next generation of AMD’s AI software stack, enabling large-scale machine learning training and inference workloads in data centers and enterprise environments. THE ROLE: As a Senior Software Architect, you will lead both GPU kernel-level optimization and distributed software efforts for large-scale AI workloads. This is a technical leadership role with direct influence over critical software components in AMD’s AI stack. You’ll architect and implement optimized compute kernels, guide software teams through the full product lifecycle, and work closely with internal and external partners to deploy scalable, high-performance solutions. THE PERSON: We’re looking for a highly skilled, deep systems thinker who thrives in complex problem domains involving parallel computing, GPU architecture, and AI model execution. You are confident leading software architecture decisions and know how to translate business goals into robust, optimized software solutions. You’re just as comfortable writing performance-critical code as you are guiding agile development teams across product lifecycles. Ideal candidates have a strong balance of low-level programming, distributed systems knowledge, and leadership experience, paired with a passion for AI performance at scale. KEY RESPONSIBILITIES: GPU Kernel Optimization: Develop and optimize GPU kernels to accelerate inference and training of large machine learning models while ensuring numerical accuracy and runtime efficiency. Multi-GPU and Multi-Node Scaling: Architect and implement strategies for distributed training/inference across multi-GPU/multi-node environments using model/data parallelism techniques. Performance Profiling: Identify bottlenecks and performance limitations using profiling tools; propose and implement optimizations to improve hardware utilization. Parallel Computing: Design and implement multi-threaded and synchronized compute techniques for scalable execution on modern GPU architectures. Benchmarking & Testing: Build robust benchmarking and validation infrastructure to assess performance, reliability, and scalability of deployed software. Documentation & Best Practices: Produce technical documentation and share architectural patterns, code optimization tips, and reusable components. PREFERRED EXPERIENCE: Software Team Leadership: Collaboration with customers and business units to define deliverables and roadmaps. Interfacing with executive leadership on program progress and strategic planning.
Experience in production-level software deployment (e.g., upstreaming to open source, commercial rollouts). Software Architecture: Deep experience with GPU kernel optimization in modern C++ (C++17/20). Working knowledge of frameworks such as PyTorch, vLLM, Cutlass, Kokkos. Practical expertise in CPU/GPU architecture and system-level performance tuning. Proficiency in Python scripting and infrastructure automation. Application of software design patterns and industry-standard engineering practices. GPU & Low-Level Optimization: Hands-on experience with CUDA and low-level GPU programming. Kernel optimization in assembly and tight loops for latency-sensitive code. Proficiency with performance profiling tools (Nsight, VTune, Perf, etc.). Experience with distributed computing strategies in AI environments (multi-GPU, NCCL, MPI). Strong debugging, problem-solving, and performance tuning skills in complex systems. ACADEMIC CREDENTIALS: Bachelor’s or Master’s degree in Computer Engineering, Electrical Engineering, Computer Science, or a related technical field. An advanced degree or published work in HPC, GPU computing, or AI systems is a plus. Qualifications: Benefits offered are described here: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
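As a rough illustration of the benchmarking and profiling work this role describes, the sketch below times a single GPU operation with PyTorch CUDA events. It is a minimal, hypothetical example: the operation, sizes, and iteration counts are placeholders, not AMD's actual harness or kernels.

```python
# Minimal GPU micro-benchmark sketch (PyTorch). Illustrative only; real kernel
# work would profile custom HIP/CUDA kernels, not a library matmul.
import time
import torch

def benchmark_matmul(n: int = 4096, warmup: int = 10, iters: int = 50) -> float:
    """Return average milliseconds per matmul call on the available device."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)

    for _ in range(warmup):                      # warm up allocator and caches
        torch.matmul(a, b)

    if device == "cuda":
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            torch.matmul(a, b)
        end.record()
        torch.cuda.synchronize()                 # wait for queued kernels to finish
        return start.elapsed_time(end) / iters   # milliseconds per call

    t0 = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    return (time.perf_counter() - t0) * 1000 / iters

if __name__ == "__main__":
    print(f"avg latency: {benchmark_matmul():.2f} ms")
```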
Posted 3 weeks ago
2.0 years
3 - 9 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Data Scientist II - Analytics At Expedia Group, our mission is to power global travel for everyone, everywhere. If you're passionate about creating exceptional customer experiences and thrive in a collaborative, fast-paced, high-growth environment, you’ll love being part of Expedia Group’s InsurTech team. In InsurTech, we empower travelers around the world with confidence to book, peace of mind, and protection when the unexpected happens—through an innovative portfolio of customer-centric, risk-based products. We’re looking for an exceptional Data Scientist II, Analytics to join our team. If you have a strong analytical mindset, a strategic approach, and a bias for action, this is your opportunity to make a real impact. In this role, you’ll collaborate closely with product partners to optimize and launch best-in-class insurance and FinTech products. This is an exciting opportunity to join a dynamic, international team where data is at the heart of every decision, and customer centricity drives everything we do. What you will do: Collaborate with product teams to harness data from diverse sources—uncovering new product opportunities or identifying friction points in existing experiences. Translate insights into actionable strategies that drive product innovation and enhancement. Design and execute A/B tests to rigorously evaluate product performance and feature improvements. Quantify outcomes into clear financial impact, and perform in-depth analysis to extract insights that inform the next iteration of development. Define, track, and visualize key metrics through intuitive dashboards to enable real-time business monitoring. Proactively surface metric shifts and conduct root-cause analyses to identify underlying drivers. Deliver impactful product and customer insights to cross-functional stakeholders, empowering leadership and product teams to make informed, data-driven decisions. Support product roadmap planning and go-to-market (GTM) strategy by providing analytical guidance and strategic recommendations that align with business objectives and customer needs. Who you are: Must Have: Educational Background : Bachelor’s degree (or equivalent) in Statistics, Mathematics, Economics, Data Science, or a related field. Experience : 2+ years of analytics experience with a focus on product analytics and measurements. Technical Expertise : Strong proficiency in big data tools (SQL, Python, Databricks, AWS, etc.) for working with large, complex datasets. Cross-functional Collaboration : Proven track record of collaborating with teams across Product, Finance, and other departments to integrate data insights into business strategies. 
Problem-Solving & General Management : Strong analytical, problem-solving, and leadership skills to manage complex initiatives and cross-functional relationships. Good to Have: Advanced Analytical Skills : Expertise in experimentation, causal inference, and advanced analytics, including the ability to dive deeper into A/B test data beyond statistical significance. Economic Modeling : Knowledge of economic modeling techniques such as price elasticity and time series analysis is beneficial. Multivariate Analysis : Familiarity with regression, classification, and other multivariate analysis methods. Machine Learning Models : Exposure to machine learning models such as recommendations, multi-armed bandit algorithms, or reinforcement learning Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age. India - Haryana - Gurgaon Technology Full-Time Regular 06/26/2025 ID # R-96322
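To make the A/B-testing responsibility concrete, here is a minimal significance check on two conversion rates. The counts are invented for illustration and this is not Expedia tooling; a real readout would also weigh practical significance, guardrail metrics, and experiment duration.

```python
# Hypothetical A/B readout sketch: conversion counts and sample sizes are made up.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

control_conv, control_n = 1_180, 52_000      # assumed control bookings / visitors
variant_conv, variant_n = 1_295, 51_600      # assumed variant bookings / visitors

# Two-sample z-test on the difference in conversion proportions
z, p = proportions_ztest([variant_conv, control_conv], [variant_n, control_n])
lift = variant_conv / variant_n - control_conv / control_n
lo, hi = proportion_confint(variant_conv, variant_n, alpha=0.05)

print(f"absolute lift = {lift:.4%}, z = {z:.2f}, p = {p:.4f}")
print(f"variant conversion 95% CI: [{lo:.4%}, {hi:.4%}]")
```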
Posted 3 weeks ago
6.0 - 9.0 years
5 - 9 Lacs
Indore
On-site
About Us: Systango Technologies Limited (NSE: SYSTANGO) is a digital engineering company that offers enterprise-class IT and product engineering services to organizations of different sizes. At Systango, we have a culture of efficiency - we use best-of-breed technologies to deliver quality at speed and world-class support to address critical business challenges. We leverage Gen AI, AI/Machine Learning and Blockchain to unlock the next stage of digitalization for traditional businesses. Our handpicked team is adept at web & enterprise development, mobile apps, QA and DevOps. Ulster University, Sila, Cuentas, Youtility, Porsche, MGM Grand, Deloitte, Grindr, and Tawk.to are some of the top clients that have entrusted us to enhance their digital capabilities and build disruptive innovations. We believe in making the impossible possible, and we do it literally. About the Role: We are looking for a highly skilled and experienced AI/ML Lead with deep technical expertise in machine learning, deep learning, and Generative AI. The ideal candidate will have a strong programming foundation in Python and hands-on experience with modern ML/DL frameworks, version control systems, and data pipeline tools. This role requires both individual contribution and leadership in driving AI initiatives, while effectively communicating with cross-functional teams and stakeholders. Key Responsibilities: Lead the design, development, and deployment of ML/DL models for real-world applications. Apply advanced techniques such as ensemble learning, transformers, GANs, LSTMs, and reinforcement learning. Work across diverse domains like NLP, computer vision, or recommendation systems based on project needs. Build scalable APIs and services using Flask, Django, or FastAPI. Collaborate with data engineering teams to ensure data readiness for model training and evaluation. Evaluate and fine-tune models using techniques like cross-validation, hyperparameter tuning, and performance metrics. Drive GenAI adoption by leveraging LLM APIs for inference and contribute to LLM training and deployment (if applicable). Document solution architecture, workflows, and technical implementation details clearly. Mentor junior engineers and collaborate with product managers, data scientists, and other technical teams. Required Skills & Qualifications: 6–9 years of hands-on experience in AI/ML and deep learning. Strong programming skills in Python and a good understanding of object-oriented programming. Deep knowledge of neural networks including GANs, transformers, LSTMs, etc. Proficient in scikit-learn, pandas, NumPy, and frameworks like TensorFlow-Keras or PyTorch. Experience with version control tools such as Git and GitHub. Solid skills in data visualization tools like Matplotlib, Seaborn, or similar. Familiarity with SQL and NoSQL databases. Experience building RESTful APIs using Django, Flask, or FastAPI. Hands-on experience with GenAI models, especially using LLM APIs for inference. Good understanding of model evaluation techniques and performance metrics. Outstanding verbal and written communication skills, with the ability to clearly articulate complex technical topics. Preferred Qualifications: Exposure to LLM training and deployment workflows. Experience with Big Data technologies such as Hadoop, Spark, etc. Certifications in AI/ML or cloud-based AI platforms (AWS, GCP, Azure). Experience with MLOps tools like MLflow, Kubeflow, or Weights & Biases (nice to have). Why Join Us?
Be part of cutting-edge AI and GenAI product development. Work in a collaborative and innovation-driven environment. Lead high-impact projects with global exposure. Excellent growth opportunities and performance-driven culture.
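For the model evaluation and hyperparameter tuning mentioned above, a minimal scikit-learn sketch might look like the following; the dataset, model, and parameter grid are placeholders chosen only to keep the example self-contained.

```python
# Cross-validated hyperparameter tuning sketch on a toy dataset (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    cv=5,                     # 5-fold cross-validation
    scoring="roc_auc",
)
grid.fit(X_tr, y_tr)

print("best params:", grid.best_params_)
print(classification_report(y_te, grid.best_estimator_.predict(X_te)))
```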
Posted 3 weeks ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description As the Regional Data Center Sales Lead, you will be responsible for driving Nokia's data center sales in the region, as well as the assigned regional accounts, to position Nokia’s DCN solutions. You will play a crucial, influential role in expanding our APAC market share, working with regional BU, presales and channel teams to cover new opportunities. You will also play a visible role contributing to the regional DC practice, help develop and design the medium-to-long-term DCN strategy as a part of Nokia’s global initiatives, and be given opportunities to work with peers in the Europe, Middle East, and Asia Pacific (EMA) region. Nokia is recognized as a Visionary in the 2025 Gartner® Magic Quadrant™ for Data Center Switching: the Nokia solution is recognized for Completeness of Vision and Ability to Execute in new research published by Gartner Nokia data center switching solutions are designed to deliver the superior reliability, ease-of-use and adaptability required for modern data center operations Microsoft, CoreWeave, Nscale, Equinix and many more customers have deployed Nokia’s Data Center Switching Solutions as a part of their networking infrastructure to scale HPC/AI workloads How You Will Contribute And What You Will Learn Develop and execute sales strategies to grow Nokia's DC Networking business in the Webscale and Enterprise markets, and be the sales expert focusing on the areas of Data Center Fabric (DCF), Data Center Gateway (DCGW) and Software Automation solutions Educate and work closely with webscale, cloud providers, AI/HPC service providers and enterprise customers in your prime region and assigned strategic global/regional accounts Bring demonstrated consultative skill and empathy as an experienced sales professional, with a track record of selling and delivering similar DCGW/DCF solutions as Nokia’s competitor(s), Nokia’s partner(s) or Nokia’s customer(s) Collaborate with cross-functional teams to develop differentiated value propositions against our competitors (e.g. Juniper, Cisco, Arista, Nvidia, etc.) Represent Nokia at DC related industry events (e.g. xNOG) and develop Nokia’s branding in DCN Own and manage sales opportunities via CRM and provide feedback to ensure sales offers are technically and commercially competitive Closely monitor market and technology trends, and demonstrate a proven ability to continuously learn new DC-related technologies and related solutions Ability to travel regionally up to 40% of the time, mostly within Asia Key Skills And Experience You have: Over 10 years of experience in the networking industry, specializing in data center switching and routing technologies that support cloud services, AI/ML training, and inference requirements. Proven track record spanning more than a decade in sales hunting, new account acquisition, and/or presales roles at major networking companies, serving clients such as major cloud providers, internet service providers, XaaS providers, large enterprises, and hyperscalers. In-depth understanding of networking infrastructure requirements across various verticals, including Webscale, Enterprise, Financial, and Government sectors, along with cross-domain knowledge in areas like compute, storage, databases, virtualization, and security. Extensive experience, exceeding 10 years, in both direct sales and indirect sales through partners and distributors.
Strong technical expertise in data center architectures and emerging technologies, encompassing software automation and tooling (DevOps, CI/CD), open-source technologies, pluggable technologies, data center operations knowledge, vendor data center reference designs, and familiarity with the data center networking ecosystem. Exceptional communication skills, with a strong inclination towards knowledge sharing and relationship building. Bachelor's degree in computer science, Engineering, or a related field; an advanced degree is preferred. It would be nice if you also have: Familiarity with network automation tools and scripting languages such as Python, Ansible, or similar, to streamline network operations. Experience with cloud networking solutions and services, including but not limited to AWS, Azure, or Google Cloud Platform. About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere Gender-Equality Index by Bloomberg Workplace Pride Global Benchmark At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.
Posted 3 weeks ago
9.0 - 12.0 years
0 Lacs
India
On-site
Job Summary We are looking for a highly skilled Technical Architect with expertise in AWS, Generative AI, AI/ML, and scalable production-level architectures. The ideal candidate should have experience handling multiple clients, leading technical teams, and designing end-to-end cloud-based AI solutions, with an overall experience of 9-12 years. This role involves architecting AI/ML/GenAI-driven applications, ensuring best practices in cloud deployment, security, and scalability while collaborating with cross-functional teams. Key Responsibilities Technical Leadership & Architecture Design and implement scalable, secure, and high-performance architectures on AWS for AI/ML applications. Architect multi-tenant, enterprise-grade AI/ML solutions using AWS services like SageMaker, Bedrock, Lambda, API Gateway, DynamoDB, ECS, S3, OpenSearch, and Step Functions. Lead full lifecycle development of AI/ML/GenAI solutions—from PoC to production—ensuring reliability and performance. Define and implement best practices for MLOps, DataOps, and DevOps on AWS. AI/ML & Generative AI Expertise Design Conversational AI, RAG (Retrieval-Augmented Generation), and Generative AI architectures using models like Claude (Anthropic), Mistral, Llama, and Titan. Optimize LLM inference pipelines, embeddings, vector search, and hybrid retrieval strategies for AI-based applications. Drive ML model training, deployment, and monitoring using AWS SageMaker and AI/ML pipelines. Cloud & Infrastructure Management Architect event-driven, serverless, and microservices architectures for AI/ML applications. Ensure high availability, disaster recovery, and cost optimization in cloud deployments. Implement IAM, VPC, and security best practices, and ensure compliance. Team & Client Engagement Lead and mentor a team of ML engineers, Python developers, and cloud engineers. Collaborate with business stakeholders, product teams, and multiple clients to define requirements and deliver AI/ML/GenAI-driven solutions. Conduct technical workshops, training sessions, and knowledge-sharing initiatives. Multi-Client & Business Strategy Manage multiple client engagements, delivering AI/ML/GenAI solutions tailored to their business needs. Define AI/ML/GenAI roadmaps, proof-of-concept strategies, and go-to-market AI solutions. Stay updated on cutting-edge AI advancements and drive innovation in AI/ML offerings. Key Skills & Technologies Cloud & DevOps AWS Services: Bedrock, SageMaker, Lambda, API Gateway, DynamoDB, S3, ECS, Fargate, OpenSearch, RDS MLOps: SageMaker Pipelines, CI/CD (CodePipeline, GitHub Actions, Terraform, CDK) Security: IAM, VPC, CloudTrail, GuardDuty, KMS, Cognito AI/ML & GenAI LLMs & Generative AI: Bedrock (Claude, Mistral, Titan), OpenAI, Llama ML Frameworks: TensorFlow, PyTorch, LangChain, Hugging Face Vector DBs: OpenSearch, Pinecone, FAISS RAG Pipelines, Prompt Engineering, Fine-tuning Software Architecture & Scalability Serverless & Microservices Architecture API Design & GraphQL Event-Driven Systems (SNS, SQS, EventBridge, Step Functions) Performance Optimization & Auto Scaling
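A heavily simplified sketch of the RAG pattern on Amazon Bedrock that this role describes is shown below. The retriever is a stub, and the model ID and request schema are examples that differ by model and region, so treat it as an outline rather than production code.

```python
# Hedged RAG sketch against Amazon Bedrock: retrieve context, then ask the model
# to answer using only that context. Vector search is stubbed out.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def retrieve(query: str) -> list[str]:
    # Placeholder for a real vector search (OpenSearch, Pinecone, FAISS, ...).
    return ["Doc snippet 1 relevant to the query.", "Doc snippet 2 ..."]

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    body = {
        "anthropic_version": "bedrock-2023-05-31",   # schema varies by model family
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}",
        }],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
        body=json.dumps(body),
    )
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]

if __name__ == "__main__":
    print(answer("What does the architecture use for vector search?"))
```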
Posted 3 weeks ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Data Scientist II - Analytics At Expedia Group, our mission is to power global travel for everyone, everywhere. If you're passionate about creating exceptional customer experiences and thrive in a collaborative, fast-paced, high-growth environment, you’ll love being part of Expedia Group’s InsurTech team. In InsurTech, we empower travelers around the world with confidence to book, peace of mind, and protection when the unexpected happens—through an innovative portfolio of customer-centric, risk-based products. We’re looking for an exceptional Data Scientist II, Analytics to join our team. If you have a strong analytical mindset, a strategic approach, and a bias for action, this is your opportunity to make a real impact. In this role, you’ll collaborate closely with product partners to optimize and launch best-in-class insurance and FinTech products. This is an exciting opportunity to join a dynamic, international team where data is at the heart of every decision, and customer centricity drives everything we do. What You Will Do Collaborate with product teams to harness data from diverse sources—uncovering new product opportunities or identifying friction points in existing experiences. Translate insights into actionable strategies that drive product innovation and enhancement. Design and execute A/B tests to rigorously evaluate product performance and feature improvements. Quantify outcomes into clear financial impact, and perform in-depth analysis to extract insights that inform the next iteration of development. Define, track, and visualize key metrics through intuitive dashboards to enable real-time business monitoring. Proactively surface metric shifts and conduct root-cause analyses to identify underlying drivers. Deliver impactful product and customer insights to cross-functional stakeholders, empowering leadership and product teams to make informed, data-driven decisions. Support product roadmap planning and go-to-market (GTM) strategy by providing analytical guidance and strategic recommendations that align with business objectives and customer needs. Who You Are Must Have: Educational Background: Bachelor’s degree (or equivalent) in Statistics, Mathematics, Economics, Data Science, or a related field. Experience: 2+ years of analytics experience with a focus on product analytics and measurements. Technical Expertise: Strong proficiency in big data tools (SQL, Python, Databricks, AWS, etc.) for working with large, complex datasets. Cross-functional Collaboration: Proven track record of collaborating with teams across Product, Finance, and other departments to integrate data insights into business strategies. 
Problem-Solving & General Management: Strong analytical, problem-solving, and leadership skills to manage complex initiatives and cross-functional relationships. Good To Have Advanced Analytical Skills: Expertise in experimentation, causal inference, and advanced analytics, including the ability to dive deeper into A/B test data beyond statistical significance. Economic Modeling: Knowledge of economic modeling techniques such as price elasticity and time series analysis is beneficial. Multivariate Analysis: Familiarity with regression, classification, and other multivariate analysis methods. Machine Learning Models : Exposure to machine learning models such as recommendations, multi-armed bandit algorithms, or reinforcement learning Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
Posted 3 weeks ago
1.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About YUBI Yubi, formerly known as CredAvenue, is redefining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfilment, and collection of any debt solution. At Yubi, opportunities are plentiful and we equip you with the tools to seize them. In March 2022, we became India’s fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All 5 of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Loans – Term loans and working capital solutions for enterprises. Yubi Invest – Bond issuance and investments for institutional and retail participants. Yubi Pool – End-to-end securitisations and portfolio buyouts. Yubi Flow – A supply chain platform that offers trade financing solutions. Yubi Co.Lend – For banks and NBFCs for co-lending partnerships. Currently, we have onboarded over 4,000 corporates and 350+ investors and have facilitated debt volumes of over INR 40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are a one-of-its-kind debt platform globally, revolutionising the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 650+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. Responsibilities: Develop and enhance the ML platform to standardise model development and deployment workflows. Create reusable components to streamline the Data Science team's efforts and expedite the model lifecycle. Integrate models seamlessly with various products and systems. Implement robust logging and instrumentation for monitoring scoring requests for models in production. Establish systems for continuous model monitoring and trigger mechanisms for retraining based on performance metrics. Design and build A/B testing frameworks with support for canary deployments and shadow models to evaluate different model versions. Integrate data pipelines necessary for model retraining and update activities in production. Scale training and inference capabilities using standardised environment setups and deployment strategies. Incorporate open-source frameworks and proprietary tools into the MLOps pipeline to achieve development goals.
Prototype and evaluate different open-source frameworks to identify optimal technology stacks for the pipeline. Focus on CI/CD pipeline integration for models and ensure seamless deployments in various environments. Requirements Overview: We are seeking a DevOps Engineer who excels in managing and automating cloud infrastructure, container orchestration, and deployment pipelines while possessing a solid understanding of software development practices. In this role, you will focus on creating robust, scalable infrastructure solutions, automating environments, and supporting application deployments. Key Responsibility Design, build, and maintain containerized environments using Docker and Kubernetes. Develop, deploy, and monitor applications and services within cloud environments (AWS(preferable), Azure, GCP). Automate infrastructure provisioning and configuration management using popular scripting languages (Python, Bash, etc.). Collaborate with development teams to support application deployment pipelines and integrate CI/CD practices. Debug and troubleshoot issues in production systems, ensuring high availability and performance. Implement monitoring, logging, and alerting mechanisms to proactively manage system health. Evaluate open-source and commercial tooling to optimize infrastructure and deployment workflows. Assist Data Science or development teams with setting up environments for experimentation and production deployment. Required Experience & Expertise: 1-3+ years of experience in DevOps, infrastructure management, or related fields with a strong emphasis on automation and containerization. Extensive hands-on experience with Docker and Kubernetes for building and managing containerized applications. Proficient programming and scripting skills, preferably in Python, with the ability to develop automation scripts and tools. Solid understanding of public cloud infrastructure (AWS(preferable), Azure, GCP) and associated services. Experience setting up and managing CI/CD pipelines and integrating configuration management tools. Strong problem-solving skills, with the ability to analyse complex issues and provide effective solutions. Exposure to deploying and monitoring applications in production environments. Familiarity with infrastructure-as-code (IaC) frameworks such as Terraform or CloudFormation is a plus. Preferred Qualifications: Background or exposure to Data Science related deployments or applications. Ability to work collaboratively in a fast-paced, cross-functional team environment. Experience evaluating and adopting new technologies and methodologies to streamline DevOps processes.
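One way to picture the shadow-model evaluation described in this posting is the routing sketch below: every request is served by the live model, a sample is also scored by the candidate model, and both results are logged for offline comparison. The model objects and logging sink are stand-ins, not Yubi's platform.

```python
# Illustrative shadow-deployment scoring sketch; models and logging are stubs.
import json
import random
import time
from typing import Any, Callable

def shadow_score(
    features: dict[str, Any],
    live_model: Callable[[dict], float],
    shadow_model: Callable[[dict], float],
    shadow_fraction: float = 0.2,
    log: Callable[[str], None] = print,
) -> float:
    """Return the live prediction; optionally run the shadow model for comparison."""
    live_pred = live_model(features)
    record = {"ts": time.time(), "live": live_pred}

    if random.random() < shadow_fraction:           # sampled shadow traffic
        record["shadow"] = shadow_model(features)   # never returned to the caller

    log(json.dumps(record))                          # comparison happens offline
    return live_pred

if __name__ == "__main__":
    live = lambda f: 0.42     # stub for the deployed model version
    shadow = lambda f: 0.48   # stub for the candidate model version
    shadow_score({"loan_amount": 1_000_000}, live, shadow)
```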
Posted 3 weeks ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Sr. AI Engineer - Gen AI We are looking for a high-energy, tech-savvy individual for the role of Senior AI Engineer - GenAI at Eucloid. The candidate will solve complex problems, pick up new technologies, and work at the cutting edge of innovation. The candidate is expected to lead the following work-streams: • Develop and optimize Python applications for AI-driven products • Build and integrate LLM-based applications with structured and unstructured data • Work with large datasets, AI pipelines, and model inference APIs • Collaborate with AI researchers and engineers to translate ideas into production-ready solutions • Stay ahead of emerging AI technologies and contribute to continuous innovation The ideal candidate will have the following background and skills: • Undergraduate degree in any quantitative discipline such as engineering, science or economics from a top-tier institution • 7+ years of experience in Python-based product development • Strong foundation in data structures, algorithms, and system design • Proven problem-solving ability – you love tackling complex technical challenges • Hands-on experience with APIs, backend development, and cloud platforms (AWS/GCP/Azure) • Experience in AI, ML, or NLP is a bonus, but a strong aptitude to learn AI frameworks quickly is what matters most • Exposure to LangChain, OpenAI APIs, or vector databases (advantageous, but not mandatory) The role offers very attractive compensation (Base + Bonus) and is based out of Chennai. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
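For the LLM-integration work listed above, a minimal sketch with the OpenAI Python SDK (v1+) could look like this; the model name and prompt are placeholders, and production code would add retries, timeouts, and output validation.

```python
# Minimal LLM inference-API sketch using the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "You summarize documents in two sentences."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Large unstructured report text goes here..."))
```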
Posted 3 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role We’re looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You’ll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems. Key Responsibilities Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration. Architect and build production-grade Python applications using frameworks such as FastAPI or Flask. Implement gRPC services, event-driven systems (Kafka, PubSub), and CI/CD pipelines for scalable deployment. Collaborate with cross-functional teams to frame business problems as ML use-cases — regression, classification, ranking, forecasting, and anomaly detection. Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring. Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines. Ensure model evaluation, A/B testing, and hyperparameter tuning is done rigorously for production systems. Must-Have Skills Hands-on experience with GenAI/LLM-based applications – RAG, Evals, vector stores, embeddings. Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures. Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure). Proficient in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability. Proven experience in batch data pipelines and training/inference orchestration. Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
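Since the role centres on production-grade Python services built with FastAPI, here is a minimal, hypothetical inference endpoint; the model is a stub, and a real service would load a trained artifact at startup and add auth, metrics, and logging.

```python
# Minimal FastAPI inference-service sketch with a stubbed model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-inference")

class Features(BaseModel):
    values: list[float]

class Prediction(BaseModel):
    score: float

def fake_model(values: list[float]) -> float:
    # Placeholder for a real trained model loaded at startup
    return sum(values) / max(len(values), 1)

@app.post("/predict", response_model=Prediction)
def predict(features: Features) -> Prediction:
    return Prediction(score=fake_model(features.values))

# run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```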
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Backend Engineer - Python Role Overview: As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency. What You'll Do: At LearnTube, we’re pushing the boundaries of Generative AI to revolutionise how the world learns. As a Backend Engineer, your roles and responsibilities will include: Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95). Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily. Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch. Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes. Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week. Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend. Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team. Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks. What makes you a great fit? Must-Haves: 2+ yrs Python back-end experience (FastAPI) Strong with Docker & container orchestration Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals Nice-to-Haves: k8s at scale, Terraform, Experience with AI/ML inference services (LLMs, vector DBs) Go / Rust for high-perf services Observability: Prometheus, Grafana, OpenTelemetry About Us: At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with: AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback. Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries. Meet the Founders: LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders. Why Work With Us? At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity: Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems. Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment. Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one. 
Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions. Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to. Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.
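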
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities: Deploy, scale, and manage large language models (LLMs) in production environments, ensuring optimal resource usage and performance. Design, implement, and manage CI/CD pipelines to automate the software delivery process, ensuring fast and reliable deployments. Monitor and analyze model performance in real-time, addressing issues like model drift, latency, and accuracy degradation, and initiating model retraining or adjustments when necessary. Manage cloud environments (AWS, GCP, Azure etc.,) to provision and scale infrastructure to meet the needs of training, fine-tuning, and inference for large models. Collaborate with development teams to integrate CI/CD pipelines into the development workflow, promoting continuous integration and delivery best practices. Implement infrastructure as code (IaC) using tools such as Terraform, Ansible, or CloudFormation to automate the provisioning and management of infrastructure. Manage and maintain cloud infrastructure on platforms such as AWS, Azure, or Google Cloud, ensuring scalability, security, and reliability. Develop and implement monitoring, logging, and alerting solutions to ensure the health and performance of applications and infrastructure. Work closely with security teams to integrate security practices into the CI/CD pipelines, ensuring compliance with industry standards and regulations. Optimize build and release processes to improve efficiency and reduce deployment times, implementing strategies such as parallel builds and incremental deployments. Automate testing processes within the CI/CD pipelines to ensure high-quality software releases, including unit tests, integration tests, and performance tests. Manage and monitor version control systems, such as Git, to ensure code integrity and facilitate collaboration among development teams. Provide technical support and troubleshooting for CI/CD-related issues, ensuring timely resolution and minimal disruption to development workflows. Develop and maintain documentation for CI/CD pipelines, infrastructure configurations, and best practices, ensuring clarity and accessibility for team members. Stay updated on the latest trends and advancements in DevOps, CI/CD, and cloud computing, and incorporate new tools and practices into the organization's workflows. Lead and participate in code reviews and technical discussions, providing insights and recommendations for continuous improvement. Conduct training sessions and workshops for internal teams to promote knowledge sharing and best practices in DevOps and CI/CD. Collaborate with IT and development teams to implement and manage containerization solutions using Docker and orchestration platforms such as Kubernetes. Implement and manage configuration management solutions to maintain consistency and manage changes across environments. Develop and implement disaster recovery and business continuity plans to ensure the resilience and availability of applications and infrastructure. Optimize resource utilization and cost management for cloud infrastructure, implementing strategies such as auto-scaling and resource tagging. Facilitate communication between development, operations, and business stakeholders to ensure alignment on DevOps goals and practices. Participate in the evaluation and selection of DevOps tools and technologies that align with organizational goals and improve software delivery processes. Manage and monitor application performance, implementing strategies to optimize performance and resolve bottlenecks. 
Ensure compliance with organizational policies and industry regulations related to software development and deployment. Required Skills: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Extensive experience in DevOps practices and CI/CD implementation. Strong proficiency in CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Proficiency in infrastructure as code (IaC) tools such as Terraform, Ansible, or CloudFormation. Strong understanding of containerization and orchestration platforms such as Docker and Kubernetes. Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, ELK Stack, or Datadog. Proficiency in scripting languages such as Python, Bash, or PowerShell. Strong understanding of version control systems such as Git. Excellent problem-solving and analytical skills, with the ability to troubleshoot and resolve technical issues. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders. Certification in DevOps or cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) is preferred. Preferred Skills: Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, ELK Stack, or Datadog. Proficiency in scripting languages such as Python, Bash, or PowerShell. Strong understanding of version control systems such as Git. Excellent problem-solving and analytical skills, with the ability to troubleshoot and resolve technical issues. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders. Certification in DevOps or cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) is preferred.
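As one small illustration of the monitoring and alerting work described above, the sketch below queries the Prometheus HTTP API for an average request latency and exits non-zero when an assumed SLO is breached, so it could gate a CI/CD stage. The URL, PromQL expression, and threshold are placeholders; most teams would express this as Alertmanager rules instead.

```python
# Illustrative latency-SLO check against the Prometheus HTTP API.
import sys
import requests

PROM_URL = "http://prometheus.internal:9090"   # placeholder address
QUERY = ('avg(rate(http_request_duration_seconds_sum[5m]) '
         '/ rate(http_request_duration_seconds_count[5m]))')
LATENCY_SLO_SECONDS = 0.2                      # assumed threshold

def mean_latency() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    latency = mean_latency()
    print(f"mean request latency: {latency * 1000:.1f} ms")
    sys.exit(1 if latency > LATENCY_SLO_SECONDS else 0)  # non-zero fails a pipeline gate
```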
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
Responsibilities Build multi-agent systems capable of reasoning, tool use, and autonomous action Implement Model Context Protocol (MCP) strategies to manage complex, multi-source context Integrate third-party APIs (e.g., Crunchbase, PitchBook, CB Insights), scraping APIs, and data aggregators Develop browser-based agents enhanced with computer vision for dynamic research, scraping, and web interaction Optimize inference pipelines, task planning, and system performance Collaborate on architecture, prototyping, and iterative development Experiment with prompt chaining, tool calling, embeddings, and vector search Requirements 5+ years of experience in software engineering or AI/ML development Strong Python skills and experience with LangChain, LlamaIndex, or agentic frameworks Proven experience with multi-agent systems, tool calling, or task planning agents Familiarity with Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and multi-modal context handling Experience with browser automation frameworks (e.g., Playwright, Puppeteer, Selenium) Cloud deployment and systems engineering experience (GCP, AWS, etc.) Self-starter attitude with strong product sense and iteration speed Bonus Points Experience with AutoGen, CrewAI, OpenAgents, or ReAct-style frameworks Background in building AI systems that blend structured and unstructured data Experience working in a fast-paced startup environment Previous startup or technical founding team experience
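The agent loop this role refers to can be boiled down to plan, call a tool, observe, repeat. The toy sketch below uses a stubbed planner in place of an LLM and fake tools in place of the third-party APIs named above, purely to show the control flow; frameworks such as LangChain or CrewAI wrap the same loop around real model calls.

```python
# Toy tool-calling agent loop; planner and tools are stubs for illustration.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search_companies": lambda q: f"[stub] top results for '{q}'",
    "fetch_profile": lambda name: f"[stub] funding history for {name}",
}

def stub_planner(goal: str, observations: list[str]) -> tuple[str, str] | None:
    """Return (tool_name, tool_input), or None when the goal is considered done."""
    if not observations:
        return ("search_companies", goal)
    if len(observations) == 1:
        return ("fetch_profile", "ExampleCo")   # hypothetical follow-up step
    return None  # enough information gathered

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):
        step = stub_planner(goal, observations)
        if step is None:
            break
        tool, arg = step
        observations.append(TOOLS[tool](arg))   # act, then feed the result back
    return observations

if __name__ == "__main__":
    for obs in run_agent("fintech startups in India"):
        print(obs)
```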
Posted 3 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
NPS Prism Title: Senior Data Scientist, NPS Prism Last updated: June 2025 Company Profile: NPS Prism is a market-leading, cloud-based CX benchmarking and operational improvement platform owned by Bain & Company. NPS Prism provides its customers with actionable insights and analysis that guide the creation of game-changing customer experiences. Based on rock-solid sampling, research, and analytic methodology, it lets customers see how they compare to their competitors on overall NPS®, and on every step of the customer journey. With NPS Prism you can see where you’re strong, where you lag, and how customers feel about doing business with you and your competitors, in their own words. The result: Prioritize the customer interactions that matter most. NPS Prism customers use our customer experience benchmarks and insights to propel their growth and outpace the competition. Launched in 2019, NPS Prism has rapidly grown to a team of over 200, serving dozens of clients around the world. NPS Prism is 100% owned by Bain & Company, one of the top management consulting firms in the world and a company consistently recognized as one of the world’s best places to work. We believe that diversity, inclusion and collaboration is key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. Position Summary: NPS Prism has experienced tremendous growth as a standalone software and data business over the past few years and is making the leap from being a consulting-led business to a technology-led business. Given that shift, we are looking to build our team with world-class team members to help drive business growth to its full potential in this next phase. This is a great opportunity to help build the largest startup owned by Bain & Company and take NPS Prism into the future. Key Responsibilities: Act as a mentor, guiding and coaching junior Data Scientists in technical and storytelling aspects Lead the development, deployment, and adoption of open source LLM models to do text classification, building sentiment models Lead the development, deployment, and adoption of causal inference models to measure marketing campaigns, optimizing investments across short and long-term business objectives Collaborate with Data Engineering, Product and Technology teams on further build out of analytics architecture Take ownership of end-to-end projects, operating where there's a high degree of ambiguity, and delivering results within ambitious timelines Translate insights into relevant business recommendations, anticipating the needs of business stakeholders Research prior work to inform and develop models and algorithms and quantify appropriate metrics & targets Required Qualifications, Experience & Skills Education Bachelor's degree in computer science or Equivalent experience Experience 6 to 10 years Data Science experience covering ML, DL and GenAI 2+ years of experience building end-to-end data science applications using dash, Flask, Streamlit and Django etc. 
(Preferred) Skills: Experience with building end-to-end data science applications Experience with NLP and LLM fine-tuning Experience with cultural transformation to CI/CD and MLOps Experience with Azure MLOps or other SDLC tools Experience with Agile methodologies Experience building applications using FastAPI Experience in a start-up, fast-growth, rapid-change environment Must be self-motivated, dedicated to quality and must always strive to “do the right thing” Preferred Qualifications: Experience with HTML, CSS, JavaScript, and version control using Git Experience with containers and Kubernetes Experience with TensorFlow, PyTorch, etc.
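To ground the text-classification responsibility mentioned above, here is a minimal sentiment-scoring sketch with an open-source checkpoint via Hugging Face transformers; the model name and example comments are illustrative and unrelated to NPS Prism's actual data or models.

```python
# Minimal verbatim-comment sentiment scoring sketch (Hugging Face transformers).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example checkpoint
)

comments = [
    "Booking was effortless and support resolved my issue in minutes.",
    "The claim process took weeks and nobody responded to my emails.",
]

for comment, result in zip(comments, classifier(comments)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {comment}")
```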
Posted 3 weeks ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Date: 20 Jun 2025 Location: Bangalore, KA, IN, 560099 Custom Field 1: Discovery Services Job Title: Bioinformatician Job Location: Bangalore About Syngene: Syngene ( www.syngeneintl.com ) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do personally and professionally. Syngene has placed safety at par with business performance with shared responsibility and accountability, including following safety guidelines, procedures, and SOPs, in letter and spirit Overall adherence to safe practices and procedures of oneself and the teams aligned Contributing to development of procedures, practices and systems that ensures safe operations and compliance to company’s integrity & quality standards Driving a corporate culture that promotes environment, health, and safety (EHS) mindset and operational discipline at the workplace at all times. Ensuring safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace. Ensure all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team including self Compliance to Syngene’ s quality standards at all times Hold self and their teams accountable for the achievement of safety goals Govern and Review safety metrics from time to time We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights. Experience: 1-4 years Core Purpose of the Role To support data-driven biological research by performing computational analysis of omics data, and generating translational insights through bioinformatics tools and pipelines. Position Responsibilities Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics. Develop computational workflows to integrate various -omics data to generate inference and hypotheses for testing. Conduct differential expression and functional enrichment analyses. Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation. Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets. Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights. Educational Qualifications Master’s degree in bioinformatics. Mandatory Technical Skills Programming: Proficiency in Python for data analysis, visualization, and pipeline development. Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets. Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing. Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn). 
Preferred Machine learning: Familiarity with AI/ML concepts Behavioral Skills Excellent communication skills Objective thinking Problem solving Proactivity Syngene Values All employees will consistently demonstrate alignment with our core values Excellence Integrity Professionalism Equal Opportunity Employer It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
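Purely as an illustration of the clustering and dimensionality-reduction skills listed in the posting above (not part of the Syngene role description), a minimal Python sketch using scikit-learn on a synthetic expression matrix might look like this; the sample count, gene count, and number of clusters are placeholders.

```python
# Minimal sketch: dimensionality reduction + clustering on a synthetic
# "samples x genes" expression matrix (all values are made up).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
expression = rng.normal(size=(60, 500))          # 60 samples, 500 genes (synthetic)

scaled = StandardScaler().fit_transform(expression)
pcs = PCA(n_components=2).fit_transform(scaled)  # project to 2 principal components

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print("cluster sizes:", np.bincount(labels))
```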
Posted 4 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Understand complex and critical business problems from Country/Regional/Global business functions, formulate an integrated analytical approach to mine data sources, employ statistical methods and machine learning algorithms to discover actionable insights, and automate processes to reduce effort and time for repeated use. High agility to be able to work across various business domains (commercial, NTO, GDD, NIBR, NBS) or divisions (Onco, GenMeds, Sandoz). Able to use business presentations, smart visualization tools and contextual storytelling to translate findings back to business users with a clear impact. No direct team management. About The Role Data Scientist I Location – Hyderabad (Hybrid) Major Accountabilities Work on a variety of business applications including but not limited to: Customer Segmentation & Targeting, Event Prediction, Propensity Modelling, Churn Modelling, Customer Lifetime Value Estimation, Forecasting, Recommender Systems, Modelling Response to Incentives, Marketing Mix Optimization, Price Optimization. Develop automation for repeatedly refreshing analysis and generating insights. Collaborates with globally dispersed internal stakeholders and cross-functional teams to solve critical business problems and deliver successfully on high-visibility strategic initiatives. Understand life science data sources including sales, contracting, promotions, social media, patient claims and Real-World Evidence. Quickly learn the use of tools, data sources and analytical techniques needed to answer a wide range of critical business questions. Articulate solutions/recommendations to business users. Works with senior data science team members to present analytical content concisely and effectively. Manage own tasks and work with allied team members; plans proactively, anticipates and actively manages change, sets stakeholder expectations as required, identifies operational risks and independently drives issues to resolution, minimizes surprise escalations. Independently identifies research articles and reproduces/applies methodology to Novartis business problems. Has high learning agility and diligently follows updates in industry and area of work. Essential Requirements: Quality of insights generated, and solutions provided, with quantified business impact / ROI. Effective communication with Country/Regional/Global stakeholders. Executes agreed targets for self. Define and execute development plans for potential Subject Matter Experts. Find creative ways to build team capabilities and play a direct role in driving a culture of innovation. Values and Behaviors: in line with leadership standards of Novartis. Education: PhD or Master’s (or bachelor’s from a top-tier University) in a quantitative discipline (e.g. Statistics, Economics, Mathematics, Computer Science, Bioinformatics, Ops Research, etc.). Experience: 7+ years of relevant experience in Data Science. In case of PhD, 4+ years of post-qualification experience. Experience in commercial pharma would be a bonus. Extensive experience required in: Statistical and Machine Learning techniques like Regression (esp. GLM, non-linear, etc.), Classification (CART, RF, SVM, GBM, etc.),
Clustering, Design of Experiments, Monte Carlo Simulations, Statistical Inference, Feature Engineering, Time Series Forecasting, Text Mining and Natural Language Processing. Good to have skills: Stochastic models, Bayesian Models, Markov Chains, Dynamic Programming and Optimization techniques, Deep Learning techniques on structured and unstructured data, Recommender Systems (content and collaborative filtering), etc. Tools and Packages: SAS, R, Python, SQL. Exposure to dashboard or web-app building using Qliksense, R-Shiny, Flask, etc. would be an added advantage. Why Novartis: Our purpose is to reimagine medicine to improve and extend people’s lives and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people. It is our associates that drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture You’ll receive: You can find everything you need to know about our benefits and rewards in the Novartis Life Handbook. https://www.novartis.com/careers/benefits-rewards Commitment To Diversity And Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve. Join our Novartis Network: If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network. Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
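For readers unfamiliar with the classification techniques named above (CART, RF, GBM, and so on), a generic scikit-learn sketch of a gradient-boosting classifier on synthetic data is shown below; it is illustrative only and is not Novartis code, data, or methodology.

```python
# Generic sketch of a GBM classifier on synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```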
Posted 4 weeks ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! The Opportunity Develop ML models, platforms, and services for Adobe Express as a Machine Learning Engineer. About The Team The AI Foundation Team at Adobe Express aims to develop a groundbreaking AI stack using internal and external technologies to improve feature speed and quality. Develop ML models and services, collaborate with teams, improve user experience at Adobe Express. What You'll Do 12+ Years of experience in the field Research, design, and implement advanced ML models and pipelines for training and inference at scale, including techniques in computer vision, NLP, deep learning, and generative AI. Integrate Large Language Models (LLMs) and agent-based frameworks to support multimodal creative workflows, enabling rich, context-aware content generation and dynamic user experiences. Collaborate with multi-functional teams to translate product requirements into ML solutions, iterating from proof-of-concept to fully productionized services. Develop robust platforms for continuous model training, experimentation, A/B testing, and monitoring, ensuring that model quality and relevance remain consistently high. Leverage distributed computing technologies and cloud infrastructures to handle large-scale data processing, feature engineering, and real-time inference, optimizing for performance and cost-efficiency. Implement reliable APIs and microservices that serve ML models to end users, ensuring alignment to standard methodologies in security, compliance, scalability, and maintainability. Stay ahead of emerging ML research, tools, and frameworks, evaluating and integrating new technologies such as sophisticated LLMs, reinforcement learning-based agents, and innovative inference optimization techniques. Basic Qualifications: PhD or Master’s or Bachelor's or equivalent experience in Computer Science, ML, Applied Mathematics, Data Science, or a related technical field. Proficiency in Python and Java for ML model development and systems integration. Hands-on experience with deep learning frameworks, including TensorFlow and PyTorch. Demonstrated experience working with LLMs and agent frameworks to develop advanced AI-based experiences. Proficiency in computer vision and NLP techniques for multimodal content understanding and generation. Work experience in Creative Domains, Imaging Domains will be highly useful. Experience in developing and deploying RESTful web services and microservices architectures for applications involving ML. Proficiency with UNIX environments, Git for version control, and Jenkins for CI/CD processes. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. 
Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
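One of the responsibilities in the Adobe posting above is A/B testing of ML-driven features. As a hedged, generic illustration (not Adobe's framework), a two-proportion z-test with statsmodels is a minimal way to compare conversion counts from two variants; the numbers below are invented.

```python
# Two-proportion z-test comparing conversion rates of two variants (made-up numbers).
from statsmodels.stats.proportion import proportions_ztest

successes = [420, 465]      # conversions in control vs. treatment
trials = [10000, 10000]     # users exposed to each variant

stat, p_value = proportions_ztest(successes, trials)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```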
Posted 4 weeks ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
A career in our Advisory Acceleration Centre is the natural extension of PwC’s leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability in support of client engagements. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be an authentic and inclusive leader, at all grades/levels and in all lines of service. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. Responsibilities As a Senior Associate, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Use feedback and reflection to develop self-awareness, personal strengths and address development areas. Delegate to others to provide stretch opportunities and coach to help deliver results. Develop new ideas and propose innovative solutions to problems. Use a broad range of tools and techniques to extract insights from current trends in the business area. Review your work and that of others for quality, accuracy and relevance. Share relevant thought leadership. Use straightforward communication, in a structured way, when influencing others. Able to read situations and modify behavior to build quality, diverse relationships. Uphold the firm's code of ethics and business conduct. Educational qualifications: BE / B Tech / MCA / MTech. Experience range: 5 - 10 years. Skill - GW Testing - Senior Associate. Job Description - Reviewing requirements / specifications / technical design documents. Designing detailed, comprehensive and well-structured Test Plans and Test Cases. Setting up Test Environment & Test Data. Executing tests as needed throughout the project. Analyzing and reporting test results. Identifying and tracking defects through their lifecycle. Understanding of Integration - Technical Design Document and Use Case. Testing experience of any one of the Guidewire products: PolicyCenter. Experience on policy transactions, workflow, Audits, forms inference. Performing thorough testing [Smoke / System / Integration / Regression / Stabilization]. Possessing expertise in Test Management Tools like ALM / Jira.
Posted 4 weeks ago
5.0 - 10.0 years
0 Lacs
India
On-site
About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Company Overview At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change. Position Overview We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows, building self-serve platforms for ML pipelines, and enabling seamless deployments. Responsibilities Platform Engineering Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines. Real-Time ML Deployment Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic. Data Engineering Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB. CI/CD and Automation Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows. Documentation and Collaboration Create and maintain detailed technical documentation, including high-level and low-level architecture designs.
Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira. Experience Required Qualifications: 5-10 years of experience in IT; 5-8 years of experience in platform backend engineering; 1 year of experience in DevOps & data engineering roles. Hands-on experience with real-time ML model deployment and data engineering workflows. Technical Skills Strong expertise in Python and experience with Pandas, PySpark, and FastAPI. Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker. Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3. Proven experience building and optimizing distributed data pipelines using Databricks and PySpark. Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL. Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks. Hands-on experience with observability tools like New Relic for monitoring and troubleshooting. We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personally identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
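The responsibilities above call for FastAPI services that manage and monitor ML workflows; a minimal version of such a service could look like the sketch below. The endpoint paths, request model, and in-memory job store are hypothetical placeholders for illustration, not Oportun's actual API.

```python
# Minimal FastAPI sketch for submitting and checking an ML job; the endpoint
# names and in-memory "job store" are placeholders, not a production design.
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
jobs: dict[str, str] = {}          # job_id -> status (stand-in for a real store)


class TrainRequest(BaseModel):
    model_name: str
    dataset_uri: str


@app.post("/jobs")
def submit_job(req: TrainRequest) -> dict:
    job_id = str(uuid4())
    jobs[job_id] = "queued"        # a real service would enqueue work here
    return {"job_id": job_id, "status": jobs[job_id]}


@app.get("/jobs/{job_id}")
def job_status(job_id: str) -> dict:
    if job_id not in jobs:
        raise HTTPException(status_code=404, detail="unknown job id")
    return {"job_id": job_id, "status": jobs[job_id]}
```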
Posted 4 weeks ago
6.0 years
0 Lacs
India
Remote
Location: India (Hybrid/Remote Options) About RediMinds We don’t just develop AI—we redefine its boundaries . At RediMinds, we turn theoretical breakthroughs into solutions that revolutionize healthcare, climate resilience, robotics, and human potential. If you’ve ever dreamed of leading research that leaves labs to transform lives, this is your moment. The Role - AI Research Pioneer You’ll lead a nimble, brilliant team of 5–8 researchers to tackle audacious questions: How do we make LLMs diagnose diseases before symptoms appear? Can AI simulate protein folding to cure the incurable? What does ethical superintelligence look like for 8 billion people? This isn’t management—it’s orchestrating a rebellion against the possible . What You’ll Ignite Moonshot Leadership : Direct high-risk/high-reward research in generative AI, neuro-symbolic systems, and causal inference—with resources to match your ambition. Industry-Defining Publications : Publish in Nature , NeurIPS, or ICML while shipping code that runs in hospitals, factories, and farms. Talent Alchemy : Recruit and mentor geniuses (PhDs, rebels, dreamers) into a team that outthinks Google DeepMind. Impact Translation : Partner with RediMinds’ product teams to turn embeddings into empathy, and algorithms into action. Global Thought Leadership : Represent RediMinds at Davos, TED, or the UN—your research will shape policy and humanity’s trajectory. Ideal Profile A PhD in CS, Physics, Neurobiology, Quantum Computing, or any field where you’ve hacked the universe’s rules. 5–6 years leading research at a top-tier lab (OpenAI, DeepMind, MIT, Max Planck) or bleeding-edge tech giant. Published or perished : A track record in venues that make peers gasp (e.g., Science , NeurIPS, CVPR). Engineering grit : Fluency in PyTorch, JAX, HF, CUDA—and battle scars from scaling ideas to petabyte scale. Philosophical depth : You debate AI alignment over chai, and see "ethics" as a technical constraint. Deep experience in one or more of the following domains: LLM training and fine-tuning Knowledge retrieval, vector databases, and RAG pipelines Scientific machine learning (e.g., computational chemistry, physics-informed ML) Vision-Language and Multimodal AI Strong ability to bridge academic theory with real-world AI product development . Excellent communication skills and the ability to inspire, guide, and grow a high-performing research team. Why Choose RediMinds? Build Your Legacy : Lead a team of most ambitious AI researchers from Day 1. Unshackled Resources : No grant applications—just a blank check for compute, talent, and wild creativity. Elite Team: Collaborate with a global network of researchers, product thinkers, and AI engineers across RediMinds’ initiatives. Global Advantage : Shape AI revolution from its epicenter, with proximity to IITs/ISRO and global collaborators. Compensation : Competitive salary in a company valued at tomorrow’s numbers. Relocation support, healthcare for family, and a moonshot budget for conferences, courses, or curiosity. Apply Now If you’re ready to trade incremental papers for planet-scale impact , apply now only if you’re a PhD holder and send us: Your CV (highlight research that bent reality). A 1-page manifesto: “The One Problem I’d Solve with Unlimited Compute.” Links to 2 publications that reveal your intellectual signature. Email to talent@rediminds.com Subject line: “Pioneer Application: [Your Name] - [Your Boldest Idea]” “The universe is not made of atoms—it’s made of courage.” Join us. Rewrite physics.
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Mihira Visual Labs Mihira Visual Labs is a research-driven CGI and VFX studio redefining filmmaking through AI- and ML-powered workflows. We specialize in the development and production of full-length animated films, empowering creators with cutting-edge tools to accelerate high-quality storytelling and IP creation. Our mission is to make world-class storytelling faster, more efficient, and more cost-effective — where human imagination is the only true differentiator. Role Overview The Lead AI/ML Engineer will spearhead the development and integration of artificial intelligence and machine learning solutions into our VFX production pipeline. You’ll work closely with pipeline developers, production managers, and creative teams to reimagine workflows, automate repetitive tasks, and push the boundaries of innovation in photorealistic rendering, stylized animation, and other CGI processes. This role combines deep technical expertise with a strong understanding of the demands of a fast-paced VFX/animation studio. Key Responsibilities AI Strategy & Roadmap Develop and maintain a strategic plan for implementing AI/ML across the VFX pipeline—covering data wrangling, rendering, asset management, animation, and post-production. Identify new approaches to innovate current workflows to increase efficiency and cut down on time. Algorithm & Model Development Research, design, and implement ML models (e.g., computer vision, generative models, style transfer) that improve artist efficiency, production speed, enhance image quality, or enable new creative possibilities. Optimize models for performance on local GPU/CPU clusters or cloud-based infrastructures. Pipeline Integration & Automation Collaborate with pipeline engineers to seamlessly integrate AI agents or tools into existing software stacks (e.g., Maya, Houdini, Nuke), ensuring minimal disruption to artists’ workflows. Develop automated solutions for tasks like rotoscoping, clean-up, crowd simulation, environment generation, or facial capture/animation. Infrastructure & Tooling Architect and maintain robust data pipelines, ensuring the secure collection and organization of high-quality datasets for training AI models. Evaluate and deploy containerization/MLOps tools (Docker, Kubernetes, MLflow, etc.) for scalable model training, inference, and monitoring. Performance Optimization Profile model performance, memory usage, and render times; implement optimizations in frameworks such as TensorFlow, PyTorch, or custom GPU pipelines. Work with DevOps/IT teams to configure and manage dedicated GPU farms or cloud compute resources. Research & Development Stay updated with state-of-the-art ML/DL techniques, particularly in generative AI, computer vision, and real-time rendering. Introduce emerging methods (e.g., stable diffusion, large language models, neural rendering) to innovate new production techniques. Mentorship & Collaboration Lead a small team of AI engineers or data scientists, providing guidance on best practices, code reviews, and architectural decisions. Educate and train production staff on AI-driven tools and workflows, fostering a culture of continuous improvement. Documentation & Reporting Create clear technical documentation for AI solutions, ensuring maintainability and scalability. Present progress, insights, and ROI to executive leadership, project stakeholders, and cross-functional teams. Qualifications & Skills Bachelor’s or Master’s degree in Computer Science, AI/ML, or related field. A PhD is a bonus but not mandatory. 
5+ years of professional experience in applied machine learning or data science, with at least 2 years in a lead/managerial role. Previous experience in VFX, animation, gaming, or related entertainment industries is a bonus. Programming: Expert-level Python (C++ is a plus). ML Frameworks: Deep understanding of TensorFlow, PyTorch, scikit-learn, or similar libraries. Computer Vision & Generative Models: Familiarity with CNNs, GANs, autoencoders, stable diffusion, or neural radiance fields. Pipeline Tools: Experience with integration in VFX software (Maya, Houdini, Nuke) and plugin APIs. [Optional] DevOps & MLOps: Comfortable with containerization (Docker), orchestration (Kubernetes), CI/CD, and cloud platforms (AWS, Azure, GCP). Proven track record of translating production challenges into AI/ML solutions that deliver measurable efficiency gains or cost savings. Experience with model optimization (quantization, pruning) and GPU/CPU performance tuning. Collaboration: Excellent communication to bridge technical and creative teams, explaining complex concepts in clear, accessible language. Leadership: Ability to mentor junior engineers and foster a culture of experimentation and continuous learning. Agility: Adapts quickly to evolving project needs, production pipelines, and new AI techniques. A genuine interest in cinema, animation, or gaming—a plus if you have prior knowledge of the Baahubali IP or similar large-scale IPs. Creativity in applying AI to artistic challenges, from photorealistic digital humans to stylized animated sequences.
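Since the qualifications above mention model optimization via quantization and pruning, here is a small, generic PyTorch sketch of both steps on a toy model; it assumes nothing about Mihira's actual pipeline and uses the standard torch.nn.utils.prune and dynamic quantization utilities.

```python
# Illustrative sketch of two common compression steps named in the posting:
# magnitude pruning and dynamic int8 quantization of a toy PyTorch model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune 30% of the smallest-magnitude weights in the first linear layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")            # make the pruning permanent

# Dynamically quantize linear layers to int8 for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)
```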
Posted 4 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to a holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income. Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization. Spocto - Debt recovery & risk mitigation platform. Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identifications, early warning signals and predictions for lenders, investors and business enterprises. So far, we have onboarded 17,000+ enterprises, 6,200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are a one-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. Responsibilities This particular role is within our Yubi Invest vertical, and you would get to work on building our bonds platform, called Aspero, for retail users. Be able to operate in ambiguous situations and define clear objectives by breaking down the narratives independently. Work closely with business, research, data and engineering teams to understand the user goals, market dynamics and ship products. Aligning product strategy, proposition and roadmap with measurable metrics with all stakeholders. Drive PRDs, product planning, and product design of new features and enhancements.
Clearly communicate product and platform benefits to our users and internal stakeholders. About The Role- We’re looking for a highly skilled, results-driven AI engineer who thrives in fast-paced, high-impact environments. If you are passionate about pushing the boundaries of Computer Vision, OCR, and Large Language Models (LLMs) and have a strong foundation in building and deploying AI solutions, this role is for you. As a Senior Data Scientist, you will take ownership of designing and implementing state-of-the-art OCR and Computer Vision systems. This role demands deep technical expertise, the ability to work autonomously, and a mindset that embraces complex challenges head-on. Here, you won’t just fine-tune pre-trained models—you’ll be architecting, optimizing, and scaling AI solutions that power real-world applications. Key Responsibilities- Architect, develop, and deploy high-performance Computer Vision and OCR models for real-world applications. Implement and optimize state-of-the-art OCR models such as Donut, TrOCR, LayoutLM, and DocFormer for document processing and information extraction. Fine-tune and integrate LLMs (GPT, LLaMA, Mistral, etc.) to enhance text understanding and automation. Develop custom deep learning models for large-scale image and document processing. Build and optimize end-to-end AI pipelines, ensuring efficient data processing and model deployment. Work closely with engineers to operationalize AI models in production (Docker, FastAPI, TensorRT, ONNX). Enhance GPU performance and model inference efficiency, applying techniques such as quantization and pruning. Stay ahead of industry advancements, continuously experimenting with new AI architectures and training techniques. Work in a highly dynamic, startup-like environment, balancing rapid experimentation with production-grade robustness. Requirements 5-10 years of experience. Proven technical expertise – Strong programming skills in Python, PyTorch, TensorFlow with deep experience in Computer Vision and OCR. Hands-on experience in developing, training, and deploying OCR and document AI models. Deep understanding of Transformer-based architectures for vision and text processing. Experience working with Hugging Face, OpenCV, TensorRT, and NVIDIA GPUs for model acceleration. Autonomous problem solver – You take initiative, work independently, and drive projects from research to production. Strong experience in scaling AI solutions, including model optimization and deployment on cloud platforms (AWS/GCP/Azure). Thrives in fast-paced environments – You embrace challenges, pivot quickly, and execute effectively. Familiarity with MLOps tools (Docker, FastAPI, Kubernetes) for seamless model deployment. Experience in multi-modal models (Vision + Text). Nice to Have- Strong background in vector databases, RAG pipelines, and fine-tuning LLMs for document intelligence. Contributions to open-source AI projects.
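As a rough illustration of the OCR models named above, the snippet below runs a publicly available TrOCR checkpoint from Hugging Face on a document image; the checkpoint id and image path are placeholders, and no fine-tuning or production deployment is shown.

```python
# Minimal TrOCR inference sketch using a public Hugging Face checkpoint;
# the image path is a placeholder and no fine-tuning is shown.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

image = Image.open("sample_document_line.png").convert("RGB")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```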
Posted 4 weeks ago
4.0 years
0 Lacs
Hyderābād
On-site
Company: Qualcomm India Private Limited Job Area: Engineering Group, Engineering Group > Software Engineering General Summary: More details below: Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near GPU speeds! Responsibilities: In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force. Requirements: Master’s/Bachelor’s degree in computer science or equivalent. 4+ years of relevant work experience in software development. Strong understanding of Generative AI models – LLMs, LVMs, and their building blocks. Floating-point and fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (like CPU/GPU/NPU). Strong development skills in C/C++. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests. Preferred Qualifications Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development. Familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and Assembly is a plus. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with programming languages such as C, C++, Java, Python, etc. Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.
(Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
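The Qualcomm requirements above include floating-point versus fixed-point representations and quantization concepts. As a framework-agnostic illustration (unrelated to the QAIRT SDK or any specific hardware), the snippet below symmetrically quantizes a float32 array to int8 and measures the round-trip error.

```python
# Generic symmetric int8 quantization of a float array (illustrative only):
# scale = max|x| / 127, q = round(x / scale), x_hat = q * scale.
import numpy as np

x = np.random.default_rng(0).normal(size=1024).astype(np.float32)

scale = np.abs(x).max() / 127.0
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)   # fixed-point codes
x_hat = q.astype(np.float32) * scale                          # dequantized values

print("max abs quantization error:", np.abs(x - x_hat).max())
```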
Posted 4 weeks ago
2.0 years
4 - 6 Lacs
Gurgaon
On-site
Position Overview BayOne Solutions is seeking an exceptional AI/ML Engineer to join our innovative technology initiatives that will revolutionize talent acquisition through intelligent automation. This role is critical to developing and implementing cutting-edge AI/ML solutions that will transform core business processes and drive operational excellence. As our AI/ML Engineer, you will architect and deploy sophisticated machine learning models, implement generative AI solutions, and build multimodal AI systems that transform how we connect talent with opportunities. This position offers the opportunity to work with Fortune 500 clients while building scalable AI solutions that will define the future of our technology platform. Key Responsibilities AI/ML Model Development & Implementation (40%) Design and implement advanced machine learning algorithms for candidate-job matching, utilizing semantic understanding and behavioral prediction models Develop natural language processing solutions for job description parsing, candidate profile analysis, and automated content generation Build and optimize recommendation engines that intelligently match candidates to opportunities based on skills, experience, and cultural fit Create predictive models for recruitment outcomes, including candidate success probability and time-to-fill optimization Implement anomaly detection systems for candidate screening and quality assurance Generative AI & Large Language Models (25%) Develop and fine-tune large language models (LLMs) for recruitment-specific applications including automated job description generation, candidate communication, and interview preparation materials Implement RAG (Retrieval-Augmented Generation) systems for intelligent document processing and candidate information extraction Build conversational AI systems for candidate pre-screening, interview scheduling, and engagement campaigns Create prompt engineering solutions and implement advanced generative AI workflows using GPT, LLaMA, and other foundation models Develop multimodal AI applications that process text, voice, and structured data for comprehensive candidate assessment Data Pipeline & Integration (20%) Design and implement robust data pipelines for processing candidate profiles, job descriptions, and recruitment metrics Build ETL processes for integrating multiple data sources including internal systems, critical platforms, and external APIs Develop real-time data processing systems for candidate sourcing and matching operations Implement data quality monitoring and validation systems ensuring high-quality inputs for ML models Create scalable data architectures supporting AI model training, inference, and continuous learning Platform Integration & Deployment (15%) Integrate AI/ML models with the Django-based Recruitment 2.0 platform through RESTful APIs and microservices architecture Deploy models to production environments using containerization (Docker) and cloud platforms (Azure/AWS) Implement A/B testing frameworks for model performance evaluation and continuous improvement Build monitoring and alerting systems for model performance, drift detection, and system health Collaborate with full-stack developers to ensure seamless integration of AI capabilities into user-facing applications Required Qualifications Education & Experience Bachelor's degree in Computer Science, Data Science, Machine Learning, or related technical field 2+ years of hands-on experience in machine learning, deep learning, and AI model development Proven
track record developing and deploying AI/ML solutions in production environments Experience working with enterprise clients and understanding business requirements for AI applications Technical Skills Programming Languages: Expert-level Python proficiency; experience with C++, SQL, and web technologies ML/AI Frameworks: Advanced experience with TensorFlow, PyTorch, Keras, scikit-learn, and Hugging Face Transformers Generative AI & LLMs: Hands-on experience with GPT models, fine-tuning techniques, prompt engineering, and foundation models NLP & Text Processing: Strong background in natural language processing, text classification, named entity recognition, and semantic analysis Data Processing: Proficiency with NumPy, Pandas, and large-scale data processing frameworks Cloud & DevOps: Experience with Azure/AWS, Docker, Git, and CI/CD pipelines for ML model deployment Databases: Working knowledge of SQL databases, NoSQL systems, and vector databases for AI applications Specialized Experience Experience with multimodal AI systems processing text, audio, and structured data Background in recommendation systems, matching algorithms, and information retrieval Knowledge of automated assessment systems and candidate evaluation methodologies Experience with real-time AI applications and low-latency model serving Understanding of bias detection and fairness in AI systems, particularly for human-oriented applications Preferred Qualifications Master's degree in Machine Learning, AI, or related field Background in developing AI-powered applications for complex business processes Knowledge of federated learning and distributed AI system architectures Experience with document processing, OCR, and information extraction systems Familiarity with enterprise software integration patterns and API development Application Process We are looking for candidates who can start immediately and contribute to our fast-paced, innovative environment. Please submit your resume along with examples of AI/ML projects you've developed, particularly those involving NLP, generative AI, or recommendation systems. Equal Opportunity Employer: BayOne Solutions is committed to creating a diverse and inclusive workplace and is proud to be an equal opportunity employer. This position offers an exceptional opportunity to shape the future of recruitment technology while working with cutting-edge AI systems and enterprise clients. Join us in building the next generation of talent solutions.
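The candidate-job matching responsibilities described above would normally rely on learned semantic embeddings; purely to illustrate the mechanics, the toy sketch below scores candidates against a job description with TF-IDF and cosine similarity, using invented texts. It is a simplified stand-in, not BayOne's matching algorithm.

```python
# Toy candidate-job matching via TF-IDF + cosine similarity; real systems of the
# kind described in the posting would use learned embeddings, not bag-of-words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "Machine learning engineer with NLP, Python, and recommendation systems experience"
candidates = [
    "Python developer, built recommendation engines and NLP pipelines",
    "Frontend engineer focused on React and CSS animations",
    "Data scientist experienced in churn modelling and Python",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job] + candidates)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for cand, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {cand}")
```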
Posted 4 weeks ago
0 years
5 - 10 Lacs
Gurgaon
On-site
Software Engineer – AI/ML/LLM/Data Science Company Overview: Entra Solutions (A BSI Financial Services company) is a FinTech company specialized in technology based financial solutions and services for the mortgage Industry. We are a people-focused, growth-oriented, innovative company and we're looking for people like you to make a positive change and join our team today! We are looking for an innovative Software Engineer – AI/ML/LLM/Data Science to design, develop, and deploy AI-driven solutions using Machine Learning, NLP, and Large Language Models (LLMs). The ideal candidate will work with Python to build and optimize retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies. You will develop scalable AI pipelines, ensuring high performance and seamless integration with cloud and on-prem environments. This role involves MLOps best practices, AI model optimization, and deployment of intelligent applications. WHAT YOU WILL DO: Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases. Build and optimize retrieval-augmented generation (RAG) systems using Vector Databases (e.g., ChromaDB, Pinecone, FAISS). Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance. Develop end-to-end AI solutions with APIs using FastAPI, Flask, or similar frameworks. Build and maintain scalable data pipelines for training and inferencing AI models. Deploy and manage models using MLOps best practices on AWS or Azure. Optimize AI model performance for low-latency inference and scalability. Collaborate with cross-functional teams (Product, Engineering, Data Science) to integrate AI capabilities into applications. WHAT WE’RE LOOKING FOR: Must Have: Proficiency in Python – Strong hands-on experience in AI/ML frameworks like TensorFlow, PyTorch, Hugging Face, LangChain, OpenAI APIs. Good to Have: Experience with LLM fine-tuning, embeddings, and transformers. Knowledge of NLP, vector search technologies (ChromaDB, Pinecone, FAISS, Milvus). Experience in building scalable AI models and data pipelines with Spark, Kafka, or Dask. Familiarity with MLOps tools (Docker, Kubernetes, CI/CD for AI models). Hands-on experience in cloud-based AI deployment (AWS Lambda, SageMaker, GCP Vertex AI, Azure ML). Knowledge of prompt engineering, GPT models, or knowledge graphs. WHAT’S IN IT FOR YOU? Competitive Salary & Full Benefits Package PTOs / Medical Insurance Exposure to cutting-edge AI/LLM projects in an innovative environment Career Growth Opportunities in AI/ML leadership Collaborative & AI-driven work culture EEO Statement We are an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, disability status, protected veteran status or any other characteristic protected by law.
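The role above mentions retrieval-augmented generation built on vector search libraries such as FAISS; the minimal sketch below indexes placeholder embedding vectors and retrieves nearest neighbours for a query vector. A real pipeline would embed document chunks with an embedding model rather than use random vectors.

```python
# Bare-bones retrieval step of a RAG pipeline using FAISS; the "embeddings" are
# random placeholders standing in for real document-chunk embeddings.
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(1000, dim)).astype("float32")   # placeholder corpus

index = faiss.IndexFlatL2(dim)      # exact L2 search; ANN indexes exist for scale
index.add(doc_vectors)

query = rng.normal(size=(1, dim)).astype("float32")            # placeholder query
distances, ids = index.search(query, 5)
print("nearest chunk ids:", ids[0], "distances:", distances[0])
```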
Posted 4 weeks ago
2.0 - 4.0 years
2 - 8 Lacs
Gurgaon
On-site
Machine Learning Engineer (L1) Experience Required: 2-4 years As a Machine Learning Engineer at Spring, you’ll help bring data-driven intelligence into our products and operations. You’ll support the development and deployment of models and pipelines that power smarter decisions, more personalized experiences, and scalable automation. This is an opportunity to build hands-on experience in real-world ML and AI systems while collaborating with experienced engineers and data scientists. You’ll work on data processing, model training, and integration tasks — gaining exposure to the entire ML lifecycle, from experimentation to production deployment. You’ll learn how to balance model performance with system requirements, and how to structure your code for reliability, observability, and maintainability. You’ll use modern ML/AI tools such as scikit-learn, HuggingFace, and LLM APIs — and be encouraged to explore AI techniques that improve our workflows or unlock new product value. You’ll also be expected to help build and support automated data pipelines, inference services, and validation tools as part of your contributions. You’ll work closely with engineering, product, and business stakeholders to understand how models drive value. Over time, you’ll build the skills and judgment needed to identify impactful use cases, communicate technical trade-offs, and contribute to the broader evolution of ML at Spring. What You’ll Do Support model development and deployment across structured and unstructured data and AI use cases. Build and maintain automated pipelines for data processing, training, and inference. Use ML and AI tools (e.g., scikit-learn, LLM APIs) in day-to-day development. Collaborate with engineers, data scientists, and product teams to scope and deliver features. Participate in code reviews, testing, and monitoring practices. Integrate ML systems into customer-facing applications and internal tools. Identify differences in data distribution that could affect model performance in real-world applications. Stay up to date with developments in the machine learning industry. Tech Expectations Core Skills Curiosity, attention to detail, strong debugging skills, and eagerness to learn through feedback Solid foundation in statistics and data interpretation Strong understanding of data structures, algorithms, and software development best practices Exposure to data pipelines, model training and evaluation, or training workflows Languages Must Have: Python, SQL ML Algorithms Must Have: Traditional modeling techniques (e.g., tree models, Naive Bayes, logistic regression) Ensemble methods (e.g., XGBoost, Random Forest, CatBoost, LightGBM) ML Libraries / Frameworks Must Have: scikit-learn, Hugging Face, Statsmodels, Optuna Good to Have: SHAP, Pytest Data Processing / Manipulation Must Have: pandas, NumPy Data Visualization Must Have: Plotly, Matplotlib Version Control Must Have: Git Others – Good to Have AWS (e.g., EC2, SageMaker, Lambda) Docker Airflow MLflow Github Actions
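Two of the must-have tools listed above, scikit-learn and Optuna, can be combined as in the generic sketch below, which tunes a random forest on a synthetic dataset via cross-validated accuracy; the dataset and search ranges are illustrative only.

```python
# Illustrative sketch combining two tools named in the posting (scikit-learn and
# Optuna): tune a random forest on synthetic data using cross-validated accuracy.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=25, random_state=0)


def objective(trial: optuna.Trial) -> float:
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 400),
        max_depth=trial.suggest_int("max_depth", 3, 12),
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=3).mean()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best params:", study.best_params, "best CV accuracy:", round(study.best_value, 3))
```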
Posted 4 weeks ago