3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We aim to bring about a new paradigm in medical image diagnostics, providing intelligent, holistic, ethical, explainable and patient-centric care. We are looking for innovative problem solvers who love solving hard problems. We want people who can empathize with the consumer, understand business problems, and design and deliver intelligent products. People who are looking to extend artificial intelligence into unexplored areas. Your primary focus will be on applying deep learning and artificial intelligence techniques to the domain of medical image analysis.

Responsibilities
Selecting features, building and optimizing classifier engines using deep learning techniques. Understanding the problem and applying suitable image processing techniques. Use techniques from artificial intelligence/deep learning to solve supervised and unsupervised learning problems. Understanding and designing solutions for complex problems related to medical image analysis using deep learning, object detection, and image segmentation. Recommend and implement best practices around the application of statistical modeling. Create, train, test, and deploy various neural networks to solve complex problems. Develop and implement solutions to fit business problems, which may include applying algorithms from a standard statistical tool, deep learning, or custom algorithm development. Understanding the requirements and designing solutions and architecture in accordance with them. Participate in code reviews, sprint planning, and Agile ceremonies to drive high-quality deliverables. Design and implement scalable data science architectures for training, inference, and deployment pipelines. Ensure code quality, readability, and maintainability by enforcing software engineering best practices within the data science team. Optimize models for production, including quantization, pruning, and latency reduction for real-time inference. Drive the adoption of versioning strategies for models, datasets, and experiments (e.g., using MLflow, DVC). Contribute to the architectural design of data platforms to support large-scale experimentation and production workloads.

Skills and Qualifications
Strong software engineering skills in Python (or other languages used in data science) with emphasis on clean code, modularity, and testability. Excellent understanding of and hands-on experience with deep learning techniques such as ANN, CNN, RNN, LSTM, Transformers, VAEs, etc. Must have experience with the TensorFlow or PyTorch framework in building, training, testing, and deploying neural networks. Experience in solving problems in the domain of computer vision. Knowledge of data, data augmentation, data curation, and synthetic data generation. Ability to understand the complete problem and design the solutions that best fit all the constraints. Knowledge of the common data science and deep learning libraries and toolkits such as Keras, Pandas, scikit-learn, NumPy, SciPy, OpenCV, etc. Good applied statistical skills, such as distributions, statistical testing, regression, etc. Exposure to Agile/Scrum methodologies and collaborative development practices. Experience with the development of RESTful APIs; knowledge of libraries like FastAPI, and the ability to apply them to deep learning architectures, is essential. Excellent analytical and problem-solving skills, with a good attitude and a keenness to adapt to evolving technologies. Experience with medical image analysis will be an advantage.
Experience designing and building ML architecture components (e.g., feature stores, model registries, inference servers). Solid understanding of software design patterns, microservices, and cloud-native architectures. Expertise in model optimization techniques (e.g., ONNX conversion, TensorRT, model distillation).
Education: BE/B Tech; MS/M Tech (a bonus).
Experience: 3-5 years
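The optimization bullets above pair ONNX conversion with quantization and pruning; the sketch below shows both steps in PyTorch under illustrative assumptions (the ResNet-18 backbone, file name, and opset are placeholders, not this team's actual pipeline).

```python
import torch
import torchvision

# Minimal sketch (not the employer's actual pipeline): export a trained
# classifier to ONNX and apply post-training dynamic quantization.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # example input used only for tracing
torch.onnx.export(
    model, dummy, "classifier.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)

# Dynamic quantization of Linear layers for a smaller, faster CPU model.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```

The exported ONNX file can then be benchmarked in ONNX Runtime or converted further (e.g., to TensorRT) depending on the deployment target.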
Posted 19 hours ago
1.0 years
0 Lacs
India
Remote
🚀 Job Title: AI Engineer (Full Stack / Model Deployment Specialist)
Location: Remote (India preferred) Type: Full-Time (6-Month Fixed Contract) Experience Level: 1+ Years Salary: Competitive (based on experience) Potential: High-performing candidates may be offered a permanent role after the contract

🧩 About Us
We are a dynamic collaboration between Funding Bay, Effer Ventures, and FBX Capital Partners, three industry leaders combining forces to deliver financial, compliance, and strategic growth solutions to businesses across the UK. We’re looking for an AI Engineer who can bridge the gap between machine learning and production-ready applications. If you love optimizing models, deploying them in real-world environments, and know your way around modern web stacks, this role is for you.

🔧 What You’ll Do
End-to-End Ownership of ML Models: from training and evaluation to optimization and deployment. Deploy ML models using AWS services (EC2, Lambda, S3, SageMaker, or custom Docker setups). Optimize Model Performance: ensure fast inference, low memory usage, and high-quality results. Integrate AI into MERN Stack Applications: build APIs and interfaces to expose your models to the frontend. Collaborate Cross-Functionally with frontend, product, and design teams. Build scalable and secure pipelines for data ingestion, model serving, and monitoring. Optimize for Speed & Usability: ensure both backend inference and frontend UI are responsive and seamless.

✅ What We’re Looking For
Proficient in the MERN stack (MongoDB, Express.js, React, Node.js). Strong Python skills, especially for AI/ML (NumPy, Pandas, scikit-learn, TensorFlow or PyTorch, etc.). Hands-on with model optimization: quantization, pruning, distillation, or ONNX deployment is a plus. Solid AWS experience: EC2, S3, IAM, Lambda, API Gateway, CloudWatch, etc. Experience with Docker & CI/CD pipelines (e.g., GitHub Actions, Jenkins). Comfortable building and consuming REST/GraphQL APIs. Familiar with ML deployment tools like FastAPI, Flask, TorchServe, or SageMaker endpoints. Good understanding of performance profiling, logging, and model monitoring.

⭐ Nice to Have
Experience with LangChain, LLMs, or NLP pipelines. Startup or fast-paced team background. Open-source contributions or live-deployed AI projects.

🌱 Why Join Us?
Build & deploy real AI products that go live. Work in a growth-focused, high-ownership environment. 6-month contract with the potential for a permanent full-time role. Flexible work culture & flat hierarchy. Learn fast and build faster with founders and builders. Take ownership of core parts of the AI stack. Competitive compensation based on experience.

📬 To Apply
Send us: your resume, a link to your GitHub or portfolio, and a short paragraph about a project where you deployed an optimized AI model.
📧 Email: developer@fundingbay.co.uk or directly apply
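As a quick illustration of the serving side this role describes (FastAPI in front of a deployed model), here is a minimal, hedged sketch; the model file, feature schema, and endpoint name are assumptions rather than anything specified in the posting.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib  # assumed serialization format; the actual stack may differ

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Run inference and return a JSON-serializable response for the frontend.
    score = float(model.predict([req.features])[0])
    return {"score": score}

# Run locally with: uvicorn app:app --reload
```

In a MERN application, the React frontend would call this endpoint through the Node/Express backend or directly, depending on how the services are split.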
Posted 20 hours ago
0 years
0 Lacs
India
Remote
We’re Hiring: Senior AI Developer
Remote | Full-time | TechGratia
We're solving real-world problems with cutting-edge AI. If you're skilled in LLMs, Generative AI, and deploying scalable models, this is your chance to make a real impact. Build AI solutions using LLMs & GenAI frameworks. Fine-tune models for domain-specific needs. Deploy/manage LLMs with Azure OpenAI. Work with CNNs, RNNs, NLP & multimodal tasks. Write clean, modular, production-ready code. Strong Python, PyTorch/TensorFlow, Hugging Face. Experience with GPT, LLaMA, Mistral, Azure OpenAI, AWS. Deep learning knowledge: CNN, RNN, Attention. Familiarity with LangChain, RAG, vector DBs. Hands-on with fine-tuning, quantization, ML workflows. Join a collaborative team building AI that matters. 📩 Apply: sheethal.cs@techgratia.com #Hiring #AIJobs #LLM #GenerativeAI #RemoteJobs #TechGratia
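Since the posting lists RAG and vector databases among the core skills, here is a minimal retrieval sketch; the embedding model, FAISS index type, and sample documents are illustrative assumptions, not this team's stack.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# and fetch the top-k passages to prepend to an LLM prompt.
docs = [
    "Quantization reduces model precision to shrink memory and speed up inference.",
    "RAG augments an LLM prompt with passages retrieved from a vector database.",
    "LoRA fine-tunes small low-rank adapter weights instead of the full model.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = encoder.encode(["How does retrieval-augmented generation work?"],
                           normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)
context = "\n".join(docs[i] for i in ids[0])  # retrieved context for the prompt
print(context)
```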
Posted 20 hours ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job purpose: Design, develop, and deploy end-to-end AI/ML systems, focusing on large language models (LLMs), prompt engineering, and scalable system architecture. Leverage technologies such as Java/Node.js/.NET to build robust, high-performance solutions that integrate with enterprise systems.

Who You Are: Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. PhD is a plus. 5+ years of experience in AI/ML development, with at least 2 years working on LLMs or NLP. Proven expertise in end-to-end system design and deployment of production-grade AI systems. Hands-on experience with Java/Node.js/.NET for backend development. Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers).

Key Responsibilities:
1. Model Development & Training: Design, train, and fine-tune large language models (LLMs) for tasks such as natural language understanding, generation, and classification. Implement and optimize machine learning algorithms using frameworks like TensorFlow, PyTorch, or Hugging Face.
2. Prompt Engineering: Craft high-quality prompts to maximize LLM performance for specific use cases, including chatbots, text summarization, and question-answering systems. Experiment with prompt tuning and few-shot learning techniques to improve model accuracy and efficiency.
3. End-to-End System Design: Architect scalable, secure, and fault-tolerant AI/ML systems, integrating LLMs with backend services and APIs. Develop microservices-based architectures using Java/Node.js/.NET for seamless integration with enterprise applications. Design and implement data pipelines for preprocessing, feature engineering, and model inference.
4. Integration & Deployment: Deploy ML models and LLMs to production environments using containerization (Docker, Kubernetes) and cloud platforms (AWS/Azure/GCP). Build RESTful or GraphQL APIs to expose AI capabilities to front-end or third-party applications.
5. Performance Optimization: Optimize LLMs for latency, throughput, and resource efficiency using techniques like quantization, pruning, and model distillation. Monitor and improve system performance through logging, metrics, and A/B testing.
6. Collaboration & Leadership: Work closely with data scientists, software engineers, and product managers to align AI solutions with business objectives. Mentor junior engineers and contribute to best practices for AI/ML development.

What will excite us: Strong understanding of LLM architectures and prompt engineering techniques. Experience with backend development using Java/Node.js (Express)/.NET Core. Familiarity with cloud platforms (AWS, Azure, GCP) and DevOps tools (Docker, Kubernetes, CI/CD). Knowledge of database systems (SQL, NoSQL) and data pipeline tools (Apache Kafka, Airflow). Strong problem-solving and analytical skills. Excellent communication and teamwork abilities. Ability to work in a fast-paced, collaborative environment.

What will excite you: Lead AI innovation in a fast-growing, technology-driven organization. Work on cutting-edge AI solutions, including LLMs, autonomous AI agents, and Generative AI applications. Engage with top-tier enterprise clients and drive AI transformation at scale.

Location: Ahmedabad
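To make the prompt-engineering responsibility concrete, here is a small few-shot prompting sketch. It assumes the OpenAI Python SDK purely for illustration (the posting does not prescribe a provider), and the model name, labels, and example tickets are invented.

```python
from openai import OpenAI  # assumed SDK; any chat-completions API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot prompt sketch for a classification use case: the in-context
# examples steer the model toward the desired label format.
messages = [
    {"role": "system", "content": "Classify the support ticket as BILLING, TECHNICAL, or OTHER. Reply with the label only."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "BILLING"},
    {"role": "user", "content": "The app crashes when I upload a PDF."},
    {"role": "assistant", "content": "TECHNICAL"},
    {"role": "user", "content": "Can I change the email on my account?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
    temperature=0,
)
print(response.choices[0].message.content)
```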
Posted 20 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Compensation: INR 50 lacs-1cr (INR 30-50 lacs pa plus stocks worth a similar amount). Please do not apply unless you have at least 4 years of programming experience, have worked on high-volume data projects, and have worked on product development. The preferred location is Hyderabad; we are open to candidates who may later move to Noida. Multiplier AI is a leader in AI accelerators for life sciences and is due for listing.

About the Role
We are seeking a seasoned and forward-thinking AI Product Development Manager to spearhead AI data cloud implementation projects across enterprise and industry-specific use cases. This is a high-impact leadership role that combines deep technical expertise with strategic consulting to deliver scalable, efficient, and secure data wrangling product solutions. This will include work on agentic and other AI technologies.

Key Responsibilities
Lead end-to-end design and deployment of data platforms in production environments. Define architecture for on-device or private-cloud data deployments, optimizing for latency, token cost, and privacy. Collaborate with cross-functional teams (data, MLOps, product, security) to integrate into existing systems and workflows. Lead data extraction and transformation initiatives. Mentor engineering and data science teams on best practices in efficient prompt engineering, RAG pipelines, quantization, and distillation techniques. Act as a thought partner to leadership and clients on GenAI roadmap, risk management, and responsible AI design.

Required Skills & Experience
Proven experience in building data extraction and transformation projects in production, preferably using LLMs and NLP techniques; this is essential, so please do not apply without it. Strong understanding of transformer architecture, tokenizer design, and parameter-efficient fine-tuning (LoRA, QLoRA). Hands-on experience with Informatica/Reltio and data management optimization techniques. Experience integrating SLMs into real-world systems: mobile apps, secure enterprise workflows, or embedded devices. Background in Python and PyTorch/TensorFlow, and familiarity with DataOps tools. A track record of shipping product fast.

Preferred Qualifications
Prior consulting experience with AI/ML deployments in pharma, finance, or regulated sectors. Familiarity with privacy-preserving AI, federated learning, or differential privacy. Contributions to data ETL projects.

What We Offer
Leadership in shaping the future of lightweight AI. Exposure to cutting-edge GenAI applications across industries. Competitive compensation and equity options (for permanent roles).
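The requirements call out parameter-efficient fine-tuning (LoRA, QLoRA); the sketch below shows the general shape of a LoRA setup with Hugging Face PEFT. The base model (gpt2), target modules, and hyperparameters are illustrative assumptions, not a prescription for this role.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# LoRA attaches small low-rank adapters instead of updating every weight,
# so only a tiny fraction of parameters is trained.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies per architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```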
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Roles & Responsibilities
Design, implement, and train deep learning models for: Text-to-Speech (e.g., SpeechT5, StyleTTS2, YourTTS, XTTS-v2, or similar models); Voice Cloning with speaker embeddings (x-vectors, d-vectors), few-shot adaptation, prosody and emotion transfer. Engineer multilingual audio-text preprocessing pipelines: text normalization, grapheme-to-phoneme (G2P) conversion, Unicode normalization (NFC/NFD); silence trimming, VAD-based audio segmentation, audio enhancement for noisy corpora, speech prosody modification and waveform manipulation. Build scalable data loaders using PyTorch for large-scale, multi-speaker datasets with variable-length sequences and chunked streaming. Extract and process acoustic features: log-mel spectrograms, pitch contours, MFCCs, energy, speaker embeddings. Optimize training using mixed precision (FP16/BFloat16), gradient checkpointing, label smoothing, and quantization-aware training. Build serving infrastructure for inference using TorchServe, ONNX Runtime, Triton Inference Server, and FastAPI (for REST endpoints), including batch and real-time modes. Optimize models for production: quantization, model pruning, ONNX conversion, parallel decoding, GPU/CPU memory profiling. Create automated and human evaluation pipelines: MOS, PESQ, STOI, BLEU, WER/CER, multi-speaker test sets, multilingual subjective listening tests. Implement ethical deployment safeguards: digital watermarking, impersonation detection, and voice verification for cloned speech. Conduct literature reviews and reproduce state-of-the-art papers; adapt and improve on open benchmarks. Mentor junior contributors, review code, and maintain shared research and model repositories. Collaborate across teams (MLOps, backend, product, linguists) to translate research into deployable, user-facing solutions.

Required Skills
Advanced proficiency in Python and PyTorch (TensorFlow a plus). Strong grasp of deep learning concepts: sequence-to-sequence models, Transformers, autoregressive and non-autoregressive decoders, attention mechanisms, VAEs, GANs. Experience with modern speech processing toolkits: ESPnet, NVIDIA NeMo, Coqui TTS, OpenSeq2Seq, or equivalent. Ability to design custom loss functions for custom models (mel loss, GAN loss, KL divergence, attention losses, etc.) and to manage learning rate schedules and training stability. Hands-on experience with multilingual and low-resource language modeling. Understanding of transformer architecture and LLMs, and experience working with existing AI models, tools, and APIs. Model serving & API integration: TorchServe, FastAPI, Docker, ONNX Runtime.

Preferred (Bonus) Skills
CUDA kernel optimization, custom GPU operations, memory footprint profiling. Experience deploying on AWS/GCP with GPU acceleration. Experience developing RESTful APIs for real-time TTS/voice cloning endpoints. Publications or open-source contributions in TTS, ASR, or speech processing. Working knowledge of multilingual translation pipelines. Knowledge of speaker diarization, voice anonymization, and speech synthesis for agglutinative/morphologically rich languages.

Milestones & Expectations (First 3-6 Months)
Deliver at least one production-ready TTS or Voice Cloning model integrated with India Speaks' Dubbing Studio or SaaS APIs. Create a fully reproducible experiment pipeline for multilingual speech modeling, complete with model cards and performance benchmarks. Contribute to custom evaluation tools for measuring quality across Indian languages. Deploy optimized models to live staging environments using Triton, TorchServe, or ONNX. Demonstrate impact through real-world integration in education, media, or defence deployments.
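Because the role centers on acoustic feature extraction (log-mel spectrograms, pitch, MFCCs), here is a small torchaudio sketch; the sample rate, FFT and mel settings, and the input file name are illustrative assumptions rather than the project's actual configuration.

```python
import torch
import torchaudio

# Feature-extraction sketch for TTS/voice-cloning training data.
waveform, sr = torchaudio.load("utterance.wav")  # hypothetical input clip
waveform = torchaudio.functional.resample(waveform, sr, 22050)

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,
    n_fft=1024,
    hop_length=256,
    n_mels=80,  # 80-band log-mels are a common TTS target
)
mel = mel_transform(waveform)
log_mel = torch.log(torch.clamp(mel, min=1e-5))  # log compression for stable training
print(log_mel.shape)  # (channels, n_mels, frames)
```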
Posted 3 days ago
0 years
7 - 11 Lacs
Prayagraj, Uttar Pradesh, India
On-site
Institute of Information Science - Postdoctoral Researcher (2 positions)

The Computer Systems Laboratory's Machine Learning Systems team focuses on research areas including parallel and distributed computing, compilers, and computer architecture. We aim to leverage computer system technologies to accelerate the inference and training of deep learning models and to develop optimizations for next-generation AI models. Our research emphasizes the following.

Job Description
Unit: Institute of Information Science
Job Title: Postdoctoral Researcher (2 positions)
Work Content: Research on optimization of deep learning model inference and training.
AI model compression and optimization: model compression techniques (e.g., pruning and quantization) reduce the size and computational demands of AI models, which is crucial for resource-constrained platforms such as embedded systems and memory-limited AI accelerators. We aim to explore AI compilers (deployment methods for compressed models across servers, edge devices, and heterogeneous systems) and high-performance computing (efficient execution of compressed models on processors with advanced AI extensions, e.g., Intel AVX-512, Arm SVE, RISC-V RVV, and tensor-level acceleration on GPUs and NPUs).
AI accelerator design: we aim to design AI accelerators for AI model inference, focusing on software/hardware co-design and co-optimization.
Optimization of AI model inference in heterogeneous environments: computer architectures are evolving toward heterogeneous multi-processor designs (e.g., CPUs + GPUs + AI accelerators). Integrating heterogeneous processors to execute complex models (e.g., hybrid models, multi-models, and multi-task models) with high computational efficiency poses a critical challenge. We aim to explore efficient scheduling algorithms and parallel algorithms across three dimensions: data parallelism, model parallelism, and tensor parallelism.

Qualifications
Ph.D. degree in Computer Science, Computer Engineering, or Electrical Engineering. Experience in parallel computing and parallel programming (CUDA or OpenCL, C/C++) or hardware design (Verilog or HLS). Proficient in system and software development.
Candidates with the following experience will be given priority: deep learning platforms, including PyTorch, TensorFlow, TVM, etc.; high-performance computing or embedded systems; algorithm design; knowledge of compilers or computer architecture.

Working Environment
Operating Hours: 8:30 AM-5:30 PM
Work Place: Institute of Information Science, Academia Sinica
Treatment: According to Academia Sinica standards. Postdoctoral Researchers: NT$64,711-99,317/month. Benefits include labor and healthcare insurance, and year-end bonuses.
Reference Sites: Dr. Ding-Yong Hong's webpage: http://www.iis.sinica.edu.tw/pages/dyhong/index_zh.html; Dr. Jan-Jan Wu's webpage: http://www.iis.sinica.edu.tw/pages/wuj/index_zh.html

How to Apply
Please email your CV (including publications, projects, and work experience), transcripts (undergraduate and above), and any other materials that may assist in the review process to the following PIs: Dr. Ding-Yong Hong: dyhong@iis.sinica.edu.tw; Dr. Jan-Jan Wu: wuj@iis.sinica.edu.tw
Contact: Dr. Ding-Yong Hong, Room 818, New IIS Building, Academia Sinica. Telephone: 02-27883799 ext. 1818. Email: dyhong@iis.sinica.edu.tw
Publication Date: 2025-01-20. Expiration Date: 2025-12-31
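Since pruning and quantization are the posting's central compression techniques, here is a minimal PyTorch pruning sketch; the layer size and 30% sparsity target are illustrative assumptions, and the lab's actual research goes well beyond this built-in utility.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Magnitude pruning sketch: zero out the smallest-magnitude weights of a layer.
layer = nn.Linear(512, 512)

# Remove the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity after pruning: {sparsity:.2%}")

# Make the pruning permanent by folding the mask into the weight tensor.
prune.remove(layer, "weight")
```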
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to a holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.
Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income.
Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization.
Spocto - Debt recovery & risk mitigation platform.
Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.
So far, we have onboarded over 17000+ enterprises, 6200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

About The Role
We're looking for a highly skilled, results-driven AI Developer who thrives in fast-paced, high-impact environments. If you are passionate about pushing the boundaries of Computer Vision, OCR, NLP, and Large Language Models (LLMs) and have a strong foundation in building and deploying AI solutions, this role is for you. As a Lead Data Scientist, you will take ownership of designing and implementing state-of-the-art AI products. This role demands deep technical expertise, the ability to work autonomously, and a mindset that embraces complex challenges head-on.
Here, you won't just fine-tune pre-trained models—you'll be architecting, optimizing, and scaling AI solutions that power real-world applications.

Key Responsibilities
Architect, develop, and deploy high-performance AI solutions for real-world applications. Implement and optimize state-of-the-art LLM and OCR models and frameworks. Fine-tune and integrate LLMs (GPT, LLaMA, Mistral, etc.) to enhance text understanding and automation. Build and optimize end-to-end AI pipelines, ensuring efficient data processing and model deployment. Work closely with engineers to operationalize AI models in production (Docker, FastAPI, TensorRT, ONNX). Enhance GPU performance and model inference efficiency, applying techniques such as quantization and pruning. Stay ahead of industry advancements, continuously experimenting with new AI architectures and training techniques. Work in a highly dynamic, startup-like environment, balancing rapid experimentation with production-grade robustness.

What We're Looking For
Required Skills & Qualifications: Proven technical expertise: strong programming skills in Python, PyTorch, and TensorFlow, with deep experience in NLP and LLMs. Hands-on experience in developing, training, and deploying LLMs and agentic workflows. Strong background in vector databases, RAG pipelines, and fine-tuning LLMs for document intelligence. Deep understanding of Transformer-based architectures for vision and text processing. Experience working with Hugging Face, OpenCV, TensorRT, and NVIDIA GPUs for model acceleration. Autonomous problem solver: you take initiative, work independently, and drive projects from research to production. Strong experience in scaling AI solutions, including model optimization and deployment on cloud platforms (AWS/GCP/Azure). Thrives in fast-paced environments: you embrace challenges, pivot quickly, and execute effectively. Familiarity with MLOps tools (Docker, FastAPI, Kubernetes) for seamless model deployment. Experience in multi-modal models (Vision + Text).

Good to Have
Financial background and an understanding of corporate finance. Contributions to open-source AI projects.
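As a small illustration of the OCR-adjacent computer vision work mentioned above, here is a hedged OpenCV preprocessing sketch; the file names and threshold parameters are assumptions, and the downstream OCR engine is deliberately left unspecified.

```python
import cv2

# Document-image preprocessing sketch for an OCR pipeline.
image = cv2.imread("invoice_scan.png")  # hypothetical scanned document
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Denoise, then binarize: adaptive thresholding copes with uneven scan lighting.
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
binary = cv2.adaptiveThreshold(
    blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY, 31, 15,  # block size and constant are illustrative
)

cv2.imwrite("invoice_clean.png", binary)  # feed this into the OCR engine of choice
```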
Posted 4 days ago
3.0 years
0 Lacs
Dehradun, Uttarakhand, India
On-site
Overview - We are hiring experienced AI Developers to lead the design, development, and scaling of intelligent systems using large language models (LLMs). This includes building prompt-based agents, developing and fine-tuning custom AI models, and architecting advanced pipelines across various business functions. You will work across the full AI development lifecycle—from prompt engineering to model training and deployment—while staying at the forefront of innovation in generative AI and autonomous agent frameworks.

Key Responsibilities:
Design and deploy intelligent agents using LLMs such as OpenAI GPT-4, Claude, Mistral, Gemini, Cohere, etc. Build prompt-driven and autonomous agents using frameworks like LangChain, AutoGen, CrewAI, Semantic Kernel, LlamaIndex, or custom stacks. Architect and implement multi-agent systems capable of advanced reasoning, coordination, and tool interaction. Incorporate goal-setting, sub-task decomposition, and autonomous feedback loops for agent self-improvement. Develop custom AI models via fine-tuning, supervised learning, or LoRA/QLoRA approaches using Hugging Face Transformers, PyTorch, or TensorFlow. Build and manage Retrieval-Augmented Generation (RAG) pipelines with vector databases like Pinecone, FAISS, Weaviate, or Chroma. Train and evaluate models on custom datasets using modern NLP workflows and distributed training tools. Optimize models for latency, accuracy, and cost efficiency in both prototype and production environments. Create and maintain testing and evaluation pipelines for prompt quality, hallucination detection, and model behavior safety. Integrate external tools, APIs, plugins, and knowledge bases to enhance agent capabilities. Collaborate with product and engineering teams to translate use cases into scalable AI solutions.

Required Technical Skills:
3+ years of hands-on experience with large language models, generative AI, conversational systems, or agent-based system development. Proficient in Python, with experience in AI/ML libraries such as Transformers, LangChain, PyTorch, TensorFlow, PEFT, and scikit-learn. Strong understanding of prompt engineering, instruction tuning, and system prompt architecture. Experience with custom model training, fine-tuning, and deploying models via Hugging Face, OpenAI APIs, or open-source LLMs. Experience designing and implementing RAG pipelines and managing embedding stores. Familiarity with agent orchestration frameworks, tool integration, memory handling, and context management. Working knowledge of containerization (Docker), MLOps, and cloud environments (AWS, GCP, Azure).

Preferred Experience:
Exposure to distributed training (DeepSpeed, Accelerate), quantization, or model optimization techniques. Familiarity with LLM evaluation tools (TruLens, LM Eval Harness, custom eval agents). Experience with RLHF, multi-modal models, or voice/chat integrations. Background in data engineering for building high-quality training/evaluation datasets. Experience with self-healing agents, auto-reflection loops, and adaptive control systems.

Shift Timing: Night Shift (fixed timing will be disclosed at the time of joining)
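To ground the agent-orchestration responsibilities above, here is a framework-free sketch of the basic loop (decide on a tool, execute it, feed the observation back). The stubbed LLM call, tool names, and JSON schema are illustrative assumptions, not any specific framework's API.

```python
import json

# Minimal agent loop without any framework: the "LLM" is stubbed out.
# Real systems would call an actual model and add memory, retries, and safety checks.
def call_llm(prompt: str) -> str:
    # Stub: pretend the model decided to call the calculator tool.
    return json.dumps({"tool": "calculator", "args": {"expression": "17 * 23"}})

TOOLS = {
    "calculator": lambda args: str(eval(args["expression"], {"__builtins__": {}})),
    "search": lambda args: f"(search results for {args['query']!r})",
}

def run_agent(task: str, max_steps: int = 3) -> str:
    observation = ""
    for _ in range(max_steps):
        decision = json.loads(call_llm(f"Task: {task}\nObservation: {observation}"))
        if decision.get("tool") not in TOOLS:
            return decision.get("answer", observation)  # model chose to answer directly
        observation = TOOLS[decision["tool"]](decision["args"])
    return observation

print(run_agent("What is 17 times 23?"))  # -> 391
```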
Posted 4 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Category: AIML
Job Type: Full Time
Job Location: Bengaluru, Mangalore
Experience: 4-8 Years
Skills: AI, AWS/Azure/GCP, Azure ML, C, computer vision, data analytics, data modeling, data visualization, deep learning, descriptive analytics, GenAI, image processing, Java, LLM models, ML, ONNX, predictive analytics, Python, R, regression/classification models, SageMaker, SQL, TensorFlow

Position Overview
We are looking for an experienced AI/ML Engineer to join our team in Bengaluru. The ideal candidate will bring a deep understanding of machine learning, artificial intelligence, and big data technologies, with proven expertise in developing scalable AI/ML solutions. You will lead technical efforts, mentor team members, and collaborate with cross-functional teams to design, develop, and deploy cutting-edge AI/ML applications.

Job Details
Job Category: AI/ML Engineer. Job Type: Full-Time. Job Location: Bengaluru. Experience Required: 4-8 Years.

About Us
We are a multi-award-winning creative engineering company. Since 2011, we have worked with our customers as a design and technology enablement partner, guiding them on their digital transformation journeys.

Roles And Responsibilities
Design, develop, and deploy deep learning models for object classification, detection, and segmentation using CNNs and transfer learning. Implement image preprocessing and advanced computer vision pipelines. Optimize deep learning models using pruning, quantization, and ONNX for deployment on edge devices. Work with PyTorch, TensorFlow, and ONNX frameworks to develop and convert models. Accelerate model inference using GPU programming with CUDA and cuDNN. Port and test models on embedded and edge hardware platforms (Orin, Jetson, Hailo). Conduct research and experiments to evaluate and integrate GenAI technologies in computer vision tasks. Explore and implement cloud-based AI workflows, particularly using AWS/Azure AI/ML services. Collaborate with cross-functional teams for data analytics, data processing, and large-scale model training.

Required Skills
Strong programming experience in Python. Solid background in deep learning, CNNs, transfer learning, and machine learning basics. Expertise in object detection, classification, and segmentation. Proficiency with PyTorch, TensorFlow, and ONNX. Experience with GPU acceleration (CUDA, cuDNN). Hands-on knowledge of model optimization (pruning, quantization). Experience deploying models to edge devices (e.g., Jetson, Orin, Hailo, mobile). Understanding of image processing techniques. Familiarity with data pipelines, data preprocessing, and data analytics. Willingness to explore and contribute to Generative AI and cloud-based AI solutions. Good problem-solving and communication skills.

Preferred (Nice-to-Have)
Experience with C/C++. Familiarity with AWS Cloud AI/ML tools (e.g., SageMaker, Rekognition). Exposure to GenAI frameworks like OpenAI, Stable Diffusion, etc. Knowledge of real-time deployment systems and streaming analytics.

Qualifications
Graduation/Post-graduation in Computers, Engineering, or Statistics from a reputed institute.

What We Offer
Competitive salary and benefits package. Opportunity to work in a dynamic and innovative environment. Professional development and learning opportunities.

Visit us on LinkedIn and Instagram: CodeCraft Technologies.
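Given the emphasis on CNNs and transfer learning, here is a compact, hedged transfer-learning sketch in PyTorch; the ResNet-18 backbone, the 5-class head, and the random batch stand in for whatever dataset and architecture the team actually uses.

```python
import torch
import torch.nn as nn
import torchvision

# Transfer learning: reuse ImageNet features, retrain only a new classification head.
num_classes = 5  # assumed number of target classes

model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                  # freeze the pretrained backbone

model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (stand-in for a real dataloader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```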
Posted 6 days ago
6.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: BB Advanced Analytics and Artificial Intelligence COE

Impact & contribution: As a Senior Data Scientist, you will be instrumental in pioneering Gen AI and multi-agentic systems at scale within CommBank. You will architect, build, and operationalize advanced generative AI solutions—leveraging large language models (LLMs), collaborative agentic frameworks, and state-of-the-art toolchains. You will drive innovation, helping set the organizational strategy for advanced AI, multi-agent collaboration, and responsible next-gen model deployment.

Roles & Responsibilities:
Gen AI Solution Development: Lead end-to-end development, fine-tuning, and evaluation of state-of-the-art LLMs and multi-modal generative models (e.g., transformers, GANs, VAEs, diffusion models) tailored for financial domains.
Multi-Agentic System Engineering: Architect, implement, and optimize multi-agent systems, enabling swarms of AI agents (utilizing frameworks like LangChain, LangGraph, and MCP) to dynamically collaborate, chain, reason, critique, and autonomously execute tasks.
LLM-Backed Application Design: Develop robust, scalable GenAI-powered APIs and agent workflows using FastAPI, Semantic Kernel, and orchestration tools. Integrate observability and evaluation using Langfuse for tracing, analytics, and prompt/response feedback loops.
Guardrails & Responsible AI: Employ frameworks like Guardrails AI to enforce robust safety, compliance, and reliability in LLM deployments. Establish programmatic checks for prompt injections, hallucinations, and output boundaries.
Enterprise-Grade Deployment: Productionize and manage at-scale Gen AI and agent systems with cloud infrastructure (GCP/AWS/Azure), utilizing model optimization (quantization, pruning, knowledge distillation) for latency/throughput trade-offs.
Toolchain Innovation: Leverage and contribute to open source projects in the Gen AI ecosystem (e.g., LangChain, LangGraph, Semantic Kernel, Langfuse, Hugging Face, FastAPI). Continuously experiment with emerging frameworks and research.
Stakeholder Collaboration: Partner with product, engineering, and business teams to define high-impact use cases for Gen AI and agentic automation; communicate actionable technical strategies and drive proof-of-value experiments into production.
Mentorship & Thought Leadership: Guide junior team members in best practices for Gen AI, prompt engineering, agentic orchestration, responsible deployment, and continuous learning. Represent CommBank in the broader AI community through papers, patents, talks, and open-source.

Essential Skills:
6+ years of hands-on experience in Machine Learning, Deep Learning, or Generative AI domains, including practical expertise with LLMs, multi-agent frameworks, and prompt engineering. Proficient in building and scaling multi-agent AI systems using LangChain, LangGraph, Semantic Kernel, MCP, or similar agentic orchestration tools.
Advanced experience developing and deploying Gen AI APIs using FastAPI; operational familiarity with Langfuse for LLM evaluation, tracing, and error analytics. Demonstrated ability to apply guardrails to enforce model safety, explainability, and compliance in production environments. Experience with transformer architectures (BERT/GPT, etc.), fine-tuning LLMs, and model optimization (distillation/quantization/pruning). Strong software engineering background (Python), with experience in enterprise-grade codebases and cloud-native AI deployments. Experience integrating open and commercial LLM APIs and building retrieval-augmented generation (RAG) pipelines. Exposure to agent-based reinforcement learning, agent simulation, and swarm-based collaborative AI. Familiarity with robust experimentation using tools like LangSmith, GitHub Copilot, and experiment tracking systems. Proven track record of driving Gen AI innovation and adoption in cross-functional teams. Papers, patents, or open-source contributions to the Gen AI/LLM/Agentic AI ecosystem. Experience with financial services or regulated industries for secure and responsible deployment of AI.

Education Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 01/07/2025
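One of the optimization techniques this role names is knowledge distillation; the sketch below shows the standard soft-target loss in PyTorch. The temperature, mixing weight, and random logits are illustrative assumptions, not values from any production system.

```python
import torch
import torch.nn.functional as F

# Knowledge distillation: train a small student to match a larger teacher's
# softened output distribution while still fitting the ground-truth labels.
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(4, 10, requires_grad=True)  # stand-in model outputs
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```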
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer

Key Responsibilities
GenAI Development: Design and develop advanced GenAI models (e.g., LLMs, DALL-E models) and AI agents to automate internal tasks and workflows.
Exposure to LLMs: Utilize Azure OpenAI APIs; experience with models like GPT-4o, o3, and Llama 3.
Enhance the existing RAG-based application: in-depth understanding of the stages of RAG (chunking, retrieval, etc.).
Cloud Deployment: Deploy and scale GenAI solutions on Azure Cloud services (e.g., Azure Function App) for optimal performance.
In-depth understanding of ML models like linear regression, random forest, and decision trees. In-depth understanding of clustering and supervised models.
AI Agent Development: Build AI agents using frameworks like LangChain to streamline internal processes and boost efficiency.
Data Analytics: Perform advanced data analytics to preprocess datasets, evaluate model performance, and derive actionable insights for GenAI solutions.
Data Visualization: Create compelling visualizations (e.g., dashboards, charts) to communicate model outputs, performance metrics, and business insights to stakeholders.
Stakeholder Collaboration: Partner with departments to gather requirements, align on goals, and present technical solutions and insights effectively to non-technical stakeholders.
Model Optimization: Fine-tune GenAI models for efficiency and accuracy using techniques like prompt engineering, quantization, and RAG (Retrieval-Augmented Generation).
LLMOps Best Practices: Implement GenAI-specific MLOps, including CI/CD pipelines (Git, Azure DevOps).
Leadership: Guide cross-functional teams, mentor junior engineers, and drive project execution with strategic vision and ownership.
Strategic Thinking: Develop innovative GenAI strategies to address business challenges, leveraging data insights to align solutions with organizational goals.
Self-Driven Execution: Independently lead projects to completion with minimal supervision, proactively resolving challenges and seeking collaboration when needed.
Continuous Learning: Stay ahead of GenAI, analytics, and visualization advancements, self-learning new techniques to enhance project outcomes.

Required Skills & Experience
Experience: 8-12 years in AI/ML development, with at least 4 years focused on Generative AI and AI agent frameworks.
Education: BTech/BE in Computer Science, Engineering, or equivalent (Master’s or PhD in AI/ML is a plus).
Programming: Expert-level Python proficiency, with deep expertise in GenAI libraries (e.g., LangChain, Hugging Face Transformers, PyTorch, OpenAI SDK) and data analytics libraries (e.g., Pandas, NumPy, scikit-learn).
Data Analytics: Strong experience in data preprocessing, statistical analysis, and model evaluation to support GenAI development and business insights.
Data Visualization: Proficiency in visualization tools (e.g., Matplotlib, Seaborn, Plotly, Power BI, or Tableau) to create dashboards and reports for stakeholders.
Azure Cloud Expertise: Strong experience with Azure Cloud services (e.g., Azure Function App, Azure ML, serverless) for model training and deployment.
GenAI Methodologies: Deep expertise in LLMs, AI agent frameworks, prompt engineering, and RAG for internal workflow automation.
Deployment: Proficiency in Docker, Kubernetes, and CI/CD pipelines (e.g., Azure DevOps, GitHub Actions) for production-grade GenAI systems.
LLMOps: Expertise in GenAI MLOps, including experiment tracking (e.g., Weights & Biases), automated evaluation metrics (e.g., BLEU, ROUGE), and monitoring.
Soft Skills:
Communication: Exceptional verbal and written skills to articulate complex GenAI concepts, analytics, and visualizations to technical and non-technical stakeholders.
Strategic Thinking: Ability to align AI solutions with business objectives, using data-driven insights to anticipate challenges and propose long-term strategies.
Problem-Solving: Strong analytical skills with a proactive, self-starter mindset to independently resolve complex issues.
Collaboration: Collaborative mindset to work effectively across departments and engage colleagues for solutions when needed.
Speed to Outcome: A bias for delivering results quickly.

Preferred Skills
Experience deploying GenAI models in production environments, preferably on Azure. Familiarity with multi-agent systems, reinforcement learning, or distributed training (e.g., DeepSpeed). Knowledge of DevOps practices, including Git, CI/CD, and infrastructure-as-code. Advanced data analytics techniques (e.g., time-series analysis, A/B testing) for GenAI applications. Experience with interactive visualization frameworks (e.g., Dash, Streamlit) for real-time dashboards. Contributions to GenAI or data analytics open-source projects, or publications in NLP, generative modeling, or data science.
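Since the role stresses understanding the chunking and retrieval stages of RAG, here is a small, dependency-free chunking sketch; the window size and overlap are illustrative defaults rather than recommended settings.

```python
# Chunking stage of a RAG ingestion pipeline: split text into overlapping
# character windows so content spanning a boundary is retrievable from at
# least one chunk. Each chunk would then be embedded and indexed.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "Retrieval-Augmented Generation retrieves relevant passages... " * 50
pieces = chunk_text(document)
print(len(pieces), len(pieces[0]))
```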
Posted 6 days ago
2.0 years
0 Lacs
Hyderābād
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary:
Job Description: Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds!

Responsibilities: In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.

Requirements: Master’s/Bachelor’s degree in computer science or equivalent. 2-4 years of relevant work experience in software development. Strong understanding of Generative AI models (LLMs, LVMs, LMMs) and their building blocks (self-attention, cross-attention, KV caching, etc.). Floating-point and fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (CPU/GPU/NPU). Strong in C/C++ programming, design patterns, and OS concepts. Good scripting skills in Python. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications: Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development. Familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and languages like OpenCL and CUDA is a plus.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process.
Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
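The requirements above include floating-point and fixed-point representations and quantization concepts; the short numeric sketch below works through affine INT8 quantization with an invented tensor, independent of any Qualcomm toolchain.

```python
import numpy as np

# Affine (asymmetric) INT8 quantization: map a float range onto [0, 255]
# with a scale and zero-point, then dequantize and measure the error.
x = np.array([-1.2, -0.3, 0.0, 0.7, 2.5], dtype=np.float32)  # illustrative values

qmin, qmax = 0, 255
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(round(qmin - x.min() / scale))

q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
x_hat = (q.astype(np.float32) - zero_point) * scale

print("quantized:", q)
print("max abs error:", np.abs(x - x_hat).max())  # bounded by roughly scale / 2
```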
Posted 6 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Systems Engineering

General Summary:
Summary - We are seeking experts with a robust background in the field of deep learning (DL) to design state-of-the-art low-level perception (LLP) as well as end-to-end AD models, with a focus on achieving accuracy-latency Pareto optimality. This role involves comprehending state-of-the-art research in this field and deploying networks on the Qualcomm Ride platform for L2/L3 Advanced Driver Assistance Systems (ADAS) and autonomous driving. The ideal candidate must be well-versed in recent advancements in Vision Transformers (cross-attention, self-attention), lifting 2D features to Bird's Eye View (BEV) space, and their applications to multi-modal fusion. This position offers extensive opportunities to collaborate with advanced R&D teams of leading automotive Original Equipment Manufacturers (OEMs) as well as Qualcomm's internal stack teams. The team is responsible for enhancing the speed, accuracy, power consumption, and latency of deep networks running on Snapdragon Ride AI accelerators. A thorough understanding of machine learning algorithms, particularly those related to automotive use cases (autonomous driving, vision, and LiDAR processing ML algorithms), is essential. Research experience in the development of efficient networks, various Neural Architecture Search (NAS) techniques, network quantization, and pruning is highly desirable. Strong communication and interpersonal skills are required, and the candidate must be able to work effectively with various horizontal AI teams.

Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 1+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. OR Master's degree in Computer Science, Engineering, Information Systems, or related field and 1+ year of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience. OR PhD in Computer Science, Engineering, Information Systems, or related field.

Preferred Qualifications: Good at software development, with excellent analytical, development, and problem-solving skills. Strong understanding of machine learning fundamentals. Hands-on experience with deep learning network design and implementation. Ability to define a network from scratch in PyTorch, add new loss functions, and modify networks with torch.fx. Adept with version control systems like Git. Experience in neural network quantization, compression, and pruning algorithms. Experience in deep learning kernel/compiler optimization. Strong communication skills.

Principal Duties And Responsibilities: Applies machine learning knowledge to extend training or runtime frameworks or model efficiency software tools with new features and optimizations. Models, architects, and develops machine learning hardware (co-designed with machine learning software) for inference or training solutions. Develops optimized software to enable AI models deployed on hardware (e.g., machine learning kernels, compiler tools, or model efficiency tools) to leverage specific hardware features; collaborates with team members for joint design and development. Assists with the development and application of machine learning techniques into products and/or AI solutions to enable customers to do the same.
Develops, adapts, or prototypes complex machine learning algorithms, models, or frameworks aligned with and motivated by product proposals or roadmaps, with minimal guidance from more experienced engineers. Conducts complex experiments to train and evaluate machine learning models and/or software independently.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail myhr.support@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
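Because the preferred qualifications mention modifying networks with torch.fx, here is a minimal, hedged sketch of a graph-level edit; the toy network and the ReLU-to-GELU swap are illustrative, not part of any Qualcomm workflow.

```python
import torch
import torch.nn as nn
import torch.fx as fx

# Symbolically trace a small network, then swap its activation module.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.act = nn.ReLU()
        self.head = nn.Linear(16, 10)

    def forward(self, x):
        x = self.act(self.conv(x))
        return self.head(x.mean(dim=(2, 3)))

traced = fx.symbolic_trace(SmallNet())

# The graph's call_module node targets "act", so replacing that submodule
# changes the traced computation without rewriting the graph itself.
traced.act = nn.GELU()
traced.recompile()

out = traced(torch.randn(2, 3, 32, 32))
print(traced.graph)   # inspect the captured computation graph
print(out.shape)      # torch.Size([2, 10])
```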
Posted 6 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary:
Job Description: Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds!

Responsibilities: In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.

Requirements: Master’s/Bachelor’s degree in computer science or equivalent. 2-4 years of relevant work experience in software development. Strong understanding of Generative AI models (LLMs, LVMs, LMMs) and their building blocks (self-attention, cross-attention, KV caching, etc.). Floating-point and fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (CPU/GPU/NPU). Strong in C/C++ programming, design patterns, and OS concepts. Good scripting skills in Python. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications: Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development. Familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and languages like OpenCL and CUDA is a plus.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process.
Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3075196
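As a minimal sketch of the self-attention and KV-caching building blocks named in the posting above (generic PyTorch for illustration, not the QAIRT implementation), each autoregressive decoding step appends one key/value row to a cache and attends over the whole cache instead of recomputing projections for the full sequence:

```python
import torch
import torch.nn.functional as F

def attend(q, k, v):
    # Single-head scaled dot-product attention: q is (1, d), k/v are (t, d).
    scores = q @ k.T / (q.shape[-1] ** 0.5)   # (1, t) similarity to every cached step
    weights = F.softmax(scores, dim=-1)       # attention weights over the cache
    return weights @ v                        # (1, d) context vector

d = 8
k_cache = torch.empty(0, d)   # keys seen so far
v_cache = torch.empty(0, d)   # values seen so far

# Toy decode loop: the new token's Q/K/V are stand-ins for learned projections.
for step in range(4):
    x = torch.randn(1, d)
    q, k, v = x.clone(), x.clone(), x.clone()
    k_cache = torch.cat([k_cache, k], dim=0)  # grow the cache by one row
    v_cache = torch.cat([v_cache, v], dim=0)
    out = attend(q, k_cache, v_cache)
    print(step, out.shape)                    # torch.Size([1, 8])
```

The same idea is what makes on-device LLM decoding tractable: per-step cost grows with the cache length rather than re-running the full prefix through the model.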
Posted 1 week ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Technical Expertise: (minimum 2 years of relevant experience) ● Solid understanding of Generative AI models and Natural Language Processing (NLP) techniques, including Retrieval-Augmented Generation (RAG) systems, text generation, and embedding models. ● Exposure to Agentic AI concepts, multi-agent systems, and agent development using open-source frameworks like LangGraph and LangChain. ● Hands-on experience with modality-specific encoder models (text, image, audio) for multi-modal AI applications. ● Proficient in model fine-tuning and prompt engineering using both open-source and proprietary LLMs. ● Experience with model quantization, optimization, and conversion techniques (FP32 to INT8, ONNX, TorchScript) for efficient deployment, including edge devices. ● Deep understanding of inference pipelines, batch processing, and real-time AI deployment on both CPU and GPU. ● Strong MLOps knowledge with experience in version control, reproducible pipelines, continuous training, and model monitoring using tools like MLflow, DVC, and Kubeflow. ● Practical experience with scikit-learn, TensorFlow, and PyTorch for experimentation and production-ready AI solutions. ● Familiarity with data preprocessing, standardization, and knowledge graphs (nice to have). ● Strong analytical mindset with a passion for building robust, scalable AI solutions. ● Skilled in Python, writing clean, modular, and efficient code. ● Proficient in RESTful API development using Flask, FastAPI, etc., with integrated AI/ML inference logic. ● Experience with MySQL, MongoDB, and vector databases like FAISS, Pinecone, or Weaviate for semantic search. ● Exposure to Neo4j and graph databases for relationship-driven insights. ● Hands-on with Docker and containerization to build scalable, reproducible, and portable AI services. ● Up-to-date with the latest in GenAI, LLMs, Agentic AI, and deployment strategies. ● Strong communication and collaboration skills, able to contribute in cross-functional and fast-paced environments. Bonus Skills ● Experience with cloud deployments on AWS, GCP, or Azure, including model deployment and model inferencing. ● Working knowledge of Computer Vision and real-time analytics using OpenCV, YOLO, and similar tools.
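A minimal sketch of the FP32-to-INT8 conversion mentioned above (hand-rolled symmetric per-tensor quantization in NumPy, not any particular framework's calibration pipeline): the whole idea reduces to choosing a scale factor and checking the round-trip error.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map [-max|x|, +max|x|] onto [-127, 127].
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an FP32 approximation of the original tensor.
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)   # toy FP32 weight tensor
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print("max abs error:", np.abs(weights - restored).max())
```

Real toolchains (ONNX quantizers, TorchScript/PyTorch quantization) add per-channel scales, zero-points, and calibration data on top of this same arithmetic.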
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: AI Engineer Want to join a startup, but with the stability of a larger organization? Join our innovation team at HGS that's focused on building SaaS products. If you are a highly driven & passionate person who'd like to build highly scalable SaaS products in a startup type of environment, you're welcome to apply. The HGS Digital Innovation Team is designed to create products and solutions relevant for enterprises, discover innovations and to contextualize and experiment with them within a specific industry. This unit provides an environment for the exploration, development, and testing of Cloud-based Digital AI solutions. In addition to that, it also looks at rapid deployment at scale and sustainability of these solutions for target business impacts. Job Overview We are seeking an agile AI Engineer with a strong focus on both AI engineering and SaaS product development in a 0-1 product environment. This role is perfect for a candidate skilled in building and iterating quickly, embracing a fail-fast approach to bring innovative AI solutions to market rapidly. You will be responsible for designing, developing, and deploying SaaS products using advanced Large Language Models (LLMs) such as Meta, Azure OpenAI, Claude, and Mistral, while ensuring secure, scalable, and high-performance architecture. Your ability to adapt, iterate, and deliver in fast-paced environments is critical. Responsibilities Lead the design, development, and deployment of SaaS products leveraging LLMs, including platforms like Meta, Azure OpenAI, Claude, and Mistral. Support the product lifecycle, from conceptualization to deployment, ensuring seamless integration of AI models with business requirements and user needs. Build secure, scalable, and efficient SaaS products that embody robust data management and comply with security and governance standards. Collaborate closely with product management and other stakeholders to align AI-driven SaaS solutions with business strategies and customer expectations. Fine-tune AI models using custom instructions to tailor them to specific use cases and optimize performance through techniques like quantization and model tuning. Architect AI deployment strategies using cloud-agnostic platforms (AWS, Azure, Google Cloud), ensuring cost optimization while maintaining performance and scalability. Apply retrieval-augmented generation (RAG) techniques to build AI models that provide contextually accurate and relevant outputs. Build the integration of APIs and third-party services into the SaaS ecosystem, ensuring robust and flexible product architecture. Monitor product performance post-launch, iterating and improving models and infrastructure to enhance user experience and scalability. Stay current with AI advancements, SaaS development trends, and cloud technology to apply innovative solutions in product development. Qualifications Bachelor’s degree or equivalent in Information Systems, Computer Science, or related fields. 6+ years of experience in product development, with at least 2 years focused on AI-based SaaS products. Demonstrated experience in leading the development of SaaS products, from ideation to deployment, with a focus on AI-driven features. Hands-on experience with LLMs (Meta, Azure OpenAI, Claude, Mistral) and SaaS platforms. Proven ability to build secure, scalable, and compliant SaaS solutions, integrating AI with cloud-based services (AWS, Azure, Google Cloud). Strong experience with RAG model techniques and fine-tuning AI models for business-specific needs. 
Proficiency in AI engineering, including machine learning algorithms, deep learning architectures (e.g., CNNs, RNNs, Transformers), and integrating models into SaaS environments. Solid understanding of SaaS product lifecycle management, including customer-focused design, product-market fit, and post-launch optimization. Excellent communication and collaboration skills, with the ability to work cross-functionally and drive SaaS product success. Knowledge of cost-optimized AI deployment and cloud infrastructure, focusing on scalability and performance.
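A minimal sketch of the retrieval step behind the RAG techniques listed in the posting above: the embed() helper below is hypothetical (any embedding model or API would play that role), and the assembled prompt would then be sent to whichever LLM the product uses (Azure OpenAI, Claude, etc.).

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding call; in practice this would invoke an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)           # unit-norm so dot product == cosine similarity

docs = [
    "Refund requests are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1):
    q = embed(query)
    scores = doc_vecs @ q                   # cosine similarity against every document
    top = np.argsort(scores)[::-1][:k]      # indices of the k best matches
    return [docs[i] for i in top]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)                               # this prompt is what the LLM would receive
```

A production pipeline swaps the brute-force dot product for a vector index (FAISS, Pinecone, Weaviate) but keeps the same embed–retrieve–prompt shape.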
Posted 1 week ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios Job Role: Python Machine Learning & AI Developer Experience Required: 3+ Years We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential. Key Responsibilities Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML. Required Skills and Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. 3+ years of professional experience as a Python Developer. Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks. Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. Contact: 9875952836 Office Location: F273, Phase 8b Industrial Area Mohali, Punjab. Job Type: Full-time Schedule: Day shift Monday to Friday Work Location: In person
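A minimal sketch of the kind of DRF endpoint described above, wrapping model inference behind a REST API. The predict() helper and the InferenceView name are illustrative stand-ins for a real, already-loaded PyTorch or scikit-learn model.

```python
# views.py -- minimal Django REST Framework endpoint around a model call.
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status

def predict(features):
    # Placeholder for the real model call, e.g. model(torch.tensor(features)).item()
    return sum(features) / len(features)

class InferenceView(APIView):
    def post(self, request):
        features = request.data.get("features")
        if not isinstance(features, list) or not features:
            return Response({"error": "features must be a non-empty list"},
                            status=status.HTTP_400_BAD_REQUEST)
        score = predict([float(x) for x in features])
        return Response({"score": score})
```

The view would be routed in urls.py (for example with path("infer/", InferenceView.as_view())), and the placeholder swapped for the deployed model or a call out to TorchServe/SageMaker.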
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Total - 3 to 5 Yrs of Experience Role & Responsibilities Agentic AI Development: Design and develop multi-agent conversational frameworks with adaptive decision-making capabilities. Integrate goal-oriented reasoning and memory components into agents using transformer-based architectures. Build negotiation-capable bots with real-time context adaptation and recursive feedback processing. Generative AI & Model Optimization: Fine-tune LLMs/SLMs using proprietary and domain-specific datasets (NBFC, Financial Services, etc.). Apply distillation and quantization for efficient deployment on edge devices. Benchmark LLM/SLM performance on server vs. edge environments for real-time use cases. Speech and Conversational Intelligence: Implement contextual dialogue flows using speech inputs with emotion and intent tracking. Evaluate and deploy advanced Speech-to-Speech (S2S) models for naturalistic voice responses. Work on real-time speaker diarization and multi-turn, multi-party conversation tracking. Voice Biometrics & AI Security: Train and evaluate voice biometric models for secure identity verification. Implement anti-spoofing layers to detect deepfakes, replay attacks, and signal tampering. Ensure compliance with voice data privacy and ethical AI guidelines. Self-Learning & Autonomous Adaptation: Develop frameworks for agents to self-correct and adapt using feedback loops without full retraining. Enable low-footprint learning systems on-device to support personalization on the edge. Ideal Candidate Educational Qualifications: Bachelor’s/Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Experience Required: 3–8 years of experience, with a mix of core software development and AI/ML model engineering. Proven hands-on work with Conversational AI, Generative AI, or Multi-Agent Systems. Technical Proficiency: Strong programming in Python, TensorFlow/PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.). Expertise in STT, TTS, S2S, speaker diarization, and speech emotion recognition. LLM fine-tuning, model optimization (quantization, distillation), RAG pipelines. Understanding of agentic frameworks, cognitive architectures, or belief-desire-intention (BDI) models. Familiarity with Edge AI deployment, low-latency model serving, and privacy-compliant data pipelines. Desirable: Exposure to agent-based simulation, reinforcement learning, or behavioral modeling. Publications, patents, or open-source contributions in conversational AI or GenAI systems.
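A minimal sketch of the distillation step mentioned above (the standard temperature-scaled KL recipe in PyTorch, not this team's actual pipeline): a compact student is trained against softened teacher logits, blended with ordinary cross-entropy on the ground-truth labels.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: usual cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 4 examples, 10 classes; real logits would come from the two models.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```

Pairing this with post-training quantization of the resulting student is what typically makes the edge-deployment targets in the posting achievable.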
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
neoBIM is a well-funded start-up software company revolutionizing the way architects design buildings with our innovative BIM (Building Information Modelling) software. As we continue to grow, we are building a small and talented team of developers to drive our software forward. Tasks We are looking for a highly skilled Generative AI Developer to join our AI team. The ideal candidate should have strong expertise in deep learning, large language models (LLMs), multimodal AI, and generative models (GANs, VAEs, Diffusion Models, or similar techniques). This role offers the opportunity to work on cutting-edge AI solutions, from training models to deploying AI-driven applications that redefine automation and intelligence. Develop, fine-tune, and optimize Generative AI models, including LLMs, GANs, VAEs, Diffusion Models, and Transformer-based architectures. Work with large-scale datasets and design self-supervised or semi-supervised learning pipelines. Implement multimodal AI systems that combine text, images, audio, and structured data. Optimize AI model inference for real-time applications and large-scale deployment. Build AI-driven applications for BIM (Building Information Modeling), content generation, and automation. Collaborate with data scientists, software engineers, and domain experts to integrate AI into production. Stay ahead of AI research trends and incorporate state-of-the-art methodologies. Deploy models using cloud-based ML pipelines (AWS/GCP/Azure) and edge computing solutions. Requirements Must-Have Skills Strong programming skills in Python (PyTorch, TensorFlow, JAX, or equivalent). Experience in training and fine-tuning Large Language Models (LLMs) like GPT, BERT, LLaMA, or Mixtral. Expertise in Generative AI techniques, including Diffusion Models (e.g., Stable Diffusion, DALL-E, Imagen), GANs, VAEs. Hands-on experience with transformer-based architectures (e.g., Vision Transformers, BERT, T5, GPT, etc.). Experience with MLOps frameworks for scaling AI applications (Docker, Kubernetes, MLflow, etc.). Proficiency in data preprocessing, feature engineering, and AI pipeline development. Strong background in mathematics, statistics, and optimization related to deep learning. Good-to-Have Skills Experience in NeRFs (Neural Radiance Fields) for 3D generative AI. Knowledge of AI for Architecture, Engineering, and Construction (AEC). Understanding of distributed computing (Ray, Spark, or Tensor Processing Units). Familiarity with AI model compression and inference optimization (ONNX, TensorRT, quantization techniques). Experience in cloud-based AI development (AWS/GCP/Azure). Benefits Work on high-impact AI projects at the cutting edge of Generative AI. Competitive salary with growth opportunities. Access to high-end computing resources for AI training & development. A collaborative, research-driven culture focused on innovation & real-world impact. Flexible work environment with remote options.
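A minimal sketch of the ONNX-based inference optimization listed above under good-to-have skills: a toy PyTorch model is exported with a dynamic batch axis and then run through ONNX Runtime. The model and file name are illustrative only.

```python
import torch
import torch.nn as nn

# Toy model standing in for a real trained network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
dummy = torch.randn(1, 16)

# Export to ONNX with a dynamic batch dimension so one graph serves any batch size.
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"input": np.random.randn(8, 16).astype(np.float32)})
print(out[0].shape)  # (8, 4)
```

From here, TensorRT or INT8 quantization of the exported graph is the usual next step when latency budgets are tight.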
Posted 1 week ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Description: AI/ML Specialist We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential. Key Responsibilities Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML. Required Skills and Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. 3+ years of professional experience as an AI/ML Specialist. Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks. Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. Skills: Artificial Intelligence (AI), pandas, Natural Language Processing (NLP), NumPy, Machine Learning (ML), TensorFlow, PyTorch and Python
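A minimal sketch of the pruning and quantization optimizations named above, applied to a toy model with PyTorch's built-in utilities (illustrative only; real work would prune/quantize the trained production model and re-validate accuracy):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# L1 unstructured pruning: zero out the 30% smallest-magnitude weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # bake the pruning mask into the weights

# Post-training dynamic quantization: weights stored as INT8, activations quantized at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```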
Posted 1 week ago
3.0 - 5.0 years
16 - 20 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
AI/ML-focused Software Developer with 3+ YOE in SDLC and 1+ year in conversational AI (speech-to-text, emotion recognition, agentic systems). Skilled in fine-tuning LLMs/SLMs, RAG, quantization, and distillation for production use, with an R&D focus. Required Candidate Profile: Expert in Python, PyTorch, TensorFlow, Hugging Face, LangChain, OpenAI APIs. Background in agent-based modeling, RL, GenAI. Holds CS/AI degrees with contributions via publications or open-source work.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Size Large-scale / Global Experience Required 3 - 5 years Working Days 6 days/week Office Location Viman Nagar, Pune Role & Responsibilities Agentic AI Development: Design and develop multi-agent conversational frameworks with adaptive decision-making capabilities. Integrate goal-oriented reasoning and memory components into agents using transformer-based architectures. Build negotiation-capable bots with real-time context adaptation and recursive feedback processing. Generative AI & Model Optimization: Fine-tune LLMs/SLMs using proprietary and domain-specific datasets (NBFC, Financial Services, etc.). Apply distillation and quantization for efficient deployment on edge devices. Benchmark LLM/SLM performance on server vs. edge environments for real-time use cases. Speech and Conversational Intelligence: Implement contextual dialogue flows using speech inputs with emotion and intent tracking. Evaluate and deploy advanced Speech-to-Speech (S2S) models for naturalistic voice responses. Work on real-time speaker diarization and multi-turn, multi-party conversation tracking. Voice Biometrics & AI Security: Train and evaluate voice biometric models for secure identity verification. Implement anti-spoofing layers to detect deepfakes, replay attacks, and signal tampering. Ensure compliance with voice data privacy and ethical AI guidelines. Self-Learning & Autonomous Adaptation: Develop frameworks for agents to self-correct and adapt using feedback loops without full retraining. Enable low-footprint learning systems on-device to support personalization on the edge. Ideal Candidate Educational Qualifications: Bachelor’s/Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Experience Required: 3–8 years of experience, with a mix of core software development and AI/ML model engineering. Proven hands-on work with Conversational AI, Generative AI, or Multi-Agent Systems. Technical Proficiency: Strong programming in Python, TensorFlow/PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.). Expertise in STT, TTS, S2S, speaker diarization, and speech emotion recognition. LLM fine-tuning, model optimization (quantization, distillation), RAG pipelines. Understanding of agentic frameworks, cognitive architectures, or belief-desire-intention (BDI) models. Familiarity with Edge AI deployment, low-latency model serving, and privacy-compliant data pipelines. Desirable: Exposure to agent-based simulation, reinforcement learning, or behavioral modeling. Publications, patents, or open-source contributions in conversational AI or GenAI systems. Perks, Benefits and Work Culture Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India. Skills: conversational AI, Python, TensorFlow, PyTorch, LLM fine-tuning, edge AI deployment, optimization, speech, model optimization, LLM, speech-to-speech (S2S), privacy-compliant data pipelines, generative AI, speaker diarization, speech emotion recognition, multi-agent systems
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
We are seeking a skilled Python and SQL Developer to join our dynamic team. The ideal candidate will have a strong background in Python programming and SQL database management. Develop and maintain Python-based applications and scripts. Write efficient SQL queries for data extraction and manipulation. Collaborate with cross-functional teams to gather requirements and deliver solutions. Familiarity with Linux operating systems. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud). Knowledge of model quantization and pruning. Experience in a Data Scientist role. Solid development experience in data science architecture. Experience in application architecture and design of Java-based applications. Good knowledge of architecture and related technologies. Experience in integration technologies and architecture. Working knowledge of frontend and database technologies. Excellent analytical and debugging skills. Familiarity with Agile & DevSecOps, Log Analytics, APM. Experience in leading teams technically. Experience in requirements gathering, analysis & design, and estimation. Good communication and articulation skills. In-depth knowledge of design issues and best practices. Solid understanding of object-oriented programming. Familiar with various design and architectural patterns and software development processes. Experience with both external and embedded databases. Creating database schemas that represent and support business processes. Implementing automated testing platforms and unit tests. Good verbal and written communication skills. Ability to communicate with remote teams in an effective manner. High flexibility to travel. Soft Skills Good verbal & written communication skills – articulate value of AI to business, project managers & other team members. Ability to break complex problems into smaller problems and create hypotheses. Innovation and experimentation.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Qualcomm India Private Limited Job Area Engineering Group, Engineering Group > Software Engineering General Summary Job Description Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds! Responsibilities In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force. Requirements Master’s/Bachelor’s degree in computer science or equivalent. 3+ years of relevant work experience in software development. Strong understanding of Generative AI models – LLMs, LVMs, LMMs – and their building blocks. Floating-point and fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (like CPU/GPU/NPU). Strong development skills in C/C++. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests. Preferred Qualifications Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development. Familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and Assembly is a plus. Minimum Qualifications Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). 
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3073034
Posted 1 week ago