2.0 years
0 Lacs
Delhi, India
On-site
This role is for one of Weekday's clients
Min Experience: 2 years
Location: Gurugram, Delhi NCR, Noida (Uttar Pradesh)
Job Type: Full-time

About the Role:
We are seeking a passionate and skilled AI/ML Engineer with 2+ years of experience to join our growing technology team. The ideal candidate will have a strong foundation in machine learning, data processing, and model deployment. You will develop and implement cutting-edge AI and ML models to solve real-world problems and contribute to intelligent systems across our product suite. You'll collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to build scalable and robust ML-powered applications.

Key Responsibilities:
🔹 Model Development & Deployment
- Design, build, and train machine learning models to support core product features.
- Experiment with supervised, unsupervised, and deep learning algorithms to solve business challenges.
- Deploy models into production environments and monitor their performance.
🔹 Data Handling & Feature Engineering
- Collect, clean, preprocess, and analyze large volumes of structured and unstructured data.
- Engineer features that enhance model performance and align with product requirements.
- Ensure data quality and consistency across the pipeline.
🔹 Model Optimization
- Evaluate model performance using relevant metrics (precision, recall, F1-score, ROC-AUC, etc.).
- Optimize models for speed and accuracy using hyperparameter tuning, ensemble methods, or transfer learning.
🔹 Collaboration & Integration
- Collaborate with backend and frontend teams to integrate AI/ML capabilities into products.
- Build APIs and services that expose ML models for consumption across applications.
- Work closely with data engineers and DevOps to deploy and scale ML pipelines.
🔹 Research & Innovation
- Stay updated on the latest trends, tools, and frameworks in AI/ML.
- Prototype and test innovative AI solutions, including NLP, computer vision, recommendation systems, and time-series forecasting.
- Document methodologies, findings, and technical processes clearly.

Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
- 2+ years of hands-on experience in AI/ML model development and deployment.
- Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch, or XGBoost.
- Experience with data-manipulation tools like Pandas and NumPy, and visualization libraries such as Matplotlib or Seaborn.
- Understanding of the ML lifecycle, including data preprocessing, model building, evaluation, and deployment.
- Familiarity with cloud platforms (AWS, GCP, Azure) and container technologies like Docker is a plus.
- Knowledge of REST APIs and integration of ML models with applications.
- Excellent problem-solving and analytical skills.
- Strong communication skills and the ability to explain complex ML concepts to non-technical stakeholders.

Preferred (Nice to Have):
- Experience with Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning.
- Exposure to MLOps, model monitoring, or CI/CD pipelines.
- Familiarity with data-versioning and ML-lifecycle tools like DVC, MLflow, or Kubeflow.
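The evaluation metrics named under Model Optimization can be stated precisely in a few lines. Below is a dependency-free sketch; in practice scikit-learn's `precision_score`, `recall_score`, `f1_score`, and `roc_auc_score` compute the same quantities, and the toy labels here are made up for illustration.

```python
# Minimal reference implementations of the metrics listed above.
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)   # of predicted positives, how many are real
    recall = tp / (tp + fn)      # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

def roc_auc(y_true, y_score):
    # Probability that a random positive outranks a random negative.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]                    # hard predictions
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]  # predicted probabilities
print(precision_recall_f1(y_true, y_pred))  # all three are 0.8 on this toy data
print(roc_auc(y_true, y_prob))              # 14/15, i.e. about 0.933
```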
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Where elite tech talent meets world-class opportunities!

At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT, including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts, and we're opening the doors to an exclusive group of exceptional AI & ML professionals ready to solve real-world problems and shape the future of intelligent systems.

Structured Onboarding Process
We ensure every member is aligned and empowered:
- Screening: We review your application and experience in Data & AI, ML engineering, and solution delivery.
- Technical Assessment: A two-step process that includes an interactive problem-solving test and a verbal interview about your skills and experience.
- Matching You to Opportunity: We explore how your skills align with ongoing projects and innovation tracks.

Who We're Looking For
We're looking for a Senior MLOps Engineer with deep expertise in the Databricks ecosystem to help us build and scale reliable, secure, and automated ML platforms across enterprise environments. You'll work closely with data scientists, ML engineers, DevOps teams, and cloud architects to implement and maintain production-grade machine learning infrastructure using best practices in MLOps, CI/CD, and cloud-native services. This is a hands-on technical leadership role, ideal for engineers who can work across the entire ML lifecycle, from experiment tracking to scalable deployment, while championing automation, governance, and performance.

If you're driven by curiosity and eager to influence how AI shapes the future, this is your platform.

Requirements
- 6+ years of professional experience in DevOps, DataOps, or MLOps roles.
- 3+ years of hands-on experience with Databricks, including Delta Lake, MLflow, and cluster/workflow administration.
- Strong experience in CI/CD, infrastructure as code (Terraform, GitOps), and Python-based automation.
- Solid understanding of ML lifecycle management, experiment tracking, model registries, and automated deployment pipelines.
- Deep knowledge of AWS (EKS, IAM, Lambda, CloudFormation or Terraform) and/or Azure (ADLS, Azure DevOps, ACR).
- Experience working with containerized environments, including Kubernetes and Helm.
- Familiarity with data governance and access-control frameworks such as Unity Catalog.
- Strong scripting and programming skills in Python, Shell, and YAML/JSON.

Benefits
At Xenon7, we're not just building AI systems; we're building a community of talent with the mindset to lead, collaborate, and innovate together.
- Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence.
- Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. You'll work alongside seasoned practitioners who value smart execution and shared growth.
- Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes, not hours. Autonomy, ownership, and curiosity are encouraged here.
- Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum; it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself.
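As an illustration of the "automated deployment pipelines" requirement, here is a minimal promotion-gate sketch of the kind a CI/CD pipeline might run before registering a new model version. The metric names, threshold, and `promote` function are illustrative assumptions, not any specific tool's API.

```python
# Hedged sketch: promote a candidate model to production only if it beats
# the current champion on held-out metrics. Real pipelines would pull these
# numbers from an experiment tracker (e.g. MLflow) instead of dicts.
def promote(champion_metrics, candidate_metrics, min_gain=0.01):
    """True when the candidate improves F1 by at least min_gain without
    degrading precision by more than min_gain."""
    f1_gain = candidate_metrics["f1"] - champion_metrics["f1"]
    precision_drop = champion_metrics["precision"] - candidate_metrics["precision"]
    return f1_gain >= min_gain and precision_drop <= min_gain

champion = {"f1": 0.82, "precision": 0.80}
candidate = {"f1": 0.85, "precision": 0.795}
print(promote(champion, candidate))  # True: +0.03 F1, only -0.005 precision
```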
Posted 2 weeks ago
3.0 - 4.0 years
7 - 10 Lacs
India
On-site
Job Description:
We are looking for a passionate and skilled AI Developer with 3–4 years of hands-on experience to join our dynamic team. The ideal candidate must have a strong foundation in Python and a proven track record of developing and deploying AI/ML solutions. You will be responsible for designing intelligent systems, training models, and collaborating with cross-functional teams to implement AI-driven features in our products.

Key Responsibilities:
- Design, develop, and deploy machine learning and deep learning models.
- Collaborate with data scientists and software engineers to build AI-powered applications.
- Perform data wrangling, preprocessing, and feature engineering on large datasets.
- Evaluate model performance using appropriate metrics and optimize accordingly.
- Integrate AI models into production using APIs or ML frameworks.
- Research and implement the latest AI technologies and best practices.
- Maintain and improve existing AI systems for performance and scalability.
- Document solutions and write clean, maintainable code.

Required Skills & Qualifications:
- 3–4 years of experience in AI/ML development.
- Strong proficiency in Python and its AI/ML libraries (e.g., NumPy, Pandas, scikit-learn, TensorFlow, PyTorch).
- Solid understanding of machine learning algorithms, data structures, and OOP principles.
- Experience with NLP, computer vision, or generative AI is a plus.
- Familiarity with model-deployment frameworks and tools (e.g., Flask, FastAPI, Docker).
- Experience with version control systems like Git.
- Good knowledge of databases (SQL/NoSQL) and data pipelines.
- Excellent problem-solving skills and attention to detail.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- Experience with cloud platforms like AWS, Azure, or GCP.
- Understanding of MLOps concepts and tools (e.g., MLflow, Kubeflow).
- Exposure to agile development environments.
Job Type: Full-time
Pay: ₹65,000.00 – ₹85,000.00 per month
Benefits: Flexible schedule
Location Type: In-person
Schedule: Day shift
Work Location: In person
Posted 2 weeks ago
3.0 years
25 - 30 Lacs
Bengaluru
On-site
About Us:
At HCL-GUVI, we are on a mission to empower organizations with cutting-edge AI and ML capabilities. We work closely with Fortune 500 companies to upskill their workforce through customized, high-impact training solutions. If you're passionate about AI/ML education and want to work with the R&D segment of one of the fastest-growing corporate training arms, we want to hear from you!

Job Title: Technical Subject Matter Expert – AI & Machine Learning
Location: On-site/Hybrid (based on corporate training location)
Organization: HCL-GUVI

Role Overview:
We are seeking a Subject Matter Expert (SME) in Artificial Intelligence and Machine Learning to lead and deliver corporate training programs. The ideal candidate will possess deep domain knowledge, real-world experience, and a strong passion for teaching and innovation.

Key Responsibilities:
● Training Delivery: Conduct high-quality, in-depth AI/ML training sessions for corporate employees (on-site or virtually).
● Curriculum Design: Collaborate with stakeholders to create and customize training modules tailored to business needs.
● Project Development: Design industry-relevant, state-of-the-art AI/ML/Generative AI projects and hands-on tasks.
● Assessment & Evaluation: Systematically evaluate learners' progress, conduct code reviews, and provide constructive feedback.
● R&D Collaboration: Work with HCL-GUVI's R&D teams to stay updated with emerging trends and technologies.
● Mentorship: Guide and mentor junior trainers, interns, or learners as needed.

Required Qualifications:
● Education:
○ Bachelor's/Master's in Computer Science, Artificial Intelligence, Applied Statistics, Data Science, or related fields.
○ Candidates from Tier-1 institutions (IITs, NITs, IIITs, IISc, etc.) preferred.
● Experience:
○ 3–5+ years of hands-on industry experience in AI, Machine Learning, or MLOps, covering both corporate project development and corporate training.
○ Experience training corporate employees or holding academic/training positions is a must.

Skills and Competencies:
● Technical Skills:
○ Strong understanding of Machine Learning, Deep Learning, Generative AI (LLMs, Transformers, etc.), and MLOps.
○ Expertise in Python, TensorFlow, PyTorch, scikit-learn, Hugging Face, OpenAI APIs, LangChain, etc.
○ Familiarity with cloud platforms (AWS, Azure, GCP) and ML pipelines.
● Training & Communication:
○ Excellent presentation, instructional design, and communication skills.
○ Proven ability to explain complex concepts in simple terms to a non-technical audience.
● Project Skills:
○ Ability to design real-world capstone projects, datasets, and evaluation metrics.
○ Experience with version control (Git), Docker, CI/CD for ML, and experiment-tracking tools (MLflow, Weights & Biases).

Preferred Skills:
● Experience with Generative AI, prompt engineering, Retrieval-Augmented Generation (RAG), and LLM fine-tuning.
● Exposure to data governance, ethical AI, and explainable AI (XAI) principles.
● Knowledge of corporate L&D processes and success metrics.

Why Join Us?
● Competitive salary aligned with industry standards.
● Access to cutting-edge R&D projects at the intersection of AI and enterprise solutions.
● Opportunity to collaborate with global clients and leading consulting firms (EY, Deloitte, etc.).
● High-visibility role with opportunities for growth and leadership.
● Engaging work culture that values innovation, learning, and impact.

Apply now and become a part of the future of AI-driven corporate learning!

Job Types: Full-time, Permanent
Pay: ₹2,500,000.00 – ₹3,000,000.00 per year
Benefits: Health insurance
Schedule: Day shift / Morning shift
Work Location: In person
Application Deadline: 10/07/2025
Expected Start Date: 21/07/2025
Posted 2 weeks ago
3.0 years
4 Lacs
Indore
On-site
About the Role:
We are looking for a highly skilled and forward-thinking AI/ML Engineer with 3–4 years of practical experience in building and deploying AI-powered solutions for industrial automation, computer vision, and LLM-based applications. The ideal candidate should have experience with the latest AI tools and frameworks, including LangChain, LangGraph, Vision Transformers, and MLOps on AWS (SageMaker), as well as expertise in building multi-agent chat applications with React agents and vector-based Retrieval-Augmented Generation (RAG) architectures.

Responsibilities:
- Design, train, and deploy AI/ML models for industrial automation, including computer vision systems using OpenCV and deep learning frameworks.
- Develop multi-agent chat applications integrating LLMs, React-based agents, and contextual memory.
- Implement Vision Transformers (ViTs) for advanced visual-understanding tasks.
- Utilize LangChain, LangGraph, and RAG techniques to create intelligent conversational systems with vector embeddings and document retrieval.
- Fine-tune pre-trained LLMs for custom enterprise use cases.
- Collaborate with frontend teams to build responsive, intelligent UIs using React with AI backends.
- Deploy AI solutions on AWS, leveraging SageMaker, Lambda, S3, and related MLOps tools for model lifecycle management.
- Ensure high performance, reliability, and scalability of deployed AI systems.

Required Skills:
- 3–4 years of hands-on experience in AI/ML engineering, preferably on industrial or automation-focused projects.
- Proficiency in Python and frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Strong understanding of LLMs (GPT, Claude, LLaMA, etc.), prompt engineering, and fine-tuning techniques.
- Experience with LangChain, LangGraph, and RAG-based architectures using vector databases like FAISS, Pinecone, or Weaviate.
- Expertise in Vision Transformers, YOLO, Detectron2, and computer vision techniques.
- Familiarity with multi-agent architectures, React agents, and building intelligent UIs with frontend-backend synergy.
- Working knowledge of AWS services (SageMaker, Lambda, EC2, S3) and MLOps workflows (CI/CD for ML).
- Experience deploying and maintaining models in production environments.

Preferred Qualifications:
- Experience with edge AI, NVIDIA Jetson, or industrial IoT integration.
- Prior involvement in developing AI-powered chatbots or assistants with memory and tool integration.
- Exposure to containerization (Docker) and model-versioning tools like MLflow or DVC.
- Contributions to open-source AI projects or published research in AI/ML.

Job Type: Full-time
Pay: From ₹412,334.30 per year
Benefits: Health insurance, Paid sick time, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Indore, Madhya Pradesh (reliably commute or plan to relocate before starting work; required)
Work Location: In person
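A hedged sketch of the retrieval step in the RAG architectures mentioned above: embed a query, rank stored chunks by cosine similarity, and hand the top hits to the LLM as context. At scale a vector database (FAISS, Pinecone, Weaviate) replaces this brute-force scan, and `embed()` is a toy keyword-count stand-in for a real embedding model.

```python
import math

def embed(text):
    # Toy 4-dimensional "embedding": counts of a few keywords. Real systems
    # use a trained embedding model instead; the vocab here is made up.
    vocab = ["sensor", "camera", "invoice", "defect"]
    return [text.lower().count(word) for word in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Camera defect detection on the assembly line",
    "Invoice parsing and approval workflow",
    "Sensor calibration procedure",
]
index = [(doc, embed(doc)) for doc in documents]  # the "vector store"

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The top hit would be prepended to the LLM prompt as grounding context.
print(retrieve("How do we detect a defect with the camera?")[0])
```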
Posted 2 weeks ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Job Description for Databricks Platform Administrator
Experience Level: 8–12 years
Job Title: Databricks Platform Administrator

Job Summary:
The Databricks Platform Administrator is a key member of our data and analytics team, responsible for the design, implementation, configuration, maintenance, and optimization of the Databricks Lakehouse Platform. This role ensures the platform is scalable, performant, secure, and aligned with business objectives, enabling data engineering, data science, and machine learning initiatives and providing essential support to data engineers, data scientists, and analysts. The administrator works closely with cross-functional teams to understand requirements, provide technical solutions, and maintain best practices for the Databricks environment.

Key Responsibilities:
1. Provision and configure Databricks workspaces, clusters, pools, and jobs across environments.
2. Create catalogs, schemas, access controls, and lineage configurations.
3. Implement identity and access management using account groups, workspace-level permissions, and data-level governance.
4. Monitor platform health, cluster utilization, job performance, and cost using Databricks admin tools and observability dashboards.
5. Automate workspace onboarding, schema creation, user/group assignments, and external-location setup using Terraform, APIs, or the CLI.
6. Integrate with Azure services such as ADLS Gen2, Azure Key Vault, Azure Data Factory, and Azure Synapse.
7. Support model serving, the feature store, and MLflow lifecycle management for Data Science/ML teams.
8. Manage secrets, tokens, and credentials securely using Databricks Secrets and integration with Azure Key Vault.
9. Define and enforce tagging policies, data masking, and row-level access control using Unity Catalog and attribute-based access control (ABAC).
10. Ensure compliance with enterprise policies, security standards, and audit requirements.
11. Coordinate with Ops architects and Cloud DevOps teams for network, authentication (e.g., SSO), and VNet setup.
12. Troubleshoot workspace, job, cluster, or permission issues for end users and data teams.

Preferred Qualifications:
- Databricks Certified Associate Platform Administrator or other relevant Databricks certifications.
- Experience with Apache Spark and data engineering concepts.
- Knowledge of monitoring tools (e.g., Splunk, Grafana, cloud-native monitoring).
- Familiarity with data warehousing and data lake concepts.
- Experience with other big data technologies (e.g., Hadoop, Kafka).
- Previous experience leading or mentoring junior administrators.
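The API-driven automation described above (provisioning clusters, enforcing tagging policies) can be sketched as building a cluster spec for the Databricks Clusters REST API (`POST /api/2.0/clusters/create`). The top-level field names follow the public API; the runtime version, node type, sizing values, and tags below are illustrative assumptions, not recommendations.

```python
import json

def cluster_spec(name, min_workers=1, max_workers=4, idle_minutes=30):
    # Assemble a cluster-create payload with autoscaling, auto-termination
    # for cost control, and the custom tags a tagging policy might require.
    return {
        "cluster_name": name,
        "spark_version": "14.3.x-scala2.12",   # example Databricks runtime
        "node_type_id": "Standard_DS3_v2",     # example Azure VM type
        "autoscale": {"min_workers": min_workers, "max_workers": max_workers},
        "autotermination_minutes": idle_minutes,
        "custom_tags": {"team": "data-eng", "env": "dev"},  # tagging policy
    }

spec = cluster_spec("etl-dev")
print(json.dumps(spec, indent=2))
# To actually create the cluster, POST this JSON to
# https://<workspace-url>/api/2.0/clusters/create with a bearer token, or
# express the same spec as a Terraform databricks_cluster resource.
```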
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer – Voice, NLP, and GenAI Systems
Location: Sector 63, Gurgaon – 100% in-office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM to 8:00 PM
Experience: 2–6 years in AI/ML, NLP, or applied machine learning engineering
Apply at: careers@darwix.ai
Subject Line: Application – AI Engineer – [Your Name]

About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite (Transform+, Sherpa.ai, and Store Intel) powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names, such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement.

Role Overview
As an AI Engineer, you will play a key role in designing, developing, and scaling the AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams.

Key Responsibilities

AI & NLP System Development
- Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization.
- Integrate and customize speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data.
- Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations.

LLM & Prompt Engineering
- Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge-retrieval use cases.
- Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules.
- Develop internal tools for model evaluation and prompt-performance tracking.

Productionization & Integration
- Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI).
- Optimize inference time and resource utilization for real-time and batch processing needs.
- Implement monitoring and logging for production ML systems to track drift and failure cases.

Data & Evaluation
- Work with audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance.
- Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks.
- Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency.

Collaboration & Research
- Work closely with product managers to translate business problems into model-design requirements.
- Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI.
- Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders.

Required Skills & Qualifications
- 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice-AI applications.
- Strong programming skills in Python, including libraries such as PyTorch, TensorFlow, and Hugging Face Transformers.
- Experience with speech processing, audio feature extraction, or STT pipelines.
- Solid understanding of NLP tasks: tokenization, embeddings, NER, summarization, intent detection, and sentiment analysis.
- Familiarity with deploying models as APIs and integrating them with production backend systems.
- Good understanding of data pipelines, preprocessing techniques, and scalable model architectures.

Preferred Qualifications
- Prior experience with multilingual NLP systems or models tuned for Indian languages.
- Exposure to RAG pipelines, embedding search (e.g., FAISS, Pinecone), and vector databases.
- Experience working with voice analytics, diarization, or conversational scoring frameworks.
- Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment).
- Experience in SaaS product environments serving enterprise clients.

Success in This Role Means
- Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention.
- Inference pipelines optimized for enterprise-scale deployments with high availability.
- New features and improvements delivered quickly to drive direct business impact.
- AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients.

You Will Excel in This Role If You
- Love building AI systems that create measurable value in the real world, not just in research labs.
- Enjoy solving messy, real-world data problems and working with multilingual and noisy data.
- Are passionate about voice and NLP, and constantly follow advancements in GenAI.
- Thrive in a fast-paced, high-ownership environment where ideas quickly become live features.

How to Apply
Email your updated CV to careers@darwix.ai with the subject line "Application – AI Engineer – [Your Name]". Optionally, share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production.

This is an opportunity to build foundational AI systems at one of India's fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams, Darwix AI wants to hear from you.
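One of the metrics the role tracks, WER (word error rate), has a compact reference implementation: word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words. The example transcript pair is invented.

```python
def wer(reference, hypothesis):
    """Word error rate between a reference transcript and an STT hypothesis."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# 3 edits (drop "a", fix "two morrow") over 5 reference words -> 0.6
print(wer("book a call for tomorrow", "book call for two morrow"))
```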
Posted 2 weeks ago
3.0 - 6.0 years
15 - 25 Lacs
Bengaluru
Hybrid
The Opportunity Are you passionate about building intelligent, enterprise-grade AI systems using Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and agentic frameworks? At Nutanix, we're looking for a skilled and experienced AI/ML engineer to help shape the future of generative AI within our SaaS Engineering organization. As a senior member of our team, youll work at the cutting edge of AI innovationdeveloping and deploying state-of-the-art LLMs and embedding models, optimizing model performance, and building scalable ML pipelines with real-world impact. About the Team At Nutanix, you will be joining a dynamic central platform team that plays a pivotal role in revolutionizing our approach to artificial intelligence and machine learning within the SaaS Engineering group. Comprising eight experienced engineers, our team specializes in addressing the GenAI, machine learning, and data science needs of various squads within the organization. Our diverse skill set ensures we collaborate effectively to create innovative solutions, leveraging the latest advancements in technology to drive our initiatives forward. Your Role Design and deploy Retrieval-Augmented Generation (RAG) pipelines . Build, fine-tune, and deploy LLMs and embedding models such as LLaMA 3 , Gemma , Mistral , and other domain-specific transformers. Fine-tune both LLMs and embedding models for specialized enterprise tasks including Q&A, summarization, classification, and conversational AI. Develop and maintain agentic frameworks capable of orchestrating task-specific intelligent agents with memory, planning, and tool-use capabilities. Build and evaluate custom agents for use cases like document analysis, data querying, and interactive user support. Implement evaluation frameworks for LLM outputs, including both automated metrics and task-specific success criteria. 
Work closely with data engineering teams to develop custom training pipelines and extract meaningful insights from large-scale internal datasets.
Develop MLOps pipelines for training, deployment, and monitoring using tools like MLflow, Kubeflow, and custom CI/CD workflows.
Deploy optimized inference endpoints for high-performance, low-latency model serving at scale.
Manage vectorization workflows using advanced embedding models and vector databases for semantic search and content retrieval.
Demonstrate working knowledge of LangChain, OpenAI function-calling, vector databases, and scalable retrieval logic.
Work with Kubernetes clusters to provision, scale, and monitor AI/ML workloads; understand GPU, CPU, and storage hardware requirements for efficient deployment.
Collaborate with cross-functional teams including backend, data, and infrastructure engineers to integrate models seamlessly into production systems.
What You Will Bring
Bachelor's, Master's, or Ph.D. in Computer Science, Machine Learning, Applied Math, or a related field.
5+ years of hands-on experience building, deploying, and maintaining AI/ML systems in production environments.
Strong foundation in MLOps, including model versioning, CI/CD, monitoring, and retraining workflows.
In-depth understanding of Kubernetes (K8s) and GPU-based infrastructure, including container orchestration and GPU scheduling for AI workloads.
Experience working with Elasticsearch for semantic search and integrating it within RAG or LLM-driven architectures.
Proficient in Python (core ML libraries like PyTorch, Pandas, and NumPy).
Hands-on experience using Jupyter Notebooks for experimentation, documentation, and collaboration.
Comfortable with Unix-based systems, shell scripting, and command-line tooling for ML operations and debugging.
Familiarity with LangChain, LLM orchestration, and vector database integration.
Strong collaboration and communication skills, with the ability to mentor junior team members and drive initiatives independently. Open-source contributions or published work in the ML/AI domain is a plus.
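The "evaluation frameworks for LLM outputs" this posting asks for usually combine automated metrics with task-specific criteria. One common automated metric for extractive Q&A outputs is token-overlap F1; a self-contained sketch (a generic metric, not Nutanix's actual framework):

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common automated metric for extractive QA outputs."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    if not pred or not ref:
        return 0.0
    # Count overlapping tokens, respecting multiplicity in the reference
    ref_counts = {}
    for t in ref:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the quick brown fox", "the brown fox"), 3))
```

In practice such a metric sits alongside task-specific checks (groundedness against retrieved context, refusal handling) rather than replacing them.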
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role name: Automation Test Lead (AI/ML)
Years of exp: 5 - 8 yrs
About Dailoqa
Dailoqa's mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in the UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry.
Working at Dailoqa will be hard work; our environment is fluid and fast-moving, and you'll be part of a community that values innovation, collaboration, and relentless curiosity.
We're looking for people who:
Are proactive, curious, adaptable, and patient.
Shape the company's vision and will have a direct impact on its success.
Have the opportunity for fast career growth.
Have the opportunity to participate in the upside of an ultra-growth venture.
Have fun 🙂
Don't apply if:
You want to work on a single layer of the application.
You prefer to work on well-defined problems.
You need clear, pre-defined processes.
You prefer a relaxed and slow-paced environment.
Role Overview
As an Automation Test Lead at Dailoqa, you'll architect and implement robust testing frameworks for both software and AI/ML systems. You'll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance.
Key Responsibilities
Test Automation Strategy & Framework Design
Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions.
Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy.
Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization).
Continuous Testing & CI/CD Integration
Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD.
Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing).
AI/ML Model Validation
Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow.
Validate model drift and retraining pipelines to ensure consistent performance in production.
Quality Metrics & Reporting
Define and track KPIs:
Test coverage (code, data, scenarios)
Defect leakage rate
Automation ROI (time saved vs. maintenance effort)
Model accuracy thresholds
Report risks and quality trends to stakeholders in sprint reviews.
Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation).
Technical Requirements
Must-Have
5–8 years in test automation, with 2+ years validating AI/ML systems.
Expertise in:
Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter
CI/CD: Jenkins, GitHub Actions, GitLab
AI/ML testing: model validation, drift detection, GenAI output evaluation
Languages: Python, Java, or JavaScript
Certifications: ISTQB Advanced, CAST, or equivalent.
Experience with MLOps tools: MLflow, Kubeflow, TFX
Familiarity with vector databases (Pinecone, Milvus) and RAG workflows.
Strong programming/scripting experience in JavaScript, Python, Java, or similar
Experience with API testing, UI testing, and automated pipelines
Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation
Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias
Familiarity with MLOps pipelines and automated validation of model performance in production
Exposure to Agile/Scrum methodology and tools like Azure Boards
Soft Skills
Strong problem-solving skills for balancing speed and quality in fast-paced AI development.
Ability to communicate technical risks to non-technical stakeholders.
Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
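The "model accuracy thresholds" KPI above is typically enforced as an automated quality gate in the CI pipeline: a build is promoted only if every tracked metric clears its floor. A minimal, hypothetical sketch of such a gate (the metric names and threshold values are invented examples):

```python
def quality_gate(metrics: dict, thresholds: dict) -> list:
    """Return the names of metrics that fall below their minimum thresholds.

    An empty list means the model build may be promoted; a non-empty list
    fails the CI stage and names the offending metrics.
    """
    return [name for name, floor in thresholds.items()
            if metrics.get(name, 0.0) < floor]

# Example evaluation run vs. the gate configuration (illustrative numbers)
run = {"precision": 0.91, "recall": 0.84, "f1": 0.87}
gates = {"precision": 0.90, "recall": 0.85, "f1": 0.85}
print(quality_gate(run, gates))
```

In a real pipeline the same check would be a unit test invoked by Jenkins or GitHub Actions, so a metric regression blocks the merge just like a failing functional test.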
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
We are seeking a results-driven AI (Python) Team Lead, on an immediate basis, who can guide a team of AI engineers and independently manage AI/ML projects, particularly in adverse media analysis, risk intelligence, and data-driven compliance solutions. The ideal candidate will have a solid foundation in Python-based AI/ML development and hands-on experience working with unstructured data sources for screening, classification, and risk identification. Candidates who can join immediately will be preferred.
Responsibilities
Lead a team of AI/ML engineers focused on building intelligent systems, particularly for adverse media detection and analysis.
Develop and deploy scalable AI/ML models using Python to analyze structured and unstructured media sources.
Build and optimize NLP pipelines for entity extraction, sentiment analysis, and news classification.
Architect and implement solutions that classify and score entities (individuals or companies) based on risk.
Translate compliance and regulatory requirements into AI-powered systems.
Independently handle project lifecycles from scoping and design to delivery and optimization.
Mentor junior team members and enforce best practices in AI development.
Conduct regular performance reviews, knowledge sharing, and skill development sessions within the team.
Collaborate with cross-functional teams including Data Associates and QA.
Document AI systems and model decisions, and ensure auditability of outputs.
Qualifications and Required Skills
Bachelor's or Master's in Computer Science with AI specialization.
4 years of hands-on experience in Python-based AI/ML development.
Strong experience in adverse media, KYC, AML, or regulatory intelligence.
Proficiency in NLP, text classification, named entity recognition (NER), topic modeling, and sentiment analysis.
Strong command of Python libraries such as spaCy, Transformers (Hugging Face), Scikit-learn, Pandas, and NumPy.
Experience handling multilingual media sources and identifying fake, biased, or non-reputable sources.
Familiarity with risk scoring methodologies, sanctions screening, or negative news processing.
Preferred Skills:
Experience working with third-party adverse media feeds or OSINT platforms.
Exposure to ML pipelines (MLflow, Airflow) and MLOps practices.
Familiarity with graph databases and link analysis for network detection (e.g., Neo4j, NetworkX).
Experience working with Tor network data, onion sources, or dark web crawlers.
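The "news classification" piece of an adverse-media pipeline is, at its simplest, a supervised text classifier. A toy sketch with scikit-learn (the four training headlines and their adverse/neutral labels are made up for illustration; a real system would train on a labeled corpus and add NER and entity resolution on top):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = adverse media, 0 = neutral news
texts = [
    "company fined for money laundering scheme",
    "executive charged with fraud and bribery",
    "firm opens new office and hires graduates",
    "company reports strong quarterly earnings",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["regulator investigates bank for laundering"])[0])
```

Risk scoring would then aggregate per-article predictions (and their confidences) into an entity-level score.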
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Digital: Data Science
Required Technical Skill Set: Digital: Data Science
Desired Experience Range: 03 - 08 yrs
Notice Period: Immediate to 90 days only
Location of Requirement: Hyderabad/Bangalore/Pune/Chennai/Kolkata
We are currently planning to do a virtual interview.
Job Description:
Must-Have (ideally no more than 3-5):
Proficiency in Python or R for data analysis and modeling.
Strong understanding of machine learning algorithms (regression, classification, clustering, etc.).
Experience with SQL and working with relational databases.
Hands-on experience with data wrangling, feature engineering, and model evaluation techniques.
Experience with data visualization tools like Tableau, Power BI, or matplotlib/seaborn.
Strong understanding of statistics and probability.
Ability to translate business problems into analytical solutions.
Good-to-Have:
Experience with deep learning frameworks (TensorFlow, Keras, PyTorch).
Knowledge of big data platforms (Spark, Hadoop, Databricks).
Experience deploying models using MLflow, Docker, or cloud platforms (AWS, Azure, GCP).
Familiarity with NLP, computer vision, or time series forecasting.
Exposure to MLOps practices for model lifecycle management.
Understanding of data privacy and governance concepts.
Responsibility of / Expectations from the Role:
Work with stakeholders to identify business requirements and opportunities for leveraging data.
Design and implement advanced analytics models using machine learning and statistical techniques.
Clean, process, and analyze large datasets from various sources (structured & unstructured).
Develop, test, and deploy predictive models and data pipelines.
Create data visualizations, dashboards, and reports to communicate insights to non-technical users.
Collaborate with data engineers, analysts, and product teams to integrate models into production environments.
Monitor and continuously improve model performance.
Stay up-to-date with the latest research, tools, and techniques in data science and AI.
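The "data wrangling and feature engineering" skills requested above often amount to aggregating raw records into model-ready features. A small pandas sketch (the transaction table and column names are hypothetical):

```python
import pandas as pd

# Toy transactions table; in practice this would come from SQL or a data lake
df = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "amount": [10.0, 30.0, 5.0, 5.0, 20.0],
})

# Aggregate per-customer features of the kind fed into a model
features = df.groupby("customer")["amount"].agg(
    total="sum", mean_amount="mean", n_txn="count"
).reset_index()
print(features)
```

Features engineered this way would then go through train/test splitting and model evaluation downstream.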
Posted 2 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Lead Data Scientist
We are looking for candidates with 8+ years of experience for this role.
Job Description
A minimum of 8 years of professional experience, with at least 6 years in a data science role.
Strong knowledge of statistical modeling, machine learning, deep learning, and GenAI.
Proficiency in Python and hands-on experience optimizing code for performance.
Experience with data preprocessing, feature engineering, data visualization, and hyperparameter tuning.
Solid understanding of database concepts and experience working with large datasets.
Experience deploying and scaling machine learning models in a production environment.
Familiarity with machine learning operations (MLOps) and related tools.
Good understanding of Generative AI concepts and LLM fine-tuning.
Excellent communication and collaboration skills.
Responsibilities include:
Lead a high-performance team; guide and mentor them on the latest technology landscape, patterns, and design standards, and prepare them to take on new roles and responsibilities.
Provide strategic direction and technical leadership for AI initiatives, guiding the team in designing and implementing state-of-the-art AI solutions.
Lead the design and architecture of complex AI systems, ensuring scalability, reliability, and performance.
Lead the development and deployment of machine learning/deep learning models to address key business challenges.
Apply statistical modeling, data preprocessing, feature engineering, machine learning, and deep learning techniques to build and improve models.
Utilize expertise in at least two of the following areas: computer vision, predictive analytics, natural language processing, time series analysis, recommendation systems.
Design, implement, and optimize data pipelines for model training and deployment.
Experience with model serving frameworks (e.g., TensorFlow Serving, TorchServe, KServe, or similar).
Design and implement APIs for model serving and integration with other systems.
Collaborate with cross-functional teams to define project requirements, develop solutions, and communicate results.
Mentor junior data scientists, providing guidance on technical skills and project execution.
Stay up-to-date with the latest advancements in data science and machine learning, particularly in generative AI, and evaluate their potential applications.
Communicate complex technical concepts and analytical findings to both technical and non-technical audiences.
Serve as a primary point of contact for client managers and liaise frequently with internal stakeholders to gather data or inputs needed for project work.
Certifications: Bachelor's or Master's degree in a quantitative field such as statistics, mathematics, computer science, or a related area.
Primary Skills:
Python
Data Science concepts
Pandas, NumPy, Matplotlib
Artificial Intelligence
Statistical Modeling
Machine Learning, Natural Language Processing (NLP), Deep Learning
Model Serving Frameworks (e.g., TensorFlow Serving, TorchServe)
MLOps (e.g., MLflow, TensorBoard, Kubeflow)
Computer Vision, Predictive Analytics, Time Series Analysis, Anomaly Detection, Recommendation Systems (at least two)
Generative AI, RAG, Fine-tuning (LoRA, QLoRA)
Proficiency in any of the cloud computing platforms (e.g., AWS, Azure, GCP)
Secondary Skills:
Expertise in designing scalable and efficient model architectures is crucial for developing robust AI solutions.
Ability to assess and forecast the financial requirements of data science projects ensures alignment with budgetary constraints and organizational goals.
Strong communication skills are vital for conveying complex technical concepts to both technical and non-technical stakeholders.
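The hyperparameter tuning mentioned in this posting is commonly automated with cross-validated grid search. A compact scikit-learn sketch (the SVC parameter grid here is an arbitrary illustration, not a recommended search space):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Exhaustively evaluate each parameter combination with 5-fold cross-validation
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

For larger spaces, randomized or Bayesian search scales better than an exhaustive grid, but the refit-on-best-params workflow is the same.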
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
About the Role:
We are looking for a highly skilled and forward-thinking AI/ML Engineer with 3–4 years of practical experience in building and deploying AI-powered solutions for industrial automation, computer vision, and LLM-based applications. The ideal candidate should have experience with the latest AI tools and frameworks including LangChain, LangGraph, Vision Transformers, and MLOps on AWS (SageMaker), as well as expertise in building multi-agent chat applications with React agents and vector-based RAG (Retrieval-Augmented Generation) architectures.
Responsibilities:
Design, train, and deploy AI/ML models for industrial automation, including computer vision systems using OpenCV and deep learning frameworks.
Develop multi-agent chat applications integrating LLMs, React-based agents, and contextual memory.
Implement Vision Transformers (ViTs) for advanced visual understanding tasks.
Utilize LangChain, LangGraph, and RAG techniques to create intelligent conversational systems with vector embeddings and document retrieval.
Fine-tune pre-trained LLMs for custom enterprise use cases.
Collaborate with frontend teams to build responsive, intelligent UIs using React + AI backends.
Deploy AI solutions on AWS Cloud, leveraging SageMaker, Lambda, S3, and related MLOps tools for model lifecycle management.
Ensure high performance, reliability, and scalability of deployed AI systems.
Required Skills
3–4 years of hands-on experience in AI/ML engineering, preferably with industrial or automation-focused projects.
Proficiency in Python and frameworks like PyTorch, TensorFlow, and Scikit-learn.
Strong understanding of LLMs (GPT, Claude, LLaMA, etc.), prompt engineering, and fine-tuning techniques.
Experience with LangChain, LangGraph, and RAG-based architectures using vector databases like FAISS, Pinecone, or Weaviate.
Expertise in Vision Transformers, YOLO, Detectron2, and computer vision techniques.
Familiarity with multi-agent architectures, React agents, and building intelligent UIs with frontend-backend synergy.
Working knowledge of AWS services (SageMaker, Lambda, EC2, S3) and MLOps workflows (CI/CD for ML).
Experience deploying and maintaining models in production environments.
Qualifications:
Experience with edge AI, NVIDIA Jetson, or industrial IoT integration.
Prior involvement in developing AI-powered chatbots or assistants with memory and tool integration.
Exposure to containerization (Docker) and model versioning tools like MLflow or DVC.
Contributions to open-source AI projects or published research in AI/ML.
Job Type: Full-time
Pay: From ₹412,334.30 per year
Benefits:
Health insurance
Paid sick time
Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
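The "agents with memory and tool integration" this posting describes can be reduced to a small core: route each message to a tool, record the exchange in memory. A deliberately minimal, framework-free sketch (keyword routing stands in for real LLM-driven tool selection; the temperature tool and its reply are invented):

```python
class Agent:
    """Minimal tool-using agent with conversational memory (illustrative only)."""

    def __init__(self, tools):
        self.tools = tools      # name -> callable
        self.memory = []        # full conversation transcript

    def handle(self, message: str) -> str:
        self.memory.append(("user", message))
        # Naive routing: pick the first tool whose name appears in the message.
        # A real agent would let an LLM choose the tool and its arguments.
        for name, fn in self.tools.items():
            if name in message.lower():
                reply = fn(message)
                break
        else:
            reply = "No matching tool."
        self.memory.append(("agent", reply))
        return reply

agent = Agent({"temperature": lambda m: "Line 3 temperature is 78 C"})
print(agent.handle("What is the temperature on line 3?"))
print(len(agent.memory))
```

Frameworks like LangGraph replace the naive routing loop with an LLM planner and a graph of agent nodes, but the memory-plus-tools skeleton is the same.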
Posted 2 weeks ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we, as a Takeda team, can discover and deliver life-transforming treatments, guided by our commitment to patients, our people, and the planet. People join Takeda because they share in our purpose. And they stay because we're committed to an inclusive, safe, and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.
Job ID R0150071
Date posted 07/07/2025
Location Bengaluru, Karnataka
I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India's epicenter of innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.
At Takeda's ICC We Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
About the Role
We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda's technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda's global mission of delivering transformative therapies to patients worldwide.
How You Will Contribute
Architect scalable and secure machine learning systems that integrate with Takeda's enterprise platforms, including R&D, manufacturing, and clinical trial operations.
Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks.
Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
Collaborate with Takeda's Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda's ethical AI standards in system deployment.
Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.
Skills and Qualifications
Education
Bachelor's, Master's, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.
Experience
At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.
Technical Skills
Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
Strong programming skills in Python and experience with machine learning libraries such as Scikit-learn, XGBoost, LightGBM, and TensorFlow.
Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
Familiarity with Databricks tools and services for scalable data workflows and model management.
Expertise in building robust observability and monitoring systems to track ML systems in production.
Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).
Soft Skills
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
Leadership ability to mentor teams and drive architecture-standardization initiatives.
Ability to manage projects independently and advocate for AI/ML adoption across Takeda.
Preferred Qualifications
Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
Publications or contributions to AI research or MLOps tooling communities.
WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.
BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
Competitive salary + performance annual bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health & wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
5 days of leave every year for voluntary service, in addition to humanitarian leave
Broad variety of learning platforms
Diversity, Equity, and Inclusion programs
No Meeting Days
Reimbursements: home internet & mobile phone
Employee Referral Program
Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)
ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day.
As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization. #Li-Hybrid Locations IND - Bengaluru Worker Type Employee Worker Sub-Type Regular Time Type Full time
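The drift monitoring called out in the Takeda role (tracking model performance as production data shifts away from training data) is often scored with the Population Stability Index. A NumPy sketch of PSI (the common rule-of-thumb thresholds in the docstring are an assumption, not part of the posting):

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training (expected) and a
    production (actual) feature distribution.

    Assumed rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
prod_same = rng.normal(0.0, 1.0, 10_000)      # no drift
prod_shifted = rng.normal(0.8, 1.0, 10_000)   # mean shifted by 0.8 sigma
print(round(psi(train, prod_same), 3), round(psi(train, prod_shifted), 3))
```

In the monitoring stack the posting describes, a PSI value per feature would be exported as a metric (e.g., to Prometheus) and alerted on when it crosses the chosen threshold.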
Posted 2 weeks ago
10.0 - 12.0 years
10 - 12 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
Workspace Management: Create and manage Databricks workspaces, ensuring proper configuration and access control.
User & Identity Management: Administer user roles, permissions, and authentication mechanisms.
Cluster Administration: Configure, monitor, and optimize Databricks clusters for efficient resource utilization.
Security & Compliance: Implement security best practices, including data encryption, access policies, and compliance adherence.
Performance Optimization: Troubleshoot and resolve performance issues related to Databricks workloads.
Integration & Automation: Work with cloud platforms (AWS, Azure, GCP) to integrate Databricks with other services.
Monitoring & Logging: Set up monitoring tools and analyze logs to ensure system health.
Data Governance: Manage Unity Catalog and other governance tools for structured data access.
Collaboration: Work closely with data engineers, analysts, and scientists to support their workflows.
Qualifications:
Proficiency in Python or Scala for scripting and automation.
Knowledge of cloud platforms (AWS).
Familiarity with Databricks Delta Lake and MLflow.
Understanding of ETL processes and data warehousing concepts.
Strong problem-solving and analytical skills.
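For illustration only, the cluster administration work above typically involves submitting definitions like the following to the Databricks Clusters REST API. The field names follow that API; every value here (instance type, runtime version, worker counts, tags) is an invented example, not a recommendation:

```json
{
  "cluster_name": "etl-shared-autoscale",
  "spark_version": "13.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "autoscale": { "min_workers": 2, "max_workers": 8 },
  "autotermination_minutes": 30,
  "spark_conf": { "spark.sql.shuffle.partitions": "200" },
  "custom_tags": { "team": "data-platform", "cost-center": "example" }
}
```

Autoscaling bounds plus an auto-termination timeout are the two levers an admin most often tunes to balance job latency against idle compute cost.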
Posted 2 weeks ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to shape the future of work?
At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant - Databricks Architect!
In this role, the Databricks Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Responsibilities
Architect and design solutions to meet functional and non-functional requirements.
Create and review architecture and solution design artifacts.
Evangelize re-use through the implementation of shared assets.
Enforce adherence to architectural standards/principles, global product-specific guidelines, usability design standards, etc.
Proactively guide engineering methodologies, standards, and leading practices.
Guide engineering staff and review as-built configurations during the construction phase.
Provide insight and direction on roles and responsibilities required for solution operations. . Identify, communicate and mitigate Risks, Assumptions, Issues, and Decisions throughout the full lifecycle. . Considers the art of the possible, compares various architectural options based on feasibility and impact, and proposes actionable plans. . Demonstrate strong analytical and technical problem-solving skills. . Ability to analyze and operate at various levels of abstraction. . Ability to balance what is strategically right with what is practically realistic. . Growing the Data Engineering business by helping customers identify opportunities to deliver improved business outcomes, designing and driving the implementation of those solutions. . Growing & retaining the Data Engineering team with appropriate skills and experience to deliver high quality services to our customers. . Supporting and developing our people, including learning & development, certification & career development plans . Providing technical governance and oversight for solution design and implementation . Should have technical foresight to understand new technology and advancement. . Leading team in the definition of best practices & repeatable methodologies in Cloud Data Engineering, including Data Storage, ETL, Data Integration & Migration, Data Warehousing and Data Governance . Should have Technical Experience in Azure, AWS & GCP Cloud Data Engineering services and solutions. . Contributing to Sales & Pre-sales activities including proposals, pursuits, demonstrations, and proof of concept initiatives . Evangelizing the Data Engineering service offerings to both internal and external stakeholders . Development of Whitepapers, blogs, webinars and other though leadership material . Development of Go-to-Market and Service Offering definitions for Data Engineering . Working with Learning & Development teams to establish appropriate learning & certification paths for their domain. . 
Expand the business within existing accounts and help clients, by building and sustaining strategic executive relationships, doubling up as their trusted business technology advisor. . Position differentiated and custom solutions to clients, based on the market trends, specific needs of the clients and the supporting business cases. . Build new Data capabilities, solutions, assets, accelerators, and team competencies. . Manage multiple opportunities through the entire business cycle simultaneously, working with cross-functional teams as necessary. Qualifications we seek in you! Minimum qualifications . Excellent technical architecture skills, enabling the creation of future-proof, complex global solutions. . Excellent interpersonal communication and organizational skills are required to operate as a leading member of global, distributed teams that deliver quality services and solutions. . Ability to rapidly gain knowledge of the organizational structure of the firm to facilitate work with groups outside of the immediate technical team. . Knowledge and experience in IT methodologies and life cycles that will be used. . Familiar with solution implementation/management, service/operations management, etc. . Leadership skills can inspire others and persuade. . Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. . Bachelor&rsquos Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience . Experience in a solution architecture role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms. . Experience in architecting and designing technical solutions for cloud-centric solutions based on industry standards using IaaS, PaaS, and SaaS capabilities. . Must have strong hands-on experience on various cloud services like ADF/Lambda, ADLS/S3, Security, Monitoring, Governance . 
- Experience designing platforms on Databricks.
- Hands-on experience designing and building Databricks-based solutions on any cloud platform.
- Hands-on experience designing and building solutions powered by DBT models and integrating them with Databricks.
- Very strong end-to-end solution design skills on cloud platforms.
- Good knowledge of data engineering concepts and the related cloud services.
- Good experience in Python and Spark.
- Good experience setting up development best practices.
- Intermediate-level knowledge of data modelling.
- Good to have: knowledge of Docker and Kubernetes.
- Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, RBAC, SSO, etc.
- Knowledge of cloud security controls, including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc.
- Experience building and supporting mission-critical technology components with DR capabilities.
- Experience with multi-tier system and service design and development for large enterprises.
- Extensive real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures, with a focus on cloud technologies.
- Exposure to infrastructure and application security technologies and approaches.
- Familiarity with requirements gathering techniques.

Preferred qualifications
- Must have designed the end-to-end architecture of a unified data platform covering all aspects of the data lifecycle: ingestion, transformation, serving, and consumption.
- Must have excellent coding skills in Python or Scala, preferably Python.
- Must have experience in the Data Engineering domain with total .
- Must have designed and implemented at least 2-3 projects end-to-end in Databricks.
- Must have experience with Databricks components, including:
  - Delta Lake
  - dbConnect
  - db API 2.0
  - SQL Endpoint (Photon engine)
  - Unity Catalog
  - Databricks Workflows orchestration
  - Security management
  - Platform governance
  - Data security
- Must have knowledge of new Databricks features, their implications, and their possible use cases.
- Must have applied architectural principles to design the solution best suited to each problem.
- Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a strong understanding of data warehousing and the governance and security standards around Databricks.
- Must have knowledge of cluster optimization and its integration with various cloud services.
- Must have a good understanding of building complex data pipelines.
- Must be strong in SQL and Spark SQL.
- Must have strong performance-optimization skills to improve efficiency and reduce cost.
- Must have designed both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit and integration tests.
- Must have strong communication skills and have worked with cross-platform teams.
- Must have a great attitude toward learning new skills and upskilling existing ones.
- Responsible for setting best practices around Databricks CI/CD.
- Must understand composable architecture to take fullest advantage of Databricks capabilities.
- Good to have: REST API knowledge.
- Good to have: an understanding of cost distribution.
- Good to have: experience on a migration project building a unified data platform.
- Good to have: knowledge of DBT.
- Experience with DevSecOps, including Docker and Kubernetes.
- Software development full-lifecycle methodologies, patterns, frameworks, libraries, and tools.
- Knowledge of programming and scripting languages such as JavaScript, PowerShell, Bash, SQL, Java, Python, etc.
- Experience with data ingestion technologies such as Azure Data Factory, SSIS, Pentaho, and Alteryx.
- Experience with visualization tools such as Tableau and Power BI.
- Experience with machine learning tools such as MLflow, Databricks AI/ML, Azure ML, AWS SageMaker, etc.
- Experience distilling complex technical challenges into actionable decisions for stakeholders, and guiding project teams by building consensus and mediating compromises when necessary.
- Experience coordinating the intersection of complex system dependencies and interactions.
- Experience in solution delivery using common methodologies, especially SAFe Agile but also Waterfall, Iterative, etc.
- Demonstrated knowledge of relevant industry trends and standards.

Why join Genpact
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
5.0 - 8.0 years
8 - 12 Lacs
Chennai
Work from Office
Role: MLOps Engineer | Location: Onsite, Chennai | Experience: 5-8 years | Notice: Immediate to 15 days

Required Skills
1. 5+ years' experience in MLOps/DevOps roles involving ML
2. Python and ML libraries
3. Docker & Kubernetes
4. CI/CD: Jenkins, GitLab
5. Cloud: Azure, AWS, GCP

Note: Tamil Nadu candidates only
Posted 2 weeks ago
0 years
6 - 8 Lacs
Bengaluru
On-site
Must Have Skills: Extensive knowledge of large language models, natural language processing techniques, and prompt engineering. Experience with testing and validation processes to ensure models' accuracy and efficiency in real-world scenarios. Experience designing, building, and deploying innovative applications utilizing Gen AI technologies, such as RAG (Retrieval-Augmented Generation) based chatbots or AI agents. Proficiency in programming using Python or Java. Familiarity with Oracle Cloud Infrastructure or similar cloud platforms. Effective communication and presentation skills. Analyzes problems, identifies solutions, and makes decisions.

Good to Have Skills: Experience with LLM architectures, model evaluation, and fine-tuning techniques. Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, TensorFlow, PyTorch, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc. Proficiency with databases (e.g., Oracle, MySQL), and with developing and executing AI over any of the cloud data platforms and their associated data stores, graph stores, vector stores, and pipelines. Understanding of the security and compliance requirements for ML/GenAI implementations.

Career Level: IC2/IC3

As a member of Oracle Cloud LIFT, you'll help guide our customers from concept to successful cloud deployment. You'll: shape architecture and solution design with best practices and experience; own the delivery of agreed workload implementations; validate and test deployed solutions; and conduct security assurance reviews. You'll work in a fast-paced, international environment, engaging with customers across industries and regions. You'll collaborate with peers, sales, architects, and consulting teams to make cloud transformation real. https://www.oracle.com/in/cloud/cloud-lift/
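The RAG pattern this role centers on can be sketched in miniature: retrieve the passages most similar to a question, then pack them into the prompt sent to the model. Below is a toy, self-contained Python illustration; term-frequency cosine similarity stands in for real embeddings, and all document contents and function names are invented for illustration, not taken from any specific framework.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a lowercased, whitespace-tokenised text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    qv = tf_vector(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, tf_vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble the retrieved passages and the question into one LLM prompt."""
    context = "\n".join("- " + d for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Delta Lake adds ACID transactions to data lakes.",
    "Kubernetes orchestrates containerised workloads.",
    "Unity Catalog governs data access in Databricks.",
]
prompt = build_prompt("What does Delta Lake add to a data lake?", docs)
```

A production system would swap the toy retriever for a vector store queried with dense embeddings, but the retrieve-then-prompt shape stays the same.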
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

JOB LOCATION: Bangalore, Mumbai, Pune, Gurgaon, Chennai, Hyderabad, Coimbatore, Noida

Job Description

Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.

Responsibilities

As an MLOps Engineer, you will work collaboratively with data scientists and data engineers to deploy and operate advanced analytics machine learning models. You'll help automate and streamline model development and model operations. You'll build and maintain tools for deployment, monitoring, and operations. You'll also troubleshoot and resolve issues in development, testing, and production environments. Enable model tracking, model experimentation, and model automation. Develop scalable ML pipelines. Develop MLOps components in the machine learning development lifecycle using a model repository (either of MLflow or Kubeflow Model Registry) and machine learning services (Kubeflow, DataRobot, HopsWorks, Dataiku, or any relevant end-to-end ML PaaS/SaaS). Work across all phases of the model development lifecycle to build MLOps components. Build the knowledge base required to deliver increasingly complex MLOps projects on the cloud (AWS, Azure, GCP) or on-prem. Be an integral part of client business development and delivery engagements across multiple domains.
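The model tracking and experimentation described above is normally delegated to MLflow or Kubeflow, but conceptually it reduces to logging parameters and metrics per run, then querying for the best run. A minimal stand-in sketch in plain Python (no real tracking server; the class and all run values are invented for illustration):

```python
class RunTracker:
    """Toy MLflow-style tracker: records params/metrics per run, picks the best."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        """Append one experiment run with its hyperparameters and results."""
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric):
        """Return the run that maximises the given metric."""
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"lr": 0.1, "depth": 3}, {"f1": 0.81})
tracker.log_run({"lr": 0.01, "depth": 5}, {"f1": 0.87})
best = tracker.best_run("f1")
```

With MLflow the same flow would use `mlflow.log_param`, `mlflow.log_metric`, and a search over runs; the point is that tracking is just structured bookkeeping around training loops.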
QUALIFICATIONS

Required qualifications:
- 3-5 years' experience building production-quality software
- Strong experience in system integration, application development, or data warehouse projects across technologies used in the enterprise space
- Basic knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g., Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines
- Database programming using any flavor of SQL
- Knowledge of Git for source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges and rapidly deliver innovative solutions
- Team handling, problem solving, project management, communication skills, and creative thinking
- Foundational knowledge of cloud computing on at least one of AWS, Azure, or GCP
- Hunger and passion for learning new skills

Education: B.E/B.Tech/M.Tech in Computer Science or a related technical degree, or equivalent.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that match your interests!
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-375 At Databricks, we are passionate about enabling data teams to solve the world's toughest problems - from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform, so our customers can use deep data insights to improve their business. Founded by engineers - and customer obsessed - we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. Databricks Mosaic AI offers a unique data-centric approach to building enterprise-quality Machine Learning and Generative AI solutions, enabling organizations to securely and cost-effectively own and host ML and Generative AI models, augmented or trained with their enterprise data. And we're only getting started in Bengaluru, India - we are currently setting up 10 new teams from scratch! As a Staff Software Engineer at Databricks India, you can work across Backend, DDS (Distributed Data Systems), and Full Stack. The Impact You'll Have: Our backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as: problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience; delivering reliable, high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3 and Azure Blob Store; and building reliable, scalable services (e.g., with Scala and Kubernetes) and data pipelines (e.g., with Apache Spark™ and Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, while developing product features that empower customers to easily view and control platform usage.
Our DDS team spans: Apache Spark™, Data Plane Storage, Delta Lake, Delta Pipelines, and Performance Engineering. As a Full Stack software engineer, you will work closely with your team and product management to bring that delight through a great user experience.

What We Look For: BS (or higher) in Computer Science or a related field. 10+ years of production-level experience in one of Python, Java, Scala, C++, or a similar language. Experience developing large-scale distributed systems from scratch. Experience working on a SaaS platform or with Service-Oriented Architectures.

About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse architecture, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.

Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description

Job Responsibilities: Collaborate with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for AI/ML model integration and optimization. Design and implement robust, scalable, and enterprise-grade data pipelines to support state-of-the-art AI/ML models. Debug, optimize, and enhance machine learning models, ensuring quality assurance and performance improvements. Operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments. Design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engage in advanced prompt engineering and fine-tuning of large language models (LLMs), focusing on semantic retrieval and chatbot development. Document model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Research and implement cutting-edge LLM optimization techniques, such as quantization and knowledge distillation, ensuring efficient model performance and reduced computational costs. Collaborate closely with stakeholders to develop innovative and effective natural language processing solutions, specializing in text classification, sentiment analysis, and topic modeling. Stay up-to-date with industry trends and advancements in AI technologies, integrating new methodologies and frameworks to continually enhance the AI engineering function.
Contribute to creating specialized AI solutions in healthcare, leveraging domain-specific knowledge for task adaptation.

Qualifications: Minimum education: Bachelor's degree in any engineering stream. Specialized training, certifications, and/or other special requirements: nice to have. Preferred education: Computer Science. Experience: minimum relevant experience of 4+ years in AI.

Skills and Competencies:
- Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow)
- Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques
- Experience with big data processing using Spark for large-scale data analytics
- Version control and experiment tracking using Git and MLflow
- Software Engineering & Development: advanced proficiency in Python, familiarity with Go or Rust, and expertise in microservices, test-driven development, and concurrent processing
- DevOps & Infrastructure: experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations
- LLM Infrastructure & Deployment: proficiency with LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management
- MLOps & Deployment: containerization strategies for ML workloads, experience with model serving tools such as TorchServe or TF Serving, and automated model retraining
- Cloud & Infrastructure: strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems
- LLM Project Experience: expertise in developing chatbots, recommendation systems, and translation services, and in optimizing LLMs for performance and security
- General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms such as AWS or GCP
Experience creating LLDs for the provided architecture. Experience working in microservices-based systems.

Expertise:
- Strong mathematical foundation in statistics, probability, linear algebra, and optimization
- Deep understanding of the ML and LLM development lifecycle, including fine-tuning and evaluation
- Expertise in feature engineering, embedding optimization, and dimensionality reduction
- Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing
- Experience with RAG systems, vector databases, and semantic search implementation
- Proficiency in LLM optimization techniques, including quantization and knowledge distillation
- Understanding of MLOps practices for model deployment

Competencies:
- Strong analytical thinking with the ability to solve complex ML challenges
- Excellent communication skills for presenting technical findings to diverse audiences
- Experience translating business requirements into data science solutions
- Project management skills for coordinating ML experiments and deployments
- Strong collaboration abilities for working with cross-functional teams
- Dedication to staying current with the latest ML research and best practices
- Ability to mentor and share knowledge with team members

(ref:hirist.tech)
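The quantization technique this posting lists trades a little precision for a large memory saving by mapping float weights to 8-bit integers. A minimal affine (asymmetric) quantization sketch in pure Python, for intuition only; production LLM serving uses per-channel or per-group schemes inside libraries such as bitsandbytes, and the sample weights here are invented:

```python
def quantize_int8(xs):
    """Affine 8-bit quantization: map floats onto integers in [0, 255]."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    q = [round((x - lo) / scale) for x in xs]
    return q, scale, lo

def dequantize_int8(q, scale, lo):
    """Recover approximate floats from the quantized integers."""
    return [v * scale + lo for v in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, scale, zero_point = quantize_int8(weights)
restored = dequantize_int8(q, scale, zero_point)

# Round-trip error is bounded by about half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The same idea applied to billions of parameters is what shrinks a 16-bit model to roughly a quarter of its memory footprint at 8 bits, at the cost of the small reconstruction error measured above.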
Posted 2 weeks ago
4.0 years
8 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Azure Databricks Engineer A fast-growing information technology & analytics services firm within the cloud data engineering sector, we architect and deliver high-throughput data platforms, real-time analytics, and AI-ready pipelines for Fortune 500 customers across finance, retail, and manufacturing. Leveraging the Microsoft Azure ecosystem and open-source big-data tooling, we transform massive datasets into actionable insight at enterprise scale. Role & Responsibilities Design, develop, and optimise scalable data pipelines on Azure Databricks using Spark, Delta Lake, and PySpark. Integrate diverse data sources (Azure Data Lake, SQL/NoSQL, REST APIs) into unified, high-quality datasets for analytics and BI. Implement job orchestration, cluster management, and automated CI/CD deployments through Azure DevOps and Terraform. Tune Spark jobs for performance, cost, and reliability, proactively monitoring with Azure Monitor and Log Analytics. Collaborate with data architects, analysts, and business stakeholders to translate requirements into technical solutions. Document standards, mentor junior engineers, and champion best practices in code quality, security, and governance. Skills & Qualifications Must-Have 4+ years building production data pipelines with Azure Databricks and Apache Spark. Strong proficiency in PySpark, SQL, and Delta Lake optimisation techniques. Hands-on experience with Azure Data Factory, Data Lake Storage Gen2, and ADLS security. Knowledge of DevOps pipelines, Git, and automated testing within Azure environments. Understanding of data modelling, partitioning, and performance tuning in big-data systems. Preferred Exposure to MLflow, Feature Store, or MLOps workflows on Databricks. Experience scripting infrastructure as code with Terraform or Bicep. Familiarity with real-time streaming (Kafka, Event Hubs) and Structured Streaming in Spark. Benefits & Culture Highlights Work on greenfield petabyte-scale projects for global brands. 
Continuous learning budget for certifications in Azure and Databricks. Collaborative high-ownership culture with fast career progression. Workplace Type: On-Site | Location: India Skills: mlflow,kafka,sql,bicep,git,azure,adls security,devops pipelines,delta lake,terraform,data modelling,etl,apache spark,mlops workflows,automated testing,pyspark,azure data factory,partitioning,feature store,structured streaming,azure databricks,performance tuning,data lake storage gen2,event hubs
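The pipelines this role describes typically follow a bronze/silver/gold (medallion) flow: ingest raw records, validate and type them, then aggregate for BI. A conceptual sketch on plain Python dicts; the schema and field names are invented for illustration, and a real pipeline would use PySpark DataFrames and Delta tables rather than lists:

```python
# "Bronze" layer: raw ingested records, possibly dirty (amounts arrive as strings).
raw_events = [
    {"order_id": "1", "amount": "120.50", "region": "south"},
    {"order_id": "2", "amount": "bad", "region": "south"},
    {"order_id": "3", "amount": "80.00", "region": "north"},
]

def to_silver(rows):
    """Clean and type the raw rows, dropping records that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            continue  # in a real pipeline, quarantine malformed rows instead
    return out

def to_gold(rows):
    """Aggregate cleaned rows into a BI-ready total per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(raw_events))
```

The same validate-then-aggregate shape maps directly onto Spark: `to_silver` becomes a typed read with schema enforcement into a Delta table, and `to_gold` becomes a `groupBy().sum()` materialised for dashboards.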
Posted 2 weeks ago
4.0 years
8 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Azure Databricks Engineer A fast-growing information technology & analytics services firm within the cloud data engineering sector, we architect and deliver high-throughput data platforms, real-time analytics, and AI-ready pipelines for Fortune 500 customers across finance, retail, and manufacturing. Leveraging the Microsoft Azure ecosystem and open-source big-data tooling, we transform massive datasets into actionable insight at enterprise scale. Role & Responsibilities Design, develop, and optimise scalable data pipelines on Azure Databricks using Spark, Delta Lake, and PySpark. Integrate diverse data sources (Azure Data Lake, SQL/NoSQL, REST APIs) into unified, high-quality datasets for analytics and BI. Implement job orchestration, cluster management, and automated CI/CD deployments through Azure DevOps and Terraform. Tune Spark jobs for performance, cost, and reliability, proactively monitoring with Azure Monitor and Log Analytics. Collaborate with data architects, analysts, and business stakeholders to translate requirements into technical solutions. Document standards, mentor junior engineers, and champion best practices in code quality, security, and governance. Skills & Qualifications Must-Have 4+ years building production data pipelines with Azure Databricks and Apache Spark. Strong proficiency in PySpark, SQL, and Delta Lake optimisation techniques. Hands-on experience with Azure Data Factory, Data Lake Storage Gen2, and ADLS security. Knowledge of DevOps pipelines, Git, and automated testing within Azure environments. Understanding of data modelling, partitioning, and performance tuning in big-data systems. Preferred Exposure to MLflow, Feature Store, or MLOps workflows on Databricks. Experience scripting infrastructure as code with Terraform or Bicep. Familiarity with real-time streaming (Kafka, Event Hubs) and Structured Streaming in Spark. Benefits & Culture Highlights Work on greenfield petabyte-scale projects for global brands. 
Continuous learning budget for certifications in Azure and Databricks. Collaborative high-ownership culture with fast career progression. Workplace Type: On-Site | Location: India Skills: mlflow,kafka,sql,bicep,git,azure,adls security,devops pipelines,delta lake,terraform,data modelling,etl,apache spark,mlops workflows,automated testing,pyspark,azure data factory,partitioning,feature store,structured streaming,azure databricks,performance tuning,data lake storage gen2,event hubs
Posted 2 weeks ago