8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At TekLink HGS Digital, our vision is to be the globally preferred digital transformation partner for our clients, creating value in their business through rigorous innovation at scale. We are an expert team of 500+ leading strategic thinkers, digital marketing and creative masters, data analysts, software engineers, and process optimization specialists with an elemental desire to create transformative digital solutions.

Job Title: Data Scientist | Location: Hyderabad, India | Duration: Full time

The Data Scientist will support our internal teams and clients in driving strategic decisions, applying advanced statistical and predictive analytics and machine learning concepts to solve business problems in the BFSI and CPG domains. You will also prepare requirements documents, contribute to the project plan, carry out data research and collection, study attributes and features, test for parameters, resolve data issues, decide on models, perform modeling and QA/testing, and showcase the findings in various formats for client consumption.

Responsibilities:
a) Analytics Requirements Definition: Works with business users to approve the requirements for the analytics solution.
b) Data Preparation: Reviews data preparation rules (data extraction, data integration, data granularity, data cleansing, etc.). Prepares data for analytical modelling. Guides data analysts and associate data scientists on data preparation activities.
c) Builds Machine Learning (ML) and statistical models using Python/R/Scala/SAS/SPSS.
d) Collaborates with clients and internal teams to define industry-leading analytics solutions for a wide variety of industries and business groups.
e) Develops proof-of-concepts and demos needed for client and internal presentations.
f) Creates clear functional and technical documentation.
g) Works agnostically across multiple industry sectors and functional domains, with focus on the BFSI and CPG domains.
h) Works closely with all stakeholders to identify, evaluate, design, and implement statistical and other quantitative approaches for modeling enterprise-scale data and big data.
i) Displays proficiency in converting algorithmic proofs of concept into business requirement documents for product development or data-driven actionable intelligence.

Minimum Requirements & Qualifications — the ideal candidate should have:
• Full-time degree in Mathematics, Statistics, Computer Science, or Computer Applications from a reputed institution (B.E./B.Tech., or MBA specialized in Marketing, Operations Research, Data Science, and/or Business Analytics)
• Overall 8+ years of technical experience in the IT industry across the BFSI and CPG domains
• Minimum of 5 years of hands-on work experience in Data Science/Advanced Analytics and Machine Learning using Python and SQL
• Practical experience, specifically around quantitative and analytical skills, is required
• People management skills and experience, and familiarity with the pharmaceutical industry, are preferred
• Knowledge of solution design, planning, and execution
• Ability to contribute to case studies, blogs, eBooks, and whitepapers
• Proficiency in maintaining strong project documentation hygiene
• Able to work fully within an automated MLOps mode
• Must have good communication skills – written, oral, presentation (PPT), and language skills
  o Able to translate statistical findings into business English
• Hands-on experience in one or more of the skillsets below:
  o Programming languages: R, Base SAS, Advanced SAS
  o Visualization tools: Tableau, MS Excel, think-cell, Power BI, Qlik Sense
  o Automation tools: VBA macros, Python scripts
• Basic understanding of NLP/NLU/NLG and text mining
• Skills/knowledge of advanced ML techniques with image processing and signal processing is a plus
• GenAI and multimodal GenAI skills with RAG development and fine-tuning
• Sound statistical training in linear and non-linear regression, weighted regression, clustering, and classification techniques
• Sound understanding of applied statistical methods including survival analysis, categorical data analysis, time series analysis, and multivariate statistics
• Introduction to classical statistics, including concepts in Bayesian statistics, experimental design, and inference theory
• Practical understanding of concepts in computer vision, data mining, machine learning, information retrieval, pattern recognition, and knowledge discovery
• Additional knowledge of WFM, biological learning systems, and modern statistical concepts is a plus
• Knowledge of IoT devices and solutions with multi-sensor data fusion is a plus
• Knowledge of geostatistics, information theory, and computational statistics is a plus
• Experience in character recognition with image, speech, and video analytics capabilities is a plus
• Working knowledge of or certifications in AWS/Azure/GCP is beneficial
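As a hedged illustration of the weighted regression skill listed above, here is a minimal scikit-learn sketch; the synthetic dataset and the inverse-variance weighting scheme are illustrative assumptions, not details from the posting.

# Minimal sketch: weighted linear regression with scikit-learn.
# The synthetic data and inverse-variance weights are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # three explanatory features
noise_scale = rng.uniform(0.5, 2.0, size=500)  # heteroscedastic noise
y = X @ np.array([1.5, -2.0, 0.7]) + rng.normal(scale=noise_scale)

weights = 1.0 / noise_scale**2                 # weight observations by inverse noise variance

model = LinearRegression()
model.fit(X, y, sample_weight=weights)         # weighted least-squares fit
print("coefficients:", model.coef_)
print("weighted R^2:", r2_score(y, model.predict(X), sample_weight=weights))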
Posted 4 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Are you passionate about leveraging data to drive impactful decisions? Join our company as a Lead Data Scientist and be at the forefront of innovative health solutions. Our mission is to save lives by advancing the prevention and treatment of diseases, and our team is dedicated to workforce analytics, serving HR and the workforce as our clients. As a key member of our team, you will design and implement advanced models, enhance AI/ML capabilities, and collaborate with various stakeholders to deliver actionable insights. If you are driven by curiosity and have a knack for solving complex problems, this is the perfect opportunity for you to make a difference. Your Core Responsibilities Design, develop, enhance, and implement models that delve deep into our workforce data, ensuring high standards of quality, relevance, and usability Boost AI/ML capability within the team, using the latest methods and tools to extract insights from text, activity, behavioral, and network data Develop and deploy solutions that are robust, scalable, and meet the needs of a diverse user base, including supporting an LLM-based app in production with thousands of users Collaborate with data scientists, data engineers, devops engineers, solution architects, and the data science community of practice, to amplify data science capabilities and drive innovation Work closely with our client-facing teams to address business needs, providing research solutions that are both insightful and actionable Act as an AI/ML expert, advising HR colleagues and end users on the best usage within the HR domain Technically lead and mentor a team of data scientists and data engineers, fostering a collaborative and innovative environment Manage research projects from inception to completion, ensuring agile delivery and alignment with business goals, while also managing relationships and influencing non-technical stakeholders Who You Are (Education minimum requirements subject to change based on country) You are ready if you have Minimum of 10 years of experience in data science or machine learning engineering with a Bachelor’s degree from an accredited institution in Computer Science, Data Science, Machine Learning, Statistics, or another related field. 
(With a Master's degree, the minimum experience is 8 years.)
Expertise in using Python, R, and SQL to execute a solid portfolio of data science projects involving statistical inference, classical machine learning, and deep learning frameworks
Solid understanding of NLP tools, methods, and pipeline design, including experience using large language models (LLMs)
Proven leadership in team project settings
Experience with cloud computing platforms, such as AWS and Databricks, and related tools
Familiarity with version control systems
Openness to coaching and learning from team members with different specializations
Exceptional initiative, curiosity, communication skills, and a team-first orientation
Demonstrated interest in projects focused on the workforce
Familiarity with product management and agile methodologies

Nice to have, but not essential:
MLOps experience is a big plus, and LLM app deployment experience is ideal
Ability to manage relationships, influence non-technical stakeholders, and tell a great data story
Understanding of HR data, processes, information systems, and governance
Ability to conduct literature reviews and leverage external research to stay on top of best practices in AI/ML and data science in human capital management

What we offer (the primary location is Czechia; benefits in other countries may vary):
Exciting work in a great team, global projects, international environment
Opportunity to learn and grow professionally within the company globally
Hybrid working model, flexible role pattern
Pension and health insurance contributions
Internal reward system plus referral programme
5 weeks annual leave, 5 sick days, 15 days of certified sick leave paid above statutory requirements annually, 40 paid hours annually for volunteering activities, 12 weeks of parental contribution
Cafeteria for tax-free benefits according to your choice (meal vouchers, Lítačka, sport, culture, health, travel, etc.), Multisport Card
Vodafone, Raiffeisen Bank, Foodora, and Mall.cz discount programmes
Up-to-date laptop and iPhone
Parking in the garage, showers, refreshments, library, music corner
Competitive salary, incentive pay, and many more

Ready to take up the challenge? Apply now! Know anybody who might be interested? Refer this job!

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Not Applicable
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Design, Data Engineering, Data Modeling, Data Science, Data Visualization, Machine Learning, Software Development, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/13/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date.
Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R321934
Posted 4 weeks ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India | Job ID: 763418

Join our Team
As the technology firm that created the mobile world, with a rich history of 145 years of building ground-breaking solutions and innovative technologies supported by 60,000+ patents, Ericsson has made it our business to make a mark. When joining our team at Ericsson you are empowered to learn, lead, and perform at your best, shaping the future of technology. This is a place where you are welcomed as your own perfectly unique self, and celebrated for the skills, talent, and perspective you bring to the team.

Ericsson Enterprise Wireless Solutions (BEWS) is the group responsible for leading Ericsson's Enterprise Networking and Security business. Our growing product portfolio spans wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN-based enterprise connectivity solutions and are growing fast in enterprise Private 5G networks and Secure Access Service Edge (SASE) solutions.

You will:
Develop scientific methods, processes, and systems to extract knowledge or insights to drive the future of applied analytics. Mine and analyze data from company databases to drive optimization and improvement of product development and business strategies. Assess the effectiveness of new data sources and data-gathering techniques. Develop custom data models and algorithms to apply to data sets. Use Generative AI and predictive modeling to enhance customer experiences, revenue generation, and other business outcomes.

You must have:
Solid understanding of statistics, e.g., hypothesis formulation, hypothesis testing, descriptive analysis, and data exploration (see the sketch after this posting). Ability to perform EDA and visualize the data. Aptitude and skills in machine learning, e.g., Natural Language Processing, Bayesian models, Deep Learning, and Large Language Models. Strong programming skills in Python and SQL. Strong understanding of DSA. Strong ambition to learn and implement current state-of-the-art machine learning frameworks such as scikit-learn, TensorFlow, PyTorch, and Spark. Familiarity with the Linux/OS X command line, version control software (git), and general software development. Familiarity with APIs. Experience in programming or scripting to enable ETL development. Familiarity with relational databases and Cloud (AWS). Understanding of Reinforcement Learning and Causal Inference is preferred.

Qualifications:
B.Tech, B.E., M.Tech, or MS in Computer Science, or a Master's in Mathematics/Statistics from a premium institute. Minimum 4–6 years of experience in a relevant role.

Why Ericsson Enterprise Wireless Solutions?
At Ericsson Enterprise Wireless Solutions, we are one team - all in on inclusion. Celebrating the uniqueness of our individual team members across the globe helps us build diverse teams where we all can thrive. Our connected, community-focused culture enables each one of us to perform at our best and fully be ourselves.

Please note: Ericsson Enterprise Wireless Solutions does not accept agency resumes and is not responsible for any fees related to unsolicited resumes. Please do not forward resumes to Ericsson Enterprise Wireless Solutions employees.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
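Since the requirements above call for hypothesis formulation and testing, here is a minimal, hedged sketch of a two-sample comparison with SciPy; the metric and the two groups are hypothetical stand-ins, not drawn from the posting.

# Minimal sketch: Welch's two-sample t-test with SciPy.
# The metric and the two user groups are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=12.0, scale=3.0, size=200)  # e.g., metric for a control group
group_b = rng.normal(loc=11.2, scale=3.0, size=200)  # e.g., metric for a treatment group

# H0: the two groups have equal means; Welch's test does not assume equal variances.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0 at the 5% level: the group means differ.")
else:
    print("Fail to reject H0: no significant difference detected.")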
Posted 4 weeks ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: AI/ML Engineer (Python + AWS + REST APIs) | Department: Web | Location: Indore | Job Type: Full-time | Experience: 3-5 years | Notice Period: immediate joiners preferred | Work Arrangement: On-site (Work from Office)

Overview
Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities
AI/ML Development: Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets.
API Development: Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls (a minimal sketch follows this posting). Integrate APIs with AWS Lambda and Amazon API Gateway.
AWS Integration: Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience
Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies
Python, PyTorch, TensorFlow; FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL (ref:hirist.tech)
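As a hedged illustration of the "model inference endpoint" pattern described above, here is a minimal FastAPI sketch; the model artifact, preprocessing choices, and label names are hypothetical placeholders, not details from the posting.

# Minimal sketch: an image-classification inference endpoint with FastAPI.
# "model.pt", the preprocessing, and the label list are illustrative assumptions.
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI(title="Image classification service")

LABELS = ["non_sensitive", "sensitive"]          # hypothetical sensitivity tags
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
model = torch.jit.load("model.pt")               # assumed TorchScript artifact
model.eval()

@app.post("/classify")
async def classify(file: UploadFile = File(...)):
    """Accept an uploaded image and return a predicted sensitivity label."""
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)       # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    predicted = int(logits.argmax(dim=1))
    return {"label": LABELS[predicted], "score": float(logits.softmax(dim=1).max())}

Run locally with, e.g., uvicorn main:app --reload; in the architecture the posting describes, a handler like this would sit behind Amazon API Gateway.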
Posted 4 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Motadata
Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data.

Position Overview
We are seeking a Senior Machine Learning Engineer to join our team, focused on enhancing our AIOps and IT Service Management (ITSM) product through the integration of cutting-edge AI/ML features and functionality. As part of our innovative approach to revolutionizing the IT industry, you will play a pivotal role in leveraging data analysis techniques and advanced machine learning algorithms to drive meaningful insights and optimize our product's performance. With a particular emphasis on end-to-end machine learning lifecycle management and MLOps, you will collaborate with cross-functional teams to develop, deploy, and continuously improve AI-driven solutions tailored to our customers' needs. From semantic search and AI chatbots to root cause analysis based on metrics, logs, and traces, you will have the opportunity to tackle diverse challenges and shape the future of intelligent IT operations.

Role & Responsibilities
Lead the end-to-end machine learning lifecycle: understand the business problem statement, convert it into an ML problem statement, and cover data acquisition, exploration, feature engineering, model selection, training, evaluation, deployment, and monitoring (MLOps). Lead a team of ML Engineers to solve the business problem, get it implemented in the product and validated by QA, and iterate based on customer feedback. Collaborate with product managers to understand business needs and translate them into technical requirements for AI/ML solutions. Design, develop, and implement machine learning algorithms and models, including but not limited to statistics, regression, classification, clustering, and transformer-based architectures. Preprocess and analyze large datasets to extract meaningful insights and prepare data for model training. Build and optimize machine learning pipelines for model training and inference using relevant frameworks. Fine-tune existing models and/or train custom models to address specific use cases. Enhance the accuracy and performance of existing AI/ML models through monitoring, iterative refinement, and optimization techniques. Collaborate closely with cross-functional teams to integrate AI/ML features seamlessly into our product, ensuring scalability, reliability, and maintainability. Document your work clearly and concisely for future reference and knowledge sharing within the team. Stay abreast of the latest developments in machine learning research and technology and evaluate their potential applicability to our product roadmap.

Skills and Qualifications
Bachelor's or higher degree in Computer Science, Engineering, Mathematics, or a related field. Minimum 5+ years of experience as a Machine Learning Engineer or in a similar role. Proficiency in data analysis techniques and tools to derive actionable insights from complex datasets. Solid understanding and practical experience with machine learning algorithms and techniques, including statistics, regression, classification, clustering, and transformer-based models. Hands-on experience with end-to-end machine learning lifecycle management and MLOps practices. Proficiency in programming languages such as Python and familiarity with at least one of the following: Java, Golang, .NET, Rust. Experience with machine learning frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn) and MLOps tools (e.g., MLflow, Kubeflow). Experience with ML.NET and other machine learning frameworks. Familiarity with natural language processing (NLP) techniques and tools. Excellent communication and teamwork skills, with the ability to effectively convey complex technical concepts to diverse audiences. Proven track record of delivering high-quality, scalable machine learning solutions in a production environment. (ref:hirist.tech)
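Given the MLOps tooling (MLflow, Kubeflow) named in the qualifications above, here is a minimal, hedged MLflow tracking sketch; the experiment name, synthetic dataset, and model choice are illustrative assumptions rather than anything specified by the posting.

# Minimal sketch: logging a training run to MLflow.
# The experiment name, synthetic dataset, and model settings are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

mlflow.set_experiment("aiops-demo")              # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=7).fit(X_train, y_train)

    mlflow.log_params(params)                    # record hyperparameters
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", accuracy)      # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")     # store the fitted model as an artifact
    print(f"logged run with accuracy={accuracy:.3f}")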
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for an AI/ML Engineer with expertise in Text-to-Speech (TTS) systems to train and optimize a Glow-TTS model for Indian languages, starting with Telugu or other Indian languages. The goal is to develop a high-quality, natural-sounding TTS system using datasets like AI4Bharat or other relevant sources.

Selected Intern's Day-to-day Responsibilities Include:
Dataset preparation & preprocessing: Identify and curate high-quality Telugu or other Indian-language speech datasets (AI4Bharat, IndicTTS, or custom datasets). Clean, normalize, and preprocess text and audio data (phoneme alignment, noise removal, sample rate standardization); see the preprocessing sketch after this posting.
Model training & optimization: Fine-tune Glow-TTS or Coqui TTS (or a comparable neural TTS architecture) for Telugu/other Indian-language speech synthesis. Ensure loss convergence by tuning hyperparameters (learning rate, batch size, duration predictors). Experiment with transfer learning from existing multilingual TTS models (if applicable).
GPU training & performance tuning (good to have): Optimize training for GPU efficiency (NVIDIA CUDA, mixed precision). Monitor validation loss, attention alignments, and speech quality (MOS testing). Debug training instability (vanishing gradients, overfitting, etc.).
Deployment & evaluation: Integrate the trained model into an inference pipeline (ONNX, TensorRT, or the PyTorch runtime). Benchmark latency, speech quality, and speaker similarity against existing TTS solutions.

About Company: Coinearth Technologies Pvt Ltd is a dynamic and innovative product-based company established in 2017. While some public records indicate a later incorporation date of 2020, its official communication states the founding year as 2017, suggesting a period of initial development and strategic planning before formal registration. Based in Hyderabad, Telangana, India, the company specializes in building and deploying cutting-edge applications, particularly in the Web3 and fintech sectors.

Core Focus: Product Development and Deployment. Coinearth Technologies primarily operates as a product company, focusing on creating proprietary software solutions rather than offering traditional IT services. Its expertise lies in the entire lifecycle of app development, from conceptualization and design to robust deployment and ongoing maintenance.
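As a hedged sketch of the audio preprocessing step mentioned above (sample-rate standardization, silence trimming, loudness normalization), here is a minimal librosa/soundfile example; the file paths and the 22.05 kHz target rate are illustrative assumptions.

# Minimal sketch: standardize a speech clip for TTS training.
# The input/output paths and the 22,050 Hz target sample rate are illustrative assumptions.
import librosa
import numpy as np
import soundfile as sf

TARGET_SR = 22050  # a common sample rate for neural TTS corpora

def preprocess_clip(in_path: str, out_path: str) -> None:
    """Load, resample, trim silence, and peak-normalize a single utterance."""
    audio, _ = librosa.load(in_path, sr=TARGET_SR)          # resample on load
    audio, _ = librosa.effects.trim(audio, top_db=30)       # strip leading/trailing silence
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = 0.95 * audio / peak                          # simple peak normalization
    sf.write(out_path, audio, TARGET_SR)

preprocess_clip("raw/utt_0001.wav", "clean/utt_0001.wav")    # hypothetical paths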
Posted 4 weeks ago
0.0 years
1 - 1 Lacs
Hyderabad, Telangana
Remote
Job Description

About the Role: Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What you'll do here:
Design & build backend components of our MLOps platform on AWS. Collaborate with geographically distributed cross-functional teams. Participate in an on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-have skills:
Experience with web development frameworks such as Flask, Django, or FastAPI. Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn, etc. Experience with concurrent programming designs such as AsyncIO (see the sketch after this posting). Experience with unit and functional testing frameworks. Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS. Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
Experience with Apache Kafka and developing Kafka client applications in Python. Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow. Experience with big data processing frameworks, preferably Apache Spark. Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS. Experience with DevOps & IaC tools such as Terraform, Jenkins, etc. Experience with various Python packaging options such as Wheel, PEX, or Conda. Experience with metaprogramming techniques in Python.

Skills Required: Python development (Flask, Django, or FastAPI); WSGI & ASGI web servers (Gunicorn, Uvicorn, etc.); AWS

Job Type: Contractual / Temporary | Contract length: 12 months | Pay: ₹100,000.00 - ₹150,000.00 per month | Location Type: Hybrid work | Schedule: Day shift | Work Location: Hybrid remote in Hyderabad, Telangana
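For the AsyncIO-based concurrency the must-have skills mention, here is a minimal hedged sketch of bounded concurrent I/O with asyncio; the simulated feature-store fetch and the concurrency limit of 5 are illustrative assumptions.

# Minimal sketch: bounded concurrency with asyncio.
# The simulated fetch delay and the limit of 5 in-flight requests are illustrative assumptions.
import asyncio
import random

async def fetch_features(entity_id: int, sem: asyncio.Semaphore) -> dict:
    """Pretend to call a feature-serving endpoint for one entity."""
    async with sem:                              # cap the number of in-flight requests
        await asyncio.sleep(random.uniform(0.05, 0.2))  # stand-in for network latency
        return {"entity_id": entity_id, "feature_x": random.random()}

async def main() -> None:
    sem = asyncio.Semaphore(5)                   # at most 5 concurrent fetches
    tasks = [fetch_features(i, sem) for i in range(20)]
    results = await asyncio.gather(*tasks)       # run all fetches concurrently
    print(f"fetched {len(results)} feature rows")

asyncio.run(main())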
Posted 4 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field 3–5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You’ll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing 
and concurrency to meet real-time SLAs on cloud and edge hardware. Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines. Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows.

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was based solely on a desire to keep working together with friends and enjoying the extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!
Our Website - https://aurigait.com/
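The posting above highlights retrieval-augmented generation and vector-search scaling; as a hedged, conceptual sketch (not the Pinecone or Weaviate APIs the role names), here is brute-force cosine-similarity retrieval over an in-memory embedding matrix, with randomly generated vectors standing in for real document embeddings.

# Conceptual sketch of the retrieval step in a RAG pipeline: given a query embedding,
# return the top-k most similar document embeddings by cosine similarity.
# The random vectors stand in for embeddings produced by a real encoder model.
import numpy as np

rng = np.random.default_rng(1)
doc_embeddings = rng.normal(size=(10_000, 384))   # pretend corpus of 10k embedded chunks
query = rng.normal(size=384)                      # pretend query embedding

def top_k_cosine(query_vec: np.ndarray, matrix: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k rows of `matrix` most similar to `query_vec`."""
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    query_norm = query_vec / np.linalg.norm(query_vec)
    scores = matrix_norm @ query_norm             # cosine similarity for every chunk
    return np.argsort(scores)[::-1][:k]

print("top document indices:", top_k_cosine(query, doc_embeddings))

A dedicated vector database replaces this brute-force scan with approximate nearest-neighbour indexes once the corpus grows to millions of embeddings, which is the scaling challenge the role describes.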
Posted 4 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
NIQ is looking for a Software Engineer to join our AI/ML Engineering team. At NIQ, the Retail Measurement System (RMS) is a powerful analytics service that tracks product sales and market performance across a wide range of retail channels. It provides comprehensive, store-level data that helps businesses understand how their products are performing in the market, benchmark against competitors, and identify growth opportunities. The Charlink and Jarvis models are used to predict product placements into their ideal position in the hierarchical product tree, following a data-driven approach to train models efficiently and predict placements based on characteristics. The role involves developing frontend applications to interact with ML models, integrating inference code, and providing tools and patterns for enhancing our MLOps cycle. The ideal candidate has strong software design and programming experience, with some expertise in cloud computing and big data technologies, and strong communication and management skills. You will be part of a diverse, flexible, and collaborative environment where you will be able to apply and develop your skills and knowledge working with unique data and exciting applications. Our software engineering platform is based on AngularJS, Java, React, Spring Boot, TypeScript, JavaScript, SQL, and Snowflake, and we continue to adopt the best of breed in cloud-native, low-latency technologies.

Who we are looking for:
You have a strong entrepreneurial spirit and a thirst to solve difficult challenges through innovation and creativity, with a strong focus on results. You have a passion for data and the insights it can deliver. You are intellectually curious with a broad range of interests and hobbies. You take ownership of your deliverables. You have excellent analytical, communication, and interpersonal skills. You have excellent communication skills with both technical and non-technical audiences. You can work with distributed teams situated globally in different geographies. You want to work in a small team with a start-up mentality. You can work well under pressure, prioritize work, and be well organized. You relish tackling new challenges, paying attention to detail, and, ultimately, growing professionally.

Responsibilities
Design, develop, and maintain scalable web applications using AngularJS for the front end and Java (Spring Boot) for the backend. Collaborate closely with cross-functional teams to translate business requirements into technical solutions. Optimize application performance, usability, and responsiveness. Conduct code reviews, write unit tests, and ensure adherence to coding standards. Troubleshoot and resolve software defects and production issues. Contribute to architecture and technical documentation.

Qualifications
3–5 years of experience as a full stack developer. Proficient in AngularJS (Version 12+), TypeScript, Java, and the Spring Framework (especially Spring Boot). Experience with RESTful APIs and microservices architecture. Solid understanding of HTML, CSS, JavaScript, and responsive web design. Familiarity with relational databases (e.g., MySQL, PostgreSQL). Hands-on experience with version control systems (e.g., GitHub) and CI/CD tools. Strong problem-solving abilities and attention to detail. 3-5+ years of relevant software engineering experience. Minimum B.S.
degree in Computer Science, Computer Engineering, Information Technology or related field Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms Recharge and revitalize with help of wellness plans made for you and your family Plan your future with financial wellness tools Stay relevant and upskill yourself with career development opportunities Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
We're seeking a Mid-Level Machine Learning Engineer to join our growing Data Science & Engineering team. In this role, you will design, develop, and deploy ML models that power our cutting-edge technologies like voice ordering, prediction algorithms, and customer-facing analytics. You'll collaborate closely with data engineers, backend engineers, and product managers to take models from prototyping through to production, continuously improving accuracy, scalability, and maintainability.

Essential Job Functions
Model Development: Design and build next-generation ML models using advanced tools like PyTorch, Gemini, and Amazon SageMaker - primarily on Google Cloud or AWS platforms.
Feature Engineering: Build robust feature pipelines; extract, clean, and transform large-scale transactional and behavioral data. Engineer features like time-based attributes, aggregated order metrics, and categorical encodings (LabelEncoder, frequency encoding).
Experimentation & Evaluation: Define metrics, run A/B tests, conduct cross-validation, and analyze model performance to guide iterative improvements. Train and tune regression models (XGBoost, LightGBM, scikit-learn, TensorFlow/Keras) to minimize MAE/RMSE and maximize R² (a minimal sketch follows this posting). Own the entire modeling lifecycle end-to-end, including feature creation, model development, testing, experimentation, monitoring, explainability, and model maintenance.
Monitoring & Maintenance: Implement logging, monitoring, and alerting for model drift and data-quality issues; schedule retraining workflows.
Collaboration & Mentorship: Collaborate closely with data science, engineering, and product teams to define, explore, and implement solutions to open-ended problems that advance the capabilities and applications of Checkmate; mentor junior engineers on best practices in ML engineering.
Documentation & Communication: Produce clear documentation of model architecture, data schemas, and operational procedures; present findings to technical and non-technical stakeholders.

Requirements
Academics: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field.
Experience: 5+ years of industry experience (or 1+ year post-PhD) building and deploying advanced machine learning models that drive business impact. Proven experience shipping production-grade ML models and optimization systems, including expertise in experimentation and evaluation techniques. Hands-on experience building and maintaining scalable backend systems and ML inference pipelines for real-time or batch prediction.
Programming & Tools: Proficient in Python and libraries such as pandas, NumPy, scikit-learn; familiarity with TensorFlow or PyTorch. Hands-on with at least one cloud ML platform (AWS SageMaker, Google Vertex AI, or Azure ML).
Data Engineering: Hands-on experience with SQL and NoSQL databases; comfortable working with Spark or similar distributed frameworks. Strong foundation in statistics, probability, and ML algorithms like XGBoost/LightGBM; ability to interpret model outputs and optimize for business metrics. Experience with categorical encoding strategies and feature selection. Solid understanding of regression metrics (MAE, RMSE, R²) and hyperparameter tuning.
Cloud & DevOps: Proven skills deploying ML solutions in AWS, GCP, or Azure; knowledge of Docker, Kubernetes, and CI/CD pipelines.
Collaboration: Excellent communication skills; ability to translate complex technical concepts into clear, actionable insights.
Working Terms: Candidates must be flexible and work during US hours (at least until 6 p.m. ET), which is essential for this role, and must also have their own system/work setup for remote work.
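As a hedged sketch of the regression workflow the posting describes (gradient-boosted trees evaluated on MAE, RMSE, and R²), here is a minimal XGBoost example; the synthetic dataset and the hyperparameter values are illustrative assumptions.

# Minimal sketch: train an XGBoost regressor and report MAE, RMSE, and R².
# The synthetic dataset and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=5000, n_features=15, noise=10.0, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=3)

model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=6, random_state=3)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mae = mean_absolute_error(y_test, pred)
rmse = float(np.sqrt(mean_squared_error(y_test, pred)))  # RMSE derived from MSE
r2 = r2_score(y_test, pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R²={r2:.3f}")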
Posted 1 month ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Scientist
Experience range: 3+ years
Location: CloudLex Pune Office (In-person, Monday to Friday, 9:30 AM – 6:30 PM)

Responsibilities
Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed (see the sketch after this posting).
Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
Develop data pipelines. Build robust data collection, preprocessing, and synthetic data generation pipelines to feed training and inference workflows. Implement data cleansing, annotation, and augmentation processes to ensure high-quality inputs for model training and evaluation.
Implement LLM-based agents and automation. Integrate generative AI agents (e.g., chatbots, AI copilots, content generators) into business processes to automate data processing and decision-making tasks. Use Retrieval-Augmented Generation (RAG) pipelines and external knowledge sources to enhance agent capabilities. Leverage multimodal inputs when applicable.
Optimize performance and safety. Continuously evaluate and improve model/system performance. Use GenAI-specific benchmarks and metrics (e.g., BLEU, ROUGE, TruthfulQA) to assess results, and iterate to optimize accuracy, latency, and resource efficiency. Implement safeguards and monitoring to mitigate issues like bias, hallucination, or inappropriate outputs.
Collaborate and document. Work closely with product managers, engineers, and other stakeholders to gather requirements and integrate AI solutions into production systems. Document data workflows, model architectures, and experimentation results. Maintain code and tooling (prompt libraries, model registries) to ensure reproducibility and knowledge sharing.

Required Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related quantitative field (or equivalent practical experience). A strong foundation in algorithms, statistics, and software engineering is expected.
Programming proficiency: Expert-level skills in Python, with hands-on experience in machine learning and deep learning frameworks (PyTorch, TensorFlow). Comfortable writing production-quality code and using version control, testing, and code review workflows.
Generative model expertise: Demonstrated ability to build, fine-tune, and deploy large-scale generative models. Familiarity with transformer architectures and generative techniques (LLMs, diffusion models, GANs). Experience working with model repositories and fine-tuning frameworks (Hugging Face, etc.).
LLM and agent frameworks: Strong understanding of LLM-based systems and agent-oriented AI patterns. Experience with frameworks like LangGraph/LangChain or similar multi-agent platforms. Knowledge of agent communication standards (e.g., MCP/Agent Protocol) to enable interoperability between AI agents.
AI integration and MLOps: Experience integrating AI components with existing systems via APIs and services. Proficiency in retrieval-augmented generation (RAG) setups, vector databases, and prompt engineering. Familiarity with machine learning deployment and MLOps tools (Docker, Kubernetes, MLflow, KServe, etc.) for managing end-to-end automation and scalable workflows.
Familiarity with GenAI tools: Hands-on experience with state-of-the-art GenAI models and APIs (OpenAI GPT, Anthropic Claude, etc.) and with popular libraries (Hugging Face Transformers, LangChain, etc.). Awareness of the current GenAI tooling ecosystem and best practices.
Soft skills: Excellent problem-solving and analytical abilities. Strong communication and teamwork skills to collaborate across data, engineering, and business teams. Attention to detail and a quality-oriented mindset. (See Ideal Candidate below for more on personal attributes.)

Ideal Candidate
Innovative problem-solver: You are a creative thinker who enjoys tackling open-ended challenges. You have a solutions-oriented mindset and proactively experiment with new ideas and techniques.
Systems thinker: You understand how different components (data, models, services) fit together in a large system. You can architect end-to-end AI solutions with attention to reliability, scalability, and integration points.
Collaborative communicator: You work effectively in multidisciplinary teams. You are able to explain complex technical concepts to non-technical stakeholders and incorporate feedback. You value knowledge sharing and mentorship.
Adaptable learner: The generative AI landscape evolves rapidly. You are passionate about staying current with the latest research and tools. You embrace continuous learning and are eager to upskill and try new libraries or platforms.
Ethical and conscientious: You care about the real-world impact of AI systems. You take responsibility for the quality and fairness of models, and proactively address concerns like data privacy, bias, and security.
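The responsibilities above call for graph-based or state-machine agent workflows; here is a deliberately framework-free, hedged sketch of that idea in plain Python (not the LangGraph API itself), where each node decides the next step based on shared state and the call_llm stub stands in for a real model call.

# Conceptual sketch of a state-machine agent workflow, without any agent framework.
# call_llm() is a stub standing in for a real LLM call; node names and routing are illustrative.
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., via an API client)."""
    return f"[model output for: {prompt[:40]}...]"

def plan(state: dict) -> str:
    state["plan"] = call_llm(f"Break down the task: {state['task']}")
    return "retrieve"

def retrieve(state: dict) -> str:
    state["context"] = "relevant documents would be fetched here"  # RAG step stub
    return "answer"

def answer(state: dict) -> str:
    state["answer"] = call_llm(f"Answer using context: {state['context']}")
    return "done"

NODES: Dict[str, Callable[[dict], str]] = {"plan": plan, "retrieve": retrieve, "answer": answer}

def run_workflow(task: str) -> dict:
    """Walk the graph from 'plan' until a node routes to 'done'."""
    state: dict = {"task": task}
    current = "plan"
    while current != "done":
        current = NODES[current](state)   # each node mutates state and names the next node
    return state

print(run_workflow("Summarize a legal intake document")["answer"])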
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
Job Title: Voice Processing Specialist | Location: Remote / Jaipur | Job Type: Full-time / Contract | Experience: 3+ years of expertise in voice cloning, transformation, and synthesis technologies

Job Summary
We are seeking a talented and motivated Voice Processing Specialist to join our team and lead the development of innovative voice technologies. The ideal candidate will have a deep understanding of speech synthesis, voice cloning, and transformation techniques. You will play a critical role in designing, implementing, and deploying state-of-the-art voice models that enhance naturalness, personalization, and flexibility of speech in AI-powered applications. This role is perfect for someone passionate about advancing human-computer voice interaction and creating lifelike, adaptive voice systems.

Key Responsibilities
Design, develop, and optimize advanced deep learning models for voice cloning, text-to-speech (TTS), voice conversion, and real-time voice transformation. Implement speaker embedding and voice identity preservation techniques to support accurate and high-fidelity voice replication. Work with large-scale and diverse audio datasets, including preprocessing, segmentation, normalization, and data augmentation to improve model generalization and robustness. Collaborate closely with data scientists, ML engineers, and product teams to integrate developed voice models into production pipelines. Fine-tune neural vocoders and synthesis architectures for better voice naturalness and emotional range. Stay current with the latest advancements in speech processing, AI voice synthesis, and deep generative models through academic literature and open-source projects. Contribute to the development of tools and APIs for deploying models on cloud and edge environments with high efficiency and low latency.

Required Skills
Strong understanding of speech signal processing, speech synthesis, and automatic speech recognition (ASR) systems. Hands-on experience with voice cloning frameworks such as Descript Overdub, Coqui TTS, SV2TTS, Tacotron, FastSpeech, or similar. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow. Experience working with speech libraries and toolkits such as ESPnet, Kaldi, librosa, or SpeechBrain. In-depth knowledge of mel spectrograms, vocoder architectures (e.g., WaveNet, HiFi-GAN, WaveGlow), and their role in speech synthesis (a short mel-spectrogram sketch follows this posting). Familiarity with REST APIs, model deployment, and cloud-based inference systems using platforms like AWS, Azure, or GCP. Ability to optimize models for performance in real-time or low-latency environments.

Preferred Qualifications
Experience in real-time voice transformation, including pitch shifting, timing modification, or emotion modulation. Exposure to emotion-aware speech synthesis, multilingual voice models, or prosody modeling. Background in audio DSP (Digital Signal Processing) and speech analysis techniques. Previous contributions to open-source speech AI projects or publications in relevant domains.

Why Join Us
You will be part of a fast-moving, collaborative team working at the forefront of voice AI innovation. This role offers the opportunity to make a significant impact on products that reach millions of users, helping to shape the future of interactive voice experiences.
Skills: automatic speech recognition (ASR), vocoder architectures, voice cloning, voice processing, data, real-time voice transformation, speech synthesis, PyTorch, TensorFlow, voice conversion, speech signal processing, audio DSP, REST APIs, Python, cloud deployment, transformation, mel spectrograms, deep learning
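Because the skills above center on mel spectrograms as the intermediate representation between text models and vocoders, here is a minimal, hedged librosa sketch; the file path and the 80-band / 22.05 kHz settings are common TTS defaults assumed purely for illustration.

# Minimal sketch: compute a log-mel spectrogram, the typical acoustic feature a TTS
# model predicts and a neural vocoder (WaveNet, HiFi-GAN, ...) converts back to audio.
# The input path and the 80-band / 22,050 Hz settings are illustrative assumptions.
import librosa
import numpy as np

audio, sr = librosa.load("speech_sample.wav", sr=22050)   # hypothetical utterance

mel = librosa.feature.melspectrogram(
    y=audio,
    sr=sr,
    n_fft=1024,        # analysis window size
    hop_length=256,    # frame shift (~11.6 ms at 22.05 kHz)
    n_mels=80,         # number of mel bands
)
log_mel = librosa.power_to_db(mel, ref=np.max)             # compress dynamic range

print("log-mel shape (bands x frames):", log_mel.shape)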
Posted 1 month ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Function: Insurance Operations | Responsibility Level: Senior Executive | Reports to: Assistant Manager/Lead Assistant Manager/Manager – Insurance Operations

Basic Function (Property Survey)
Responsible for carrying out reviews of property survey reports submitted by Independent Consultants (ICs) and various other tasks in a manner that is consistent with company policies, procedures, and standards. Follow the appropriate operating procedures. Meet quality goals. Meet office time service goals. Monitor e-mails and respond in a timely manner. Send reports to clients. Handle additional duties as assigned.

Competencies
Excellent written communication skills, with an ability to think and react to situations confidently. Domain experience in Homeowner/Commercial Insurance (preferred but not mandatory). Must be assertive, persistent, and result-oriented, with the ability to work in a team environment and adhere to department guidelines. Knowledgeable in Microsoft Word, Excel, and PowerPoint.

Skills Requirement
Technical Skills (Minimum): Proficient with computer systems and software including Microsoft Excel, Outlook, and Word. Typing speed of at least 30 WPM with 90% accuracy.
Soft Skills (Minimum):
Good Communication Skills – Able to express thoughts and ideas in an accurate and understandable manner, through verbal and written formats, with internal and external contacts.
High Levels of Comprehension – Able to understand and follow information received from field staff or from the customer. Able to identify the main idea, cause and effect, fact and opinion; make inferences; compare and contrast; sequence information; and draw conclusions based on the information acquired or provided.
Customer Focus – Identifies and understands the (internal or external) customer's needs. Detail oriented with excellent follow-up skills.
Teamwork – Works effectively with the team to accomplish goals; takes action that respects the needs of others and those of the organization. Effective interpersonal skills.
Adaptability – Maintains effectiveness despite changes to situations, tasks, responsibilities, and people.
Professionalism – Conducts oneself with responsibility, integrity, accountability, and excellence.
Work Standards – Sets own high standards of performance.

Education Requirements
Minimum of a bachelor's degree in any field.

Work Experience Requirements
Minimum 1 year of work experience in BPO, preferably in P&C Insurance.
Posted 1 month ago
3.0 years
0 Lacs
India
On-site
Location: IN - Hyderabad, Telangana | Goodyear Talent Acquisition Representative: Ashutosh Panda | Sponsorship Available: No | Relocation Assistance Available: No

Job Description
Roles and Responsibilities:
Analyze, design, and develop new processes, programs, and configuration, taking into account the complex inter-relationships of system-wide components. Provide system-wide support and maintenance for a complex system or business process. Maintain and modify existing processes, programs, and configuration through use of current IT toolsets. Troubleshoot, investigate, and persist: develop solutions to problems with unknown causes where precedents do not exist, by applying logic and inference with persistence and experience to see the problem through to resolution. Confer with the stakeholder community on problem determination. Make joint analysis decisions on cause and correction methods. Perform tasks (as necessary) to ensure data integrity and system stability. Complete life-cycle testing (unit and integration) of all work processes (including cross-platform interaction). Create applications and databases with a main focus on Data Collection Systems - supporting Analysis, Data Capture, Design Tools, Library Functions, Reporting, Request Systems, and Specification Systems used in the tire development process.

Knowledge, Skills, Abilities:
Developing an understanding of skills needed in other disciplines, of a second business process area, and of basic cost/benefit analysis methods. 3+ years of strong development experience with Java and Spring Boot. 3+ years of strong experience working with a cloud-based environment (AWS - Event-Driven Architecture). Strong experience working with microservices & SQL Server. Good to have some knowledge of the Salesforce application. Basic organizational, communication, and time management skills. Participate as an active team member (effective listening and collaboration skills). Achieve all IT objectives through use of approved standards and guidelines.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate #Li-Hybrid
Posted 1 month ago
10.0 years
3 - 9 Lacs
Hyderābād
On-site
About Celestial AI As Generative AI continues to advance, the performance drivers for data center infrastructure are shifting from systems-on-chip (SOCs) to systems of chips. In the era of Accelerated Computing, data center bottlenecks are no longer limited to compute performance, but rather the system's interconnect bandwidth, memory bandwidth, and memory capacity. Celestial AI's Photonic Fabric™ is the next-generation interconnect technology that delivers a tenfold increase in performance and energy efficiency compared to competing solutions. The Photonic Fabric™ is available to our customers in multiple technology offerings, including optical interface chiplets, optical interposers, and Optical Multi-chip Interconnect Bridges (OMIB). This allows customers to easily incorporate high bandwidth, low power, and low latency optical interfaces into their AI accelerators and GPUs. The technology is fully compatible with both protocol and physical layers, including standard 2.5D packaging processes. This seamless integration enables XPUs to utilize optical interconnects for both compute-to-compute and compute-to-memory fabrics, achieving bandwidths in the tens of terabits per second with nanosecond latencies. This innovation empowers hyperscalers to enhance the efficiency and cost-effectiveness of AI processing by optimizing the XPUs required for training and inference, while significantly reducing the TCO2 impact. To bolster customer collaborations, Celestial AI is developing a Photonic Fabric ecosystem consisting of tier-1 partnerships that include custom silicon/ASIC design, system integrators, HBM memory, assembly, and packaging suppliers. ABOUT THE ROLE Celestial AI is looking for a highly motivated and detail-oriented Software Quality Assurance (SQA) Manager to join our team. As an SQA Manager, you will lead a small team of engineers and play a critical role in ensuring the quality of our software products. You will be responsible for managing the team, as well as designing, developing, and executing test plans and test cases, identifying and reporting defects, and working closely with developers to ensure that our software meets the highest standards. This is a hands-on leadership position that requires both technical depth and leadership skills. ESSENTIAL DUTIES AND RESPONSIBILITIES Test Strategy & Planning: Develop comprehensive test plans, strategies, and methodologies specifically tailored for embedded firmware, covering functional, non-functional (performance, power, memory), reliability, stress, and security aspects. Test Case Design & Execution: Design, document, and execute detailed test cases for firmware components, drivers, communication protocols, and system-level interactions with hardware. Hardware-Firmware Integration Testing: Lead and perform testing at the hardware-firmware interface, ensuring seamless and correct interaction between embedded software and physical components (e.g., sensors, actuators, external memory, peripherals like SPI, I2C, UART). Automation Development: Design, develop, and maintain automated test scripts and test harnesses using scripting languages (e.g., Python, Bash) and specialized tools to enhance test coverage and efficiency, particularly for regression testing. Defect Management: Identify, document, track, and verify resolution of software defects using bug tracking systems. Provide clear and concise bug reports with steps to reproduce and relevant logs. 
Root Cause Analysis: Collaborate with firmware developers to perform in-depth root cause analysis of defects, often involving debugging on embedded targets using JTAG/SWD, oscilloscopes, logic analyzers, and other hardware debugging tools. Performance & Resource Analysis: Monitor and analyze firmware performance metrics (CPU usage, memory footprint, power consumption, boot time, latency) and validate against specified requirements. Regression & Release Qualification: Own the regression testing process and contribute significantly to the final release qualification of firmware builds. Process Improvement: Champion and contribute to the continuous improvement of firmware development and quality assurance processes, methodologies, and best practices. QUALIFICATIONS Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related technical field. 10 years of experience in Software Quality Assurance, with a minimum of 5 years directly focused on firmware or embedded software testing. Strong understanding of embedded systems concepts, including microcontrollers/microprocessors, real-time operating systems (RTOS), interrupts, memory management, and common peripheral interfaces (GPIO, I2C, SPI, UART, ADC, DAC, Timers). Proficiency in C/C++ for embedded development, with the ability to read, understand, and debug firmware code. Experience with scripting languages for test automation (e.g., Python, Bash). Hands-on experience with hardware debugging tools such as JTAG/SWD debuggers, oscilloscopes, logic analyzers, and multimeters. Familiarity with version control systems (e.g., Git) and bug tracking tools (e.g., Jira, Azure DevOps). Experience with test management tools (e.g., TestRail, Zephyr). Excellent problem-solving skills, with a methodical and analytical approach to identifying and isolating defects. PREFERRED QUALIFICATIONS Experience with continuous integration/continuous deployment (CI/CD) pipelines for embedded systems. Knowledge of networking protocols (TCP/IP). Experience with Hardware-in-the-Loop (HIL) testing, simulation, or emulation environments. LOCATION: Hyderabad, India We offer great benefits (health, vision, dental and life insurance) and a collaborative, continuous-learning work environment where you will get a chance to work with smart and dedicated people engaged in developing next-generation architecture for high-performance computing. Celestial AI Inc. is proud to be an equal opportunity workplace and is an affirmative action employer. #LI-Onsite
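As an illustration of the kind of automated test harness described in this posting, here is a minimal pytest-style sketch that exercises firmware commands over a serial link. The port name, baud rate, command strings, and expected responses are assumptions for illustration only; a real harness would be driven by the device's actual protocol and test plan.

import pytest
import serial  # pyserial; assumed installed on the test host

SERIAL_PORT = "/dev/ttyUSB0"   # hypothetical port for the target board
BAUD_RATE = 115200

@pytest.fixture
def dut():
    # Open a serial connection to the device under test for each test case.
    link = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=2)
    yield link
    link.close()

def send_command(link, command: bytes) -> bytes:
    # Write a newline-terminated command and read one response line.
    link.reset_input_buffer()
    link.write(command + b"\n")
    return link.readline().strip()

def test_firmware_version_reports_expected_format(dut):
    # Hypothetical "VER?" query; assert the reply looks like a dotted version string.
    reply = send_command(dut, b"VER?")
    assert reply.count(b".") == 2, f"unexpected version string: {reply!r}"

def test_sensor_read_within_range(dut):
    # Hypothetical "TEMP?" query; assert the reported value is physically plausible.
    reply = send_command(dut, b"TEMP?")
    assert -40.0 <= float(reply) <= 125.0

A harness like this is typically wired into the regression suite so that every firmware build is exercised against real hardware before release qualification.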
Posted 1 month ago
175.0 years
1 - 1 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? The Enterprise Essentials team within Financial Data Engineering is hiring a highly skilled Senior Engineering Manager with expertise in Python/Java Full Stack Development, Generative AI, Data Engineering, and Natural Language Processing. The Senior Engineering Manager will be working on creating new capabilities and modernizing existing ones in the domain of Global Tax, Finance and GSM Conversational AI platforms, and Enterprise essential products like Reconciliations, ERRM, Balancing and Control, Concur, and Ariba. The ideal candidate will be responsible for designing, developing, and maintaining scalable AI-driven applications and data pipelines. This role requires a deep understanding of NLP techniques, modern AI frameworks, data engineering best practices, and full-stack development to build innovative solutions that leverage machine learning and AI technologies. Key Responsibilities: Oversees and mentors a team of Software Engineering colleagues, enabling a culture of continuous learning, growth opportunities, and inclusivity for all individual colleagues and teams. Provides direct leadership and coaching to teams, supporting training and development of best practices. Manages resource allocation, project timelines, and budget for Software Engineering projects, ensuring alignment with organizational goals. Collaborates with senior leadership to hire top talent for the team, ensuring a high-functioning and cohesive unit, and implements strategies for talent retention and professional development. Leads the development, deployment, support, and monitoring of software across various environments. Collaborates with senior leadership and cross-functional teams to define and implement technology roadmaps and strategies. Leads teams to innovate and automate processes, driving efficiency and scalability in production environments. Drives continuous improvement initiatives, leveraging metrics and feedback to improve team performance and software quality. Collaborates and co-creates effectively with teams in product and the business to align technology initiatives with business objectives. Full Stack Development: Design and develop scalable and secure applications using Java/Python frameworks and front-end technologies such as React. Implement and optimize microservices, APIs, and server-side logic for AI-based platforms. Develop and maintain cloud-based, containerized applications (Docker, Kubernetes). Design, optimize, and deploy high-performance systems ensuring minimal latency and maximum throughput. Architect solutions for real-time processing, ensuring low-latency data retrieval and high system availability. Troubleshoot and enhance system performance, optimizing for large-scale, real-time, distributed and COTS applications. Generative AI & Machine Learning: Develop and deploy innovative solutions in Tax and Finance using ML & Generative AI models, leveraging frameworks such as LangChain.
Implement NLP algorithms for language understanding, text summarization, information extraction, and conversational agents. Create pipelines for training and deploying AI models efficiently in production environments. Collaborate with data scientists to optimize and scale AI/NLP solutions. Integrate AI/ML models into applications, ensuring proper scaling, optimization, and monitoring of models in production. Design solutions that enable fast and efficient inference for real-time AI applications. Data Engineering: Build and maintain data pipelines to support AI/ML model development and deployment. Design and develop ETL processes to ingest, clean, and process large-scale structured and unstructured datasets. Work with data storage and retrieval solutions like SQL/NoSQL databases, data lakes, and cloud storage (GCP, AWS, or Azure). Ensure data integrity, security, and performance of the data pipelines. Collaboration & Leadership: Lead cross-functional teams to deliver high-quality, AI-driven products. Lead and mentor engineers and collaborate with product managers, data scientists, and business stakeholders to ensure alignment with project goals. Keep up to date with the latest advancements in AI, NLP, and data engineering, and provide technical guidance to the team. Take accountability for the success of the team in achieving its goals. Drive the team's strategy and prioritize initiatives. Influence team members by challenging the status quo, demonstrating risk taking, and implementing creative ideas. Be a productivity multiplier for your team by analysing your workflow and contributing to make the team more effective and productive, demonstrating faster and stronger results. Mentor and guide team members to success within the team. Minimum Qualifications Education: Bachelor's or Master's in Computer Science, Engineering, Data Science, or a related field. 10+ years of experience in software engineering in the architecture and design (architecture, design patterns, reliability and scaling) of new and existing systems. Strong experience in developing full stack software in Java or Python, data engineering, and AI/NLP solutions, and demonstrated ability to quickly learn new languages. Following standard engineering excellence practices while building software. Leveraging code assistants like GitHub Copilot. Writing great prompts for generating high-quality code, tests, and other artefacts like documentation. Proficiency in data engineering tools and frameworks like GCP BigQuery, Apache Spark, and Kafka. Proficiency with containerization (Docker, Kubernetes), CI/CD pipelines, and version control. Experience with RESTful API design, microservices architecture, and cloud platforms (AWS/GCP/Azure). Preferred Qualifications Experience working with large-scale AI systems in production environments. Familiarity with modern AI research and developments in Generative AI and NLP. Strong understanding of DevOps and Infrastructure-as-Code (Terraform, Ansible). Proven track record of delivering AI-driven products that scale. Understanding of MLOps practices will be a plus. Familiarity with Generative AI models and frameworks (e.g., GPT, DALL-E). Knowledge of machine learning frameworks (TensorFlow, PyTorch, Scikit-learn) will be a plus. We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
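To make the "scalable inference API" responsibilities in the posting above concrete, here is a minimal FastAPI sketch of a text-generation endpoint. The generate_answer function, endpoint path, and request fields are hypothetical stand-ins chosen for illustration; a real service would call the team's actual LLM or RAG stack and add auth, rate limiting, and monitoring.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-genai-service")

class Query(BaseModel):
    question: str
    max_tokens: int = 256

def generate_answer(question: str, max_tokens: int) -> str:
    # Hypothetical placeholder: a production service would call an LLM here,
    # typically preceded by a retrieval step over a vector store (RAG).
    return f"[stub answer for: {question[:50]}]"

@app.post("/v1/answer")
def answer(query: Query) -> dict:
    # Kept synchronous and simple for the sketch; real handlers would add
    # validation, timeouts, batching, and observability hooks.
    text = generate_answer(query.question, query.max_tokens)
    return {"answer": text, "tokens_requested": query.max_tokens}

# Local run (assumed entry point): uvicorn service:app --host 0.0.0.0 --port 8080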
Posted 1 month ago
3.0 years
0 Lacs
Gurgaon
On-site
Senior Data Scientist (Deep Learning and Artificial Intelligence) Job Description We aim to bring about a new paradigm in medical image diagnostics; providing intelligent, holistic, ethical, explainable and patient-centric care. We are looking for innovative problem solvers who love solving hard problems. We want people who can empathize with the consumer, understand business problems, and design and deliver intelligent products. People who are looking to extend artificial intelligence into unexplored areas. Your primary focus will be in applying deep learning and artificial intelligence techniques to the domain of medical image analysis. Responsibilities Selecting features, building and optimizing classifier engines using deep learning techniques. Understanding the problem and applying suitable image processing techniques. Use techniques from artificial intelligence/deep learning to solve supervised and unsupervised learning problems. Understanding and designing solutions for complex problems related to medical image analysis by using Deep Learning/Object Detection/Image Segmentation. Recommend and implement best practices around the application of statistical modeling. Create, train, test, and deploy various neural networks to solve complex problems. Develop and implement solutions to fit business problems, which may include applying algorithms from a standard statistical tool, deep learning or custom algorithm development. Understanding the requirements and designing solutions and architecture in accordance with them is important. Participate in code reviews, sprint planning, and Agile ceremonies to drive high-quality deliverables. Design and implement scalable data science architectures for training, inference, and deployment pipelines. Ensure code quality, readability, and maintainability by enforcing software engineering best practices within the data science team. Optimize models for production, including quantization, pruning, and latency reduction for real-time inference. Drive the adoption of versioning strategies for models, datasets, and experiments (e.g., using MLflow, DVC). Contribute to the architectural design of data platforms to support large-scale experimentation and production workloads. Skills and Qualifications Strong software engineering skills in Python (or other languages used in data science) with emphasis on clean code, modularity, and testability. Excellent understanding of and hands-on experience with Deep Learning techniques such as ANN, CNN, RNN, LSTM, Transformers, VAEs, etc. Must have experience with the TensorFlow or PyTorch framework in building, training, testing, and deploying neural networks. Experience in solving problems in the domain of Computer Vision. Knowledge of data, data augmentation, data curation, and synthetic data generation. Ability to understand the complete problem and design the solutions that best fit all the constraints. Knowledge of the common data science and deep learning libraries and toolkits such as Keras, Pandas, Scikit-learn, Numpy, Scipy, OpenCV, etc. Good applied statistical skills, such as distributions, statistical testing, regression, etc. Exposure to Agile/Scrum methodologies and collaborative development practices. Experience with the development of RESTful APIs. Knowledge of libraries like FastAPI and the ability to apply them to deep learning architectures is essential. Excellent analytical and problem-solving skills with a good attitude and keenness to adapt to evolving technologies. Experience with medical image analysis will be an advantage.
Experience designing and building ML architecture components (e.g., feature stores, model registries, inference servers). Solid understanding of software design patterns, microservices, and cloud-native architectures. Expertise in model optimization techniques (e.g., ONNX conversion, TensorRT, model distillation). Education: BE/B.Tech; MS/M.Tech will be a bonus. Experience: 3+ years. Job Type: Full-time Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Do you have experience leading teams in AI development? Do you have experience creating software architecture for production environments in AI applications? Experience: Deep learning: 3 years (Required) Computer vision: 3 years (Required) PyTorch: 3 years (Required) Work Location: In person
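The model-optimization expectations listed above (for example, ONNX conversion) can be illustrated with a short sketch: exporting a PyTorch classifier to ONNX and running it with ONNX Runtime. The tiny network, input shape, and file name below are placeholders for illustration, not the actual medical-imaging models used by the team.

import torch
import torch.nn as nn
import numpy as np
import onnxruntime as ort

# Placeholder CNN standing in for a trained medical-imaging classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

dummy = torch.randn(1, 1, 224, 224)  # assumed single-channel 224x224 input
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["image"], output_names=["logits"])

# Run the exported graph with ONNX Runtime, a common path for CPU/edge serving.
session = ort.InferenceSession("classifier.onnx")
logits = session.run(None, {"image": dummy.numpy().astype(np.float32)})[0]
print("predicted class:", int(logits.argmax()))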
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary: Gen AI, Agentic AI, Project Management, Python. Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software). Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Responsibilities Strategic & Leadership-Level GenAI Skills AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software). Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Understanding trade-offs: latency, cost, accuracy, context length. Using tools like LM Evaluation Harness, OpenLLM Leaderboard, etc. Enterprise-Grade RAG Systems: Designing Retrieval-Augmented Generation pipelines. Using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex. Optimizing chunking, embedding strategies, and retrieval quality. Security, Privacy & Governance: Implementing data privacy, access control, and audit logging. Understanding risks: prompt injection, data leakage, model misuse. Aligning with frameworks like the NIST AI RMF, the EU AI Act, or ISO/IEC 42001. Cost Optimization & Monitoring: Estimating and managing GenAI inference costs. Using observability tools (e.g., Arize, WhyLabs, PromptLayer). Token usage tracking and prompt optimization. Advanced Technical Skills Model Fine-Tuning & Distillation: Fine-tuning open-source models using PEFT, LoRA, QLoRA. Knowledge distillation for smaller, faster models. Using tools like Hugging Face, Axolotl, or DeepSpeed. Multi-Agent Systems: Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph). Task decomposition, memory, and tool orchestration. Toolformer & Function Calling: Integrating LLMs with external tools, APIs, and databases. Designing tool-use schemas and managing tool routing. Team & Product Leadership GenAI Product Thinking: Identifying use cases with high ROI. Balancing feasibility, desirability, and viability. Leading GenAI PoCs and MVPs. Mentoring & Upskilling Teams: Training developers on prompt engineering, LangChain, etc. Establishing GenAI best practices and code reviews. Leading internal hackathons or innovation sprints.
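As a toy illustration of the RAG design work listed above (chunking, embeddings, retrieval quality), here is a dependency-light sketch that uses hashed bag-of-words vectors and cosine similarity in place of a real embedding model and vector database. In practice an embedding API and a store such as Pinecone, Weaviate, or Qdrant would replace these pieces; every function and threshold here is an assumption for illustration.

import numpy as np
from hashlib import md5

def embed(text: str, dim: int = 256) -> np.ndarray:
    # Toy stand-in for a real embedding model: a hashed bag-of-words vector.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def chunk(document: str, size: int = 40) -> list[str]:
    # Naive fixed-size word chunking; real pipelines tune chunk size and overlap.
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by cosine similarity to the query embedding and keep the top k.
    q = embed(query)
    scored = sorted(chunks, key=lambda c: float(embed(c) @ q), reverse=True)
    return scored[:k]

docs = "..."  # the source corpus (policy docs, KB articles, etc.) would be loaded here
context = retrieve("What is our refund policy?", chunk(docs))
prompt = "Answer using only this context:\n" + "\n".join(context)

Retrieval quality in a production system is then measured with held-out question/answer pairs, which is where the chunking and embedding choices mentioned in the posting are actually tuned.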
Posted 1 month ago
3.0 years
3 - 6 Lacs
Jaipur
On-site
Job Summary We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally: Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field 3–5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You’ll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing 
and concurrency to meet real-time SLAs on cloud and edge hardware. Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines. Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows. About Company Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in: https://www.aurigait.com/
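One piece of the real-time computer-vision pipeline described above can be sketched as a simple detection loop over camera frames. The snippet assumes the ultralytics package and a pretrained YOLOv8 nano checkpoint purely for illustration; a production deployment would use a task-specific model exported to TensorRT or ONNX and a real video source such as an RTSP stream, with frame batching to meet latency SLAs.

import cv2
from ultralytics import YOLO  # assumed available: pip install ultralytics

# Pretrained nano model as a placeholder for a task-specific detector.
model = YOLO("yolov8n.pt")

cap = cv2.VideoCapture(0)  # webcam stand-in for a production stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the frame; lower-latency deployments batch frames and
    # run an optimized (TensorRT/ONNX) export of the model instead.
    results = model(frame, verbose=False)
    annotated = results[0].plot()  # draw boxes for quick visual inspection
    cv2.imshow("detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()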
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description At StatusNeo, we are a global consulting firm specializing in AI, automation, and cloud-first digital solutions. We empower businesses with cutting-edge product & platform engineering to enhance user experience, design, and functionality. As advocates for digital transformation, we guide CXOs worldwide to embrace Digital, Data, AI, and DevSecOps. Our exceptional work environment, recognized with the Great Place To Work certification, fosters innovation and collaboration. Role Description This is a full-time on-site role as a Solution Architect - Gen AI at StatusNeo located in Gurgaon. The Solution Architect will be responsible for designing and implementing innovative AI, automation, and cloud solutions. They will collaborate with clients to understand their needs, develop consulting strategies, lead software development projects, integrate solutions, and optimize business processes. Qualifications · Architect and deliver end-to-end GenAI platforms using AWS (ECS, RDS, Lambda, S3) with real-time LLM orchestration and RAG workflows. · Design and implement Python microservices with Redis caching and vector search using Qdrant or Redis Vector. · Integrate GenAI models and APIs (OpenAI, HuggingFace, LangChain, LangGraph), including containerized inference services and secured API pipelines. · Lead frontend architecture using Next.js (TypeScript) with SSR and scalable client-server routing. · Own infrastructure automation and DevOps using Terraform, AWS CDK, GitHub Actions, and Docker-based CI/CD pipelines. · Manage and optimize data architecture across Snowflake, PostgreSQL (RDS), and S3 for both analytical and transactional needs. · Knowledge of data pipelines and data quality, transitioning legacy systems to modular, cloud-native deployments. · Champion engineering culture, leading design/code reviews, mentoring team members, and aligning technical priorities with product strategy. · Ensure compliance, encryption, and data protection via AWS security best practices (IAM, Secrets Manager, WAF, API Gateway). --- Ideal Candidate Profile · Proven track record as a Solution Architect / Tech Lead on large-scale Data & AI products with GenAI integration. · Deep knowledge of AWS cloud services, microservices architecture, and full-stack deployment. · Strong understanding of the ML lifecycle and productionization of LLMs / GenAI APIs. · Practical experience with design thinking, breaking down problems from user need to system delivery. · Excellent leadership, communication, and mentoring skills to drive team alignment and technical execution.
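One small piece of the stack described above, Redis caching in front of an LLM call, can be sketched as follows. The host, key scheme, and TTL are illustrative assumptions, and call_llm is a hypothetical stand-in for whichever model client the platform actually uses.

import hashlib
import json
import redis  # redis-py client; assumed reachable at localhost for this sketch

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 3600  # arbitrary choice for the example

def cached_completion(prompt: str, call_llm) -> str:
    # Key on a hash of the prompt so identical requests hit the cache
    # instead of paying for another model call.
    key = "llm:" + hashlib.sha256(prompt.encode()).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)["text"]
    text = call_llm(prompt)  # the real service would call its LLM API here
    cache.setex(key, TTL_SECONDS, json.dumps({"text": text}))
    return text

# Example usage with a dummy model client:
print(cached_completion("Summarise our onboarding flow", lambda p: "stub summary"))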
Posted 1 month ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About us: Kantar is the world’s leading data, insights and consulting company. We understand more about how people think, feel, shop, share, vote and view than anyone else, combining our expertise in human understanding with advanced technologies. Kantar’s 25,000 people based in more than 100 countries help the world’s leading organisations succeed and grow. Nobody knows people better than Kantar. We provide insight and inspiration to help our clients, our people and society to create and flourish in an extraordinary world. Our Worldpanel colleagues are the global experts in shopper behaviour, offering continuous monitoring, advanced analytics and tailored solutions to inspire successful decisions by leading organisations worldwide. Worldpanel turns purchase behaviour into a competitive advantage across a diverse range of markets including tech, fashion, telecoms and FMCG. Purpose of the role: A new role is being created by the Kantar Worldpanel India team, which uses technology and analytical tools to link real purchase data with impactful, high-quality data that inspires successful decisions. We are looking for someone who aspires to the same growth as us and joins us in developing the Data Quality solution in a fast-changing market, working alongside our commercial teams and internal research team to help our clients solve their business challenges. Playing a leading role in the delivery of strategic quality management is a critical part of the business: ensuring consistent quality standards and methods, delivering strategic transformational projects and driving continuous improvement for the growth of the business and excellence of operations. This is a full-time role based in Mumbai. WHAT YOU'LL DO: (1) On a day-to-day basis: Act as custodian of project quality in all strategic and operational plans through experience and knowledge of consumer panel research operations. Accountable for tasks such as panel health & stability, sample designs, impact and pick-up analysis, factor creation, etc. Maintain a strong, collaborative relationship with Client Service through data-driven decision making. (2) Quality Standards and Policies: Implement and help define quality standards and policies, including KPI definitions for purchase quality management. Provide textual definitions and management standard information for Operations (OP) to implement. (3) Data Quality Assurance: Act as the quality guardian for all consumer panel research. Ensure good data quality before sending it to clients or uploading it to systems. Efficiently and promptly investigate and resolve client data queries. Investigate problematic root causes and process flaws and provide solutions. (4) Panel Health and Sample Design: Take responsibility for panel health, stability, sample design, and analysis factors. Own the sourcing and application of Universe data and its ongoing implementation. Provide annual Universe information and offer explanations for any changes. (5) Performance Monitoring and Auditing: Audit local operational processes, define and monitor monthly results, and forecast to achieve panel and delivery targets. Regularly review and provide feedback on KPIs to OP teams. (6) Process and Improvement: Maintain all procedures related to quality, including mechanisms for purchase management. Continuously identify areas for improvement and propose corrective actions. Monitor and review quality assurance.
(7) Training and Education: Educate Client Service (CS) and Operations teams on statistical data limitations according to sample sizes and sample design. Provide regular, high-quality local training content. (8) Collaboration and Communication: Maintain strong collaborative relationships with customer service through data-driven decisions. Ensure all queries/issues are kept on track and solved on time, following local and regional processes. (9) In the long run, you will: Implement and help define quality standards and policy. Own the local audits of operating procedures, define & monitor monthly results, and forecast to achieve panel & delivery targets. Troubleshoot, resolve, remedy and overhaul where needed process defects which result in under-performance or non-delivery. Own and deliver local quality content training regularly. Put the above (data matching, fraud detection, and data inference) into a technological environment. Develop algorithms as well as statistical models to match and maximize current data usage/automation. WHAT YOU'LL BRING: Personal traits: stimulated by creative problem solving, comfortable with numbers, comfortable with ambiguity, works well autonomously but also happy to sit as part of a team, can focus on one specific task without getting bored by it, willing to learn and able to learn fast. Professional skills: excellent analytical skills; strong communication, as you'll need to explain complex technical issues to people who might not have a data analysis or quality background; influencing others in a rational way; creative thinking and solution orientation; familiarity with SPSS, Python or R, SQL, analytical innovation and automation; computer coding skill is more than a plus and is definitely a must-have. Knowledge of at least two of these tools is required. Academic background: Bachelor's degree or above. Those who majored in Mathematics or Statistics and have certification in data quality might find quicker assimilation to the role, but we do not intend to limit the range at all. Fluent in both English and Hindi. Work experience: A minimum of 8 years of work experience is preferred. If you have customized research experience, or have worked in a data analytics, research or statistics role, that would be a preferential advantage. Ability to communicate clearly and confidently, spoken and written, to all levels of the business locally and with the Regional Quality team and peers in other markets. Ability to be self-motivated and independent as well as to work in a team. Able to work under pressure to tight deadlines. With the following qualities… Inquisitive, with critical thinking and a genuine passion for consumer behaviour. Enjoys actively looking for new and more efficient ways of improving processes, raising standards, reducing errors, and overcoming omissions. Is proactive, optimistic, and willing to get involved to achieve the team's goals and objectives. Is highly collaborative and adaptable with the ability to work effectively within different cultural and technical environments. Possesses outstanding communication and interpersonal skills in order to comfortably connect with partners at all levels across the organization and facilitate discussions in a constructive manner. Join us! At Kantar, we want to create a more diverse community to expand our talent pool, be locally representative, drive diversity of thinking and better commercial outcomes.
We also make sure we create equality of opportunity in a fair and encouraging working environment where people feel included and accepted, and can flourish in a space where their mental health and well-being are taken into consideration.
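The data-matching work mentioned in this posting can be illustrated with a tiny standard-library sketch that scores candidate matches between purchase records by string similarity. The example records, fields, and threshold are made up for illustration; a production matcher would add blocking, field weighting, and statistical validation against panel quality KPIs.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1] from difflib; a simple stand-in for a tuned matching model.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_records(panel_items, reference_items, threshold=0.85):
    # For each panel-reported product, find the best reference product whose
    # description is similar enough to be treated as the same item.
    matches = []
    for item in panel_items:
        best = max(reference_items, key=lambda ref: similarity(item, ref))
        score = similarity(item, best)
        if score >= threshold:
            matches.append((item, best, round(score, 2)))
    return matches

panel = ["colgate toothpaste 100g", "amul butter 500 g"]
reference = ["Colgate Toothpaste 100 g", "Amul Butter 500g", "Parle-G Biscuit"]
print(match_records(panel, reference))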
Posted 1 month ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role: We are seeking an experienced MLOps Engineer to lead the deployment, scaling, and performance optimization of open-source Generative AI models on cloud infrastructure. You’ll work at the intersection of machine learning, DevOps, and cloud engineering to help productize and operationalize large-scale LLM and diffusion models. Key Responsibilities: Design and implement scalable deployment pipelines for open-source Gen AI models (LLMs, diffusion models, etc.). Fine-tune and optimize models using techniques like LoRA, quantization, distillation, etc. Manage inference workloads, latency optimization, and GPU utilization. Build CI/CD pipelines for model training, validation, and deployment. Integrate observability, logging, and alerting for model and infrastructure monitoring. Automate resource provisioning using Terraform, Helm, or similar tools on GCP/AWS/Azure. Ensure model versioning, reproducibility, and rollback using tools like MLflow, DVC, or Weights & Biases. Collaborate with data scientists, backend engineers, and DevOps teams to ensure smooth production rollouts. Required Skills & Qualifications: 5+ years of total experience in software engineering or cloud infrastructure. 3+ years in MLOps with direct experience in deploying large Gen AI models. Hands-on experience with open-source models (e.g., LLaMA, Mistral, Stable Diffusion, Falcon, etc.). Strong knowledge of Docker, Kubernetes, and cloud compute orchestration. Proficiency in Python and familiarity with model-serving frameworks (e.g., FastAPI, Triton Inference Server, Hugging Face Accelerate, vLLM). Experience with cloud platforms (GCP preferred, AWS or Azure acceptable). Familiarity with distributed training, checkpointing, and model parallelism. Good to Have: Experience with low-latency inference systems and token streaming architectures. Familiarity with cost optimization and scaling strategies for GPU-based workloads. Exposure to LLMOps tools (LangChain, BentoML, Ray Serve, etc.). Why Join Us: Opportunity to work on cutting-edge Gen AI applications across industries. Collaborative team with deep expertise in AI, cloud, and enterprise software. Flexible work environment with a focus on innovation and impact.
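As a sketch of the LoRA-style fine-tuning mentioned above, the snippet below attaches LoRA adapters to a small causal language model with Hugging Face PEFT. The base model name, target modules, and hyperparameters are assumptions for illustration only, not a recommended recipe for any particular workload.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "facebook/opt-125m"  # small public model chosen only for the example

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

lora_config = LoraConfig(
    r=8,                       # adapter rank; trades quality against adapter size
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base weights

# Training would proceed with the usual Trainer or a custom loop on task data;
# afterwards only the small adapter weights need to be versioned and deployed,
# which is what makes LoRA attractive for the MLOps workflow described above.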
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hello, Truecaller is calling you from Bangalore, India! Ready to pick up? Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, and scam calls or messages, so you can focus on the conversations that matter. We are among the top 20 most downloaded apps globally and the world's #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 400 million active users per month. Founded in 2009, Truecaller is listed on Nasdaq OMX Stockholm and is categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. We are a team of 400 people from ~35 different nationalities, spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon and Tel Aviv, with high ambitions. We in the Insights Team are responsible for SMS Categorization, Fraud Detection and other Smart SMS features within the Truecaller app. OTP & bank notifications and bill & travel reminder alerts are some examples of the Smart SMS features. The team has developed a patented offline text parser that powers all these features and is also exploring cutting-edge technologies like LLMs to enhance the Smart SMS features. The team's mission is to become the world's most loved and trusted SMS app, which is aligned with Truecaller's vision to make communication safe and efficient. Smart SMS is used by over 90M users every day. As an ML Engineer, you will be responsible for collecting, organizing, analyzing, and interpreting Truecaller data with a focus on NLP. In this role, you will be working hands-on to optimize the training and deployment of ML models to be quick and cost-efficient. You will also be pivotal in advancing our work with large language models and on-device models across diverse regions. Your expertise will enhance our natural language processing, machine learning, and predictive analytics capabilities. What you bring in: 3+ years in machine learning engineering, with hands-on involvement in feature engineering, model development, and deployment. Experience in Natural Language Processing (NLP), with a deep understanding of text processing, model development, and deployment challenges in the domain. Proven ability to develop, deploy, and maintain machine learning models in production environments, ensuring scalability, reliability, and performance. Strong familiarity with ML frameworks like TensorFlow, PyTorch, and ONNX, and experience with tech stacks such as Kubernetes, Docker, APIs, Vertex AI, and GCP. Experience deploying models across backend and mobile platforms. Experience fine-tuning and optimizing LLM prompts for domain-specific applications. Ability to optimize feature engineering, model training, and deployment strategies for performance and efficiency. Strong SQL and statistical skills. Programming knowledge in at least one language, such as Python or R; preferably Python. Knowledge of machine learning algorithms. Excellent teamwork and communication skills, with the ability to work cross-functionally with product, engineering, and data science teams.
Good to have: knowledge of retrieval-based pipelines to enhance LLM performance. The impact you will create: Collaborate with Product and Engineering to scope, design, and implement systems that solve complex business problems, ensuring they are delivered on time and within scope. Design, develop, and deploy state-of-the-art NLP models, contributing directly to message classification and fraud detection at scale for millions of users. Leverage cutting-edge NLP techniques to enhance message understanding, spam filtering, and fraud detection, ensuring a safer and more efficient messaging experience. Build and optimize ML models that can efficiently handle large-scale data processing while maintaining accuracy and performance. Work closely with data scientists and data engineers to enable rapid experimentation, development, and productionization of models in a cost-effective manner. Streamline the ML lifecycle, from training to deployment, by implementing automated workflows, CI/CD pipelines, and monitoring tools for model health and performance. Stay ahead of advancements in ML and NLP, proactively identifying opportunities to enhance model performance, reduce latency, and improve user experience. Your work will directly impact millions of users, improving message classification, fraud detection, and the overall security of messaging platforms. It would be great if you also have: an understanding of Conversational AI; experience deploying NLP models in production; working knowledge of GCP components; cloud-based LLM inference with Ray, Kubernetes, and serverless architectures. Life at Truecaller - Behind the code: https://www.instagram.com/lifeattruecaller/ Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore, India. We only accept applications in English. What we offer: A smart, talented and agile team: An international team where ~35 nationalities are working together in several locations and time zones in a learning, sharing and fun environment. A great compensation package: Competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, Udemy membership to keep learning and improving, and a wellness allowance. Great tech tools: Pick the computer and phone that you fancy the most within our budget ranges. Office life: We strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast, and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break or join our exciting parties and/or team activities such as Lab days, sports meetups, etc. There's something for everyone! Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age, marital status, etc. All those things make you who you are, and that's why we would love to meet you.
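As a simplified baseline for the SMS-categorization work described in this posting, here is a TF-IDF plus logistic-regression sketch in scikit-learn. The example messages and labels are made up, and the production system described above relies on a patented offline parser and far richer NLP models; this is only an illustration of the classification task.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: OTPs, transaction alerts, and promotions.
messages = [
    "Your OTP is 482913. Do not share it with anyone.",
    "INR 2,500 debited from A/c XX123 at AMAZON on 12-05.",
    "Mega sale! Flat 60% off on all shoes this weekend only.",
    "Use 109283 as your one time password for login.",
    "Your A/c XX991 is credited with salary INR 55,000.",
    "Congratulations! You have won a free recharge, click now.",
]
labels = ["otp", "transaction", "promo", "otp", "transaction", "promo"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(messages, labels)

print(clf.predict(["Rs 799 spent on SWIGGY from card XX442"]))  # expect 'transaction'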
Posted 1 month ago