
1209 Inference Jobs - Page 2

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

12 - 18 Lacs

Hyderābād

Remote

Source: Glassdoor

Job Description

About the Role: Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental building blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What you'll do here:
- Design and build backend components of our MLOps platform on AWS.
- Collaborate with geographically distributed cross-functional teams.
- Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-have skills:
- Experience with web development frameworks such as Flask, Django, or FastAPI.
- Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with unit and functional testing frameworks.
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Skills Required: Python development (Flask, Django, or FastAPI); WSGI and ASGI web servers (Gunicorn, Uvicorn, etc.); AWS.

Job Type: Contractual / Temporary. Contract length: 12 months. Pay: ₹100,000.00 - ₹150,000.00 per month. Location Type: Hybrid. Schedule: Day shift. Work Location: Hybrid remote in Hyderabad, Telangana.
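To make the stack this role describes more concrete, here is a minimal sketch of an async inference endpoint built with FastAPI and run behind an ASGI server such as Uvicorn or Gunicorn; the request schema and the placeholder model call are illustrative assumptions, not the employer's actual platform code.

```python
# Minimal sketch: an async FastAPI inference endpoint of the kind described above.
# The feature schema and the dummy "model" are placeholders.
import asyncio
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: List[float]

class PredictResponse(BaseModel):
    score: float

def _blocking_predict(features: List[float]) -> float:
    # Stand-in for a real model call (e.g., invoking a SageMaker endpoint).
    return sum(features) / max(len(features), 1)

@app.post("/predict", response_model=PredictResponse)
async def predict(req: PredictRequest) -> PredictResponse:
    # asyncio.to_thread keeps the blocking model call off the event loop,
    # so the ASGI worker can keep serving other requests concurrently.
    score = await asyncio.to_thread(_blocking_predict, req.features)
    return PredictResponse(score=score)

# Typical ways to run it:
#   uvicorn app:app --workers 2
#   gunicorn app:app -k uvicorn.workers.UvicornWorker
```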

Posted 1 day ago

Apply

8.0 years

3 - 10 Lacs

Gurgaon

On-site

Source: Glassdoor

- 8+ years of specific technology domain areas (e.g. software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics) experience - 3+ years of design, implementation, or consulting in applications and infrastructures experience - 10+ years of IT development or implementation/consulting in the software or Internet industries experience Sales, Marketing and Global Services (SMGS) AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector. Do you like startups? Are you interested in Cloud Computing & Generative AI? Yes? We have a role you might find interesting. Startups are the large enterprises of the future. These young companies are founded by ambitious people who have a desire to build something meaningful and to challenge the status quo. To address underserved customers, or to challenge incumbents. They usually operate in an environment of scarcity: whether that’s capital, engineering resource, or experience. This is where you come in. The Startup Solutions Architecture team is dedicated to working with these early stage startup companies as they build their businesses. We’re here to make sure that they can deploy the best, most scalable, and most secure architectures possible – and that they spend as little time and money as possible doing so. We are looking for technical builders who love the idea of working with early stage startups to help them as they grow. In this role, you’ll work directly with a variety of interesting customers and help them make the best (and sometimes the most pragmatic) technical decisions along the way. You’ll have a chance to build enduring relationships with these companies and establish yourself as a trusted advisor. As well as spending time working directly with customers, you’ll also get plenty of time to “sharpen the saw” and keep your skills fresh. We have more than 175 services across a range of different categories and it’s important that we can help startups take advantages of the right ones. You’ll also play an important role as an advocate with our product teams to make sure we are building the right products for the startups you work with. And for the customers you don’t get to work with on a 1:1 basis you’ll get the chance to share your knowledge more broadly by working on technical content and presenting at events. A day in the life You’re surrounded by innovation. You’re empowered with a lot of ownership. Your growth is accelerated. The work is challenging. You have a voice here and are encouraged to use it. Your experience and career development is in your hands. We live our leadership principles every day. At Amazon, it's always "Day 1". Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Experience in developing and deploying large scale machine learning or deep learning models and/or systems into production, including batch and real-time data processing Experience scaling model training and inference using technologies like Slurm, ParallelCluster, Amazon SageMaker Hands-on experience benchmarking and optimizing performance of models on accelerated computing (GPU, TPU, AI ASICs) clusters with high-speed networking. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 day ago

Apply

3.0 years

4 - 8 Lacs

India

On-site

Source: Glassdoor

Job Title: Node.js Developer with AI/ML Expertise
Experience Required: 3 Years
Location: Noida Sector 62

About Company: Benthon Labs is a fast-growing global software development company. We are an IT services organization providing IT services to our clients. Company Website - https://www.benthonlabs.com

Job Summary: We are seeking a skilled and motivated Node.js Developer with a strong background in AI/ML to join our engineering team. The ideal candidate will have hands-on experience developing scalable backend systems using Node.js and integrating machine learning models into production environments. You will collaborate with cross-functional teams including data scientists, frontend developers, and product managers to build intelligent applications that deliver real-world impact.

Key Responsibilities:
1. Design, develop, and maintain high-performance APIs and backend services using Node.js.
2. Integrate AI/ML models into backend systems and optimize for performance and scalability.
3. Work closely with data science teams to produce machine learning models.
4. Implement data pipelines for training and inference using tools like Python, TensorFlow, or PyTorch.
5. Monitor, troubleshoot, and enhance the performance of deployed models and services.
6. Ensure code quality through automated testing, code reviews, and documentation.
7. Follow best practices for security, scalability, and data privacy.
8. Stay up to date with emerging technologies in backend development and AI/ML.

Required Skills & Qualifications:
1. 3+ years of professional experience with Node.js and JavaScript/TypeScript.
2. Strong understanding of backend architecture, RESTful APIs, and microservices.
3. Solid experience working with AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
4. Experience with model deployment (e.g., Django or Flask, FastAPI, TensorFlow Serving, or Docker-based solutions).
5. Familiarity with databases (MongoDB and MySQL).
6. Proficient in writing clean, maintainable, and well-documented code.
7. Strong problem-solving and communication skills.

Job Type: Full-time. Pay: ₹400,000.00 - ₹800,000.00 per year. Schedule: Day shift, Monday to Friday. Work Location: In person.
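As one way to picture the model-deployment pattern listed in the requirements (a Python service exposing a trained model over REST that a Node.js backend can call), here is a hedged sketch using Flask and a pre-trained scikit-learn model; the model file and JSON field names are placeholders, not the employer's actual services.

```python
# Illustrative sketch: a small Flask service exposing a pre-trained scikit-learn
# model over REST, suitable for a Node.js backend to call. Paths and field
# names are placeholders.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # assumed pre-trained estimator saved with joblib

@app.route("/infer", methods=["POST"])
def infer():
    payload = request.get_json(force=True)
    row = [payload["features"]]                # one row of numeric features
    prediction = model.predict(row)[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A Node.js service would then POST feature payloads to /infer and consume the JSON response.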

Posted 1 day ago

Apply

1.0 - 10.0 years

0 Lacs

Noida

On-site

Source: Glassdoor

Senior Executive (EXL/SE/1386291) - Insurance Platform Services, Noida
Posted On: 24 Jun 2025
End Date: 08 Aug 2025
Required Experience: 1 - 10 Years

Basic Section
Number Of Positions: 4
Band: A2
Band Name: Senior Executive
Cost Code: D900173
Campus/Non Campus: Non Campus
Employment Type: Permanent
Requisition Type: New
Max CTC: 350,000 - 450,000
Complexity Level: Back Office (Complexity Level 3)
Work Type: Hybrid - working partly from home and partly from office
Organisational Group: Insurance
Sub Group: Insurance
Organization: Insurance Platform Services
LOB: Property Survey
SBU: Personal Lines
Country: India
City: Noida
Center: Noida - Centre 59
Skills: English language, Excel, back office, MS Word
Minimum Qualification: Graduate
Certification: No data available

Job Description
Function, Responsibility Level: Insurance Operations, Senior Executive
Reports to: Assistant Manager / Lead Assistant Manager / Manager - Insurance Operations

Basic Function (Property Survey): Responsible for carrying out review of property survey reports submitted by Independent Consultants (ICs) and various other tasks in a manner that is consistent with company policies, procedures, and standards.
- Follow the appropriate operating procedure
- Meet quality goals
- Meet office time service goals
- Monitor e-mails and respond in a timely manner
- Send reports to clients
- Handle additional duties as assigned

Competencies
- Excellent written communication skills, with an ability to think and react to situations confidently
- Domain experience in Homeowner/Commercial Insurance (preferred but not mandatory)
- Must be assertive, persistent, and result-oriented, with the ability to work in a team environment and adhere to department guidelines
- Knowledgeable in Microsoft Word, Excel, and PowerPoint

Skills Requirement
Technical skills (minimum):
- Proficient with computer systems and software, including Microsoft Excel, Outlook, and Word
- Typing speed of at least 30 WPM with 90% accuracy

Soft skills (minimum):
- Good communication skills: able to express thoughts and ideas in an accurate and understandable manner, verbally and in writing, with internal and external contacts
- High levels of comprehension: able to understand and follow information received from field staff or from the customer; able to identify the main idea, cause and effect, fact and opinion, make inferences, compare and contrast, sequence information, and draw conclusions based on the information acquired or provided
- Customer focus: identifies and understands the (internal or external) customer's needs; detail oriented with excellent follow-up skills
- Teamwork: works effectively with the team to accomplish goals, takes action that respects the needs of others and those of the organization; effective interpersonal skills
- Adaptability: maintains effectiveness despite changes to situations, tasks, responsibilities, and people
- Professionalism: conducting oneself with responsibility, integrity, accountability, and excellence
- Work standards: sets own high standards of performance

Education Requirements: Minimum of a bachelor's degree in any field
Work Experience Requirements: Minimum 1 year of work experience in BPO, preferably in P&C Insurance
Workflow Type: Back Office

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

CWX is looking for a dynamic SENIOR AI/ML ENGINEER to become a vital part of our vibrant PROFESSIONAL SERVICES TEAM, working on-site in Hyderabad. Join the energy and be part of the momentum!

At CloudWerx, we're looking for a Senior AI/ML Engineer to lead the design, development, and deployment of tailored AI/ML solutions for our clients. In this role, you'll work closely with clients to understand their business challenges and build innovative, scalable, and cost-effective solutions using tools like Google Cloud Platform (GCP), Vertex AI, Python, PyTorch, LangChain, and more. You'll play a key role in translating real-world problems into robust machine learning architectures, with a strong focus on Generative AI, multi-agent systems, and modern MLOps practices. From data preparation and ensuring data integrity to building and optimizing models, you'll be hands-on across the entire ML lifecycle, all while ensuring seamless deployment and scaling using cloud-native infrastructure. Clear communication will be essential as you engage with both technical teams and business stakeholders, making complex AI concepts understandable and actionable. Your deep expertise in model selection, optimization, and deployment will help deliver high-performing solutions tailored to client needs. We're also looking for someone who stays ahead of the curve, someone who's constantly learning and experimenting with the latest developments in generative AI, LLMs, and cloud technologies. Your curiosity and drive will help push the boundaries of what's possible and fuel the success of the solutions we deliver.

This is a fantastic opportunity to join a fast-growing, engineering-led cloud consulting company that tackles some of the toughest challenges in the industry. At CloudWerx, every team member brings something unique to the table, and we foster a supportive environment that helps people do their best work. Our goal is simple: to be the best at what we do and help our clients accelerate their businesses through world-class cloud solutions. This role is an immediate full-time position.

Insight on your impact

Conceptualize, Prototype, and Implement AI Solutions: Design and deploy advanced AI solutions using large language models (LLMs), diffusion models, and multimodal AI systems by leveraging Google Cloud tools such as Vertex AI, AutoML, and AI Platform (Agent Builder). Implement Retrieval-Augmented Generation (RAG) pipelines for chatbots and assistants, and create domain-specific transformers for NLP, vision, and cross-modal applications. Utilize Document AI, Translation AI, and Vision AI to develop full-stack, multimodal enterprise applications.

Technical Expertise: Fine-tune models via LoRA, QLoRA, RLHF, and Dreambooth. Build multi-agent systems using Agent Development Kit (ADK), Agent-to-Agent (A2A) Protocol, and Model Context Protocol (MCP). Provide thought leadership on best practices, architecture patterns, and technical decisions across LLMs, generative AI, and custom ML pipelines, tailored to each client's unique business needs.

Stakeholder Communication: Effectively communicate complex AI/ML concepts, architectures, and solutions to business leaders, technical teams, and non-technical stakeholders. Present project roadmaps, performance metrics, and model validation strategies to C-level executives and guide organizations through AI transformation initiatives.

Understand client analytics & modeling needs: Collaborate with clients to extract, analyze, and interpret both internal and external data sources.
Design and operationalize data pipelines that support exploratory analysis and model development, enabling business-aligned data insights and AI solutions. Database Management: Work with structured (SQL/BigQuery) and unstructured (NoSQL/Firestore, Cloud Storage) data. Apply best practices in data quality, versioning, and integrity across datasets used for training, evaluation, and deployment of AI/ML models. Cloud Expertise: Architect and deploy cloud-native AI/ML solutions using Google Cloud services including Vertex AI, BigQuery ML, Cloud Functions, Cloud Run, and GKE Autopilot. Provide consulting on GCP service selection, infrastructure scaling, and deployment strategies aligned with client requirements. MLOps & DevOps: Lead the implementation of robust MLOps and LLMOps pipelines using TensorFlow Extended (TFX), Kubeflow, and Vertex AI Pipelines. Set up CI/CD workflows using Cloud Build and Artifact Registry, and deploy scalable inference endpoints through Cloud Run and Agent Engine. Establish automated retraining, drift detection, and monitoring strategies for production ML systems. Prompt Engineering and fine tuning: Apply advanced prompt engineering strategies (e.g., few-shot, in-context learning) to optimize LLM outputs. Fine-tune models using state-of-the-art techniques including LoRA, QLoRA, Dreambooth, ControlNet, and RLHF to enhance instruction-following and domain specificity of generative models. LLMs, Chatbots & Text Processing:Develop enterprise-grade chatbots and conversational agents using Retrieval-Augmented Generation (RAG), powered by both open-source and commercial LLMs. Build state-of-the-art generative solutions for tasks such as intelligent document understanding, summarization, and sentiment analysis. Implement LLMOps workflows for lifecycle management of large-scale language applications. Consistently Model and Promote Engineering Best Practices: Promote a culture of technical excellence by adhering to software engineering best practices including version control, reproducibility, structured documentation, Agile retrospectives, and continuous integration. Mentor junior engineers and establish guidelines for scalable, maintainable AI/ML development. Our Diversity and Inclusion Commitment At CloudWerx, we are dedicated to creating a workplace that values and celebrates diversity. We believe that a diverse and inclusive environment fosters innovation, collaboration, and mutual respect. We are committed to providing equal employment opportunities for all individuals, regardless of background, and actively promote diversity across all levels of our organization. We welcome all walks of life, as we are committed to building a team that embraces and mirrors a wide range of perspectives and identities. Join us in our journey toward a more inclusive and equitable workplace. Background Check Requirement All candidates for employment will be subject to pre-employment background screening for this position. All offers are contingent upon the successful completion of the background check. For additional information on the background check requirements and process, please reach out to us directly. Our Story CloudWerx is an engineering-focused cloud consulting firm born in Silicon Valley - in the heart of hyper-scale and innovative technology. In a cloud environment we help businesses looking to architect, migrate, optimize, secure or cut costs. 
Our team has unique experience working in some of the most complex cloud environments at scale and can help businesses accelerate with confidence.
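To make the Retrieval-Augmented Generation (RAG) work described in this posting more concrete, here is a hedged sketch of the retrieval step using sentence-transformers and FAISS; the embedding model, example documents, and the downstream LLM call are illustrative assumptions, not CloudWerx's actual pipeline.

```python
# Hedged sketch of the retrieval step in a RAG pipeline: embed documents,
# index them, and pull the most relevant context for a query. The model name
# and documents are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Vertex AI supports managed training and online prediction.",
    "Cloud Run can serve containerized inference endpoints.",
    "BigQuery ML lets you train models with SQL.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(int(doc_vecs.shape[1]))  # inner product on unit vectors = cosine similarity
index.add(doc_vecs)

query = "How do I serve a model as an HTTP endpoint?"
q_vec = encoder.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)

context = "\n".join(docs[i] for i in ids[0])
print(context)
# The retrieved context would then be inserted into the prompt of an LLM
# (e.g., a Vertex AI or open-source model) to ground the generated answer.
```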

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

At (TekLink HGS Digital), our vision is to be the globally preferred digital transformation partner for our clients, creating value in their business through rigorous innovation at scale. We are an expert team of 500+ leading strategic thinkers, digital marketing and creative masters, data analysts, software engineers, and process optimization specialists with an elemental desire to create transformative digital solutions. Job Title: Data Scientist Location: Hyderabad, India Duration: Full time Data scientist would support our internal teams and clients in driving strategic decisions, applying advanced statistical & predictive analytics and machine learning concepts to solve business problems in BFSI and CPG domains. You will also phrase requirements document, contribute towards project plan, carry out data research and collection, study attributes and features, test for parameters, resolve data issues, decide on models, modeling, QA/testing and showcase the findings in various formats for client consumption. Responsibilities: a) Analytics Requirements Definition: Works with business users to approve the requirements for analytics solution. b) Data Preparation: Reviews data preparation rules (data extraction, data integration, data granularity, data cleansing etc.). Prepares data for analytical modelling. Guides data analysts and associate data scientists on data preparation activities. c) Builds Machine Learning (ML) and Statistical Models using Python/R/Scala/SAS/SPSS d) Collaborate with clients and internal teams to define industry-leading analytics solutions for a wide variety of industries and business groups e) Develop proof-of-concepts and demos needed for client & internal presentations f) Create clear functional and technical documentation g) Work agnostic across multiple industry sectors and functional domains, with focus on BFSI and CPG domains. h) Work closely with all stakeholders to identify, evaluate, design, and implement statistical and other quantitative approaches for modeling enterprise scale data and big data i) Display proficiency in converting algorithmic proof of concepts into business requirement documents for product development or data driven actionable intelligence Minimum Requirements & Qualification The ideal candidate should have: • Full time Degree in Mathematics, Statistics, Computer Science or Computer Applications from reputed institutions, B.E./B.Tech., MBA specialized in Marketing, Operations Research, Data Science and/or Business Analytics • Overall 8+ years of technical experience in IT industry across BFSI and CPG domains. • Minimum of 5 years of hands-on work experience in Data Science/Advance analytics, Machine Learning using Python and SQL • Practical experience specifically around quantitative and analytical skills is required. • People management skills and experience, and familiarity with the pharmaceutical industry are preferred. 
- Knowledge of solution design, planning, and execution
- Contribute to case studies, blogs, eBooks, and whitepapers
- Proficiency in maintaining strong project documentation hygiene
- Able to fully assimilate into an automated MLOps mode
- Must have good communication skills (written, oral, presentation, and language skills); able to translate statistical findings into business English
- Hands-on experience in one or more of the skillsets below:
  - Programming languages: R, Base SAS, Advanced SAS
  - Visualization tools: Tableau, MS Excel, think-cell, Power BI, Qlik Sense
  - Automation tools: VBA macros, Python scripts
- Basic understanding of NLP/NLU/NLG and text mining
- Skills/knowledge of advanced ML techniques with image processing and signal processing is a plus
- GenAI and multimodal GenAI skills with RAG development and fine-tuning
- Sound statistical training in linear and non-linear regression, weighted regression, clustering, and classification techniques
- Sound understanding of applied statistical methods including survival analysis, categorical data analysis, time series analysis, and multivariate statistics
- Introduction to classical statistics, including concepts in Bayesian statistics, experimental design, and inference theory
- Practical understanding of concepts in computer vision, data mining, machine learning, information retrieval, pattern recognition, and knowledge discovery
- Additional knowledge in WFM, biological learning systems, and modern statistical concepts is a plus
- Knowledge of IoT devices and solutions with multi-sensor data fusion is a plus
- Knowledge of geostatistics, information theory, and computational statistics is a plus
- Experience in character recognition with image, speech, and video analytics capabilities is a plus
- Working knowledge of or certifications in AWS/Azure/GCP is beneficial

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description Are you passionate about leveraging data to drive impactful decisions? Join our company as a Lead Data Scientist and be at the forefront of innovative health solutions. Our mission is to save lives by advancing the prevention and treatment of diseases, and our team is dedicated to workforce analytics, serving HR and the workforce as our clients. As a key member of our team, you will design and implement advanced models, enhance AI/ML capabilities, and collaborate with various stakeholders to deliver actionable insights. If you are driven by curiosity and have a knack for solving complex problems, this is the perfect opportunity for you to make a difference. Your Core Responsibilities Design, develop, enhance, and implement models that delve deep into our workforce data, ensuring high standards of quality, relevance, and usability Boost AI/ML capability within the team, using the latest methods and tools to extract insights from text, activity, behavioral, and network data Develop and deploy solutions that are robust, scalable, and meet the needs of a diverse user base, including supporting an LLM-based app in production with thousands of users Collaborate with data scientists, data engineers, devops engineers, solution architects, and the data science community of practice, to amplify data science capabilities and drive innovation Work closely with our client-facing teams to address business needs, providing research solutions that are both insightful and actionable Act as an AI/ML expert, advising HR colleagues and end users on the best usage within the HR domain Technically lead and mentor a team of data scientists and data engineers, fostering a collaborative and innovative environment Manage research projects from inception to completion, ensuring agile delivery and alignment with business goals, while also managing relationships and influencing non-technical stakeholders Who You Are (Education minimum requirements subject to change based on country) You are ready if you have Minimum of 10 years of experience in data science or machine learning engineering with a Bachelor’s degree from an accredited institution in Computer Science, Data Science, Machine Learning, Statistics, or another related field. 
(With a Master's degree, the minimum experience is 8 years)
- Expertise in using Python, R, and SQL to execute a solid portfolio of data science projects involving statistical inference, classical machine learning, and deep learning frameworks
- Solid understanding of NLP tools, methods, and pipeline design, including experience using large language models (LLMs)
- Proven leadership in team project settings
- Experience with cloud computing platforms, such as AWS and Databricks, and related tools
- Familiarity with version control systems
- Openness to coaching and learning from team members with different specializations
- Exceptional initiative, curiosity, communication skills, and a team-first orientation
- Demonstrated interest in projects focused on the workforce
- Familiarity with product management and agile methodologies

Nice to have, but not essential:
- MLOps experience is a big plus, and LLM app deployment experience is ideal
- Ability to manage relationships, influence non-technical stakeholders, and tell a great data story
- Understanding of HR data, processes, information systems, and governance
- Ability to conduct literature reviews and leverage external research to stay on top of best practices in AI/ML and data science in human capital management

What we offer (the primary location is Czechia; benefits in other countries may vary):
- Exciting work in a great team, global projects, international environment
- Opportunity to learn and grow professionally within the company globally
- Hybrid working model, flexible role pattern
- Pension and health insurance contributions
- Internal reward system plus referral programme
- 5 weeks annual leave, 5 sick days, 15 days of certified sick leave paid above statutory requirements annually, 40 paid hours annually for volunteering activities, 12 weeks of parental contribution
- Cafeteria for tax-free benefits according to your choice (meal vouchers, Lítačka, sport, culture, health, travel, etc.), Multisport Card
- Vodafone, Raiffeisen Bank, Foodora, and Mall.cz discount programmes
- Up-to-date laptop and iPhone
- Parking in the garage, showers, refreshments, library, music corner
- Competitive salary, incentive pay, and many more

Ready to take up the challenge? Apply now! Know anybody who might be interested? Refer this job! Current Employees apply HERE. Current Contingent Workers apply HERE.

Search Firm Representatives - Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Not Applicable
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Design, Data Engineering, Data Modeling, Data Science, Data Visualization, Machine Learning, Software Development, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/13/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date.
Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R321934

Posted 1 day ago

Apply

0.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Bangalore,Karnataka,India Job ID 763418 Join our Team As the technology firm that created the mobile world and a rich history of 145 years of building ground-breaking solutions and innovative technologies supported by 60,000+ patents, Ericsson has made it our business to make a mark. When joining our team at Ericsson you are empowered to learn, lead and perform at your best, shaping the future of technology. This is a place where you are welcomed as your own perfectly unique self, and celebrated for the skills, talent, and perspective you bring to the team. Ericsson Enterprise Wireless Solutions (BEWS) is the group responsible for leading Ericsson’s Enterprise Networking and Security business. Our growing product portfolio spans across wide area networks, local area networks and enterprise security. We are the #1 global market leader in Wireless-WAN based enterprise connectivity solutions and are growing fast in enterprise Private 5G networks and Secure Access Services Edge (SASE) solutions. You will Develop scientific methods, processes, and systems to extract knowledge or insights to drive the future of applied analytics. Mine and analyze data from company databases to drive optimization and improvement of product development and business strategies. Assess the effectiveness of new data sources and data gathering techniques. Develop custom data models and algorithms to apply to data sets. Use Generative AI and predictive modeling to enhance customer experiences, revenue generation and other business outcomes. You must have Solid understanding in Statistics, e.g., hypothesis formulation, hypothesis testing, descriptive analysis and data exploration. Ability to perform EDA and visualize the data. Aptitude and skills in Machine Learning, e.g., Natural Language Processing, Bayesian model, Deep Learning, and Large Language Models. Strong programming skills in Python, SQL. Strong understanding of DSA. Strong ambition to learn and implement current state of the art machine learning frameworks such as Scikit-Learn, TensorFlow, PyTorch and Spark. Familiarity with Linux/OS X command line, version control software (git), and general software development. Familiarity with APIs Experience in programming or scripting to enable ETL development. Familiarity with relational databases and Cloud (AWS). Understanding of Reinforcement Learning and Causal Inference will be preferred. Qualifications B.Tech or B.E. or M.Tech or MS in Computer Science / Masters in Mathematics / Statistics from a premium institute Minimum 4 – 6 years of experience in relevant role Why Ericsson Enterprise Wireless Solutions? At Ericsson Enterprise Wireless Solutions, we are one team - all in on inclusion. Celebrating the uniqueness of our individual team members across the globe helps us build diverse teams where we all can thrive. Our connected, community-focused culture enables each one of us to perform at our best and fully be ourselves. Please note: Ericsson Enterprise Wireless Solutions does not accept agency resumes and is not responsible for any fees related to unsolicited resumes. Please do not forward resumes to Ericsson Enterprise Wireless Solutions employees. Why join Ericsson? At Ericsson, you´ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what´s possible. To build solutions never seen before to some of the world’s toughest problems. You´ll be challenged, but you won’t be alone. 
You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?

Posted 2 days ago

Apply

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Source: LinkedIn

Job Title: AI/ML Engineer (Python + AWS + REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: Immediate joiners preferred
Work Arrangement: On-site (work from office)

Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities

AI/ML Development:
- Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.
- Build synthetic datasets using DCGANs for balancing.
- Fine-tune pre-trained models for customized encryption logic.
- Implement explainable classification logic for model outputs.
- Validate model performance using custom metrics and datasets.

API Development:
- Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls.
- Integrate APIs with AWS Lambda and Amazon API Gateway.

AWS Integration:
- Deploy and manage AI models on Amazon SageMaker for training and real-time inference.
- Use AWS Lambda for serverless backend compute.
- Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).
- Use AWS Cognito for secure user authentication and KMS for key management.
- Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience:
- Experience with CLIP model fine-tuning.
- Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
- Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
- Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS (SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS); Git, Docker, Postgres, OpenCV, OpenSSL (ref:hirist.tech)
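As a rough illustration of the CLIP-based classification and sensitivity-tagging step described above, the following sketch runs zero-shot tagging with a pre-trained CLIP model from Hugging Face; the candidate labels and the dummy image are placeholder assumptions, not the employer's actual encryption logic.

```python
# Rough sketch: zero-shot image tagging with a pre-trained CLIP model.
# Labels are placeholders; the dummy image stands in for an uploaded file.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a document containing personal data", "a generic landscape photo"]
image = Image.new("RGB", (224, 224))  # in practice, the uploaded image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")  # higher probability = stronger match for that label
```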

Posted 2 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

About Motadata Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure. As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data. Position Overview We are seeking a Senior Machine Learning Engineer to join our team, focused on enhancing our AIOps and IT Service Management (ITSM) product through the integration of cutting-edge AI/ML features and functionality. As part of our innovative approach to revolutionizing the IT industry, you will play a pivotal role in leveraging data analysis techniques and advanced machine learning algorithms to drive meaningful insights and optimize our product's performance. With a particular emphasis on end-to-end machine learning lifecycle management and MLOps, you will collaborate with cross-functional teams to develop, deploy, and continuously improve AI-driven solutions tailored to our customers' needs. From semantic search and AI chatbots to root cause analysis based on metrics, logs, and traces, you will have the opportunity to tackle diverse challenges and shape the future of intelligent IT operations. Role & Responsibility Lead the end-to-end machine learning lifecycle, understand the business problem statement, convert into ML problem statement, data acquisition, exploration, feature engineering, model selection, training, evaluation, deployment, and monitoring (MLOps). Should be able to lead the team of ML Engineers to solve the business problem and get it implemented in the product, QA validated and improvise based on the feedback from the customer. Collaborate with product managers to understand business needs and translate them into technical requirements for AI/ML solutions. Design, develop, and implement machine learning algorithms and models, including but not limited to statistics, regression, classification, clustering, and transformer-based architectures. Preprocess and analyze large datasets to extract meaningful insights and prepare data for model training. Build and optimize machine learning pipelines for model training and inference using relevant frameworks. Fine-tune existing models and/or train custom models to address specific use cases. Enhance the accuracy and performance of existing AI/ML models through monitoring, iterative refinement and optimization techniques. Collaborate closely with cross-functional teams to integrate AI/ML features seamlessly into our product, ensuring scalability, reliability, and maintainability. Document your work clearly and concisely for future reference and knowledge sharing within the team. Stay ahead of latest developments in machine learning research and technology and evaluate their potential applicability to our product roadmap. Skills And Qualifications Bachelor's or higher degree in Computer Science, Engineering, Mathematics, or related field. Minimum 5+ years of experience as a Machine Learning Engineer or similar role. Proficiency in data analysis techniques and tools to derive actionable insights from complex datasets. 
Solid understanding and practical experience with machine learning algorithms and techniques, including statistics, regression, classification, clustering, and transformer-based models. Hands-on experience with end-to-end machine learning lifecycle management and MLOps practices. Proficiency in programming languages such as Python and familiarity with at least one of the following: Java, Golang, .NET, Rust. Experience with machine learning frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn) and MLOps tools (e.g., MLflow, Kubeflow). Experience with ML.NET and other machine learning frameworks. Familiarity with natural language processing (NLP) techniques and tools. Excellent communication and teamwork skills, with the ability to effectively convey complex technical concepts to diverse audiences. Proven track record of delivering high-quality, scalable machine learning solutions in a production environment. (ref:hirist.tech)
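For context on the MLOps tooling named in this posting, here is a minimal sketch of experiment tracking with MLflow; the toy dataset, baseline model, and logged metric are illustrative assumptions rather than the employer's pipeline.

```python
# Minimal sketch: track a training run with MLflow (params, metric, model artifact).
# The synthetic dataset and logistic-regression baseline are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(max_iter=500)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("max_iter", 500)         # hyperparameters for later comparison
    mlflow.log_metric("accuracy", acc)        # evaluation metric for this run
    mlflow.sklearn.log_model(model, "model")  # fitted model stored as a run artifact
```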

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

We are looking for an AI/ML Engineer with expertise in Text-to-Speech (TTS) systems to train and optimize a Glow-TTS model for Indian languages, starting with Telugu or other Indian languages. The goal is to develop a high-quality, natural-sounding TTS system using datasets like AI4Bharat or other relevant sources.

Selected Intern's Day-to-day Responsibilities Include

Dataset preparation & preprocessing:
- Identify and curate high-quality Telugu or other Indian-language speech datasets (AI4Bharat, IndicTTS, or custom datasets)
- Clean, normalize, and preprocess text and audio data (phoneme alignment, noise removal, sample rate standardization)

Model training & optimization:
- Fine-tune Glow-TTS or Coqui-TTS (or a comparable neural TTS architecture) for Telugu or other Indian-language speech synthesis
- Ensure loss convergence by tuning hyperparameters (learning rate, batch size, duration predictors)
- Experiment with transfer learning from existing multilingual TTS models (if applicable)

GPU training & performance tuning (good to have):
- Optimize training for GPU efficiency (NVIDIA CUDA, mixed precision)
- Monitor validation loss, attention alignments, and speech quality (MOS testing)
- Debug training instability (vanishing gradients, overfitting, etc.)

Deployment & evaluation:
- Integrate the trained model into an inference pipeline (ONNX, TensorRT, or PyTorch runtime)
- Benchmark latency, speech quality, and speaker similarity against existing TTS solutions

About Company: Coinearth Technologies Pvt Ltd is a dynamic and innovative product-based company established in 2017. While some public records indicate a later incorporation date of 2020, their official communication states their founding year as 2017, suggesting a period of initial development and strategic planning before formal registration. Based in Hyderabad, Telangana, India, the company specializes in building and deploying cutting-edge applications, particularly in the Web3 and fintech sectors. Core Focus: Product Development and Deployment. Coinearth Technologies primarily operates as a product company, focusing on creating proprietary software solutions rather than offering traditional IT services. Their expertise lies in the entire lifecycle of app development, from conceptualization and design to robust deployment and ongoing maintenance.
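To ground the audio-side preprocessing mentioned above (sample-rate standardization and cleanup), here is a hedged sketch for a single clip using librosa and soundfile; the target sample rate, trim threshold, and file paths are assumptions rather than project specifics.

```python
# Illustrative sketch: standardize a clip's sample rate, trim silence, and
# peak-normalize it before TTS training. Paths and constants are placeholders.
import librosa
import numpy as np
import soundfile as sf

TARGET_SR = 22050  # a common sample rate for TTS corpora (assumption)

def preprocess_clip(in_path: str, out_path: str) -> None:
    audio, sr = librosa.load(in_path, sr=None)  # load at the native sample rate
    if sr != TARGET_SR:
        audio = librosa.resample(audio, orig_sr=sr, target_sr=TARGET_SR)
    audio, _ = librosa.effects.trim(audio, top_db=30)  # strip leading/trailing silence
    peak = max(float(np.abs(audio).max()), 1e-8)
    audio = audio / peak * 0.95  # peak-normalize with a little headroom
    sf.write(out_path, audio, TARGET_SR)

# Example usage (paths are placeholders):
# preprocess_clip("raw/utterance_0001.wav", "clean/utterance_0001.wav")
```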

Posted 2 days ago

Apply

0.0 years

1 - 1 Lacs

Hyderabad, Telangana

Remote

Source: Indeed

Job Description

About the Role: Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental building blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What you'll do here:
- Design and build backend components of our MLOps platform on AWS.
- Collaborate with geographically distributed cross-functional teams.
- Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-have skills:
- Experience with web development frameworks such as Flask, Django, or FastAPI.
- Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with unit and functional testing frameworks.
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Skills Required: Python development (Flask, Django, or FastAPI); WSGI and ASGI web servers (Gunicorn, Uvicorn, etc.); AWS.

Job Type: Contractual / Temporary. Contract length: 12 months. Pay: ₹100,000.00 - ₹150,000.00 per month. Location Type: Hybrid. Schedule: Day shift. Work Location: Hybrid remote in Hyderabad, Telangana.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Source: LinkedIn

Job Summary We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field 3–5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You’ll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing 
and concurrency to meet real-time SLAs on cloud and edge hardware Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows About Company Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building the solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college-life behind and the inception of Auriga was solely based on a desire to keep working together with friends and enjoying the extended college life. Who Has not Dreamt of Working with Friends for a Lifetime Come Join In Our Website - https://aurigait.com/
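As one concrete view of the TensorRT/ONNX Runtime optimization path this posting mentions, here is a minimal sketch of running an exported vision model with ONNX Runtime; the model file name and input shape are placeholder assumptions, not Auriga IT's deployment.

```python
# Minimal sketch: run an exported detection/classification model with ONNX Runtime.
# "detector.onnx" and the 640x640 NCHW input are placeholders for a real export.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy batch standing in for a preprocessed image tensor (batch, channels, H, W).
batch = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = session.run(None, {input_name: batch})
print([o.shape for o in outputs])  # raw output tensors, post-processed downstream (e.g., NMS)
```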

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description NIQ is looking for a Software Engineer to join our AI ML Engineering team. At NIQ, the Retail Measurement System (RMS) is a powerful analytics service that tracks product sales and market performance across a wide range of retail channels. It provides comprehensive, store-level data that helps businesses understand how their products are performing in the market, benchmark against competitors, and identify growth opportunities. Charlink and Jarvis models are used to predict product placements to its ideal hierarchy product tree. Learn more on the data driven approach to train models efficiently to predict placements based on Characteristics. Developing frontend applications to interact with ML models, integrating inference codes, and providing tools and patterns for enhancing our MLOps cycle. The ideal candidate has strong software design and programming experience, with some expertise in cloud computing, and big data technologies, and strong communication and management skills. You will be part of a diverse, flexible, and collaborative environment where you will be able to apply and develop your skills and knowledge working with unique data and exciting applications. Our Software Engineering platform is based in AngularJS, Java, React, Spring Boot, Typescript, Javascript, Sql and Snowflake, and we continue to adopt the best of breed in cloud-native, low-latency technologies. Who we are looking for: You have a strong entrepreneurial spirit and a thirst to solve difficult challenges through innovation and creativity with a strong focus on results You have a passion for data and the insights it can deliver You are intellectually curious with a broad range of interests and hobbies You take ownership of your deliverables You have excellent analytical communication and interpersonal skills You have excellent communication skills with both technical and non-technical audiences You can work with distributed teams situated globally in different geographies You want to work in a small team with a start-up mentality You can work well under pressure, prioritize work and be well organized. Relish tackling new challenges, paying attention to details, and, ultimately, growing professionally. Responsibilities Design, develop, and maintain scalable web applications using AngularJS for the front end and Java (Spring Boot) for the backend Collaborate closely with cross-functional teams to translate business requirements into technical solutions Optimize application performance, usability, and responsiveness Conduct code reviews, write unit tests, and ensure adherence to coding standards Troubleshoot and resolve software defects and production issues Contribute to architecture and technical documentation Qualifications 3–5 years of experience as a full stack developer Proficient in AngularJS(Version 12+), Typescript, Java, Spring Framework (especially Spring Boot) Experience with RESTful APIs and microservices architecture Solid understanding of HTML, CSS, JavaScript, and responsive web design Familiarity with relational databases (e.g., MySQL, PostgreSQL) Hands-on experience with version control systems (e.g., GitHub) and CI/CD tools Strong problem-solving abilities and attention to detail 3 - 5+ years of relevant software engineering experience Minimum B.S. 
degree in Computer Science, Computer Engineering, Information Technology or related field Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms Recharge and revitalize with help of wellness plans made for you and your family Plan your future with financial wellness tools Stay relevant and upskill yourself with career development opportunities Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Source: LinkedIn

We're seeking a Mid-Level Machine Learning Engineer to join our growing Data Science & Engineering team. In this role, you will design, develop, and deploy ML models that power our cutting-edge technologies like voice ordering, prediction algorithms, and customer-facing analytics. You'll collaborate closely with data engineers, backend engineers, and product managers to take models from prototyping through to production, continuously improving accuracy, scalability, and maintainability.
Essential Job Functions
Model Development: Design and build next-generation ML models using advanced tools like PyTorch, Gemini, and Amazon SageMaker, primarily on Google Cloud or AWS platforms.
Feature Engineering: Build robust feature pipelines; extract, clean, and transform large-scale transactional and behavioral data. Engineer features like time-based attributes, aggregated order metrics, and categorical encodings (LabelEncoder, frequency encoding).
Experimentation & Evaluation: Define metrics, run A/B tests, conduct cross-validation, and analyze model performance to guide iterative improvements. Train and tune regression models (XGBoost, LightGBM, scikit-learn, TensorFlow/Keras) to minimize MAE/RMSE and maximize R² (a brief illustrative sketch follows this listing).
Own the entire modeling lifecycle end-to-end, including feature creation, model development, testing, experimentation, monitoring, explainability, and model maintenance.
Monitoring & Maintenance: Implement logging, monitoring, and alerting for model drift and data-quality issues; schedule retraining workflows.
Collaboration & Mentorship: Collaborate closely with data science, engineering, and product teams to define, explore, and implement solutions to open-ended problems that advance the capabilities and applications of Checkmate; mentor junior engineers on best practices in ML engineering.
Documentation & Communication: Produce clear documentation of model architecture, data schemas, and operational procedures; present findings to technical and non-technical stakeholders.
Requirements
Academics: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field.
Experience: 5+ years of industry experience (or 1+ year post-PhD) building and deploying advanced machine learning models that drive business impact. Proven experience shipping production-grade ML models and optimization systems, including expertise in experimentation and evaluation techniques. Hands-on experience building and maintaining scalable backend systems and ML inference pipelines for real-time or batch prediction.
Programming & Tools: Proficient in Python and libraries such as pandas, NumPy, and scikit-learn; familiarity with TensorFlow or PyTorch. Hands-on with at least one cloud ML platform (AWS SageMaker, Google Vertex AI, or Azure ML).
Data Engineering: Hands-on experience with SQL and NoSQL databases; comfortable working with Spark or similar distributed frameworks. Strong foundation in statistics, probability, and ML algorithms like XGBoost/LightGBM; ability to interpret model outputs and optimize for business metrics. Experience with categorical encoding strategies and feature selection. Solid understanding of regression metrics (MAE, RMSE, R²) and hyperparameter tuning.
Cloud & DevOps: Proven skills deploying ML solutions in AWS, GCP, or Azure; knowledge of Docker, Kubernetes, and CI/CD pipelines.
Collaboration: Excellent communication skills; ability to translate complex technical concepts into clear, actionable insights.
Working Terms: Candidates must be flexible and work US hours (at least until 6 p.m. ET), which is essential for this role, and must have their own system/work setup for remote work.
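
Illustrative note (not part of the original posting): a minimal, hedged sketch of the regression workflow this listing describes: frequency-encoding a categorical feature, fitting an XGBoost regressor, and reporting MAE, RMSE, and R². The data frame, feature names, and hyperparameters are hypothetical; it assumes pandas, scikit-learn, and xgboost are installed.

```python
# Hedged sketch: train and evaluate a regression model on hypothetical order data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from xgboost import XGBRegressor

# Hypothetical transactional data: a time-based feature, a categorical ID, and a target.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hour_of_day": rng.integers(0, 24, 1000),
    "store_id": rng.integers(1, 20, 1000),
    "avg_order_value": rng.gamma(2.0, 10.0, 1000),
    "prep_minutes": rng.gamma(3.0, 5.0, 1000),  # target to predict
})

# Frequency-encode the categorical column (one of the encoding strategies mentioned above).
freq = df["store_id"].value_counts(normalize=True)
df["store_id_freq"] = df["store_id"].map(freq)

X = df[["hour_of_day", "store_id_freq", "avg_order_value"]]
y = df["prep_minutes"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBRegressor(n_estimators=200, max_depth=5, learning_rate=0.1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
r2 = r2_score(y_test, pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R²={r2:.3f}")
```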

Posted 2 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Data Scientist
Experience range: 3+ years
Location: CloudLex Pune Office (In-person, Monday to Friday, 9:30 AM – 6:30 PM)
Responsibilities
Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed.
Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
Develop data pipelines. Build robust data collection, preprocessing, and synthetic data generation pipelines to feed training and inference workflows. Implement data cleansing, annotation, and augmentation processes to ensure high-quality inputs for model training and evaluation.
Implement LLM-based agents and automation. Integrate generative AI agents (e.g., chatbots, AI copilots, content generators) into business processes to automate data processing and decision-making tasks. Use Retrieval-Augmented Generation (RAG) pipelines and external knowledge sources to enhance agent capabilities (a minimal RAG sketch follows this listing). Leverage multimodal inputs when applicable.
Optimize performance and safety. Continuously evaluate and improve model/system performance. Use GenAI-specific benchmarks and metrics (e.g., BLEU, ROUGE, TruthfulQA) to assess results, and iterate to optimize accuracy, latency, and resource efficiency. Implement safeguards and monitoring to mitigate issues like bias, hallucination, or inappropriate outputs.
Collaborate and document. Work closely with product managers, engineers, and other stakeholders to gather requirements and integrate AI solutions into production systems. Document data workflows, model architectures, and experimentation results. Maintain code and tooling (prompt libraries, model registries) to ensure reproducibility and knowledge sharing.
Required Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related quantitative field (or equivalent practical experience). A strong foundation in algorithms, statistics, and software engineering is expected.
Programming proficiency: Expert-level skills in Python, with hands-on experience in machine learning and deep learning frameworks (PyTorch, TensorFlow). Comfortable writing production-quality code and using version control, testing, and code review workflows.
Generative model expertise: Demonstrated ability to build, fine-tune, and deploy large-scale generative models. Familiarity with transformer architectures and generative techniques (LLMs, diffusion models, GANs). Experience working with model repositories and fine-tuning frameworks (Hugging Face, etc.).
LLM and agent frameworks: Strong understanding of LLM-based systems and agent-oriented AI patterns. Experience with frameworks like LangGraph/LangChain or similar multi-agent platforms. Knowledge of agent communication standards (e.g., MCP/Agent Protocol) to enable interoperability between AI agents.
AI integration and MLOps: Experience integrating AI components with existing systems via APIs and services. Proficiency in retrieval-augmented generation (RAG) setups, vector databases, and prompt engineering. Familiarity with machine learning deployment and MLOps tools (Docker, Kubernetes, MLflow, KServe, etc.) for managing end-to-end automation and scalable workflows.
Familiarity with GenAI tools: Hands-on experience with state-of-the-art GenAI models and APIs (OpenAI GPT, Anthropic Claude, etc.) and with popular libraries (Hugging Face Transformers, LangChain, etc.). Awareness of the current GenAI tooling ecosystem and best practices.
Soft skills: Excellent problem-solving and analytical abilities. Strong communication and teamwork skills to collaborate across data, engineering, and business teams. Attention to detail and a quality-oriented mindset. (See Ideal Candidate below for more on personal attributes.)
Ideal Candidate
Innovative problem-solver: You are a creative thinker who enjoys tackling open-ended challenges. You have a solutions-oriented mindset and proactively experiment with new ideas and techniques.
Systems thinker: You understand how different components (data, models, services) fit together in a large system. You can architect end-to-end AI solutions with attention to reliability, scalability, and integration points.
Collaborative communicator: You work effectively in multidisciplinary teams. You are able to explain complex technical concepts to non-technical stakeholders and incorporate feedback. You value knowledge sharing and mentorship.
Adaptable learner: The generative AI landscape evolves rapidly. You are passionate about staying current with the latest research and tools. You embrace continuous learning and are eager to upskill and try new libraries or platforms.
Ethical and conscientious: You care about the real-world impact of AI systems. You take responsibility for the quality and fairness of models, and proactively address concerns like data privacy, bias, and security.
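
Illustrative note (not part of the original posting): a minimal, hedged sketch of the Retrieval-Augmented Generation loop referenced above. The embed() and generate() functions are placeholders for a real embedding model and LLM client (for example via LangChain, Hugging Face, or a hosted API); the documents and query are hypothetical.

```python
# Hedged RAG sketch: index documents, retrieve by cosine similarity, then ground the LLM prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Placeholder LLM call; swap in an OpenAI/Anthropic/local-model client."""
    return f"[model answer based on a {len(prompt)}-character prompt]"

# 1. Index a small corpus (in production this would live in a vector database).
docs = [
    "Clause 4.2: liability is capped at the annual contract value.",
    "Clause 7.1: either party may terminate with 30 days written notice.",
    "Clause 9.3: disputes are resolved by arbitration.",
]
doc_vecs = np.stack([embed(d) for d in docs])

# 2. Retrieve the most relevant chunks for a query.
query = "How much notice is required to terminate the agreement?"
q = embed(query)
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
top_chunks = [docs[i] for i in sims.argsort()[::-1][:2]]

# 3. Ground the generation on the retrieved context.
prompt = "Answer using only this context:\n" + "\n".join(top_chunks) + f"\n\nQuestion: {query}"
print(generate(prompt))
```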

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Voice Processing Specialist
Location: Remote / Jaipur
Job Type: Full-time / Contract
Experience: 3+ years of expertise in voice cloning, transformation, and synthesis technologies
Job Summary
We are seeking a talented and motivated Voice Processing Specialist to join our team and lead the development of innovative voice technologies. The ideal candidate will have a deep understanding of speech synthesis, voice cloning, and transformation techniques. You will play a critical role in designing, implementing, and deploying state-of-the-art voice models that enhance the naturalness, personalization, and flexibility of speech in AI-powered applications. This role is perfect for someone passionate about advancing human-computer voice interaction and creating lifelike, adaptive voice systems.
Key Responsibilities
Design, develop, and optimize advanced deep learning models for voice cloning, text-to-speech (TTS), voice conversion, and real-time voice transformation.
Implement speaker embedding and voice identity preservation techniques to support accurate and high-fidelity voice replication.
Work with large-scale and diverse audio datasets, including preprocessing, segmentation, normalization, and data augmentation to improve model generalization and robustness.
Collaborate closely with data scientists, ML engineers, and product teams to integrate developed voice models into production pipelines.
Fine-tune neural vocoders and synthesis architectures for better voice naturalness and emotional range.
Stay current with the latest advancements in speech processing, AI voice synthesis, and deep generative models through academic literature and open-source projects.
Contribute to the development of tools and APIs for deploying models on cloud and edge environments with high efficiency and low latency.
Required Skills
Strong understanding of speech signal processing, speech synthesis, and automatic speech recognition (ASR) systems.
Hands-on experience with voice cloning frameworks such as Descript Overdub, Coqui TTS, SV2TTS, Tacotron, FastSpeech, or similar.
Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow.
Experience working with speech libraries and toolkits such as ESPnet, Kaldi, Librosa, or SpeechBrain.
In-depth knowledge of mel spectrograms, vocoder architectures (e.g., WaveNet, HiFi-GAN, WaveGlow), and their role in speech synthesis (a small feature-extraction sketch follows this listing).
Familiarity with REST APIs, model deployment, and cloud-based inference systems using platforms like AWS, Azure, or GCP.
Ability to optimize models for performance in real-time or low-latency environments.
Preferred Qualifications
Experience in real-time voice transformation, including pitch shifting, timing modification, or emotion modulation.
Exposure to emotion-aware speech synthesis, multilingual voice models, or prosody modeling.
Background in audio DSP (Digital Signal Processing) and speech analysis techniques.
Previous contributions to open-source speech AI projects or publications in relevant domains.
Why Join Us
You will be part of a fast-moving, collaborative team working at the forefront of voice AI innovation. This role offers the opportunity to make a significant impact on products that reach millions of users, helping to shape the future of interactive voice experiences.
Skills: automatic speech recognition (ASR), vocoder architectures, voice cloning, voice processing, data, real-time voice transformation, speech synthesis, PyTorch, TensorFlow, voice conversion, speech signal processing, audio DSP, REST APIs, Python, cloud deployment, transformation, mel spectrograms, deep learning
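
Illustrative note (not part of the original posting): a small, hedged sketch of the mel-spectrogram feature extraction that sits upstream of the TTS and vocoder work described above. The file path is hypothetical; the parameters mirror common TTS settings and assume librosa is installed.

```python
# Hedged sketch: compute log-mel features of the kind an acoustic model or vocoder consumes.
import librosa
import numpy as np

audio_path = "speaker_001.wav"                  # hypothetical recording
y, sr = librosa.load(audio_path, sr=22050)      # resample to a typical TTS rate

mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log-mel spectrogram

print(log_mel.shape)  # (80, num_frames); a neural vocoder such as HiFi-GAN inverts frames like these back to audio
```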

Posted 2 days ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Description
Function, Responsibility Level: Insurance Operations, Senior Executive
Reports to: Assistant Manager/Lead Assistant Manager/Manager – Insurance Operations
Basic Function (Property Survey)
Responsible for carrying out reviews of property survey reports submitted by Independent Consultants (ICs) and various other tasks in a manner that is consistent with company policies, procedures, and standards.
Follow the appropriate operating procedure
Meet quality goals
Meet office time service goals
Monitor e-mails and respond in a timely manner
Send reports to clients
Handle additional duties as assigned
Competencies
Excellent written communication skills, with an ability to think and react to situations confidently
Domain experience in Homeowner/Commercial Insurance (preferred but not mandatory)
Must be assertive, persistent, and result-oriented, with the ability to work in a team environment and adhere to department guidelines
Knowledgeable in Microsoft Word, Excel, and PowerPoint
Skills Requirement
Technical Skills (Minimum)
Proficient with computer systems and software including Microsoft Excel, Outlook, and Word
Typing speed of at least 30 WPM with 90% accuracy
Soft Skills (Minimum)
Good Communication Skills – Able to express thoughts and ideas in an accurate and understandable manner, through verbal and written formats, with internal and external contacts
High Levels of Comprehension – Able to understand and follow information received from field staff or from the customer; able to identify the main idea, cause and effect, fact and opinion, make inferences, compare and contrast, sequence information, and draw conclusions based on the information acquired or provided
Customer Focus – Identifies and understands the (internal or external) customer's needs; detail oriented with excellent follow-up skills
Teamwork – Works effectively with the team to accomplish goals, takes action that respects the needs of others and those of the organization; effective interpersonal skills
Adaptability – Maintains effectiveness despite changes to situations, tasks, responsibilities, and people
Professionalism – Conducting oneself with responsibility, integrity, accountability, and excellence
Work Standards – Sets own high standards of performance
Education Requirements
Minimum of a bachelor's degree in any field
Work Experience Requirements
Minimum 1 year of work experience in BPO, preferably in P&C Insurance

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

On-site

GlassDoor logo

Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Ashutosh Panda
Sponsorship Available: No
Relocation Assistance Available: No
Job Description
Roles and Responsibilities:
Analyze, design, and develop new processes, programs, and configuration, taking into account the complex inter-relationships of system-wide components.
Provide system-wide support and maintenance for a complex system or business process.
Maintain and modify existing processes, programs, and configuration through use of current IT toolsets.
Troubleshoot, investigate, and persist. Develop solutions to problems with unknown causes where precedents do not exist, by applying logic and inference with persistence and experience to see the problem through to resolution.
Confer with the stakeholder community on problem determination. Make joint analysis decisions on cause and correction methods.
Perform tasks (as necessary) to ensure data integrity and system stability.
Complete life-cycle testing (unit and integration) of all work processes (including cross-platform interaction).
Create applications and databases with a main focus on Data Collection Systems, supporting analysis, data capture, design tools, library functions, reporting, request systems, and specification systems used in the tire development process.
Knowledge, Skills, Abilities:
Developing an understanding of skills needed in other disciplines, of a second business process area, and of basic cost/benefit analysis methods.
3+ years of strong development experience with Java and Spring Boot.
3+ years of strong experience working in a cloud-based environment (AWS, event-driven architecture).
Strong experience working with microservices and SQL Server.
Good to have some knowledge of the Salesforce application.
Basic organizational, communication, and time management skills.
Participate as an active team member (effective listening and collaboration skills).
Achieve all IT objectives through use of approved standards and guidelines.
Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.
Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
#LI-Hybrid

Posted 2 days ago

Apply

10.0 years

3 - 9 Lacs

Hyderābād

On-site

GlassDoor logo

About Celestial AI As Generative AI continues to advance, the performance drivers for data center infrastructure are shifting from systems-on-chip (SOCs) to systems of chips. In the era of Accelerated Computing, data center bottlenecks are no longer limited to compute performance, but rather the system's interconnect bandwidth, memory bandwidth, and memory capacity. Celestial AI's Photonic Fabric™ is the next-generation interconnect technology that delivers a tenfold increase in performance and energy efficiency compared to competing solutions. The Photonic Fabric™ is available to our customers in multiple technology offerings, including optical interface chiplets, optical interposers, and Optical Multi-chip Interconnect Bridges (OMIB). This allows customers to easily incorporate high bandwidth, low power, and low latency optical interfaces into their AI accelerators and GPUs. The technology is fully compatible with both protocol and physical layers, including standard 2.5D packaging processes. This seamless integration enables XPUs to utilize optical interconnects for both compute-to-compute and compute-to-memory fabrics, achieving bandwidths in the tens of terabits per second with nanosecond latencies. This innovation empowers hyperscalers to enhance the efficiency and cost-effectiveness of AI processing by optimizing the XPUs required for training and inference, while significantly reducing the TCO2 impact. To bolster customer collaborations, Celestial AI is developing a Photonic Fabric ecosystem consisting of tier-1 partnerships that include custom silicon/ASIC design, system integrators, HBM memory, assembly, and packaging suppliers. ABOUT THE ROLE Celestial AI is looking for a highly motivated and detail-oriented Software Quality Assurance (SQA) Manager to join our team. As an SQA Manager, you will lead a small team of engineers and play a critical role in ensuring the quality of our software products. You will be responsible for managing the team, as well as designing, developing, and executing test plans and test cases, identifying and reporting defects, and working closely with developers to ensure that our software meets the highest standards. This is a hands-on leadership position that requires both technical depth and leadership skills. ESSENTIAL DUTIES AND RESPONSIBILITIES Test Strategy & Planning: Develop comprehensive test plans, strategies, and methodologies specifically tailored for embedded firmware, covering functional, non-functional (performance, power, memory), reliability, stress, and security aspects. Test Case Design & Execution: Design, document, and execute detailed test cases for firmware components, drivers, communication protocols, and system-level interactions with hardware. Hardware-Firmware Integration Testing: Lead and perform testing at the hardware-firmware interface, ensuring seamless and correct interaction between embedded software and physical components (e.g., sensors, actuators, external memory, peripherals like SPI, I2C, UART). Automation Development: Design, develop, and maintain automated test scripts and test harnesses using scripting languages (e.g., Python, Bash) and specialized tools to enhance test coverage and efficiency, particularly for regression testing. Defect Management: Identify, document, track, and verify resolution of software defects using bug tracking systems. Provide clear and concise bug reports with steps to reproduce and relevant logs. 
Root Cause Analysis: Collaborate with firmware developers to perform in-depth root cause analysis of defects, often involving debugging on embedded targets using JTAG/SWD, oscilloscopes, logic analyzers, and other hardware debugging tools.
Performance & Resource Analysis: Monitor and analyze firmware performance metrics (CPU usage, memory footprint, power consumption, boot time, latency) and validate against specified requirements.
Regression & Release Qualification: Own the regression testing process and contribute significantly to the final release qualification of firmware builds.
Process Improvement: Champion and contribute to the continuous improvement of firmware development and quality assurance processes, methodologies, and best practices.
QUALIFICATIONS
Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related technical field.
10 years of experience in Software Quality Assurance, with a minimum of 5 years directly focused on firmware or embedded software testing.
Strong understanding of embedded systems concepts, including microcontrollers/microprocessors, real-time operating systems (RTOS), interrupts, memory management, and common peripheral interfaces (GPIO, I2C, SPI, UART, ADC, DAC, Timers).
Proficiency in C/C++ for embedded development, with the ability to read, understand, and debug firmware code.
Experience with scripting languages for test automation (e.g., Python, Bash).
Hands-on experience with hardware debugging tools such as JTAG/SWD debuggers, oscilloscopes, logic analyzers, and multimeters.
Familiarity with version control systems (e.g., Git) and bug tracking tools (e.g., Jira, Azure DevOps).
Experience with test management tools (e.g., TestRail, Zephyr).
Excellent problem-solving skills, with a methodical and analytical approach to identifying and isolating defects.
PREFERRED QUALIFICATIONS
Experience with continuous integration/continuous deployment (CI/CD) pipelines for embedded systems.
Knowledge of networking protocols (TCP/IP).
Experience with Hardware-in-the-Loop (HIL) testing, simulation, or emulation environments.
LOCATION: Hyderabad, India
We offer great benefits (health, vision, dental, and life insurance) and a collaborative, continuous-learning work environment, where you will get a chance to work with smart and dedicated people engaged in developing the next-generation architecture for high performance computing. Celestial AI Inc. is proud to be an equal opportunity workplace and is an affirmative action employer.
#LI-Onsite
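
Illustrative note (not part of the original posting): a minimal, hedged sketch of the kind of automated firmware regression test described above, assuming the device exposes a simple text command protocol over a debug UART. The port name, commands, and expected responses are hypothetical; it assumes pytest and pyserial are available.

```python
# Hedged sketch: pytest-based regression checks against a device under test over UART.
import pytest
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical debug UART

@pytest.fixture
def uart():
    with serial.Serial(PORT, baudrate=115200, timeout=2) as conn:
        yield conn

def send_cmd(conn, cmd: str) -> str:
    conn.reset_input_buffer()
    conn.write((cmd + "\r\n").encode())
    return conn.readline().decode(errors="replace").strip()

def test_firmware_reports_version(uart):
    # Release-qualification style check: the build must report a version string.
    resp = send_cmd(uart, "VER?")
    assert resp.startswith("FW "), f"unexpected version response: {resp!r}"

def test_sensor_self_test_passes(uart):
    # Functional check against a peripheral behind the firmware (e.g. an I2C sensor).
    resp = send_cmd(uart, "SELFTEST SENSOR0")
    assert resp == "OK"
```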

Posted 2 days ago

Apply

10.0 years

1 - 1 Lacs

Gurgaon

On-site

GlassDoor logo

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? Enterprise Essentials team within Financial Data Engineering is hiring for a highly skilled Senior Engineering Manager with expertise in Python/Java Full Stack Development, Generative AI, Data Engineering, and Natural Language Processing. The Senior Engineering Manager will be working on creating new capabilities and modernizing existing ones in the domain of Global Tax, Finance and GSM Conversational AI platforms, and Enterprise essential products like Reconciliations, ERRM, Balancing and Control, Concur, and Ariba. The ideal candidate will be responsible for designing, developing, and maintaining scalable AI-driven applications and data pipelines. This role requires a deep understanding of NLP techniques, modern AI frameworks, data engineering best practices, and full-stack development to build innovative solutions that leverage machine learning and AI technologies. Key Responsibilities: Oversees and mentors a team of Software Engineering colleagues, enabling a culture of continuous learning, growth opportunities, and inclusivity for all individual colleagues and teams. Provides direct leadership and coaching to teams, supporting training and development of best practices. Manages resource allocation, project timeline, and budget for Software Engineering projects, ensuring alignment with organizational goals. Collaborates with senior leadership to hire top talent for the team, ensuring a high-functioning and cohesive unit, implementing strategies for talent retention and professional development Leads the development, deployment, support, and monitoring of software across various environments. Collaborates with senior leadership and cross-functional teams to define and implement technology roadmaps and strategies. Leads teams to innovate and automate processes, driving efficiency and scalability in production environments. Drives continuous improvement initiatives, leveraging metrics and feedback to improve team performance and software quality. Collaborates and co-creates effectively with teams in product and the business to align technology initiatives with business objectives. Full Stack Development: Design and develop scalable and secure applications using Java / Python framework, and front-end technologies such as React. Implement and optimize microservices, APIs, and server-side logic for AI-based platforms. Develop and maintain cloud-based, containerized applications (Docker, Kubernetes). Design, optimize, and deploy high-performance systems ensuring minimal latency and maximum throughput. Architect solutions for real-time processing, ensuring low-latency data retrieval and high system availability. Troubleshoot and enhance system performance, optimizing for large-scale, real-time, distributed and COTS applications. Generative AI & Machine Learning: Develop, and deploy innovative solutions in Tax, and finance using ML & Generative AI models leveraging frameworks such as langchain. 
Implement NLP algorithms for language understanding, text summarization, information extraction, and conversational agents. Create pipelines for training and deploying AI models efficiently in production environments. Collaborate with data scientists to optimize and scale AI/NLP solutions. Integrate AI/ML models into applications, ensuring proper scaling, optimization, and monitoring of models in production. Design solutions that enable fast and efficient inference for real-time AI applications.
Data Engineering: Build and maintain data pipelines to support AI/ML model development and deployment. Design and develop ETL processes to ingest, clean, and process large-scale structured and unstructured datasets (a small illustrative ETL sketch follows this listing). Work with data storage and retrieval solutions like SQL/NoSQL databases, data lakes, and cloud storage (GCP, AWS, or Azure). Ensure data integrity, security, and performance of the data pipelines.
Collaboration & Leadership: Lead cross-functional teams to deliver high-quality, AI-driven products. Lead and mentor engineers and collaborate with product managers, data scientists, and business stakeholders to ensure alignment with project goals. Keep up to date with the latest advancements in AI, NLP, and data engineering, and provide technical guidance to the team. Take accountability for the team achieving its goals. Drive the team's strategy and prioritize initiatives. Influence team members by challenging the status quo, demonstrating risk taking, and implementing creative ideas. Be a productivity multiplier for your team by analysing your workflow and contributing to make the team more effective and productive, demonstrating faster and stronger results. Mentor and guide team members to success within the team.
Minimum Qualifications
Education: Bachelor's or Master's in Computer Science, Engineering, Data Science, or a related field.
10+ years of experience in software engineering, in the architecture and design (architecture, design patterns, reliability, and scaling) of new and existing systems.
Strong experience in developing full stack software in Java or Python, data engineering, and AI/NLP solutions, and a demonstrated ability to quickly learn new languages.
Following standard engineering excellence standards while building software; leveraging code assistants like GitHub Copilot; writing great prompts for generating high-quality code, tests, and other artefacts like documentation.
Proficiency in data engineering tools and frameworks like GCP BigQuery, Apache Spark, and Kafka.
Proficiency with containerization (Docker, Kubernetes), CI/CD pipelines, and version control.
Experience with RESTful API design, microservices architecture, and cloud platforms (AWS / GCP / Azure).
Preferred Qualifications
Experience working with large-scale AI systems in production environments.
Familiarity with modern AI research and developments in Generative AI and NLP.
Strong understanding of DevOps and Infrastructure-as-Code (Terraform, Ansible).
Proven track record of delivering AI-driven products that scale.
Understanding of MLOps practices will be a plus.
Familiarity with Generative AI models and frameworks (e.g., GPT, DALL-E).
Knowledge of machine learning frameworks (TensorFlow, PyTorch, Scikit-learn) will be a plus.
We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
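
Illustrative note (not part of the original posting): a small, hedged sketch of the batch ETL step mentioned in the Data Engineering responsibilities: ingest raw records, clean and derive features, and persist a curated table. The file names, columns, and rules are hypothetical; writing Parquet assumes pyarrow or fastparquet is available.

```python
# Hedged ETL sketch: extract, transform, and load a hypothetical transactions file with pandas.
import pandas as pd

raw = pd.read_csv("raw_transactions.csv")  # hypothetical extract

# Transform: normalise types, drop obviously bad rows, derive a month column.
raw["posted_at"] = pd.to_datetime(raw["posted_at"], errors="coerce")
clean = (
    raw.dropna(subset=["posted_at", "amount"])
       .query("amount > 0")
       .assign(month=lambda d: d["posted_at"].dt.to_period("M").astype(str))
)

# Load: aggregate into an analytics-ready table and persist it.
monthly = clean.groupby(["account_id", "month"], as_index=False)["amount"].sum()
monthly.to_parquet("curated/monthly_spend.parquet", index=False)
```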

Posted 2 days ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

GlassDoor logo

Senior Data Scientist (Deep Learning and Artificial Intelligence)
Job Description
We aim to bring about a new paradigm in medical image diagnostics: providing intelligent, holistic, ethical, explainable, and patient-centric care. We are looking for innovative problem solvers who love solving problems. We want people who can empathize with the consumer, understand business problems, and design and deliver intelligent products. People who are looking to extend artificial intelligence into unexplored areas. Your primary focus will be in applying deep learning and artificial intelligence techniques to the domain of medical image analysis.
Responsibilities
Selecting features, building, and optimizing classifier engines using deep learning techniques.
Understanding the problem and applying suitable image processing techniques.
Use techniques from artificial intelligence/deep learning to solve supervised and unsupervised learning problems.
Understanding and designing solutions for complex problems related to medical image analysis by using deep learning, object detection, and image segmentation.
Recommend and implement best practices around the application of statistical modeling.
Create, train, test, and deploy various neural networks to solve complex problems.
Develop and implement solutions to fit business problems, which may include applying algorithms from a standard statistical tool, deep learning, or custom algorithm development.
Understanding the requirements and designing solutions and architecture in accordance with them is important.
Participate in code reviews, sprint planning, and Agile ceremonies to drive high-quality deliverables.
Design and implement scalable data science architectures for training, inference, and deployment pipelines.
Ensure code quality, readability, and maintainability by enforcing software engineering best practices within the data science team.
Optimize models for production, including quantization, pruning, and latency reduction for real-time inference.
Drive the adoption of versioning strategies for models, datasets, and experiments (e.g., using MLflow, DVC).
Contribute to the architectural design of data platforms to support large-scale experimentation and production workloads.
Skills and Qualifications
Strong software engineering skills in Python (or other languages used in data science) with emphasis on clean code, modularity, and testability.
Excellent understanding of, and hands-on experience with, deep learning techniques such as ANN, CNN, RNN, LSTM, Transformers, VAEs, etc.
Must have experience with the TensorFlow or PyTorch framework in building, training, testing, and deploying neural networks.
Experience in solving problems in the domain of computer vision.
Knowledge of data, data augmentation, data curation, and synthetic data generation.
Ability to understand the complete problem and design the solution that best fits all the constraints.
Knowledge of the common data science and deep learning libraries and toolkits such as Keras, pandas, scikit-learn, NumPy, SciPy, OpenCV, etc.
Good applied statistical skills, such as distributions, statistical testing, regression, etc.
Exposure to Agile/Scrum methodologies and collaborative development practices.
Experience with the development of RESTful APIs. Knowledge of libraries like FastAPI and the ability to apply them to deep learning architectures is essential.
Excellent analytical and problem-solving skills with a good attitude, and keenness to adapt to evolving technologies.
Experience with medical image analysis will be an advantage.
Experience designing and building ML architecture components (e.g., feature stores, model registries, inference servers).
Solid understanding of software design patterns, microservices, and cloud-native architectures.
Expertise in model optimization techniques (e.g., ONNX conversion, TensorRT, model distillation); a brief export sketch follows this listing.
Education: BE/B Tech; MS/M Tech will be a bonus
Experience: 3+ Years
Job Type: Full-time
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
Do you have experience leading teams in AI development?
Do you have experience creating software architecture for production environments in AI applications?
Experience:
Deep learning: 3 years (Required)
Computer vision: 3 years (Required)
PyTorch: 3 years (Required)
Work Location: In person
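
Illustrative note (not part of the original posting): a hedged sketch of the model-optimization step this listing mentions, exporting a trained PyTorch vision model to ONNX so it can be served with ONNX Runtime or converted further (for example to TensorRT). The ResNet-18 stand-in, file name, and input size are assumptions; it assumes a recent torch/torchvision.

```python
# Hedged sketch: export a PyTorch classifier to ONNX for optimized inference.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for a trained imaging model
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB image at the assumed input size
torch.onnx.export(
    model,
    dummy,
    "classifier.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
# The resulting classifier.onnx can be benchmarked with onnxruntime before deployment.
```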

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Summary
Gen AI, Agentic AI, Project Management, Python.
Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software). Model evaluation and selection: benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA).
Responsibilities
Strategic & Leadership-Level GenAI Skills
AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software).
Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Understanding trade-offs: latency, cost, accuracy, context length. Using tools like LM Evaluation Harness, OpenLLM Leaderboard, etc.
Enterprise-Grade RAG Systems: Designing Retrieval-Augmented Generation pipelines. Using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex. Optimizing chunking, embedding strategies, and retrieval quality.
Security, Privacy & Governance: Implementing data privacy, access control, and audit logging. Understanding risks: prompt injection, data leakage, model misuse. Aligning with frameworks like NIST AI RMF, the EU AI Act, or ISO/IEC 42001.
Cost Optimization & Monitoring: Estimating and managing GenAI inference costs. Using observability tools (e.g., Arize, WhyLabs, PromptLayer). Token usage tracking and prompt optimization.
Advanced Technical Skills
Model Fine-Tuning & Distillation: Fine-tuning open-source models using PEFT, LoRA, QLoRA (a brief LoRA sketch follows this listing). Knowledge distillation for smaller, faster models. Using tools like Hugging Face, Axolotl, or DeepSpeed.
Multi-Agent Systems: Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph). Task decomposition, memory, and tool orchestration.
Toolformer & Function Calling: Integrating LLMs with external tools, APIs, and databases. Designing tool-use schemas and managing tool routing.
Team & Product Leadership
GenAI Product Thinking: Identifying use cases with high ROI. Balancing feasibility, desirability, and viability. Leading GenAI PoCs and MVPs.
Mentoring & Upskilling Teams: Training developers on prompt engineering, LangChain, etc. Establishing GenAI best practices and code reviews. Leading internal hackathons or innovation sprints.
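
Illustrative note (not part of the original posting): a brief, hedged sketch of parameter-efficient fine-tuning (LoRA via the PEFT library) as referenced under Model Fine-Tuning & Distillation. The base model and target module names are illustrative only; a real run would add a tokenized dataset, a training loop (Trainer or accelerate), and evaluation.

```python
# Hedged sketch: wrap a small causal LM with LoRA adapters so only a tiny fraction of weights train.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "facebook/opt-350m"  # small open model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base)   # would tokenize the fine-tuning dataset
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT-style models
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # typically well under 1% of the base model's weights
```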

Posted 2 days ago

Apply

3.0 years

3 - 6 Lacs

Jaipur

On-site

GlassDoor logo

Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.
Key Responsibilities
Generative AI & LLM Engineering
Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate limiting (a minimal FastAPI sketch follows this listing)
Computer Vision Development
Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
Handle data challenges such as augmentation, domain adaptation, and semi-supervised learning, and mitigate model drift in production
MLOps & Deployment
Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)
Cross-Functional Collaboration
Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery
Required Qualifications
You must be proficient in at least one tool from each category below:
LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
Inference Serving: Triton Inference Server; FastAPI or Flask
Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
Monitoring & Observability: Prometheus; Grafana
Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio)
Programming Languages: Python (required); C++ or Go (preferred)
Additionally:
Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
3–5 years of professional experience shipping both generative and vision-based AI models in production
Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
Excellent verbal and written communication skills
Typical Domain Challenges You'll Solve
LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows
About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based solely on a desire to keep working together with friends and enjoying an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!
https://www.aurigait.com/
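
Illustrative note (not part of the original posting): a minimal, hedged sketch of a FastAPI inference endpoint of the kind this listing describes. The predict() function is a placeholder for a real model call, and the route, request fields, and labels are hypothetical; production concerns such as batching, authentication, and rate limiting are omitted.

```python
# Hedged sketch: a tiny FastAPI service wrapping a placeholder model call.
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DetectRequest(BaseModel):
    image_url: str

class DetectResponse(BaseModel):
    labels: list[str]
    latency_ms: float

def predict(image_url: str) -> list[str]:
    """Placeholder for the actual CV/LLM inference call."""
    return ["person", "bicycle"]

@app.post("/detect", response_model=DetectResponse)
async def detect(req: DetectRequest) -> DetectResponse:
    start = time.perf_counter()
    labels = predict(req.image_url)
    return DetectResponse(labels=labels, latency_ms=(time.perf_counter() - start) * 1000)

# Run locally with: uvicorn main:app --reload   (assuming this file is main.py)
```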

Posted 2 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Company Description
At StatusNeo, we are a global consulting firm specializing in AI, automation, and cloud-first digital solutions. We empower businesses with cutting-edge product & platform engineering to enhance user experience, design, and functionality. As advocates for digital transformation, we guide CXOs worldwide to embrace Digital, Data & AI, and DevSecOps. Our exceptional work environment, recognized with the Great Place To Work certification, fosters innovation and collaboration.
Role Description
This is a full-time on-site role as a Solution Architect - Gen AI at StatusNeo located in Gurgaon. The Solution Architect will be responsible for designing and implementing innovative AI, automation, and cloud solutions. They will collaborate with clients to understand their needs, develop consulting strategies, lead software development projects, integrate solutions, and optimize business processes.
Qualifications
· Architect and deliver end-to-end GenAI platforms using AWS (ECS, RDS, Lambda, S3) with real-time LLM orchestration and RAG workflows.
· Design and implement Python microservices with Redis caching and vector search using Qdrant or Redis Vector (a small Redis-caching sketch follows this listing).
· Integrate GenAI models and APIs (OpenAI, HuggingFace, LangChain, LangGraph), including containerized inference services and secured API pipelines.
· Lead frontend architecture using Next.js (TypeScript) with SSR and scalable client-server routing.
· Own infrastructure automation and DevOps using Terraform, AWS CDK, GitHub Actions, and Docker-based CI/CD pipelines.
· Manage and optimize data architecture across Snowflake, PostgreSQL (RDS), and S3 for both analytical and transactional needs.
· Knowledge of data pipelines and data quality; experience transitioning legacy systems to modular, cloud-native deployments.
· Champion engineering culture, leading design/code reviews, mentoring team members, and aligning technical priorities with product strategy.
· Ensure compliance, encryption, and data protection via AWS security best practices (IAM, Secrets Manager, WAF, API Gateway).
Ideal Candidate Profile
· Proven track record as a Solution Architect / Tech Lead on large-scale Data & AI products with GenAI integration.
· Deep knowledge of AWS cloud services, microservices architecture, and full-stack deployment.
· Strong understanding of the ML lifecycle and productionization of LLMs / GenAI APIs.
· Practical experience with design thinking, breaking down problems from user need to system delivery.
· Excellent leadership, communication, and mentoring skills to drive team alignment and technical execution.
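
Illustrative note (not part of the original posting): a small, hedged sketch of the Redis response-cache pattern mentioned above: cache LLM completions keyed by a hash of the prompt so repeated queries skip the model call. The call_llm() function, key prefix, and connection details are assumptions; it assumes the redis-py client.

```python
# Hedged sketch: cache LLM answers in Redis with a TTL.
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def call_llm(prompt: str) -> str:
    """Placeholder for the real LLM/API invocation."""
    return f"answer for: {prompt[:40]}"

def cached_completion(prompt: str, ttl_seconds: int = 3600) -> str:
    key = "llm:" + hashlib.sha256(prompt.encode()).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit                       # cache hit: no model call
    answer = call_llm(prompt)
    r.setex(key, ttl_seconds, answer)    # cache miss: store with an expiry
    return answer
```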

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies