
22889 ML Jobs - Page 33

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 3.0 years

2 - 8 Lacs

New Palasia, Indore, Madhya Pradesh

On-site

Job Description

Job Title: Full Stack Python Developer
Location: Indore
Experience: 3–6 Years
Employment Type: Full-Time

Job Summary:
We are seeking a skilled Full Stack Python Developer with strong backend/API development experience and working knowledge of modern frontend frameworks. The ideal candidate should be proficient in Python (Django, Flask, or FastAPI), database design, and cloud integrations. Exposure to AI/ML is a plus.

Responsibilities:
- Develop and maintain scalable REST/GraphQL APIs using Python frameworks.
- Design and manage databases (PostgreSQL, MySQL, MongoDB).
- Write clean, testable, and well-documented code.
- Integrate with third-party APIs and cloud platforms (AWS/GCP/Azure).
- Collaborate with frontend teams (React/Angular/Vue).
- Participate in code reviews, sprint planning, and architecture discussions.
- Optimize application performance, security, and scalability.

Skills:
- Strong Python development experience (3–6 years).
- Familiarity with modern frontend and database technologies.
- Hands-on experience with Git, Docker, CI/CD, and cloud services.
- Good problem-solving and communication skills.
- AI/ML exposure is a plus.

About the Company:
5 Exceptions Software Solutions Private Limited is a premier offshore software development company led by a team with over 15 years of industry experience. Specializing in a wide array of technology domains, we excel in delivering high-quality products, websites, and mobile applications tailored to client needs. Our expertise spans various technology areas, enabling us to provide innovative solutions across multiple platforms. At 5 Exceptions, we foster a work environment that encourages both technical and professional growth, supporting our team members in achieving their full potential.

For more information, visit our website: https://5exceptions.com/
Interested candidates can share their CV by email: career@5exceptions.com
or via WhatsApp: 9329796665 / 7987118432 / 7780322967

Job Types: Full-time, Permanent
Pay: ₹200,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Internet reimbursement, Provident Fund
Schedule: Monday to Friday
Ability to commute/relocate: New Palasia, Indore, Madhya Pradesh: reliably commute or be willing to relocate with an employer-provided relocation package (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 3 years (Preferred)
Language: English (Preferred)
Work Location: In person

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role:
We are seeking a Senior Front-End Developer who thrives in building systems from scratch. This role involves designing and developing highly responsive, scalable, and intuitive web applications. You'll work closely with cross-functional teams to bring cutting-edge AI products to life while ensuring a seamless user experience. You should be passionate about modern front-end technologies, comfortable working with CI/CD pipelines, and have hands-on experience deploying applications on AWS.

Key Responsibilities:
- Build from the ground up: architect, develop, and deploy responsive and feature-rich web applications using React.js, Next.js, or other modern frameworks.
- Collaborate deeply with UI/UX designers, back-end engineers, and product teams to translate designs and user stories into functional web interfaces.
- Implement CI/CD pipelines to ensure rapid and reliable delivery of code into production.
- Deploy and manage front-end applications on AWS, ensuring high availability and scalability.
- Develop and maintain SDKs and reusable UI components to accelerate product development.
- Optimize applications for maximum speed and scalability across browsers and devices.
- Participate in Agile ceremonies including sprint planning, stand-ups, and retrospectives.
- Write clean, well-documented, and maintainable code.
- Stay current with emerging technologies and propose ways to improve architecture and workflows.

Required Skills & Qualifications:
- 3-5 years of front-end development experience in product-driven environments.
- Strong proficiency in HTML, CSS, JavaScript, and TypeScript.
- Expertise in React.js (Next.js is a plus) and experience with Redux, Flux, or similar state management libraries.
- Familiarity with other frameworks like Angular, Vue.js, or Node.js is an advantage.
- Solid understanding of RESTful APIs, GraphQL, and integrating APIs with front-end applications.
- Experience with CI/CD tools (e.g., GitHub Actions, Jenkins, CircleCI).
- Hands-on experience with AWS services (e.g., S3, CloudFront, Lambda, Amplify) for deploying and managing applications.
- Familiarity with Docker and containerized deployments is a plus.
- Strong problem-solving skills, attention to detail, and a user-focused mindset.
- Excellent communication and collaboration skills in a startup environment.

What We're Looking For:
- Someone who loves building things from scratch and thrives in a startup setting.
- A developer who cares deeply about user experience and understands how back-end choices impact the front-end.
- A team player who values clean architecture, scalable systems, and developer productivity.

Preferred Skills (Good to Have):
- Experience with SCSS/SASS, styled-components, and design tools like Figma.
- Familiarity with micro-frontend architecture.
- Exposure to AI/ML product ecosystems or interest in learning them.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Designation: AI/ML Engineer
Location: Gurugram
Experience: 3+ years
Budget: Up to ₹35 LPA
Industry: AI Product

Role and Responsibilities:
- Model Development: Design, train, test, and deploy machine learning models using frameworks like PyTorch and TensorFlow, specifically for virtual try-on applications with a focus on draping and fabric simulation.
- Task-Specific Modeling: Build models for tasks such as Natural Language Processing (NLP), Speech-to-Text (STT), and Text-to-Speech (TTS) that integrate seamlessly with computer vision applications in the virtual try-on domain.
- Image Processing: Implement advanced image processing techniques including enhancement, compression, restoration, filtering, and manipulation to improve the accuracy and realism of draping in virtual try-on systems.
- Feature Extraction & Segmentation: Apply feature extraction methods, image segmentation techniques, and draping algorithms to create accurate and realistic representations of garments on virtual models.
- Machine Learning Pipelines: Develop and maintain ML pipelines for data ingestion, processing, and transformation to support large-scale deployments of virtual try-on solutions.
- Deep Learning & Draping: Build and train convolutional neural networks (CNNs) for image recognition, fabric draping, and texture mapping tasks crucial to the virtual try-on experience.
- AI Fundamentals: Leverage a deep understanding of AI fundamentals, including machine learning, computer vision, draping algorithms, and generative AI (Gen AI) techniques to drive innovation in virtual try-on technology.
- Programming: Code proficiently in Python and work with other programming languages like Java, C++, or R as required.
- Cloud Integration: Utilize cloud-based AI platforms such as AWS, Azure, or Google Cloud to deploy and scale virtual try-on solutions, with a focus on real-time processing and rendering.
- Data Analysis: Perform data analysis and engineering to optimize the performance and accuracy of AI models, particularly in the context of fabric draping and garment fitting.
- Continuous Learning: Stay informed about the latest trends and developments in machine learning, deep learning, computer vision, draping technologies, and generative AI (Gen AI), applying them to virtual try-on projects.

Skills Required:
- Experience: Minimum of 5 years in Computer Vision Engineering or a similar role, with a focus on virtual try-on, draping, or related applications.
- Programming: Strong programming skills in Python, with extensive experience in PyTorch and TensorFlow.
- Draping & Fabric Simulation: Hands-on experience with draping algorithms, fabric simulation, and texture mapping techniques.
- Data Handling: Expertise in data pre-processing, feature engineering, and data analysis to support high-quality model development, especially for draping and virtual garment fitting.
- Deep Neural Networks & Gen AI: Extensive experience working with Deep Neural Networks, Generative Adversarial Networks (GANs), Conditional GANs, Transformers, and other generative AI techniques relevant to virtual try-on and draping.
- Advanced Techniques: Proficiency with cutting-edge techniques like Stable Diffusion, Latent Diffusion, inpainting, text-to-image, and image-to-image models, and their application in computer vision and virtual try-on technology.
- Algorithm Knowledge: Strong understanding of machine learning algorithms and techniques, including deep learning, supervised and unsupervised learning, reinforcement learning, natural language processing, and generative AI.

Posted 3 days ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Introduction: A Career at HARMAN - Harman Tech Solutions (HTS)
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN HTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity's needs. Empower the company to create new digital business models, enter new markets, and improve customer experiences.

About the Role
You will be responsible for driving the strategic direction of our AI and machine learning practice along with other key leaders. This role involves leading internal AI/ML projects, shaping the technology roadmap, and overseeing client-facing projects including solution development, RFPs, presentations, analyst interactions, partnership development, etc. The ideal candidate will have a strong technical background in AI/ML, exceptional leadership skills, and the ability to balance internal and external project demands effectively. In this strategic role, you will be responsible for shaping the future of AI/ML within our organization, driving innovation, and ensuring the successful implementation of AI/ML solutions that deliver tangible business outcomes.

What You Will Do

Drive Innovation, Differentiation & Growth
- Develop and implement a comprehensive AI/ML strategy aligned with our business goals and objectives.
- Own the growth of the COE and influence client revenues through the AI practice.
- Identify and prioritize high-impact opportunities for applying AI/ML across various business units, departments, and functions.
- Lead the selection, deployment, and management of AI/ML tools, platforms, and infrastructure.
- Oversee the design, development, and deployment of AI/ML solutions.
- Define, differentiate, and strategize new AI/ML services/offerings and create reference architecture assets.
- Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc.
- Guide and inspire the organization about the business potential and opportunities around AI/ML.
- Network with domain experts.
- Develop and implement ethical AI practices and governance standards.
- Monitor and measure the performance of AI/ML initiatives, demonstrating ROI through cost savings, efficiency gains, and improved business outcomes.
- Oversee the development, training, and deployment of AI/ML models and solutions.
- Collaborate with client teams to understand their business challenges and needs.
- Develop and propose AI/ML solutions tailored to client-specific requirements.
- Influence client revenues through innovative solutions and thought leadership.
- Lead client engagements from project initiation to deployment.
- Build and maintain strong relationships with key clients and stakeholders.

Build Re-usable Methodologies, Pipelines & Models
- Create data pipelines for more efficient and repeatable data science projects.
- Experience working across multiple deployment environments, including cloud, on-premises, and hybrid; multiple operating systems; and containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
- Coding knowledge and experience in languages including R, Python, Scala, MATLAB, etc.
- Experience with popular databases including SQL, MongoDB, and Cassandra.
- Experience with data discovery/analysis platforms such as KNIME, RapidMiner, Alteryx, Dataiku, H2O, Microsoft Azure ML, Amazon SageMaker, etc.
- Expertise in solving problems related to computer vision, text analytics, predictive analytics, optimization, social network analysis, etc.
- Experience with regression, random forests, boosting, trees, hierarchical clustering, transformers, convolutional neural networks (CNNs), recurrent neural networks (RNNs), graph analysis, etc.

People & Interpersonal Skills
- Build and manage a high-performing team of AI/ML engineers, data scientists, and other specialists.
- Foster a culture of innovation and collaboration within the AI/ML team and across the organization.
- Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment.
- Candidates should be confident, energetic self-starters with strong communication skills.
- Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire.
- Provide technical guidance and mentorship to the AI/ML team.
- Collaborate with other directors, managers, and stakeholders across the company to align the AI/ML vision and goals.
- Communicate and present the AI/ML capabilities and achievements to clients and partners.
- Stay updated on the latest trends and developments in the AI/ML domain.

What You Need
- 12+ years of experience in the information technology industry with a strong focus on AI/ML, having led, driven, and set up an AI/ML practice in IT services or niche AI/ML organizations.
- 10+ years of relevant experience in successfully launching, planning, and executing advanced data science projects.
- A master's or PhD degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics.
- In-depth specialization in text analytics, image recognition, graph analysis, or deep learning is required.
- Adept in agile methodologies and well-versed in applying MLOps methods to the construction of ML pipelines.
- Demonstrated ability to manage data science projects and diverse teams.
- Experience in creating AI/ML strategies and services, and scaling capabilities from a technology, platform, and people standpoint.
- Experience in working on proposals, presales activities, business development, and overseeing delivery of AI/ML projects.
- Experience in building solutions with AI/ML elements in one or more domains: Industrial, Healthcare, Retail, Communication.
- Be an accelerator to grow the practice through technologies, capabilities, and teams, both organically and inorganically.

What We Offer
- Access to employee discounts on world-class HARMAN/Samsung products (JBL, Harman Kardon, AKG, etc.).
- Professional development opportunities through HARMAN University's business and leadership academies.
- A flexible work schedule with a culture encouraging work-life integration and collaboration in a global environment.
- An inclusive and diverse work environment that fosters and encourages professional and personal development.
- Tuition reimbursement.
- "Be Brilliant" employee recognition and rewards program.

What Makes You Eligible
- Be willing to travel up to 25%, domestic and international, if required.
- Successfully complete a background investigation as a condition of employment.

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you, all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners, and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!

Posted 3 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Ping Identity:
At Ping Identity, we believe in making digital experiences both secure and seamless for all users, without compromise. We call this digital freedom. And it's not just something we provide our customers. It's something that inspires our company. People don't come here to join a culture that's built on digital freedom. They come to cultivate it. Our intelligent, cloud identity platform lets people shop, work, bank, and interact wherever and however they want. Without friction. Without fear.

While protecting digital identities is at the core of our technology, protecting individual identities is at the core of our culture. We champion every identity. One of our core values, Respect Individuality, reminds us to celebrate differences so you are empowered to bring your authentic self to work.

We're headquartered in Denver, Colorado, and we have offices and employees around the globe. We serve the largest, most demanding enterprises worldwide, including more than half of the Fortune 100. At Ping Identity, we're changing the way people and businesses think about cybersecurity, digital experiences, and identity and access management.

Ping Identity is seeking a Senior Software Engineer to play a key role in migrating our legacy SaaS platform (V1) to our next-generation identity security platform (V2). This is a high-impact opportunity to shape the future of our product and significantly enhance the customer experience. As a Senior Software Engineer, you will help define the migration strategy and contribute directly to the hands-on implementation. Your work will ensure customers experience a seamless transition while unlocking advanced features such as orchestration, identity verification, risk protection, digital credentials, and AI-powered security.

What You'll Do
- Take part in the technical migration of core components from the legacy platform (V1) to the next-generation Ping Identity platform (V2).
- Design and implement a proxy service for SAML/OIDC endpoints, ensuring smooth interoperability between V1 and V2 systems.
- Develop, own, and enhance migration tools, automation, and scalable processes to support efficient and secure transitions.
- Collaborate cross-functionally with Product Management, Engineering, and Support to shape and deliver a world-class migration experience.
- Contribute to platform enhancements in identity orchestration, risk-based access, and intelligent access control.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5–8 years of experience in full-stack Java development, including backend (Spring Boot) and UI components. Familiarity with legacy UI frameworks such as Apache Wicket is a plus.
- Experience with Node.js, particularly for building backend services or tools in a microservices or proxy-based architecture, is a plus.
- Strong understanding of IAM protocols such as OIDC, OAuth2, and SAML.
- Proven experience building and scaling multi-tenant SaaS applications using a microservices architecture.
- Proficiency in CI/CD pipelines, DevOps practices, and containerized deployments (e.g., Docker, Kubernetes).
- Hands-on experience developing migration tooling in complex enterprise environments.
- Exposure to cloud infrastructure such as AWS, GCP, or Azure.

Nice to Have
- Knowledge or experience applying AI/ML in identity security or access management.
- Excellent problem-solving and debugging skills.
- Ability to work effectively across teams in Agile environments.

Life at Ping:
We believe in and facilitate a flexible, collaborative work environment. We're growing quickly, but remain true to the innovative, can-do startup values that got us here. Most importantly, we keep hiring talented, smart, fun, and genuinely nice people because that's who we want to succeed with every day. Here are just a few of the things that make Ping special:
- A company culture that empowers you to do your best work.
- Employee Resource Groups that create a sense of belonging for everyone.
- Regular company and team bonding events.
- Competitive benefits and perks.
- Global volunteering and community initiatives.

Our Benefits:
- Generous PTO & Holiday Schedule
- Parental Leave
- Progressive Healthcare Options
- Retirement Programs
- Opportunity for Education Reimbursement
- Commuter Offset (specific locations)

Ping is the collective sum of all our individual experiences, backgrounds, and influences, and we pride ourselves in growing and learning together. We are committed to building an inclusive and diverse environment where everyone's individuality is respected and everyone has an Identity. In recruiting for new colleagues, we welcome the unique contributions you can bring and encourage you to be your best self.

We are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected Veteran status, or any other characteristic protected by applicable federal, state, or local law.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Ping Identity:
At Ping Identity, we believe in making digital experiences both secure and seamless for all users, without compromise. We call this digital freedom. And it's not just something we provide our customers. It's something that inspires our company. People don't come here to join a culture that's built on digital freedom. They come to cultivate it. Our intelligent, cloud identity platform lets people shop, work, bank, and interact wherever and however they want. Without friction. Without fear.

While protecting digital identities is at the core of our technology, protecting individual identities is at the core of our culture. We champion every identity. One of our core values, Respect Individuality, reminds us to celebrate differences so you are empowered to bring your authentic self to work.

We're headquartered in Denver, Colorado, and we have offices and employees around the globe. We serve the largest, most demanding enterprises worldwide, including more than half of the Fortune 100. At Ping Identity, we're changing the way people and businesses think about cybersecurity, digital experiences, and identity and access management.

Ping Identity is seeking a Software Engineer to play a key role in migrating our legacy SaaS platform (V1) to our next-generation identity security platform (V2). This is a high-impact opportunity to shape the future of our product and significantly enhance the customer experience. As a Software Engineer, you will help define the migration strategy and contribute directly to the hands-on implementation. Your work will ensure customers experience a seamless transition while unlocking advanced features such as orchestration, identity verification, risk protection, digital credentials, and AI-powered security.

What You'll Do
- Take part in the technical migration of core components from the legacy platform (V1) to the next-generation Ping Identity platform (V2).
- Design and implement a proxy service for SAML/OIDC endpoints, ensuring smooth interoperability between V1 and V2 systems.
- Develop, own, and enhance migration tools, automation, and scalable processes to support efficient and secure transitions.
- Collaborate cross-functionally with Product Management, Engineering, and Support to shape and deliver a world-class migration experience.
- Contribute to platform enhancements in identity orchestration, risk-based access, and intelligent access control.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4–5 years of experience in full-stack Java development, including backend (Spring Boot) and UI components. Familiarity with legacy UI frameworks such as Apache Wicket is a plus.
- Experience with Node.js, particularly for building backend services or tools in a microservices or proxy-based architecture, is a plus.
- Strong understanding of IAM protocols such as OIDC, OAuth2, and SAML.
- Proven experience building and scaling multi-tenant SaaS applications using a microservices architecture.
- Proficiency in CI/CD pipelines, DevOps practices, and containerized deployments (e.g., Docker, Kubernetes).
- Hands-on experience developing migration tooling in complex enterprise environments.
- Exposure to cloud infrastructure such as AWS, GCP, or Azure.

Nice to Have
- Knowledge or experience applying AI/ML in identity security or access management.
- Excellent problem-solving and debugging skills.
- Ability to work effectively across teams in Agile environments.

Life at Ping:
We believe in and facilitate a flexible, collaborative work environment. We're growing quickly, but remain true to the innovative, can-do startup values that got us here. Most importantly, we keep hiring talented, smart, fun, and genuinely nice people because that's who we want to succeed with every day. Here are just a few of the things that make Ping special:
- A company culture that empowers you to do your best work.
- Employee Resource Groups that create a sense of belonging for everyone.
- Regular company and team bonding events.
- Competitive benefits and perks.
- Global volunteering and community initiatives.

Our Benefits:
- Generous PTO & Holiday Schedule
- Parental Leave
- Progressive Healthcare Options
- Retirement Programs
- Opportunity for Education Reimbursement
- Commuter Offset (specific locations)

Ping is the collective sum of all our individual experiences, backgrounds, and influences, and we pride ourselves in growing and learning together. We are committed to building an inclusive and diverse environment where everyone's individuality is respected and everyone has an Identity. In recruiting for new colleagues, we welcome the unique contributions you can bring and encourage you to be your best self.

We are an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected Veteran status, or any other characteristic protected by applicable federal, state, or local law.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

YOUR IMPACT
Are you passionate about leveraging cutting-edge AI/ML techniques, including Large Language Models, to solve complex, mission-critical problems in a dynamic environment? Do you want to contribute to safeguarding a leading global financial institution?

OUR IMPACT
We are Compliance Engineering, a global team of engineers and scientists dedicated to preventing, detecting, and mitigating regulatory and reputational risks across Goldman Sachs. We build and operate a suite of platforms and applications that protect the firm and its clients.

We Offer
- Access to vast amounts of structured and unstructured data to fuel your AI/ML models, including textual data suitable for LLM applications.
- The opportunity to work with the latest AI/ML technologies, including Large Language Models and cloud computing platforms.
- A collaborative environment where you can learn from and contribute to a team of experienced engineers and scientists.
- The chance to make a tangible impact on the firm's ability to manage risk and maintain its reputation.

Within Compliance Engineering, we are seeking an experienced AI/ML Engineer to join our Engineering team. This role will focus on developing, deploying, and maintaining AI/ML models, with a particular emphasis on leveraging Large Language Models.

How You Will Fulfill Your Potential
As a member of our team, you will:
- Develop and deploy AI/ML models, including those based on Large Language Models (LLMs), using large-scale structured and unstructured data, addressing complex and impactful business challenges.
- Design and build scalable infrastructure for machine learning, including feature engineering pipelines and model deployment frameworks optimized for LLMs.
- Develop, productionize, and maintain AI/ML models, ensuring their accuracy, reliability, and performance, with a focus on LLM-based solutions.
- Design and execute AI/ML experiments, iteratively tuning features, prompts, and modeling approaches to optimize model performance, and meticulously document findings and results, especially in the context of LLMs.
- Collaborate with ML researchers to accelerate the adoption of cutting-edge AI/ML techniques and models, with a focus on advancements in Large Language Models.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Contribute to the design and implementation of AI/ML model monitoring and alerting systems.
- Work with stakeholders to understand their needs and translate them into technical requirements, with an emphasis on identifying opportunities for LLM-based solutions.

Qualifications
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science or a similar field of study.
- 4+ years of hands-on experience building scalable machine learning systems.
- Solid coding skills and strong Computer Science fundamentals (algorithms, data structures, software design).
- Extensive experience with Machine Learning and Deep Learning toolkits (TensorFlow, PyTorch, scikit-learn, Hugging Face).
- Demonstrated experience with Large Language Models (LLMs), including model fine-tuning, prompt engineering, and evaluation techniques.
- Experience architecting and deploying ML applications on the cloud, including containerization (Docker, Kubernetes).
- Experience with distributed technologies such as Scala, PySpark, Iceberg, HDFS file formats (Avro, Parquet), AWS/GCP, and big data feature engineering.
- Experience in system design, including evaluating the pros and cons of database choices and defining schemas for data storage.

Experience in some of the following is desired and can set you apart from other candidates:
- Experience with agentic frameworks (e.g., LangChain, AutoGen) and their application to real-world problems.
- Experience with model interpretability techniques.
- Prior experience in code reviews and architecture design for distributed systems.
- Experience with data governance and data quality principles.
- Familiarity with financial regulations and compliance requirements.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital, and ideas to help our clients, shareholders, and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities, and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings, and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.

We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

© The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer.

Posted 3 days ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
The Customer Excellence Advisory Lead (CEAL) team aims to enable customers to fully leverage their data by offering top-tier architectural guidance and design. As part of the Oracle Analytics Service Excellence organization, our team includes Solution Architects who specialize in Oracle Analytics Cloud, Oracle Analytics Server, and Fusion Data Intelligence. Our main goal is to ensure the successful adoption of Oracle Analytics. We engage with customers and partners globally, building trust in Oracle Analytics. We also collaborate with Product Management to enhance product offerings and share our insights through blogs, webinars, and demonstrations. The candidate will collaborate with strategic FDI customers and partners, guiding them towards an optimized implementation and crafting a go-live plan focused on achieving high usage.
Career Level - IC4

Responsibilities
Proactively recognize customer requirements, uncover unaddressed needs, and develop potential solutions across various customer groups.
Assist in shaping intricate product and program strategies based on customer interactions, and effectively implement solutions and projects that scale to complex, multi-enterprise environments.
Collaborate with customers and/or internal stakeholders to communicate the strategy, synchronize the timeline for solution implementation, provide updates, and adjust plans promptly as objectives evolve.
Prepare for complex product- or solution-related inquiries or challenges that customers may present.
Gather and convey detailed product insights driven by customer needs and requirements.
Promote understanding of customer complexities and the value propositions of various programs (e.g., speaking at events, team meetings, and product reviews) to key internal stakeholders.

Primary Skills:
Must possess over 4 years of experience with OBIA and Oracle Analytics.
Must have robust knowledge of Analytics RPD design, development, and deployment.
Should possess a strong understanding of BI/data warehouse analysis, design, development, and testing.
Extensive experience in data analysis, data profiling, data quality, data modeling, and data integration.
Proficient in crafting complex queries and stored procedures using Oracle SQL and Oracle PL/SQL.
Skilled in developing visualizations and user-friendly workbooks.
Previous experience in developing solutions that incorporate AI and ML using Analytics.
Experienced in enhancing report performance.

Desirable Skills:
Experience with Fusion Applications (ERP/HCM/SCM/CX).
Ability to design and develop ETL interfaces, packages, load plans, user functions, variables, and sequences in ODI to support both batch and real-time data integrations.
Experience with multiple cloud platforms.
Certified on FDI, OAC, and ADW.

Qualifications
Career Level - IC4

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 days ago


0 years

0 Lacs

India

Remote

Artificial Intelligence & Machine Learning Intern (Remote | 3 Months)
Company: INLIGHN TECH
Location: Remote
Duration: 3 Months
Stipend (Top Performers): ₹15,000
Perks: Certificate | Letter of Recommendation | Hands-on Training

About INLIGHN TECH
INLIGHN TECH empowers students and recent graduates through hands-on, project-based internships. Our AI & ML Internship is tailored to develop your expertise in building intelligent systems using real-world datasets and machine learning algorithms.

Role Overview
As an AI & ML Intern, you’ll work on projects involving model development, data preprocessing, and algorithm implementation. You'll gain practical experience applying artificial intelligence concepts to solve real business problems.

Key Responsibilities
Collect and preprocess structured and unstructured data
Implement supervised and unsupervised machine learning algorithms
Work on deep learning models using frameworks like TensorFlow or PyTorch
Evaluate model performance and tune hyperparameters
Develop intelligent solutions and predictive systems
Collaborate with peers on AI-driven projects

Requirements
Pursuing or recently completed a degree in Computer Science, AI, Data Science, or a related field
Proficient in Python and libraries such as Pandas, NumPy, Scikit-learn
Familiar with machine learning and deep learning concepts
Knowledge of TensorFlow, Keras, or PyTorch is a plus
Strong mathematical and analytical thinking
Enthusiastic about AI/ML innovations and eager to learn

What You’ll Gain
Real-world experience developing AI and ML models
Internship Completion Certificate
Letter of Recommendation for high-performing interns
Portfolio of AI/ML projects for career building
Potential full-time offer based on performance
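The "evaluate model performance and tune hyperparameters" loop in this posting can be sketched in a few lines. The dataset, the toy threshold classifier, and the candidate grid below are all invented for illustration; real work would use a proper model and validation split.

```python
# Minimal sketch of hyperparameter tuning: try each candidate value from a
# grid, score it, and keep the best. Data and grid are made up for the demo.

# Toy 1-D dataset: (feature, label) pairs where labels flip around 0.5.
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.9, 1)]

def accuracy(threshold: float) -> float:
    """Accuracy of a classifier that predicts 1 when x >= threshold."""
    correct = sum(1 for x, y in data if (x >= threshold) == bool(y))
    return correct / len(data)

# Grid search: evaluate every candidate threshold and keep the best scorer.
grid = [0.3, 0.5, 0.8]
best = max(grid, key=accuracy)
print(best, accuracy(best))  # 0.5 separates the toy classes perfectly
```

The same pattern scales up directly: swap the toy classifier for a real model, the grid for a richer parameter space, and the accuracy function for cross-validated scoring.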

Posted 3 days ago


0 years

0 Lacs

India

Remote

Data Science Intern
📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is a fast-growing edtech startup offering hands-on, project-based virtual internships designed to prepare students and fresh graduates for today’s tech-driven industry. The Data Science Internship focuses on real-world applications of machine learning, statistics, and data engineering to solve meaningful problems.

🚀 Internship Overview
As a Data Science Intern, you'll explore large datasets, build models, and deliver predictive insights. You'll work with machine learning algorithms, perform data wrangling, and communicate your results with visualizations and reports.

🔧 Key Responsibilities
Collect, clean, and preprocess structured and unstructured data
Apply machine learning models for regression, classification, clustering, and NLP
Work with tools like Python, Jupyter Notebook, Scikit-learn, TensorFlow, and Pandas
Conduct exploratory data analysis (EDA) to discover trends and insights
Visualize data using Matplotlib, Seaborn, or Power BI/Tableau
Collaborate with other interns and mentors in regular review and feedback sessions
Document your work clearly and present findings to the team

✅ Qualifications
Pursuing or recently completed a degree in Data Science, Computer Science, Statistics, or a related field
Proficiency in Python and understanding of libraries such as Pandas, NumPy, Scikit-learn
Basic knowledge of machine learning algorithms and statistical concepts
Familiarity with data visualization tools and SQL
Problem-solving mindset and keen attention to detail
Enthusiastic about learning and applying data science to real-world problems

🎓 What You’ll Gain
Hands-on experience working with real datasets and ML models
A portfolio of projects that demonstrate your data science capabilities
Internship Certificate upon successful completion
Letter of Recommendation for top-performing interns
Opportunity for a Full-Time Offer based on performance
Exposure to industry-standard tools, workflows, and best practices
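The exploratory data analysis (EDA) step this internship describes starts with simple numeric summaries. The sample values below are invented; the sketch uses only the standard library so it stays self-contained.

```python
# Tiny EDA sketch: summarize a numeric column with standard-library
# statistics. The daily_sales values are made up for illustration.
import statistics

daily_sales = [120, 135, 128, 150, 170, 165, 142]

summary = {
    "count": len(daily_sales),
    "mean": statistics.mean(daily_sales),
    "median": statistics.median(daily_sales),
    "stdev": round(statistics.stdev(daily_sales), 2),
    "min": min(daily_sales),
    "max": max(daily_sales),
}
print(summary)
```

With Pandas, the same summary comes from `describe()`; the point is that EDA begins by characterizing center, spread, and range before any modeling.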

Posted 3 days ago


1.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Location: Vadodara
Type: Full-time / Internship
Duration (for interns): Minimum 3 months
Stipend/CTC: Based on experience and role

About Gururo
Gururo is an industry leader in practical, career-transforming education. With a mission to empower professionals and students through real-world skills, we specialize in project management, leadership development, and emerging technologies. Join our fast-paced, mission-driven team and work on AI/ML-powered platforms that impact thousands globally.

Who Can Apply?
Interns: Final-year students or recent graduates from Computer Science, Data Science, or related fields, with a strong passion for AI/ML.
Freshers: 0–1 years of experience with academic or internship exposure to machine learning projects.
Experienced Professionals: 1+ years of hands-on experience in AI/ML roles with a demonstrated portfolio or GitHub contributions.

Key Responsibilities
Design and develop machine learning models and AI systems for real-world applications
Clean, preprocess, and analyze large datasets using Python and relevant libraries
Build and deploy ML pipelines using tools like Scikit-learn, TensorFlow, and PyTorch
Work on NLP, computer vision, or recommendation systems based on project needs
Evaluate models with appropriate metrics and fine-tune for performance
Collaborate with product, engineering, and design teams to integrate AI into platforms
Maintain documentation, participate in model audits, and ensure ethical AI practices
Use version control (Git), cloud deployment (AWS, GCP), and experiment tracking tools (MLflow, Weights & Biases)

Must-Have Skills
Strong Python programming skills
Hands-on experience with one or more ML frameworks (Scikit-learn, TensorFlow, or PyTorch)
Good understanding of core ML algorithms (classification, regression, clustering, etc.)
Familiarity with data wrangling libraries (Pandas, NumPy) and visualization (Matplotlib, Seaborn)
Experience working with Jupyter Notebooks and version control (Git)
Basic understanding of model evaluation techniques and metrics

Good to Have (Optional)
Exposure to deep learning, NLP (transformers, BERT), or computer vision (OpenCV, CNNs)
Experience with cloud ML platforms (AWS SageMaker, GCP AI Platform, etc.)
Familiarity with Docker, APIs, and ML model deployment workflows
Knowledge of MLOps tools and CI/CD for AI systems
Kaggle profile, published papers, or open-source contributions

What You’ll Gain
Work on real-world AI/ML problems in the fast-growing EdTech space
Learn from senior data scientists and engineers in a mentorship-driven environment
Certificate of Internship/Experience & Letter of Recommendation (for interns)
Opportunities to lead research-driven AI initiatives at scale
Flexible work hours and performance-based growth opportunities
End-to-end exposure, from data collection to model deployment in production
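The "basic understanding of model evaluation techniques and metrics" asked for above reduces, for classification, to counts of true/false positives and negatives. The label vectors below are invented for the example.

```python
# Sketch of basic classification metrics: precision, recall, and F1 computed
# from predicted vs. true labels. The two label vectors are made up.

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # hits
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # misses

precision = tp / (tp + fp)          # of predicted positives, how many were right
recall = tp / (tp + fn)             # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, round(f1, 3))
```

Libraries like scikit-learn provide these as `precision_score`, `recall_score`, and `f1_score`, but knowing the counts they derive from is the "basic understanding" the posting means.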

Posted 3 days ago


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions for providing better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical, highly innovative and long for the opportunity to build solutions to challenging problems that directly impact the company's bottom line, we want to talk to you.

Major Responsibilities
3+ years of experience building machine learning models for business applications
PhD, or Master's degree and 2+ years of applied research experience
Knowledge of programming languages such as C/C++, Python, Java or Perl
You have expertise in one of the applied science disciplines, such as machine learning, natural language processing, computer vision, or deep learning.
You are able to use reasonable assumptions, data, and customer requirements to solve problems.
You initiate the design, development, execution, and implementation of smaller components with input and guidance from team members.
You work with SDEs to deliver solutions into production to benefit customers or an area of the business.
You assume responsibility for the code in your components. You write secure, stable, testable, maintainable code with minimal defects.
You understand basic data structures, algorithms, model evaluation techniques, performance, and optimality tradeoffs.
You follow engineering and scientific best practices. You get your designs, models, and code reviewed. You test your code and models thoroughly.
You participate in team design, scoping, and prioritization discussions. You are able to map a business goal to a scientific problem and map business metrics to technical metrics.

Basic Qualifications
3+ years of experience building models for business applications
PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
Experience in patents or publications at top-tier peer-reviewed conferences or journals
Experience programming in Java, C++, Python or a related language
Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred Qualifications
Experience using Unix/Linux
Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka - A66
Job ID: A2715720

Posted 3 days ago


12.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

What Success Looks Like In This Role
The DC Architect is responsible for designing, implementing, and optimizing modern data center infrastructures. This role requires expertise in data center networking, cloud integration, high-performance computing, and AI-driven automation. The architect will ensure high availability, scalability, security, and efficiency of data center operations, leveraging the latest AI technologies for automation, predictive maintenance, and energy efficiency.
Provides Solutions Architecture consultation and advice Unisys-wide.
Works with clients, Product Owners, Sales & Sales Excellence teams and other stakeholders to align the architectural direction of solution intent.
Serves as lead on large-scale solution and component development, engaging with cross-functional leaders and stakeholders to ensure mutual understanding, ongoing communication and alignment of outcome expectations.
Conducts or leads studies to determine the economic, technical and organizational feasibility of proposed solutions.

Key Responsibilities
Data Center Architecture & Design:
Design and develop scalable, resilient, and high-performance data center architectures.
Define and implement data center topology, including compute, storage, networking, and virtualization.
Ensure seamless integration of on-premises and cloud-based (hybrid/multi-cloud) infrastructures.
Develop AI-driven optimization models for workload balancing, power efficiency, and predictive scaling.

Implementation & Deployment:
Oversee the deployment of data center infrastructure, including servers, storage, SDN, and hyperconverged infrastructure (HCI).
Implement AI-powered automation tools for resource provisioning, capacity planning, and self-healing systems.
Integrate GPU-based computing for AI/ML workloads, supporting AI-driven applications and deep learning frameworks.
Work closely with cross-functional teams to deploy secure, high-performance solutions.

Network & Security Architecture:
Design and implement AI-optimized networking for high-speed, low-latency data transfers.
Configure SDN, NFV, and intent-based networking (IBN) solutions for data center automation.
Implement AI-driven security solutions, including anomaly detection, threat prediction, and automated response systems.

AI & Automation in Data Centers:
Utilize AI for predictive analytics in infrastructure health monitoring and fault prediction.
Deploy AI-powered DCIM (Data Center Infrastructure Management) solutions for automated energy optimization.
Implement AIOps (Artificial Intelligence for IT Operations) to improve performance monitoring and troubleshooting.
Optimize edge computing and AI workloads within data centers for faster processing and real-time analytics.

Cloud & Hybrid Infrastructure:
Design cloud-native architectures, ensuring seamless hybrid cloud and multi-cloud integration.
Optimize workloads between on-prem, edge, and cloud environments using AI-based orchestration.
Implement containerized infrastructure with Kubernetes, OpenShift, and cloud-based AI solutions.

Performance Optimization & Sustainability:
Improve energy efficiency using AI-powered cooling and power management systems.
Ensure high availability and disaster recovery using AI-assisted fault tolerance and failover mechanisms.
Work on Green Data Center initiatives to reduce carbon footprint using AI-driven insights.

You will be successful in this role if you have a BA/BS degree and 12+ years' relevant experience OR an equivalent combination of education and experience (Master's degree preferred), plus proven skills in:
Data Center Design: Expertise in hyperconverged infrastructure (HCI), SDN, SD-WAN, and high-performance computing (HPC).
Networking & Security: Deep knowledge of BGP, EVPN, VXLAN, firewall security, Zero Trust, and microsegmentation.
AI & Automation: Hands-on experience with AI-driven network and data center automation tools (e.g., NVIDIA AI Enterprise, Ansible, Terraform, AI-driven DCIM).
Cloud & Virtualization: Experience with VMware, OpenStack, Kubernetes, and AWS/GCP/Azure networking.
AI/ML in Data Centers: Knowledge of AI frameworks like TensorFlow and PyTorch, and AI hardware (NVIDIA GPUs, TPUs).
Monitoring & Optimization: Experience with AIOps, predictive analytics, and intelligent workload management.

Unisys is proud to be an equal opportunity employer that considers all qualified applicants without regard to age, blood type, caste, citizenship, color, disability, family medical history, family status, ethnicity, gender, gender expression, gender identity, genetic information, marital status, national origin, parental status, pregnancy, race, religion, sex, sexual orientation, transgender status, veteran status or any other category protected by law. This commitment includes our efforts to provide for all those who seek to express interest in employment the opportunity to participate without barriers. If you are a US job seeker unable to review the job opportunities herein, or cannot otherwise complete your expression of interest, without additional assistance and would like to discuss a request for reasonable accommodation, please contact our Global Recruiting organization at GlobalRecruiting@unisys.com or alternatively Toll Free: 888-560-1782 (Prompt 4). US job seekers can find more information about Unisys’ EEO commitment here.
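The anomaly detection mentioned under Network & Security Architecture can be illustrated at its simplest with a z-score detector over telemetry. The readings and threshold below are invented; production monitoring uses far richer models, but the core idea of flagging statistical outliers is the same.

```python
# Minimal sketch of telemetry anomaly detection: flag readings whose z-score
# exceeds a threshold. The temperature readings are made up for the demo.
import statistics

temps_c = [21.0, 21.4, 20.8, 21.1, 21.3, 35.2, 21.2, 20.9]  # one obvious spike

mean = statistics.mean(temps_c)
stdev = statistics.stdev(temps_c)

def is_anomaly(x: float, threshold: float = 2.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from the mean."""
    return abs(x - mean) / stdev > threshold

anomalies = [x for x in temps_c if is_anomaly(x)]
print(anomalies)  # only the spike is flagged
```
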

Posted 3 days ago


5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description
Amazon strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. Amazon's evolution is driven by the spirit of innovation that is part of the company's DNA. Amazon Seller Services is looking for a Data Scientist to work hands-on, from concept to delivery, on generative AI, statistical analysis, prescriptive and predictive analysis, and machine learning implementation projects. We are looking for a problem solver with strong analytical skills and a solid understanding of statistics and machine learning algorithms, as well as a practical understanding of collecting, assembling, cleaning, and setting up disparate data from enterprise systems.

Key Job Responsibilities
Understand a business problem and the available data, and identify what statistical or ML techniques can be applied to answer the business question.
Given a business problem, estimate solution feasibility and potential approaches based on available data.
Understand what data is available, where, and how to pull it together.
Work with partner teams where needed to facilitate permissions and acquisition of required data.
Quickly prototype solutions and build models to test the feasibility of a solution approach.
Build statistical/ML models, train and test them, and drive towards the optimal level of model performance.
Improve existing processes through development and implementation of state-of-the-art generative AI models.
Work with technology teams to integrate models by wrapping them as services that plug into Amazon's marketplace and fulfillment systems.
Work across the spectrum of reporting and data visualization, statistical modeling, and supervised learning tools and techniques, and apply the right level of solution to the right problem.
The problem set covers aspects of detecting fraud and abuse, improving performance, driving lift and adoption, recommending the right upsell to the right audience, cost saving, selection economics, and several others.

Basic Qualifications
5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software (e.g. R, SAS, Matlab)
5+ years of data scientist experience
Experience with statistical models, e.g. multinomial logistic regression

Preferred Qualifications
Experience working collaboratively with data engineers and business intelligence engineers
Experience managing data pipelines
Experience as a leader and mentor on a data science team

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka - A66
Job ID: A2815591
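The multinomial logistic regression named in the basic qualifications predicts class probabilities via a softmax over per-class linear scores. The weights, biases, and sample feature vector below are made up for illustration; in a real model they are learned from data.

```python
# Prediction side of a multinomial logistic regression: linear scores per
# class, then softmax. Weights W, biases b, and the input are hypothetical.
import math

# Per-class weights and biases for 3 classes over 2 features.
W = [[1.0, -0.5], [0.2, 0.8], [-1.0, 0.3]]
b = [0.1, -0.2, 0.0]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_proba(x):
    """Per-class probability for feature vector x."""
    scores = [sum(w * xi for w, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    return softmax(scores)

probs = predict_proba([2.0, 1.0])
print(probs, probs.index(max(probs)))  # class 0 scores highest here
```

Training finds W and b by maximizing the likelihood of observed labels; libraries such as scikit-learn or statsmodels handle that part.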

Posted 3 days ago


4.0 - 6.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.
Do You Dream Big? We Need You.

Job Description
Job Title: Data Scientist - Global Consumer Sentiment Index
Location: Bangalore
Reporting to: Product Manager - Global Consumer Sentiment Index

Purpose of the Role
We are seeking a skilled and business-savvy Data Scientist to contribute to the development of our global consumer sentiment analytics platform. This role is critical in ensuring our data science outputs are accurate, scalable, and business-relevant, with a strong focus on NLP and model optimization. The ideal candidate will bridge the gap between advanced data techniques and actionable business insights.

Key Tasks & Accountabilities
Apply and fine-tune NLP models for sentiment analysis, aspect extraction, translation, and classification on multi-source textual data.
Translate complex technical results into simple, decision-oriented insights for business teams.
Ensure high model accuracy and relevance through iterative optimization and performance testing.
Collaborate with functional stakeholders to understand use cases and refine problem statements.
Ensure proper data modeling aligned with AB InBev’s data architecture; manage different data layers, handle data archiving, and continuously optimize the model as it matures.
Be actively involved during the visualization phase to ensure that final dashboards align with user needs.
Collaborate with functional and technical teams to translate business questions into modeling approaches.
Integrate structured and unstructured data from various platforms (e.g., social, e-commerce, forums) to enrich outputs.
Maintain rigorous documentation of data science workflows and ensure reproducibility of results.
Support model deployment and handover for integration into Power BI, aligning with ABI data standards.
Operate in a high-pressure, fast-paced environment across a global project with multiple stakeholders, diverse markets, and high-volume datasets. The ability to manage expectations, adapt to evolving requirements, and deliver results across geographies is critical.
Work under the guidance of the Lead Data Scientist and support junior analysts when needed.

Business Environment Challenges
Lack of structured data; handle noise and ambiguity in user-generated content.
Deliver actionable insights within tight timelines and changing inputs.
Build scalable models that adapt across markets with unique consumer behaviors.

Evaluation Criteria
Quality, accuracy, and impact of models as measured by stakeholder feedback and usage in decision-making.
Ability to communicate technical output clearly to non-technical stakeholders.
Delivery of milestones within agreed timelines for each project phase.
Continuous innovation and problem-solving initiative demonstrated in improving the models and insights.

Qualifications, Experience, Skills
Education: Bachelor’s or Master’s in Data Science, Computer Science, Statistics, or a related quantitative field.
Previous work experience: 4-6 years in data science with hands-on experience in NLP projects. Strong track record of working with social media, reviews, or consumer sentiment data. Prior involvement in integrating models with BI platforms, ideally Power BI. Experience working with CI/CD tools (e.g., Azure DevOps).

IT Skills:
Python (essential), SQL, R, ML/DL frameworks (TensorFlow, Scikit-learn, spaCy, Hugging Face)
Experience with APIs (Twitter, Reddit, Facebook, YouTube, etc.)
Familiarity with cloud environments (Azure preferred)
Knowledge of Power BI integration and data pipelines

Technical Competencies:
Essential:
NLP, sentiment analysis, topic modeling, text classification
Python, SQL, machine learning, model explainability
API integration, Power BI readiness, data engineering fundamentals
Text preprocessing, tokenization, and vectorization
Sentiment analysis and topic modeling fundamentals
Efficient coding practices and code optimization
Working with multilingual corpora and translation APIs
Desirable:
Knowledge of social listening tools (e.g., Brandwatch, Talkwalker)
Advanced data visualization techniques
Experience with multilingual data sets

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
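The "text preprocessing, tokenization, and vectorization" fundamentals listed above can be sketched with a toy bag-of-words vectorizer. The review texts are invented; real pipelines would use spaCy or a Hugging Face tokenizer, but the underlying idea is the same.

```python
# Toy bag-of-words vectorizer: lowercase, tokenize, build a vocabulary,
# then count term occurrences per document. Reviews are made up for the demo.
import re

reviews = ["Great beer, great taste!", "Flat and bland beer."]

def tokenize(text: str) -> list[str]:
    """Lowercase and split on runs of non-letter characters."""
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

# Sorted vocabulary across the corpus, then one count vector per document.
vocab = sorted({tok for r in reviews for tok in tokenize(r)})
vectors = [[tokenize(r).count(term) for term in vocab] for r in reviews]
print(vocab)
print(vectors)
```

These count vectors are what a sentiment classifier (or a TF-IDF weighting step) would consume downstream.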

Posted 3 days ago


6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are being conceived and manufactured using our Product Lifecycle Management (PLM) software. We are seeking AI Backend Engineers to play a pivotal role in building our Agentic Workflow Service and Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in both backend development and machine learning to create robust, scalable AI-powered systems using AWS Kubernetes, Amazon Bedrock models, AWS Strands Framework, and LangChain / LangGraph. Understanding of and expertise in: Design and implement core backend services and APIs for agentic framework and RAG systems LLM-based applications using Amazon Bedrock models RAG systems with advanced retrieval mechanisms and vector database integration Implement agentic workflows using technologies such as AWS Strands Framework, LangChain / LangGraph Design and develop microservices that efficiently integrate AI capabilities Create scalable data processing pipelines for training data and document ingestion Optimize model performance, inference latency, and overall system efficiency Implement evaluation metrics and monitoring for AI components Write clean, maintainable, and well-tested code with comprehensive documentation Collaborate with multiple cross-functional team members including DevOps, product, and frontend engineers Stay current with the latest advancements in LLMs and AI agent architectures Minimum Experience Requirements 6+ years of total software engineering experience Backend development experience with strong Python programming skills Experience in ML/AI engineering, particularly with LLMs and generative AI applications Experience with microservices architecture, API design, and asynchronous programming 
Demonstrated experience building RAG systems and working with vector databases LangChain/LangGraph or similar LLM orchestration frameworks Strong knowledge of AWS services, particularly Bedrock, Lambda, and container services Experience with containerization technologies and Kubernetes Understanding of ML model deployment, serving, and monitoring in production environments Knowledge of prompt engineering and LLM fine-tuning techniques Excellent problem-solving abilities and system design skills Strong communication skills and ability to explain complex technical concepts Experience in Kubernetes, AWS Serverless Experience in working with Databases (SQL, NoSQL) and data structures Ability to learn new technologies quickly Preferred Qualifications: Must have AWS certifications - Associate Architect / Developer / Data Engineer / AI Track Must have familiarity with streaming architectures and real-time data processing Must have experience with ML experiment tracking and model versioning Must have understanding of ML/AI ethics and responsible AI development Experience with AWS Strands Framework Knowledge of semantic search and embedding models Contributions to open-source ML/AI projects We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We are Siemens A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! 
We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare. Siemens Software. ‘Transform the everyday’, #SWSaaS
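The retrieval step at the heart of a RAG service like the one described above can be sketched in a few lines. This is an illustrative toy, not Siemens' implementation: the `embed` function below is a stand-in bag-of-words vectorizer, where a production system would call an embedding model (for example via Amazon Bedrock) and query a vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG service would call an
    # embedding model and store vectors in a vector database instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k;
    # these would be stuffed into the LLM prompt as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "PLM software for product lifecycle management",
    "vector database integration for retrieval",
    "holiday allowance and pension details",
]
top = retrieve("vector retrieval database", docs, k=1)
print(top[0])
```

The same shape (embed query, rank stored vectors, pass the winners to the model) holds whether the store is an in-memory list or a managed vector database.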

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are conceived and manufactured using our Product Lifecycle Management (PLM) software. We are seeking Backend Engineers to play a pivotal role in building our Data & AI services: the Agentic Workflow Service and the Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in backend development together with AI knowledge and skills to create robust, scalable Data & AI services using AWS Kubernetes and Amazon Bedrock models. Expertise and understanding in: Backend development experience with strong Java programming skills along with basic Python programming knowledge; designing and developing microservices with Java Spring Boot that efficiently integrate AI capabilities; experience with microservices architecture, API design, and asynchronous programming; experience working with databases (SQL, NoSQL) and data structures; solid understanding of AWS services, particularly Bedrock, Lambda, and container services; experience with containerization technologies, Kubernetes, and AWS serverless; understanding of RAG systems with advanced retrieval mechanisms and vector database integration; understanding of agentic workflows using technologies such as the AWS Strands Framework and LangChain / LangGraph; building scalable data processing pipelines for training data and document ingestion; writing clean, maintainable, and well-tested code with comprehensive documentation; collaborating with cross-functional team members including DevOps, product, and frontend engineers; staying current with the latest advancements in data, LLMs, and AI agent architectures. Minimum Experience Requirements: 4+ years of total software engineering experience; understanding of building RAG systems and working with vector databases; ML/AI engineering, particularly with LLMs and generative AI applications; awareness of LangChain/LangGraph or similar LLM orchestration frameworks; understanding of ML model deployment, serving, and monitoring in production environments; knowledge of prompt engineering; excellent problem-solving abilities and system design skills; strong communication skills and the ability to explain complex technical concepts; ability to learn new technologies quickly. Preferred Qualifications: AWS certifications - Associate Architect / Developer / Data Engineer / AI Track; familiarity with streaming architectures and real-time data processing; experience developing, delivering, and operating microservices on AWS; understanding of ML/AI ethics and responsible AI development; knowledge of semantic search and embedding models. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We are Siemens: a collection of over 377,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare. Siemens Software. ‘Transform the everyday’, #SWSaaS
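The agentic workflow idea referenced above (AWS Strands Framework, LangChain / LangGraph) boils down to a loop in which a model picks a tool, the tool runs, and the result feeds back into the next decision. A minimal sketch, with a stubbed `fake_model` standing in for an LLM and toy tools standing in for real services; names and behavior are invented for illustration:

```python
from typing import Callable

# Toy tool registry; in a real agentic service these would be APIs,
# databases, or other microservices.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order {arg}: shipped",
    "final_answer": lambda arg: arg,
}

def fake_model(history: list[str]) -> tuple[str, str]:
    # Stub "planner": until a lookup result appears in the history,
    # ask for a lookup; afterwards, answer with the latest observation.
    if not any("shipped" in h for h in history):
        return ("lookup_order", "42")
    return ("final_answer", history[-1])

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [question]
    for _ in range(max_steps):
        tool, arg = fake_model(history)   # model chooses tool + argument
        result = TOOLS[tool](arg)         # execute the chosen tool
        if tool == "final_answer":
            return result
        history.append(result)            # observation feeds next step
    return "no answer"

print(run_agent("where is order 42?"))
```

Frameworks like LangGraph add state graphs, retries, and streaming around this same observe-decide-act loop.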

Posted 3 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description Our customers have immense faith in our ability to deliver packages on time and as expected. A well-planned network seamlessly scales to handle millions of package movements a day. It has monitoring mechanisms that detect failures before they even happen (such as predicting network congestion or operations breakdowns) and performs proactive corrective actions. When failures do happen, it has inbuilt redundancies to mitigate impact (such as determining other routes or service providers that can handle the extra load) and avoids relying on single points of failure (a service provider, node, or arc). Finally, it is cost-optimal, so that customers can be passed the benefit of an efficiently set up network. Amazon Shipping is hiring Applied Scientists to help improve our ability to plan and execute package movements. As an Applied Scientist in Amazon Shipping, you will work on multiple challenging machine learning problems spread across a wide spectrum of business problems. You will build ML models to help our transportation cost auditing platforms effectively audit off-manifest charges (discrepancies between planned and actual shipping cost). You will build models to improve the quality of financial and planning data by accurately predicting ship cost at a package level. Your models will help forecast the packages required to be picked up from shipper warehouses to reduce First Mile shipping cost. Using signals from within the transportation network (such as network load and the velocity of movements derived from package scan events) and outside it (such as weather signals), you will build models that predict delivery delay for every package. These models will help improve buyer experience by triggering early corrective actions and generating proactive customer notifications. Your role will require you to demonstrate Think Big and Invent and Simplify, by refining and translating Transportation domain-related business problems into one or more Machine Learning problems.
You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised, and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree-based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modelling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, and Asia) and package movement types (such as small parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in the US and India working on related business domains. Your models are expected to be of production quality, and will be directly used in production services. You will work as part of a diverse data science and engineering team comprising other Applied Scientists, Software Development Engineers, and Business Intelligence Engineers. You will participate in the Amazon ML community by authoring scientific papers and submitting them to machine learning conferences. You will mentor Applied Scientists and Software Development Engineers with a strong interest in ML. You will also be called upon to provide ML consultation outside your team for other problem statements. If you are excited by this charter, come join us! Basic Qualifications 3+ years of experience building machine learning models for business applications Experience programming in Java, C++, Python, or a related language Experience with deep learning methods and machine learning Preferred Qualifications Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc.
Master's degree in math/statistics/engineering or other equivalent quantitative discipline, or PhD Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Haryana Job ID: A2923787
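As a toy illustration of the delay-prediction idea described above: a logistic score can combine signals such as network load and weather severity into a delay probability. The feature names and weights here are invented for the sketch, not learned from real shipping data and not Amazon's model:

```python
import math

def delay_probability(network_load: float, weather_severity: float,
                      scans_behind_plan: int) -> float:
    # Illustrative logistic model: a weighted sum of signals pushed
    # through a sigmoid to give a probability of late delivery.
    # Weights are arbitrary example values.
    z = -2.0 + 1.5 * network_load + 1.0 * weather_severity + 0.8 * scans_behind_plan
    return 1.0 / (1.0 + math.exp(-z))

# Quiet network, clear weather, on schedule -> low risk.
low = delay_probability(0.2, 0.0, 0)
# Congested network, severe weather, three scans behind plan -> high risk.
high = delay_probability(0.9, 1.0, 3)
print(round(low, 3), round(high, 3))
```

A production model would learn these weights (or a non-linear equivalent such as a gradient-boosted tree ensemble) from historical scan and delivery data, but the input-signals-to-risk-score shape is the same.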

Posted 3 days ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Coimbatore, Tamil Nadu

Remote

At 360Watts, we’re rethinking what it means to own and control solar energy at home. We’re building an intelligent rooftop solar system that doesn’t just sit on your roof: it monitors, automates, and adapts, giving users full visibility and control over their energy flow. Are you a systems thinker who can build the bridge between physical and digital? A solar system with an IoT layer for automation that is modular and powered by AI/ML capabilities, upgradable to users' needs from basic to advanced automation, and remote-controlled by users with a smart-home energy management app (EMS). >> Responsibilities Lead the design, testing, and iteration of the end-to-end IoT system architecture layer, powered by edge MCUs and hybrid data flows to the cloud + smart-home control hub Develop real-time firmware to read sensors, control relays, and implement safe, OTA-updatable logic on MCUs with simple to complex inference capabilities (such as ESP32-S3, Raspberry Pi CM4, Jetson Nano/Xavier NX), and maintain firmware modularity for upgrades.
Define IoT use-cases, data workflows, and communication protocol stacks (MODBUS RTU/TCP, MQTT) for integration with the inverter, battery system & cloud EMS Guide the hardware intern through embedded prototyping from breadboard to PCB: wiring, testing, and debugging together with the solar design engineer Collaborate with the solar engineer + EMS software lead for rapid prototyping, field testing, and user-centric automation logic Drive field deployment readiness, from pilot configuration to relay switching stability, inverter integration, and offline fallback modes >> Must have Systems-oriented Embedded C/C++ Edge (AI/ML) architecture & modular firmware design Real-world firmware with control logic + sensor/relay integration Protocol stack implementation (MODBUS RTU/TCP, MQTT) OTA, structured data, and embedded fault handling System debugging and field-readiness >> Background Bachelor’s or Master's degree in Electrical Engineering, Electronics & Communication, Embedded Systems, or a related field 1-3 years of work experience Professional English proficiency >> Job details Salary depending on skill, between Rs. 30k–50k per month Option for equity (ESOP) after 9-12 months Start date = 15.08.2025 (or) 01.09.2025 Probation = 3 months If you are excited, please apply soon. Job Type: Full-time Pay: ₹35,000.00 - ₹50,000.00 per month Benefits: Flexible schedule Paid time off Schedule: Monday to Friday Weekend availability Supplemental Pay: Performance bonus Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Embedded software: 1 year (Required) Language: English (Preferred) Work Location: In person Speak with the employer +91 9087610051 Application Deadline: 27/07/2025 Expected Start Date: 01/09/2025
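One concrete piece of the MODBUS RTU stack mentioned above is the frame checksum. A sketch of CRC-16/MODBUS in Python (firmware would implement the same in C, but the algorithm is identical):

```python
def modbus_crc16(frame: bytes) -> int:
    # CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF.
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Request: read 1 holding register at address 0 from unit 1 (function 0x03).
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
crc = modbus_crc16(pdu)
frame = pdu + crc.to_bytes(2, "little")  # CRC is transmitted low byte first
print(frame.hex())  # 010300000001840a
```

A receiver verifies a frame by running the same CRC over the whole message including the appended checksum; a valid frame yields 0x0000.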

Posted 3 days ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Redefine the future of customer experiences. One conversation at a time. We’re changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, our products and services, and most importantly, each other. Since 2008, 100,000+ companies and 1M+ users rely on Nextiva for customer and team communication. If you’re ready to collaborate and create with amazing people, let your personality shine and be on the frontlines of helping businesses deliver amazing experiences, you’re in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing Nextiva is building a next-generation voice and video platform to power our Unified Communications (UCaaS) and Contact Center (CCaaS) products. This platform blends open-source components with in-house innovation to deliver carrier-grade quality and 99.999% uptime. We need a Software Engineer to drive development of real-time voice/video services, enhancing call quality and reliability and enabling AI-powered voice features on our unified customer experience (UCXM) platform. You will work on everything from media servers and audio processing to cloud deployment, ensuring our system is scalable, secure, and high performing. We operate with a DevOps culture – engineers own their code from development through production. Key Responsibilities Develop Core Communication Services: Build and maintain backend services for voice/video calling (signaling servers, call routing logic, media gateways) using SIP and WebRTC. Implement features like call setup, conferencing, transfers, and recording with a focus on efficiency and reliability. 
Enhance Audio Quality (DSP): Implement and tune digital signal processing algorithms for superior call audio. This includes noise suppression, echo cancellation, jitter buffer optimization, and voice activity detection to ensure crystal-clear, uninterrupted communication even on poor networks. Optimize Media & Codecs: Work with real-time media streaming (RTP) and various codecs (Opus, G.711, H.264, etc.). Optimize codec configurations and adapt bitrates on the fly based on network conditions to balance quality and bandwidth. Integrate Voice AI Features: Embed speech-to-text (ASR) and text-to-speech (TTS) capabilities into the platform. Enable AI voice agents to participate in calls by streaming audio to AI services and injecting synthesized speech responses. Manage conversation flow between humans and AI (handling interruptions, timing responses) to make interactions feel natural. Ensure Scalability & Resilience: Design services with a cloud-native approach (microservices, containers) for deployment on Kubernetes. Implement high-availability strategies (clustering, failover) across global data centers so that the platform achieves five-9s uptime with no downtime for maintenance. Performance & Reliability Tuning: Continuously profile and improve system performance end-to-end. Minimize call setup times and audio latency through efficient coding (C/C++ for media processing) and system optimizations. DevOps & Support: Use CI/CD pipelines to deploy updates safely with zero downtime. Write comprehensive automated tests (unit, integration, load) for your features. Participate in on-call rotation to troubleshoot and resolve production issues in real time, and implement lasting solutions to prevent recurrence. Collaboration: Work closely with Product Managers, front-end teams, AI/ML team and with network engineers. Qualifications Real-Time Communications: 5+ years of experience developing VoIP or real-time communication systems. 
Strong knowledge of SIP protocol, WebRTC, and related networking (RTP, NAT traversal). Proven ability to implement call logic and troubleshoot signaling and media issues. Audio/DSP Expertise: Hands-on experience with audio processing in real time. Familiarity with noise reduction, echo cancellation, jitter buffers, and other voice QoS techniques. Comfort optimizing or using audio codecs (Opus, G.711, etc.) and improving call quality under varying network conditions. Strong Coding Skills: Proficiency in C/C++ for high-performance, multi-threaded systems programming. Experience writing efficient, low-latency code (lock-free structures, memory management). Additionally, skilled with a higher-level language like Go or Java for building microservices and control logic. Cloud & Scalability: Experience building and deploying services in a cloud-native environment (Docker, Kubernetes). Knowledge of designing scalable microservices and using cloud infrastructure (AWS, GCP, or Azure) for load balancing, monitoring, and fault tolerance. Voice AI Familiarity: Exposure to integrating speech recognition and text-to-speech in applications. You’ve perhaps worked with voice assistants, IVR systems, or call center AI – you understand basic latency/accuracy trade-offs and how to interface with speech APIs/SDKs. Security & Compliance: Basic understanding of securing voice communications (TLS, SRTP) and safeguarding customer data (GDPR, HIPAA considerations for call recordings, etc.). Designs solutions with privacy and security best practices in mind. DevOps Mindset: Comfortable using CI/CD, infrastructure-as-code, and logging/monitoring tools. Willing to take ownership of code in production – debugging live issues, optimizing resource usage, and responding to incidents. Team Player: Excellent collaboration and communication skills. Experience working in Agile teams. Ability to clearly document designs and mentor others. 
A proactive attitude to problem-solving and an enthusiasm for continuous learning in the fast-evolving communications and AI field. Nextiva DNA (Core Competencies) Nextiva’s most successful team members share common traits and behaviors: Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success. Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies. Right Attitude: They are team-oriented, collaborative, competitive, and hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. They cultivate a culture of service, learning, support, and respect, caring for customers and teams. Total Rewards Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses. Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. 
This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity. Group Term & Group Personal Accident Insurance 💼 - Provides insurance coverage against the risk of death / injury during the policy period sustained due to an accident caused by violent, visible & external means. Coverage Type - Employee Only Sum Insured - 3 times annual CTC with a minimum cap of INR 10,00,000 Free Cover Limit - 1.5 Crore Work-Life Balance ⚖️ - 15 days of Privilege leave per calendar year, 6 days of Paid Sick leave per calendar year, 6 days of Casual leave per calendar year. Paid 26 weeks of Maternity leave, 1 week of Paternity leave, a day off on your Birthday, and paid holidays Financial Security💰 - Provident Fund & Gratuity Wellness 🤸‍ - Employee Assistance Program and comprehensive wellness initiatives Growth 🌱 - Access to ongoing learning and development opportunities and career advancement At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what’s going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
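The jitter buffer mentioned in the audio-quality responsibilities above can be illustrated with a minimal reordering buffer: packets arriving out of order are held briefly and released in RTP sequence order. This sketch ignores playout timing, sequence-number wrap-around, and the loss concealment a production buffer must handle:

```python
import heapq

class JitterBuffer:
    # Minimal reordering jitter buffer: hold up to `depth` packets and
    # release them in RTP sequence-number order.
    def __init__(self, depth: int = 3):
        self.depth = depth
        self.heap: list[tuple[int, bytes]] = []

    def push(self, seq: int, payload: bytes) -> list[tuple[int, bytes]]:
        # Buffer the packet; release the oldest ones once over capacity.
        heapq.heappush(self.heap, (seq, payload))
        out = []
        while len(self.heap) > self.depth:
            out.append(heapq.heappop(self.heap))
        return out

    def flush(self) -> list[tuple[int, bytes]]:
        # Drain remaining packets in order (e.g. at end of stream).
        out = []
        while self.heap:
            out.append(heapq.heappop(self.heap))
        return out

jb = JitterBuffer(depth=3)
released = []
for seq in [1, 3, 2, 5, 4, 6]:  # packets arrive out of order
    released += jb.push(seq, b"audio")
released += jb.flush()
print([s for s, _ in released])  # [1, 2, 3, 4, 5, 6]
```

The depth trades latency for resilience: a deeper buffer tolerates more network reordering and jitter at the cost of added playout delay, which is exactly the tuning the role describes.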

Posted 3 days ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years 1 Opening Trivandrum Role description Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformations, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture of small/mid-size projects. Outcomes: Implement either data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures, data warehouse), a data analysis solution, data reporting solutions, or cloud data tools on any one of the cloud providers (AWS/Azure/GCP) Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence to data or design data fetching and dashboards Design information structure, workflow, and dataflow navigation. Define backup, recovery, and security specifications Enforce and maintain naming standards and a data dictionary for data models Provide or guide the team to perform estimates Help the team to develop proofs of concept (POC) and solutions relevant to customer problems.
Able to troubleshoot problems while developing POCs Architect/Big Data specialty certification (AWS/Azure/GCP/general, for example Coursera or a similar learning platform/any ML) Measures of Outcomes: Percentage of billable time spent in a year on developing and implementing data transformation or data storage Number of best practices documented for any new tool and technology emerging in the market Number of associates trained on the data service practice Outputs Expected: Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement Ensure that data strategies and architectures meet regulatory compliance requirements Begin engaging external stakeholders including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud Operational Management: Help Architects to establish governance, stewardship, and frameworks for managing data across the organization Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals Collaborate with project managers and business teams on all projects involving enterprise data Analyse data-related issues with systems integration, compatibility, and multi-platform integration Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery Define and measure project- and program-specific architectural and technology quality metrics Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management Conduct and facilitate knowledge sharing and learning sessions across the team Gain industry-standard certifications in the technology or area of expertise Support technical
skill building (including hiring and training) for the team based on inputs from project managers/RTEs Mentor new members of the team in technical areas Gain and cultivate domain expertise to provide the best and most optimized solutions to customers (delivery) Requirement gathering and Analysis: Work with customer business owners and other teams to collect, analyze, and understand the requirements, including NFRs/define NFRs Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer Define the systems and sub-systems that define the programs People Management: Set goals and manage the performance of team engineers Provide career guidance to technical specialists and mentor them Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements In collaboration with the Architect, create a compelling business case around the offerings Conduct beta testing of the offerings and their relevance to the program Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes, and tools to arrive at the architecture options best fit for the client program Analyze cost vs. benefits of solution options Support Architects II and III to create a technology/architecture roadmap for the client Define the architecture strategy for the program Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.) Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers &
RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues Conduct initiatives to meet client expectations Work to expand the professional network in the client organization at team and program levels New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs Conduct beta testing/POC as applicable Develop collateral and guides for GTM Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects Use technology knowledge to create Proof of Concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide, and defend the technology choices made; review solutions under guidance Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST Use independent knowledge of Design Patterns, Tools, and Principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware and memory, memory leaks, etc.) Use knowledge of Software Development Process Tools & Techniques to identify and assess incremental improvements to the software development process, methodology, and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact.
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas Use knowledge of Project Management & Agile Tools and Techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies Use knowledge of Project Metrics to understand their relevance to the project. Collect and collate project metrics and share them with the relevant stakeholders Use knowledge of Estimation and Resource Planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place Strong proficiency in understanding data workflows and dataflows Attention to detail High analytical capability Knowledge Examples: Data visualization Data migration RDBMSs (relational database management systems) SQL Hadoop technologies like MapReduce, Hive, and Pig Programming languages, especially Python and Java Operating systems like UNIX and MS Windows Backup/archival software. Additional Comments: AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities. Key Responsibilities: • Design and architect AI-based solutions including multi-agent GenAI systems using LLMs and RAG pipelines. • Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics. • Leverage GenAI (LLMs) and Time Series models to drive intelligent observability and performance management. • Work closely with product, engineering, and operations teams to align solutions with domain and customer needs. • Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices. • Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments. Key Skills & Technology Areas: • AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI. • LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps. • Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis. • Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation. • Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory. • Cloud & Tools: o AWS (Bedrock, SageMaker, Lambda), o Azure (Azure ML, Azure Databricks, Synapse), o GCP (Vertex AI – optional) • Observability Integration: Splunk, ELK Stack, Prometheus. • DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning. • Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design. Other Requirements: • Proven ability to work independently and collaboratively in agile, innovation-driven teams. • Strong problem-solving mindset and product-oriented thinking. • Excellent communication and technical storytelling skills. • Flexibility to work across multiple opportunities based on business priorities. • Experience in Telecom, E-commerce, or Enterprise IT Operations is a plus.
Skills: Python, Pandas, AI/ML, GenAI About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
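The Time Series Forecasting skill listed above can be illustrated with simple exponential smoothing, one of the most basic forecasting models. The series and smoothing factor below are arbitrary example values, not data from any real observability system:

```python
def ses_forecast(series: list[float], alpha: float = 0.5) -> float:
    # Simple exponential smoothing:
    #   level_t = alpha * y_t + (1 - alpha) * level_{t-1}
    # The one-step-ahead forecast is the final smoothed level.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Example: smoothing a short window of (hypothetical) CPU-load readings.
cpu_load = [40.0, 42.0, 41.0, 45.0, 44.0]
print(round(ses_forecast(cpu_load), 2))
```

Larger `alpha` weights recent observations more heavily; production forecasting for observability would typically layer trend/seasonality handling (e.g. Holt-Winters) or learned models on top of this idea.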

Posted 3 days ago

Apply

12.0 years

0 Lacs

India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Role: Senior Manager - Data Scientist
Designation: Senior Manager
Preferred Experience: 12+ years

Key Responsibilities:
• Lead the development and implementation of advanced analytics, machine learning, and artificial intelligence solutions across platforms such as Azure, Google Cloud, and on-premises environments.
• Design and oversee the creation of scalable AI/ML models and algorithms to solve complex business problems and generate actionable insights.
• Drive the adoption and implementation of General AI use cases, ensuring alignment with business objectives and value creation.
• Apply deep expertise in machine learning, predictive modeling, and statistical analysis to large, complex data sets.
• Champion the integration of AI/ML capabilities into existing data architectures and pipelines for enhanced decision-making.
• Provide thought leadership and mentorship within the team, staying abreast of emerging AI/ML technologies and industry trends to foster innovation.
• Ensure compliance with ethical AI practices and contribute to the development of responsible AI frameworks.

Qualifications:
• 12+ years of experience in data management or a related field, with at least 5 years focused on AI/ML.
• Proven track record of designing and deploying successful AI/ML projects and General AI use cases.
• Strong experience with cloud technologies (Azure/Google Cloud) and familiarity with on-premises environments.
• Expertise in advanced analytics, machine learning algorithms, and predictive modeling.
• Proficiency in programming languages such as Python, R, or Scala, and libraries/frameworks such as TensorFlow, PyTorch, or Keras.
• Deep understanding of the application of AI/ML in creating data products and insights.
• Experience deploying machine learning models into production environments.
• Excellent knowledge of statistical analysis, algorithm development, and experimental design.
• Strong leadership skills and the ability to mentor and guide teams.
• Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field with a strong quantitative focus.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 days ago

Apply

3.0 years

2 - 5 Lacs

Cochin

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Senior Developer
Overall Years of Experience: 3 to 5 years
Relevant Years of Experience: 3+

Technical Lead: The Artificial Intelligence (AI) & Machine Learning (ML) Developer is responsible for designing and implementing solutions based on AI and machine learning.

Position Summary
• 3+ years of experience in a similar profile with a strong service delivery background
• Lead and guide a team of junior developers
• Design technical specifications for AI, Machine Learning, Deep Learning, NLP, NLU, and NLG projects and implement them
• Contribute to products or tools built on Artificial Intelligence technologies and paradigms that can enable high-value offerings
• Building AI solutions involves the use and creation of AI and ML techniques including, but not limited to, deep learning, computer vision, natural language processing, search, information retrieval, information extraction, probabilistic graphical models, and machine learning
• Plan and implement version control of source code
• Define and implement best practices for software development
• Excellent computer skills: proficient in Excel, PowerPoint, Word, and Outlook
• Excellent interpersonal skills and a collaborative management style
• Ability to analyse and suggest solutions
• Strong command of verbal and written English

Roles and Responsibilities (Essential)
• Create technical designs for AI, Machine Learning, Deep Learning, NLP, NLU, and NLG projects and implement them in production
• Solid understanding and experience of deep learning architectures and algorithms
• Experience solving industry problems using deep learning methods such as recurrent neural networks (RNN, LSTM), convolutional neural networks, and auto-encoders
• 2–3 production implementations of machine learning projects
• Knowledge of open-source libraries such as Keras, TensorFlow, and PyTorch
• Work with business analysts/consultants and other necessary teams to create a strong solution
• In-depth understanding and experience of data science and machine learning projects using Python, R, etc.; skills in Java/C are a plus
• Should develop solutions using Python in AI/ML projects
• Should be able to train and build a team of technical developers
• Desired: experience leading the design and development of applications/tools using Microsoft technologies (ASP.Net, C#, HTML5, MVC)
• Desired: knowledge of cloud solutions such as Azure or AWS
• Desired: knowledge of container technology such as Docker
• Should be able to work with multicultural, global teams and work virtually
• Should be able to build strong relationships with project stakeholders

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Thiruvananthapuram

On-site

5 - 7 Years | 2 Openings | Trivandrum

Role description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes:
• Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
• Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
• Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
• Validate results with user representatives, integrating the overall solution seamlessly.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
• Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
• Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes:
• Adherence to engineering processes and standards
• Adherence to schedule/timelines
• Adherence to SLAs where applicable
• Number of defects post delivery
• Number of non-compliance issues
• Reduction in recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches

Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples:
• Proficiency in SQL, Python, or other programming languages used for data manipulation
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery)
• Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications
• Experience in performance tuning of data processes
• Expertise in designing and optimizing data warehouses for cost efficiency
• Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets
• Capacity to clearly explain and communicate design and development aspects to customers
• Ability to estimate time and resource requirements for developing and debugging features or components
Knowledge Examples:
• Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF and ADLF
• Proficiency in SQL for analytics, including windowing functions
• Understanding of data schemas and models relevant to various business contexts
• Familiarity with domain-related data and its implications
• Expertise in data warehousing optimization techniques
• Knowledge of data security concepts and best practices
• Familiarity with design patterns and frameworks in data engineering

Additional Comments:
Data Engineering Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.
________________________________________
Key Responsibilities:
• Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments.
• Develop and optimize data workflows using PySpark, SQL, and Airflow.
• Work closely with AI/ML teams to support training pipelines and GenAI solution deployments.
• Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines.
• Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions.
• Support data quality, validation, and profiling processes.
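The ingest → wrangle → transform → load flow the role describes can be sketched in a deliberately small, framework-free form. In practice these steps would run as PySpark jobs orchestrated by Airflow or AWS Glue; the function names, records, and the 1000.0 threshold below are illustrative assumptions, not part of the posting.

```python
def extract(rows):
    """Simulate ingesting raw source records."""
    return list(rows)

def transform(records):
    """Wrangle: drop incomplete rows, normalise fields, derive a column."""
    cleaned = []
    for r in records:
        if r.get("amount") is None:
            continue  # data-quality rule: reject incomplete records
        cleaned.append({
            "customer": r["customer"].strip().lower(),
            "amount": float(r["amount"]),
            "high_value": float(r["amount"]) >= 1000.0,
        })
    return cleaned

def load(records, table):
    """Append validated records to the target store (a list stands in
    for a warehouse table such as Snowflake or BigQuery)."""
    table.extend(records)
    return len(records)

raw = [
    {"customer": " Acme ", "amount": "1200"},
    {"customer": "Globex", "amount": None},   # incomplete → dropped
    {"customer": "Initech", "amount": "300"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded, warehouse[0]["customer"])  # 2 acme
```

The same shape scales up directly: each function becomes a pipeline task, and the orchestrator handles scheduling, retries, and monitoring.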
________________________________________
Key Skills & Technology Areas:
• Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
• Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
• Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
• Databases: SQL/NoSQL, Postgres, DynamoDB; vector databases (ChromaDB, Pinecone) preferred
• ML/GenAI Exposure (basic): hands-on with Pandas and scikit-learn; knowledge of RAG pipelines and GenAI concepts
• Data Modeling: star/snowflake schema, data normalization, dimensional modeling
• Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment
________________________________________
Other Requirements:
• Strong problem-solving and analytical skills
• Flexible to work on fast-paced and cross-functional priorities
• Experience collaborating with AI/ML or GenAI teams is a plus
• Good communication and a collaborative, team-first mindset
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus

Skills: ETL, Big Data, PySpark, SQL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.

Posted 3 days ago

Apply

9.0 - 13.0 years

2 - 5 Lacs

Cochin

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
• Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
• Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications.
• Ensure seamless integration of optimized solutions into the overall product or system.
• Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
• Educate the team on best practices and keep up to date with the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements:
• 9 to 13 years of relevant professional experience.
• Proficiency in Python and frameworks like PyTorch, TensorFlow, and Hugging Face Transformers.
• Strong foundation in ML algorithms, feature engineering, and model evaluation (must).
• Strong foundation in deep learning: neural networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must).
• Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
• Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks).
• Knowledge of retrieval-augmented generation (RAG).
• Knowledge of Knowledge Graph RAG.
• Experience with multi-agent orchestration, memory, and tool integrations.
• Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have).
• Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
• Good understanding of data pipelines, APIs, and distributed systems.
• Build observability into AI systems: latency, drift, and performance metrics.
• Strong written and verbal communication, presentation, client service, and technical writing skills in English for both technical and business audiences.
• Strong analytical, problem-solving, and critical thinking skills.
• Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference.
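The "observability into AI systems" requirement above (latency, drift, performance metrics) can be illustrated with a minimal monitor. The class name, the p95 computation, and the drift rule (live mean prediction wandering from a baseline) are simplifying assumptions for illustration; production systems would use tools like Prometheus and proper drift tests.

```python
import statistics

class ModelMonitor:
    """Toy monitor tracking request latency and a naive drift signal."""

    def __init__(self, baseline_mean, drift_tolerance=0.2):
        self.baseline_mean = baseline_mean      # mean prediction at training time
        self.drift_tolerance = drift_tolerance  # allowed deviation before flagging
        self.latencies_ms = []
        self.predictions = []

    def record(self, latency_ms, prediction):
        self.latencies_ms.append(latency_ms)
        self.predictions.append(prediction)

    def p95_latency(self):
        # Nearest-rank style 95th-percentile latency over recorded requests.
        ordered = sorted(self.latencies_ms)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def drift_detected(self):
        # Flag drift when the live mean prediction wanders from the baseline.
        live = statistics.mean(self.predictions)
        return abs(live - self.baseline_mean) > self.drift_tolerance

mon = ModelMonitor(baseline_mean=0.5)
for latency, pred in [(120, 0.9), (80, 0.85), (95, 0.95)]:
    mon.record(latency, pred)
print(mon.p95_latency(), mon.drift_detected())  # 95 True
```

Here the live predictions average 0.9 against a 0.5 baseline, so the drift flag fires; in a real deployment that signal would trigger an alert or a retraining pipeline.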
EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies