2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Office Location - Office No: 403-405, Time Square, CG Road, Ellisbridge, Ahmedabad, Gujarat-380006.
Duration & Type of Employment - Full Time
Work Style - Hybrid
In Office Days - 3 days a week
Relocation - Candidate must be willing to relocate to Ahmedabad, GJ, with reasonable notice
Immediate / Reasonable joiner preferred

Requirements
Backend: Node.js (TypeScript), Express.js, REST APIs, OpenAPI, JWT, OAuth2.0, OpenID Connect
Infrastructure & DevOps: Docker, Docker Compose, CI/CD (MUST), ADFS, NGINX/Traefik, IaC Tools
Monitoring & Logging: Grafana, Prometheus, Datadog, Winston, Pino
Documentation: OpenAPI (Swagger), Confluence

Design and maintain robust, secure, and high-performance backend services using Node.js and TypeScript.
Build and document RESTful APIs using OpenAPI; ensure validation, monitoring, and logging are built in.
Lead the development and management of CI/CD pipelines, enabling automated builds, tests, and deployments.
Package and deploy applications using Docker and Docker Compose, ensuring environment consistency and isolation.
Collaborate with the infrastructure team to configure reverse proxies (NGINX/Traefik), domain routing, and SSL certificates.
Design secure authentication flows using OAuth2/OpenID Connect with enterprise SSO, and manage role-based permissions through JWT decoding.
Create and maintain operational documentation, deployment runbooks, and service diagrams.
Monitor systems using Grafana/Datadog, optimize performance, and manage alerts and structured logs.
Actively participate in performance tuning, production debugging, and incident resolution.
Contribute to infrastructure evolution, identifying opportunities to automate, secure, and improve delivery workflows.

Bachelor's in Computer Science, Engineering, or equivalent experience.
2+ years of backend development experience with Node.js and related tools/frameworks.
Solid understanding of REST principles, the HTTP protocol, and secure token-based auth (JWT, OAuth2).
Experience deploying and managing services with Docker and GitLab CI/CD.
Ability to configure, manage, and troubleshoot Linux-based environments.
Familiarity with reverse proxies and custom routing using NGINX or Traefik.
Experience with OpenAPI specifications to generate and consume documented endpoints.
Knowledge of Infrastructure as Code.
Understanding of DevOps principles, environment variables, and automated release strategies.
Hands-on experience managing logs, alerts, and performance metrics.
Comfortable with agile processes, cross-functional collaboration, and code reviews.

# Bonus Skills
Experience with Active Directory group-based authorization.
Familiarity with terminal-based or legacy enterprise platforms (e.g., MultiValue systems).
Proficiency with audit logging systems, structured log formatting, and Sentry integration.
Exposure to security best practices in authentication, authorization, and reverse proxy configurations.

# Educational & Experience
Preferred Educational Background - Bachelor of Technology in Computer Science
Alternative Acceptable Educational Background - BS/MS in Computer Science
Minimum Experience Required - 3 years

# Ideal Candidate Traits
Obsessed with automation, consistency, and secure environments.
Independent problem-solver who takes ownership of both code and environment health.
Detail-oriented and performance-conscious, not just focused on features.
Collaborative communicator, able to bridge the backend, DevOps, and infrastructure teams.
Proactively modernizes existing systems without compromising stability.

Benefits
Hybrid Working Culture
Amazing Perks & Medical Benefits
5 Days Working
Mentorship programs & Certification Courses
Flexible work arrangements
Free drinks, fridge and snacks
Competitive Salary & recognitions
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Amex GBT is a place where colleagues find inspiration in travel as a force for good and – through their work – can make an impact on our industry. We're here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued.

What You'll Do on a Typical Day
Work in a SCRUM team
Design, develop and test new applications and features
Participate in the evolution and maintenance of existing systems
Contribute to the deployment of features
Monitor the platform
Propose new ideas to enhance the product either functionally or technically

What We're Looking For
Operational knowledge of C# or Python development, as well as of Docker
Experience with PostgreSQL or Oracle
Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
Real desire to master new technologies
Unit testing & TDD methodology are assets
Team spirit, analytical and synthesis skills
Passion, Software Craftsmanship, culture of excellence, Clean Code
Fluency in English (multicultural and international team)

What Technical Skills You'll Develop
C# .NET and/or Python
Oracle, PostgreSQL
AWS
ELK (Elasticsearch, Logstash, Kibana)
GIT, GitHub, TeamCity, Docker, Ansible

#GBTJobs

Location
Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family.
Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals.
Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first.
We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, achievements, and drive company awareness and action. And much more!
All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.
What if I don't meet every requirement? If you're passionate about our mission and believe you'd be a phenomenal addition to our team, don't worry about "checking every box"; please apply anyway. You may be exactly the person we're looking for!
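The role above pairs C#/Python development with AWS S3. As a rough, illustrative sketch of that kind of task in Python (bucket and key names are placeholders, not taken from the posting; credentials are assumed to come from the standard AWS credential chain), storing and reading back a small object with boto3 might look like this:

```python
# Minimal sketch: write a small object to S3 and read it back with boto3.
# Bucket/key names are placeholders; credentials come from the usual AWS chain
# (environment variables, shared profile, or an instance/task role).
import boto3

def upload_report(bucket: str, key: str, body: bytes) -> None:
    """Store a small payload in S3."""
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=body)

def fetch_report(bucket: str, key: str) -> bytes:
    """Read the payload back."""
    s3 = boto3.client("s3")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

if __name__ == "__main__":
    upload_report("example-reports-bucket", "reports/daily.csv", b"date,bookings\n2025-07-23,42\n")
    print(fetch_report("example-reports-bucket", "reports/daily.csv").decode())
```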
Posted 1 day ago
6.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Position: Golang Developer (4–6 Years) || Delhi Onsite
Location: Delhi
Work Mode: Onsite
Employment Type: Permanent

Responsibilities:
Develop backend services and microservices using Go.
Build RESTful and gRPC APIs.
Write clean, scalable, and well-tested code.
Work with SQL (Postgres/MySQL).
Implement concurrency using goroutines and channels.
Integrate with Docker and CI/CD pipelines.
Debug, optimize, and ensure high performance.

Requirements:
4–6 years of Go development experience.
Strong knowledge of: goroutines, channels, context; REST/gRPC APIs; database integration (GORM or native drivers); error handling and Go best practices.
Experience with: Docker, Git; message brokers (Kafka/RabbitMQ) – good to have.
Familiarity with Kubernetes, cloud deployments (AWS/GCP) is a plus.

Soft Skills:
Problem-solving mindset.
Good communication.
Team player and self-learner.
Posted 1 day ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Avaya
Avaya is an enterprise software leader that helps the world's largest organizations and government agencies forge unbreakable connections. The Avaya Infinity™ platform unifies fragmented customer experiences, connecting the channels, insights, technologies, and workflows that together create enduring customer and employee relationships. We believe success is built through strong connections – with each other, with our work, and with our mission. At Avaya, you'll find a community that values your contributions and supports your growth every step of the way. Learn more at https://www.avaya.com

Job Information
Job Code: 00194008
Job Family: Research and Development
Job Function: Software Engineering

About The Job
We are seeking a skilled DevOps Engineer with 5–8 years of experience to manage and enhance our cloud infrastructure, deployment pipelines, and observability stack. The ideal candidate should have strong experience in Azure, Infrastructure-as-Code (Terraform), CI/CD, and monitoring tools, with a sound understanding of cloud security and compliance.

About The Responsibilities
Design, implement, and manage infrastructure on Microsoft Azure using Terraform
Build and maintain CI/CD pipelines using GitHub Actions, Argo CD, and Jenkins
Ensure system observability using Grafana, Prometheus, and log aggregation tools
Implement and enforce security best practices in cloud deployments
Monitor and improve system reliability, availability, and performance
Collaborate with development and data teams for deployments and data platform support
Maintain and audit cloud infrastructure in compliance with regulatory and internal security standards

About You
5–8 years of DevOps or cloud infrastructure experience
Good experience in Terraform (mandatory)
Microsoft Azure cloud infrastructure experience
CI/CD tools: GitHub Actions, Argo CD, Jenkins
Monitoring and observability: Grafana, Prometheus
Cloud security and networking on the Azure cloud platform
Experience with containerization and orchestration using Docker, Kubernetes

Good to Have
Experience with Kafka, Debezium, Trino or Superset
Scripting in Python, Bash, or Go
Knowledge of data lake/lakehouse architectures
Compliance standards like PCI, HIPAA, etc.

Experience
5 - 8 Years of Experience

Education
Bachelor's degree or equivalent experience
Master's degree or equivalent experience

Preferred Certifications

Footer
Avaya is an Equal Opportunity employer and a U.S. Federal Contractor. Our commitment to equality is a core value of Avaya. All qualified applicants and employees receive equal treatment without consideration for race, religion, sex, age, sexual orientation, gender identity, national origin, disability, status as a protected veteran or any other protected characteristic. In general, positions at Avaya require the ability to communicate and use office technology effectively. Physical requirements may vary by assigned work location. This job brief/description is subject to change. Nothing in this job description restricts Avaya's right to alter the duties and responsibilities of this position at any time for any reason. You may also review the Avaya Global Privacy Policy (accessible at https://www.avaya.com/en/privacy/policy/) and applicable Privacy Statement relevant to this job posting (accessible at https://www.avaya.com/en/documents/info-applicants.pdf).
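Since the role combines a Prometheus/Grafana observability stack with optional Python scripting, the hedged sketch below shows one common way to expose custom application metrics for Prometheus to scrape; the metric names and the port are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: expose a request counter and a latency histogram on
# :9100/metrics for Prometheus to scrape. Names and port are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total handled requests", ["endpoint"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()

if __name__ == "__main__":
    start_http_server(9100)  # metrics endpoint for the Prometheus scraper
    while True:
        handle_request("/healthz")
```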
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JR0125547 Associate, Solution Engineering – Pune, India

Are you ready to join a team in a global company where your primary focus is to deliver services and products to our customers, and provide thought leadership on creating solutions? Are you interested in joining a globally diverse organization where our unique contributions are recognized and celebrated, allowing each of us to thrive? Then it's time to join Western Union as an Associate, Solution Engineering.

Western Union powers your pursuit
As an Associate, Solution Engineering you will be part of our product engineering team and contribute to API development for the Risk and Compliance orchestration platform, which is critical to expanding our existing product capabilities, improving customer experience, and accelerating the launch of new products and services. You would play a key role in designing and building scalable, high-performance APIs that drive innovation and efficiency across our Risk and Compliance ecosystem.

Role Responsibilities
Design and develop robust, scalable and secure APIs for the Risk and Compliance platform.
Conduct code reviews, enforce coding standards, and ensure code quality across the team.
Review the existing APIs and recommend improvements that align with industry standards.
Troubleshoot application issues and coordinate issue resolution with operations, functional, and technical teams.
Collaborate with geographically distributed stakeholders, across Product and Technology, to define and deliver technical solutions.
Design and develop API solutions that integrate with internal and third-party systems.
Ensure API security best practices, authentication and compliance with industry standards.
Debug and resolve technical issues, performing root cause analysis, coordinating with multiple stakeholders, and implementing long-term fixes.
Automate processes and leverage CI/CD pipelines to streamline development and deployments.
Mentor junior developers in the team while fostering a collaborative team environment.
Stay updated on emerging technologies, incorporating them to enhance existing solutions.

Role Requirements
3+ years of software experience with a focus on API development and backend engineering.
Strong proficiency in Core Java, J2EE, Spring, Spring Boot, Spring Batch, APIs, SOA, SOAP, REST.
Good understanding of RDBMS like MySQL, PostgreSQL, etc.
Good understanding of microservices and event-driven architecture.
Experience in designing and developing high-performance APIs.
Experience in developing and deploying applications in AWS Cloud.
Excellent understanding of data structures, design patterns, algorithms, and OOP concepts.
Experience in implementing DevOps and exposure to all DevOps areas like source control, continuous integration, infra automation, and continuous deployment.
Experience with Docker or Kubernetes will be a plus.
Must be a problem solver with demonstrated experience in solving difficult technology challenges, with a can-do attitude.
Strong communication skills with the ability to interact with internal and external partners globally.

We make financial services accessible to humans everywhere. Join us for what's next. Western Union is positioned to become the world's most accessible financial services company – transforming lives and communities. We're a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe.
More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com/. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few(https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India Specific Benefits Include Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Check up Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Cab Facility Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws. Estimated Job Posting End Date 07-26-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Apply Before: 05-08-2025
Job Title: DevOps Engineer
Job Location: Pune
Employment Type: Full-Time
Experience Required: 5 Years
Salary: 20 LPA to 25 LPA Max
Notice Period: Immediate Joiners

We are looking for a DevOps Engineer with strong Kubernetes and cloud infrastructure experience to join our Pune-based team. The ideal candidate will play a key role in managing CI/CD pipelines, infrastructure automation, cloud resource optimization, and ensuring high availability and reliability of production systems.

Required Skills
Certified Kubernetes Administrator (CKA) – mandatory
Very good knowledge and operational experience with containerization and cluster management: infrastructure setup and production environment maintenance (Kubernetes, vCluster, Docker, Helm)
Very good knowledge and experience with high availability requirements (RTO and RPO) on cloud (AWS preferred, with VPC, Subnet, ELB, Secrets Manager, EBS Snapshots, EC2 Security Groups, ECS, CloudWatch and SQS)
Very good knowledge and experience in administering Linux clients and servers
Experience working with data storage, backup and disaster recovery using DynamoDB, RDS PostgreSQL and S3
Good experience and confidence with code versioning (GitLab preferred)
Experience in automation with programming and IaC scripts (Python / Terraform)
Experience with SSO setup and user management with Keycloak and/or Okta SSO
Experience in service mesh monitoring setup with Istio, Kiali, Grafana, Loki and Prometheus
Experience with GitOps setup and management for ArgoCD / FluxCD
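The listing asks for automation with Python and IaC scripts against AWS primitives such as EBS snapshots. As a hedged illustration of that kind of scripting (the 30-day retention window and the "owned by self" filter are assumptions, not requirements from the posting), a boto3 sketch that reports stale snapshots might look like this:

```python
# Minimal sketch: list EBS snapshots older than a retention window so they can
# be reviewed for cleanup. The 30-day window and owner filter are assumptions.
from datetime import datetime, timedelta, timezone

import boto3

def stale_snapshots(retention_days: int = 30):
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                yield snap["SnapshotId"], snap["StartTime"]

if __name__ == "__main__":
    for snapshot_id, started in stale_snapshots():
        print(f"{snapshot_id} created {started:%Y-%m-%d} - review for cleanup")
```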
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Egen is a fast-growing and entrepreneurial company with a data-first mindset. We bring together the best engineering talent working with the most advanced technology platforms, including Google Cloud and Salesforce, to help clients drive action and impact through data and insights. We are committed to being a place where the best people choose to work so they can apply their engineering and technology expertise to envision what is next for how data and platforms can change the world for the better. We are dedicated to learning, thrive on solving tough problems, and continually innovate to achieve fast, effective results. You will join a team of insatiably curious data engineers, software architects, and product experts who never settle for "good enough".

Our Java Platform team's tech stack is based on Java 8 (Spring Boot) and RESTful web services. We typically build and deploy applications as cloud-native Kubernetes microservices and integrate with scalable technologies such as Kafka in Docker container environments. Our developers work in an agile process to efficiently deliver high-value, data-driven applications and product packages.

Required Experience:
Minimum of a Bachelor's Degree or its equivalent in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering or a related field
5 years of experience working with, and a strong understanding of, object-oriented programming and cloud technologies
End-to-end experience delivering production-ready code with Java 8, Spring Boot, Spring Data, and API libraries
Strong experience with unit and integration testing of the Spring Boot APIs
Strong understanding and production experience of RESTful APIs and microservice architecture
Strong understanding of SQL databases and NoSQL databases, and experience with writing abstraction layers to communicate with the databases

Nice to haves (but not required):
Exposure to Kotlin or other JVM programming languages
Strong understanding and production experience working with Docker container environments
Strong understanding and production experience working with Kafka
Cloud environments: AWS, GCP or Azure
Posted 1 day ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: Senior Engineer/Technical Lead (DevOps Engineer - Azure)

Job Description

Key Responsibilities:
Azure Cloud Management: Design, deploy, and manage Azure cloud environments. Ensure optimal performance, scalability, and security of cloud resources using services like Azure Virtual Machines, Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure SQL Database.
Automation & Configuration Management: Use Ansible for configuration management and automation of infrastructure tasks. Implement Infrastructure as Code (IaC) using Azure Resource Manager (ARM) templates or Terraform.
Containerization: Implement and manage Docker containers. Develop and maintain Dockerfiles and container orchestration strategies with Azure Kubernetes Service (AKS) or Azure Container Instances.
Server Administration: Administer and manage Linux servers. Perform routine maintenance, updates, and troubleshooting.
Scripting: Develop and maintain Shell scripts to automate routine tasks and processes.
Helm Charts: Create and manage Helm charts for deploying and managing applications on Kubernetes clusters.
Monitoring & Alerting: Implement and configure Prometheus and Grafana for monitoring and visualization of metrics. Use Azure Monitor and Azure Application Insights for comprehensive monitoring, logging, and diagnostics.
Networking: Configure and manage Azure networking components such as Virtual Networks, Network Security Groups (NSGs), Azure Load Balancer, and Azure Application Gateway.
Security & Compliance: Implement and manage Azure Security Center and Azure Policy to ensure compliance and security best practices.

Required Skills and Qualifications:
Experience: 5+ years of experience in cloud operations, with a focus on Azure.
Azure Expertise: In-depth knowledge of Azure services, including Azure Virtual Machines, Azure Kubernetes Service, Azure App Services, Azure Functions, Azure Storage, Azure SQL Database, Azure Monitor, Azure Application Insights, and Azure Security Center.
Automation Tools: Proficiency in Ansible for configuration management and automation. Experience with Infrastructure as Code (IaC) tools like ARM templates or Terraform.
Containerization: Hands-on experience with Docker for containerization and container management.
Linux Administration: Solid experience in Linux server administration, including installation, configuration, and troubleshooting.
Scripting: Strong Shell scripting skills for automation and task management.
Helm Charts: Experience with Helm charts for Kubernetes deployments.
Monitoring Tools: Familiarity with Prometheus and Grafana for metrics collection and visualization.
Networking: Experience with Azure networking components and configurations.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues.
Communication: Excellent communication skills, both written and verbal, with the ability to work effectively in a team environment.

Preferred Qualifications:
Certifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect) are a plus.
Additional Tools: Experience with other cloud platforms (AWS, GCP) or tools (Kubernetes, Terraform) is beneficial.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
Posted 1 day ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Description

Key Responsibilities:
Build and maintain responsive, accessible, and performant web interfaces using React.
Perform coding, debugging, testing and troubleshooting for moderately complex issues.
Maintain and enhance systems by fixing complicated errors, raising risks and escalating issues where necessary.
Design high-quality solutions in accordance with timelines and specifications to meet user requirements.
Collaborate with the design team and solution architects to translate mockups and prototypes into functional user interfaces.
Work closely with backend developers to integrate APIs and ensure seamless functionality.
Perform thorough testing of UI components, including functional and visual tests.
Ensure all activities adhere to the relevant processes, procedures, standards and technical design.
Proactively identify and lead the implementation of continuous improvement items with the support of the Team Lead and Manager.
Provide feedback and assist with either formal training or mentoring to junior team members to assist their development.
Adhere to the Outsourcing GB coding standards, the Outsourcing GB operational framework and WTW Excellence guidelines.
Act as a buddy to new starters, and quality check more junior team members' work.
Work closely with QA, Product/Business Analysts, and other Software Engineering functions to ensure high-quality, on-time delivery.
Contribute to driving effective Agile Scrum practices to meet/exceed software engineering goals. Embrace and contribute to the team's Agile philosophy.
Demonstrate learning adaptability and an understanding of the implications of technical issues on business requirements and/or operations.
Show a strong interest in expanding the technology stack.

Qualifications

Requirements
5+ years of experience in front-end development, with a strong focus on React.js and modern JavaScript.
Expertise in React component design, state management (React Context, Redux, or equivalent), and React Hooks.
Experience in TypeScript and build tools (yarn, webpack or similar).
Strong understanding of RESTful API integration and asynchronous programming.
Experience with version control such as Git and CI/CD pipelines.
Experience in performance optimization, responsive design principles and cross-browser compatibility.
Hands-on experience in writing unit tests using frameworks like Jest, or similar.
Experience working with frameworks such as Next.js and server-side rendering.
Degree (Associates or Bachelors) in computer science, management information systems or a related area.
Experience with Agile methodologies including the Scrum framework and Kanban preferred.
Willingness to work in a fast-paced collaborative team environment that has tight deadlines.
Ability to learn and evaluate new tools, concepts, and challenges quickly.
Customer service focus and flexibility in supporting customer requests.
Strong analytical and problem-solving skills.
Commitment to quality and continuous improvement.
Strong written and verbal communication skills.
Be available, at times, to work extended work hours.
Background in benefits administration a plus.
Familiarity with/using containers in Docker a plus.
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Develop responsive web interfaces using ReactJS, TypeScript
Build reusable UI components with component-based architecture (Hooks, Redux)
Optimize performance, accessibility, and user experience

Backend & Middleware
Develop RESTful APIs using Python frameworks such as FastAPI, Flask, or Django REST Framework
Integrate frontend applications with backend services and external APIs
Implement secure authentication and authorization (JWT, OAuth)
Write clean, scalable, and testable backend code
Build and maintain data processing pipelines and service integrations

DevOps & Integration
Use Docker for containerized development and deployment
Collaborate with DevOps to implement CI/CD pipelines (ArgoCD)
Manage codebase using Git (GitHub)
Conduct and maintain unit testing using tools like Pytest, Jest

Primary Skills
Required Skills & Qualifications
Hands-on experience in Python and ReactJS
Strong experience designing and integrating RESTful APIs
Experience working with databases such as PostgreSQL, MySQL
Familiarity with Agile/Scrum methodologies
Technical Degree to validate the experience
Deep technical expertise
Overall IT experience in the range of 8 to 12 years
Display a solid understanding of the technology requested and problem-solving skills
Must be analytical, focused and should be able to independently handle work with minimum supervision
Good collaborator management and team player

Backend
Solid experience with FastAPI, Flask, or Django
Proficient in working with JSON, and exception handling
Understanding of microservices, multithreading, and async I/O

Frontend
Experience with TypeScript
Familiarity with UI libraries such as Shadcn UI, Tailwind CSS

Nice to Have
Experience with AWS
Exposure to Agile SAFe practices

Soft Skills
Excellent problem-solving and analytical skills
Strong written and verbal communication
Self-motivated with the ability to work independently and in a team
Detail-oriented with a commitment to delivering high-quality code
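The backend portion of this role centers on Python REST APIs with JWT-based auth. Below is a minimal, hedged FastAPI sketch of a bearer-token-protected endpoint; the secret, algorithm, route, and payload fields are illustrative assumptions rather than details from the posting.

```python
# Minimal sketch: a FastAPI endpoint protected by a JWT bearer token.
# The secret, algorithm, route, and payload fields are illustrative only.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET = "change-me"  # in practice, injected from an environment variable or secret store
app = FastAPI()
bearer = HTTPBearer()

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    try:
        return jwt.decode(creds.credentials, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")

@app.get("/api/orders")
def list_orders(user: dict = Depends(current_user)):
    # The decoded token payload identifies the caller; return their data only.
    return {"user": user.get("sub"), "orders": []}
```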
Posted 1 day ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 11

The Team
We are looking for a highly motivated, enthusiastic, and skilled engineering lead for Commodity Insights. We strive to deliver solutions that are sector-specific, data-rich, and hyper-targeted for evolving business needs. Our software development leaders are involved in the full product life cycle, from design through release. The resource would be joining a strong, innovative team working on the content management platforms which support a large revenue stream for S&P Commodity Insights. Working very closely with the Product Owner and Development Manager, teams are responsible for the development of user enhancements and maintaining good technical hygiene. The successful candidate will assist in the design, development, release and support of content platforms. Skills required include ReactJS, Spring Boot, RESTful microservices, AWS services (S3, ECS, Fargate, Lambda, etc.), CSS / HTML, AJAX / JSON, XML and SQL (PostgreSQL/Oracle). The candidate should be aware of Gen AI and LLM models like OpenAI and Claude. The candidate should be enthusiastic about working on prompt building related to Gen AI and business-related prompts. The candidate should be able to develop and optimize prompts for AI models to improve accuracy and relevance. The candidate must be able to work well with a distributed team, demonstrate an ability to articulate technical solutions for business requirements, have experience with content management/packaging solutions, and embrace a collaborative approach for the implementation of solutions.

Responsibilities
Lead and mentor a team through all phases of the software development lifecycle, adhering to agile methodologies (analyze, design, develop, test, debug, and deploy). Ensure high-quality deliverables and foster a collaborative environment. Be proficient with the use of developer tools supporting the CI/CD process – including configuring and executing automated pipelines to build and deploy software components Actively contribute to team planning and ceremonies and commit to team agreement and goals Ensure code quality and security by understanding vulnerability patterns, running code scans, and remediating issues. Mentor the junior developers. Make sure that code review tasks on all user stories are added and completed on time. Perform reviews and integration testing to assure the quality of project development efforts Design database schemas, conceptual data models, UI workflows and application architectures that fit into the enterprise architecture Support the user base, assisting with tracking down issues and analyzing feedback to identify product improvements Understand and commit to the culture of S&P Global: the vision, purpose and values of the organization

Basic Qualifications
10+ years' experience in an agile team development role, delivering software solutions using Scrum Java, J2EE, JavaScript, CSS/HTML, AJAX ReactJS, Spring Boot, Microservices, RESTful services, OAuth XML, JSON, data transformation SQL and NoSQL Databases (Oracle, PostgreSQL) Working knowledge of Amazon Web Services (Lambda, Fargate, ECS, S3, etc.) Experience with Gen AI or LLM models like OpenAI and Claude is preferred. Experience with agile workflow tools (e.g. VSTS, JIRA) Experience with source code management tools (e.g. git), build management tools (e.g. Maven) and continuous integration/delivery processes and tools (e.g.
Jenkins, Ansible) Self-starter able to work to achieve objectives with minimum direction Comfortable working independently as well as in a team Excellent verbal and written communication skills Preferred Qualifications Analysis of business information patterns, data analysis and data modeling Working with user experience designers to deliver end-user focused benefits realization Familiar with containerization (Docker, Kubernetes) Messaging/queuing solutions (Kafka, etc.) Familiar with application security development/operations best practices (including static/dynamic code analysis tools) About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. 
That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316720 Posted On: 2025-07-23 Location: Hyderabad, Telangana, India
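The posting asks for building and optimizing business-oriented prompts for Gen AI models such as OpenAI and Claude. As a provider-agnostic sketch (no real LLM API is called, and all field names and wording are assumptions for illustration), a small prompt-template helper for an extraction task might look like this:

```python
# Provider-agnostic sketch: assemble a structured prompt for a content-tagging
# task. No real LLM API is called; field names and wording are assumptions.
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    role: str
    task: str
    output_format: str

    def render(self, document: str) -> str:
        # Compose a deterministic prompt string that can be sent to any model endpoint.
        return (
            f"You are {self.role}.\n"
            f"Task: {self.task}\n"
            f"Return the answer strictly as {self.output_format}.\n\n"
            f"Document:\n{document}"
        )

if __name__ == "__main__":
    template = PromptTemplate(
        role="a commodities content analyst",
        task="extract the commodity, region, and price mentioned in the document",
        output_format="JSON with keys commodity, region, price",
    )
    print(template.render("Brent crude settled at $82/bbl in European trading on Tuesday."))
```

The rendered string can then be sent to whichever model endpoint the team standardizes on and iterated against a small evaluation set to tune accuracy and relevance.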
Posted 1 day ago
3.0 years
30 - 40 Lacs
Gurugram, Haryana, India
Remote
Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! 
And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
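The role above spans training, serving, and continuous delivery of ML models. As a rough sketch of the train-evaluate-persist step only (the dataset, split, and artifact path are illustrative assumptions; serving and monitoring are out of scope here), a scikit-learn workflow might look like this:

```python
# Minimal sketch: train a classifier, evaluate it on a held-out split, and
# persist the artifact for a serving layer to load. Dataset, split, and the
# artifact path are illustrative assumptions.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_and_export(artifact_path: str = "model.joblib") -> float:
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    joblib.dump(model, artifact_path)  # picked up later by the serving layer
    return accuracy

if __name__ == "__main__":
    print(f"held-out accuracy: {train_and_export():.3f}")
```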
Posted 1 day ago
5.0 - 7.0 years
20 - 24 Lacs
Gurugram
Work from Office
- Design, develop, and implement robust backend systems using .Net technologies.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Develop and maintain APIs to facilitate communication between different components of the application.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to best practices.
- Troubleshoot and resolve application issues and bugs in a timely manner.
- Write and maintain documentation for system architecture, code, and processes.
- Optimize applications for maximum speed and scalability.
- Stay updated with emerging technologies and industry trends to continuously improve development processes.

Roles and Responsibilities
1. Must be a strong back-end developer with experience in building high-performance back-end services with C# and .NET Core.
2. Database development experience with any of the RDBMS (PostgreSQL or SQL Server or Oracle or MySQL)
3. Good experience with microservices, design principles (SOLID), and design patterns.
4. Test Driven Development (TDD), testing frameworks
5. Comfortable using Azure DevOps or similar CI/CD tools, Git
6. Cloud-centric development (with either AWS, Azure, or GCP)
7. Strong communication skills

Good to have:
1. Docker, Kubernetes, Terraform based deployment experience
2. NoSQL database (such as MongoDB) development experience
3. Entity Framework (ORM) development experience
4. Full project life-cycle experience
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Summary

Your role in our mission
Design, implement, and manage robust CI/CD pipelines for OutSystems applications and backend services hosted on AWS.
Define and maintain infrastructure-as-code (IaC) using tools like Terraform or AWS CloudFormation to provision and manage environments across AWS and PostgreSQL databases.
Oversee deployment and configuration management of cloud-native components (e.g., Lambda, API Gateway, RDS/PostgreSQL, S3, ECS).
Ensure seamless integration of OutSystems Lifetime environments with AWS-hosted systems and services.
Build and maintain monitoring, logging, and alerting systems to ensure system performance and availability.
Collaborate with development, QA and architecture teams to support scalable and resilient environments.
Lead incident response, root cause analysis, and continuous improvement efforts for system reliability and uptime.
Enforce DevSecOps principles, ensuring all pipelines and infrastructure align with the requirements.
Coach and mentor junior DevOps engineers and contribute to establishing DevOps best practices across teams.

What we're looking for
Bachelor's degree in Computer Science, Engineering, or a related technical field.
5+ years of experience in DevOps, site reliability engineering, or cloud infrastructure roles.
3+ years of experience with AWS services including EC2, RDS (PostgreSQL), Lambda, S3, API Gateway, IAM, and CloudWatch.
Experience with PostgreSQL database configuration, performance tuning, and backup strategies.
Familiarity with the OutSystems deployment lifecycle, environment management (Lifetime), and integration with external services.
Proficiency with CI/CD tools (e.g., GitHub Actions, Jenkins, AWS CodePipeline) and scripting languages (Python, Bash, PowerShell).

Preferred / Bonus Experience
Knowledge of OutSystems DevOps APIs, environment promotion, and automated testing integration.
Familiarity with EDI data processing workflows or tools.
Experience with container orchestration tools (e.g., Docker, Kubernetes, ECS).
AWS certifications (e.g., Solutions Architect, DevOps Engineer) and/or OutSystems certifications.

What you should expect in this role
Work Environment: Remote / Hybrid
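Given the emphasis on Lambda, API Gateway, and Python scripting, here is a minimal, hedged sketch of a Lambda handler; the event shape and response contract assume an API Gateway proxy integration and are for illustration only.

```python
# Minimal sketch: a Lambda handler behind an API Gateway proxy integration.
# The event shape and response contract are assumptions for illustration.
import json

def lambda_handler(event, context):
    # Pull an optional query-string parameter from the proxy event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Local smoke test with a fake proxy event.
    print(lambda_handler({"queryStringParameters": {"name": "devops"}}, None))
```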
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Design, develop, and maintain integration solutions using Python to connect across internal applications.
Write scalable Python scripts and RESTful APIs that facilitate secure and efficient transmission of data across systems.
Collaborate with business analysts and architects to translate integration requirements into robust technical solutions.
Create and maintain clear technical documentation for integration processes, APIs, and data flows.
Ensure application integration solutions adhere to performance, security, and compliance standards.
Preferably have exposure to cloud integration services (Azure Logic Apps, AWS Lambda, etc.) and microservices architectures, containerization (Docker, Kubernetes).
Python and some Java required; will build connectors between enterprise platforms (e.g., Accurates Collibra, Reltio unique ID integrations).

Required Skills
Technical Degree to validate the experience
Deep technical expertise
Overall IT experience in the range of 10+ years
Display a solid understanding of the technology requested and problem-solving skills
Must be analytical, focused and should be able to independently handle work with minimum supervision
Good collaborator management and team player
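As a hedged sketch of the integration work described (pulling records from one internal REST service and pushing them to another) using the requests library; the URLs, field mapping, and token handling are placeholders, not systems named in the posting.

```python
# Minimal sketch: pull records from one internal REST API and push them to
# another. URLs, field mapping, and the bearer token are placeholders.
import requests

SOURCE_URL = "https://source.internal.example/api/records"
TARGET_URL = "https://target.internal.example/api/records"

def sync_records(token: str, timeout: int = 10) -> int:
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.get(SOURCE_URL, headers=headers, timeout=timeout)
    response.raise_for_status()
    pushed = 0
    for record in response.json():
        mapped = {"external_id": record["id"], "payload": record}  # simple field mapping
        requests.post(
            TARGET_URL, json=mapped, headers=headers, timeout=timeout
        ).raise_for_status()
        pushed += 1
    return pushed
```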
Posted 1 day ago
3.0 years
30 - 40 Lacs
Cuttack, Odisha, India
Remote
Experience : 3.00 + years Salary : INR 3000000-4000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' client - AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, or GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent AI agentic products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! 
And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
3.0 years
30 - 40 Lacs
Bhubaneswar, Odisha, India
Remote
Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Python Programming, Machine Learning, TensorFlow, PyTorch, Scikit-learn, Apache Spark, Kafka, Ray, Dask, MLflow, Kubeflow, CI/CD, Docker, Kubernetes, AWS, Azure, or GCP, Prometheus, Grafana, LLM, Knowledge Graph, PLM systems

AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as a thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines.
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques (see the illustrative MLflow sketch after this listing).
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points
Experience with natural language processing, document understanding, or LLMs (Large Language Models).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
Knowledge of requirement engineering or PLM systems.

✨ What we offer
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload your updated Resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
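For readers unfamiliar with the MLOps stack this listing names, here is a minimal, illustrative sketch of experiment tracking with MLflow and scikit-learn, both of which the role lists; the tracking URI, experiment name, and synthetic dataset are hypothetical placeholders, not details from the employer.

```python
# Minimal MLflow tracking sketch (hypothetical): logs params, metrics, and a model
# so it could later be served and monitored as the role describes.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumption: a tracking server is reachable at this URI; adjust for your setup.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("requirement-classification-demo")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    clf = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, clf.predict(X_test)))
    mlflow.sklearn.log_model(clf, artifact_path="model")
```

A model logged this way can then be versioned, compared across runs, and served (for example with `mlflow models serve`) as part of the continuous-delivery workflow the listing mentions.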
Posted 1 day ago
3.0 years
30 - 40 Lacs
Kolkata, West Bengal, India
Remote
Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Python Programming, Machine Learning, TensorFlow, PyTorch, Scikit-learn, Apache Spark, Kafka, Ray, Dask, MLflow, Kubeflow, CI/CD, Docker, Kubernetes, AWS, Azure, or GCP, Prometheus, Grafana, LLM, Knowledge Graph, PLM systems

AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as a thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery (see the illustrative serving sketch after this listing).
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines.
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points
Experience with natural language processing, document understanding, or LLMs (Large Language Models).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
Knowledge of requirement engineering or PLM systems.

✨ What we offer
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload your updated Resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
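As a companion to the "model serving" responsibility above, here is a minimal, hypothetical sketch of exposing a trained model behind a REST endpoint; FastAPI, joblib, and the artifact path are illustration-only assumptions, not tools specified in the listing.

```python
# Minimal model-serving sketch (hypothetical): wraps a pre-trained scikit-learn
# model behind a REST endpoint, the kind of serving API the role describes.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="requirement-classifier")
model = joblib.load("model.joblib")  # assumed artifact produced by a training job


class PredictRequest(BaseModel):
    features: List[List[float]]  # one row of numeric features per instance


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    preds = model.predict(req.features).tolist()
    return {"predictions": preds}

# Run locally with: uvicorn serve:app --reload   (assuming this file is serve.py)
```

In a production setup of the kind the role targets, an endpoint like this would typically be containerized with Docker and deployed behind Kubernetes with monitoring attached.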
Posted 1 day ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position: DevOps with GitHub Actions

Job Description
Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python (see the illustrative Python sketch after this posting).
AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.

Required Skills & Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
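To make the "Automation & Scripting" responsibility concrete, here is a small, illustrative Python script using boto3 to find running EC2 instances that are missing a required tag; the region, tag key, and use case are assumptions for the sketch, not requirements from the posting.

```python
# Illustrative infrastructure-housekeeping sketch (hypothetical use case):
# report running EC2 instances that lack a mandatory cost-allocation tag.
import boto3

REQUIRED_TAG = "CostCenter"  # assumed tag policy


def untagged_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running instances that lack the REQUIRED_TAG tag."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    print(untagged_instances())
```

A script like this would usually run on a schedule from a CI/CD pipeline, feeding its findings into alerting or automated remediation.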
Posted 1 day ago
3.0 years
30 - 40 Lacs
Guwahati, Assam, India
Remote
Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Python Programming, Machine Learning, TensorFlow, PyTorch, Scikit-learn, Apache Spark, Kafka, Ray, Dask, MLflow, Kubeflow, CI/CD, Docker, Kubernetes, AWS, Azure, or GCP, Prometheus, Grafana, LLM, Knowledge Graph, PLM systems

AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as a thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines.
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.; see the illustrative Spark sketch after this listing).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points
Experience with natural language processing, document understanding, or LLMs (Large Language Models).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
Knowledge of requirement engineering or PLM systems.

✨ What we offer
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload your updated Resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
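For the "large datasets and scalable data pipelines" requirement above, here is a minimal, hypothetical PySpark batch job; the input path, schema, and output location are illustrative assumptions rather than details from the listing.

```python
# Minimal PySpark sketch (illustrative only): a small batch aggregation of the
# kind implied by the Apache Spark requirement.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("requirements-aggregation").getOrCreate()

# Assumed input: one JSON record per requirement with `project`, `status`, `tokens`.
requirements = spark.read.json("s3a://example-bucket/requirements/*.json")

summary = (
    requirements
    .filter(F.col("status").isNotNull())
    .groupBy("project", "status")
    .agg(
        F.count("*").alias("n_requirements"),
        F.avg("tokens").alias("avg_tokens"),
    )
    .orderBy("project")
)

summary.write.mode("overwrite").parquet("s3a://example-bucket/summaries/requirements")
spark.stop()
```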
Posted 1 day ago
3.0 years
0 Lacs
Tamil Nadu, India
On-site
About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, the European Union’s leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines for BNP Paribas across the Group: Corporate and Institutional Banking, Investment Solutions and Retail Banking. Driving innovation and growth, we are harnessing the potential of over 10,000 employees to provide support and develop best-in-class solutions.

About BNP Paribas Group
BNP Paribas is the European Union’s leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial & personal banking and several specialised businesses, including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong, diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas, and a solid and fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future while ensuring the Group's performance and stability.

Commitment to Diversity and Inclusion
At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind, and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to, their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status, etc. As a global bank, we truly believe that the inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.

About Business Line / Function
At BNP Paribas Deutschland Tribe CUSTOMER, we are working on the next stage of customer and contract care solutions to allow our business partners to provide the best possible customer banking experience. This includes state-of-the-art platform and business process automation solutions.
Job Title: Java Kotlin Developer
Date: 01-Jul-2025
Department: ISPL – PI Germany
Location: Chennai
Business Line / Function: CUSTOMER – Backend
Reports To (Direct): Team Lead
Grade (if applicable): (Functional)
Number of Direct Reports: 0
Directorship / Registration: NA

Position Purpose
The developer helps with the development of API-related information systems and contributes to ensuring their continuity through personal effort, as part of a team or to a limited extent within the department, to achieve short-term and occasionally medium-term goals. They help in the development and realization of the software architecture as a contribution to high-quality software solutions in accordance with the applicable best practices (maintainable, secure, documented, scalable, testable, and in line with the needs of the business area).

Responsibilities
Direct Responsibilities
Develop and maintain API products in the CUSTOMER Tribe
Co-design the technical implementation of the API strategy
Ensure the timeliness of documentation, processes, and the tool landscape
Maintain and develop the API platform
Ensure continuous delivery processes via automated pipelines
Implement and co-design architectural specifications
Ensure software quality, test automation, and integration of tools (e.g., Sonar, Fortify) into the development process

Contributing Responsibilities
Support design and implementation of internal APIs
Support co-creation with partners, contributing technical expertise
Support implementation and maintenance of the BNP Paribas API Policy
Support requirements elicitation
Co-design API guidelines for internal software development
Co-design API lifecycle management

Technical & Behavioral Competencies
Bachelor’s degree in computer science or computer engineering
At least 3+ years of hands-on experience in advanced API development in Kotlin
8+ years of experience as a software engineer in Java EE, including Web Services, REST, and JPA
Strong in design patterns; hands-on with Java 8, 11, or 17
Hands-on with Spring, Spring Boot, Spring Security, Spring Cloud, Spring Data JPA
Web services: RESTful web services, REST APIs; hands-on with OpenAPI/Swagger
Databases: Oracle/PostgreSQL
CI/CD: Docker, Kubernetes, Jenkins, Maven; Podman – good to have
Design, develop, and maintain robust and complex applications that interact with one-to-many interfaces

Core Skill Sets
Strong Java and Kotlin technical expertise
Strong critical thinker with problem-solving aptitude
Good written and oral communication skills
Hands-on experience with API-related activities: requirement analysis, design, resource-based API modeling, microservices architecture
Knowledge of API design standards, patterns, and best practices
Hands-on experience with API security standards and implementation
Hibernate
Microservice architecture
Testing: JUnit & Mockito

Specific Qualifications
Good to have: minimum 6 years of experience with Kubernetes or OpenShift on one of GCP / AWS / Azure
Good to have: knowledge of one of the following tools: Prometheus, Grafana, ELK Stack, or CloudWatch

Skills Referential
Behavioural Skills: Ability to collaborate / Teamwork; Client focused; Attention to detail / rigor; Ability to deliver / Results driven
Transversal Skills: Ability to develop others & improve their skills; Ability to understand, explain and support change

Education Level: Bachelor Degree or equivalent
Experience Level: At least 10 years
Posted 1 day ago
3.0 years
30 - 40 Lacs
Ahmedabad, Gujarat, India
Remote
Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Python Programming, Machine Learning, TensorFlow, PyTorch, Scikit-learn, Apache Spark, Kafka, Ray, Dask, MLflow, Kubeflow, CI/CD, Docker, Kubernetes, AWS, Azure, or GCP, Prometheus, Grafana, LLM, Knowledge Graph, PLM systems

AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as a thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines.
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points
Experience with natural language processing, document understanding, or LLMs (Large Language Models).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana; see the illustrative metrics sketch after this listing).
Knowledge of requirement engineering or PLM systems.

✨ What we offer
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload your updated Resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
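For the logging/monitoring bonus point above, here is a minimal, hypothetical sketch that exposes Prometheus metrics from a Python service so they can be scraped and visualized in Grafana; the port, metric names, and dummy workload are assumptions made for illustration.

```python
# Minimal monitoring sketch (hypothetical): instruments a fake inference loop
# with Prometheus metrics that a scraper can collect for Grafana dashboards.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("predictions_total", "Number of predictions served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency in seconds")


@LATENCY.time()
def predict(features):
    # Stand-in for a real model call.
    time.sleep(random.uniform(0.01, 0.05))
    return sum(features) > 0


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        predict([random.uniform(-1, 1) for _ in range(20)])
        PREDICTIONS.inc()
        time.sleep(1)
```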
Posted 1 day ago
3.0 years
0 Lacs
Tamil Nadu, India
On-site
About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, the European Union’s leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines for BNP Paribas across the Group: Corporate and Institutional Banking, Investment Solutions and Retail Banking. Driving innovation and growth, we are harnessing the potential of over 10,000 employees to provide support and develop best-in-class solutions.

About BNP Paribas Group
BNP Paribas is the European Union’s leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group’s commercial & personal banking and several specialized businesses, including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong, diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporate and institutional clients) realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas, and a solid and fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future while ensuring the Group's performance and stability.

Commitment to Diversity and Inclusion
At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind, and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to, their gender, gender identity, sex, sexual orientation, ethnicity, race, color, national origin, age, religion, social status, mental or physical disabilities, veteran status, etc. As a global bank, we truly believe that the inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.

About Business Line / Function
ITG is a group function established in ISPL in 2019, with a presence in Mumbai and Chennai. We collaborate with various business lines of the Group to provide IT services.
Job Title: Developer
Date: 18-Jul-2025
Department: ITG – IT Transversal & Functions :: iCHROM
Location: Chennai
Business Line / Function: iCHROM : Compliance IT
Reports To (Direct): ISPL – ITG CPL IT – Manager
Grade (if applicable):
Number of Direct Reports: NA
Directorship / Registration: NA

Position Purpose
In the context of application development for the Compliance domain of BNPP, the developer will be part of a team of developers, align with the local team lead, take ownership, and deliver quality for all user stories worked on. We are looking for a highly skilled backend developer with strong experience in Java 8+, Spring Boot, and microservices.

Responsibilities
Direct Responsibilities
Design and develop backend services using Java 8+, Spring Boot, and JUnit.
Build and maintain robust RESTful APIs.
Integrate with MongoDB and ensure performance and security.
Ensure coding standards are followed.
Ensure collaboration, good rapport, and teamwork with ISPL and Paris team members.

Contributing Responsibilities
Take ownership of and commit to quality deliverables within estimated timelines, avoiding global schedule shifts.
Participate in code reviews and the documentation process.
Contribute to continuous improvement in development practices, processes, and code quality.
Participate in project meetings: fine-tuning, daily stand-ups, retrospectives.
Collaborate with team members: the ability to collect, analyze, synthesize, and present information in a clear, concise, and precise way.

Technical & Behavioral Competencies
Expert in Java 8+ and Spring Boot, ETL
RESTful API and microservices architecture
Hands-on experience with MongoDB, PostgreSQL
Apache Kafka for messaging
JUnit and Spring Boot testing frameworks, and code quality tools like Sonar
API gateways like Apigee and authentication strategies
Clean coding practices
Maven and Swagger tools

Good to have
Familiarity with payment systems or related compliance-driven systems
Knowledge of Docker, Kubernetes, and CI/CD pipelines using GitLab
Experience with integrated AI tooling and knowledge of efficient prompting
Knowledge of web security principles (OWASP, two-factor authentication, encryption, etc.)
Knowledge of hexagonal architecture, event-oriented architecture, and DDD

Specific Qualifications (if required)
Experience in Linux, DevOps, IntelliJ, GitLab (CI/CD pipelines), Cloud Object Storage, Kafka

Skills Referential
Behavioural Skills: Ability to collaborate / Teamwork; Attention to detail / rigor; Communication skills – oral & written; Ability to deliver / Results driven
Transversal Skills: Analytical ability; Ability to develop and adapt a process

Education Level: Bachelor Degree or equivalent
Experience Level: At least 3 years
Posted 1 day ago
3.0 years
30 - 40 Lacs
Ranchi, Jharkhand, India
Remote
Experience: 3.00+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH)
(*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company)

What do you need for this opportunity?
Must-have skills: Python Programming, Machine Learning, TensorFlow, PyTorch, Scikit-learn, Apache Spark, Kafka, Ray, Dask, MLflow, Kubeflow, CI/CD, Docker, Kubernetes, AWS, Azure, or GCP, Prometheus, Grafana, LLM, Knowledge Graph, PLM systems

AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues such as ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies, and VC backing positions us as a thought leader in the field of requirement intelligence.

🔍 Role Description
Design, build, and optimize ML models for intelligent requirement understanding and automation.
Develop scalable, production-grade AI pipelines and APIs.
Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
Work on large-scale data processing and real-time pipelines (see the illustrative Kafka sketch after this listing).
Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
Analyze and improve the efficiency and scalability of ML systems in production.
Stay current with the latest AI/ML research and translate innovations into product enhancements.

🧠 What are we looking for
3+ years of experience in ML/AI engineering with shipped products.
Proficiency in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, scikit-learn).
Strong software engineering practices: version control, testing, documentation.
Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
Good understanding of microservices, distributed systems, and model performance optimization.
Comfortable in a fast-paced startup environment; proactive and curious mindset.

🎯 Bonus Points
Experience with natural language processing, document understanding, or LLMs (Large Language Models).
Experience with Knowledge Graph technologies.
Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
Knowledge of requirement engineering or PLM systems.

✨ What we offer
Attractive compensation.
Work on impactful AI products solving real industrial challenges.
A collaborative, agile, and supportive team culture.
Flexible work hours and location (hybrid/remote).

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload your updated Resume.
Step 3: Increase your chances of getting shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
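For the "real-time pipelines" item above, here is a minimal, illustrative Kafka consumer in Python; the topic name, broker address, batch size, and the choice of the kafka-python client are assumptions for the sketch, not details from the listing.

```python
# Minimal streaming-pipeline sketch (hypothetical): consumes requirement events
# from Kafka and batches them for a downstream scoring step.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "requirement-events",                  # assumed topic name
    bootstrap_servers="localhost:9092",    # assumed broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="requirement-scorer",
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 100:
        # Hand the batch to a scoring step (model inference, enrichment, etc.).
        print(f"scoring batch of {len(batch)} events")
        batch.clear()
```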
Posted 1 day ago