4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join our digital revolution in NatWest Digital X

In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description

Join us as a Quality Automation Specialist
- In this key role, you’ll be undertaking and enabling automated testing activities in all delivery models
- We’ll look to you to support teams to develop quality solutions and enable continuous integration and assurance of defect-free deployment of customer value
- You’ll be working closely with feature teams and a variety of stakeholders, giving you great exposure to professional development opportunities
- We’re offering this role at associate level

What you'll do

Joining us in a highly collaborative role, you’ll be contributing to the transformation of testing using quality processes, tools, and methodologies, significantly improving control, accuracy and integrity. You’ll make sure repeatable, constant and consistent quality is built into all phases of the idea-to-value lifecycle at reduced cost or reduced time to market. It’s a chance to work with colleagues at multiple levels, and with cross-domain, domain, platform and feature teams, to build in quality as an integral part of all activities.

Additionally, you’ll be:
- Supporting the design of automation test strategies, aligned to business or programme goals
- Actioning and evolving more predictive and intelligent testing approaches, based on automation and innovative testing products and solutions
- Collaborating to refine the scope of manual and automated testing required, and the creation of automated test scripts, user documentation and artefacts
- Designing and creating a low-maintenance suite of stable, re-usable automated tests, usable both within the product or domain and across domains and systems in an end-to-end capacity
- Applying testing and delivery standards by understanding the product development lifecycle along with mandatory, regulatory and compliance requirements

The skills you'll need

We’re looking for someone with experience of automated testing, particularly in an Agile development or CI/CD environment. You’ll be an innovative thinker who can identify opportunities and design solutions, coupled with the ability to develop complex automation code. You’ll have a good understanding of Agile methodologies and experience of working in an Agile team, with the ability to relate everyday work to the strategic vision of the feature.

We’ll also look for you to have:
- At least 4 years’ experience in end-to-end and automation testing, creating automation scripts able to handle multiple sets of data
- Exposure to IntelliJ, RestAssured, ActiveMQ, GitLab, the Cucumber framework, Maven, Allure Reporting, MongoDB, Docker, Kafka (event handling), SQL and Selenium, with strong Java skills plus Jira, Zephyr, BDD frameworks, Gherkin and API testing
- Strong knowledge of automation frameworks
- An understanding of DevOps principles and of implementing automation in the CI/CD pipeline
- Excellent communication skills, with the ability to explain complex technical concepts to management-level colleagues
- Good collaboration and stakeholder management skills
- Understanding and proven experience of different kinds of automation frameworks, mainly BDD with Cucumber, and the ability to implement them
- Good knowledge of AWS or another cloud platform, with prior experience of working with applications hosted in the cloud
- Experience of delivering scripts in an Agile manner and an understanding of Agile principles
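The “multiple sets of data” requirement above is the heart of data-driven testing: one parameterised check executed over many input rows, with failures collected for reporting rather than aborting the run. A minimal plain-Java sketch of that idea (the class, rule, and field names are illustrative, not from the posting):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of a data-driven check, the idea behind a Cucumber
// Scenario Outline: one assertion applied to many rows of example data.
public class DataDrivenCheck {
    // Hypothetical business rule: only these account statuses are valid.
    static boolean statusIsValid(String status) {
        return List.of("ACTIVE", "DORMANT", "CLOSED").contains(status);
    }

    // Run the rule over every row, collecting failures so a reporter
    // (e.g. Allure) can show all of them, not just the first.
    static List<String> failures(List<Map<String, String>> rows) {
        return rows.stream()
                .filter(row -> !statusIsValid(row.get("status")))
                .map(row -> "failed: " + row)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = List.of(
                Map.of("account", "A-1", "status", "ACTIVE"),
                Map.of("account", "A-2", "status", "FROZEN"));
        System.out.println(failures(rows)); // reports the A-2 row only
    }
}
```

In a real suite the rows would come from a Gherkin Examples table or an external data file, and the check would call a service via RestAssured rather than an in-memory rule.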
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Sr Software Engineer
Experience level: 5 to 7 years
Location: Chennai

Who are we?

Crayon Data is a leading provider of AI-led revenue acceleration solutions, headquartered in Singapore with a presence in India and the UAE. Founded in 2012, our mission is to simplify the world’s choices. Our flagship platform, maya.ai, helps enterprises in Banking, Fintech, and Travel unlock the value of their data to create hyper-personalized experiences and drive sustainable revenue streams. maya.ai is powered by four “as a Service” components – Data, Recommendation, Customer Experience, and Marketplace – that work in unison to deliver tangible business outcomes.

Why Crayon? Why now?

Crayon is transforming into an AI-first company, and every Crayon (that’s what we call ourselves!) is undergoing a journey of upskilling and expanding their capabilities in the AI space. We’re building an organization where AI is not a department – it’s a way of thinking. If you’re an engineer who’s passionate about building things, experimenting with models, and applying AI to solve real business problems, you’ll feel right at home in our AI squads. Our environment is designed to be a playground for AI practitioners, with access to meaningful data, real-world challenges, and the freedom to innovate. You won’t just be writing models – you’ll be shaping Crayon’s future.

Experience: 5+ years
Industry: Banking, Financial Services, and AI
Team: Engineering

Job Overview

We are looking for a seasoned Senior Software Engineer who thrives at the intersection of technology and business. In this role, you will design, develop, and deploy robust backend systems and applications that power key initiatives for banking and financial institutions – from enhancing digital banking platforms to enabling seamless integration with AI-driven solutions. If you’re passionate about building scalable systems, solving complex technical problems, and delivering impact through clean, maintainable code, this opportunity is for you.

What You’ll Do
- Design, develop, and deploy scalable Java-based applications that support critical banking use cases such as customer onboarding, transaction processing, digital banking services, and AI-driven decision systems.
- Work with large-scale structured and unstructured datasets, integrating with data pipelines and backend systems to ensure seamless data flow and processing.
- Translate business requirements into robust technical solutions, ensuring performance, security, and scalability in production environments.
- Collaborate closely with product managers, data scientists, and engineering teams to integrate ML models and business logic into production applications.
- Continuously monitor, optimize, and enhance backend systems post-deployment to ensure system reliability and performance.
- Mentor junior engineers and contribute to best practices, code reviews, and internal knowledge sharing across the engineering team.

Can you say “Yes, I have!” to the following?
- A Bachelor’s degree in Computer Science or a related field
- 5+ years of work experience
- Good knowledge of Data Structures and Algorithms with excellent analytical and problem-solving skills
- At least 5 years of hands-on Java development experience
- At least 3 years of experience in Spring
- At least 2 years of experience implementing the CI/CD process with Jenkins, GitHub, SonarQube, Docker, and Kubernetes
- At least 2 years of experience with quality assurance practices like TDD
- 3+ years of microservices experience, including Kafka
- Experience working with AWS or Azure cloud platforms
- Exposure to or understanding of AI/ML applications in a production environment

Can you say “Yes, I will!” to the following?
- Design, code, debug, test, and validate application microservices and APIs; develop modular backend software components and participate in code reviews; monitor system performance metrics and identify potential risks and issues
- Collaborate in an agile scrum team with product owners
- Bring out-of-the-box thinking, teamwork, code quality, performance, CI/CD, and automation
- Coordinate and scale the evolving cloud-based solutions with product development teams
- You build it, you own it: code, deploy, and provide production support

What would you expect to work on here?
- Object-oriented languages and frameworks (Java, Spring Boot), cloud-based technologies (OCI, DevOps, Cloud Native)
- Work experience in the data industry is a big plus
- Working in the financial industry is a big plus
- Strong sense of ownership and passion for building scalable, flexible, secure and easy-to-maintain platforms
- Up to date with emerging technology trends and the ability to choose the best
- Ability to make the right design decisions related to product features and technology choices
- For a large, poorly understood problem, explores solutions (possibly with numerous POCs) to determine the correct course of action
- Passion for building a strong engineering culture, operational excellence and innovation

Brownie points for:
- Alignment with The Crayon Box of Values – because while skills can be learned, values define who we are.
- Passion for building reusable backend components, internal tools, and frameworks that improve developer productivity and system scalability.

Come play, build, and grow with us. Let’s co-create the future of AI at Crayon.
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?

American Express is looking for a hands-on Engineer I in the Risk Decision Technology (RDT) organization for the Authorization Modernization Program (AMP) team. The RDT team is responsible for cross-domain, configurable platforms that enable best-in-class risk management and American Express growth with strong governance. The AMP team is embarking on a journey to modernize and build a next-gen authorization platform. The Engineer in this role will be an integral part of a team that designs and builds large-scale, high-transaction, cloud-native applications, driving us closer to the vision of a modernized authorizations platform.
As an Engineer, you will:
- Function as a member of an agile team and help drive consistent development and test practices with respect to tools, common components, and documentation
- Focus primarily (80%+) on writing quality code, performing unit testing and test automation in ongoing sprints
- Build new micro-services and web services that help run fraud risk assessment for customer transactions
- Improve efficiency, reliability, and scalability of our data pipelines
- Work on multi-functional initiatives and collaborate with Engineers across organizations
- Build CI/CD pipelines for continuous integration and delivery
- Build automation for application changes and deployment for faster time to market
- Develop a deep understanding of tie-ins with other systems and platforms within the supported domains
- Perform ongoing refactoring of code, quality assurance and testing, applying best-in-class methodologies and processes
- Find opportunities to adopt innovative technologies and ideas in the development and test areas
- Provide continuous support for ongoing application availability
- Collaborate and influence within and across teams to create successes with an innovative attitude and challenge the status quo

Minimum Qualifications
- A bachelor’s degree in Computer Science, Information Systems, or another related field (or equivalent work experience)
- 6+ years of confirmed experience in software development and quality assurance
- 4 years of demonstrated ability in Java development and building large-scale distributed applications
- 3 years of experience with relational and NoSQL database technologies like Oracle, Cassandra, and Postgres
- Solid experience with automated release management using Gradle, Git, Jenkins
- Proactively looks beyond the obvious for continuous improvement opportunities
- Willingness to learn new technologies and use them to their optimal potential
- Excellent leadership and communication skills, with the ability to influence at all levels across functions, from both technical and practical views

Preferred Qualifications
- Building APIs using techniques and frameworks like REST, RPC (gRPC and similar), Spring Boot
- Hands-on experience with Java reactive programming
- Experience working with streaming solutions (Apache Kafka and Kafka Streams)

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
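The posting doesn’t describe the actual decisioning logic behind its authorization platform, but the “fraud risk assessment for customer transactions” responsibility can be pictured as a service that scores a transaction against rules. A hypothetical, deliberately simplified Java sketch (the thresholds, weights, and cut-off are all invented for illustration):

```java
// Hypothetical, deliberately simplified risk scorer: real authorization
// platforms use far richer features, trained models, and governance.
public class RiskSketch {
    record Txn(double amount, String country, boolean cardPresent) {}

    static int score(Txn t) {
        int score = 0;
        if (t.amount() > 10_000) score += 40;       // invented amount threshold
        if (!t.cardPresent()) score += 20;          // card-not-present signal
        if (!"IN".equals(t.country())) score += 15; // cross-border signal
        return score;
    }

    // Decline when the combined score crosses an (invented) cut-off.
    static boolean shouldDecline(Txn t) {
        return score(t) >= 60;
    }

    public static void main(String[] args) {
        Txn risky = new Txn(25_000, "US", false);
        System.out.println(score(risky) + " decline=" + shouldDecline(risky));
    }
}
```

In production such a check would sit behind a micro-service API and publish its decision to a Kafka topic for downstream consumers, which is where the streaming experience listed above comes in.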
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join our digital revolution in NatWest Digital X

In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description

Join us as a Quality Automation Specialist
- In this key role, you’ll be undertaking and enabling automated testing activities in all delivery models
- We’ll look to you to support teams to develop quality solutions and enable continuous integration and assurance of defect-free deployment of customer value
- You’ll be working closely with feature teams and a variety of stakeholders, giving you great exposure to professional development opportunities
- We’re offering this role at associate level

What you'll do

Joining us in a highly collaborative role, you’ll be contributing to the transformation of testing using quality processes, tools, and methodologies, significantly improving control, accuracy and integrity. You’ll make sure repeatable, constant and consistent quality is built into all phases of the idea-to-value lifecycle at reduced cost or reduced time to market. It’s a chance to work with colleagues at multiple levels, and with cross-domain, domain, platform and feature teams, to build in quality as an integral part of all activities.

Additionally, you’ll be:
- Supporting the design of automation test strategies, aligned to business or programme goals
- Actioning and evolving more predictive and intelligent testing approaches, based on automation and innovative testing products and solutions
- Collaborating to refine the scope of manual and automated testing required, and the creation of automated test scripts, user documentation and artefacts
- Designing and creating a low-maintenance suite of stable, re-usable automated tests, usable both within the product or domain and across domains and systems in an end-to-end capacity
- Applying testing and delivery standards by understanding the product development lifecycle along with mandatory, regulatory and compliance requirements

The skills you'll need

We’re looking for someone with experience of automated testing, particularly in an Agile development or CI/CD environment. You’ll be an innovative thinker who can identify opportunities and design solutions, coupled with the ability to develop complex automation code. You’ll have a good understanding of Agile methodologies and experience of working in an Agile team, with the ability to relate everyday work to the strategic vision of the feature.

We’ll also look for you to have:
- At least 4 years’ experience in end-to-end and automation testing, creating automation scripts able to handle multiple sets of data
- Exposure to IntelliJ, RestAssured, ActiveMQ, GitLab, the Cucumber framework, Maven, Allure Reporting, MongoDB, Docker, Kafka (event handling), SQL and Selenium, with strong Java skills plus Jira, Zephyr, BDD frameworks, Gherkin and API testing
- Strong knowledge of automation frameworks
- An understanding of DevOps principles and of implementing automation in the CI/CD pipeline
- Excellent communication skills, with the ability to explain complex technical concepts to management-level colleagues
- Good collaboration and stakeholder management skills
- Understanding and proven experience of different kinds of automation frameworks, mainly BDD with Cucumber, and the ability to implement them
- Good knowledge of AWS or another cloud platform, with prior experience of working with applications hosted in the cloud
- Experience of delivering scripts in an Agile manner and an understanding of Agile principles
Posted 2 days ago
18.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JD: Vice President – Delivery (Payments & Transaction Banking)
Grade: VP
Experience: 18+ years
Location: Goregaon, Mumbai

Role Overview

As Vice President – Delivery, you will lead the technology delivery of large-scale, mission-critical payments and transaction banking platforms, with a strong focus on modern tech stacks (Java, Microservices, J2EE). You will drive technical execution excellence, architecture alignment, and delivery rigor across global transformation engagements. The role requires deep technical leadership combined with domain expertise in Payments, Cash Management, and Digital Banking.

Key Responsibilities
- Technical & Program Leadership: Lead the design, architecture, and end-to-end delivery of large tech programs in the payments and transaction banking space. Drive engineering rigor across Java-based platforms, Microservices, APIs, and integrations. Ensure scalability, reliability, and performance of the platforms being delivered.
- Program Management: Oversee multi-stream programs, ensuring timelines, budgets, quality standards, and stakeholder alignment. Implement strong program governance, risk mitigation frameworks, and cadence reviews.
- Team Management: Manage large cross-functional technology teams (developers, architects, QA, DevOps). Drive performance, innovation, and a culture of engineering excellence.
- Stakeholder Engagement: Engage with C-level and senior stakeholders on architecture reviews, technical direction, and delivery roadmaps. Act as the escalation point for key technology delivery issues.
- Continuous Improvement & Best Practices: Champion DevOps, Agile/Scrum, and modern engineering principles. Lead initiatives around code quality, CI/CD, observability, and automation.

Core Areas of Expertise
- Strong hands-on expertise in Java, Spring Boot, J2EE, Microservices architecture, and REST APIs
- Proven delivery of enterprise-scale payment systems, transaction platforms, and cash/channel banking applications
- Deep domain experience in digital payments, RTGS/NEFT, UPI, ISO 20022, SWIFT, reconciliation systems
- Deep understanding of platform engineering, systems integration, and regulatory compliance
- Ability to scale large tech teams, drive modernization, and lead cloud-native transformations

Key Requirements
- B.Tech in Computer Science or a related field
- 18+ years of progressive experience in product engineering, platform delivery, or fintech transformation
- Strong technical background with hands-on or architectural exposure to Java/J2EE, Spring Boot, Microservices, Kafka, and cloud platforms
- Demonstrated success in leading enterprise banking/payment system implementations
- Proficient in Agile, DevOps, SAFe, and global delivery methodologies
- Experience handling high-volume, low-latency, mission-critical systems
- PMP/PRINCE2 certification preferred
- Willingness to travel as required

Personal Attributes
- Technically strong and detail-oriented with an engineering mindset
- Strategic thinker with delivery discipline and executive presence
- Excellent communicator with the ability to engage CXO-level stakeholders
- Proactive, result-driven, and comfortable working in high-stakes environments
Posted 2 days ago
18.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
JD: Vice President – Delivery (Payments & Transaction Banking)
Grade: VP
Experience: 18+ years
Location: Goregaon, Mumbai

Role Overview

As Vice President – Delivery, you will lead the technology delivery of large-scale, mission-critical payments and transaction banking platforms, with a strong focus on modern tech stacks (Java, Microservices, J2EE). You will drive technical execution excellence, architecture alignment, and delivery rigor across global transformation engagements. The role requires deep technical leadership combined with domain expertise in Payments, Cash Management, and Digital Banking.

Key Responsibilities
- Technical & Program Leadership: Lead the design, architecture, and end-to-end delivery of large tech programs in the payments and transaction banking space. Drive engineering rigor across Java-based platforms, Microservices, APIs, and integrations. Ensure scalability, reliability, and performance of the platforms being delivered.
- Program Management: Oversee multi-stream programs, ensuring timelines, budgets, quality standards, and stakeholder alignment. Implement strong program governance, risk mitigation frameworks, and cadence reviews.
- Team Management: Manage large cross-functional technology teams (developers, architects, QA, DevOps). Drive performance, innovation, and a culture of engineering excellence.
- Stakeholder Engagement: Engage with C-level and senior stakeholders on architecture reviews, technical direction, and delivery roadmaps. Act as the escalation point for key technology delivery issues.
- Continuous Improvement & Best Practices: Champion DevOps, Agile/Scrum, and modern engineering principles. Lead initiatives around code quality, CI/CD, observability, and automation.

Core Areas of Expertise
- Strong hands-on expertise in Java, Spring Boot, J2EE, Microservices architecture, and REST APIs
- Proven delivery of enterprise-scale payment systems, transaction platforms, and cash/channel banking applications
- Deep domain experience in digital payments, RTGS/NEFT, UPI, ISO 20022, SWIFT, reconciliation systems
- Deep understanding of platform engineering, systems integration, and regulatory compliance
- Ability to scale large tech teams, drive modernization, and lead cloud-native transformations

Key Requirements
- B.Tech in Computer Science or a related field
- 18+ years of progressive experience in product engineering, platform delivery, or fintech transformation
- Strong technical background with hands-on or architectural exposure to Java/J2EE, Spring Boot, Microservices, Kafka, and cloud platforms
- Demonstrated success in leading enterprise banking/payment system implementations
- Proficient in Agile, DevOps, SAFe, and global delivery methodologies
- Experience handling high-volume, low-latency, mission-critical systems
- PMP/PRINCE2 certification preferred
- Willingness to travel as required

Personal Attributes
- Technically strong and detail-oriented with an engineering mindset
- Strategic thinker with delivery discipline and executive presence
- Excellent communicator with the ability to engage CXO-level stakeholders
- Proactive, result-driven, and comfortable working in high-stakes environments
Posted 2 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role

We’re looking for a Senior Engineering Manager to lead our Data/AI Platform and MLOps teams at slice. In this role, you’ll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You’ll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps which can be leveraged by various functions like legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do
- Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products – enabling product experience through data)
- Maintain hands-on technical leadership – lead by example through code reviews, architecture decisions, and direct technical contribution
- Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
- Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency
- Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
- Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems
- Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
- Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications
- Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
- Establish and enforce best practices around data governance, access controls, and data quality
- Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
- Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
- Coach engineers and team leads through regular 1:1s, feedback, and performance conversations

What You Will Need
- 10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
- Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities
- Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
- Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform
- Hands-on experience building AI/ML platforms, including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development
- Proven experience improving performance, cost, and observability in large-scale data systems
- Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code
- Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
- Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns
- Comfort working in fast-paced, product-led environments, with the ability to balance innovation and regulatory constraints
- Bonus: experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations

Life at slice

Life so good, you’d think we’re kidding:
- Competitive salaries. Period.
- Extensive medical insurance that looks out for our employees and their dependents. We’ll love you and take care of you, our promise.
- Flexible working hours. Just don’t call us at 3 AM, we like our sleep schedule.
- Tailored vacation and leave policies so that you enjoy every important moment in your life.
- A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
- Learning and upskilling opportunities. Seriously, not kidding.
- Good food, games, and a cool office to make you feel at home.
- An environment so good, you’ll forget the term “colleagues can’t be your friends”.
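The “real-time data processing for fraud detection” responsibility in this role often reduces to windowed aggregations over an event stream. A minimal Java sketch of a per-user velocity check (the window size and limit are invented; a production version would run in Flink or Kafka Streams with persistent state, not an in-memory map):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sliding-window velocity check: flag a user who produces
// more than `limit` events within `windowMillis`. In production this
// state would live in a stream processor, not a HashMap.
public class VelocityCheck {
    private final long windowMillis;
    private final int limit;
    private final Map<String, Deque<Long>> events = new HashMap<>();

    VelocityCheck(long windowMillis, int limit) {
        this.windowMillis = windowMillis;
        this.limit = limit;
    }

    // Record one event for `user` at time `now`; true means "flag it".
    boolean record(String user, long now) {
        Deque<Long> times = events.computeIfAbsent(user, u -> new ArrayDeque<>());
        times.addLast(now);
        // Evict timestamps that have fallen out of the window.
        while (!times.isEmpty() && now - times.peekFirst() > windowMillis) {
            times.pollFirst();
        }
        return times.size() > limit;
    }

    public static void main(String[] args) {
        VelocityCheck check = new VelocityCheck(60_000, 3); // 3 events per minute
        long t = 0;
        for (int i = 0; i < 3; i++) System.out.println(check.record("u1", t += 1_000));
        System.out.println(check.record("u1", t + 1_000)); // fourth event in window
    }
}
```

The same pattern, keyed by card or device instead of user, underpins many of the fraud signals a streaming platform computes.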
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role - Java Backend Developer
Experience - 3-7 yrs
Location - Bangalore
● Bachelor’s/Master’s in Computer Science from a reputed institute/university
● 3-7 years of strong experience in building Java/Golang/Python-based server-side solutions
● Strong in data structures, algorithms and software design
● Experience in designing and building RESTful microservices
● Experience with server-side frameworks such as JPA (Hibernate/Spring Data), Spring, Vert.x, Spring Boot, Redis, Kafka, Lucene/Solr/Elasticsearch etc.
● Experience in data modeling and design, and database query tuning
● Experience in MySQL and a strong understanding of relational databases
● Comfortable with agile, iterative development practices
● Excellent communication (verbal & written), interpersonal and leadership skills
● Previous experience as part of a start-up or a product company
● Experience with AWS technologies would be a plus
● Experience with reactive programming frameworks would be a plus
● Contributions to open source are a plus
● Familiarity with deployment architecture principles and prior experience with container orchestration platforms, particularly Kubernetes, would be a significant advantage
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose Manage Electro-mechanical and Fire & Safety operations to ensure the quality and deliverables in a timely and cost effective manner at all office locations of Bangalore; Role would be responsible to build, deploy, and scale machine learning and AI solutions across GMR’s verticals. This role will build and manage advanced analytics initiatives, predictive engines, and GenAI applications — with a focus on business outcomes, model performance, and intelligent automation. Reporting to the Head of Automation & AI, you will operate in a high-velocity, product-oriented environment with direct visibility of impact across airports, energy, infrastructure and enterprise functions ORGANISATION CHART Key Accountabilities Accountabilities Key Performance Indicators AI & ML Development Build and deploy models using supervised, unsupervised, and reinforcement learning techniques for use cases such as forecasting, predictive scenarios, dynamic pricing & recommendation engines, and anomaly detection, with exposure to broad enterprise functions and business Lead development of models, NLP classifiers, and GenAI-enhanced prediction engines. Design and integrate LLM-based features such as prompt pipelines, fine-tuned models, and inference architecture using Gemini, Azure OpenAI, LLama etc. Program Plan Vs Actuals End-to-End Solutioning Translate business problems into robust data science pipelines with emphasis on accuracy, explainability, and scalability. Own the full ML lifecycle — from data ingestion and feature engineering to model training, evaluation, deployment, retraining, and drift management. Program Plan Vs Actuals Cloud , ML & data Engineering Deploy production-grade models using AWS, GCP, or Azure AI platforms and orchestrate workflows using tools like Step Functions, SageMaker, Lambda, and API Gateway. Build and optimise ETL/ELT pipelines, ensuring smooth integration with BI tools (Power BI, QlikSense or similar) and business systems. 
Data compression and familiarity with cloud FinOps will be an advantage, as will experience with tools like Kafka, Apache Airflow, or similar. KPI: 100% compliance to processes.
KEY ACCOUNTABILITIES - Additional Details
EXTERNAL INTERACTIONS: Consulting and management services providers; IT service providers / analyst firms; vendors.
INTERNAL INTERACTIONS: GCFO and Finance Council, Procurement Council, IT Council, HR Council (GHROC); GCMO / BCMO.
FINANCIAL DIMENSIONS
Other Dimensions
EDUCATION QUALIFICATIONS: Engineering
Relevant Experience: 5-8 years of hands-on experience in machine learning, AI engineering, or data science, including deploying models at scale. Strong programming and modelling skills in languages such as Python and SQL, and in ML frameworks like scikit-learn, TensorFlow, XGBoost, and PyTorch. Demonstrated ability to build models using supervised, unsupervised, and reinforcement learning techniques to solve complex business problems.
Technical & Platform Skills: Proven experience with cloud-native ML tools: AWS SageMaker, Azure ML Studio, Google AI Platform. Familiarity with DevOps and orchestration tools: Docker, Git, Step Functions, Lambda, Google AI, or similar. Comfort working with BI/reporting layers, testing, and model performance dashboards.
Mathematics and Statistics: Linear algebra, Bayesian methods, information theory, statistical inference, clustering, regression, etc.
Collaborate with Generative AI and RPA teams to develop intelligent workflows. Participate in rapid prototyping, technical reviews, and internal capability building.
NLP and Computer Vision: Knowledge of Hugging Face Transformers, spaCy, or similar NLP tools; YOLO, OpenCV, or similar for computer vision.
COMPETENCIES: Personal Effectiveness; Social Awareness; Entrepreneurship; Problem Solving & Analytical Thinking; Planning & Decision Making; Capability Building; Strategic Orientation; Stakeholder Focus; Networking; Execution & Results; Teamwork & Interpersonal Influence
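The role above owns the full ML lifecycle, including drift management. As an illustrative sketch only (not part of the posting), a minimal mean-shift drift check over a live scoring window could look like this; the function name and the 3-standard-error threshold are assumptions for illustration:

```python
import statistics

def mean_shift_drift(reference, live, threshold=3.0):
    """Flag drift when the live window's mean deviates from the
    reference mean by more than `threshold` standard errors."""
    ref_mean = statistics.fmean(reference)
    ref_sd = statistics.stdev(reference)
    se = ref_sd / (len(live) ** 0.5)  # standard error of the live mean
    z = abs(statistics.fmean(live) - ref_mean) / se
    return z > threshold

# Stable window drawn from the same distribution: no drift expected
reference = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1, 9.9]
assert mean_shift_drift(reference, [10.0, 10.1, 9.9, 10.2, 9.8]) is False
# Clearly shifted window: drift flagged
assert mean_shift_drift(reference, [13.0, 13.2, 12.9, 13.1, 13.0]) is True
```

In practice a production pipeline would use a proper statistical test or a library detector; this only shows the shape of the check that a retraining trigger might wrap.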
Posted 2 days ago
0 years
15 - 18 Lacs
India
On-site
Develop and maintain Java-based integration services that invoke and interact with IBM ITX maps and artifacts Build wrappers, launchers, or custom adapters to automate ITX map execution and handle pre/post-processing logic Implement error handling, logging, and monitoring mechanisms for data transformations and file transfers Support EDI processing, especially HIPAA X12 transactions (270/271, 837, 835, etc.) Collaborate with ITX developers, solution architects, business analysts, and QA teams for end-to-end integration delivery Write and maintain unit and integration tests, ensuring robust and scalable code Participate in code reviews, performance tuning, and refactoring efforts to optimize system throughput and reliability Troubleshoot issues in integration workflows, data mapping, and system connectivity Requirements Strong Java development experience (Java 8 or higher) Experience integrating with IBM ITX (Transformation Extender) through Java APIs or external command execution Solid understanding of SOA, REST/SOAP APIs, and messaging technologies (JMS, Kafka, or MQ) Familiarity with HIPAA X12 transactions and EDI standards Proficiency with XML, JSON, XSLT, and flat file formats Working knowledge of Spring Boot, Maven/Gradle, and version control (Git) Hands-on experience with secure data exchange (e.g., SFTP, HTTPS, encryption) Strong debugging and performance tuning skills in integration-heavy environments
Posted 2 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Analyze business requirements & functional specifications Be able to determine the impact of changes in current functionality of the system Interaction with diverse Business Partners and Technical Workgroups Be flexible to collaborate with onshore business, during US business hours Be flexible to support project releases, during US business hours Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Undergraduate degree or equivalent experience 3+ years of working experience in Python, Pyspark, Scala 3+ years of experience working on MS Sql Server and NoSQL DBs like Cassandra, etc. 
Hands-on working experience in Azure Databricks Solid healthcare domain knowledge Exposure to following DevOps methodology and creating CI/CD deployment pipeline Exposure to following Agile methodology specifically using tools like Rally Ability to understand the existing application codebase, perform impact analysis and update the code when required based on the business logic or for optimization Proven excellent analytical and communication skills (Both verbal and written) Preferred Qualification Experience in the Streaming application (Kafka, Spark Streaming, etc.) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #Gen #NJP
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Title: Java Backend Developer Location : Kharadi, Pune We are looking for a Java Backend with Dev Ops Experience. : ( 7-13 Exp ) Java Backend Developer, specializing in Spring Boot, microservices, RESTful API development, and hibernate to build scalable, high-performance applications GraphQL. Leveraging Docker for containerization and Kubernetes for orchestration. TECHNICAL SKILLS • Languages : Java, JSP, Servlets • Frameworks : Spring Framework, Spring Boot, Spring MVC, Spring Data JPA, Hibernate • Web Services : RESTful APIs, SOAP, Microservices • Database & ORM : SQL, JDBC • Dev Tools : Eclipse, IntelliJ, Postman, SOAP UI, Git, Bitbucket, Gerrit, Jira, Slack • DevOps & Platforms : Docker, Kubernetes, Jenkins, Grafana • Messaging & Monitoring : Apache Kafka, SonarQube • Testing & QA : JUnit 5, Mockito, Groovy
Posted 2 days ago
5.0 years
0 Lacs
Delhi, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a skilled and passionate Full Stack Engineer to join our high-performing engineering team. You will be responsible for building scalable, secure, and high-performance applications that support our healthcare platforms. This role requires deep expertise in both front-end and back-end development, with a solid focus on microservices, cloud-native architecture, observability, and operational excellence. 
Primary Responsibilities Design, develop, and maintain full-stack applications using modern frameworks and tools Build and develop UI using ReactJS Develop microservices using Java and Spring Boot and integrate with messaging systems like Apache Kafka Write comprehensive unit and integration test cases to ensure code quality and reliability Implement monitoring and observability using Grafana, Dynatrace, and Splunk Deploy and manage applications using Kubernetes (K8s) and GitHub Actions Participate in production support activities as needed, including incident resolution and root cause analysis Collaborate with cross-functional teams including product, QA, DevOps, and UX Participate in code reviews, architecture discussions, and agile ceremonies Ensure application performance, scalability, and security Stay current with emerging technologies and propose innovative solutions Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 5+ years of experience in full stack development Hands-on experience with Kubernetes, Docker, and GitHub Actions for CI/CD Experience with Kafka for event-driven architecture Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) Solid proficiency in Java, Spring Boot, and RESTful API development Solid understanding of HTML, CSS, JavaScript, and modern front-end ReactJS framework Familiarity with Grafana, Elastic APM, and Splunk for monitoring and logging Proven solid problem-solving skills and attention to detail Proven ability to write and maintain unit tests using tools like JUnit, Mockito, etc. Preferred Qualifications Experience in the healthcare domain or working with HIPAA-compliant systems Exposure to DevOps practices and infrastructure as code (e.g., Terraform) Knowledge of security best practices in web and microservices development Familiarity with cloud platforms like AWS, Azure, or GCP At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Java Developer – 4 Years Experience Location: Bangalore (Onsite) Job Type: Full-Time Experience Required: 4+ Years Project Type: Migration & Enterprise-Level Projects Job Summary: We are seeking a skilled Java Developer with 4+ years of hands-on experience in enterprise-level applications and migration projects. The ideal candidate should be strong in Kafka, multi-threading, microservices, SQL, and core Java coding principles. You will work in a fast-paced Agile environment focused on designing scalable systems and supporting complex business processes. Key Responsibilities: Develop and maintain scalable Java-based microservices. Build and integrate robust Kafka-based messaging solutions. Write clean, efficient, and testable multi-threaded code for high-performance applications. Collaborate with cross-functional teams to support migration initiatives. Optimize SQL queries and interact with relational databases effectively. Participate in code reviews, technical discussions, and performance tuning. Deliver high-quality code aligned with enterprise standards and best practices. Mandatory Skills: Strong Core Java (OOPs, Collections, Exception Handling) Apache Kafka (Producer, Consumer, Streams, Topics) Multi-threading & Concurrency Microservices Architecture (Spring Boot preferred) SQL (Joins, Indexing, Stored Procedures, Performance Tuning) Strong debugging and problem-solving skills Good to Have: Experience with Spring Cloud, Docker, or Kubernetes Familiarity with CI/CD pipelines Exposure to cloud platforms (AWS/Azure/GCP) Knowledge of JIRA, Git, and Agile methodologies
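The posting emphasises multi-threading and bounded concurrency. The role is Java-focused, but the worker-pool pattern it implies (Java's `ExecutorService` with a fixed pool) can be sketched compactly in Python's standard library; `fetch` here is a hypothetical placeholder for I/O-bound work such as a database or HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(record_id):
    # Placeholder for I/O-bound work (DB query, HTTP request, ...)
    return record_id * 2

# A bounded worker pool caps in-flight work at max_workers,
# analogous to Executors.newFixedThreadPool(4) in Java.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(5)))

assert results == [0, 2, 4, 6, 8]
```

`pool.map` preserves input order in its results even though tasks may complete out of order, which is the usual expectation when fanning out lookups.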
Posted 2 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus) Location: Hyderabad Job Type: Full-Time Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments --- About the Role: We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency. --- Key Responsibilities: Test Automation Framework Design & Implementation · Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript. · Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces). · Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution. AI/ML-Specific Testing Strategy · Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats. · Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets. · Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy. 
· Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis). End-to-End Test Coverage · Lead the implementation of end-to-end automation for: o Web interfaces (React, Angular, or other SPA frameworks) o Backend services (REST, GraphQL, WebSockets) o ML model integration endpoints (real-time inference APIs, batch pipelines) · Build test utilities for mocking, stubbing, and simulating AI inputs and datasets. CI/CD & Tooling Integration · Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar. · Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management. · Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations. Quality Engineering & Leadership · Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc. · Lead and mentor a team of automation and QA engineers across multiple projects. · Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices. Agile & Cross-Functional Collaboration · Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives. · Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability. · Review feature specs, AI/ML model update notes, and data schemas for impact analysis. --- Required Skills and Qualifications: Technical Skills: · Strong hands-on expertise with Playwright (TypeScript/JavaScript). · Experience building custom automation frameworks and utilities from scratch. · Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards. 
· Solid knowledge of HTTP protocols, API testing (Postman, Supertest, RestAssured). · Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI). · Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning. Domain Knowledge: · Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models. · Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift. · Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users. Leadership & Communication: · Proven experience leading QA/Automation teams (4+ engineers). · Strong documentation, code review, and stakeholder communication skills. · Experience collaborating in Agile/SAFe environments with cross-functional teams. --- Preferred Qualifications: · Experience with AI Explainability frameworks like LIME, SHAP, or What-If Tool. · Familiarity with Test Data Management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data. · Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6. · Experience with GraphQL, Kafka, or event-driven architecture testing. · QA Certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure). --- Education: · Bachelor’s or Master’s degree in Computer Science, Software Engineering, or related technical discipline. · Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps. --- Why Join Us? · Work on cutting-edge AI platforms shaping the future of [industry/domain]. · Collaborate with world-class AI researchers and engineers. · Drive the quality of products used by [millions of users / high-impact clients]. · Opportunity to define test automation practices for AI—one of the most exciting frontiers in tech.
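One concrete technique this role description names is validating non-deterministic model outputs with tolerance ranges against golden datasets. A minimal sketch (standard-library Python; the helper name and the 5% relative tolerance are illustrative assumptions, not from the posting):

```python
def assert_within_tolerance(observed, golden, rel_tol=0.05):
    """Compare a model metric against a golden value with a relative
    tolerance, instead of demanding exact equality run to run."""
    allowed = abs(golden) * rel_tol
    assert abs(observed - golden) <= allowed, (
        f"{observed} outside {rel_tol:.0%} of golden {golden}"
    )

# Golden-dataset style check: accuracy may wobble between runs,
# but must stay within 5% of the recorded baseline.
assert_within_tolerance(observed=0.91, golden=0.93)   # within band
try:
    assert_within_tolerance(observed=0.70, golden=0.93)
    raised = False
except AssertionError:
    raised = True
assert raised  # a regression this large fails the suite
```

The same idea extends to confidence-interval checks on classification labels or similarity thresholds on chat responses, as described above.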
Posted 2 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Analyze business requirements & functional specifications Be able to determine the impact of changes in current functionality of the system Be able to handle SAS Algo dev assignments independently, end-to-end SDLC Interaction with diverse Business Partners and Technical Workgroups Drive Algo optimization and innovation in the team Flexible to collaborate with onshore business, during US business hours Flexible to support project releases, during US business hours Adherence to the defined delivery process/guidelines Drive project quality process compliance Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Undergraduate degree or equivalent experience. 
7+ years of working experience in Python, PySpark, and Scala 5+ years of working experience in Azure Databricks 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra, etc. Hands-on experience in Streaming applications (Kafka, Spark Streaming, etc.) Solid healthcare domain knowledge Exposure to following DevOps methodology and creating CI/CD deployment pipelines Exposure to following Agile methodology, specifically using tools like Rally Proven ability to understand the existing application codebase, perform impact analysis and update the code when required based on the business logic or for optimization Proven excellent analytical and communication skills (both verbal and written) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 days ago
18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Brief House of Shipping provides business consultancy and advisory services for Shipping & Logistics companies. House of Shipping's commitment to their customers begins with developing an understanding of their business fundamentals. We are hiring on behalf of one of our key US-based clients - a globally recognized service provider of flexible and scalable outsourced warehousing solutions, designed to adapt to the evolving demands of today’s supply chains. Currently House of Shipping is looking to identify a high-caliber Warehouse Engineering & Automation Lead. This is an on-site position in Hyderabad. Background and experience: 15–18 years in software engineering roles with at least 5 years in leadership roles managing cross-functional delivery teams Experience in designing and scaling platforms for logistics, warehouse automation, or e-commerce Proven success in leading large product or platform teams through architecture modernization and tech transformation Job purpose: To lead the architecture, development, and delivery of enterprise-grade platforms and applications supporting warehouse, logistics, and supply chain operations. This role drives engineering standards, delivery velocity, team growth, and cross-functional alignment with business strategy.
Main tasks and responsibilities: Own engineering strategy, delivery planning, and execution of large-scale, mission-critical systems across WMS, TMS, and supply chain orchestration platforms Define and implement best practices in system design, scalability, observability, and modularity Guide the technical direction of microservices architecture, cloud-native deployments (AWS/GCP), and DevSecOps pipelines Build engineering roadmaps in collaboration with product, operations, and infrastructure teams Mentor architects, engineering managers, and tech leads on design decisions, sprint execution, and long-term maintainability Review and approve solution architectures, integration patterns (REST, EDI, streaming), and platform-wide decisions Oversee budgeting, licensing, tooling decisions, and vendor evaluation Champion a culture of code quality, automated testing, release velocity, and incident-free deployment Ensure compliance with global security, audit, and data privacy standards (SOC2, ISO 27001, GDPR where applicable) Serve as escalation point for cross-platform technical blockers and engineering productivity challenges Education requirements: Bachelor’s or Master’s in Computer Science, Software Engineering, or related technical field Preferred: Certifications in Cloud Architecture (AWS/GCP), TOGAF, or Scaled Agile Leadership Competencies and skills: Technical: Distributed systems, Microservices, DevOps, Cloud-native development, Event-driven architecture Leadership: Strategic planning, team scaling, stakeholder communication, delivery governance Tools: Java/Python/Node.js, Kubernetes, Kafka, Git, Terraform, CI/CD (Jenkins/Azure DevOps), Logging/Monitoring stacks Strong business acumen in supply chain/logistics workflows and SLAs
Posted 2 days ago
6.0 years
0 Lacs
India
Remote
Position: Java Developer Location: Remote Employment type: Full-time Experience required: 6+ years Job Overview: As a Java Backend Engineer, you will be instrumental in understanding complex business requirements and providing comprehensive solutions, from design through implementation and rollout of Java application software. You will be responsible for coding, testing, and implementing high-quality Java modules for a sophisticated enterprise application system that integrates with multiple third-party web services, like AWS (EC2, Lambda, SNS, SQS). Additionally, you will modify existing software to fix errors, enhance performance, and troubleshoot, debug, and upgrade existing applications. Effective coordination with various stakeholders, including technical teams, operations, business analysts, and quality assurance teams globally, is crucial. You will ensure that all designs and implementations are scalable, fault-tolerant, and capable of handling high-volume data sources. You will be required to write scalable web APIs for the Spring Boot application. Key Responsibilities: Design, develop, and maintain robust and scalable Java applications using Spring Boot and Hibernate. Implement and manage microservices architecture to enhance application modularity and scalability. Develop RESTful APIs and ensure their integration with frontend services. Secure applications using JSON Web Tokens (JWT) and other security protocols. Deploy and manage applications on AWS cloud infrastructure. Work with both relational (MySQL) and NoSQL databases to ensure efficient data storage and retrieval. Utilize Docker and Kubernetes for containerization and orchestration of applications. Implement messaging solutions using Kafka to ensure reliable and high-throughput communication between services. Set up and maintain CI/CD pipelines to automate the software development lifecycle. Deploy and manage applications on Tomcat servers.
Write unit tests using JUnit and Mockito to ensure code quality and reliability. Collaborate with cross-functional teams to define, design, and ship new features. Troubleshoot and resolve production issues in a timely manner. Required Qualifications: Bachelor’s degree in Computer Science, IT, or a related technical field. 6+ years of hands-on experience as a Java Backend Developer. Strong expertise in Spring Boot and Hibernate. Proven experience with microservices architecture and REST API development. Familiarity with JSON Web Tokens (JWT) for securing applications. Proficiency in AWS cloud services and infrastructure management. Experience with both NoSQL (e.g., MongoDB, DynamoDB, etc.) and relational databases (e.g., MySQL). Strong proficiency in AWS, with experience in multithreading and DSA. Hands-on experience with Docker and Kubernetes for containerization and orchestration. Experience with Kafka/SQS/SNS, etc., for building scalable and reliable messaging solutions. Experience with CI/CD pipelines using tools like Jenkins. Excellent problem-solving skills and the ability to work independently as well as in a team environment. Strong communication and interpersonal skills.
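Among the requirements above is securing applications with JSON Web Tokens. As an illustration only of what an HS256-style token involves (a sketch; production services should use a vetted library such as jjwt in Java or PyJWT in Python), the signing and verification steps can be shown with the standard library:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature with an HMAC-SHA256 signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = _b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-42"}, b"shared-secret")
assert verify(token, b"shared-secret")
assert not verify(token, b"wrong-secret")
```

A real service would also validate registered claims such as `exp` and `iss` after the signature check; this sketch covers only the integrity step.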
Posted 2 days ago
7.0 years
0 Lacs
India
Remote
Remote, rotational shift. Immediate joiners only. Required Skills & Qualifications: 7+ years of experience as a Databricks Administrator or in a similar role. Strong knowledge of Azure/AWS Databricks and cloud computing platforms. Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling. Expertise in Spark optimization, data caching, and performance tuning. Proficiency in Python, Scala, or SQL for data processing. Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation. Familiarity with Git, DevOps, and CI/CD pipelines. Strong problem-solving skills and ability to troubleshoot Databricks-related issues. Excellent communication and stakeholder management skills. Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Associate/Professional). Experience in Delta Lake, Unity Catalog, and MLflow. Knowledge of Kubernetes, Docker, and containerized workloads. Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.)
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Analyze business requirements and functional specifications Able to determine the impact of changes in current functionality of the system Interaction with diverse Business Partners and Technical Workgroups Be flexible to collaborate with onshore business, during US business hours Be flexible to support project releases, during US business hours Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Undergraduate degree or equivalent experience 3+ years of working experience in Python, PySpark, Scala 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra, etc. Hands-on working experience in Azure Databricks Exposure to following DevOps methodology and creating CI/CD deployment pipeline Exposure to following Agile methodology specifically using tools like Rally.
Solid healthcare domain knowledge Demonstrated ability to understand the existing application codebase, perform impact analysis and update the code when required based on the business logic or for optimization Proven excellent Analytical and Communication skills (Both Verbal and Written) Preferred Qualification Experience in the Streaming application (Kafka, Spark Streaming, etc.) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 2 days ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Wissen Technology is Hiring for Java + Python Developer

About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We're looking for a versatile Java + Python Developer who thrives in backend development and automation. You'll be working on scalable systems, integrating third-party services, and contributing to high-impact projects across fintech, data platforms and cloud-native applications.

Experience: 2-10 Years
Location: Bengaluru

Key Responsibilities:
- Design, develop, and maintain backend services using Java and Python
- Build and integrate RESTful APIs, microservices, and data pipelines
- Write clean, efficient, and testable code across both Java and Python stacks
- Work on real-time, multithreaded systems and optimize performance
- Collaborate with DevOps and data engineering teams on CI/CD, deployment, and monitoring
- Participate in design discussions, peer reviews, and Agile ceremonies

Required Skills:
- 2-10 years of experience in software development
- Strong expertise in Core Java (8+) and Spring Boot
- Proficient in Python (data processing, scripting, API development)
- Solid understanding of data structures, algorithms, and multithreading
- Hands-on experience with REST APIs, JSON, SQL/NoSQL (PostgreSQL, MongoDB, etc.)
- Familiarity with Git, Maven/Gradle, Jenkins, Agile/Scrum

Preferred Skills:
- Experience with Kafka, RabbitMQ, or message queues
- Cloud services (AWS, Azure, or GCP)
- Knowledge of data engineering tools (Pandas, NumPy, PySpark, etc.)
- Docker/Kubernetes familiarity
- Exposure to ML/AI APIs or DevOps scripting

The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
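The role above pairs Java services with Python data pipelines. A minimal, hypothetical pipeline step in Python, parsing JSON records and filtering on a threshold (the field names are illustrative, not from any real system):

```python
import json

def parse_and_filter(raw_lines, min_score):
    """Parse JSON lines and keep records at or above a score threshold --
    a typical small step in an ingestion pipeline (hypothetical schema)."""
    records = (json.loads(line) for line in raw_lines)
    return [r for r in records if r.get("score", 0) >= min_score]

lines = ['{"id": 1, "score": 0.9}', '{"id": 2, "score": 0.2}']
print(parse_and_filter(lines, 0.5))  # only the high-scoring record survives
```

The generator expression keeps memory flat while parsing, which matters once the same step runs over file-sized inputs instead of a two-line list.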
Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic method. The roles in this function cover all primary development activity across all technology functions that ensure we deliver code with high quality for our applications, products and services, and that we understand customer needs and develop product roadmaps. These roles include, but are not limited to, analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery. As with every role in the company, each position has a requirement for building quality into every output. This also includes evaluating new tools, techniques and strategies; automating common tasks; and building common utilities to drive organizational efficiency, with a passion for technology and solutions and influence of thought leadership on future capabilities and opportunities to apply technology in new and innovative ways. Generally, work is self-directed and not prescribed.

Primary Responsibilities
- Manage Azure Cloud infrastructure and build resilient, self-scaling systems
- Implement solutions to continuously improve operational reliability of the cloud infrastructure
- Be responsible for availability, performance, monitoring and infra provisioning for the platform, which comprises cloud infrastructure and on-prem technologies
- Closely partner with Engineering and Technical Support teams to drive resolution of critical issues
- Publish and implement operational standards for all cloud infrastructure and services
- Work towards reducing operations toil by automating repeatable tasks
- Mentor and develop other members in the SRE subject area
- Handle application deployments using CI/CD tools: code repository, code scanning, artifact repo, compliance scanning, packaging, deployment, and configuration management
- Build operations dashboards leveraging tools like Dynatrace, Splunk or Grafana
- Handle incident, change and problem management
- Help with provisioning of infrastructure using Terraform
- Enhance platform observability dashboards
- Closely partner with development teams and help address platform-related roadblocks
- Conduct post-mortems after production issues
- React to production deficiencies by continuously implementing automation, self-healing, and real-time monitoring in production systems
- Work with Docker, Kubernetes, Azure cloud, Prometheus, Grafana, Java, Python and many other modern SaaS technologies
- Participate in projects involving people of many different disciplines: engineering, cloud, networking, CI/CD, project management, monitoring, alerting, etc.
- Stay informed of new technologies and innovate
- Work with less structured, more complex issues
- Serve as a resource to others
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or advanced degree in a related technical field
- 3+ years of IT experience
- 3+ years of DevOps experience
- 2+ years of experience with Infrastructure as Code (Terraform/Ansible/Chef/Puppet)
- 2+ years of experience with Docker and container orchestration (Kubernetes/OpenShift)
- 2+ years of experience with DevOps and CI/CD tools such as Git and Jenkins
- 2+ years of experience with Kafka support
- 2+ years of experience with monitoring tools and technologies (Splunk, Dynatrace, New Relic)

Preferred Qualifications
- Infrastructure engineering experience
- Cloud experience (Azure/AWS/GCP)
- Automation experience
- Good knowledge of SRE principles
- Hands-on scripting with one or more of: YAML, JSON, PowerShell, Bash or Python

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
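The SRE responsibilities above mention reducing toil through automation and building self-healing into production systems. One of the most common building blocks for that is retry with exponential backoff; a minimal Python sketch (the flaky task here is a stand-in for a real operation such as a health-check or redeploy call):

```python
import time

def run_with_retries(task, attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff between attempts --
    a basic 'self-healing' primitive; re-raises after the final failure."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

calls = {"n": 0}
def flaky():
    """Stand-in task that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky))  # succeeds on the third attempt
```

Production versions usually add jitter to the delay and cap total wait time so many clients retrying at once don't synchronize into a thundering herd.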
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Engineer
Experience: 4-9 Years
Location: Noida, Chennai & Pune
Skills: Python, PySpark, Snowflake & Redshift

Key Responsibilities

Migration & Modernization
- Lead the migration of data pipelines, models, and workloads from Redshift to Snowflake/Yellowbrick
- Design and implement landing, staging, and curated data zones to support scalable ingestion and consumption patterns
- Evaluate and recommend tools and frameworks for migration, including file formats, ingestion tools, and orchestration
- Design and build robust ETL/ELT pipelines using Python, PySpark, SQL, and orchestration tools (e.g., Airflow, dbt)
- Support both batch and streaming pipelines, with real-time processing via Kafka, Snowpipe, or Spark Structured Streaming
- Build modular, reusable, and testable pipeline components that handle high volume and ensure data integrity
- Define and implement data modeling strategies (star, snowflake, normalization/denormalization) for analytics and BI layers
- Implement strategies for data versioning, late-arriving data, and slowly changing dimensions
- Implement automated data validation and anomaly detection (using tools like dbt tests, Great Expectations, or custom checks)
- Build logging and alerting into pipelines to monitor SLA adherence, data freshness, and pipeline health
- Contribute to data governance initiatives including metadata tracking, data lineage, and access control

Required Skills & Experience
- 10+ years in data engineering roles with increasing responsibility
- Proven experience leading data migration or re-platforming projects
- Strong command of Python, SQL, and PySpark for data pipeline development
- Hands-on experience with modern data platforms like Snowflake, Redshift, Yellowbrick, or BigQuery
- Proficient in building streaming pipelines with tools like Kafka, Flink, or Snowpipe
- Deep understanding of data modeling, partitioning, indexing, and query optimization
- Expertise with ETL orchestration tools (e.g., Apache Airflow, Prefect, Dagster, or dbt)
- Comfortable working with large datasets and solving performance bottlenecks
- Experience in designing data validation frameworks and implementing DQ rules
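The responsibilities above call out slowly changing dimensions. A Type 2 SCD keeps history by closing the current row and appending a new version when a tracked attribute changes; a minimal in-memory sketch of that merge logic (the schema and field names are hypothetical, and a warehouse would express this as a MERGE statement rather than Python):

```python
from datetime import date

def apply_scd2(dimension, updates, today):
    """SCD Type 2: on an attribute change, close the open row (set end_date)
    and append a new open version; unknown keys become new rows."""
    current = {r["key"]: r for r in dimension if r["end_date"] is None}
    for upd in updates:
        row = current.get(upd["key"])
        if row is None:
            dimension.append({**upd, "start_date": today, "end_date": None})
        elif row["value"] != upd["value"]:
            row["end_date"] = today  # close the historical version
            dimension.append({**upd, "start_date": today, "end_date": None})
    return dimension

dim = [{"key": "C1", "value": "Pune", "start_date": date(2023, 1, 1), "end_date": None}]
apply_scd2(dim, [{"key": "C1", "value": "Noida"}], date(2024, 6, 1))
print(len(dim))  # old row closed, new current row appended
```

Unchanged updates are deliberately ignored, which is what keeps the history table from growing on every load.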
Posted 2 days ago
0 years
0 Lacs
Delhi, India
On-site
Job Title: Java Backend Developer

Job Description: We are looking for a skilled Java Backend Developer with strong expertise in Java 8, Core Java, multithreading, and Spring Boot. The ideal candidate should have experience building scalable microservices, working with Kafka or similar messaging queues, and strong problem-solving skills. You will be responsible for designing, developing, and maintaining backend services in a fast-paced, agile environment.

Key Skills: Java 8, Core Java, Multithreading, Spring Boot, Kafka, Messaging Queue, Microservices, Problem Solving
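The posting above emphasizes Kafka and messaging queues. Brokers like Kafka typically deliver at-least-once, so consumers are written to be idempotent; the pattern is sketched here in Python for brevity (the role itself targets Java, and the message shape is hypothetical):

```python
def process_messages(messages, seen_ids):
    """Idempotent consumer: skip message IDs already processed, so broker
    redelivery of the same message has no additional effect."""
    results = []
    for msg in messages:
        if msg["id"] in seen_ids:
            continue  # duplicate delivery -- safe to ignore
        seen_ids.add(msg["id"])
        results.append(msg["payload"].upper())  # stand-in for real work
    return results

batch = [{"id": 1, "payload": "a"}, {"id": 1, "payload": "a"}, {"id": 2, "payload": "b"}]
print(process_messages(batch, set()))  # the duplicate of id 1 is dropped
```

In a real service the seen-ID set would live in a durable store (or be replaced by an idempotent write keyed on the message ID) so deduplication survives restarts.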
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What We’ll Expect From You:
- 5+ years of experience automating security tooling, alerting, and remediation workflows, especially security event enrichment, reduction, and correlation
- Vulnerability management experience, focused on prioritizing known vulnerabilities for remediation at scale and classifying previously unknown vulnerabilities
- Strong understanding of Linux systems and common OS hardening practices
- Hands-on experience with cloud service providers. Amazon Web Services (AWS) is a core requirement, while familiarity with Google Cloud Platform (GCP) is a strong advantage. Focus areas include security issues and misconfigurations across cloud services, virtual machines, containers, and Kubernetes infrastructure
- Strong understanding of systems in a multi-tenant, cloud environment
- Clear written and verbal communication skills, including technical writing, presenting, coaching, and mentoring

Bonus: Experience in one or more of the following areas:
- Endpoint intrusion detection, response, and remediation
- Open source or commercial Configuration as Code software and methods (e.g., Chef, Salt, Ansible)
- Message bus architectures and data processing pipelines (e.g., Kafka)
- Log management (e.g., ELK, Splunk, BigQuery)
- Engineering and maintaining identity and access management systems (e.g., OpenLDAP, Okta, VPN or Zero Trust)
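The posting above highlights prioritizing known vulnerabilities for remediation at scale. A toy ranking, internet-exposed assets first and then by CVSS score, can illustrate the idea (the ordering rule is illustrative only, not a standard; real programs weigh exploit availability, asset criticality, and more):

```python
def prioritize(vulns):
    """Rank vulnerabilities for remediation: exposed assets sort ahead of
    internal ones, then higher CVSS first (True > False in tuple comparison)."""
    return sorted(vulns, key=lambda v: (v["exposed"], v["cvss"]), reverse=True)

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "exposed": False},
    {"cve": "CVE-B", "cvss": 7.5, "exposed": True},
]
print([v["cve"] for v in prioritize(vulns)])  # exposed CVE-B outranks CVE-A
```

Sorting on a tuple key is what makes exposure strictly dominate score here; swapping the tuple order would flip which factor wins.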
Posted 2 days ago