8.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Role & responsibilities:
- Build F5 Distributed Cloud data and management systems
- Design, develop, and enhance data, analytics, and AI/GenAI-powered services on the SaaS platform
- Design, develop, and enhance the telemetry and metrics pipeline and services
- Work closely with product, marketing, operations, platform, and customer support teams to create innovative solutions for cloud product delivery

Preferred candidate profile:
- Bachelor's degree in computer science or equivalent professional experience (7+ years)
- Proficiency in cloud-native development and programming languages such as Go, Java, and Python
- Experience with data/stream processing (e.g., Kafka, Pub/Sub, Dataflow, Vector, Spark, Flink) and with databases and data warehouses (ClickHouse, BigQuery, StarRocks, Elasticsearch, Redis)
- Experience with logs, metrics, and telemetry, including Prometheus and OpenTelemetry
- Experience with data system quality, monitoring, and performance
- Experience with SaaS multi-tenancy, onboarding, metering & billing, and monitoring & alerting
- Experience with container and orchestration technologies (Kubernetes) and microservices
- Experience with automation, cloud infrastructure, tooling, workloads, and modern CI/CD
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Vadodara, Gujarat
On-site
As a Senior Software Engineer (Java Developer, Spring Boot) at our organization, you will be responsible for designing, developing, and deploying high-performance, scalable Java-based microservices. You will have the opportunity to work on cutting-edge technologies and contribute to the development of cloud-native applications.

Your key skills should include a strong foundation in Core Java (8/11/17), object-oriented programming, multi-threading, and exception handling. Additionally, experience with Spring Boot, REST APIs, Spring Data JPA, Spring Security, and Spring Cloud is essential for this role. You will be expected to follow API-first and cloud-native design principles while implementing and maintaining REST APIs that conform to OpenAPI/Swagger standards. Analyzing code review reports, ensuring adherence to clean code principles, and driving the adoption of automated testing practices will be crucial aspects of your responsibilities.

Collaboration and mentoring are also essential parts of this role. You will work closely with DevOps, Product Owners, and QA teams to deliver features effectively. Mentoring junior developers, conducting code walkthroughs, and leading design discussions are key responsibilities that contribute to the growth of the team.

Preferred experience includes 5+ years of hands-on Java development, a deep understanding of microservices design patterns, exposure to cloud deployment models, proficiency with Git, Jenkins, SonarQube, and containerization, and experience working in Agile/Scrum teams. In addition to technical skills, behavioral traits such as an ownership-driven mindset, strong communication skills, and the ability to dive deep into technical problems and deliver solutions under tight deadlines are highly valued in our organization.

If you are passionate about building scalable backend systems, possess the required technical skills, and thrive in a collaborative work environment, we encourage you to apply for this exciting opportunity.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Lead Software Engineer at JPMorgan Chase within Consumer & Community Banking, you will have the opportunity to advance your career and push the limits of what's possible. You will play a crucial role in an agile team dedicated to developing, improving, and delivering reliable, cutting-edge technology solutions that are secure, stable, and scalable.

Your responsibilities will involve executing creative software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems. You will develop secure, high-quality production code, review and debug code written by others, and identify opportunities to eliminate or automate the remediation of recurring issues to enhance the overall operational stability of software applications and systems. Additionally, you will lead evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture. You will also lead communities of practice across Software Engineering to promote awareness and adoption of new and leading-edge technologies while fostering a team culture of diversity, opportunity, inclusion, and respect.

The ideal candidate should have formal training or certification in software engineering concepts and at least 5 years of applied experience. You should have experience with Java, Spring Boot, and Kafka, along with familiarity with microservices architecture and related technologies such as Docker, Kubernetes, and API Gateway. Experience with distributed tracing and monitoring tools such as Prometheus and Grafana is desired.

Hands-on practical experience in delivering system design, application development, testing, and operational stability is essential, and proficiency in automation and continuous delivery methods is a must. You should be proficient in all aspects of the Software Development Life Cycle and have an advanced understanding of agile practices such as CI/CD, application resiliency, and security. Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile) is required, along with practical cloud-native experience.

Preferred qualifications include experience with AI technologies, including machine learning frameworks such as TensorFlow or PyTorch, as well as in-depth knowledge of the financial services industry and its IT systems. Proficiency in Python and AI would also be advantageous for this role.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Software DevOps Engineer, you will lead the design, implementation, and evolution of telemetry pipelines and DevOps automation that enable next-generation observability for distributed systems. You will blend a deep understanding of OpenTelemetry architecture with strong DevOps practices to build a reliable, high-performance, self-service observability platform across hybrid cloud environments (AWS & Azure). Your mission is to empower engineering teams with actionable insights through rich metrics, logs, and traces, while championing automation and innovation at every layer.

You will be responsible for:
- Observability Strategy & Implementation: Architect and manage scalable observability solutions using OpenTelemetry (OTel), encompassing Collectors, instrumentation, export pipelines, and processors & extensions for advanced enrichment and routing.
- DevOps Automation & Platform Reliability: Own the CI/CD experience using GitLab Pipelines, integrating infrastructure automation with Terraform, Docker, and scripting in Bash and Python. Build resilient and reusable infrastructure-as-code modules across AWS and Azure ecosystems.
- Cloud-Native Enablement: Develop observability blueprints for cloud-native apps across AWS (ECS, EC2, VPC, IAM, CloudWatch) and Azure (AKS, App Services, Monitor). Optimize cost and performance of telemetry pipelines while ensuring SLA/SLO adherence for observability services.
- Monitoring, Dashboards, and Alerting: Build and maintain intuitive, role-based dashboards in Grafana and New Relic, enabling real-time visibility into service health, business KPIs, and SLOs. Implement alerting best practices integrated with incident management systems.
- Innovation & Technical Leadership: Drive cross-team observability initiatives that reduce MTTR and elevate engineering velocity. Champion innovation projects including self-service observability onboarding, log/metric reduction strategies, AI-assisted root cause detection, and more. Mentor engineering teams on instrumentation, telemetry standards, and operational excellence.

Requirements:
- 6+ years of experience in DevOps, Site Reliability Engineering, or observability roles
- Deep expertise with OpenTelemetry, including Collector configurations, receivers/exporters (OTLP, HTTP, Prometheus, Loki), and semantic conventions
- Proficiency in GitLab CI/CD, Terraform, Docker, and scripting (Python, Bash, Go)
- Strong hands-on experience with AWS and Azure services, cloud automation, and cost optimization
- Proficiency with observability backends: Grafana, New Relic, Prometheus, Loki, or equivalent APM/log platforms
- Passion for building automated, resilient, and scalable telemetry pipelines
- Excellent documentation and communication skills to drive adoption and influence engineering culture

Nice to Have:
- Certifications in AWS, Azure, or Terraform
- Experience with OpenTelemetry SDKs in Go, Java, or Node.js
- Familiarity with SLO management, error budgets, and observability-as-code approaches
- Exposure to event streaming (Kafka, RabbitMQ), Elasticsearch, Vault, Consul
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You are a skilled UI Engineer with 2 years of experience, specializing in React.js. As part of our front-end team in Indore, you will be instrumental in developing user-friendly web applications that offer exceptional user experiences.

Your primary responsibilities will include developing and maintaining scalable front-end components using React.js. You will collaborate closely with UX/UI designers and backend developers to implement and enhance user interfaces. It will be your responsibility to optimize applications for speed and scalability, ensuring cross-browser compatibility and responsiveness across devices. Writing clean, well-documented code, participating in code reviews and unit testing, and staying abreast of the latest front-end development trends are also crucial aspects of your role.

To excel in this position, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with at least 2 years of professional experience in front-end development. Proficiency in TypeScript/JavaScript, React.js, HTML5, and CSS3 is essential. Experience with Grafana, Prometheus, state management libraries, RESTful APIs, asynchronous request handling, version control systems like Git, and build tools such as Webpack, Babel, Rollup, or Vite will be advantageous. Familiarity with testing frameworks such as Jest and React Testing Library, experience with time-series databases, and strong problem-solving skills are desired. Effective communication and collaboration skills are also necessary for this role.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Chandigarh
On-site
You will be part of our team as a Junior DevOps Engineer, contributing to building, maintaining, and optimizing our cloud-native infrastructure. Your role will involve collaborating with senior DevOps engineers and development teams to automate deployments, monitor systems, and ensure the high availability, scalability, and security of our applications.

Your key responsibilities will include managing and optimizing Kubernetes (EKS) clusters, Docker containers, and Helm charts for deployments. You will support CI/CD pipelines using tools like Jenkins, Bitbucket, and GitHub Actions, and help deploy and manage applications using ArgoCD for GitOps workflows. Monitoring and troubleshooting infrastructure will be an essential part of your role, utilizing tools such as Grafana, Prometheus, Loki, and OpenTelemetry. Working with AWS services like EKS, ECR, ALB, EC2, VPC, S3, and CloudFront will also be crucial to ensuring reliable cloud infrastructure. Automating infrastructure provisioning using IaC tools like Terraform and Ansible will be another key responsibility. Additionally, you will assist in maintaining Docker image registries and collaborate with developers to enhance observability, logging, and alerting while adhering to security best practices for cloud and containerized environments.

To excel in this role, you should have a basic understanding of Kubernetes, Docker, and Helm, along with familiarity with AWS cloud services such as EKS, EC2, S3, VPC, and ALB. Exposure to CI/CD tools such as Jenkins and GitHub/Bitbucket pipelines, basic scripting skills (Bash, Python, or Groovy), and knowledge of observability tools like Prometheus, Grafana, and Loki will be beneficial. An understanding of GitOps (ArgoCD) and infrastructure as code (IaC), experience with Terraform/CloudFormation, and knowledge of Linux administration and networking are also required. This is a full-time position that requires you to work in person.

If you are interested in this opportunity, please feel free to reach out to us at +91 6284554276.
Posted 1 week ago
11.0 - 15.0 years
0 Lacs
Karnataka
On-site
As an AI Research Scientist, your role will involve developing the overarching technical vision for AI systems that cater to both current and future business needs. You will be responsible for architecting end-to-end AI applications and ensuring seamless integration with legacy systems, enterprise data platforms, and microservices. Collaborating closely with business analysts and domain experts, you will translate business objectives into technical requirements and AI-driven solutions. In partnership with product management, you will design agile project roadmaps that align technical strategy with market needs. Additionally, you will coordinate with data engineering teams to guarantee smooth data flows, quality, and governance across data sources.

Your responsibilities will also include leading the design of reference architectures, roadmaps, and best practices for AI applications. You will evaluate emerging technologies and methodologies, recommending innovations that can be integrated into the organizational strategy. Identifying and defining system components such as data ingestion pipelines, model training environments, CI/CD frameworks, and monitoring systems will be crucial aspects of your role. Leveraging containerization (Docker, Kubernetes) and cloud services, you will streamline the deployment and scaling of AI systems. Implementing robust versioning, rollback, and monitoring mechanisms to ensure system stability, reliability, and performance will also be part of your duties.

Project management will be a key component of the role: you will oversee the planning, execution, and delivery of AI and ML applications within budget and timeline constraints, owning the entire lifecycle of AI application development, from conceptualization and design through development, testing, deployment, and post-production optimization. Enforcing security best practices throughout each phase of development, with a focus on data privacy, user security, and risk mitigation, will be essential. Furthermore, mentoring engineering teams and fostering a culture of continuous learning will play a significant role in your responsibilities.

In terms of mandatory technical and functional skills, you should have a strong background in working with or developing agents using LangGraph, AutoGen, and CrewAI. Proficiency in Python, along with robust knowledge of machine learning libraries such as TensorFlow, PyTorch, and Keras, is required. You should also have proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions. Hands-on skills with containerization (Docker), orchestration frameworks (Kubernetes), and related DevOps tools such as Jenkins and GitLab CI/CD are necessary, as is experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments. Additionally, proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) and expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture are vital for this role. Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) is also required to support robust inter-system communication.

Preferred technical and functional skills include experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance. Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI API integrations, and other domain-specific tools is advantageous. Experience with large-scale deployment of ML projects, along with a good understanding of DevOps/MLOps/LLMOps and the training and fine-tuning of Large Language Models (LLMs) such as PaLM 2, GPT-4, and LLaMA, is beneficial.

Key behavioral attributes for this role include the ability to mentor junior developers, take ownership of project deliverables, contribute to risk mitigation, and understand business objectives and functions to support data needs. If you have a Bachelor's or Master's degree in Computer Science, certifications in cloud technologies (AWS, Azure, GCP), TOGAF certification (good to have), and 11 to 14 years of relevant work experience, this role might be the perfect fit for you.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Andhra Pradesh
On-site
You are a talented Full Stack Developer with a solid background in Laravel, AWS, and DevOps. Your role involves designing, developing, deploying, and maintaining cutting-edge web applications with a focus on performance, scalability, and reliability. You will work on Laravel development, AWS management, and DevOps tasks to ensure seamless CI/CD operations.

In Laravel development, you will design, develop, and maintain web applications using Laravel, optimize applications for speed and scalability, integrate back-end services, troubleshoot and debug existing applications, and collaborate with front-end developers for seamless integration.

For AWS management, you will manage and deploy web applications on AWS infrastructure, utilize various AWS services, implement backup, recovery, and security policies, and optimize services for cost and performance; experience with Infrastructure as Code tools like AWS CloudFormation or Terraform is expected.

In DevOps, your responsibilities include designing and implementing CI/CD pipelines, maintaining infrastructure automation, monitoring server and application performance, developing configuration management tooling, implementing logging and monitoring solutions, and collaborating with development and QA teams on code deployments and releases.

Requirements for this role include a Bachelor's degree in Computer Science or a related field, 4+ years of Laravel experience, 2+ years of AWS experience, 4+ years of DevOps experience, proficiency with version control systems, strong knowledge of database systems, experience with containerization tools, familiarity with agile methodologies, problem-solving skills, attention to detail, and the ability to work in a fast-paced environment. Preferred qualifications include AWS certifications, experience with serverless architecture and microservices, knowledge of front-end technologies, familiarity with monitoring tools, and an understanding of security best practices.

Soft skills such as communication, collaboration, independence, teamwork, and analytical and problem-solving abilities are also essential. This position offers a competitive salary, opportunities to work with the latest technologies, professional development opportunities, health insurance, paid time off, and other benefits in a collaborative and innovative work culture.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Java Microservices Lead with 4+ years of experience, available to join immediately and based in Pune, you will play a crucial role in the end-to-end architecture, development, and deployment of enterprise Java microservices-based applications. Your primary responsibilities will include collaborating with cross-functional teams to architect, design, and develop solutions using core Java, Spring Boot, Spring Cloud, and AWS API Gateway. You will also lead and mentor a team of developers, participate in the entire software development life cycle, and drive the adoption of microservices patterns and API design.

Your expertise in Java, Spring Boot, AWS API Gateway, and microservices architecture will be essential to delivering high-quality code that follows best practices and coding standards. Hands-on experience with containerization technologies like Docker, orchestration platforms such as Kubernetes, and deployment on cloud services like AWS, Azure, or Google Cloud is highly valuable. Additionally, familiarity with relational and NoSQL databases, Agile methodologies, version control systems, and software engineering best practices will contribute to the success of the projects.

Furthermore, strong problem-solving and analytical skills, attention to detail, and the ability to work both independently and collaboratively in a fast-paced environment will be key assets in troubleshooting, debugging, and resolving issues across distributed systems. Excellent communication and interpersonal skills will help foster a culture of collaboration, continuous improvement, and technical excellence within the team. Staying up to date with industry trends and introducing innovative solutions to improve application development will be encouraged.

In summary, as a Java Microservices Lead, you will be at the forefront of designing and developing scalable, cloud-native solutions, optimizing application performance and scalability, and establishing CI/CD pipelines. Your technical skills in Java, Spring Boot, microservices architecture, cloud platforms, databases, CI/CD, DevOps, and monitoring tools will be crucial to the success of the projects and the team.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Qualcomm India Private Limited is looking for a highly skilled and experienced MLOps Engineer to join their team and contribute to the development and maintenance of their ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will architect, deploy, and optimize the ML and data platform that supports training machine learning models on NVIDIA DGX clusters and the Kubernetes platform. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial for ensuring the smooth operation and scalability of the ML infrastructure. You will collaborate with cross-functional teams, including data scientists, software engineers, and infrastructure specialists, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Your responsibilities will include architecting, developing, and maintaining the ML platform; designing and implementing scalable infrastructure solutions for NVIDIA clusters on premises and on AWS Cloud; collaborating with data scientists and software engineers to define requirements; optimizing platform performance and scalability; monitoring system performance; implementing CI/CD pipelines; maintaining the monitoring stack using Prometheus and Grafana; managing AWS services; implementing logging and monitoring solutions; staying updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies; and proposing enhancements to the ML platform.

Qualcomm is looking for candidates with:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters
- Strong expertise in configuring and optimizing NVIDIA DGX clusters
- Proficiency with the Kubernetes platform and related technologies
- Solid programming skills in languages such as Python and Go, and experience with relevant ML frameworks
- An in-depth understanding of distributed computing and GPU acceleration techniques
- Familiarity with containerization technologies and orchestration tools
- Experience with CI/CD pipelines and automation tools for ML workflows
- Experience with AWS services and monitoring tools
- Strong problem-solving skills and excellent communication and collaboration skills

Qualcomm is an equal opportunity employer and is committed to providing reasonable accommodations to support individuals with disabilities during the hiring process. If you are interested in this role or require more information, please contact Qualcomm Careers.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the role and key responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 5+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the role and key responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into our platform in a secure and scalable way and enabling high-performance data science pipelines.
- Work with application teams to ensure a delightful user experience that helps the user solve complex real-world problems that have yet to be solved before.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Work with databases or storage systems such as PostgreSQL, Elasticsearch, or S3-API-compatible blob stores.
- Help shape the culture and methodology of a rapidly growing company.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
We are looking for an experienced DevOps Architect to spearhead the design, implementation, and management of scalable, secure, and highly available infrastructure. As the ideal candidate, you should possess in-depth expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across various cloud environments. This role requires strong leadership skills and the ability to mentor team members effectively.

Your responsibilities will include leading and overseeing the DevOps team to ensure the reliability of infrastructure and automated deployment processes. You will be tasked with designing, implementing, and maintaining highly available, scalable, and secure cloud infrastructure on platforms such as AWS, Azure, and GCP. Developing and optimizing CI/CD pipelines for multiple applications and environments will be a key focus, along with driving Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Monitoring, logging, and alerting solutions will fall under your purview to ensure system health and performance.

Collaboration with Development, QA, and Security teams to integrate DevOps best practices throughout the SDLC is essential. You will also lead incident management and root cause analysis for production issues, ensuring robust security practices for infrastructure and pipelines. Guiding and mentoring team members to foster a culture of continuous improvement and technical excellence will be crucial. Additionally, evaluating and recommending new tools, technologies, and processes to enhance operational efficiency will be part of your role.

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field; Master's degree preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA).
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience in implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Conditions:
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: Light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.

Values: Our values at AOT guide how we work, collaborate, and grow as a team. Every role is expected to embody and promote values such as innovation, integrity, ownership, agility, collaboration, and empowerment.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You have 5+ years of overall experience in Cloud Operations, with a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and at least 3 years of experience in Kubernetes administration. It is mandatory to have GCP Certified Professional certification. In this role, you will be responsible for managing and monitoring GCP infrastructure resources to ensure optimal performance, availability, and security. You will also administer Kubernetes clusters, handle deployment, scaling, upgrades, patching, and troubleshooting. Automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar will be implemented and maintained by you. Your key responsibilities will include responding to incidents, performing root cause analysis, and resolving issues within SLAs. Configuring logging, monitoring, and alerting solutions across GCP and Kubernetes environments will also be part of your duties. Supporting CI/CD pipelines, integrating Kubernetes deployments with DevOps processes, and maintaining detailed documentation of processes, configurations, and runbooks are critical aspects of this role. Collaboration with Development, Security, and Architecture teams to ensure compliance and best practices is essential. You will participate in an on-call rotation and respond promptly to critical alerts. The required skills and qualifications for this position include being a GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent) with a strong working knowledge of GCP services such as Compute Engine, GKE, Cloud Storage, IAM, VPC, and Cloud Monitoring. 
Required skills also include:
- Solid experience in Kubernetes cluster administration.
- Proficiency with Infrastructure as Code tools like Terraform.
- Knowledge of containerization concepts and tools like Docker.
- Experience in monitoring and observability with tools like Prometheus, Grafana, and Stackdriver.
- Familiarity with incident management and ITIL processes.
- Ability to work in 24x7 operations with rotating shifts.
- Strong troubleshooting and problem-solving skills.

Preferred skills that would be nice to have for this role include experience supporting multi-cloud environments, scripting skills in Python, Bash, or Go, exposure to other cloud platforms like AWS and Azure, and familiarity with security controls and compliance frameworks.
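Routine cluster health checks of the kind this role describes often reduce to scanning pod status for anything not Running or restarting too often. A hypothetical sketch in Python (the dictionaries stand in for parsed `kubectl get pods` output; the field names and thresholds are illustrative, not the real Kubernetes API):

```python
# Stand-in for parsed `kubectl get pods` output (fields are illustrative).
PODS = [
    {"name": "ingest-0", "phase": "Running", "restarts": 0},
    {"name": "ingest-1", "phase": "CrashLoopBackOff", "restarts": 12},
    {"name": "etl-2", "phase": "Running", "restarts": 7},
]

def unhealthy(pods, max_restarts: int = 5):
    """Flag pods that are not Running or have restarted more than allowed."""
    return [p["name"] for p in pods
            if p["phase"] != "Running" or p["restarts"] > max_restarts]

flagged = unhealthy(PODS)  # ["ingest-1", "etl-2"]
```

In practice this logic would live in an alerting rule (e.g., Prometheus `kube_pod_container_status_restarts_total`) rather than a script, but the triage criterion is the same.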
Posted 1 week ago
3.0 - 8.0 years
10 - 16 Lacs
Bengaluru
Work from Office
Develop and manage job plans, schedules, and work packages for routine maintenance activities in LNG and refining assets. Coordinate with operations, engineering, and reliability teams to ensure safe and timely execution.

Required candidate profile: Engineering professionals with 3–7 years of experience in maintenance planning/scheduling in oil & gas. Proficient in CMMS tools (JDE), Primavera P6, safety compliance, and resource coordination.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
andhra pradesh
On-site
The role of Technical Architect for IoT Platform requires a highly skilled individual with over 10 years of experience, who possesses expertise in Java Spring Boot, React.js, IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will be responsible for designing scalable, secure, and high-performance IoT solutions. Your role will involve leading full-stack teams and collaborating with product, infrastructure, and data teams to ensure the successful implementation of IoT projects.

Your key responsibilities will include architecture and design tasks such as implementing scalable and secure IoT platform architecture, defining and maintaining architecture blueprints and technical documentation, leading technical decision-making, and ensuring adherence to best practices and coding standards. You will also be involved in architecting microservices-based solutions using Spring Boot, integrating them with React-based front-ends, and defining data flow, event processing pipelines, and device communication protocols.

In terms of IoT domain expertise, you will be required to architect solutions for real-time sensor data ingestion, processing, and storage, work closely with hardware and firmware teams for device-cloud communication, support multi-tenant, multi-protocol device integration, and guide the design of edge computing, telemetry, alerting, and digital twin models.

Your role will also involve DevOps and infrastructure-related tasks such as defining CI/CD pipelines, managing containerization & orchestration, driving infrastructure automation, ensuring platform monitoring, logging, and observability, and enabling auto-scaling, load balancing, and zero-downtime deployments.
As a Technical Architect, you will be expected to demonstrate leadership qualities by collaborating with product managers and business stakeholders, mentoring and leading a team of developers and engineers, conducting code and architecture reviews, setting goals and targets, organizing features and sprints, and providing coaching and professional development to team members.

Your technical skills and experience should include proficiency in backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Additionally, experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is required.

Other must-have qualifications for this role include hands-on IoT project experience, designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team leadership skills. It would be beneficial if you have experience with Edge Computing frameworks, AI/ML model integration, industrial protocols, digital twin concepts, and relevant certifications in AWS/GCP, Kubernetes, or Spring. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is also required.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
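The listing's telemetry and alerting responsibilities do not specify an approach; one common building block is smoothing sensor readings over a rolling window before comparing against a limit, so a single noisy sample does not page anyone. A minimal sketch under those assumptions (class name, window size, and thresholds are all hypothetical; shown in Python rather than the role's Java stack):

```python
from collections import deque
from statistics import mean

class ThresholdAlerter:
    """Raise an alert when the rolling mean of a sensor's readings
    exceeds a configured threshold (window and limits are illustrative)."""

    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.readings = deque(maxlen=window)  # old samples fall off automatically

    def ingest(self, value: float) -> bool:
        """Add one reading; return True if the rolling mean breaches the threshold."""
        self.readings.append(value)
        return mean(self.readings) > self.threshold

alerter = ThresholdAlerter(threshold=80.0, window=3)
alerts = [alerter.ingest(v) for v in (70.0, 75.0, 85.0, 95.0)]
# rolling means: 70, 72.5, ~76.7, 85 -> only the last reading trips the alert
```

In a production pipeline this evaluation would typically run inside the stream processor (Kafka consumer, Flink job) with per-device state, but the windowed-threshold idea is the same.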
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Lead Engineer, DevOps at Toyota Connected India, you will have the opportunity to work in a collaborative and fast-paced environment focused on creating infotainment solutions on embedded and cloud platforms. You will be part of a team that values continual improvement, innovation, and delivering exceptional value to customers.

Your role will involve being hands-on with cloud platforms like AWS and Google Cloud Platform, utilizing containerization and Kubernetes for container orchestration, and working with infrastructure automation and configuration management tools such as Terraform, CloudFormation, and Ansible. You will be expected to have strong proficiency in scripting languages like Python, Bash, or Go, experience with monitoring and logging solutions including Prometheus, Grafana, ELK Stack, or Datadog, and knowledge of networking concepts, security best practices, and infrastructure monitoring. Additionally, your responsibilities will include working with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.

At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a flexible dress code policy. You will have the opportunity to contribute to the development of products that enhance the safety and convenience of millions of customers. Moreover, you will be working in a cool new office space and enjoying other great benefits.

Toyota Connected's core values are EPIC - Empathetic, Passionate, Innovative, and Collaborative. You will be encouraged to make decisions empathetically, strive to build something great, experiment with innovative ideas, and work collaboratively with your teammates to achieve success. Join us at Toyota Connected to be part of a team that is reimagining mobility for today and the future!
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Team Lead in DevOps with 6+ years of experience, you will be responsible for managing, mentoring, and developing a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI. Additionally, you will handle environment-specific configurations for staging, production, and QA. Your responsibilities will include containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm. You will implement and maintain Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation. Monitoring application health and infrastructure using tools such as Prometheus, Grafana, ELK, Datadog, and ensuring systems are secure, resilient, and compliant with industry standards will also be part of your role. Optimizing cloud costs and infrastructure performance, collaborating with development, QA, and IT support teams, and troubleshooting performance, deployment, or scaling issues across tech stacks are essential tasks. To excel in this role, you must have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with hands-on experience. You should have a minimum of 2 years of experience managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential. In addition, you should have experience with Git, SSH, reverse proxies (Nginx), and load balancers. 
Good communication skills and exposure to managing clients are crucial. Preferred certifications that are highly valued include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Additionally, experience with Magento Cloud DevOps or Odoo deployment is considered a bonus.

Nice-to-have skills include experience with multi-region failover, HA clusters, and RPO/RTO-based design, familiarity with MySQL/PostgreSQL optimization, and knowledge of Redis, RabbitMQ, or Celery. Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower, as well as knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices, is also advantageous for this role.
Posted 1 week ago
2.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Manager, you will be responsible for leading our DevOps efforts across a suite of modern and legacy applications, including Odoo (Python), Magento (PHP), Node.js, and other web-based platforms. Your main duties will include managing, mentoring, and growing a team of DevOps engineers, overseeing the deployment and maintenance of various applications, designing and managing CI/CD pipelines, handling environment-specific configurations, containerizing applications, implementing and maintaining Infrastructure as Code, monitoring application health and infrastructure, ensuring system security and compliance, optimizing cloud cost and performance, collaborating with cross-functional teams, and troubleshooting technical issues. To be successful in this role, you should have at least 8 years of experience in DevOps/Cloud/System Engineering roles with real hands-on experience, including 2+ years of experience managing or leading DevOps teams. You should have experience supporting and deploying applications like Odoo, Magento, and Node.js, along with strong scripting skills in Bash, Python, PHP CLI, or Node CLI. Additionally, you should have a deep understanding of Linux system administration, networking fundamentals, AWS/Azure/GCP infrastructure, Git, SSH, reverse proxies, and load balancers. Good communication skills and client management exposure are also essential for this position. Preferred certifications for this role include AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills that would be beneficial for this position include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, VAPT 2.0, WCAG compliance, and infrastructure security best practices. 
In summary, as a DevOps Manager, you will play a crucial role in leading our DevOps efforts and ensuring the smooth deployment, maintenance, and optimization of various applications while collaborating with different teams and implementing best practices in infrastructure management and security.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a valuable addition to our technology team. Your primary responsibility will be to design, implement, and maintain scalable and secure cloud infrastructure that supports our mobile and web applications. Your role is crucial in ensuring system reliability, performance, and cost efficiency across different environments. You will work with Google Cloud Platform (GCP) to design, configure, and manage cloud infrastructure. Your tasks will include implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. Additionally, you will be developing and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Real-time monitoring, crash alerting, logging systems, and health dashboards will be set up by you using industry-leading tools. Managing and optimizing Redis, job queues, caching layers, and backend request loads will also be part of your responsibilities. You will automate data backups, enforce secure access protocols, and implement disaster recovery systems. Collaborating with Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load is crucial. Infrastructure security audits will be conducted by you to recommend best practices for preventing downtime and security breaches. Monitoring and optimizing cloud usage and billing to ensure a cost-effective and scalable architecture will also fall under your purview. You should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Proficiency with Docker, Kubernetes, NGINX, and load balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI is required. Familiarity with monitoring tools such as Grafana, Prometheus, NewRelic, or Datadog is expected. 
A deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms, is necessary. Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure is beneficial. Working knowledge of Redis, Socket.IO, and message queuing systems like RabbitMQ or Kafka will be advantageous.

Preferred qualifications include a Google Cloud Professional certification or equivalent, experience in optimizing systems for high-concurrency, low-latency environments, and familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible.
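The rate limiting mentioned above is commonly implemented as a token bucket: requests spend tokens, tokens refill at a fixed rate, and a burst allowance is capped by the bucket size. A minimal stdlib sketch (the rate and capacity values are illustrative, and real deployments would keep this state in Redis or at the load balancer rather than in-process):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: sustain `rate` requests per second
    with bursts up to `capacity` (numbers here are illustrative)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should reject the request or trigger a fallback

bucket = TokenBucket(rate=5.0, capacity=2.0)
burst = [bucket.allow() for _ in range(3)]  # third back-to-back call exceeds the burst size
```

The same bucket object doubles as a fallback trigger: when `allow()` returns False, the caller can serve a cached response or a degraded payload instead of erroring out.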
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a DevOps Engineer, you will define and implement DevOps strategies that are closely aligned with the business goals. Your primary responsibility will be to lead cross-functional teams in order to enhance collaboration among development, QA, and operations teams. This involves designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate build, test, and deployment processes, thereby accelerating release cycles.

Furthermore, you will be tasked with implementing and managing Infrastructure as Code using tools such as Terraform, CloudFormation, and Ansible, among others. Your expertise will be crucial in managing cloud platforms like AWS, Azure, or Google Cloud. It will also be your responsibility to monitor and mitigate security risks in CI/CD pipelines and infrastructure, as well as setting up observability tools like Prometheus, Grafana, Splunk, Datadog, etc.

In addition, you will play a key role in implementing proactive alerting and incident response processes. This will involve leading incident response efforts and conducting root cause analysis (RCA) when necessary. Documenting DevOps processes, best practices, and system architectures will also be part of your routine tasks.

As a DevOps Engineer, you will continuously evaluate and implement new DevOps tools and technologies to enhance efficiency and productivity. Moreover, you will be expected to foster a culture of learning and knowledge sharing within the organization, promoting collaborative growth and development among team members.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Linux & Cloud Administrator at SAP, you will play a key role in supporting the seamless 24/7 operations of our cloud platform across Azure, AWS, Google Cloud, and SAP data centers. Your primary responsibilities will involve ensuring the smooth operation of business-critical SAP systems in the cloud, leveraging technologies such as Prometheus, Grafana, Kubernetes, Ansible, ArgoCD, AWS, GitHub Actions, and more. Your tasks will include network troubleshooting, architecture design, cluster setup and configuration, and the development of automation solutions to deliver top-notch cloud services for SAP applications to enterprise customers globally. You will be part of the ECS Delivery XDU team, responsible for the operation of SAP Enterprise Cloud Services (ECS) Delivery, a managed services provider offering SAP applications through the HANA Enterprise Cloud. At SAP, we are committed to building a workplace culture that values collaboration, embraces diversity, and is focused on creating a better world. Our company ethos revolves around a shared passion for helping the world run better, with a strong emphasis on learning and development, recognizing individual contributions, and offering a range of benefit options for our employees. SAP is a global leader in enterprise application software, with a mission to help customers worldwide work more efficiently and leverage business insights effectively. With a cloud-based approach and a dedication to innovation, SAP serves millions of users across various industries, driving solutions for ERP, database, analytics, intelligent technologies, and experience management. As a purpose-driven and future-focused organization, SAP fosters a highly collaborative team environment and prioritizes personal development, ensuring that every challenge is met with the right solution. 
At SAP, we believe in the power of inclusion and diversity, supporting the well-being of our employees and offering flexible working models to enable everyone to perform at their best. We recognize the unique strengths that each individual brings to our company, investing in our workforce to unleash their full potential and create a more equitable world. As an equal opportunity workplace, SAP is committed to providing accessibility accommodations to applicants with disabilities and promoting a culture of equality and empowerment.

If you are interested in joining our team at SAP and require accommodation during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com. We are dedicated to fostering an environment where all talents are valued and every individual has the opportunity to thrive.
Posted 1 week ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a proactive and detail-oriented L1 DataOps Monitoring Engineer to support our data pipeline operations. This role involves monitoring, identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and improve reliability.

Key Responsibilities:
- Monitor data pipelines, jobs, and workflows using tools like Airflow, Control-M, or custom monitoring dashboards.
- Acknowledge and investigate alerts from monitoring tools (Datadog, Prometheus, Grafana, etc.).
- Perform first-level triage for job failures, delays, and anomalies.
- Log incidents and escalate to L2/L3 teams as per SOP.
- Maintain shift handover logs and daily operational reports.
- Perform routine system checks and health monitoring of data environments.
- Follow predefined runbooks to troubleshoot known issues.
- Coordinate with application, infrastructure, and support teams for timely resolution.
- Participate in shift rotations including nights/weekends/public holidays.

Skills and Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
- 0–2 years of experience in IT support, monitoring, or NOC environments.
- Basic understanding of data pipelines and ETL/ELT processes.
- Familiarity with monitoring tools (Datadog, Grafana, CloudWatch, etc.).
- Exposure to job schedulers (Airflow, Control-M, Autosys) is a plus.
- Good verbal and written communication skills.
- Ability to remain calm and effective under pressure.
- Willingness to work in a 24x7 rotational shift model.

Good to Have (Optional):
- Knowledge of cloud platforms (AWS/GCP/Azure)
- Basic SQL or scripting knowledge (Shell/Python)
- ITIL awareness or ticketing systems experience (e.g., ServiceNow, JIRA)
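Runbook-driven first-level triage of the kind this role describes amounts to mapping known failure signatures to predefined actions, with a default escalation for anything unrecognized. A purely illustrative sketch (the failure types, actions, and severity labels are hypothetical, not from any real SOP):

```python
# Hypothetical runbook: known failure signatures mapped to L1 actions;
# anything unrecognized falls through to the default escalation path.
RUNBOOK = {
    "connection_timeout": "retry job once, then escalate if it fails again",
    "disk_full": "raise P2 incident and page the infrastructure team",
    "schema_mismatch": "escalate to L2 data engineering with the job log",
}

def triage(failure_type: str) -> str:
    """Return the L1 action for a known failure, else the default escalation."""
    return RUNBOOK.get(failure_type, "log incident and escalate to L2")

action = triage("disk_full")
fallback = triage("unknown_error")  # not in the runbook, so escalate
```

Keeping the runbook as data rather than branching logic makes it easy to audit during shift handover and to extend as new failure modes get documented.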
Posted 1 week ago
3.0 - 7.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Job Title: SDE-2/3
Location: Hyderabad
Experience range: 0-1 Yr
Notice Period: Immediate joiner

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Java and Spring Boot.
- Collaborate with product managers, architects, and QA teams to deliver robust banking solutions.
- Build microservices for transaction processing, customer onboarding, and risk management.
- Integrate with internal and third-party APIs for payments, KYC, and credit scoring.
- Ensure code quality through unit testing, code reviews, and adherence to secure coding practices.
- Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in Java, Spring Boot, and RESTful API development.
- Solid understanding of data structures, algorithms, and object-oriented programming.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).

Preferred Skills:
- Exposure to financial systems, banking APIs, or fintech platforms.
- Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS).
- Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
- Strong problem-solving skills and ability to work in a fast-paced environment.

Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study.
Good-to-have certifications: Java Certified Developer; AWS Developer or Solutions Architect.
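The security standards listed above include JWT, whose structure is worth seeing concretely: a base64url-encoded header and payload joined by dots and signed (here with HS256). A hand-rolled, stdlib-only sketch to show the shape of the token (shown in Python rather than the role's Java; a vetted library such as jjwt or PyJWT should do this in production):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Compose an HS256 JWT by hand to illustrate its header.payload.signature
    structure (demonstration only, not a production implementation)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

token = sign_jwt({"sub": "cust-42", "scope": "payments"}, b"demo-secret")
# three dot-separated base64url segments: header.payload.signature
```

Verification is the mirror image: recompute the HMAC over the first two segments with the shared secret and compare it against the third in constant time (`hmac.compare_digest`).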
Posted 1 week ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Title: Associate III - Software Engineering
Location: Hyderabad
Experience range: 0-1 Yr
Notice Period: Immediate joiner

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Java and Spring Boot.
- Collaborate with product managers, architects, and QA teams to deliver robust banking solutions.
- Build microservices for transaction processing, customer onboarding, and risk management.
- Integrate with internal and third-party APIs for payments, KYC, and credit scoring.
- Ensure code quality through unit testing, code reviews, and adherence to secure coding practices.
- Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in Java, Spring Boot, and RESTful API development.
- Solid understanding of data structures, algorithms, and object-oriented programming.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).

Preferred Skills:
- Exposure to financial systems, banking APIs, or fintech platforms.
- Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS).
- Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
- Strong problem-solving skills and ability to work in a fast-paced environment.

Education background: Bachelor's degree in Computer Science, Information Technology, or a related field of study.
Good-to-have certifications: Java Certified Developer; AWS Developer or Solutions Architect.
Posted 1 week ago