2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Qualcomm India Private Limited is looking for a highly skilled and experienced MLOps Engineer to join their team and contribute to the development and maintenance of their ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will architect, deploy, and optimize the ML and data platform that supports training of machine learning models on NVIDIA DGX clusters and the Kubernetes platform. You will collaborate with cross-functional teams, including data scientists, software engineers, and infrastructure specialists, and your expertise in MLOps, DevOps, GPU clusters, and AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial for the smooth operation and scalability of the ML infrastructure and for efficient training and deployment of ML models. Your responsibilities will include:
- Architecting, developing, and maintaining the ML platform.
- Designing and implementing scalable infrastructure solutions for NVIDIA clusters on premises and on AWS Cloud.
- Collaborating with data scientists and software engineers to define requirements.
- Optimizing platform performance and scalability, and monitoring system performance.
- Implementing CI/CD pipelines and maintaining the monitoring stack using Prometheus and Grafana.
- Managing AWS services and implementing logging and monitoring solutions.
- Staying current with advancements in MLOps, distributed computing, and GPU acceleration technologies, and proposing enhancements to the ML platform.
Qualcomm is looking for candidates with:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters.
- Proficiency with the Kubernetes platform and related technologies.
- Solid programming skills in languages such as Python and Go, and experience with relevant ML frameworks.
- An in-depth understanding of distributed computing and GPU acceleration techniques.
- Familiarity with containerization technologies and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows.
- Experience with AWS services and monitoring tools.
- Strong problem-solving, communication, and collaboration skills.
Qualcomm is an equal opportunity employer and is committed to providing reasonable accommodations to support individuals with disabilities during the hiring process. If you are interested in this role or require more information, please contact Qualcomm Careers.
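To make the monitoring responsibility concrete: the Prometheus/Grafana stack mentioned above typically encodes threshold rules that fire only after a breach is sustained. The sketch below is purely illustrative (the metric name, threshold, and `AlertRule` type are hypothetical, not Qualcomm's configuration); it mimics the effect of a Prometheus `for:` duration clause in plain Python.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """A simplified stand-in for a Prometheus-style alerting rule."""
    metric: str
    threshold: float
    min_consecutive: int  # samples that must breach before firing

def evaluate(rule: AlertRule, samples: list[float]) -> bool:
    """Fire only if the last `min_consecutive` samples all breach the
    threshold, so one noisy scrape does not page anyone."""
    if len(samples) < rule.min_consecutive:
        return False
    return all(s > rule.threshold for s in samples[-rule.min_consecutive:])

rule = AlertRule(metric="gpu_utilization", threshold=0.95, min_consecutive=3)
print(evaluate(rule, [0.90, 0.96, 0.97, 0.98]))  # True: sustained breach
print(evaluate(rule, [0.96, 0.97, 0.80]))        # False: breach not sustained
```

In a real deployment this logic lives in the Prometheus rule engine rather than application code; the sketch only shows the debouncing idea.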
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.
Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).
Qualifications:
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or related discipline.
Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.
About the Role and Key Responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.
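The entity-resolution work described above often starts with something much simpler than LLM-based matching: normalizing identifiers so the same physical asset lines up across source systems. The sketch below is a hypothetical illustration (the tag formats and helper names are invented, not the platform's actual matching logic):

```python
import re

def normalize_tag(tag: str) -> str:
    """Collapse common formatting differences (case, separators, leading
    zeros) so the same asset matches across heterogeneous sources."""
    tag = tag.strip().upper()
    tag = re.sub(r"[\s_\-\.]+", "-", tag)      # unify separator characters
    tag = re.sub(r"(?<=-)0+(\d)", r"\1", tag)  # drop zero-padding: 001 -> 1
    return tag

def match_entities(source_a: list[str], source_b: list[str]) -> dict:
    """Map each tag in source_a to its counterpart in source_b, or None."""
    index = {normalize_tag(t): t for t in source_b}
    return {t: index.get(normalize_tag(t)) for t in source_a}

matches = match_entities(["pump_001", "VALVE-17"], ["PUMP-1", "valve.17", "FAN-3"])
print(matches)  # {'pump_001': 'PUMP-1', 'VALVE-17': 'valve.17'}
```

Real contextualization pipelines layer fuzzy scoring and model-based matching on top of this kind of canonicalization step.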
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.
Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).
Qualifications:
- 5+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or related discipline.
Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.
About the Role and Key Responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into the platform in a secure and scalable way and enabling high-performance data science pipelines.
- Work with application teams to ensure a delightful user experience that helps users solve complex real-world problems that have not been solved before.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Work with databases and storage systems such as PostgreSQL, Elasticsearch, and S3-API-compatible blob stores.
- Help shape the culture and methodology of a rapidly growing company.
Posted 5 days ago
10.0 - 14.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are looking for an experienced DevOps Architect to spearhead the design, implementation, and management of scalable, secure, and highly available infrastructure. As the ideal candidate, you should possess in-depth expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across various cloud environments. This role requires strong leadership skills and the ability to mentor team members effectively. Your responsibilities will include leading and overseeing the DevOps team to ensure the reliability of infrastructure and automated deployment processes. You will be tasked with designing, implementing, and maintaining highly available, scalable, and secure cloud infrastructure on platforms such as AWS, Azure, and GCP. Developing and optimizing CI/CD pipelines for multiple applications and environments will be a key focus, along with driving Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Monitoring, logging, and alerting solutions will fall under your purview to ensure system health and performance. Collaboration with Development, QA, and Security teams to integrate DevOps best practices throughout the SDLC is essential. You will also lead incident management and root cause analysis for production issues, ensuring robust security practices for infrastructure and pipelines. Guiding and mentoring team members to foster a culture of continuous improvement and technical excellence will be crucial. Additionally, evaluating and recommending new tools, technologies, and processes to enhance operational efficiency will be part of your role.
Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field; Master's degree preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA).
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.
Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience in implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.
Conditions:
Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
On-Call Requirements: A light on-call rotation may be required depending on operational needs.
Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.
Values: Our values at AOT guide how we work, collaborate, and grow as a team. Every role is expected to embody and promote values such as innovation, integrity, ownership, agility, collaboration, and empowerment.
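One concrete pattern behind the CI/CD and incident-management duties above is a promotion gate: a pipeline deploys a canary, compares its error rate against the stable baseline, and only then rolls out fully. The sketch below is an illustrative simplification with invented names and thresholds, not any team's actual gate:

```python
def should_promote(baseline_errors: int, baseline_requests: int,
                   canary_errors: int, canary_requests: int,
                   max_ratio: float = 1.5) -> bool:
    """Promote the canary only if its error rate stays within `max_ratio`
    of the baseline's rate (with a small absolute floor so a zero-error
    baseline does not make promotion impossible)."""
    if canary_requests == 0:
        return False  # no canary traffic observed yet; keep waiting
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / canary_requests
    return canary_rate <= max(baseline_rate * max_ratio, 0.001)

print(should_promote(10, 10_000, 1, 1_000))   # True: 0.1% canary vs 0.1% baseline
print(should_promote(10, 10_000, 50, 1_000))  # False: 5% canary error rate
```

In practice the error counts would come from the monitoring stack (Prometheus queries, CloudWatch metrics) and the gate would run as a pipeline stage in Jenkins or GitLab CI.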
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
You have 5+ years of overall experience in Cloud Operations, with a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and at least 3 years of experience in Kubernetes administration. A GCP Certified Professional certification is mandatory. In this role, you will be responsible for managing and monitoring GCP infrastructure resources to ensure optimal performance, availability, and security. You will also administer Kubernetes clusters, handling deployment, scaling, upgrades, patching, and troubleshooting. You will implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar. Your key responsibilities will include responding to incidents, performing root cause analysis, and resolving issues within SLAs. Configuring logging, monitoring, and alerting solutions across GCP and Kubernetes environments will also be part of your duties. Supporting CI/CD pipelines, integrating Kubernetes deployments with DevOps processes, and maintaining detailed documentation of processes, configurations, and runbooks are critical aspects of this role. Collaboration with Development, Security, and Architecture teams to ensure compliance and best practices is essential. You will participate in an on-call rotation and respond promptly to critical alerts. The required skills and qualifications for this position include being a GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent) with a strong working knowledge of GCP services such as Compute Engine, GKE, Cloud Storage, IAM, VPC, and Cloud Monitoring.
Additional required skills and qualifications:
- Solid experience in Kubernetes cluster administration.
- Proficiency with Infrastructure as Code tools like Terraform.
- Knowledge of containerization concepts and tools like Docker.
- Experience in monitoring and observability with tools like Prometheus, Grafana, and Stackdriver.
- Familiarity with incident management and ITIL processes.
- Ability to work in 24x7 operations with rotating shifts.
- Strong troubleshooting and problem-solving skills.
Preferred skills that would be nice to have for this role include experience supporting multi-cloud environments, scripting skills in Python, Bash, or Go, exposure to other cloud platforms like AWS and Azure, and familiarity with security controls and compliance frameworks.
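The "resolve within SLAs" and on-call duties above usually come down to a simple rule: an unacknowledged incident that exceeds its severity's SLA window gets escalated. A minimal sketch, assuming a hypothetical severity-to-SLA mapping (the numbers are illustrative, not this team's actual matrix):

```python
# Hypothetical SLA matrix: minutes allowed before escalation, by severity.
SEVERITY_SLA_MINUTES = {"P1": 15, "P2": 60, "P3": 240}

def escalation_due(severity: str, minutes_open: int, acknowledged: bool) -> bool:
    """Escalate once an unacknowledged incident exceeds its SLA window."""
    sla = SEVERITY_SLA_MINUTES.get(severity)
    if sla is None:
        raise ValueError(f"unknown severity: {severity}")
    return not acknowledged and minutes_open > sla

print(escalation_due("P1", 20, acknowledged=False))  # True: 20 min > 15 min SLA
print(escalation_due("P3", 100, acknowledged=False)) # False: still within 240 min
```

Incident-management platforms implement this policy natively; the sketch just shows the decision the runbook encodes.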
Posted 5 days ago
10.0 - 14.0 years
0 Lacs
Andhra Pradesh
On-site
The role of Technical Architect for IoT Platform requires a highly skilled individual with over 10 years of experience, who possesses expertise in Java Spring Boot, React.js, IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will be responsible for designing scalable, secure, and high-performance IoT solutions. Your role will involve leading full-stack teams and collaborating with product, infrastructure, and data teams to ensure the successful implementation of IoT projects. Your key responsibilities will include architecture and design tasks such as implementing scalable and secure IoT platform architecture, defining and maintaining architecture blueprints and technical documentation, leading technical decision-making, and ensuring adherence to best practices and coding standards. You will also be involved in architecting microservices-based solutions using Spring Boot, integrating them with React-based front-ends, defining data flow, event processing pipelines, and device communication protocols. In terms of IoT domain expertise, you will be required to architect solutions for real-time sensor data ingestion, processing, and storage, work closely with hardware and firmware teams for device-cloud communication, support multi-tenant, multi-protocol device integration, and guide the design of edge computing, telemetry, alerting, and digital twin models. Your role will also involve DevOps and infrastructure-related tasks such as defining CI/CD pipelines, managing containerization & orchestration, driving infrastructure automation, ensuring platform monitoring, logging, and observability, and enabling auto-scaling, load balancing, and zero-downtime deployments. 
As a Technical Architect, you will be expected to demonstrate leadership by collaborating with product managers and business stakeholders, mentoring and leading a team of developers and engineers, conducting code and architecture reviews, setting goals and targets, organizing features and sprints, and providing coaching and professional development to team members. Your technical skills and experience should include proficiency in backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Additionally, experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is required. Other must-have qualifications for this role include hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team leadership skills. It would be beneficial if you have experience with Edge Computing frameworks, AI/ML model integration, industrial protocols, digital twin concepts, and relevant certifications in AWS/GCP, Kubernetes, or Spring. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is also required. By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
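A core building block of the telemetry and event-processing pipelines this role architects is windowed aggregation: grouping raw sensor readings into fixed time windows before alerting or storage. The sketch below is a deliberately minimal, hypothetical illustration (function and variable names are invented; real pipelines would use a stream processor such as Kafka Streams or Spark):

```python
from collections import defaultdict

def tumbling_window_avg(readings: list[tuple[int, float]],
                        window_seconds: int = 60) -> dict[int, float]:
    """Group (timestamp, value) readings into fixed non-overlapping windows
    and average each window - one stage of a telemetry pipeline."""
    buckets: dict[int, list[float]] = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % window_seconds].append(value)  # window start time
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

readings = [(0, 20.0), (30, 22.0), (65, 30.0)]  # e.g., temperature samples
print(tumbling_window_avg(readings))  # {0: 21.0, 60: 30.0}
```

Downstream stages (alerting, digital twin state updates) would then consume one aggregate per device per window instead of every raw sample.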
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Lead Engineer, DevOps at Toyota Connected India, you will have the opportunity to work in a collaborative and fast-paced environment focused on creating infotainment solutions on embedded and cloud platforms. You will be part of a team that values continual improvement, innovation, and delivering exceptional value to customers. Your role will involve being hands-on with cloud platforms like AWS and Google Cloud Platform, utilizing containerization and Kubernetes for container orchestration, and working with infrastructure automation and configuration management tools such as Terraform, CloudFormation, and Ansible. You will be expected to have strong proficiency in scripting languages like Python, Bash, or Go, experience with monitoring and logging solutions including Prometheus, Grafana, ELK Stack, or Datadog, and knowledge of networking concepts, security best practices, and infrastructure monitoring. Additionally, your responsibilities will include working with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar. At Toyota Connected, you will enjoy top-of-the-line compensation, autonomy in managing your time and workload, yearly gym membership reimbursement, free catered lunches, and a flexible dress code policy. You will have the opportunity to contribute to the development of products that enhance the safety and convenience of millions of customers. Moreover, you will work in a cool new office space and enjoy other great benefits. Toyota Connected's core values are EPIC - Empathetic, Passionate, Innovative, and Collaborative. You will be encouraged to make decisions empathetically, strive to build something great, experiment with innovative ideas, and work collaboratively with your teammates to achieve success. Join us at Toyota Connected to be part of a team that is reimagining mobility for today and the future!
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Team Lead in DevOps with 6+ years of experience, you will be responsible for managing, mentoring, and developing a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI. Additionally, you will handle environment-specific configurations for staging, production, and QA. Your responsibilities will include containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm. You will implement and maintain Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation. Monitoring application health and infrastructure using tools such as Prometheus, Grafana, ELK, and Datadog, and ensuring systems are secure, resilient, and compliant with industry standards will also be part of your role. Optimizing cloud costs and infrastructure performance, collaborating with development, QA, and IT support teams, and troubleshooting performance, deployment, or scaling issues across tech stacks are essential tasks. To excel in this role, you must have at least 6 years of hands-on experience in DevOps/Cloud/System Engineering roles, including a minimum of 2 years managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential. In addition, you should have experience with Git, SSH, reverse proxies (Nginx), and load balancers.
Good communication skills and exposure to managing clients are crucial. Preferred certifications that are highly valued include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Additionally, experience with Magento Cloud DevOps or Odoo deployment is considered a bonus. Nice-to-have skills include experience with multi-region failover, HA clusters, RPO/RTO-based design, familiarity with MySQL/PostgreSQL optimization, and knowledge of Redis, RabbitMQ, or Celery. Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower, as well as knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices, is also advantageous for this role.
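A small, recurring pattern in the deployment and troubleshooting work described above is retrying flaky steps (a health check after a rollout, a registry push) with capped exponential backoff instead of a fixed delay. This is a generic sketch of the schedule calculation, with illustrative defaults rather than any team's actual retry policy:

```python
def backoff_schedule(base: float = 1.0, factor: float = 2.0,
                     max_delay: float = 30.0, attempts: int = 6) -> list[float]:
    """Delays (in seconds) for successive retries: exponential growth,
    capped so late retries do not wait unboundedly long."""
    return [min(base * factor ** i, max_delay) for i in range(attempts)]

print(backoff_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

In practice each delay also gets random jitter added so many clients retrying at once do not synchronize into a thundering herd.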
Posted 5 days ago
2.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a DevOps Manager, you will be responsible for leading our DevOps efforts across a suite of modern and legacy applications, including Odoo (Python), Magento (PHP), Node.js, and other web-based platforms. Your main duties will include managing, mentoring, and growing a team of DevOps engineers, overseeing the deployment and maintenance of various applications, designing and managing CI/CD pipelines, handling environment-specific configurations, containerizing applications, implementing and maintaining Infrastructure as Code, monitoring application health and infrastructure, ensuring system security and compliance, optimizing cloud cost and performance, collaborating with cross-functional teams, and troubleshooting technical issues. To be successful in this role, you should have at least 8 years of experience in DevOps/Cloud/System Engineering roles with real hands-on experience, including 2+ years of experience managing or leading DevOps teams. You should have experience supporting and deploying applications like Odoo, Magento, and Node.js, along with strong scripting skills in Bash, Python, PHP CLI, or Node CLI. Additionally, you should have a deep understanding of Linux system administration, networking fundamentals, AWS/Azure/GCP infrastructure, Git, SSH, reverse proxies, and load balancers. Good communication skills and client management exposure are also essential for this position. Preferred certifications for this role include AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills that would be beneficial for this position include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, VAPT 2.0, WCAG compliance, and infrastructure security best practices. 
In summary, as a DevOps Manager, you will play a crucial role in leading our DevOps efforts and ensuring the smooth deployment, maintenance, and optimization of various applications while collaborating with different teams and implementing best practices in infrastructure management and security.
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
Delhi
On-site
As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a valuable addition to our technology team. Your primary responsibility will be to design, implement, and maintain scalable and secure cloud infrastructure that supports our mobile and web applications. Your role is crucial in ensuring system reliability, performance, and cost efficiency across different environments. You will work with Google Cloud Platform (GCP) to design, configure, and manage cloud infrastructure. Your tasks will include implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. Additionally, you will be developing and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Real-time monitoring, crash alerting, logging systems, and health dashboards will be set up by you using industry-leading tools. Managing and optimizing Redis, job queues, caching layers, and backend request loads will also be part of your responsibilities. You will automate data backups, enforce secure access protocols, and implement disaster recovery systems. Collaborating with Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load is crucial. Infrastructure security audits will be conducted by you to recommend best practices for preventing downtime and security breaches. Monitoring and optimizing cloud usage and billing to ensure a cost-effective and scalable architecture will also fall under your purview. You should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Proficiency with Docker, Kubernetes, NGINX, and load balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI is required. Familiarity with monitoring tools such as Grafana, Prometheus, NewRelic, or Datadog is expected. 
A deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms, is necessary. Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure is beneficial. Working knowledge of Redis, Socket.IO, and message queuing systems like RabbitMQ or Kafka will be advantageous. Preferred qualifications include a Google Cloud Professional certification or equivalent, experience in optimizing systems for high-concurrency, low-latency environments, and familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible.
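The rate limiting mentioned above is most commonly implemented as a token bucket: each request consumes a token, and tokens refill at a steady rate up to a burst capacity. A minimal sketch (timestamps are passed in explicitly to keep it deterministic; a production limiter would use a clock and, for multiple app servers, shared state in something like Redis):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: allows short bursts up to
    `capacity`, then throttles to `refill_per_sec` requests per second."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```

The third request is rejected because the burst capacity is spent; by t=1.5 enough time has passed for a token to refill.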
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a DevOps Engineer, you will define and implement DevOps strategies that are closely aligned with business goals. Your primary responsibility will be to lead cross-functional teams to enhance collaboration among development, QA, and operations teams. This involves designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate build, test, and deployment processes, thereby accelerating release cycles. Furthermore, you will be tasked with implementing and managing Infrastructure as Code using tools such as Terraform, CloudFormation, and Ansible. Your expertise will be crucial in managing cloud platforms like AWS, Azure, or Google Cloud. It will also be your responsibility to monitor and mitigate security risks in CI/CD pipelines and infrastructure, as well as to set up observability tools like Prometheus, Grafana, Splunk, and Datadog. In addition, you will play a key role in implementing proactive alerting and incident response processes. This will involve leading incident response efforts and conducting root cause analysis (RCA) when necessary. Documenting DevOps processes, best practices, and system architectures will also be part of your routine tasks. As a DevOps Engineer, you will continuously evaluate and implement new DevOps tools and technologies to enhance efficiency and productivity. Moreover, you will be expected to foster a culture of learning and knowledge sharing within the organization, promoting collaborative growth and development among team members.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Senior Linux & Cloud Administrator at SAP, you will play a key role in supporting the seamless 24/7 operations of our cloud platform across Azure, AWS, Google Cloud, and SAP data centers. Your primary responsibilities will involve ensuring the smooth operation of business-critical SAP systems in the cloud, leveraging technologies such as Prometheus, Grafana, Kubernetes, Ansible, ArgoCD, AWS, GitHub Actions, and more. Your tasks will include network troubleshooting, architecture design, cluster setup and configuration, and the development of automation solutions to deliver top-notch cloud services for SAP applications to enterprise customers globally. You will be part of the ECS Delivery XDU team, responsible for the operation of SAP Enterprise Cloud Services (ECS) Delivery, a managed services provider offering SAP applications through the HANA Enterprise Cloud. At SAP, we are committed to building a workplace culture that values collaboration, embraces diversity, and is focused on creating a better world. Our company ethos revolves around a shared passion for helping the world run better, with a strong emphasis on learning and development, recognizing individual contributions, and offering a range of benefit options for our employees. SAP is a global leader in enterprise application software, with a mission to help customers worldwide work more efficiently and leverage business insights effectively. With a cloud-based approach and a dedication to innovation, SAP serves millions of users across various industries, driving solutions for ERP, database, analytics, intelligent technologies, and experience management. As a purpose-driven and future-focused organization, SAP fosters a highly collaborative team environment and prioritizes personal development, ensuring that every challenge is met with the right solution. 
At SAP, we believe in the power of inclusion and diversity, supporting the well-being of our employees and offering flexible working models to enable everyone to perform at their best. We recognize the unique strengths that each individual brings to our company, investing in our workforce to unleash their full potential and create a more equitable world. As an equal opportunity workplace, SAP is committed to providing accessibility accommodations to applicants with disabilities and promoting a culture of equality and empowerment. If you are interested in joining our team at SAP and require accommodation during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com. We are dedicated to fostering an environment where all talents are valued, and every individual has the opportunity to thrive.
Posted 5 days ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a proactive and detail-oriented L1 DataOps Monitoring Engineer to support our data pipeline operations. This role involves monitoring, identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and improve reliability. Key Responsibilities: Monitor data pipelines, jobs, and workflows using tools like Airflow, Control-M, or custom monitoring dashboards. Acknowledge and investigate alerts from monitoring tools (Datadog, Prometheus, Grafana, etc.). Perform first-level triage for job failures, delays, and anomalies. Log incidents and escalate to L2/L3 teams as per SOP. Maintain shift handover logs and daily operational reports. Perform routine system checks and health monitoring of data environments. Follow predefined runbooks to troubleshoot known issues. Coordinate with application, infrastructure, and support teams for timely resolution. Participate in shift rotations including nights/weekends/public holidays. Skills and Qualifications: Bachelor's degree in Computer Science, IT, or related field (or equivalent experience). 0–2 years of experience in IT support, monitoring, or NOC environments. Basic understanding of data pipelines and ETL/ELT processes. Familiarity with monitoring tools (Datadog, Grafana, CloudWatch, etc.). Exposure to job schedulers (Airflow, Control-M, Autosys) is a plus. Good verbal and written communication skills. Ability to remain calm and effective under pressure. Willingness to work in a 24x7 rotational shift model. Good to Have (Optional): Knowledge of cloud platforms (AWS/GCP/Azure). Basic SQL or scripting knowledge (Shell/Python). ITIL awareness or ticketing systems experience (e.g., ServiceNow, JIRA).
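The first-level triage described above is essentially a severity-to-action mapping driven by a runbook. A sketch of that decision logic in Python; the severity levels, the consecutive-failure threshold, and the escalation targets are illustrative assumptions, not this team's actual SOP:

```python
def triage(alert):
    """Map a monitoring alert to an L1 action per an assumed runbook.

    `alert` carries a severity and a count of consecutive failures;
    the thresholds below are illustrative, not a real SOP.
    """
    if alert["severity"] == "critical" or alert["consecutive_failures"] >= 3:
        return "escalate-to-L2"          # beyond L1 scope: hand off immediately
    if alert["severity"] == "warning":
        return "investigate-and-log"     # first-level triage: check runbook, log incident
    return "acknowledge"                 # informational: acknowledge and note in handover

# A critical pipeline failure skips investigation and escalates straight away.
action = triage({"severity": "critical", "consecutive_failures": 1})
# action == "escalate-to-L2"
```

In practice this logic lives in the alerting tool's routing rules; encoding it as code makes the escalation criteria testable.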
Posted 5 days ago
3.0 - 7.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Job Title: SDE-2/3. Location: Hyderabad. Experience range: 0-1 Yr. Notice Period: Immediate joiner. What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. Key Responsibilities: Design, develop, and maintain backend services and APIs using Java and Spring Boot. Collaborate with product managers, architects, and QA teams to deliver robust banking solutions. Build microservices for transaction processing, customer onboarding, and risk management. Integrate with internal and third-party APIs for payments, KYC, and credit scoring. Ensure code quality through unit testing, code reviews, and adherence to secure coding practices. Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. Proficiency in Java, Spring Boot, and RESTful API development. Solid understanding of data structures, algorithms, and object-oriented programming. Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Preferred Skills: Exposure to financial systems, banking APIs, or fintech platforms. Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS). Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
Strong problem-solving skills and ability to work in a fast-paced environment. Education background: Bachelor's degree in Computer Science, Information Technology, or related field of study. Good to have certifications: Java Certified Developer, AWS Developer or Solutions Architect.
Posted 5 days ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Title: Associate III-Software Engineering. Location: Hyderabad. Experience range: 0-1 Yr. Notice Period: Immediate joiner. What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. Key Responsibilities: Design, develop, and maintain backend services and APIs using Java and Spring Boot. Collaborate with product managers, architects, and QA teams to deliver robust banking solutions. Build microservices for transaction processing, customer onboarding, and risk management. Integrate with internal and third-party APIs for payments, KYC, and credit scoring. Ensure code quality through unit testing, code reviews, and adherence to secure coding practices. Participate in Agile ceremonies and contribute to continuous integration and deployment (CI/CD) pipelines. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. Proficiency in Java, Spring Boot, and RESTful API development. Solid understanding of data structures, algorithms, and object-oriented programming. Experience with relational databases (e.g., PostgreSQL, MySQL) and version control (Git). Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes). Preferred Skills: Exposure to financial systems, banking APIs, or fintech platforms. Knowledge of security standards (e.g., OAuth2, JWT, PCI DSS). Experience with messaging systems (Kafka, RabbitMQ) and monitoring tools (Grafana, Prometheus).
Strong problem-solving skills and ability to work in a fast-paced environment. Education background: Bachelor's degree in Computer Science, Information Technology, or related field of study. Good to have certifications: Java Certified Developer, AWS Developer or Solutions Architect.
Posted 5 days ago
21.0 - 31.0 years
35 - 42 Lacs
Bengaluru
Work from Office
What we’re looking for: As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact in designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform. You will apply best practices to solution design, deployment processes, and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies, and offering mentorship to other engineers. What you’ll be working on: Architect, build, and operate AWS environments at scale with well-established industry best practices. Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery. Support and maintain AWS services, such as EKS, and Heroku. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Support and partner with other teams on improving our observability systems to monitor site stability and performance. Work closely with developers in supporting new features and services. Work in a highly collaborative team environment. Participate in on-call rotation. We’d love to hear from people with: 8+ years of relevant professional experience with cloud platforms such as AWS and Heroku. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, or ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis, and visualization tools, specifically Splunk and OTel. Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling. Experience with GitOps and tools like ArgoCD/Flux CD. Interest in instrumentation and optimization of Kubernetes clusters.
Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/Gitlab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium. Preferably experience with secrets management, for example Hashicorp Vault. Preferably experience in an agile environment and JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
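The "simple, unified interface" to monitoring and logging that this posting describes is typically a thin facade: one emit call that fans out to whatever backends are registered, so application code never depends on a specific vendor client. A pure-Python sketch (the backend callable here is a stand-in for a real Datadog/Splunk/OTel client; all names are hypothetical):

```python
class Telemetry:
    """Tiny facade: one call site for developers, pluggable backends behind it."""

    def __init__(self):
        self._backends = []

    def register(self, backend):
        """Register a callable(metric_name, value, tags) as a sink."""
        self._backends.append(backend)

    def emit(self, metric, value, **tags):
        """Fan a single measurement out to every registered backend."""
        for backend in self._backends:
            backend(metric, value, tags)

# A list-appending lambda stands in for a real vendor client here.
received = []
tel = Telemetry()
tel.register(lambda m, v, t: received.append((m, v, t)))
tel.emit("http.request.duration_ms", 42, route="/surveys")
# received == [("http.request.duration_ms", 42, {"route": "/surveys"})]
```

The design choice worth noting: because backends are injected, swapping Datadog for Grafana/Prometheus (or adding a second sink) changes registration code only, not the hundreds of call sites.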
Posted 5 days ago
2.0 - 6.0 years
12 - 15 Lacs
Navi Mumbai
Work from Office
Key Responsibilities: Manage and administer complex multi-cloud environments (AWS, GCP, Azure). Monitor infrastructure performance and troubleshoot complex issues at the L3 level. Design and implement automation scripts using tools like Terraform, Ansible, CloudFormation, or Bicep. Optimize cloud cost, performance, and security in line with best practices. Support deployment and maintenance of production, development, and staging environments. Collaborate with DevOps, Networking, and Security teams to ensure seamless operations. Implement and monitor backup, disaster recovery, and high-availability strategies. Conduct root cause analysis (RCA) for critical incidents and service disruptions. Stay updated with evolving cloud technologies and recommend improvements. Participate in 24x7 on-call rotation for critical incident support. Technical Skills: Cloud Platforms: Deep expertise in AWS, GCP, and Azure services (EC2, VPC, IAM, S3, AKS, GKE, App Services, etc.) Infrastructure as Code (IaC): Hands-on with Terraform, Ansible, ARM Templates, CloudFormation. Scripting: Strong in PowerShell, Python, or Bash scripting. CI/CD Pipelines: Experience with Jenkins, GitHub Actions, Azure DevOps, or similar tools. Monitoring Tools: Proficient in tools like CloudWatch, Azure Monitor, Stackdriver, Prometheus, Grafana. Security & Governance: Knowledge of IAM, RBAC, security groups, policies, and compliance. Containers & Orchestration: Familiarity with Kubernetes, Docker, AKS, GKE, EKS. Certifications (Preferred): AWS Certified SysOps Administrator / Solutions Architect Associate or Professional Microsoft Certified: Azure Administrator Associate or Architect Expert Google Associate Cloud Engineer or Professional Cloud Architect
Posted 5 days ago
0.0 - 5.0 years
15 - 20 Lacs
Chennai
Work from Office
Job Title: Tech Lead/Cloud Architect. Experience: 0-5 Years. Location: Remote. A NASDAQ-listed company that has effectively maintained its position as the front-runner in the food and beverage sector is looking to onboard a Tech Lead to guide and manage the development team on various projects. The Tech Lead will be responsible for overseeing the technical direction of the projects, ensuring the development of high-quality, scalable, and maintainable code. The talent will be interacting with other talents as well as an internal cross-functional team. Required Skills: Cloud architecture using microservices design. Data Modelling/Design. API Design. API Contracts. React. Java. Azure. ADO. RESTful API, GraphQL, SQL/NoSQL DB. Experience with ADF, Databricks. CI/CD, SonarQube, Snyk, Prometheus, Grafana. Responsibilities: Collaborate with Product and Data teams. Ensure a clear understanding of requirements. Architect and design microservices-based enterprise web applications. Build data-intensive, UI-rich, microservices-based enterprise applications that are scalable, performant, and secure, using Cloud best practices in Azure. Offer Details: Full-time dedication (40 hours/week). Required: 3-hour overlap with CST (Central Standard Time). Interview Process: 2-step interview: initial screening and technical interview.
Posted 5 days ago
3.0 - 8.0 years
20 - 35 Lacs
Gurugram, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job location: Mumbai/Gurugram (Hybrid) About the role: Sun King is looking for a self-driven Infrastructure engineer, who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What you would be expected to do: Work with engineering, automation, and data teams to work on various infrastructure requirements. Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Managing AWS services for multiple teams. Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deployment and management of Kubernetes resources. Deployment and management of custom metrics exporters, trace data, custom application metrics, and designing dashboards, querying metrics from multiple resources, as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deployment and management of critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IAC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You might be a strong candidate if you have/are: Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments. 
Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks. Good networking fundamentals: SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good to have: Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside SecOps engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of OpenTelemetry, central logging, and monitoring systems. Apply here: https://sunking.pinpointhq.com/postings/b63a7111-1b98-48de-8528-4bb4bb77436f
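The custom metrics exporters this role mentions ultimately serve samples to Prometheus in its plain-text exposition format, lines of the shape `name{label="value"} value`. A minimal formatter sketch for a single sample; the metric and label names are made up for illustration:

```python
def render_metric(name, value, labels=None):
    """Render one sample in Prometheus's text exposition format,
    e.g. queue_depth{service="billing"} 7.0
    Labels are sorted so output is deterministic."""
    if labels:
        body = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        return f"{name}{{{body}}} {float(value)}"
    return f"{name} {float(value)}"

line = render_metric("queue_depth", 7, {"service": "billing"})
# line == 'queue_depth{service="billing"} 7.0'
```

A real exporter would expose many such lines (plus `# HELP`/`# TYPE` comments and label-value escaping) over HTTP for Prometheus to scrape; the official client libraries handle all of that, so this sketch is only to show what travels over the wire.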
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must have skills: Spring Boot. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are built to the highest standards of quality and performance. You will also participate in discussions to refine project goals and contribute to the overall success of the team. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Assist in the documentation of application specifications and design. Engage in code reviews to ensure adherence to best practices and standards. Professional & Technical Skills: DS & Algo, Java 17/Java EE, Spring Boot, CI/CD. Web services using RESTful, Spring framework, caching techniques, PostgreSQL/SQL, JUnit for testing, and containerization with Kubernetes/Docker. Airflow, GCP, Spark, Kafka. Hands-on experience in building alerting/monitoring/logging for microservices using frameworks like OpenObserve/Splunk, Grafana, Prometheus. Additional Information: The candidate should have a minimum of 3 years of experience in Spring Boot. This position is based at our Bengaluru office. A 15 years full time education is required.
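Among the "caching techniques" this role lists, a time-to-live (TTL) cache is the most common pattern: each entry expires a fixed interval after it was stored. A minimal sketch in Python for illustration only (in the posting's Java/Spring stack this would typically be Caffeine or Spring's cache abstraction); the injected clock is an assumption made purely to keep expiry deterministic in the example:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire `ttl` seconds after insert."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock          # injectable for deterministic tests
        self._store = {}

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, stamp = item
        if self.clock() - stamp >= self.ttl:
            del self._store[key]    # lazy eviction on read
            return default
        return value

# Fake clock makes the expiry visible without sleeping.
now = [0.0]
cache = TTLCache(ttl=30, clock=lambda: now[0])
cache.put("user:42", {"name": "A"})
now[0] = 31.0
# cache.get("user:42") is None -> the entry expired
```

Lazy eviction (only on read) keeps the sketch short; production caches also evict in the background and bound total size.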
Posted 5 days ago
12.0 - 16.0 years
37 - 42 Lacs
Bengaluru
Work from Office
Job Objective: As AVP/VP Architect- Lead the design and development of scalable, reliable, and high-performance architecture for Zwayam. Job Description: In this role you will: Hands-on Coding & Code Review: Actively participate in coding and code reviews, ensuring adherence to best practices, coding standards, and performance optimization. High-Level and Low-Level Design: Create comprehensive architectural documentation that guides the development team and ensures the scalability and security of the system. Security Best Practices: Implement security strategies, including data encryption, access control, and threat detection, ensuring the platform adheres to the highest security standards. Compliance Management: Oversee compliance with regulatory requirements such as GDPR, including data protection, retention policies, and audit readiness. Disaster Recovery & Business Continuity: Design and implement disaster recovery strategies to ensure the reliability and continuity of the system in case of failures or outages. Scalability & Performance Optimization: Ensure the system architecture can scale seamlessly and optimize performance as business needs grow. Monitoring & Alerting: Set up real-time monitoring and alerting systems to ensure proactive identification and resolution of performance bottlenecks, security threats, and system failures. Cross-Platform Deployment: Architect flexible, cloud-agnostic solutions and manage deployments on Azure and AWS platforms. Containerization & Orchestration: Use Kubernetes and Docker Swarm for container management and orchestration to achieve a high degree of automation and reliability in deployments. Data Management: Manage database architecture using MySQL, MongoDB and ElasticSearch to ensure efficient storage, retrieval, and management of data. Message Queuing Systems: Design and manage asynchronous communication using Kafka and Redis for event-driven architecture. 
Collaboration & Leadership: Work closely with cross-functional teams including developers, product managers, and other stakeholders to deliver high-quality solutions on time. Mentoring & Team Leadership: Mentor, guide, and lead the engineering team, fostering technical growth and maintaining adherence to architectural and coding standards. Required Skills: Experience: 12+ years of experience in software development and architecture, with at least 3 years in a leadership/architect role. Technical Expertise: Proficient in Java and related frameworks like Spring Boot. Experience with databases like MySQL, MongoDB, ElasticSearch, and message queuing systems like Kafka and Redis. Proficiency with containerization (Docker, Docker Swarm) and orchestration (Kubernetes). Solid experience with cloud platforms (Azure, AWS, GCP). Experience with monitoring tools (e.g., Prometheus, Grafana, ELK stack) and alerting systems for real-time issue detection and resolution. Compliance & Security: Hands-on experience in implementing security best practices. Familiarity with compliance frameworks such as GDPR and DPDP. Architecture & Design: Proven experience in high-level and low-level architectural design. Problem-Solving: Strong analytical and problem-solving skills, with the ability to handle complex and ambiguous situations. Leadership: Proven ability to lead teams, influence stakeholders, and drive change. Communication: Excellent verbal and written communication skills. Our Ideal Candidate: The ideal candidate should possess a deep understanding of the latest architectural patterns, cloud-native design, and security practices. They should be adept at translating business requirements into scalable and efficient technical solutions. A proactive, hands-on approach to problem-solving and a passion for innovation are essential. Strong leadership and mentoring skills are crucial to drive a high-performance team and foster technical excellence.
Posted 5 days ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Job Summary: Synechron is seeking a skilled and experienced Lead Java Developer to oversee the development, deployment, and support of complex enterprise applications. This role involves leading technical initiatives, ensuring best practices in software engineering, and collaborating across teams to deliver cloud-enabled, scalable, and efficient solutions. The successful candidate will contribute to our strategic technology objectives while fostering innovation, best coding practices, and continuous improvement in a dynamic environment. Software Requirements. Required: Proficiency in Java (latest stable versions), with extensive experience in building enterprise-scale applications. Familiarity with Kettle jobs (Pentaho Data Integration). Operating systems: Unix/Linux. Scripting languages: Shell Scripting, Perl, Python. Job scheduling tools: Control-M, Autosys. Database technologies: SQL Server, Oracle, or MongoDB. Monitoring tools such as Grafana, Prometheus, or Splunk. Container orchestration: Kubernetes and OpenShift. Messaging middleware: Kafka, EMS, RabbitMQ. Big data platforms: Apache Flink, Spark, Apache Beam, Hadoop, Gemfire, Ignite. Continuous Integration/Delivery tools: Jenkins, TeamCity, SonarQube, Git. Preferred: Experience with cloud platforms (e.g., AWS). Additional data processing frameworks or cloud deployment tools. Knowledge of security best practices in enterprise environments. Overall Responsibilities: Lead the design, development, and deployment of scalable Java-based solutions aligned with business needs. Analyze existing system logic, troubleshoot issues, and implement improvements or fixes. Collaborate with business stakeholders and technical teams to gather requirements, propose solutions, and document functionalities. Define system architecture, including APIs, data flows, and system integration points. Develop and maintain comprehensive documentation, including technical specifications, deployment procedures, and API documentation. Support application deployment, configurations, and release management within CI/CD pipelines. Implement monitoring and alerting solutions using tools like Grafana, Prometheus, or Splunk for operational insights. Ensure application security and compliance with enterprise security standards. Mentor junior team members and promote development best practices across the team. Performance Outcomes: Robust, scalable, and maintainable applications. Reduced system outages and improved performance metrics. Clear, complete documentation supporting operational and development teams. Effective team collaboration and technical leadership. Technical Skills (By Category): Programming Languages: Essential: Java. Preferred: Scripting languages (Shell, Perl, Python). Frameworks and Libraries: Essential: Java frameworks such as Spring Boot, Spring Cloud. Preferred: Microservices architecture, messaging, or big data libraries. Databases/Data Management: Essential: SQL Server, Oracle, MongoDB. Preferred: Data grid solutions like Gemfire or Ignite. Cloud Technologies: Preferred: Hands-on experience with AWS, Azure, or similar cloud platforms, especially for container deployment and orchestration. Containerization and Orchestration: Essential: Kubernetes, OpenShift. DevOps & CI/CD: Essential: Jenkins, TeamCity, SonarQube, Git. Monitoring & Security: Preferred: Familiarity with Grafana, Prometheus, Splunk. Understanding of data security, encryption, and access control best practices. Experience Requirements: Minimum 7+ years of professional experience in Java application development. Proven experience leading enterprise projects, especially involving distributed systems and big data technologies. Experience designing and deploying cloud-ready applications. Familiarity with SDLC processes, Agile methodologies, and DevOps practices. Experience with application troubleshooting, system integration, and performance tuning. Day-to-Day Activities: Lead project meetings, coordinate deliverables, and oversee technical planning. Develop, review, and optimize Java code, APIs, and microservices components. Collaborate with development, QA, and operations teams to ensure smooth deployment and operation of applications. Conduct system analysis, performance tuning, and troubleshooting of live issues. Document system architecture, deployment procedures, and operational workflows. Mentor junior developers, review code, and promote best engineering practices. Stay updated on emerging technologies, trends, and tools applicable to enterprise software development. Qualifications: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Relevant certifications (e.g., Java certifications, cloud certifications) are advantageous. Extensive hands-on experience in Java, microservices, and enterprise application development. Exposure to big data, cloud deployment, and container orchestration preferred. Professional Competencies: Strong analytical and problem-solving skills for complex technical challenges. Leadership qualities, including mentoring and guiding team members. Effective communication skills for stakeholder engagement and documentation. Ability to work independently and collaboratively within Agile teams. Continuous improvement mindset, eager to adapt and incorporate new technologies. Good organizational and time management skills for handling multiple priorities. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company.
We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Posted 5 days ago
4.0 - 8.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a skilled Apache Solr Engineer to design, implement, and maintain scalable and high-performance search solutions. The ideal candidate will have hands-on experience with Solr/SolrCloud, strong analytical skills, and the ability to work in cross-functional teams to deliver efficient search functionalities across enterprise or customer-facing applications. Experience: 4–8 years. Key Responsibilities: Design, develop, and maintain enterprise-grade search solutions using Apache Solr and SolrCloud. Develop and optimize search indexes and schemas based on use cases like product search, document search, or order/invoice search. Integrate Solr with backend systems, databases, and APIs. Implement full-text search, faceted search, auto-suggestions, ranking, and relevancy tuning. Optimize search performance, indexing throughput, and query response time. Ensure data consistency and high availability using SolrCloud and ZooKeeper (cluster coordination & configuration management). Monitor search system health and troubleshoot issues in production. Collaborate with product teams, data engineers, and DevOps teams for smooth delivery. Stay up to date with new features of Apache Lucene/Solr and recommend improvements. Required Skills & Qualifications: Strong experience in Apache Solr & SolrCloud. Good understanding of Lucene, inverted indexes, analyzers, tokenizers, and search relevance tuning. Proficient in Java or Python for backend integration and development. Experience with RESTful APIs, data pipelines, and real-time indexing. Familiarity with ZooKeeper, Docker, Kubernetes (for SolrCloud deployments). Knowledge of JSON, XML, and schema design in Solr. Experience with log analysis, performance tuning, and monitoring tools like Prometheus/Grafana is a plus. Exposure to e-commerce or document management search use cases is an advantage.
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with Elasticsearch or other search technologies is a plus.
- Working knowledge of CI/CD pipelines and cloud platforms (Azure).
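The faceted-search work described above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of building parameters for Solr's select handler and flattening Solr's facet response format; the field names (`name`, `brand`, `category`), collection, and sample response are hypothetical, and no live Solr instance is assumed:

```python
# Hedged sketch: building a faceted Solr query and parsing facet counts.
# Field and collection names are illustrative, not from a real deployment.
from urllib.parse import urlencode

def build_facet_query(text, facet_fields, rows=10):
    """Build query-string params for Solr's /select handler with faceting on."""
    params = [
        ("q", f"name:({text})"),
        ("rows", rows),
        ("facet", "true"),
    ]
    params += [("facet.field", f) for f in facet_fields]
    return urlencode(params)

def parse_facets(response):
    """Solr returns facet counts as flat [value, count, value, count, ...]
    lists; pair alternate elements into {value: count} dicts per field."""
    fields = response["facet_counts"]["facet_fields"]
    return {f: dict(zip(vals[::2], vals[1::2])) for f, vals in fields.items()}

qs = build_facet_query("laptop", ["brand", "category"])
# e.g. GET http://localhost:8983/solr/<collection>/select?<qs>

sample = {"facet_counts": {"facet_fields": {"brand": ["acme", 7, "globex", 3]}}}
print(parse_facets(sample))  # {'brand': {'acme': 7, 'globex': 3}}
```

The alternating value/count list is Solr's default JSON facet encoding, which is why the parser zips even- and odd-indexed elements rather than reading key/value pairs directly.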
Posted 5 days ago
1.0 - 4.0 years
4 - 7 Lacs
Pune
Work from Office
Job Summary: We are seeking a proactive and detail-oriented Site Reliability Engineer (SRE) focused on monitoring to join our observability team. The candidate will be responsible for ensuring the reliability, availability, and performance of our systems through robust monitoring, alerting, and incident response practices.
Key Responsibilities:
- Monitor the application and IT infrastructure environment.
- Drive end-to-end incident response and resolution.
- Design, implement, and maintain monitoring and alerting systems for infrastructure and applications.
- Continuously improve observability by integrating logs, metrics, and traces into a unified monitoring platform.
- Collaborate with development and operations teams to define and track SLIs, SLOs, and SLAs.
- Analyze system performance and reliability data to identify trends and potential issues.
- Participate in incident response, root cause analysis, and post-mortem documentation.
- Automate repetitive monitoring tasks and improve alert accuracy to reduce noise.
Required Skills & Qualifications:
- 2+ years of experience in application/system monitoring, SRE, or DevOps roles.
- Proficiency with monitoring tools such as Prometheus, Grafana, ELK, APM tools, Nagios, Zabbix, or Datadog.
- Strong scripting skills (Python, Bash, or similar) for automation.
- Experience with cloud platforms (AWS, Azure) and container orchestration (Kubernetes).
- Solid understanding of Linux/Unix systems and networking fundamentals.
- Excellent problem-solving and communication skills.
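The SLI/SLO tracking responsibility above can be illustrated with a short sketch of an error-budget calculation, a common SRE technique for deciding when reliability work should take priority. The function name and figures are illustrative assumptions, not part of any particular monitoring stack:

```python
# Hedged sketch: how much of an availability SLO's error budget remains.
# A 99.9% SLO over 1,000,000 requests permits 1,000 failed requests.

def error_budget_remaining(slo, total_requests, failed_requests):
    """Return the fraction of the error budget still unspent.

    1.0 means no failures yet; 0.0 means the budget is exactly exhausted;
    negative values mean the SLO has been breached for this window.
    """
    allowed_failures = (1 - slo) * total_requests
    if allowed_failures == 0:
        # A 100% SLO has no budget at all.
        return 0.0 if failed_requests == 0 else float("-inf")
    return 1 - failed_requests / allowed_failures

# 250 failures against a budget of 1,000 leaves 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

In practice the request and failure counts would come from the monitoring platform (for example, a Prometheus range query), and a low remaining budget would gate releases or page the on-call engineer.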
Posted 5 days ago