8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As an online travel booking platform, Agoda is committed to connecting travelers with a vast network of accommodations, flights, and more. With cutting-edge technology and a global presence, Agoda strives to enhance the travel experience for customers worldwide. As part of Booking Holdings and headquartered in Asia, Agoda boasts a diverse team of over 7,100 employees from 95+ nationalities across 27 markets. The work environment at Agoda is characterized by diversity, creativity, and collaboration, fostering innovation through a culture of experimentation and ownership.

The core purpose of Agoda is to bridge the world through travel, believing that travel enriches lives, facilitates learning, and brings people and cultures closer together. By enabling individuals to explore and experience the world, Agoda aims to promote empathy, understanding, and happiness.

As a member of the Observability Platform team at Agoda, you will be involved in building and maintaining the company's time series database and log aggregation system. This critical infrastructure processes a massive volume of data daily, supporting various monitoring tools and dashboards. The team faces challenges in scaling data collection efficiently while minimizing costs.

In this role, you will have the opportunity to:
- Develop fault-tolerant, scalable solutions in multi-tenant environments
- Tackle complex problems in distributed and highly concurrent settings
- Enhance observability tools for all developers at Agoda

To succeed in this role, you will need:
- A minimum of 8 years of experience writing performant code using JVM languages (Java/Scala/Kotlin), Rust, or C++
- Hands-on experience with observability products such as Prometheus, InfluxDB, VictoriaMetrics, Elasticsearch, and Grafana Loki
- Proficiency in working with messaging queues such as Kafka
- A deep understanding of concurrency and multithreading, with an emphasis on code simplicity and performance
- Strong communication and collaboration skills

It would be great if you also have:
- Expertise in database internals, indexes, and data formats (Avro, Protobuf)
- Familiarity with observability data types such as logs and metrics, and proficiency in using profilers, debuggers, and tracers in a Linux environment
- Previous experience in building large-scale time series data stores and monitoring solutions
- Knowledge of open-source components like S3 (Ceph), Elasticsearch, and Grafana
- The ability to work at a low level when required

Agoda is an Equal Opportunity Employer and maintains a policy of considering all applications for future positions. For more information about our privacy policy, please refer to our website. Please note that Agoda does not accept third-party resumes and is not responsible for any fees associated with unsolicited resumes.
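The posting centres on ingesting high volumes of observability data from messaging queues such as Kafka into a time series store. As a rough, hedged illustration of that kind of consumer work, here is a minimal sketch using the kafka-python client; the topic name, broker addresses, and JSON payload shape are assumptions for illustration only and do not describe Agoda's actual stack.

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "metrics-ingest",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],  # hypothetical brokers
    group_id="observability-ingest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    enable_auto_commit=False,              # commit offsets only after a successful write
)

batch = []
for message in consumer:
    batch.append(message.value)            # e.g. {"name": "cpu", "ts": ..., "value": ...}
    if len(batch) >= 500:
        # write_points(batch) would hand the batch to the time series store here
        batch.clear()
        consumer.commit()                   # manual commit gives at-least-once delivery

Batching before committing is a common trade-off in this kind of pipeline: it amortizes write cost against the store while keeping replay bounded to one batch on failure.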
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Cloud Services Developer at SAP, you will be an integral part of our dynamic team working on building a cutting-edge Database as a Service. Your role will involve designing, developing, and delivering a scalable, secure, and highly available database solution that caters to the evolving needs of our customers.

Your responsibilities will include collaborating closely with internal teams like product management, designers, and end-users to ensure the success of the product. Leading a team of software developers, you will contribute to the development of new products and features based on customer requirements for a wide range of use cases. Your technical expertise will be crucial in ensuring adherence to design principles, coding standards, and best practices. Troubleshooting and resolving complex issues related to cloud services performance, scalability, and security will be part of your day-to-day tasks. Additionally, you will develop and maintain automated testing frameworks to ensure the quality and reliability of the services. Staying updated with the latest advancements in cloud computing, database technologies, and distributed systems is essential for this role.

To be successful in this position, you should have a bachelor's or master's degree in computer science or equivalent, along with at least 6 years of hands-on development experience in programming languages such as Go, Python, or Java. Good expertise in data structures and algorithms, experience with cloud computing platforms like AWS, Azure, or Google Cloud, and familiarity with container and orchestration technologies such as Docker and Kubernetes are necessary qualifications. Knowledge of monitoring tools like Grafana and Prometheus, cloud security best practices, and compliance requirements will also be beneficial. Your passion for solving distributed systems challenges, experience in large-scale data architecture, data modeling, database design, and information systems implementation, coupled with excellent communication and collaboration skills, will make you a valuable asset to our team.

Join us at SAP, where we foster a culture of inclusion, health, well-being, and flexible working models to ensure that everyone, regardless of background, feels included and can perform at their best. We are committed to providing accessibility accommodations to applicants with physical and/or mental disabilities and believe in unleashing all talent to create a better and more equitable world. SAP is an equal opportunity workplace and an affirmative action employer. We value Equal Employment Opportunity and strive to create a diverse and inclusive workforce. If you are interested in applying for a role at SAP, please reach out to our Recruiting Operations Team at Careers@sap.com for any accommodation or special assistance needed during the application process. Please note that successful candidates may be required to undergo a background verification with an external vendor.

Requisition ID: 396933 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Pune, Maharashtra
On-site
Success in the role requires agility and results orientation, strategic and innovative thinking, a proven track record of delivering new customer-facing software products at scale, rigorous analytical skills, and a passion for automation and data-driven approaches to solving problems.

As a Director of eCommerce Engineering, your responsibilities include overseeing and leading the engineering project delivery for the eCommerce Global Multi-Tenant Platform. You will ensure high availability, scalability, and performance to support global business operations. Defining and executing the engineering strategy that aligns with the company's business goals and long-term vision for omnichannel retail is crucial. Establishing robust processes for code reviews, testing, and deployment to ensure high-quality deliverables is also part of your role.

You will actively collaborate with Product Management, Business Stakeholders, and other Engineering Teams to define project requirements and deliver customer-centric solutions, serving as a key point of contact for resolving technical challenges and ensuring alignment between business needs and technical capabilities. Promoting seamless communication between teams to deliver cross-functional initiatives on time and within budget is essential.

Building a strong and diverse engineering team by attracting, recruiting, and retaining top talent is a key responsibility. You will design and implement a robust onboarding program to ensure new hires are set up for success, coach team members to enhance technical expertise, problem-solving skills, and leadership abilities, fostering a culture of continuous learning and improvement, and maintain a strong pipeline of talent by building relationships with local universities, engineering communities, and industry professionals.

You will define clear, measurable goals for individual contributors and teams to ensure alignment with broader organizational objectives, conduct regular one-on-one meetings to provide personalized feedback, career guidance, and development opportunities, manage performance reviews and recognize high-performing individuals while providing coaching and support to those needing improvement, and foster a culture of accountability, where team members take ownership of their work and deliver results.

Championing the adoption of best practices in software engineering, including agile methodologies, DevOps, and automation, is crucial, as is facilitating and encouraging knowledge sharing and expertise in critical technologies such as cloud computing, microservices, and AI/ML. Evaluating and introducing emerging technologies that align with business goals, driving innovation and competitive advantage, is part of your responsibility. Developing and executing a continuous education program to upskill team members on key technologies and the Williams-Sonoma business domain is essential, including organizing training sessions, workshops, and certifications to keep the team updated on the latest industry trends and encouraging team members to actively participate in tech conferences, hackathons, and seminars to broaden their knowledge and network.

You will accurately estimate development efforts for projects, considering complexity, risks, and resource availability; develop and implement project plans, timelines, and budgets to deliver initiatives on schedule; oversee system rollouts and implementation efforts to ensure smooth transitions and minimal disruptions to business operations; and optimize resource allocation to maximize team productivity and ensure proper workload distribution.

You will champion initiatives to improve the engineering organization's culture, focusing on collaboration, transparency, and inclusivity; continuously evaluate and refine engineering processes to increase efficiency and reduce bottlenecks; promote team well-being by fostering a positive and supportive work environment where engineers feel valued and motivated; and lead efforts to make the organization a "Great Place to Work," including regular engagement activities, mentorship programs, and open communication.

You will develop a deep understanding of critical systems and processes, including platform architecture, APIs, data pipelines, and DevOps practices; provide technical guidance to the team, addressing complex challenges and ensuring alignment with architectural best practices; and partner with senior leaders to align technology decisions with business priorities and future-proof the company's systems. You will play a pivotal role in transforming Williams-Sonoma into a leading technology organization by implementing cutting-edge solutions in eCommerce, Platform Engineering, AI, ML, and Data Science, driving the future of omnichannel retail by conceptualizing and delivering innovative products and features that enhance customer experiences, and actively representing the organization in the technology community through speaking engagements, partnerships, and contributions to open-source projects.

Additional expectations include identifying opportunities for process automation and optimization to improve operational efficiency, being adaptable to perform other duties as required, addressing unforeseen challenges and contributing to organizational goals, and staying updated on industry trends and competitive landscapes to ensure the company remains ahead of the curve.

Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Founded in 1956, it is now one of the United States' largest e-commerce retailers with well-known brands in home furnishings. The India Technology Center serves as a critical hub for innovation, focusing on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. Through advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Karnataka
On-site
About Groww
Groww is a team of dedicated individuals committed to providing financial services to every Indian through a diverse product platform. Our focus is on empowering millions of customers to take control of their financial journey. At Groww, customer obsession drives our actions, ensuring that every product, design, and algorithm is tailored to meet the needs and convenience of our customers. Our team is our greatest asset, embodying qualities of ownership, customer-centricity, integrity, and a drive to constantly challenge the status quo.

Vision
Our vision at Groww is to equip every individual with the knowledge, tools, and confidence to make informed financial decisions. Through our cutting-edge multi-product platform, we aim to empower every Indian to take charge of their financial well-being. Our ultimate goal is to become the trusted financial partner for millions of Indians.

Values
Our culture at Groww has played a pivotal role in establishing us as India's fastest-growing financial services company. It fosters an environment of collaboration, transparency, and open communication, where hierarchies diminish and every individual is encouraged to bring their best self to work, shaping a promising career path. The foundational values that guide us are radical customer-centricity, an ownership-driven culture, simplicity in approach, long-term thinking, and complete transparency.

Job Requirement
As a prospective team member at Groww, you will be tasked with the following responsibilities:
- Collaborating with developers to ensure new environments meet requirements and adhere to best practices.
- Playing a crucial role in product scoping, roadmap discussions, and architecture decisions.
- Ensuring team cohesion and alignment towards common goals.
- Tracking external project dependencies effectively.
- Leading organizational improvement efforts.
- Cultivating a positive engineering culture focused on reducing technical debt.
- Overseeing the team's sprint execution.
- Demonstrating a clear understanding of the project domain and engaging in discussions with other team members effectively.
- Staying informed and skilled in the latest cloud, infrastructure, orchestration, and automation technologies.
- Reviewing the current environment for deficiencies and proposing solutions for enhancement.
- Aligning all delivered capabilities with business objectives, IT strategies, and design intent.
- Recommending new technologies for improved DevOps services.
- Driving project scope definition and backlog management.
- Collaborating with architecture and infrastructure delivery teams for consistent solution design and integration.
- Working with the engineering team to implement infrastructure lifecycle practices.

Qualifications:
- 8+ years of experience in planning and implementing cloud infrastructure and DevOps solutions.
- Minimum of 3 years of experience in leading DevOps teams.
- Proficiency in working with development teams on microservice architectures.
- Degree in Computer Science.
- Hands-on experience in designing and operating DevOps systems.
- Experience in deploying high-performance systems with robust monitoring and logging practices.
- Strong communication skills, both written and verbal.
- Expertise in cloud infrastructure solutions like Microsoft Azure, Google Cloud, or AWS.
- Experience in managing containerized environments using Docker and Mesos/Kubernetes.
- Familiarity with multiple data stores such as MySQL, MongoDB, Cassandra, Elasticsearch.
- Experience in designing observability platforms using open-source tools like Mimir, Thanos, Prometheus, and Grafana.
- Knowledge of cloud infrastructure security and Kubernetes security.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As a skilled ELK (Elasticsearch, Logstash, and Kibana) Stack Engineer, you will be responsible for designing and implementing ELK stack solutions to create and manage large-scale Elasticsearch clusters in production and DR environments. Your primary focus will be on designing end-to-end solutions that emphasize performance, reliability, scalability, and maintainability. You will collaborate with subject matter experts (SMEs) to create prototypes and adopt agile and DevOps practices to align with the product delivery lifecycle. Automation of processes using relevant tools and frameworks will be a key aspect of your role. Additionally, you will work closely with infrastructure and development teams on capacity planning and deployment strategy to achieve a highly available and scalable architecture.

Your proficiency in developing ELK stack solutions, including Elasticsearch, Logstash, and Kibana, will be crucial. Experience in upgrading Elasticsearch across major versions, managing large applications in production environments, and proficiency in Python are required. Familiarity with the Elasticsearch Painless scripting language, Linux/Unix operating systems (preferably CentOS/RHEL), Oracle PL/SQL, scripting technologies, Git, Jenkins, Ansible, Docker, ITIL, Agile, Jira, Confluence, and security best practices will be advantageous. You should be well versed in application/infrastructure logging and monitoring tools like SolarWinds, Splunk, Grafana, and Prometheus.

Your skills should include configuring, maintaining, tuning, administering, and troubleshooting Elasticsearch clusters in a cloud environment, understanding Elastic cluster architecture, design, and deployment, and handling JSON data ingest proficiently. Agile development experience, proficiency in source control using Git, and excellent communication skills to collaborate with DevOps, Product, and Project Management teams are essential. Your initiative and problem-solving abilities will be crucial in this role, along with the ability to work in a dynamic, fast-moving environment, prioritize tasks effectively, and manage time optimally.
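To make the "JSON data ingest" expectation concrete, here is a minimal, hedged sketch using the official elasticsearch Python client (8.x API). The host, credentials, index name, and mapping are illustrative assumptions, not details from the posting.

from elasticsearch import Elasticsearch

# Connect to a (hypothetical) cluster endpoint with basic auth.
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

# Create the index with an explicit mapping so fields are typed predictably.
if not es.indices.exists(index="app-logs"):
    es.indices.create(
        index="app-logs",
        mappings={"properties": {
            "ts": {"type": "date"},
            "level": {"type": "keyword"},
            "message": {"type": "text"},
        }},
    )

# Index one JSON document, then run a match query against it.
es.index(index="app-logs", document={
    "ts": "2024-01-01T00:00:00Z",
    "level": "ERROR",
    "message": "disk watermark exceeded",
})
resp = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
print(resp["hits"]["total"])

Defining the mapping up front, rather than relying on dynamic mapping, is the usual way to keep large log indices predictable and searchable at scale.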
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Maharashtra
On-site
You should have hands-on experience working with Elasticsearch, Logstash, Kibana, Prometheus, and Grafana monitoring systems. Your responsibilities will include installation, upgrade, and management of ELK, Prometheus, and Grafana systems. You should be proficient in ELK, Prometheus, and Grafana administration, configuration, performance tuning, and troubleshooting. Additionally, you must have knowledge of various clustering topologies, such as redundant assignments and active-passive setups, and experience in deploying clusters on multiple cloud platforms like AWS EC2 and Azure.

Experience in Logstash pipeline design, search index optimization, and tuning is required. You will be responsible for implementing security measures and ensuring compliance with security policies and procedures such as the CIS benchmark. Collaboration with other teams to ensure seamless integration of the environment with other systems is essential. Creating and maintaining documentation related to the environment is also part of the role.

Key skills required for this position include certification in monitoring systems like ELK, RHCSA/RHCE, experience on the Linux platform, and knowledge of monitoring tools such as Prometheus, Grafana, the ELK stack, ManageEngine, or any APM tool.

Educational qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field. The ideal candidate should have 4-7 years of relevant experience, and the work location for this position is Mumbai.
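As a small illustration of the Prometheus side of this role, here is a minimal custom-exporter sketch using the prometheus_client library; the metric names, port, and the "queue depth" being measured are illustrative assumptions only.

import random
import time

from prometheus_client import Counter, Gauge, start_http_server

QUEUE_DEPTH = Gauge("app_queue_depth", "Current depth of the ingest queue")
SCRAPE_ERRORS = Counter("app_scrape_errors_total", "Errors while sampling the queue")

def sample_queue_depth() -> int:
    # Stand-in for a real check (e.g. querying a broker or database).
    return random.randint(0, 100)

if __name__ == "__main__":
    start_http_server(9100)          # Prometheus scrapes http://<host>:9100/metrics
    while True:
        try:
            QUEUE_DEPTH.set(sample_queue_depth())
        except Exception:
            SCRAPE_ERRORS.inc()
        time.sleep(15)               # roughly align with a typical scrape interval

The exporter only exposes values; alerting thresholds and dashboards would then live in Prometheus rules and Grafana, which is where the administration and tuning work described above comes in.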
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You will be responsible for:
- Delivering complex Java-based solutions, preferably with experience in Fintech product development
- Demonstrating a strong understanding of microservices architectures and RESTful APIs
- Developing cloud-native applications and being familiar with containerization and orchestration tools such as Docker and Kubernetes
- Having experience with at least one major cloud platform like AWS, Azure, or Google Cloud, with knowledge of Oracle Cloud preferred
- Utilizing DevOps tools like Jenkins and GitLab CI/CD for continuous integration and deployment
- Understanding monitoring tools like Prometheus and Grafana, as well as event-driven architecture and message brokers like Kafka
- Monitoring and troubleshooting the performance and reliability of cloud-native applications in production environments
- Possessing excellent verbal and written communication skills and the ability to collaborate effectively within a team

About Us:
Oracle is a global leader in cloud solutions that leverages cutting-edge technology to address current challenges. With over 40 years of experience, we partner with industry leaders across various sectors, maintaining integrity amidst ongoing change. We believe in fostering innovation by empowering all individuals to contribute, striving to build an inclusive workforce that offers opportunities for everyone. Oracle careers provide a gateway to international opportunities where a healthy work-life balance is encouraged. We provide competitive benefits, including flexible medical, life insurance, and retirement options to support our employees. Furthermore, we promote community engagement through volunteer programs. We are dedicated to integrating people with disabilities into all phases of the employment process. If you require assistance or accommodation due to a disability, please contact us at accommodation-request_mb@oracle.com or call +1 888 404 2494 in the United States.
Posted 1 week ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Lead DevOps Engineer at Ameriprise India, you will have the opportunity to advocate for DevOps best practices and build scalable infrastructure to provide a world-class experience to clients. You will play a key role in influencing the DevOps roadmap to enhance speed to market.

Responsibilities:
- Implement and adopt best practices in DevSecOps, Continuous Integration, Continuous Deployment, and Continuous Testing for both server-side and client-side applications.
- Design the NextGen application strategy using cloud-native architectures.
- Build scalable and efficient cloud and on-premise infrastructure.
- Implement monitoring for automated system health checks.
- Develop CI/CD pipelines and provide guidance to teams on DevSecOps best practices.
- Collaborate with engineers to resolve issues during application instability.
- Maintain and implement change management control procedures for UAT/QA and production releases.
- Integrate test automation (UI/API) with CI/CD for comprehensive test coverage and metrics collection.
- Work with multiple distributed teams following Agile practices.

Required Qualifications:
- 7 to 10 years of industry experience in building infrastructure and release management activities.
- 4+ years of experience in DevOps practices.
- Proficiency in code/scripting for IaaS automation.
- Familiarity with Linux, Unix, and Windows operating systems.
- Experience with configuration management tools like Ansible and Terraform.
- Knowledge of containerization tools such as Vagrant, Kubernetes, and Docker.
- Understanding of container orchestration tools like Marathon, Kubernetes, EKS, or ECS.
- Experience with cloud/IaaS environments like AWS/GCP and monitoring/alerting tools like Sumo Logic, CloudWatch, and Prometheus.
- Familiarity with SCM tools like Bitbucket/Git and productivity plugins.
- Knowledge of code quality and security tools like SonarQube, Black Duck, and Veracode.
- Experience with performance tools like PageSpeed and Google Lighthouse.
- Proficiency in test and build systems such as Jenkins, Maven, and JFrog/Nexus Artifactory.
- Understanding of network topologies, hardware, load balancers (F5, Nginx), and firewalls.

Preferred Qualifications:
- Experience with CNCF and GitOps principles.
- Ability to package and deploy single-page apps following best practices.
- Knowledge of CDNs and cloud migration.
- Familiarity with load balancers and reverse proxies.

About Our Company:
Ameriprise India LLP has been providing client-based financial solutions for 125 years, helping clients plan and achieve their financial objectives. Headquartered in Minneapolis, we are a U.S.-based financial planning company with a global presence. Our focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Join our inclusive and collaborative culture that values contributions and offers opportunities for growth. Work with talented individuals who share your passion for excellence and make a difference in your community. If you are talented, driven, and seek to work for an ethical company that cares, consider creating a career at Ameriprise India LLP.

Full-Time/Part-Time: Full-time
Timings: 2:00 PM - 10:30 PM India
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
You will be joining Appice, a machine learning-based mobile marketing automation platform dedicated to helping businesses establish reliable connections with their customers. Our comprehensive solution covers various mobile aspects including acquisition, engagement, retention, and monetization. From crafting in-app messages and sending push notifications to conducting email marketing, performing mobile A/B testing, and collecting data with mobile analytics, we equip businesses with effective tools to engage their mobile audience seamlessly.

As a DevOps Engineer (OpenShift/OCP Expert) at Appice, your primary responsibility will be managing and maintaining our infrastructure in Mumbai. You will play a crucial role in optimizing system performance and ensuring the smooth deployment and operation of applications. Collaborating closely with development and operations teams, your tasks will revolve around automating processes to enhance efficiency. You will also be accountable for monitoring, troubleshooting, and resolving production issues to uphold the high availability and reliability of our platform.

Your expertise should include independently setting up a Kubernetes cluster from scratch on a RHEL 9.3/9.4/9.5 VM cluster, monitoring it via Prometheus/Grafana, administering the cluster, and deploying Spark applications on the Kubernetes cluster. Documenting these steps comprehensively, with screenshots and descriptive text for replication purposes, is essential.

Required qualifications for this role include:
- Extensive experience with Kubernetes in a bare metal environment (on-prem deployment)
- Proficiency in cloud technologies and platforms, preferably Azure
- Knowledge of containerization technologies like Docker
- Familiarity with infrastructure as code tools such as Terraform and Ansible
- Experience with monitoring and logging tools like Prometheus and Grafana
- Strong problem-solving and troubleshooting abilities
- Excellent teamwork and communication skills
- Ability to excel in a fast-paced, collaborative environment
- Hands-on experience working on production servers and independently creating production servers/clusters

If you are passionate about DevOps, infrastructure management, and ensuring the seamless operation of applications, this role at Appice offers a dynamic environment where you can leverage your skills and contribute to the success of our platform.
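As a small, hedged illustration of the cluster administration work described above, here is a read-only health-check sketch using the official kubernetes Python client; it assumes a kubeconfig already exists for a cluster like the one in the posting, and it makes no changes.

from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

# Report nodes that are not in the Ready condition.
for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    if ready != "True":
        print(f"node {node.metadata.name} not ready (Ready={ready})")

# Report pods stuck outside Running/Succeeded across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

A script like this is typically wired into a cron job or an exporter so the same signals also show up in Prometheus/Grafana rather than only on an administrator's terminal.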
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an SMO-OSS Integration Consultant specializing in 5G networks, you will be responsible for designing and implementing integration workflows between the SMO platform and OSS systems to effectively manage 5G network resources and operations. Your role will involve configuring OSS-SMO interfaces for tasks such as fault management, performance monitoring, and lifecycle management to ensure seamless data exchange between SMO and OSS for service orchestration, provisioning, and assurance.

You will play a crucial role in enabling automated service provisioning and lifecycle management using SMO, as well as developing workflows and processes for end-to-end network orchestration, including MEC, network slicing, and hybrid cloud integration. Additionally, you will be expected to conduct end-to-end testing of SMO-OSS integration in both lab and production environments, validating the integration of specific use cases such as fault detection, service assurance, and anomaly detection.

Collaboration with cross-functional teams, including OSS developers, cloud engineers, and AI/ML experts, will be an essential part of your role as you work together to define integration requirements. Furthermore, you will engage with vendors and third-party providers to ensure compatibility and system performance.

In terms of required skills and experience, you should possess strong technical expertise in SMO platforms and their integration with OSS systems, along with familiarity with OSS functions like inventory management, fault management, and performance monitoring. Hands-on experience with O-RAN interfaces such as A1, E2, and O1 is essential. Deep knowledge of 5G standards, including 3GPP, O-RAN, and TM Forum, as well as proficiency in protocols like HTTP/REST APIs, NETCONF, and YANG, is required. Programming skills in languages like Python, Bash, or similar for scripting and automation, along with experience in AI/ML frameworks and their application in network optimization, will be beneficial. Familiarity with tools like Prometheus, Grafana, Kubernetes, Helm, and Ansible for monitoring and deployment, as well as cloud-native deployments (e.g., OpenShift, AWS, Azure), is desired. Strong problem-solving abilities for debugging and troubleshooting integration and configuration issues are also important for this role.
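As a hedged illustration of the "Python for scripting and automation" requirement in a fault-management context, here is a small REST polling sketch. The endpoint paths, token, and alarm fields are entirely hypothetical placeholders, not a real SMO or OSS API.

import requests

SMO_BASE = "https://smo.example.internal"        # hypothetical SMO endpoint
OSS_BASE = "https://oss.example.internal"        # hypothetical OSS endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credential

def fetch_active_alarms() -> list[dict]:
    # Poll the (assumed) SMO fault-management interface for active alarms.
    resp = requests.get(f"{SMO_BASE}/fault/alarms", headers=HEADERS,
                        params={"state": "active"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def forward_to_oss(alarm: dict) -> None:
    # Placeholder for pushing the alarm into the OSS fault-management system.
    requests.post(f"{OSS_BASE}/api/faults", json=alarm, headers=HEADERS, timeout=10)

if __name__ == "__main__":
    for alarm in fetch_active_alarms():
        if alarm.get("severity") in ("CRITICAL", "MAJOR"):
            forward_to_oss(alarm)

In practice the same flow would be event-driven (for example via the O1 interface or a message bus) rather than polled, but the sketch shows the shape of the data exchange the posting describes.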
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
As a Technical Architect, you will be responsible for implementing and maintaining the cloud infrastructure for GDP, ensuring the smooth operation of the environment, evaluating new technologies in infrastructure automation and cloud computing, and identifying opportunities to enhance performance, reliability, and automation. Your role will involve providing DevOps capabilities to team members and customers, as well as resolving incidents and change requests effectively. In addition, you will be required to document solutions and communicate them effectively to users.

Your key responsibilities will include providing technology and architectural direction and guidance to the organization, contributing your individual strengths and personality to a cross-functional team, and continuously developing new skills and experiences.

To excel in this role, you must possess advanced experience with Linux, including scripting languages such as Bash and Python. You should also have a strong background in cloud infrastructure and services, preferably on Microsoft Azure, as well as expertise in container orchestration (e.g., Kubernetes, Nomad), log and metrics management (ELK Stack), monitoring (Prometheus), infrastructure as code, deployment and configuration automation (Ansible, Terraform), continuous integration and continuous delivery tools (e.g., GitLab CI, Jenkins, Nexus), and infrastructure security principles.

While it is essential to have the aforementioned skills, it would be advantageous if you also have experience as a DevOps professional working closely with Docker, Kubernetes, and Terraform, as well as a solid understanding of cloud architecture (particularly Azure and AWS), different cloud service models (SaaS, PaaS, IaaS), and containerization using Docker. Your commitment to active development, adherence to software development best practices (e.g., version control, automated testing, code reviews), and proficiency in coding languages like Python, Go, or Ruby will further strengthen your candidacy.

In summary, this role requires a proactive and skilled Technical Architect who can drive technological innovation, maintain a robust cloud infrastructure, and collaborate effectively with a diverse team to deliver high-quality solutions to users.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
The ideal candidate should have 2-3 years of experience in DevOps, including a mandatory 1 year of experience in DevSecOps. The role requires working onsite in Pune and following a second shift from 2 PM to 10 PM IST.

Key skills for this position include proficiency in cloud technology (Azure), automation tools (Azure Kubernetes and Terraform), CI/CD pipelines (Jenkins and Azure DevOps), a scripting language (Python), monitoring tools (Prometheus / Grafana / Splunk / ELK), and security tools (Azure Active Directory). Additionally, experience in AI and GenAI would be considered a strong advantage.

The selected candidate should be available to start immediately, or within two weeks at most.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
You will have the opportunity to work with a wide range of cutting-edge technologies in a high-volume, real-time, mission-critical environment. Your ability to skillfully manage critical situations and earn customer accolades will be crucial. You should be excited about enhancing your communication skills for interactions with CxO-level customers and providing Day-2 services to SaaS customers. Join our dynamic team of specialists who are making a global impact.

Effective issue triaging, strong debugging skills, a responsive attitude, and the ability to handle pressure are essential qualities for this role. Technical expertise in multiple areas such as Oracle Database, PL/SQL, leading application servers, popular web technologies, and REST APIs is required. Experience in banking product support or development, or working in bank IT, is highly preferred. Good communication skills for customer interactions and critical situation management are a significant advantage. Familiarity with Spring Framework, Spring Boot, Oracle JET, and microservices architecture will be beneficial.

Technical Skillset:
- Core Java, J2EE, Oracle
- Basic knowledge of performance tuning and Oracle internals, interpreting AWR reports
- Application servers: WebLogic / WebSphere
- Familiarity with additional areas is advantageous:
  - REST/Web Services/ORM
  - Logging and monitoring tools like OCI Monitoring, Prometheus, Grafana, ELK Stack, Splunk
  - Kubernetes, Docker, and container orchestration platforms
  - UI technologies: Knockout JS, OJET, RequireJS, AJAX, jQuery, JavaScript, CSS3, HTML5, JSON, SaaS

This position is at Career Level IC3.
Posted 1 week ago
3.0 - 6.0 years
12 - 22 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Work from Office
In the role of a DevOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure and CI/CD pipelines necessary to support our Generative AI projects. Furthermore, you will have the opportunity to critically assess and influence the engineering design, architecture, and technology stack across multiple products, extending beyond your immediate focus.

Responsibilities:
- Design, deploy, and manage scalable, reliable, and secure Azure cloud infrastructure to support Generative AI workloads.
- Implement monitoring, logging, and alerting solutions to ensure the health and performance of AI applications.
- Optimize cloud resource usage and costs while ensuring high performance and availability.
- Work closely with Data Scientists and Machine Learning Engineers to understand their requirements and provide the necessary infrastructure and tools.
- Automate repetitive tasks, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, and Azure Resource Manager (ARM).
- Utilize APM (Application Performance Monitoring) to identify and resolve performance bottlenecks.
- Maintain comprehensive documentation for infrastructure, processes, and workflows.

Must-Have Skills:
- Extensive knowledge of Azure services: Kubernetes, Azure App Service, Azure API Management (APIM), Application Gateway, AAD, GitHub Actions, Istio, Datadog.
- Proficiency in containerization and orchestration, and in CI/CD tooling such as Jenkins, GitLab CI/CD, and Azure DevOps.
- Knowledge of API management platforms like APIM for API governance, security, and lifecycle management.
- Expertise in monitoring and observability tools like Datadog, Loki, Grafana, and Prometheus for comprehensive monitoring, logging, and alerting solutions.
- Good scripting skills (Python, Bash, PowerShell).
- Experience with infrastructure as code (Terraform, ARM templates).
- Experience in optimizing cloud resource usage and costs using insights from Azure Cost Management and Azure Monitor metrics.
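As a small, hedged illustration of the scripting-plus-cost-governance combination above, here is a sketch using the azure-identity and azure-mgmt-resource packages; the subscription id and the "cost-center" tagging convention are placeholders, not details from the posting.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-guid>"          # placeholder

# DefaultAzureCredential picks up environment variables, a managed identity,
# or an existing Azure CLI login, whichever is available.
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Flag resource groups that lack a cost-allocation tag, as a simple starting
# point for the cost-optimization work described above.
for rg in client.resource_groups.list():
    tags = rg.tags or {}
    if "cost-center" not in tags:                # hypothetical tagging convention
        print(f"resource group {rg.name} ({rg.location}) has no cost-center tag")

The same pattern (enumerate resources, check a policy, report or remediate) generalizes to right-sizing and orphaned-resource cleanup once cost data from Azure Cost Management is joined in.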
Posted 1 week ago
4.0 - 7.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Proficiency in Python scripting, designing scalable cloud services, migrating cloud infrastructure into Kubernetes clusters, and building end-to-end deployment pipelines.

Required candidate profile: Azure cloud, Kubernetes, Prometheus, Kafka, CI/CD pipelines.
Posted 2 weeks ago
8.0 - 13.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Project description
We're seeking a strong and creative Software Engineer eager to solve challenging problems of scale and work on cutting-edge technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture is one that thrives on solving difficult problems, focusing on product engineering based on hypothesis testing to empower people to come up with ideas. In this new adventure, you will have the opportunity to collaborate with a world-class team in the field of insurance by building a holistic solution, interacting with multidisciplinary teams.

Responsibilities
As a Lead OpenTelemetry Developer, you will be responsible for developing and maintaining OpenTelemetry-based solutions. You will work on instrumentation, data collection, and observability tools to ensure seamless integration and monitoring of applications. This role involves writing documentation and promoting best practices around OpenTelemetry.

Skills
Must have
- Senior candidates with 8+ years of experience
- Experience in instrumentation: expertise in at least one programming language supported by OpenTelemetry and broad knowledge of other languages (e.g., Python, Java, Go, PowerShell, .NET)
- Passion for observability: strong interest in observability and experience in writing documentation and blog posts to share knowledge
- Experience in Java instrumentation techniques (e.g. bytecode manipulation, JVM internals, Java agents)
Secondary skills
- Telemetry: familiarity with one or more tools and technologies such as Prometheus, Grafana, and other observability platforms (e.g. Dynatrace, AppDynamics (Splunk), Amazon CloudWatch, Azure Monitor, Honeycomb)
Nice to have
-
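To ground the instrumentation skill the role centres on, here is a minimal manual-instrumentation sketch with the OpenTelemetry Python SDK; the service name and the traced function are illustrative only, and a real deployment would typically export spans to an OTLP collector rather than the console.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Register a tracer provider that batches spans and prints them locally.
provider = TracerProvider(resource=Resource.create({"service.name": "demo-service"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Each request becomes a span; attributes carry the business context.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("load_from_db"):
            pass  # stand-in for the actual work

if __name__ == "__main__":
    handle_request("A-1001")

The Java-agent side of the role mentioned above achieves the same effect without code changes, by attaching the OpenTelemetry agent to the JVM; the manual approach shown here is what the SDK work looks like in any supported language.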
Posted 2 weeks ago
5.0 - 8.0 years
4 - 8 Lacs
Coimbatore
Work from Office
Role Purpose
The purpose of this role is to support delivery through the development and deployment of tools.
- Extensive working knowledge of Splunk administration and its various components (indexer, forwarder, search head, deployment server), acting as Splunk system administrator.
- Setting up Splunk forwarding for new application tiers introduced into the environment.
- Identifying bad searches/dashboards and partnering with their creators to improve performance.
- Troubleshooting Splunk performance issues and opening support cases with Splunk.
- Monitoring the Splunk infrastructure for capacity planning and optimization.
- Experience with observability tools such as Grafana and Prometheus, and with the tenets of observability (monitoring, logging, and/or tracing), is a plus.
- Experience with a programming language (Java/GoLang/Python) is a plus.
- Experience working with Linux environments and Unix scripting.
- Experience with CI/CD: pipeline management with GitHub; Ansible is a plus.
- Installing, configuring, and managing the Datadog tool.
- Creating alerts, dashboards, and other metrics in Datadog.

Mandatory Skills: Splunk AIOps.
Experience: 5-8 years.
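Alongside forwarder-based ingestion, events can also be pushed to Splunk over the HTTP Event Collector (HEC). Here is a minimal, hedged sketch of that path; the host, token, index, and sourcetype are placeholder assumptions for illustration.

import json
import requests

HEC_URL = "https://splunk.example.internal:8088/services/collector/event"
HEC_TOKEN = "<hec-token>"                        # placeholder token

def send_event(event: dict, index: str = "app_logs", sourcetype: str = "json") -> None:
    # HEC expects the payload wrapped in an "event" field; index/sourcetype are optional.
    payload = {"event": event, "index": index, "sourcetype": sourcetype}
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    send_event({"level": "ERROR", "message": "forwarder queue blocked"})

For bulk log collection the universal forwarder remains the usual choice; HEC is handy for application-side or scripted events like the one shown here.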
Posted 2 weeks ago
3.0 - 8.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Project description
The project is in the Treasury domain, which is supported by the IT team. The platform operates 24/7, supporting teams in Sydney. The platform undergoes constant change as it provides services to a number of business stakeholders. Our team is composed of engineers and technology leaders who bring the right mix of skills to enable this transformation. We also work very closely with our business and operations colleagues to support these services, which are critical to the Australian and global economy. Our team is also responsible for driving engineering governance, continuous delivery, and key technological simplification pillars such as cloud and payments-based architecture.

Responsibilities
- Design and implement automation for production support activities, including alert triage and resolution flows.
- Reduce testing cycle time from seven to three weeks through creative automation strategies.
- Explore and implement deployment automation and streamline QRM-related alerts.
- Build intuitive UIs for visualizing product and support status, if needed.
- Partner with existing team members to reverse-engineer domain knowledge into reusable automation components.
- Identify repetitive manual tasks and apply DevOps practices to eliminate them.

Skills
Must have
- Minimum 3+ years of experience as a DevOps Automation Engineer
- AWS EC2, Docker, Kubernetes
- PowerShell, Python (or similar scripting tools)
- Microservices & RESTful API development
- ASP.NET Core, C#, or alternatively Java/Python with React
- CI/CD tools: TeamCity, GitHub Actions, Artifactory, Octopus Deploy (or equivalents)
Highly preferred
- Instrumentation & observability tools: Grafana, Prometheus, Splunk, Dynatrace, or AppDynamics
Nice to have
- Exposure to security tools like SonarQube, Checkmarx, or Snyk
- Familiarity with AI for business process automation
- Basic SQL and schema design (MSSQL or Oracle)
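As an illustrative sketch of the alert-triage automation this role describes, here is a small routing function: take an incoming alert payload and decide whether it can be auto-remediated or must be escalated. The alert fields and the runbook mapping are hypothetical, not part of the project's actual tooling.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "grafana", "splunk"
    name: str          # e.g. "disk_usage_high"
    severity: str      # "info" | "warning" | "critical"

# Hypothetical mapping from alert name to an automated remediation action.
RUNBOOKS = {
    "disk_usage_high": "rotate_and_compress_logs",
    "service_heartbeat_missed": "restart_service",
}

def triage(alert: Alert) -> str:
    if alert.severity == "critical":
        return "page_on_call"                    # never auto-resolve critical alerts
    action = RUNBOOKS.get(alert.name)
    return f"auto_remediate:{action}" if action else "create_ticket"

if __name__ == "__main__":
    print(triage(Alert("grafana", "disk_usage_high", "warning")))
    print(triage(Alert("splunk", "unknown_alert", "warning")))

The value of encoding triage rules like this is that they become testable and reviewable, which is a prerequisite before letting any remediation run unattended.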
Posted 2 weeks ago
5.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Project description
The Institutional Banking Data Platform (IDP) is a state-of-the-art cloud platform engineered to streamline the data ingestion, transformation, and data distribution workflows that underpin Regulatory Reporting, Market Risk, Credit Risk, Quants, and Trader Surveillance. In your role as Software Engineer, you will be responsible for ensuring the stability of the platform, performing maintenance and support activities, and driving innovative process improvements that add significant business value.

Responsibilities
- Problem solving: advanced analytical and problem-solving skills to analyse complex information for key insights and present it as meaningful information to senior management
- Communication: excellent verbal and written communication skills, with the ability to lead discussions with varied stakeholders across levels
- Risk mindset: you are expected to proactively identify and understand, openly discuss, and act on current and future risks

Skills
Must have
- Bachelor's degree in computer science, engineering, or a related field/experience.
- 5+ years of proven experience as a Software Engineer or similar role, with a strong track record of successfully maintaining and supporting complex applications.
- Strong hands-on experience with Ab Initio GDE, including Express>It, Control Centre, Continuous>Flows.
- Should have handled and worked with XML, JSON, and Web APIs.
- Strong hands-on experience in SQL.
- Hands-on experience in any shell scripting language.
- Experience with batch and streaming-based integrations.
Nice to have
- Knowledge of CI/CD tools such as TeamCity, Artifactory, Octopus, Jenkins, SonarQube, etc.
- Knowledge of AWS services including EC2, S3, CloudFormation, CloudWatch, RDS, and others.
- Knowledge of Snowflake and Apache Kafka is highly desirable.
- Experience with configuration management and infrastructure-as-code tools such as Ansible, Packer, and Terraform.
- Experience with monitoring and observability tools like Prometheus/Grafana.
Posted 2 weeks ago
8.0 - 13.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Project description
We are seeking a highly skilled and motivated DevOps Engineer with 8+ years of experience to join our engineering team. You will work in a collaborative environment, automating and streamlining processes related to infrastructure, development, and deployment. As a DevOps Specialist, you will help implement and manage CI/CD pipelines, configure on-prem Windows OS infrastructure, and ensure the reliability and scalability of our systems. The system runs on Windows with Microsoft SQL.

Responsibilities
- CI/CD pipeline management: design from scratch, implement, and manage automated build, test, and deployment pipelines to ensure smooth code integration and delivery.
- Infrastructure as Code (IaC): develop and maintain infrastructure using tools for automated provisioning and management.
- System monitoring & maintenance: set up monitoring systems for production and staging environments, analyze system performance, and provide solutions to increase efficiency. Deploy and manage configuration using fit-for-purpose tools and scripts with version control, CI, etc.
- Collaboration: work closely with software developers, QA teams, and IT staff to define, develop, and improve DevOps processes and solutions.
- Automation & scripting: create and maintain custom scripts to automate manual processes for deployment, scaling, and monitoring.
- Security: implement security practices and ensure compliance with industry standards and regulations related to cloud infrastructure.
- Troubleshooting & issue resolution: diagnose and resolve issues related to system performance, deployments, and infrastructure.
- Drive DevOps thought leadership and delivery experience for the offshore client delivery team.
- Implement DevOps best practices based on developed patterns.

Skills
Must have
- Total 9 to 12 years of experience as a DevOps Engineer
- 3+ years of experience in AWS
- Excellent knowledge of DevOps toolchains like GitHub Actions / GitHub Copilot
- Self-starter, capable of driving solutions from 0 to 1 and able to deliver projects from scratch
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Working understanding of platform security constructs
- Good exposure to monitoring tools/dashboards like Grafana, Obstack, or similar monitoring solutions
- Experience working with Jira and Agile SDLC practices
- Expert knowledge of CI/CD
- Excellent written and verbal communication skills, strong collaboration, and teamwork skills
- Proficiency in scripting languages like Python and PowerShell, and database knowledge of MS SQL
- Experience with Windows or IIS, including installation, configuration, and maintenance
- Strong troubleshooting skills, with the ability to think critically, work under pressure, and resolve complex issues
- Excellent communication skills with the ability to work cross-functionally with development, operations, and IT teams
- Security best practices: knowledge of security protocols, network security, and compliance standards
- Adaptability to new learning and strong attention to detail, with a proactive approach to identifying issues before they arise
Nice to have
- Cloud certifications: AWS Certified DevOps Engineer, Google Cloud Professional DevOps Engineer, or equivalent.
- IaC pipelines and best practices
- Snyk, Sysdig knowledge
- Experience with Windows OS, SRE, and monitoring with Prometheus
Posted 2 weeks ago
10.0 - 15.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Network Systems Engineering Technical Leader - Routing, Switching, Nexus, VPC, VDC, VLAN, VXLAN, BGP - Design and large-scale network deployment

Meet the Team
We are the Data Center Network Services team within Cisco IT that supports network services for Cisco Engineering and business functions worldwide. Our mission is simple: build the network of the future that is adaptable and agile on Cisco's networking solutions. Cisco IT networks are deployed, monitored, and managed with a DevOps approach to support rapid application changes. We invest in ground-breaking technologies that enable us to deliver services in a fast and reliable manner. The team culture is collaborative and fun, where thinking creatively and tinkering with new ideas are encouraged.

Your Impact
You will design, develop, test, and deploy DC network capabilities within the Data Center Network. You are engaging and comfortable collaborating with fellow engineers across multiple disciplines as well as internal clients. You will create innovative, high-quality capabilities enabling our clients to have the best possible experience.

Minimum Requirements
- Bachelor of Engineering or Technology with a minimum of 10 years of experience in designing, deploying, operating, and handling scalable DC network infrastructure (using Nexus OS)
- Experience in technologies like Routing, Switching, Nexus, VPC, VDC, VLAN, VXLAN, BGP
- Experience handling incident, problem, and organisational change management
- Familiarity with DevOps principles; comfortable with Agile practices

Preferred Qualifications
- CCNP or CCIE/DE
- Experience with SONiC NOS, including:
  - Basic configuration (both CLI and config_db.json)
  - Network troubleshooting with SONiC
  - QoS monitoring and troubleshooting, particularly for RoCEv2
- Experience with BGP routing
- Desirable to have experience with L3 fabrics
- Desirable to have familiarity with Nvidia and Linux networking
- Desirable to have experience with Python, Prometheus, Splunk, and Grafana
- Desirable to have experience with Cisco Firepower firewalls (FTD/FMC)

Nice to have Qualifications
- Experience with Nexus Dashboard Fabric Controller for building and troubleshooting networks
- Experience with VXLAN-based networks and troubleshooting
Posted 2 weeks ago
6.0 - 11.0 years
17 - 22 Lacs
Pune
Work from Office
Job Title: Azure Kubernetes Architect and Administrator (L3 Capacity, Managed Services)

Key Responsibilities:
- Azure Kubernetes Service (AKS): architect, manage, and optimize Kubernetes clusters on Azure, ensuring scalability, security, and high availability.
- Azure Infrastructure and Platform Services:
  - IaaS: design and implement robust Azure-based infrastructure for critical BFSI applications.
  - PaaS: optimize the use of Azure PaaS services, including App Services, Azure SQL Database, and Service Fabric.
- Security & Compliance: ensure adherence to BFSI industry standards by implementing advanced security measures (e.g., Azure Security Center, role-based access control, encryption protocols).
- Cost Optimization: analyze and optimize Azure resource usage to minimize costs while maintaining performance and compliance standards.
- Automation: develop CI/CD pipelines and automate workflows using tools like Terraform, Helm, and Azure DevOps.
- Process Improvements: continuously identify areas for operational enhancements in line with BFSI-specific needs.
- Collaboration: partner with cross-functional teams to support deployment, monitoring, troubleshooting, and the lifecycle management of applications.

Required Skills:
- Expertise in Azure Kubernetes Service (AKS), Azure IaaS and PaaS, and container orchestration.
- Strong knowledge of cloud security principles and tools such as Azure Security Center and Azure Key Vault.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Familiarity with cost management tools such as Azure Cost Management + Billing.
- Experience in monitoring with Prometheus and Grafana.
- Understanding of BFSI compliance regulations and standards.
- Process improvement experience using frameworks like Lean, Six Sigma, or similar methodologies.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Certifications like Azure Solutions Architect, Certified Kubernetes Administrator (CKA), or Certified Azure DevOps Engineer are advantageous.
- Minimum 5 years of hands-on experience in Azure and Kubernetes environments within BFSI or similar industries.
- Expertise in AKS, Azure IaaS, PaaS, and security tools like Azure Security Center.
- Proficiency in scripting (Python, Bash, PowerShell).
- Strong understanding of BFSI compliance standards.
- Experience with monitoring tools such as Prometheus, Grafana, New Relic, Azure Log Analytics, and ADF.
- Skilled in cost management using Azure Cost Management tools.
- Knowledge of ServiceNow ITSM, Freshworks ITSM, change management, team leadership, and process improvement frameworks like Lean or Six Sigma.
Posted 2 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Gurugram, India
Work from Office
Position Summary:
A Senior Full Stack Software Developer designs, develops, and maintains both front-end and back-end systems for scalable, secure, and high-performance web applications. They lead technical projects, mentor junior developers, and ensure best practices across the development lifecycle.

How You'll Make an Impact (responsibilities of the role)
- Build front-end (React, Angular, Vue.js) and back-end (Node.js, Python, Java) systems.
- Design and optimize databases (SQL, NoSQL) and APIs (REST, GraphQL).
- Implement cloud solutions (AWS, Azure) and DevOps tools (Docker, Kubernetes).
- Write clean, maintainable code and ensure testing (unit, integration, CI/CD).
- Collaborate with teams and provide technical leadership.

What You Bring (required qualifications and skills)

Must-Have Qualifications
- Education: Bachelor's/Master's in computer science or related fields.
- 5-10+ years of professional experience in software development with a broad and deep understanding of modern systems
- Strong DevOps mindset and hands-on experience with Docker, VMs, and container orchestration
- At least one cloud platform (AWS, Azure, or GCP)
- CI/CD pipelines and Git-based workflows
- Infrastructure as Code (e.g., Terraform, Pulumi)
- Solid networking fundamentals (DNS, routing, firewalls, etc.)
- Proven experience in API design, data modeling, and authn/authz mechanisms such as OAuth2, OIDC, or similar
- Comfortable with backend development in at least one modern language: Go, Rust, C#, or similar
- Strong frontend development skills using modern frameworks: React, Angular, Vue, or Web Components
- Good understanding of design systems, CSS, and responsive UI
- Ability to learn new languages and tools quickly and independently
- Experience working in cross-functional teams and agile environments

Should-Have Qualifications
- Contributions to or experience with open-source projects
- Cross-disciplinary understanding of UX/UI design principles
- Familiarity with testing frameworks and quality assurance practices
- Experience with monitoring and observability tools (e.g., Prometheus, Grafana)

Nice-to-Have
- Experience with hybrid or distributed architecture
- Exposure to WebAssembly, micro frontends, or edge computing
- Background in security best practices for web and cloud applications
Posted 2 weeks ago
9.0 - 14.0 years
37 - 45 Lacs
Pune
Work from Office
Job Title: IT Application Owner - AVP
Location: Pune, India

Role Description
At the Service, Solutions and AI Domain, our mission is to revolutionize our Private Bank process landscape by implementing holistic, front-to-back process automation. We are committed to enhancing efficiency, agility, and innovation, with a keen focus on aligning every step of our processes with the customers' needs and expectations. Our dedication extends to driving innovative technologies, such as AI & workflow services, to foster continuous improvement. We aim to deliver best-in-class solutions across products, channels, brands, and regions, thereby transforming the way we serve our customers and setting new benchmarks in the industry.

The IT Application Owner (ITAO) is the custodian of the application and is responsible for applying and enabling, throughout the application's life cycle, the IT policies and procedures, with specific consideration to IT management and Information Security. The ITAO ensures a clear separation of responsibility within the project, aimed at achieving safe and secure running of the application and compliance with regulations, policies, and standards. The ITAO is responsible for application documentation, application infrastructure reliability and compliance, and is usually the IT SPOC for audit initiatives. Join us in our journey to redefine banking with AI and service solutions into the future.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Ensure Application Stability and Performance: Oversee the structural availability, stability, and performance of applications in your domain (PRD, UAT, SIT).
- Organize Level 3 Support and Align Level 2 Support: Collaborate with development teams to organize Level 3 support for the application and align with Service Operations (SO) to organize Level 2 support, or set up L2 support in case it cannot be provided by SO.
- Policy Compliance: Ensure policy compliance and take ownership of projects necessary for compliance, such as security monitoring.
- Technical Roadmap Management: Manage the technical roadmap, including technology compliance, and estimate/budget capacity needs.
- AI Risk Management: Identify and proactively manage risks generated from AI usage in the bank, ensuring responsible AI practices are followed.
- Project Participation: Participate in Domain Expert Meetings and contribute to project cost estimates, including "Run the Bank" and total application cost.
- Define Non-Functional Requirements: Ensure project teams incorporate non-functional requirements in their projects.
- Validate Deliverables: Validate deliverables for all projects/changes, such as test plans and analysis documents.
- Knowledge and Documentation: Ensure the availability of all necessary application/service knowledge and documentation.
- Audit Collaboration: Work closely with audit teams to avoid delays or escalations related to non-compliance.
- Infrastructure Responsibility: Take responsibility for access and other infrastructure-related topics.
- Stakeholder Engagement: Engage and manage multiple stakeholders to ensure regulatory compliance, smooth operations, and a sound development lifecycle. This includes Business, Security, Development, Test (QA), IT Support, Finance, external Vendors, and Architecture.
- Compliance with IT Policies: Ensure the application is compliant with the company's IT policies based on regulatory requirements.
- Service Availability and Stability: Be accountable for high service availability and stability, while managing projects for maintenance or enhancements.
- DevOps Facilitation: Facilitate a DevOps approach by setting up monitoring, configuring deployment-automation tools, preparing software packages, raising and implementing changes, and managing certificates and software licenses.
- Technical Performance Monitoring: Monitor the technical performance of applications (response times, error rates, memory/storage usage) and address issues (an illustrative monitoring sketch follows at the end of this posting).
- Strategic Planning: Conduct strategic planning for the applications in scope.
- Change Management: Ensure changes to applications are fully aligned with DB standards and regulations, guaranteeing system stability and smooth transitions to production.
- Technical Project Management: Manage technical projects to maintain required service levels.
- Go-Live Transitions: Contribute to go-live transitions.
- Operational Collaboration: Collaborate with support entities to ensure proper operational levels for the application.
- Capacity Management: Follow up on infrastructure capacity management.

Your skills and experience
- ITIL Framework: Knowledge of and certification in the ITIL framework.
- IT Service Management and Cloud Technologies: Experience in IT service management processes and cloud technologies.
- Educational Background: A bachelor's degree in computer science or equivalent.
- Distributed Development Teams: Experience working with distributed development teams - especially between Europe (Germany and Romania) and India - and familiarity with the Software Development Life Cycle (SDLC).
- Communication Skills: Excellent written and verbal communication skills at all levels, including senior management.
- Audit and Compliance: Experience with audit and compliance, AI/ML ethics & regulation, continuous integration, and DevOps tools.
- Cloud Knowledge: High-level knowledge of cloud (IaaS, PaaS, SaaS) and the ability to work to tight deadlines.
- DevOps Tools: Skills in utilizing GitHub CI, Jenkins, TeamCity, and Ansible, and experience building CI/CD pipelines.
- Source Control and Monitoring Tools: In-depth knowledge of source control (preferably GitHub, Bitbucket) and working knowledge of environment monitoring tools such as Prometheus, Grafana, Geneos, AppDynamics, and New Relic.
- Infrastructure as Code: Knowledge of Infrastructure as Code (Terraform), SQL, and relational databases.
- Enterprise-Scale Development: Basic exposure to delivering good-quality code within enterprise-scale development and hands-on experience with cloud security and operations.
- Financial Services: Knowledge gained in financial services environments and practical knowledge of database systems and structures.
- AI and Data-Centric Applications: Experience in managing AI and data-centric applications.
- Unix/Linux: Strong knowledge of Unix/Linux, including commands and shell scripting.
- Analytical and Conceptual Thinking: Excellent analytical and conceptual thinking skills.
- Agile Delivery Teams: Strong independence and initiative, and the ability to work in agile delivery teams.

How we'll support you
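As an illustration of the technical performance monitoring mentioned above, the following is a minimal sketch of pulling an application's error rate and p95 response time from Prometheus, one of the monitoring tools listed in the skills section. The server address, metric names, and thresholds are hypothetical placeholders, not specifics from this role.

```python
# Minimal sketch: query Prometheus for an application's error rate and p95 latency.
# The server URL, metric names, and thresholds below are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def instant_query(promql: str) -> float:
    """Run an instant PromQL query and return the first sample's value."""
    params = urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(f"{PROMETHEUS_URL}/api/v1/query?{params}") as resp:
        payload = json.load(resp)
    result = payload["data"]["result"]
    if not result:
        raise ValueError(f"no samples returned for query: {promql}")
    return float(result[0]["value"][1])

if __name__ == "__main__":
    # Share of HTTP 5xx responses over the last 5 minutes (hypothetical metric names).
    error_rate = instant_query(
        'sum(rate(http_requests_total{status=~"5.."}[5m]))'
        " / sum(rate(http_requests_total[5m]))"
    )
    # 95th-percentile response time over the last 5 minutes, in seconds.
    p95_latency = instant_query(
        "histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))"
    )
    print(f"error rate: {error_rate:.2%}, p95 latency: {p95_latency:.3f}s")
    if error_rate > 0.01 or p95_latency > 0.5:  # hypothetical thresholds
        print("thresholds breached - investigate or raise an alert")
```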
Posted 2 weeks ago
10.0 - 16.0 years
45 - 50 Lacs
Bengaluru
Work from Office
Job Title: Senior Site Reliability Engineer - Channels, VP
Location: Bangalore, India

Role Description
DWS Technology in India: DWS Technology is a global team of technology specialists, spread across multiple trading hubs and tech centres. We have a strong focus on promoting technical excellence - our engineers work at the forefront of financial services innovation using cutting-edge technologies. Our India location is the most recent addition to our global network of tech centres and is growing strongly. We are committed to building a diverse workforce and to creating excellent opportunities for talented engineers and technologists. Our tech teams and business units use agile ways of working to create #GlobalHausbank solutions from our home market.

DWS Digital Products and Channels: The DWS Digital Products and Channels team orchestrates internal and external API products, portals, enabling services, and embedded finance products at a global level. The team is a highly skilled and innovative group dedicated to developing cutting-edge solutions and services that leverage the power of APIs to drive digital transformation and enhance the asset management experience for clients worldwide. As a Senior Site Reliability Engineer, you will be responsible for SRE activities across platforms, portals, and enabling services, together with other SREs and engineers.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
As Senior Site Reliability Engineer, you
- Orchestrate and contribute to SRE activities across API platforms and integration services
- Introduce engineering disciplines that combine software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems
- Implement the core of DevOps with specific principles and practices, focusing on what to improve for reliability and how to improve it
- Establish and support capacity planning procedures and keep a close eye on SLIs and SLOs for production readiness and in the live environment (a short error-budget sketch follows at the end of this posting)
- Coordinate with the rest of the division and the teams working on different layers of the application and infrastructure, with full commitment to collaborative problem solving

For Infrastructure & Service Management, you
- Engage in and improve the whole lifecycle of services - from inception and design through deployment, operation, and refinement
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health
- Scale systems sustainably through mechanisms like automation; evolve systems by pushing for changes that improve reliability and velocity
- Develop and enforce policies, standards, and guidelines for site reliability
- Automate application and infrastructure deployment activities to production environments

For Incident & Problem Management, you
- Perform troubleshooting and emergency response
- Investigate root causes and suggest solutions
- Increase productivity by leading blameless post-mortems

For Application Maintenance, you
- Work collaboratively with Product Owners and Engineers to run reliable services
- Configure and maintain applications and monitoring
- Identify business objects for monitoring
- Track system performance and capacity, and use your experience to create effective strategies for maintaining and improving system performance and availability

For Operational Continuous Improvement, you
- Identify issues and optimization potential and introduce related user stories
- Support with automation know-how to reduce the risk of bad changes
- Identify, design, develop, and deploy tools and processes to monitor, maintain, and report site performance and availability

For Service Onboarding, you
- Support your Squad and your Chapter population in onboarding and promotions

Your skills and experience
- Expert hands-on experience with on-premises environments
- Expert hands-on experience with cloud ecosystems running on Google Cloud
- Expert hands-on experience with Docker/Kubernetes operations on GKE or similar technology
- Expert experience with automated infrastructure provisioning based on Terraform/Terragrunt, Terraform Enterprise, and Ansible
- Advanced hands-on experience with Continuous Integration / Continuous Deployment (GitHub) and patterns for CI/CD pipelines
- Advanced hands-on experience with monitoring tools like Prometheus, Grafana, and Kibana, and alerting tools like OpsGenie, New Relic, Datadog, Splunk, and Google Operations Suite (Stackdriver)
- Very good knowledge of security capabilities (TLS, OAuth2, KMS, Vault, Admission Controllers, Let's Encrypt, or similar technologies)
- Very good understanding of microservice architectures and experience with API management using Apigee or WSO2
- Experience in software development in at least one language (Java, JavaScript, Python, Go)
- Good knowledge of Software Development Life Cycle processes based on related tools such as TeamCity, Bitbucket, Artifactory, SonarQube, VeraCode, Crucible, JIRA, Confluence, and ServiceNow

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We at DWS are committed to creating a diverse and inclusive workplace, one that embraces dialogue and diverse views, and treats everyone fairly to drive a high-performance culture. The value we create for our clients and investors is based on our ability to bring together various perspectives from all over the world and from different backgrounds. It is our experience that teams perform better and deliver improved outcomes when they are able to incorporate a wide range of perspectives. We call this #ConnectingTheDots.
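As a brief illustration of the SLI/SLO and error-budget focus described in the responsibilities above, here is a minimal sketch of the error-budget arithmetic an SRE typically tracks. The availability objective, evaluation window, and request counts are hypothetical numbers chosen only for the example.

```python
# Sketch of SLO error-budget arithmetic; the objective, window, and counts are hypothetical.
SLO_TARGET = 0.999          # 99.9% availability objective
WINDOW_DAYS = 30            # rolling evaluation window

total_requests = 120_000_000    # requests served in the window (hypothetical)
failed_requests = 84_000        # requests that violated the SLI (hypothetical)

availability = 1 - failed_requests / total_requests
error_budget = 1 - SLO_TARGET                         # allowed unreliability, as a ratio
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"availability over {WINDOW_DAYS} days: {availability:.4%}")
print(f"error budget consumed: {budget_consumed:.1%}")

# Consuming the whole budget before the window closes means the SLO is missed;
# sustained high consumption is a common trigger for alerting and for pausing risky changes.
if budget_consumed >= 1.0:
    print("error budget exhausted - consider slowing down releases")
```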
Posted 2 weeks ago