6.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility
Experience: 6-7 Years

Job Responsibilities
Support mission-critical financial applications handling sensitive Personally Identifiable Information (PII) data
Monitor production jobs and ensure their scheduled execution
Create Standard Operating Procedure (SOP) documents
Triage production issues and provide solutions as per SOP documents
Document problems and solutions in the ticketing tools and knowledge base
Communicate appropriately and effectively with business and technical teams
Work well under pressure with a methodical approach to problem solving
Prioritize multiple tasks and manage/assign tasks between team members
Monitor application health and perform application/service restarts as per SOPs (see the sketch after this posting)
Monitor, edit, and create scheduler jobs
Deploy and manage applications and services on Microsoft Windows and cloud environments

Primary skills that are must-have for the candidate:
Supporting Java/J2EE-based applications involving microservices
Strong documentation, communication (written and verbal), and analytical skills
Experience with relational databases like MySQL, Oracle, or SQL Server: writing queries and understanding DB processes
Motivated self-starter, able to work independently or as part of a team
Experience with product lifecycle and deployments
Experience working in Agile projects
Time management and project management skills

Secondary skills that are good to have for the right candidate:
Any job scheduler - Autosys, Visual Cron, Control-M, Ansible, or cron
Any log analyzer - Splunk, Dynatrace, or Datadog
Any application performance monitoring tool like New Relic, AppDynamics, or Dynatrace
Cloud, microservices, message queues, NoSQL, CI/CD

Work Hours
Should be able to work with distributed teams in different time zones, onshore and offshore
Should be able to work in a 24x7 model with rolling shifts
Open to supporting the team during weekends/holidays as per need
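To illustrate the kind of SOP-driven health monitoring and service-restart task described above, here is a minimal Python sketch. The health endpoint URL, service name, and retry count are hypothetical placeholders, not details from the posting, and the restart command would follow whatever the actual SOP specifies.

```python
import subprocess
import sys
import requests

HEALTH_URL = "http://localhost:8080/actuator/health"  # hypothetical health endpoint
SERVICE_NAME = "payments-app"                          # hypothetical service name
RETRIES = 3

def is_healthy() -> bool:
    """Return True if the application answers its health check."""
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def restart_service() -> None:
    """Restart the service per SOP (systemctl shown; a Windows host would use `sc`)."""
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)

if __name__ == "__main__":
    if any(is_healthy() for _ in range(RETRIES)):
        print(f"{SERVICE_NAME} is healthy")
        sys.exit(0)
    print(f"{SERVICE_NAME} failed {RETRIES} health checks; restarting per SOP")
    restart_service()
```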
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction
IBM Infrastructure division builds Servers, Storage, Systems and Cloud Software which are the building blocks for the next-generation IT infrastructure of enterprise customers and data centers. IBM Servers provide best-in-class reliability, scalability, performance, and end-to-end security to handle mission-critical workloads and provide seamless extension to hybrid multicloud environments.
India Systems Development Lab (ISDL) is part of the worldwide IBM Infrastructure division. Established in 1996, the ISDL Lab is headquartered in Bengaluru, with presence in Pune and Hyderabad as well. ISDL teams work across the IBM Systems stack including Processor development (Power and IBM Z), ASICs, Firmware, Operating Systems, Systems Software, Storage Software, Cloud Software, Performance & Security Engineering, System Test etc. The lab also focuses on innovations, thanks to the creative energies of the teams. The lab has contributed 400+ patents in cutting-edge technologies and inventions so far. ISDL teams also ushered in new development models such as Agile, Design Thinking and DevOps.

Your Role And Responsibilities
As a Software Engineer at IBM India Systems Development Lab (IBM ISDL), you will get an opportunity to work on all the phases of product development (Design/Development, Test and Support) across core Systems technologies including Operating Systems, Firmware, Systems Software, Storage Software & Cloud Software.

As a Software Developer at ISDL:
You will be focused on development of IBM Systems products, interfacing with development & product management teams and end users, cutting across geos. You would analyze product requirements, determine the best course of design, implement/code the solution and test across the entire product development life cycle. One could also work on Validation and Support of IBM Systems products.
You get to work with vibrant, culture-driven and technically accomplished teams working to create world-class products and deployment environments, delivering an industry-leading user experience for our customers. You will be valued for your contributions in a growing organization with broader opportunities.
At ISDL, work is more than a job - it's a calling: To build. To design. To code. To invent. To collaborate. To think along with clients. To make new products/markets. Not just to do something better, but to attempt things you've never thought were possible. Are you ready to lead in this new era of technology and solve some of the most challenging problems in Systems Software technologies? If so, let's talk.

Required Technical And Professional Expertise
Required Technical Expertise:
Knowledge of Operating Systems, OpenStack, Kubernetes, container technologies, cloud concepts, security, virtualization management, REST APIs, DevOps (Continuous Integration) and microservice architecture.
Strong programming skills in C, C++, Golang, Python, Ansible, Shell scripting.
Comfortable working with GitHub and leveraging open source tools.

AI Software Engineer:
As a Software Engineer with IBM AI on Z Solutions teams, you will get the opportunity to get involved in delivering best-in-class Enterprise AI Solutions on IBM Z and support IBM customers in adopting AI technologies/solutions into their businesses by building ethical, secure, trustworthy and sustainable AI solutions on IBM Z. You will be part of end-to-end solutions working along with technically accomplished teams.
You will be working as a full-stack developer, starting from understanding client challenges to providing solutions using AI.
Required Technical Expertise:
Knowledge of AI/ML/DL, Jupyter Notebooks, Linux systems, Kubernetes, container technologies, REST APIs, UI skills; strong programming skills in C, C++, R, Python, Golang, and well versed with the Linux platform.
Strong understanding of Data Science and of modern tools and techniques to derive meaningful insights.
Understanding of Machine Learning (ML) frameworks like scikit-learn, XGBoost etc.
Understanding of Deep Learning (DL) frameworks like TensorFlow, PyTorch.
Understanding of Deep Learning Compilers (DLC).
Natural Language Processing (NLP) skills.
Understanding of different CPU architectures (little endian, big endian).
Familiar with open source databases: PostgreSQL, MongoDB, CouchDB, CockroachDB, Redis; data sources, connectors, data preparations, data flows; integrate, cleanse and shape data.

IBM Storage Engineer:
As a Storage Engineer Intern in a Storage Development Lab, you would support the design, testing, and validation of storage solutions used in enterprise or consumer products. This role involves working closely with hardware and software development teams to evaluate storage performance, ensure data integrity, and assist in building prototypes and test environments. The engineer contributes to the development lifecycle by configuring storage systems, automating test setups, and analyzing system behavior under various workloads. This position is ideal for individuals with a foundational understanding of storage technologies and a passion for hands-on experimentation and product innovation.
Preferred Technical Expertise:
Practical working experience with Java, Python, Golang, ReactJS; knowledge of AI/ML/DL, Jupyter Notebooks, storage systems, Kubernetes, container technologies, REST APIs, UI skills; exposure to cloud computing technologies such as Red Hat OpenShift, microservices architecture, Kubernetes/Docker deployment.
Basic understanding of storage technologies: SAN, NAS, DAS.
Familiarity with RAID levels and disk configurations.
Knowledge of file systems (e.g., NTFS, ext4, ZFS).
Experience with operating systems: Windows Server, Linux/Unix.
Basic networking concepts: TCP/IP, DNS, DHCP.
Scripting skills: Bash, PowerShell, or Python (for automation).
Understanding of backup and recovery tools (e.g., Veeam, Commvault).
Exposure to cloud storage: AWS S3, Azure Blob, or Google Cloud Storage.

Linux Developer:
As a Linux developer, you would be involved in the design and development of advanced features in the Linux OS for the next generation of server platforms from IBM, in collaboration with the Linux community. You collaborate with teams across the hardware, firmware, and upstream Linux kernel community to deliver these capabilities.
Preferred Technical Expertise:
Excellent knowledge of the C programming language.
Knowledge of Linux kernel internals and implementation principles.
In-depth understanding of operating systems concepts, data structures, processor architecture, and virtualization.
Experience working on open-source software using tools such as git and associated community participation processes.

Hardware Management Console (HMC) / NovaLink Software Developer:
As a Software Developer in the HMC / NovaLink team, you will work on design, development, and test of the Management Console for IBM Power Servers. You will be involved in user-centric Graphical User Interface development and backend development for the server and virtualization management solution in an Agile environment.
Preferred Technical Expertise:
Strong programming skills in Core Java 8, C/C++.
Web development skills in JavaScript (frameworks such as Angular.js, React.js etc.), HTML, CSS and related technologies.
Experience in developing rich HTML applications.
Web UI frameworks: Vaadin, React JS and UI styling libraries like Bootstrap/Material.
Knowledge of J2EE, JSP, RESTful web services and GraphQL APIs.

AIX Developer:
AIX is a proprietary Unix operating system which runs on IBM Power Servers. It's a secure, scalable, and robust open standards-based UNIX operating system designed to meet the needs of enterprise-class infrastructure. As an AIX developer, you would be involved in development, test or support of AIX OS features, or open source software porting/development for the AIX OS.
Preferred Technical Expertise:
Strong expertise in systems programming (C, C++).
Strong knowledge of operating systems concepts, data structures, algorithms.
Strong knowledge of Unix/Linux internals (signals, IPC, shared memory, etc.).
Expertise in developing/handling multi-threaded applications.
Good knowledge in any of the following areas: user space applications; file systems, volume management; device drivers; Unix networking, security; container technologies; linkers/loaders; virtualization; high availability & clustering products.
Strong debugging and problem-solving skills.

Performance Engineer:
As a Performance Engineer, you will get an opportunity to conduct experiments and analysis to identify performance aspects of operating systems and Enterprise Servers, where you will be responsible for advancing the product roadmap by using your expertise in the Linux operating system, building kernels, applying patches, performance characterization, optimization and hardware architecture to analyse the performance of software/hardware combinations. You will be involved in conducting experiments and analysis to identify performance challenges and uncover optimization opportunities for IBM Power virtualization and cloud management software built on OpenStack. The areas of work will be characterization, analysis and fine-tuning of application software to help deliver optimal performance on IBM Power.
Preferred Technical Expertise:
Experience in C/C++ programming.
Knowledge of hypervisor and virtualization concepts.
Good understanding of system HW, operating systems, systems architecture.
Strong skills in scripting.
Good problem solving, strong analytical and logical reasoning skills.
Familiar with server performance management and capacity planning.
Familiar with performance diagnostic methods and techniques.

Firmware Engineer:
As a Firmware developer, you will be responsible for designing and developing components and features independently in IBM India Systems Development Lab. ISDL works on end-to-end design and development across the Power, Z and Storage portfolio. You would be a part of the WW Firmware development organization and would be involved in designing & developing cutting-edge features on the open source OpenBMC stack (https://github.com/openbmc/) and developing the open source embedded firmware code for bringing up the next generation of enterprise Power, Z and LinuxONE Servers. You will get an opportunity to work alongside some of the best minds in the industry, forums and communities in the process of contributing to the portfolio.
Preferred Technical Expertise:
Strong system architecture knowledge.
Hands-on programming skills with C/C++ on Linux distros.
Experience/exposure in firmware/embedded software design & development.
Strong knowledge of Linux OS and open source development.
Experience with open source tools & scripting languages: Git, Gerrit, Jenkins, Perl/Python.

Other Skills (Common For All The Positions):
Strong communication, analytical, interpersonal & problem-solving skills.
Ability to deliver on agreed goals and the ability to coordinate activities in the team/collaborate with others to deliver on the team vision.
Ability to work effectively in a global team environment.

Enterprise System Design Software Engineer:
The Enterprise Systems Design team is keen on hiring passionate Computer Science and Engineering graduates / Master's students who can blend their architectural knowledge and programming skills to build the complex infrastructure geared to work for hybrid cloud and AI workloads. We have several opportunities in the following areas of the System & chip development team:
Processor verification engineer: Develop the test infrastructure to verify the architecture and functionality of the IBM server processors/SoCs or ASICs. Will be responsible for creatively thinking of all the scenarios to test and reporting the coverage. Work with design as well as other key stakeholders in identifying/debugging & resolving logic design issues and deliver a quality design.
Processor pre-/post-silicon validation engineer: As a validation engineer you would design and develop algorithms for post-silicon validation of next-generation IBM server processors, SoCs and ASICs.
Electronic design automation – front-end & back-end tool development: The EDA tools development team is responsible for developing state-of-the-art front-end verification, simulation, formal verification tools, place & route, synthesis tools and flows critical for designing & verifying high-performance hardware designs for IBM's next-generation Systems (IBM P and Z Systems), which are used in Cognitive, ML, DL, and Data Center applications.

Required Professional And Technical Skills:
Functional verification / validation of processors or ASICs.
Computer architecture knowledge, processor core design specifications, instruction set architecture and logic verification.
Multi-processor cache coherency, memory subsystem, IO subsystem knowledge; any of the protocols like PCIe/CXL, DDR, Flash, Ethernet etc.
Strong C/C++ programming skills in a Unix/Linux environment required.
Great scripting skills – Perl/Python/Shell.
Development experience on Linux/Unix environments and in Git repositories and basic understanding of Continuous Integration and DevOps workflows.
Understanding of Verilog/VHDL, verification coverage closure.
Proven problem-solving skills and the ability to work in a team environment are a must.
Posted 1 week ago
5.0 - 7.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
Ensure the ongoing stability, scalability, and performance of PhonePe's Hadoop ecosystem and associated services.
Exhibit a high level of ownership and accountability that ensures reliability of the distributed clusters.
Manage and administer Hadoop infrastructure including Apache Hadoop, HDFS, HBase, Hive, Pig, Airflow, YARN, Ranger, Kafka, Pinot, Ozone and Druid.
Automate BAU operations through scripting and tool development (see the sketch after this posting).
Perform capacity planning, system tuning, and performance optimization.
Set up, configure, and manage Nginx in high-traffic environments.
Administration and troubleshooting of Linux + big data systems, including networking (IP, iptables, IPsec).
Handle on-call responsibilities, investigate incidents, perform root cause analysis, and implement mitigation strategies.
Collaborate with infrastructure, network, database, and BI teams to ensure data availability and quality.
Apply system updates and patches, and manage version upgrades in coordination with security teams.
Build tools and services to improve observability, debuggability, and supportability.
Enable cluster security using Kerberos and LDAP.
Experience in capacity planning and performance tuning of Hadoop clusters.
Work with configuration management and deployment tools like Puppet, Chef, Salt, or Ansible.

Preferred candidate profile:
Minimum 1 year of Linux/Unix system administration experience.
Over 4 years of hands-on experience in Apache Hadoop administration.
Minimum 1 year of experience managing infrastructure on public cloud platforms like AWS, Azure, or GCP (optional).
Strong understanding of networking, open-source tools, and IT operations.
Proficient in scripting and programming (Perl, Golang, or Python).
Hands-on experience with maintaining and managing Hadoop ecosystem components like HDFS, YARN, HBase, Kafka.
Strong operational knowledge in systems (CPU, memory, storage, OS-level troubleshooting).
Experience in administering and tuning relational and NoSQL databases.
Experience in configuring and managing Nginx in production environments.
Excellent communication and collaboration skills.

Good to Have:
Experience designing and maintaining Airflow DAGs to automate scalable and efficient workflows.
Experience in ELK stack administration.
Familiarity with monitoring tools like Grafana, Loki, Prometheus, and OpenTSDB.
Exposure to security protocols and tools (Kerberos, LDAP).
Familiarity with distributed systems like Elasticsearch or similar high-scale environments.
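As an illustration of the kind of BAU automation this posting describes, here is a minimal Python sketch that reads HDFS capacity from the NameNode's JMX endpoint. The host, port, alert threshold, and the exact bean/attribute names are assumptions typical of recent Hadoop releases and should be verified against the cluster's version.

```python
import requests

# Hypothetical NameNode host/port and alert threshold -- adjust for your cluster.
NAMENODE_JMX = "http://namenode.example.com:9870/jmx"
USAGE_ALERT_PCT = 80.0

def hdfs_capacity_report() -> None:
    """Print HDFS capacity usage read from the NameNode JMX endpoint."""
    params = {"qry": "Hadoop:service=NameNode,name=FSNamesystemState"}
    beans = requests.get(NAMENODE_JMX, params=params, timeout=10).json()["beans"]
    if not beans:
        raise RuntimeError("FSNamesystemState bean not found; check the JMX query")
    fs = beans[0]
    used_pct = 100.0 * fs["CapacityUsed"] / fs["CapacityTotal"]
    print(f"Live DataNodes: {fs['NumLiveDataNodes']}, HDFS used: {used_pct:.1f}%")
    if used_pct > USAGE_ALERT_PCT:
        print("WARNING: HDFS usage above threshold -- plan capacity or clean up")

if __name__ == "__main__":
    hdfs_capacity_report()
```

In practice a script like this would feed an alerting tool rather than print, but the pattern (poll JMX, compare against a threshold, raise a signal) is the core of the automation described above.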
Posted 1 week ago
7.0 - 11.0 years
35 - 50 Lacs
Bengaluru
Work from Office
About the Role:
This role is responsible for managing and maintaining complex, distributed big data ecosystems. It ensures the reliability, scalability, and security of large-scale production infrastructure. Key responsibilities include automating processes, optimizing workflows, troubleshooting production issues, and driving system improvements across multiple business verticals.

Roles and Responsibilities:
Manage, maintain, and support incremental changes to Linux/Unix environments.
Lead on-call rotations and incident responses, conducting root cause analysis and driving postmortem processes.
Design and implement automation systems for managing big data infrastructure, including provisioning, scaling, upgrades, and patching clusters.
Troubleshoot and resolve complex production issues while identifying root causes and implementing mitigating strategies.
Design and review scalable and reliable system architectures.
Collaborate with teams to optimize overall system/cluster performance.
Enforce security standards across systems and infrastructure.
Set technical direction, drive standardization, and operate independently.
Ensure availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning.
Resolve, analyze, and respond to system outages and disruptions, and implement measures to prevent similar incidents from recurring.
Develop tools and scripts to automate operational processes, reducing manual workload, increasing efficiency and improving system resilience.
Monitor and optimize system performance and resource usage, identify and address bottlenecks, and implement best practices for performance tuning.
Collaborate with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle.
Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities.
Develop and enforce SRE best practices and principles.
Align across functional teams on priorities and deliverables.
Drive automation to enhance operational efficiency.
Adopt new technologies as and when the need arises and define architectural recommendations for new tech stacks.

Preferred candidate profile:
Over 6 years of experience managing and maintaining distributed big data ecosystems.
Strong expertise in Linux, including IP, iptables, and IPsec.
Proficiency in scripting/programming with languages like Perl, Golang, or Python.
Hands-on experience with the Hadoop stack (HDFS, HBase, Airflow, YARN, Ranger, Kafka, Pinot).
Familiarity with open-source configuration management and deployment tools such as Puppet, Salt, Chef, or Ansible.
Solid understanding of networking, open-source technologies, and related tools.
Excellent communication and collaboration skills.
DevOps tools: SaltStack, Ansible, Docker, Git.
SRE logging and monitoring tools: ELK stack, Grafana, Prometheus, OpenTSDB, OpenTelemetry.

Good to Have:
Experience managing infrastructure on public cloud platforms (AWS, Azure, GCP).
Experience in designing and reviewing system architectures for scalability and reliability.
Experience with observability tools to visualize and alert on system performance.
Experience in massive petabyte-scale data migrations and massive upgrades.
Posted 1 week ago
2.0 - 5.0 years
2 - 6 Lacs
Mumbai
Remote
Duration: 8 Months

Description:
Experience with cloud technologies, such as AWS, using AWS IAM, VPC, KMS, and Secrets Manager services, and AWS Certified Security Specialty (see the sketch after this posting).
Experience with cloud technologies, such as AWS, Azure, or GCP.
Prior experience working with phone/voice authentication technologies.
Neustar Authenticator/Account Link experience.
NICE Engage/RTA experience.
Previous experience in planning and managing IT projects.
Lead and participate in the development, testing, implementation, maintenance, and support of highly complex solutions in adherence to company standards, including robust unit testing and support for subsequent release testing.
Participate in efforts related to designing, planning, enhancing, and testing all cybersecurity technologies used in managing external client authentication.
Other contact center experience like Genesys, Avaya, and Cisco CUBE could be considered as well.
Any voice experience in the contact center space such as SIP, Session Border Controllers, Cisco, AWS WebRTC phones, Google Voice.
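Since the posting above centers on AWS IAM and security work, here is a hedged, minimal Python sketch of the kind of routine access review such roles often involve: listing console users that have no MFA device. It uses boto3 and assumes AWS credentials are already configured; the specific audit rule is an illustrative example, not a requirement stated in the posting.

```python
import boto3

def users_without_mfa() -> list[str]:
    """Return IAM user names that have no MFA device attached."""
    iam = boto3.client("iam")
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not mfa:
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"User without MFA: {name}")
```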
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Title: Azure DevOps Engineer
Experience Required: 6 to 8 Years
Job Type: Full-time
Location: Jaipur
Joining: Immediate Joiners Preferred

Job Overview:
Emizen Tech is looking for a skilled Azure DevOps Engineer with 6-8 years of hands-on experience in managing CI/CD pipelines, infrastructure automation, and cloud deployments. The ideal candidate should be well-versed with Microsoft Azure services, DevOps best practices, and agile methodologies.

Key Responsibilities:
Design, implement, and manage CI/CD pipelines using Azure DevOps (see the sketch after this posting).
Automate infrastructure deployment using ARM templates, Bicep, or Terraform.
Set up and manage Azure environments for development, testing, and production.
Monitor system performance, availability, and scalability across Azure-hosted services.
Implement and maintain security best practices across DevOps workflows.
Enable logging, monitoring, and alerting using tools like Azure Monitor, Log Analytics, and Application Insights.
Manage containers and orchestration tools like Docker and Kubernetes (AKS).

Must-Have Skills:
Strong experience with Azure DevOps Services (Repos, Pipelines, Artifacts, Boards).
Expertise in Infrastructure as Code (IaC) – ARM templates, Terraform, or Bicep.
Proficiency in scripting languages – PowerShell, Bash, or Python.
Solid knowledge of Azure Cloud Services (VMs, App Services, Key Vault, Azure SQL, AKS, etc.).
Hands-on with containerization and orchestration (Docker, Kubernetes, AKS).
Strong troubleshooting and problem-solving skills.

Preferred Qualifications:
Microsoft Azure certifications (AZ-400, AZ-104) are a plus.
Exposure to other DevOps tools like Jenkins, Ansible, SonarQube, etc., is a plus.
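As a small illustration of pipeline automation against Azure DevOps, here is a hedged Python sketch that queues a pipeline run through the Azure DevOps REST API with a personal access token. The organization, project, pipeline id, and api-version are placeholders and should be checked against current Microsoft documentation before use.

```python
import os
import requests

# Hypothetical placeholders -- substitute your own organization, project, and pipeline id.
ORG, PROJECT, PIPELINE_ID = "my-org", "my-project", 42
PAT = os.environ["AZDO_PAT"]  # personal access token with Build (read & execute) scope

def run_pipeline(branch: str = "refs/heads/main") -> int:
    """Queue a pipeline run on the given branch and return the run id."""
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
           f"{PIPELINE_ID}/runs?api-version=7.0")
    body = {"resources": {"repositories": {"self": {"refName": branch}}}}
    # Azure DevOps accepts basic auth with an empty user name and the PAT as password.
    resp = requests.post(url, json=body, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    print("Queued pipeline run", run_pipeline())
```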
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description and Requirements

Job Responsibilities
Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases.
Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change and compliance.
Identifies and resolves problems utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization and performance.
Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards (see the sketch after this posting).
Implements industry best practices while performing database administration tasks.
Works in an Agile model with an understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Able to debug production issues by analyzing the logs directly and using tools like Splunk.
Begins tackling organizational impediments.
Learns new technologies based on demand and helps team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 10+ years of IT and infrastructure engineering work experience.

Experience (In Years)
10+ years total IT experience & 7+ years relevant experience in SQL Server + Sybase databases.

Technical Skills
Database Management: Expert in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
Data Infrastructure & Security: Expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
Backup & Recovery: Skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
Cloud Computing & Scripting: Experienced in cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations.
Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Strong database analytical skills to improve application performance.
Should have strong working knowledge of database performance tuning, backup & recovery, Infrastructure as Code, and observability tools (Elastic).
Must have experience with automation tools and programming such as Ansible and Python.
Strong knowledge of ITSM processes and tools (ServiceNow).
Ability to work in a 24x7 rotational shift to support the Database and Splunk platforms.

Other Critical Requirements
Excellent analytical and problem-solving skills.
Experience managing geographically distributed and culturally diverse workgroups with strong team management, leadership and coaching skills.
Excellent written and oral communication skills, including the ability to clearly communicate/articulate technical and functional issues with conclusions and recommendations to stakeholders.
Prior experience in handling stateside and offshore stakeholders.
Experience in creating and delivering business presentations.
Demonstrated ability to work independently and in a team environment.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.
Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
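The posting above mentions scripting routine DBA checks in Python. Here is a hedged, minimal sketch of one such check: flagging databases whose most recent full backup is missing or too old, read from the msdb backup history. The connection string, driver name, and freshness threshold are placeholders.

```python
import datetime
import pyodbc

# Hypothetical connection details -- substitute your own server, driver, and auth.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlprod01;DATABASE=msdb;Trusted_Connection=yes;")
MAX_AGE = datetime.timedelta(hours=26)

QUERY = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
"""

def report_stale_backups() -> None:
    """Flag databases whose most recent full backup is missing or older than MAX_AGE."""
    now = datetime.datetime.now()
    with pyodbc.connect(CONN_STR) as conn:
        for name, last_full in conn.execute(QUERY).fetchall():
            if last_full is None:
                print(f"{name}: no full backup recorded")
            elif now - last_full > MAX_AGE:
                print(f"{name}: last full backup at {last_full}")

if __name__ == "__main__":
    report_stale_backups()
```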
Posted 1 week ago
6.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Transport is at the core of modern society. Imagine using your expertise to shape sustainable transport and infrastructure solutions for the future. If you seek to make a difference on a global scale, working with next-gen technologies and the sharpest collaborative teams, then we could be a perfect match.

Role Overview
We are seeking an experienced IT Architect with a strong background in software-defined vehicle development environments and IT infrastructure support. This role serves as the key technical liaison between software engineering teams and IT, responsible for enabling and optimizing the infrastructure and toolchain support critical to embedded software development. The ideal candidate is collaborative, adaptable, and committed to ensuring smooth technical delivery through stakeholder engagement and proactive issue resolution.

Key Responsibilities
Toolchain Support: Ensure optimal hosting, configuration, and maintenance of embedded toolchains (e.g., cross-compilers, debuggers, build systems) and version control systems (e.g., Git, SVN).
CI/CD Environment Management: Collaborate with software and DevOps teams to maintain and enhance continuous integration and deployment pipelines using tools like Jenkins, GitLab CI, etc.
Virtualization & Testing Infrastructure: Support virtual machines, simulators, emulators, and containerized environments used for embedded system development and validation.
Storage, Backup & Data Management: Oversee build artifacts, logs, and development data to ensure availability, backups, and retention across infrastructure platforms.
Security & Compliance: Ensure the development infrastructure complies with organizational security policies and industry-specific standards (e.g., ISO 26262, ASPICE).
Monitoring & Troubleshooting: Proactively monitor infrastructure systems and lead root-cause analysis and resolution of performance bottlenecks or toolchain-related issues.
Automation & Scripting: Utilize scripting (Python, Shell) and tools like Ansible to automate repetitive infrastructure tasks and environment setups.
Flexibility & Adaptability: Remain agile to accommodate the evolving needs of toolchains and software development teams.
Stakeholder Network Management: Build and maintain a strong stakeholder network across engineering, infrastructure, and support teams.
Cross-functional Collaboration & Forums: Create and lead forums to engage stakeholders, track toolchain needs, gather feedback, and manage infrastructure-related backlogs (linked to development use-cases).
Delivery & Follow-through: Ensure timely execution and tracking of infrastructure-related technical deliveries, coordinating with relevant teams.
Risk Anticipation & Support Coordination: Proactively foresee infrastructure challenges and ensure timely escalation and support from internal teams, HCL, and Stable Teams as needed.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Electronics, Information Technology, or a related field.
10+ years of experience in IT infrastructure or engineering IT support roles, preferably within software-defined vehicle development.
Proficient in Linux and Windows environments, with familiarity in networking and system monitoring.
Hands-on experience with CI/CD pipelines and build systems like Make, CMake, and Yocto.
Skilled in scripting languages (Python, Bash).
Familiarity with industry-specific standards and development compliance (e.g., ISO 26262, ASPICE).

Preferred Qualifications
Experience supporting embedded development / software-defined vehicle development for automotive or industrial domains.
Familiarity with DevSecOps and modern development infrastructure tools.
Exposure to simulation environments, RTOS, or hardware-in-the-loop (HIL) setups.

Soft Skills
Strong stakeholder communication and coordination skills.
Ability to manage ambiguity and shifting priorities.
Proactive, solution-oriented mindset.
Attention to detail and follow-through.

We value your data privacy and therefore do not accept applications via mail.

Who We Are And What We Believe In
We are committed to shaping the future landscape of efficient, safe, and sustainable transport solutions. Fulfilling our mission creates countless career opportunities for talents across the group's leading brands and entities. Applying to this job offers you the opportunity to join Volvo Group. Every day, you will be working with some of the sharpest and most creative brains in our field to be able to leave our society in better shape for the next generation. We are passionate about what we do, and we thrive on teamwork. We are almost 100,000 people united around the world by a culture of care, inclusiveness, and empowerment.
Group Digital & IT is the hub for digital development within Volvo Group. Imagine yourself working with cutting-edge technologies in a global team, represented in more than 30 countries. We are dedicated to leading the way of tomorrow's transport solutions, guided by a strong customer mindset and high level of curiosity, both as individuals and as a team. Here, you will thrive in your career in an environment where your voice is heard and your ideas matter.
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
We are looking for talented and motivated professionals interested in delivering the latest in Application Operations (SaaS) using AWS, in a culture that encourages autonomous, productive teams. You are someone who loves learning how to configure backend systems and infrastructure that will help us build our global SaaS platform. You have a comprehensive understanding of the Amazon AWS platform and know how to light and put fires out. You are an engineer with confidence in your skill set, who is not afraid to look under the hood and break stuff to make stuff. If yes, then come be a part of NiCE Customer Services, a team of software engineers re-inventing Application Operations.

How will you make an impact?
You will get to team up with highly talented and highly motivated engineers and architects, using the latest in AWS, working on cutting-edge technology. As a part of this team, you will be working in a fast-paced environment deploying, monitoring, automating & supporting highly scalable, real-time, critical platform(s) impacting millions of individuals & billions of dollars.
Implementing, configuring custom changes, and deploying new application release upgrades
Setting up new environments & deploying solutions
Building proactive monitoring & alerting services
Automation using Ansible, Python, Perl scripting (see the sketch after this posting)
Setting up & securing new application instances
Change management: building deployment and rollback plans and procedures
Creating and maintaining a knowledge base for various technical resolutions
Creating and setting up deployment scripts for different environments (i.e. Test properties vs Prod properties)
Configuring and optimizing instances and web servers for optimal performance (ex: adjusting default connection limits, adjusting request queuing thresholds)
AWS troubleshooting support
Support, architect and implement alongside Technical & Operations teams to meet our customers' individual needs for their infrastructure & application deployments
Work on critical, highly complex customer problems that will span multiple AWS services (dealing daily with high severity incidents)
Help build and improve customer operations through scripts to automate and deploy AWS resources seamlessly with as little manual intervention as possible
Collaborate and help build utilities and tools for internal use that enable you and your fellow engineers to operate safely at high speed / wide scale
Drive customer communication during critical events
Provide on-call, off-hours support and be flexible to work in a 24x7 shift environment

Have you got what it takes?
4-8 years of relevant experience
Excellent hands-on experience in managing Application Support (3-tier/2-tier apps)
Strong problem solving, analytical and communication skills
Exposure in handling complex application performance issues
Exposure to APM tools like AppDynamics, Dynatrace
Excellent skills in managing containerized / cloud-based applications with exposure to various cloud services (EC2, S3, IAM, ELB, VPC, VPN)
Good experience in a DevOps environment / Operations team / Infrastructure Operations team
Excellent troubleshooting skills
OS-level knowledge (Windows or Linux)
Database skills (SQL, Oracle or Postgres / Cassandra)
Application server skills on any of the middleware technologies (e.g. Tomcat, WebLogic, WebSphere)
Ability to identify the underlying root cause of performance issues & mitigate bottlenecks
Good understanding of networking, load balancers
Good communication, both written and verbal
Exposure to scripting languages (Ansible, Perl, Python, Ruby, Shell script, PowerShell etc.)
Experience in working with tools like OpsGenie, Nagios, Rundeck
Good understanding of Kubernetes
Cloud / application-level security experience
Experience in Banking & Financial domain
Has worked in an Agile / Sprint development model

What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 8005
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions.
Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.
NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
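To illustrate the routine AWS operations automation this posting describes, here is a hedged Python sketch using boto3 that lists running EC2 instances failing their status checks. The region and tag filter are placeholders, and credentials are assumed to be configured in the environment.

```python
import boto3

# Hypothetical region and tag filter -- adjust to your environment.
REGION = "us-east-1"
FILTERS = [{"Name": "tag:Environment", "Values": ["production"]}]

def unhealthy_instances() -> list[str]:
    """Return ids of running instances failing EC2 instance or system status checks."""
    ec2 = boto3.client("ec2", region_name=REGION)
    ids = [i["InstanceId"]
           for r in ec2.describe_instances(Filters=FILTERS)["Reservations"]
           for i in r["Instances"] if i["State"]["Name"] == "running"]
    bad = []
    if ids:
        statuses = ec2.describe_instance_status(InstanceIds=ids)["InstanceStatuses"]
        for s in statuses:
            if (s["InstanceStatus"]["Status"] != "ok"
                    or s["SystemStatus"]["Status"] != "ok"):
                bad.append(s["InstanceId"])
    return bad

if __name__ == "__main__":
    print("Instances failing status checks:", unhealthy_instances())
```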
Posted 1 week ago
1.0 - 2.0 years
1 - 3 Lacs
Bengaluru
Work from Office
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
1-2 years of hands-on experience in a DevOps or Systems Engineer role.
Experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
Basic understanding of cloud platforms (AWS/Azure/GCP).
Familiarity with container technologies (Docker) and orchestration tools (Kubernetes is a plus).
Strong scripting skills in Bash, Python, or similar.
Exposure to Infrastructure as Code tools like Terraform or Ansible is a plus.
Good problem-solving skills and attention to detail.
Posted 1 week ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:
• Overall 10+ years of experience working within a large enterprise consisting of large and diverse teams.
• Minimum of 6 years of experience in APM and monitoring technologies.
• Minimum of 3 years of experience in ELK.
• Design and implement efficient log shipping and data ingestion processes.
• Collaborate with development and operations teams to enhance logging capabilities.
• Implement and configure components of the Elastic Stack, including Filebeat, Metricbeat, Winlogbeat, Logstash and Kibana.
• Create and maintain comprehensive documentation for Elastic Stack configurations and processes.
• Ensure seamless integration between various Elastic Stack components.
• Advanced Kibana dashboards and visualizations modelling and deployment.
• Create and manage Elasticsearch clusters on premise, including configuration parameters, indexing, search, and query performance tuning, RBAC security governance, and administration (a query sketch follows this posting).
• Hands-on scripting & programming in Python, Ansible, Bash, data parsing (regex), etc.
• Experience with security hardening & vulnerability/compliance, OS patching, SSL/SSO/LDAP.
• Understanding of HA design, cross-site replication, local and global load balancers, etc.
• Data ingestion & enrichment from various sources, webhooks, and REST APIs with JSON/YAML/XML payloads & testing (Postman, etc.).
• CI/CD - deployment pipeline experience (Ansible, Git).
• Strong knowledge of performance monitoring, metrics, planning, and management.
• Ability to apply a systematic & creative approach to solve problems, out-of-the-box thinking with a sense of ownership and focus.
• Experience with application onboarding - capturing requirements, understanding data sources, architecture diagrams, application relationships, etc.
• Influencing other teams and engineering groups in adopting logging best practices.
• Effective communication skills with the ability to articulate technical details to different audiences.
• Familiarity with ServiceNow, Confluence and JIRA.
• Understand SRE and DevOps principles.

Technical Skills:
APM Tools – ELK, AppDynamics, PagerDuty
Programming Languages – Java / .NET, Python
Operating Systems – Linux and Windows
Automation – GitLab, Ansible
Container Orchestration – Kubernetes
Cloud – Microsoft Azure and AWS

Interested candidates please share your resume with balaji.kumar@flyerssoft.com
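As a hedged illustration of the Elasticsearch query work mentioned above, here is a minimal Python sketch that counts recent error log entries per service via the _search API. The cluster URL, index pattern, and field names (log.level, service.name, @timestamp) are assumptions and must match the actual index mappings.

```python
import requests

# Hypothetical Elasticsearch endpoint, index pattern, and field names.
ES_URL = "http://elasticsearch.example.com:9200"
INDEX = "app-logs-*"

QUERY = {
    "size": 0,
    "query": {"bool": {"filter": [
        {"term": {"log.level": "error"}},
        {"range": {"@timestamp": {"gte": "now-15m"}}},
    ]}},
    "aggs": {"by_service": {"terms": {"field": "service.name", "size": 10}}},
}

def recent_error_counts() -> None:
    """Print error counts per service over the last 15 minutes."""
    resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=QUERY, timeout=15)
    resp.raise_for_status()
    body = resp.json()
    print("Total errors:", body["hits"]["total"]["value"])
    for bucket in body["aggregations"]["by_service"]["buckets"]:
        print(f"  {bucket['key']}: {bucket['doc_count']}")

if __name__ == "__main__":
    recent_error_counts()
```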
Posted 1 week ago
5.0 - 8.0 years
10 - 15 Lacs
Chennai
Work from Office
Role & responsibilities
Design, configure, and maintain CI/CD pipelines using GitHub Actions, Bamboo, Harness, Jenkins, or equivalent (see the sketch after this posting).
Administer Atlassian suite: Jira, Confluence, Bamboo, and Bitbucket for planning, repo management, and documentation.
Implement, debug, and enhance build tools and deployment workflows (Maven, Gradle, MSBuild, NodeJS, etc.).
Manage code quality and security scans via SonarQube, Veracode, and similar tools.
Handle artifact repositories such as Nexus, JFrog Artifactory, NuGet, and ProGet.
Create and maintain automation scripts in PowerShell, Python, Bash/Shell, and Salt/Ansible for provisioning and administrative tasks.
Maintain and monitor observability tools like Splunk, and proactively address performance and availability issues.
Support production environments, respond to incidents, triage bugs, and handle escalations via tools like XMatters and Techlines.
Participate in daily stand-ups, incident reviews, and on-call support (24x5 or 24x7).
Work with development teams to troubleshoot and optimize builds, releases, and deployment flows.
Track Jira stories and ensure timely updates and resolutions aligned with SLAs and operational KPIs.

Preferred candidate profile
Bachelor's or Master's degree in Computer Science, Information Technology, or related discipline.
5+ years of experience in DevOps / Release Engineering / Production Support roles.
Proven experience managing enterprise-grade CI/CD platforms and incident response workflows.
Atlassian and GitHub administrator-level experience is a strong plus.
CI/CD & DevOps: Experience building and managing CI/CD pipelines using GitHub Actions, Bamboo, and Harness.
Build Tools: Skilled in using Maven, Gradle, MSBuild, and NodeJS for build automation.
Artifact Repositories: Hands-on with Nexus, JFrog Artifactory, ProGet, and NuGet for artifact management.
Code Quality & Security: Familiar with SonarQube and Veracode for static code analysis and vulnerability scanning.
Version Control: Proficient in Git with repository management in GitHub and Bitbucket.
Configuration Management: Experience with Ansible and Salt for infrastructure automation.
Scripting & Automation: Strong scripting skills in PowerShell, Bash, and Python.
Monitoring & Logging: Practical knowledge of Splunk for observability, alerting, and log analysis.
OS & Infrastructure: Comfortable administering both Linux and Windows environments.
Issue & Workflow Tools: Experienced in Jira and Confluence for tracking work and collaboration.
Incident/Production Support: Skilled in incident handling, SLA management, and 24/7 support using XMatters/Techlines.
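As a small, hedged example of the CI/CD support scripting this posting covers, here is a Python sketch that polls a Jenkins job's last completed build through its JSON API. The Jenkins URL, job name, and credential environment variables are placeholders.

```python
import os
import requests

# Hypothetical Jenkins URL, job name, and API-token credentials.
JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "release-pipeline"
AUTH = (os.environ["JENKINS_USER"], os.environ["JENKINS_TOKEN"])

def last_build_status(job: str) -> str:
    """Return the result of the job's last completed build (SUCCESS, FAILURE, ...)."""
    url = f"{JENKINS_URL}/job/{job}/lastCompletedBuild/api/json"
    resp = requests.get(url, auth=AUTH, timeout=15)
    resp.raise_for_status()
    return resp.json().get("result") or "UNKNOWN"

if __name__ == "__main__":
    status = last_build_status(JOB_NAME)
    print(f"{JOB_NAME}: {status}")
    if status != "SUCCESS":
        print("Investigate per runbook and raise an incident if needed.")
```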
Posted 1 week ago
8.0 - 10.0 years
20 - 25 Lacs
Pune
Work from Office
We are looking for a skilled Infrastructure and Platform Architect with 8-10 years of experience to join our team.

Roles and Responsibilities
Design and implement scalable infrastructure solutions using OpenStack.
Collaborate with cross-functional teams to ensure seamless integration of infrastructure and platform components.
Develop and maintain technical documentation for infrastructure and platform architecture.
Troubleshoot and resolve complex technical issues related to infrastructure and platform operations.
Ensure compliance with industry standards and best practices for infrastructure and platform security.
Provide technical guidance and mentorship to junior team members on infrastructure and platform architecture.

Job Requirements
Strong understanding of OpenStack and its ecosystem.
Experience with cloud-based technologies and platforms.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work in a fast-paced environment and adapt to changing priorities.
Strong leadership and management skills, including the ability to motivate and guide junior team members.
Posted 1 week ago
1.0 - 3.0 years
8 - 12 Lacs
Bengaluru
Work from Office
About The Role
We're looking for a backend developer to join our early engineering team and drive our product development. If you're someone who thrives on high ownership, can figure stuff out on your own and wants to be part of the zero-to-one journey, this might be for you.

What You'll Do
Looking out for cool new technologies and implementing them for required internal as well as business use cases.
Designing scalable architectures for backend systems.
Optimising performance of applications for full-scale production deployments.
Implementing business logic and developing APIs and services.
Conceptualising and implementing scalable databases across various services.
You'll also be hiring and mentoring junior engineers.

What makes you a good fit:
If you can write code that works, we should be good, but read on (disclaimer: most of what follows is not a hard requirement);
You have 1-3 years of experience building and scaling backend systems from scratch;
You've built serverless backend architectures using AWS platforms and resources like Lambda, DynamoDB, Aurora, etc. (Brownie points if you've worked with NodeJS, Redis and PostgreSQL);
You have managed deployment at scale using CI/CD integrations and implemented error management systems like Sentry;
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
EXP: 4+ Years
Notice period: Immediate joiner
Location: Pune/Bangalore

We are seeking a skilled OCI Cloud Engineer to manage cloud integration projects and the daily administration of Oracle Cloud Infrastructure (OCI). The role involves automation, DevOps support, infrastructure deployment, and cloud transformation initiatives.

Key Responsibilities:
• Manage and administer OCI cloud infrastructure and multi-tenant environments.
• Design, build, and deploy solutions using Terraform, Ansible, and Python (see the sketch after this posting).
• Lead automation and integration for OCI workloads.
• Migrate on-premise applications to OCI and support hybrid connectivity.
• Collaborate in DevOps initiatives and maintain CI/CD pipelines.
• Conduct peer code reviews and enforce coding standards.
• Provide technical consultation and train teams on OCI best practices.
• Troubleshoot issues related to cloud network, storage, and performance.

Core Skills & Experience:
• Strong expertise in Oracle Cloud Infrastructure (OCI) – Compute, Networking, IAM.
• Proficiency in Terraform, Ansible, Python, Shell scripting.
• Knowledge of serverless architecture, Exadata, Kubernetes, Docker.
• Familiarity with Azure DevOps, HashiCorp tools, and DevOps practices.
• Good understanding of ITIL, SOX, and security compliance.
• Strong documentation and communication skills.
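To illustrate the Python side of the OCI automation mentioned above, here is a hedged sketch using the OCI Python SDK to list compute instances in a compartment. It assumes the standard ~/.oci/config profile is in place; the compartment OCID is a placeholder.

```python
import oci

# Assumes the standard ~/.oci/config profile; the compartment OCID is a placeholder.
config = oci.config.from_file()
COMPARTMENT_ID = "ocid1.compartment.oc1..exampleuniqueid"

def list_instances() -> None:
    """Print display name and lifecycle state of compute instances in a compartment."""
    compute = oci.core.ComputeClient(config)
    instances = oci.pagination.list_call_get_all_results(
        compute.list_instances, compartment_id=COMPARTMENT_ID).data
    for inst in instances:
        print(f"{inst.display_name}: {inst.lifecycle_state}")

if __name__ == "__main__":
    list_instances()
```

A report like this is typically the first building block of larger automation (tag audits, drift checks against Terraform state, and so on).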
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
GCP Platform Cloud Engineer: Immediate Joiner only
Exp: 4-7 Years
Location: Pune/Bangalore

GCP Core Services: IAM, VPC, GCE (Google Compute Engine), GCS (Google Cloud Storage), Cloud SQL, MySQL, CI/CD tooling (Cloud Build/GitHub Actions). Other tools: GitHub, Terraform, Shell Script, Ansible.

Role purpose:
To develop, build, implement and operate 24x7 public cloud infrastructure services, mainly on GCP, and technology solutions. To design, plan and implement a growing set of public cloud platforms and solutions used to provide mission-critical infrastructure services. To constantly analyse, optimise, migrate and transform the global legacy IT infrastructure environment into cloud-ready & cloud-native solutions, and to be responsible for providing software-related operations support, including managing level two and level three incident and problem management.

Core competencies, knowledge, and experience:
Profound cloud technology, network, security and platform expertise (AWS)
Expertise in GCP cloud services like VPC, Compute Instance, Cloud Storage, Kubernetes Engine, etc. (see the sketch after this posting)
Working experience with Cloud Functions.
Expertise in automation and workflow like Terraform, Ansible scripts and Python scripts.
DevOps tools: Jenkins pipeline, GoCD pipeline, HashiCorp stack (Packer, Terraform etc.), Docker, Kubernetes
Work experience in GCP organisation and multi-tenant project setup.
Good documentation and communication skills.
Degree in IT (any), 3 years of experience in cloud computing or 5 years in enterprise IT
Adept in ITIL, SOX and security regulations
Three to five years of work experience in programming and/or systems analysis applying agile frameworks
Experience with web applications and web hosting skills.
Experience with DevOps concepts in a cloud environment.
Working experience in managing highly business-critical environments.
GCP Cloud Engineer / GCP Professional Cloud Architect certification with experience preferred.
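As a small illustration of working with the GCS service named above, here is a hedged Python sketch using the google-cloud-storage client library. It assumes Application Default Credentials are configured (for example via `gcloud auth application-default login`); no bucket names are taken from the posting.

```python
from google.cloud import storage

def summarize_buckets() -> None:
    """List buckets in the active project with their storage class and location."""
    client = storage.Client()  # uses Application Default Credentials
    for bucket in client.list_buckets():
        print(f"{bucket.name}: {bucket.storage_class} in {bucket.location}")

if __name__ == "__main__":
    summarize_buckets()
```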
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hiring Alert!!!
We are looking for a highly skilled Lead Site Reliability Engineer (SRE) for our Product Development team based out of the Noida location!!!
Only Immediate Joiners preferred!!

Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) to join our team. The ideal candidate will have a deep understanding of both software engineering and systems administration, with a focus on creating scalable and reliable systems. You will work closely with development and operations teams to ensure the reliability, availability, and performance of our services.

Key Responsibilities
Collaborate with engineering teams to design and implement scalable, robust systems.
Ensure the reliability and performance of our services through monitoring, incident response, and capacity planning.
Develop and maintain automation tools for system provisioning, configuration management, and deployment.
Implement and manage monitoring tools to ensure visibility into the health and performance of our systems (see the sketch after this posting).
Lead incident response efforts, perform root cause analysis, and implement preventative measures.
Utilize Infrastructure as Code (IaC) practices to manage and provision infrastructure.
Work closely with development and operations teams to ensure smooth deployments and continuous improvement of processes.
Ensure that our systems are secure and comply with industry standards and best practices.
Create and maintain detailed documentation for systems and processes.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
8+ years of experience as a Site Reliability Engineer or in a similar role.
Experience with cloud platforms (e.g., Azure, AWS & GCP).
Strong background in Linux/Unix administration.
Proficiency in programming languages such as Python, Go, or Ruby.
Experience with configuration management tools (e.g., Ansible, Puppet, Chef).
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack, Loggly).
Understanding of networking concepts and protocols.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications
Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
Familiarity with database management (e.g., MySQL, PostgreSQL, MongoDB).
Experience with distributed systems and microservices architecture.
Certification in relevant technologies (e.g., AWS Certified Solutions Architect).

Exp Required: 8+ Years
Competency: Loggly, PagerDuty, Azure & AWS, Google Cloud, Site Reliability Engineer.
We are looking for candidates with strong Azure & AWS cloud experience.

Note: Candidates who can join on an immediate basis or with a maximum 15 days' notice period can only apply.
Interested candidates can share their updated CV with the below details at Abhishekkumar.saini@corrohealth.com
Total Exp:
Current CTC:
Expected CTC:
Notice Period:
Reason for change:
Current Location:

At CorroHealth, we want to assure all job seekers that we do not require any payment or monetary arrangement as a condition for employment. CorroHealth does not authorize any third party, agency, company, or individual to request money or financial contributions in exchange for a job opportunity. If you receive any request for payment or suspect fraudulent activity related to job applications at CorroHealth, please do not respond.
Instead, contact us immediately at Compliance@corrohealth.com or report the incident to our Compliance Hotline via www.lighthouse-services.com/CorroHealth.
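To illustrate the monitoring work referenced in the SRE posting above, here is a hedged Python sketch that runs an instant query against the Prometheus HTTP API. The Prometheus URL and the PromQL expression (5xx request rate per instance) are placeholders, not details from the posting.

```python
import requests

# Hypothetical Prometheus server and PromQL expression -- adjust for your stack.
PROM_URL = "http://prometheus.example.com:9090"
EXPR = 'sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))'

def query_prometheus(expr: str) -> list[dict]:
    """Run an instant query against the Prometheus HTTP API and return the result set."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    if payload["status"] != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]

if __name__ == "__main__":
    for series in query_prometheus(EXPR):
        instance = series["metric"].get("instance", "unknown")
        value = float(series["value"][1])
        print(f"{instance}: {value:.2f} 5xx/sec")
```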
Posted 1 week ago
7.0 years
0 Lacs
India
On-site
Senior Database Engineer – 7 to 12 years
Shift Time: 2:30 pm to 11:30 pm
Oracle 19c DBA who can support us at least until noon CST, proficient in RAC, ASM & Clusterware, 19c Multitenant, OEM administration, etc.

Primary Responsibilities
The Offshore Senior Database Engineer has a primary focus on:
Monitoring database health and performance (RAC, non-RAC and Data Guard instances) (see the sketch after this posting).
Reviewing quarterly database patching requirements/steps and executing the patch process.
Reviewing quarterly OEM patching requirements/steps and executing the patch process.
Monitoring and proactively taking appropriate action on night jobs, i.e., RMAN backups and data purging, etc.
The overall technical support of the Oracle database: database shutdown/startup, issue troubleshooting, incident and alert handling, and responding to requests from DevOps and other IT teams.
Performing database/query monitoring and ongoing performance tuning and optimization activities.

Position Requirements
Must have 7+ years of experience focused on Oracle DBA administration in medium to large corporate environments, including 3+ years of Oracle 19c administration (multitenant PDB/CDB, Oracle ASM, Oracle Clusterware, Data Guard, Oracle single instance and the patching process).
Installation and configuration of OEM 13c for database monitoring and management.
At least 5 years' experience in performance tuning, both instance-level and SQL statement-level, with a good understanding of SQL Profiles and query optimization. Ability to identify the culprit with tools (i.e., OEM) and without tools (querying the Oracle dictionary).
At least 5 years' experience with backup and recovery procedures using RMAN.
System knowledge and experience with Unix/Linux operating systems.
Working knowledge of scripting and programming (Shell, Python), GitHub, and automation tools (AWX or Ansible) is a plus but not required.

Knowledge, Skills & Abilities
Must be a creative problem-solver, flexible, and proactive.
Very good people skills, plus written and oral communication skills.
Take responsibility, plan and structure all tasks (technical or non-technical) assigned as part of the job role to ensure a business-efficient service is delivered to customers.

Key Performance Indicators
Ability to manage multiple assignments (i.e., database/cluster patching, OEM management and troubleshooting) comfortably.
Attentive to operational process (i.e. incident management: pick up/resolve/close timely, using the SN monitoring tool to proactively monitor the database platform, change management, etc.).
Attentive to the Agile process (i.e. story management: pick up/develop/implement and close timely).
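As a hedged illustration of the proactive database health monitoring this posting describes, here is a minimal Python sketch using the python-oracledb driver to report tablespace usage from the dba_tablespace_usage_metrics view. The DSN, credentials, and alert threshold are placeholders, and the monitoring account needs SELECT access to DBA views.

```python
import os
import oracledb

# Hypothetical connection details -- substitute your own DSN and credentials.
DSN = "dbhost.example.com:1521/ORCLPDB1"
USER = "monitoring"
PASSWORD = os.environ["ORA_MON_PWD"]
ALERT_PCT = 85.0

QUERY = """
SELECT tablespace_name, used_percent
FROM   dba_tablespace_usage_metrics
ORDER  BY used_percent DESC
"""

def report_tablespace_usage() -> None:
    """Print tablespace usage and flag anything above the alert threshold."""
    with oracledb.connect(user=USER, password=PASSWORD, dsn=DSN) as conn:
        with conn.cursor() as cur:
            for name, used_pct in cur.execute(QUERY):
                flag = "  <-- ALERT" if used_pct > ALERT_PCT else ""
                print(f"{name:<30} {used_pct:6.2f}%{flag}")

if __name__ == "__main__":
    report_tablespace_usage()
```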
Posted 1 week ago
7.0 years
0 Lacs
Patna, Bihar, India
On-site
Roles and Responsibilities:
Ensure the reliability, performance, and scalability of our database infrastructure.
Work closely with application teams to ship solutions that integrate seamlessly with our database systems.
Analyze solutions and implement best practices for supported data stores (primarily MySQL and PostgreSQL).
Develop and enforce best practices for database security, backup, and recovery.
Work on the observability of relevant database metrics and make sure we reach our database objectives.
Provide database expertise to engineering teams (for example, through reviews of database migrations, queries, and performance optimizations).
Work with peers (DevOps, application engineers) to roll out changes to our production environment and help mitigate database-related production incidents.
Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
Provide on-call support on rotation with the team.
Support and debug database production issues across services and levels of the stack.
Document every action so your learning turns into repeatable actions and then into automation.
Perform regular system monitoring, troubleshooting, and capacity planning to ensure scalability.
Create and maintain documentation on database configurations, processes, and procedures.
Mandatory Qualifications:
At least 7 years of experience running MySQL/PostgreSQL databases in large environments.
Awareness of cloud infrastructure (AWS/GCP).
Knowledge of MySQL/PostgreSQL internals.
Knowledge of load balancing solutions such as ProxySQL to distribute database traffic efficiently across multiple servers.
Knowledge of tools and methods for monitoring database performance.
Strong problem-solving skills and the ability to work in a fast-paced environment.
Excellent communication and collaboration skills to work effectively within cross-functional teams.
Knowledge of caching (Redis/ElastiCache).
Knowledge of scripting languages (Python).
Knowledge of infrastructure automation (Terraform/Ansible).
Familiarity with DevOps practices and CI/CD pipelines.
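To illustrate the observability work described above (monitoring database metrics and replication), here is a minimal Python sketch that reports streaming-replication lag from a PostgreSQL primary; the DSN, the 30-second threshold, and the use of psycopg2 are assumptions made for the example.

```python
"""Replication-lag snapshot from a PostgreSQL primary -- illustrative sketch."""
import psycopg2  # assumed available; any PostgreSQL driver works similarly

DSN = "host=pg-primary.example.com dbname=postgres user=monitor password=***"

QUERY = """
SELECT application_name,
       client_addr,
       state,
       COALESCE(EXTRACT(EPOCH FROM replay_lag), 0) AS replay_lag_seconds
FROM pg_stat_replication
"""


def main() -> None:
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            rows = cur.fetchall()
    if not rows:
        print("No replicas attached -- check streaming replication.")
        return
    for app, addr, state, lag in rows:
        # 30 seconds is an arbitrary warning threshold for the example.
        flag = "WARN" if lag > 30 else "ok"
        print(f"{flag:4} {app or addr}: state={state}, replay_lag={lag:.1f}s")


if __name__ == "__main__":
    main()
```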
Posted 1 week ago
8.0 years
6 - 8 Lacs
Chennai, Tamil Nadu, India
On-site
Experience: 6–8 years
Key Responsibilities
Design and implement enterprise-level data network solutions.
Lead troubleshooting and resolution of complex network issues.
Manage configurations and upgrades across core networking infrastructure.
Collaborate with security and infrastructure teams to align on network policies.
Mentor junior engineers and ensure best practices.
Skills Required
Deep experience with Cisco/Juniper networking technologies.
Strong knowledge of routing protocols (BGP, OSPF, EIGRP).
Network automation skills (e.g., Ansible, Python) are a plus.
Certifications such as CCNP or CCIE preferred.
Skills: CCIE, CCNP, routing protocols (BGP, OSPF, EIGRP), network automation (Ansible, Python), Cisco and Juniper networking technologies, network infrastructure, data networking.
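As a small illustration of the network automation skills listed above, here is a minimal Python sketch that pulls a BGP summary from a router using Netmiko (assumed installed); the device details are placeholders, and parsing/alerting is deliberately left out.

```python
"""Pull a BGP summary from a router -- illustrative sketch using Netmiko."""
from netmiko import ConnectHandler

# Placeholder device definition -- in practice this would come from an inventory.
DEVICE = {
    "device_type": "cisco_ios",
    "host": "core-rtr-01.example.net",
    "username": "netops",
    "password": "***",
}


def main() -> None:
    with ConnectHandler(**DEVICE) as conn:
        output = conn.send_command("show ip bgp summary")
    print(output)
    # A real check would parse neighbor states here and alert on anything
    # that is not "Established"; parsing is omitted to keep the sketch short.


if __name__ == "__main__":
    main()
```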
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Nium, the Leader in Real-Time Global Payments
Nium, the global leader in real-time, cross-border payments, was founded on the mission to deliver the global payments infrastructure of tomorrow, today. With the onset of the global economy, its payments infrastructure is shaping how banks, fintechs, and businesses everywhere collect, convert, and disburse funds instantly across borders. Its payout network supports 100 currencies and spans 220+ markets, 100 of which are in real time. Funds can be disbursed to accounts, wallets, and cards and collected locally in 35 markets. Nium's growing card issuance business is already available in 34 countries. Nium holds regulatory licenses and authorizations in more than 40 countries, enabling seamless onboarding, rapid integration, and compliance – independent of geography. The company is co-headquartered in San Francisco and Singapore.
About the Team: The Tech Support team's goal is to offer better customer service and manage anything that happens in a live/production environment. Nium is one of the best at using the latest tools for support functions. Tools like Kibana, Nagios, and CloudWatch give us greater visibility into the services offered to clients and keep our systems available round the clock; our uptime is always greater than 99.95%.
About the Role: As part of the Tech Support team, you will be responsible for resolving technical issues faced by users, whether related to software, hardware, or network systems. You will troubleshoot problems, offer solutions, and escalate complex cases to specialized teams when necessary. Using ticketing systems, you will manage and prioritize support requests to ensure timely and effective resolutions. This role requires strong problem-solving abilities, excellent communication skills, and a solid understanding of technical systems to help users maintain productivity.
Key Responsibilities:
Based on customer insights and channel performance data, develop and execute a content roadmap that engages key personas at each point in the customer journey, from top-funnel acquisition to nurture and ongoing customer education, covering both Nium offerings and the industry.
Build, develop, and manage a high-performing team and culture to achieve breakthrough results; maintain exceptionally high standards and hold self and others accountable.
Generate editorial ideas and concepts.
Work with regional Growth Marketing teams to ensure content development aligns with funnel-building objectives for each target segment.
Measure the impact of our content strategy as well as the performance of individual assets, and proactively refine our resource allocation and prioritization accordingly.
Requirements:
5–7 years of experience supporting production applications on AWS or other cloud platforms.
Good knowledge of RDBMS (PostgreSQL or MSSQL) and NoSQL databases.
Willingness to work in day/night shifts.
Understanding of troubleshooting and monitoring microservice and serverless architectures.
Working knowledge of containerization technology and various orchestration platforms (e.g., Docker, Kubernetes)
for troubleshooting and monitoring purposes.
Experience with build and deploy automation tools (Ansible/Jenkins/Chef).
Experienced in release and change management and in incident and problem management, from both a technology and a process perspective.
Familiar with server log management using tools like ELK and Kibana.
Certification in ITIL, COBIT, or Microsoft Operations Framework would be an added plus.
Experience with scripting languages or shell scripting to automate daily tasks would be an added plus.
Ability to diagnose and troubleshoot technical issues.
Ability to work proactively to identify issues with the help of log monitoring.
Experienced in monitoring tools, frameworks, and processes.
Excellent interpersonal skills.
Experience with one or more case-handling tools such as Freshdesk, Zendesk, or JIRA.
Skilled at triaging and root cause analysis.
Ability to provide step-by-step technical help, both written and verbal.
What we offer at Nium
We Value Performance: Through competitive salaries, performance bonuses, sales commissions, equity for specific roles, and recognition programs, we ensure that all our employees are well rewarded and incentivized for their hard work.
We Care for Our Employees: The wellness of Nium’ers is our #1 priority. We offer medical coverage along with a 24/7 employee assistance program and generous vacation programs, including our year-end shutdown. We also provide a flexible hybrid working environment (3 days per week in the office).
We Upskill Ourselves: We are curious and always want to learn more, with a focus on upskilling ourselves. We provide role-specific training, internal workshops, and a learning stipend.
We Constantly Innovate: Since our inception, Nium has received constant recognition and awards for how we approach both our business and talent opportunities: 2022 Great Place To Work Certification, 2023 CB Insights Fintech 100 List of Most Promising Fintech Companies, CNBC World’s Top Fintech Companies 2024.
We Celebrate Together: We recognize that work is also about creating great relationships with each other. We celebrate together with company-wide social events, team bonding activities, happy hours, team offsites, and much more!
We Thrive with Diversity: Nium is truly a global company, with more than 33 nationalities, based in 18+ countries and more than 10 office locations. As an equal opportunity employer, we are committed to providing a safe and welcoming environment for everyone.
For more detailed region-specific benefits: https://www.nium.com/careers#careers-perks
For more information visit www.nium.com
Depending on your location, certain laws may regulate the way Nium manages the data of candidates. By submitting your job application, you are agreeing and acknowledging that you have read and understand our Candidate Privacy Notice located at www.nium.com/privacy/candidate-privacy-notice.
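To illustrate the log-monitoring side of this support role, here is a minimal Python sketch that scans an AWS CloudWatch Logs group for recent ERROR events using boto3 (assumed installed and configured with credentials); the log group name, lookback window, and filter pattern are assumptions made for the example.

```python
"""Scan a CloudWatch log group for recent errors -- illustrative sketch."""
import time

import boto3  # assumed available with AWS credentials configured

LOG_GROUP = "/ecs/payments-api"  # placeholder log group name
LOOKBACK_MINUTES = 15


def main() -> None:
    logs = boto3.client("logs")
    start_ms = int((time.time() - LOOKBACK_MINUTES * 60) * 1000)
    resp = logs.filter_log_events(
        logGroupName=LOG_GROUP,
        filterPattern="ERROR",
        startTime=start_ms,
    )
    events = resp.get("events", [])
    print(f"{len(events)} ERROR events in the last {LOOKBACK_MINUTES} minutes")
    for event in events[:10]:  # show a sample, not the full stream
        print(event["message"].rstrip())


if __name__ == "__main__":
    main()
```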
Posted 1 week ago
14.0 - 18.0 years
0 Lacs
Karnataka, India
On-site
You will be working as a Senior Architect (Senior Manager) specializing in GCP Cloud & DevOps or Azure Cloud & DevOps, with a minimum of 14 years of experience. Your main responsibilities will involve leading the design, implementation, or migration of large and complex applications on public cloud platforms. This role requires technical leadership skills to design and build cloud infrastructure efficiently. Your qualifications should include experience in strategizing, designing, and implementing highly effective solutions on public cloud platforms such as Azure or GCP. You will be responsible for ensuring security, resilience, performance, networking, and availability, and for implementing blue-green deployments for business applications. Delivering top-notch operational tools and practices, including Continuous Integration (CI), Continuous Deployment (CD), ChatOps, Role-Based Access Control (RBAC), and Robotic Process Automation (RPA), will be part of your role. You will also need to write infrastructure as code for public or private clouds and implement modern cloud integration architecture. Furthermore, your tasks will involve architecting and implementing end-to-end monitoring solutions in the cloud environment. Key skills required for this technical leadership role include proficiency in cloud technologies such as GCP and Azure, with experience in multi-cloud architecture. You should have hands-on experience with Kubernetes clusters, specifically K8s architecture and implementation using Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS). Additionally, experience with CI/CD tools like Jenkins, GitLab CI, Sonar, and Zaproxy is essential. You must have worked on greenfield applications from conceptualization to post-production, ensuring complete delivery. Experience with creating blueprints, configuration tools like Ansible and Chef, Terraform coding, infrastructure-as-code implementation, and application migration will also be crucial for this role. In summary, as a Senior Architect specializing in GCP Cloud & DevOps or Azure Cloud & DevOps, you will play a critical role in designing and implementing complex applications on public cloud platforms. Your technical expertise, hands-on experience with cloud technologies, and proficiency in various tools and frameworks will be instrumental in ensuring the success of cloud infrastructure projects.
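As an illustration of the infrastructure-as-code responsibilities described above, here is a minimal Python sketch that wraps `terraform plan -detailed-exitcode` to flag drift; the working directory is a placeholder, and the exit-code convention relied on (0 = no changes, 2 = changes present, 1 = error) is Terraform's documented behaviour for that flag.

```python
"""Detect infrastructure drift by wrapping `terraform plan` -- illustrative sketch."""
import subprocess
import sys


def terraform_drift_check(workdir: str) -> int:
    """Run `terraform plan -detailed-exitcode`: 0 = clean, 2 = drift, 1 = error."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        print(f"DRIFT detected in {workdir}:")
        print(result.stdout)
    elif result.returncode == 1:
        print(f"ERROR running plan in {workdir}:", result.stderr, file=sys.stderr)
    else:
        print(f"{workdir}: no changes")
    return result.returncode


if __name__ == "__main__":
    # The directory is a placeholder; in a pipeline it would come from the job matrix.
    raise SystemExit(terraform_drift_check("./environments/prod"))
```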
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role: We are looking for a seasoned DevOps Engineer with 8–10 years of experience to lead our deployment processes and infrastructure strategies. This role is ideal for someone with deep knowledge of cloud platforms, automation tools, microservices architecture, and CI/CD pipelines. You will play a key role in ensuring scalability, security, and performance across our systems.
Key Responsibilities:
Take end-to-end ownership of CI/CD pipelines and infrastructure deployment.
Architect and manage scalable cloud solutions (preferably GCP) for microservices-based applications.
Collaborate with engineering, QA, and product teams to streamline release cycles.
Monitor, troubleshoot, and optimize system performance and uptime.
Build and maintain containerization using Docker and orchestration with Kubernetes.
Implement infrastructure-as-code using Terraform, Ansible, or equivalent tools.
Ensure effective deployment and scaling of microservices.
Drive automation in infrastructure, monitoring, and alerting systems.
Conduct root cause analysis and resolve critical production issues.
Required Skills & Qualifications:
8–10 years of experience in DevOps or similar engineering roles.
Strong command of microservices deployment and management in cloud environments.
Expertise in GCP, AWS, or Azure.
Proficiency in Git, GitHub workflows, and CI/CD tools (Jenkins, GitLab CI/CD).
Knowledge of containerization (Docker) and orchestration (Kubernetes).
Strong scripting skills in Shell, Python, or similar.
Familiarity with JavaScript frameworks (Node.js, React) and their deployments.
Experience with databases (SQL and NoSQL) and tools like Elasticsearch, Hive, Spark, or Presto.
Understanding of secure development practices and information security standards.
Preferred: Immediate joiners and local candidates from Noida.
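To illustrate the Kubernetes deployment checks this role involves, here is a minimal Python sketch that reports rollout status for a Deployment using the official kubernetes client (assumed installed); the namespace and Deployment name are placeholders.

```python
"""Check rollout status of a Deployment -- illustrative sketch."""
from kubernetes import client, config

NAMESPACE = "production"       # placeholder
DEPLOYMENT = "orders-service"  # placeholder


def main() -> int:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"{DEPLOYMENT}: {ready}/{desired} replicas ready")
    return 0 if ready == desired and desired > 0 else 1


if __name__ == "__main__":
    raise SystemExit(main())
```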
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled and proactive Senior DevOps Specialist to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities: Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have: Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. 
Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
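As an illustration of the MLOps tracking work mentioned in this role (DVC/MLflow, reproducibility), here is a minimal Python sketch that records parameters and a metric for a run with MLflow (assumed installed); the tracking URI, experiment name, and logged values are placeholders standing in for a real training pipeline.

```python
"""Record a reproducible training run with MLflow -- illustrative sketch."""
import mlflow

# Placeholder tracking server; locally MLflow defaults to ./mlruns if this is omitted.
mlflow.set_tracking_uri("http://mlflow.internal.example:5000")
mlflow.set_experiment("churn-model")


def train_and_log() -> None:
    with mlflow.start_run(run_name="baseline"):
        # Parameters and metrics are placeholders standing in for a real training loop.
        mlflow.log_param("learning_rate", 0.01)
        mlflow.log_param("max_depth", 6)
        mlflow.log_metric("val_auc", 0.87)
        # Artifacts (model files, configs, DVC data hashes) would be logged here, e.g.:
        # mlflow.log_artifact("model.pkl")


if __name__ == "__main__":
    train_and_log()
```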
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have over 10 years of experience working in a large enterprise with diverse teams. Specifically, you should possess at least 6 years of expertise in APM and monitoring technologies and a minimum of 3 years of experience with ELK. Your responsibilities will include designing and implementing efficient log shipping and data ingestion processes, collaborating with development and operations teams to enhance logging capabilities, and configuring components of the Elastic Stack such as Filebeat, Metricbeat, Winlogbeat, Logstash, and Kibana. Additionally, you will be required to create and maintain comprehensive documentation for Elastic Stack configurations, ensure seamless integration between the various components, enhance Kibana dashboards and visualizations, and manage on-premises Elasticsearch clusters. Hands-on experience with scripting and programming tools like Python, Bash, and Ansible, as well as knowledge of security hardening, vulnerability/compliance, and CI/CD deployment pipelines, is essential for this role. You should also have a strong understanding of performance monitoring, metrics, planning, and management, and the ability to apply systematic and creative problem-solving approaches. Experience in application onboarding, the ability to influence other teams to adopt best practices, effective communication skills, and familiarity with tools like ServiceNow, Confluence, and JIRA are highly desirable. An understanding of SRE and DevOps principles is also crucial. In terms of technical skills, you should be proficient in APM and monitoring tools like ELK, AppDynamics, and PagerDuty; programming languages such as Java, .NET, and Python; operating systems like Linux and Windows; automation tools including GitLab and Ansible; container orchestration with Kubernetes; and cloud platforms like Microsoft Azure and AWS. If you meet these qualifications and are interested in this opportunity, please share your resume at gopinath.sonaiyan@flyerssoft.com.
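To illustrate the Elasticsearch cluster management described in this role, here is a minimal Python sketch of a cluster health check using the official elasticsearch client (assumed installed); the endpoint is a placeholder, and production clusters would also need authentication and TLS settings.

```python
"""Quick Elasticsearch cluster health check -- illustrative sketch."""
from elasticsearch import Elasticsearch

# Placeholder endpoint; real clusters typically require auth and TLS configuration.
es = Elasticsearch("http://elasticsearch.internal.example:9200")


def main() -> int:
    health = es.cluster.health()
    status = health["status"]  # green / yellow / red
    print(
        f"cluster={health['cluster_name']} status={status} "
        f"nodes={health['number_of_nodes']} "
        f"unassigned_shards={health['unassigned_shards']}"
    )
    return 0 if status == "green" else 1


if __name__ == "__main__":
    raise SystemExit(main())
```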
Posted 1 week ago