8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, We are seeking an Infrastructure Automation Engineer to automate infrastructure provisioning, configuration, and management using infrastructure-as-code (IaC) tools and cloud-native technologies. Key Responsibilities: Develop and maintain IaC templates using Terraform, Ansible, or similar tools. Automate deployment and monitoring of servers, networks, and storage. Implement continuous delivery/integration pipelines for infrastructure changes. Ensure environments are consistent, scalable, and secure. Collaborate with cloud, security, and DevOps teams. Required Skills & Qualifications: Expertise in IaC (Terraform, CloudFormation, Pulumi). Strong scripting skills (Python, Bash, PowerShell). Experience with configuration management tools (Ansible, Chef, Puppet). Familiarity with CI/CD tools and practices. Solid understanding of networking, cloud computing, and Linux/Windows systems. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
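As a rough illustration of the IaC work this posting describes, the Python sketch below emits a minimal Terraform configuration in Terraform's JSON syntax (a `.tf.json` file is read by Terraform just like HCL). The resource name, AMI, and tag are placeholders, not values from any real environment.

```python
import json

def render_instance_tf(name, ami, instance_type):
    """Build a minimal aws_instance resource in Terraform's JSON syntax."""
    return {
        "resource": {
            "aws_instance": {
                name: {
                    "ami": ami,
                    "instance_type": instance_type,
                    "tags": {"ManagedBy": "automation"},  # placeholder tag
                }
            }
        }
    }

config = render_instance_tf("web", "ami-12345678", "t3.micro")
print(json.dumps(config, indent=2))
```

Writing the dict to `main.tf.json` and running `terraform plan` would validate it; generating JSON from a higher-level language is one common way teams template otherwise repetitive IaC.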
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Lucknow
Remote
Job Title: JAMF Engineer Job Overview: We are seeking an experienced JAMF Engineer with 3–6 years of hands-on experience managing Apple devices at scale using JAMF. The role involves providing Tier 2 and Tier 3 support, scripting, and collaborating with US and Europe-based technical teams. Candidates should be well-versed in BASH scripting, familiar with iOS, and possess strong communication and collaboration skills. Responsibilities: Provide Tier 2 and 3 support for JAMF Pro environments and escalated Tier 1 support from the client. Collaborate with the technical team to automate solutions and create quick fixes in Self Service. Develop custom scripts and solutions to manage Apple devices effectively. Meet regularly with the technical team to ensure all support needs are met. Assist with installation, monitoring, and configuration of JAMF tools including JAMF Connect, JAMF Protect, and JAMF Infrastructure Manager. Qualifications: Must-Have: 3–6 years of experience managing Apple devices with JAMF. Strong scripting experience (BASH required; Python or AppleScript a plus). Proficiency in JAMF Connect / Protect, macOS lifecycle management, device enrollment (DEP/ABM), VPP, and MDM profiles. Experience in maintaining secure macOS environments in regulated industries (HIPAA, SOC 2). Preferred: JAMF 400 certification (preferred) or JAMF 300 certification (plus). Familiarity with Zero Trust frameworks and Apple security features. Contributions to tools like JAMF Toolkit or open-source Mac Admin utilities. Experience with Python and AppleScript scripting. Technical Skills & Tools: JAMF Connect, JAMF Protect, JAMF Infrastructure Manager. Bash scripting; Python and AppleScript (preferred). Self Service; JAMF Toolkit. Soft Skills: Excellent written and verbal communication skills. Ability to collaborate with stakeholders and remote teams. Self-starter with a proactive and detail-oriented approach. Strong team player with presentation skills.
Familiarity with G-Suite and remote work etiquette. Additional Information: Work requires time overlap until 1 PM EST to coordinate with US and Europe-based teams. Mac administration knowledge is a plus.
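As a flavor of the device-management scripting this role involves, here is a small Python sketch that flags Macs below a minimum macOS version. The record shape is illustrative only, not the exact Jamf Pro API schema.

```python
# Triage computer-inventory records (shape is a simplified stand-in for
# what a Jamf Pro inventory export might contain) to find Macs whose
# OS version is below a minimum baseline.
def needs_update(record, minimum=(14, 5)):
    major, minor, *_ = (int(p) for p in record["osVersion"].split("."))
    return (major, minor) < minimum

inventory = [
    {"name": "mac-001", "osVersion": "14.6.1"},
    {"name": "mac-002", "osVersion": "13.6.7"},
]
stale = [r["name"] for r in inventory if needs_update(r)]
print(stale)  # only mac-002 is below the 14.5 baseline
```

In practice the same filter could drive a Smart Group or a Self Service remediation policy; the threshold tuple keeps the comparison numeric rather than string-based.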
Posted 1 week ago
4.0 - 9.0 years
15 - 30 Lacs
Hyderabad
Work from Office
The position involves designing, developing, and deploying UVM-based reusable testbenches for RTL unit blocks, sub-systems, and top-level systems, with emphasis on verifying functionality and generating code/functional coverage reports. The candidate should come up with test plans and test cases to achieve 100% code coverage and functional coverage. Educational Qualification: Bachelor's degree in Electronics, Embedded Programming, ECE, or EEE. Key Requirements: Experience in ASIC/FPGA verification using SystemVerilog. Develop and sign off on test plans and test cases. Strong knowledge of digital design, Verilog, SystemVerilog, UVM, C/C++. Experience in AMBA AHB/AXI/APB-based IP design/verification. Experience in the use of assertions, constrained-random generation, and functional and code coverage. Experience in FPGA design and FPGA EDA tools is a plus. Experience in scripting languages such as TCL, Perl, Bash, and Python to automate verification methodologies and flows. Able to build and set up scalable simulation/verification environments.
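The scripting side of such a verification flow might look like the following Python sketch, which aggregates per-block coverage numbers (as a regression script might parse them from simulator reports; the block names and figures are invented) and reports blocks short of the 100% target:

```python
# Merge per-block coverage results and flag blocks that miss the target.
# In a real flow these numbers would be parsed from simulator coverage
# reports (e.g., URG/IMC text output); here they are hard-coded examples.
def coverage_gaps(results, target=100.0):
    """Return {block: pct} for every block below the coverage target."""
    return {blk: pct for blk, pct in results.items() if pct < target}

results = {"alu": 100.0, "fifo": 97.3, "axi_if": 100.0}
gaps = coverage_gaps(results)
print(gaps)  # the fifo block still has holes to close
```

A regression wrapper could exit nonzero when `gaps` is non-empty, turning the 100%-coverage sign-off criterion into an automated gate.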
Posted 1 week ago
10.0 - 15.0 years
12 - 22 Lacs
Pune
Hybrid
So, what’s the role all about? The Senior Specialist Technical Support Engineer delivers technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics, and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization. How will you make an impact? Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS. Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address CSS Recording and Compliance application-related product issues and resolve high-level escalations. Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar. Collaborate with development and operations teams to identify root causes and implement long-term solutions. Provide technical guidance and mentorship to junior support engineers. Act as an escalation point for critical customer issues, ensuring timely resolution and communication. Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency. Participate in on-call rotations and incident response efforts. Have you got what it takes? 10+ years of experience in technical support, site reliability engineering, or performance engineering roles. Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation.
Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments. Strong knowledge of networking, load balancing, and distributed systems. Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Excellent communication and customer-facing skills. Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer). Experience with observability platforms (e.g., Prometheus, Grafana, Splunk). Familiarity with CI/CD pipelines and DevOps practices. Experience working in ITIL or similar support frameworks. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7554 Reporting into: Tech Manager Role Type: Individual Contributor
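As a small illustration of the log-driven performance triage this role centers on, the Python sketch below pulls request latencies out of log lines (the log format is invented) and computes a nearest-rank p95:

```python
import re

# Hypothetical application log lines; real logs would come from
# CloudWatch, Datadog, or similar, but the parsing pattern is the same.
LOG = """\
2024-05-01T10:00:00Z GET /api/search latency_ms=120
2024-05-01T10:00:01Z GET /api/search latency_ms=340
2024-05-01T10:00:02Z GET /api/search latency_ms=95
2024-05-01T10:00:03Z GET /api/search latency_ms=2100
"""

latencies = sorted(int(m) for m in re.findall(r"latency_ms=(\d+)", LOG))
# Nearest-rank p95: pick the value at 95% of the way through the sorted list.
p95 = latencies[round(0.95 * (len(latencies) - 1))]
print(f"p95 latency: {p95} ms")
```

With only four samples the p95 lands on the outlier, which is exactly why percentile views (rather than averages) surface the tail behavior customers actually complain about.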
Posted 1 week ago
5.0 - 10.0 years
3 - 5 Lacs
Bengaluru
Work from Office
Responsibilities Design and implement cloud-based infrastructure (AWS, Azure, or GCP) Develop and maintain CI/CD pipelines to ensure smooth deployment and delivery processes Manage containerized environments (Docker, Kubernetes) and infrastructure-as-code (Terraform, Ansible) Monitor system health, performance, and security; respond to incidents and implement fixes Collaborate with development, QA, and security teams to streamline workflows and enhance automation Lead DevOps best practices and mentor junior engineers Optimize costs, performance, and scalability of infrastructure Ensure compliance with security standards and best practices Requirements 5+ years of experience in DevOps, SRE, or related roles Strong experience with cloud platforms (AWS, Azure, GCP) Proficiency with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.) Expertise in container orchestration (Kubernetes, Helm) Solid experience with infrastructure-as-code (Terraform, CloudFormation, Ansible) Good knowledge of monitoring/logging tools (Prometheus, Grafana, ELK, Datadog) Strong scripting skills (Bash, Python, or Go)
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Responsibilities Design, implement, and maintain cloud infrastructure (AWS, Azure) Manage containerized applications using Kubernetes and Docker Automate infrastructure provisioning and configuration with Terraform Collaborate with development teams to streamline CI/CD pipelines Requirements 7+ years of experience as a DevOps Engineer Proficiency in at least one major cloud provider (AWS or Azure) Solid understanding of Kubernetes and Docker Experience with configuration management tools (e.g., Ansible, Puppet) Strong scripting skills (Bash, Python) Nice to have Cloud Certification Experience with Kubernetes
Posted 1 week ago
5.0 - 6.0 years
10 - 14 Lacs
Kochi, Ernakulam, Thiruvananthapuram
Work from Office
Proficient in AWS & DevOps platforms, designing and implementing advanced cloud solutions in Linux and Windows environments and for cloud-native applications. Understanding of cloud computing technologies, storage options, and serverless architectures. Experience in AWS & DevOps required. Candidate profile: Skilled in Python, PowerShell, Bash. Proficient in cloud infrastructure services (EC2, S3, RDS, Lambda, CloudFormation, VMs, Blob Storage, SQL Database, ARM Templates, IAM, AD, KMS, Key Vault, Shield). Perks and benefits: Reimbursements & perks in addition.
Posted 1 week ago
3.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Remote
Senior Cloud Engineer Job Description Position Title: Senior Cloud Engineer - AWS Location: Remote Position Overview: The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering. Key Responsibilities: Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP). Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration. Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes. Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements. Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools. Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management. Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation. Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues. Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence. Stay current with emerging
cloud technologies, trends, and best practices, recommending improvements and driving innovation. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience. 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments. Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments. Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration. Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef). Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions. Experience with cloud security, governance, and compliance frameworks. Excellent analytical, troubleshooting, and root cause analysis skills. Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams. Ability to work independently, manage multiple priorities, and lead complex projects to completion. Preferred Qualifications: Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect). Experience with cloud cost optimization and FinOps practices. Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.). Exposure to cloud database technologies (SQL, NoSQL, managed database services). Knowledge of cloud migration strategies and hybrid cloud architectures.
Posted 1 week ago
1.0 - 6.0 years
6 - 13 Lacs
Bengaluru
Work from Office
Position Summary: We are seeking an experienced and highly skilled Lead LogicMonitor Administrator to architect, deploy, and manage scalable observability solutions across hybrid IT environments. This role demands deep expertise in LogicMonitor and a strong understanding of modern IT infrastructure and application ecosystems, including on-premises, cloud-native, and hybrid environments. The ideal candidate will play a critical role in designing real-time service availability dashboards, optimizing performance visibility, and ensuring comprehensive monitoring coverage for business-critical services. Role & Responsibilities: Monitoring Architecture & Implementation: Serve as the subject matter expert (SME) for LogicMonitor, overseeing design, implementation, and continuous optimization. Lead the development and deployment of monitoring solutions that integrate on-premises infrastructure, public cloud (AWS, Azure, GCP), and hybrid environments. Develop and maintain monitoring templates, escalation chains, and alerting policies that align with business service SLAs. Ensure monitoring solutions adhere to industry standards and compliance requirements. Real-Time Dashboards & Visualization: Design and build real-time service availability dashboards to provide actionable insights for operations and leadership teams. Leverage LogicMonitor’s APIs and data sources to develop custom visualizations, ensuring a single-pane-of-glass view for multi-layered service components. Collaborate with application and service owners to define KPIs, thresholds, and health metrics. Interpret monitoring data and metrics related to uptime and performance. Automation & Integration: Automate onboarding/offboarding of monitored resources using LogicMonitor’s REST API, Groovy scripts, and Configuration Modules. Integrate LogicMonitor with ITSM tools (e.g., ServiceNow, Jira), collaboration platforms (e.g., Slack, Teams), and CI/CD pipelines.
Enable proactive monitoring through synthetic transactions and anomaly detection capabilities. Streamline processes through automation and integrate monitoring with DevOps practices. Operations & Optimization: Perform ongoing health checks, capacity planning, tool version upgrades, and tuning of monitoring thresholds to reduce alert fatigue. Establish and enforce monitoring standards, best practices, and governance models across the organization. Lead incident response investigations, root cause analysis, and post-mortem reviews from a monitoring perspective. Optimize monitoring strategies for effective resource utilization and cost efficiency. Qualification: Minimum Educational Qualifications: Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field. Required Skills & Qualifications: 8+ years of total experience. 5+ years of hands-on experience with LogicMonitor, including custom DataSources, PropertySources, dashboards, and alert tuning. Proven expertise in IT infrastructure monitoring: networks, servers, storage, virtualization (VMware, Nutanix), and containerization (Kubernetes, Docker). Strong understanding of cloud platforms (AWS, Azure, GCP) and their native monitoring tools (e.g., CloudWatch, Azure Monitor). Experience in scripting and automation (e.g., Python, PowerShell, Groovy, Bash). Familiarity with observability stacks (e.g., ELK, Grafana) is a strong plus. Proficient with ITSM and incident management processes, including integrations with ServiceNow. Excellent problem-solving, communication, and documentation skills. Ability to work collaboratively in cross-functional teams and lead initiatives. Preferred Qualifications: LogicMonitor certification (LMCA, LMCP) or similar. Experience with APM tools (e.g., SolarWinds, AppDynamics, Dynatrace, Datadog), log analytics platforms, and LogicMonitor observability offerings. Knowledge of DevOps practices and CI/CD pipelines.
Exposure to regulatory/compliance monitoring (e.g., HIPAA, PCI, SOC 2). Experience with machine learning or AI-based monitoring solutions. Additional Information Intuitive is an Equal Employment Opportunity Employer. We provide equal employment opportunities to all qualified applicants and employees, and prohibit discrimination and harassment of any type, without regard to race, sex, pregnancy, sexual orientation, gender identity, national origin, color, age, religion, protected veteran or disability status, genetic information or any other status protected under federal, state, or local applicable laws. We will consider for employment qualified applicants with arrest and conviction records in accordance with fair chance laws.
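The alert-fatigue tuning mentioned in this posting can be sketched tool-agnostically. The snippet below uses a hypothetical event shape (not the LogicMonitor API) to identify resources whose alerts flapped more than a threshold number of times in one window, which are candidates for threshold retuning or suppression:

```python
from collections import Counter

# Identify "flapping" resources: ones that produced more alert events in
# a window than a sanity threshold. The event dicts are illustrative, not
# any monitoring tool's real payload format.
def flapping_resources(events, max_events=3):
    """Return a sorted list of resources exceeding max_events in the window."""
    counts = Counter(e["resource"] for e in events)
    return sorted(r for r, n in counts.items() if n > max_events)

events = [{"resource": "db-01"}] * 5 + [{"resource": "web-01"}] * 2
print(flapping_resources(events))  # db-01 fired 5 times, web-01 only 2
```

In a real deployment the event stream would come from the monitoring platform's API or webhook feed, and flagged resources would get wider thresholds, longer alert-trigger intervals, or temporary suppression windows.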
Posted 1 week ago
6.0 - 11.0 years
10 - 16 Lacs
Bengaluru
Work from Office
What You’ll Do: Develop, test, debug, and troubleshoot containerized applications hosted on Kubernetes. Manage Kubernetes clusters, including deployment, scaling, and maintenance of containerized applications. Design, implement, and manage cloud infrastructure using AWS and Azure to ensure high availability and scalability. Collaborate with development and operations teams to enhance our CI/CD pipelines for efficient code deployment and testing. Implement and maintain monitoring, logging, and alerting solutions for system and application health. Automate infrastructure provisioning and configuration using Terraform (IaC). Stay current with emerging DevOps technologies and industry best practices. What You’ll Bring: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). At least 6+ years of relevant experience as a DevOps Engineer. In-depth knowledge of AWS/Azure services such as IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, ACR, etc. Strong knowledge of containers (Docker) and container orchestration; hands-on Kubernetes experience with AKS/EKS. Hands-on experience with Linux and related concepts. Strong CI/CD skills, preferably Azure DevOps. Experience in setting up logging and monitoring with tools such as Prometheus, Loki, Promtail, Grafana. Good to have: coding skills for automating routine activities using Python/Bash; experience with Terraform for automating cloud infrastructure; understanding of end-to-end product development and AI/ML concepts. Experience in source code management: GitLab/GitHub/Bitbucket, preferably Azure DevOps. Ability to collaborate with cross-functional teams. Good communication skills to explain technical ideas to non-technical people.
Additional Skills: Understanding of DevOps CI/CD and data security; experience designing on cloud platforms. Willingness to travel to other global offices as needed to work with the client or other internal project teams.
Posted 1 week ago
6.0 - 9.0 years
20 - 27 Lacs
Gurugram
Hybrid
The Team: The Financial Risk Analytics (FRA) Cloud Development and Operations team develops, maintains, and supports deployments for our Risk as a Service offering. It maintains all infrastructure and cloud-native aspects of the solution. The Impact: As part of the FRA Cloud team, you will help develop, maintain, and secure the 24/7 infrastructure our clients depend on for their business operations. What’s in it for you: As part of a team with a broad engineering and operational mandate, there are multiple directions in which to grow your career. You will be part of a global team supporting a global customer base. You will be exposed to the latest technologies as we continually refine our deployments to take advantage of them. Responsibilities: Building AWS infrastructure as code to support our hosted offering. Identifying, documenting, and tracking software defects to resolution. Continuous improvement of infrastructure components, cloud security, and reliability of services. Automation of the end-to-end release cycle. Operational support for cloud infrastructure, including incident response and maintenance. What We’re Looking For: 5+ years of experience with Amazon Web Services (AWS). Experience with container-based deployment technologies such as Amazon ECS and Amazon EKS. Experience with Infrastructure as Code technologies such as CloudFormation and Terraform. Development skills with Python and/or Bash. Strong collaboration skills in a team environment. Excellent written and verbal communication skills.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Responsibilities: As an Infra AI Automation SME, you will work on CI/CD implementations and automation through Ansible, Terraform, Puppet, Kubernetes, and PowerShell, with hands-on experience in on-prem and cloud infrastructure technologies. Design, develop, and implement Terraform configurations for provisioning and managing cloud infrastructure across various platforms (AWS, Azure, GCP, etc.). Create modular and reusable Terraform modules for consistent and efficient infrastructure management. Automate infrastructure deployments and lifecycle management, including provisioning, configuration, and updates. Integrate Terraform with CI/CD pipelines for seamless and continuous deployments. Troubleshoot and debug Terraform code to identify and resolve issues. Stay up to date with the latest Terraform features and best practices. Collaborate with developers, DevOps engineers, and other stakeholders to understand requirements and design optimal infrastructure solutions. Experience with Terraform, including writing and deploying infrastructure configurations. Strong understanding of IaC principles and best practices. Proficiency in scripting languages like Python or Bash for writing Terraform modules and custom functionality. Document Terraform configurations for clarity and maintainability. Additional Responsibilities: Good communication skills. Good analytical and problem-solving skills. Technical and Professional Requirements: At least 6+ years of experience with infra automation tools. Proven experience in designing, developing, and implementing automation solutions for infrastructure tasks. Strong proficiency in one or more scripting languages relevant to infrastructure automation (e.g., Python, Bash, PowerShell). Hands-on experience with infrastructure-as-code (IaC) tools such as Ansible or Terraform. Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet) is highly desirable.
Good working knowledge of Docker or Kubernetes. Understanding of IT infrastructure components (servers, networking, storage, cloud). Experience with integrating automation with existing IT management tools. Preferred Skills: Technology->Microsoft Technologies->Windows PowerShell Technology->Infra_ToolAdministration->Infra_ToolAdministration-Others Technology->DevOps->Continuous delivery - Environment management and provisioning->Ansible Technology->Container Platform->Docker Technology->Container Platform->Kubernetes Generic Skills: Technology->Cloud Platform->AWS Core services Technology->Cloud Platform->Azure Core services Educational Requirements: Master of Engineering, Master of Science, Master of Technology, Bachelor of Comp. Applications, Bachelor of Science, Bachelor of Engineering, Bachelor of Technology. Service Line: Cloud & Infrastructure Services
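One concrete shape the Terraform/pipeline integration described above can take is gating a pipeline on the JSON plan that `terraform show -json` produces. The Python sketch below counts planned actions from a trimmed stand-in for that output (the real plan JSON carries many more fields, but `resource_changes[].change.actions` is where the create/update/delete verbs live):

```python
# Summarize a Terraform plan (JSON representation) before applying, so a
# CI/CD stage can, for example, fail on unexpected deletions. The `plan`
# dict is a trimmed example, not full `terraform show -json` output.
def count_actions(plan):
    """Tally create/update/delete actions across all resource changes."""
    counts = {"create": 0, "update": 0, "delete": 0}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action in counts:
                counts[action] += 1
    return counts

plan = {"resource_changes": [
    {"address": "aws_instance.web", "change": {"actions": ["create"]}},
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["delete", "create"]}},
]}
print(count_actions(plan))  # the replace shows up as one delete plus one create
```

A pipeline step could run this over the real plan file and refuse to proceed when `counts["delete"] > 0` without an explicit approval.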
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
RabbitMQ Administrator - Prog Leasing. Job Title: RabbitMQ Cluster Migration Engineer Job Summary: We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new high-availability cluster environment on AWS. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime. Key Responsibilities: Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup. Evaluate the current messaging architecture, performance bottlenecks, and limitations. Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed). Ensure high availability, fault tolerance, and disaster recovery configurations. Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans. Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes). Monitor message queues during migration to ensure message durability and delivery guarantees. Document all aspects of the architecture, configurations, and migration process. Required Qualifications: Strong experience with RabbitMQ, especially in clustered and high-availability environments. Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues. Experience with RabbitMQ management plugins, monitoring, and performance tuning. Proficiency with scripting languages (e.g., Bash, Python) for automation. Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.). Experience with zero-downtime migration and rollback strategies. Preferred Qualifications: Experience migrating RabbitMQ clusters in production environments.
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
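A cutover-readiness check of the sort this migration calls for might be sketched as follows. The queue dicts mimic the `name`/`messages` fields that the RabbitMQ management API returns from `GET /api/queues`, but the drain-check logic itself is illustrative:

```python
# Decide whether the old cluster's queues have drained enough to switch
# traffic to the new cluster. Input dicts imitate the management API's
# queue listing; in practice they would come from an HTTP call.
def ready_for_cutover(queues, max_backlog=0):
    """Return (ok, blocked_queue_names) based on remaining message counts."""
    blocked = [q["name"] for q in queues if q["messages"] > max_backlog]
    return (len(blocked) == 0, blocked)

queues = [
    {"name": "orders", "messages": 0},
    {"name": "audit", "messages": 42},
]
ok, blocked = ready_for_cutover(queues)
print(ok, blocked)  # audit still has a backlog, so hold the cutover
```

Polling this check in a loop (with producers already pointed at the new cluster) is one common way to bound the window in which messages could be stranded on the old brokers.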
Posted 2 weeks ago
8.0 - 12.0 years
0 - 0 Lacs
Pune
Hybrid
So, what’s the role all about? We are seeking a highly motivated and skilled DevOps Engineer with cloud experience to join our growing team. You will play a vital role in bridging the gap between development and operations, ensuring seamless deployments and efficient infrastructure management for our cloud-based applications. You will possess expertise in DevOps methodologies, cloud platforms, and automation tools to enable rapid and reliable software delivery. How will you make an impact? Infrastructure Management: Provision and manage cloud infrastructure resources (e.g., VMs, containers, Kubernetes clusters) on platforms like AWS, Azure, or GCP. Implement IaC practices using tools like Terraform or Ansible to automate infrastructure provisioning and configuration. Monitor and troubleshoot infrastructure performance, ensuring scalability and high availability. CI/CD Pipelines: Design and implement CI/CD pipelines using tools like Jenkins, GitLab CI, or Azure DevOps. Integrate automated testing and code deployment into the pipeline for continuous delivery. Optimize pipelines for efficiency and reliability. Version Control and Collaboration: Utilize Git for version control, collaborating effectively with developers on code changes and deployments. Ensure proper code review and approval processes are followed. Security and Compliance: Implement security best practices for cloud infrastructure and applications. Maintain compliance with relevant security standards and regulations. Automation: Develop and implement automation scripts for repetitive tasks using tools like Bash, Python, or PowerShell. Continuously improve automation processes for greater efficiency and productivity. Work and collaborate in multi-disciplinary Agile teams, adopting Agile spirit, methodology, and tools. Interface with various R&D groups and with support tiers. Have you got what it takes?
Degree in Computer Science or Industrial/Electronic Engineering. 8-12 years of experience leading DevOps toolset adoption and environment provisioning, on premises and on cloud (AWS, 4-5 years). Should define and implement DevOps strategy and plans. Should have hands-on experience configuring and troubleshooting DevOps toolsets (or equivalents): GitHub, Jenkins, Ansible, JFrog, Maven, Ant, MSBuild, code security (dynamic and static scans), etc. Experience working with public cloud infrastructure and technologies such as Amazon Web Services (AWS), Google Cloud, or Azure. Experience working in and driving Continuous Integration and Delivery practices using industry-standard tools such as Jenkins, Terraform, Docker, Kubernetes, and Artifactory. Exposure to setting up DevOps on cloud. Self-motivated and fast learner with a strong sense of ownership and drive. Good interpersonal and communication skills; friendly disposition; works effectively as a team player. Ability to work independently and collaboratively. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7190 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 2 weeks ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to the Azure SaaS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure.
How will you make an impact?
Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions, on-prem and in the public cloud. Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows. Create integrations with REST APIs and other services to ingest and process external/internal data. Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured). Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers. Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance. Build automated data pipelines with data consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards. Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions. Experience building Infrastructure-as-Code (IaC) solutions using tools like Terraform, Bicep, or ARM templates. Develop end-to-end workflow automation across the customer onboarding journey, from Day 1 to Day 2, with minimal manual intervention.
Have you got what it takes?
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience). Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash). Experience working with and integrating REST APIs. Experience with IaC and configuration management tools (e.g., Terraform, Ansible). Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana). Familiarity with modern version control systems (e.g., GitHub). Excellent problem-solving skills and attention to detail. Ability to work with development and operations teams to achieve desired results on common projects. Strategic thinker, capable of learning new technologies quickly. Good communication with peers, subordinates, and managers.
You will have an advantage if you also have: Experience with AKS infrastructure administration. Experience orchestrating automation with Azure Automation tools like Logic Apps. Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO). Certifications in vendor- or industry-specific technologies.
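The reconciliation checks this role describes can be reduced to a simple shape: compare per-table record counts between a source extract and a target load. A hedged sketch (table names and counts are illustrative, not from any real system):

```python
def reconcile(source_counts, target_counts):
    """Return {table: (source, target)} for every mismatched table."""
    mismatches = {}
    # Union of keys catches tables missing entirely from one side.
    for table in source_counts.keys() | target_counts.keys():
        s = source_counts.get(table, 0)
        t = target_counts.get(table, 0)
        if s != t:
            mismatches[table] = (s, t)
    return mismatches

# Hypothetical counts pulled from a source SQL query and a target index.
src = {"tenants": 120, "users": 4500, "configs": 88}
dst = {"tenants": 120, "users": 4498, "configs": 88}
diff = reconcile(src, dst)  # only "users" disagrees
```

In practice the two dicts would be populated from SQL queries and Elasticsearch aggregations, and the mismatch report would feed a dashboard or alert.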
Requisition ID: 7454 Reporting into: Director Role Type: Individual Contributor
Posted 2 weeks ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? In this position we are looking for a strong DevOps Engineer to work with Professional Services teams, Solution Architects, and Engineering teams, managing on-prem to Azure cloud onboarding and Cloud Infra & DevOps solutions. The Engineer will work with the US and Pune Cloud Services and Operations teams as well as other support teams across the globe. We are seeking a talented DevOps Engineer with strong PowerShell scripting skills to join our team. As a DevOps Engineer, you will be responsible for developing and implementing cloud automation workflows and enhancing our cloud monitoring and self-healing capabilities, as well as managing our infrastructure and ensuring its reliability, scalability, and security. We encourage innovative ideas, flexible work methods, knowledge collaboration, and good vibes!
How will you make an impact?
Define, build, and manage automated cloud workflows enhancing the overall customer experience in the Azure SaaS environment, saving time, cost, and resources. Automate pre-/post-Host/Tenant upgrade checklists and processes in the Azure SaaS environment. Implement and manage the continuous integration and delivery pipeline to automate software delivery processes. Collaborate with software developers to ensure that new features and applications are deployed in a reliable and scalable manner. Automate the DevOps pipeline and provisioning of environments. Manage and maintain our cloud infrastructure, including provisioning, configuration, and monitoring of servers and services. Provide technical guidance and support to other members of the team. Manage Docker containers and Kubernetes clusters to support our microservices architecture and containerized applications. Implement and manage networking, storage, security, and monitoring solutions for Docker and Kubernetes environments. Experience with integration of service management, monitoring, logging, and reporting tools like ServiceNow, Grafana, Splunk, Power BI, etc.
Have you got what it takes? 4-7 years of experience as a DevOps engineer, preferably with Azure. Strong understanding of Kubernetes & Docker, Ansible, Terraform, and Azure SaaS infrastructure. Strong understanding of DevOps tools such as AKS, Azure DevOps, GitHub, GitHub Actions, and logging mechanisms. Working knowledge of Azure services and compliance regimes like CJIS/PCI/SOC. Exposure to enterprise software architectures, infrastructures, and integration with Azure (or any other cloud solution). Experience with application monitoring metrics. Hands-on experience with PowerShell, Bash, and Python. Good knowledge of Linux and Windows servers. Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices. Consistently demonstrates clear and concise written and verbal communication. Passionately enthusiastic about DevOps & cloud technologies. Ability to work independently, multi-task, and take ownership of various parts of a project or initiative. Azure certifications in DevOps and Architecture are good to have.
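The pre-/post-upgrade checklists this posting mentions lend themselves to a simple runner that executes named checks and reports failures. A hedged sketch, with all check names and results hypothetical:

```python
def run_checklist(checks):
    """checks: {name: zero-arg callable returning bool}. Returns failed names."""
    return [name for name, check in checks.items() if not check()]

# Hypothetical pre-upgrade gates; real ones would query Azure, disks, sessions.
checks = {
    "disk_free_ok": lambda: True,
    "backup_recent": lambda: True,
    "no_active_sessions": lambda: False,  # simulated failing gate
}
failures = run_checklist(checks)  # upgrade proceeds only if this is empty
```

A real implementation would likely wire each check to an Azure SDK or PowerShell call and block the tenant upgrade pipeline when `failures` is non-empty.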
Requisition ID: 7452 Reporting into: Director Role Type: Individual Contributor
Posted 2 weeks ago
3.0 - 5.0 years
3 - 6 Lacs
Pune
Work from Office
What You'll Do: CI/CD Pipeline Management: Design, implement, and maintain robust CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps, CircleCI) to automate the build, test, and deployment processes across various environments (Dev, QA, Staging, Production). Infrastructure as Code (IaC): Develop and manage infrastructure using IaC tools (e.g., Terraform, Ansible, CloudFormation, Puppet, Chef) to ensure consistency, repeatability, and scalability of our cloud and on-premise environments. Cloud Platform Management: Administer, monitor, and optimize resources on cloud platforms (e.g., AWS, Azure, GCP), including compute, storage, networking, and security services. Containerization & Orchestration: Implement and manage containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for efficient application deployment, scaling, and management. Monitoring & Alerting: Set up and maintain comprehensive monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK Stack, Nagios, Splunk, Datadog) to proactively identify and resolve performance bottlenecks and issues. Scripting & Automation: Write and maintain scripts (e.g., Python, Bash, PowerShell, Go, Ruby) to automate repetitive tasks, improve operational efficiency, and integrate various tools. Version Control: Manage source code repositories (e.g., Git, GitHub, GitLab, Bitbucket) and implement branching strategies to facilitate collaborative development and version control. Security & Compliance (DevSecOps): Integrate security best practices into the CI/CD pipeline and infrastructure, ensuring compliance with relevant security policies and industry standards. Troubleshooting & Support: Provide Level 2 support, perform root cause analysis for production incidents, and collaborate with development teams to implement timely fixes and preventive measures. 
Collaboration: Work closely with software developers, QA engineers, and other stakeholders to understand their needs, provide technical guidance, and foster a collaborative and efficient development lifecycle. Documentation: Create and maintain detailed documentation for infrastructure, processes, and tools.
Posted 2 weeks ago
1.0 - 5.0 years
10 - 15 Lacs
Chennai
Work from Office
About Company Agilysys is well known for its long heritage of hospitality-focused technology innovation. The Company delivers modular and integrated software solutions and expertise to businesses seeking to maximize Return on Experience (ROE) through hospitality encounters that are both personal and profitable. Over time, customers achieve High Return Hospitality by consistently delighting guests, retaining staff and growing margins. Customers around the world include branded and independent hotels; multi-amenity resort properties; casinos; property, hotel and resort management companies; cruise lines; corporate dining providers; higher education campus dining providers; food service management companies; hospitals; lifestyle communities; senior living facilities; stadiums; and theme parks. The Agilysys Hospitality Cloud™ combines core operational systems for property management (PMS), point-of-sale (POS) and Inventory and Procurement (I&P) with Experience Enhancers™ that meaningfully improve interactions for guests and for employees across dimensions such as digital access, mobile convenience, self-service control, personal choice, payment options, service coverage and real-time insights to improve decisions. Core solutions and Experience Enhancers are selectively combined in Hospitality Solution Studios™ tailored to specific hospitality settings and business needs. Agilysys operates across the Americas, Europe, the Middle East, Africa, Asia-Pacific, and India with headquarters located in Alpharetta, GA. For more information visit Agilysys.com. Mode: Work from Office Description: Agilysys delivers highly-available cloud services for the hospitality industry. We practice Agile methodologies, and our cross-functional teams build strong, collaborative relationships as partners in the delivery of quality solutions. 
As a member of the Agilysys SaaS Operations team, you are responsible for operating SaaS production services, and integrating service operations workflow with product development and customer support lifecycles. You have experience with application administration in a SaaS production environment. You will continually improve the way we deliver software as a service, by automating operations workflows, continually assessing and improving service performance, and cultivating collaboration across the development, support and operations lifecycle. Principal Responsibilities: Act as the primary administrator for designated production SaaS applications. Proactively monitor, apply upgrades & patches, troubleshoot problems and look to find ways to improve system performance and customer satisfaction. Act as a backup administrator for other designated SaaS applications when needed. Act as primary escalation point for production issues with designated SaaS applications. Develop and maintain a deep knowledge of SaaS application functionality, to aid in troubleshooting issues and to mentor field staff in understanding operational capabilities. Work cooperatively with product engineering teams to implement new products, facets of existing products or new solutions that will reduce costs, improve supportability and enhance reliability. Maintain a well-defined production application configuration, and adhere to a disciplined process for introducing change to production. Maintain adherence to standards, policies and procedures. Design and perform routine recurring tasks as defined by SaaS Operations maintenance documentation. Coordinate process efficiency efforts to improve application installation, troubleshooting and maintenance. Write and improve scripts to automate operational tasks. Generate metrics and reports to monitor application up-time, scalability and customer creation. 
Participate on customer or prospect conference calls as necessary to help define solutions or to provide technical consultation to customers on application specifics. Assist in maintaining documentation of disaster recovery processes, security policies, systems enhancements, and other important business processes that will improve the efficiency of systems usage. Maintain processes to meet industry-standard audit & compliance requirements. Experience working in a PCI-certified data center is a plus. Create initial application configuration for new SaaS application instances. Ensure all baseline executables are scheduled; verify backups and the disaster recovery plan. Manage moderately complex operations projects through all stages, including: conception and planning, incorporating input from other teams, proof of concept, build and roll out of the production solution, and documentation. May participate in on-call duties to maintain operational coverage. Education and Experience: Bachelor’s degree with a major course of study in Information Technology, or equivalent experience. 2+ years of recent experience practicing application administration in a production cloud/automated environment. 1+ years of recent experience with Windows or Linux systems administration. Technical Skills: Operating Systems: Experience administering applications hosted on Windows servers. Some familiarity with applications hosted on Linux operating systems. Database: Hands-on experience with Microsoft SQL Server 2016/2017 and an ability to write basic queries and stored procedures. Operations Automation: Knowledge of scripting technologies such as PowerShell, Python, or Bash. Tools: Familiarity with the Atlassian suite: Stash/Bitbucket, Jira, and Confluence. About You: Enjoy working in a fast-paced environment with changing priorities. Cultivate collaborative relationships with team members across the operations lifecycle. Bring a sense of humor and a friendly, collaborative approach to solving problems.
Calmly own and resolve unexpected requests that occur. Seek out opportunities for continual improvement; take ownership and collaborate with your team to implement. Communicate openly and effectively, with team members in operations, product engineering, and product support. Discuss your work with team members, ask questions, openly give and receive advice. Be disciplined and imaginative in your approach to design and engineering. Escalate issues as needed to other members of the SaaS Operations team. Other Desired Experience: Hospitality experience.
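The up-time metrics and reports this role generates usually boil down to one arithmetic step: availability as the fraction of scheduled minutes not lost to downtime. A hedged sketch (the figures are illustrative):

```python
def availability(total_minutes, downtime_minutes):
    """Percentage availability over a period, rounded to 3 decimal places."""
    return round(100.0 * (total_minutes - downtime_minutes) / total_minutes, 3)

# A 30-day month (43,200 minutes) with 43 minutes of downtime is roughly
# "three nines" of availability.
pct = availability(30 * 24 * 60, 43)
```

Fed from incident records, the same calculation per application produces the monthly up-time report mentioned above.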
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
What You'll Do We're hiring a Site Reliability Engineer to help build and maintain the backbone of Avalara's SaaS platforms. As part of our global Reliability Engineering team, you'll play a key role in ensuring the performance, availability, and observability of critical systems used by millions of users. This role combines hands-on infrastructure expertise with modern SRE practices and the opportunity to contribute to the evolution of AI-powered operations. You'll work closely with engineering and operations teams across regions to drive automation, improve incident response, and proactively detect issues using data and machine learning. What Your Responsibilities Will Be Own the reliability and performance of production systems across multiple environments and multiple clouds (AWS, GCP, OCI). Use AI/ML-driven tools and automation to improve observability and incident response. Collaborate with development teams on CI/CD pipelines, infrastructure deployments, and secure practices. Perform root cause analysis, drive postmortems, and reduce recurring incidents. Contribute to compliance and security initiatives (SOX, SOC 2, ISO 27001, access and controls). Participate in a global on-call rotation and knowledge-sharing culture. What You'll Need to be Successful 5+ years in SRE, DevOps, or infrastructure engineering roles. Expertise with AWS (GCP or OCI is a plus); AWS Certified Solutions Architect Associate or equivalent. Strong scripting/programming skills (Python, Go, Bash, or similar). Experience with infrastructure as code (Terraform, CloudFormation, Pulumi). Proficiency in Linux environments, containers (Docker/Kubernetes), and CI/CD workflows. Strong written and verbal communication skills to support worldwide collaboration.
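The ML-assisted issue detection mentioned above often starts with plain statistics before any model is involved. A hedged sketch (sample data and threshold are illustrative) that flags latency outliers by z-score:

```python
import statistics

def anomalies(samples, threshold=2.5):
    """Return samples more than `threshold` population stdevs above the mean.

    Note: with a single extreme outlier among n samples, its z-score is
    capped near sqrt(n - 1), so a modest threshold is used for small windows.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [x for x in samples if (x - mean) / stdev > threshold]

latencies = [100, 102, 98, 101, 99, 100, 97, 103, 250]  # one spike (ms)
spikes = anomalies(latencies)
```

Production systems would compute this over a sliding window per service and route the flagged samples into the incident-response tooling.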
Posted 2 weeks ago
9.0 - 14.0 years
30 - 37 Lacs
Hyderabad
Hybrid
Primary Responsibilities: Design, implement, and maintain scalable, reliable, and secure infrastructure on AWS and EKS Develop and manage observability and monitoring solutions using Datadog, Splunk, and Kibana Collaborate with development teams to ensure high availability and performance of microservices-based applications Automate infrastructure provisioning, deployment, and monitoring using Infrastructure as Code (IaC) and CI/CD pipelines Build and maintain GitHub Actions workflows for continuous integration and deployment Troubleshoot production issues and lead root cause analysis to improve system reliability Ensure compliance with healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR) Work closely with data engineering and analytics teams to support healthcare data pipelines and analytics platforms Mentor junior engineers and contribute to SRE best practices and culture Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's degree in Engineering (B.Tech) or equivalent in Computer Science, Information Technology, or a related field 10+ years of experience in Site Reliability Engineering, DevOps, or related roles Hands-on experience with AWS services, EKS, and container orchestration Experience with healthcare technology solutions, health data interoperability standards (FHIR, HL7), and healthcare analytics Experience with GitHub Actions or similar CI/CD tools Solid expertise in Datadog, Splunk, Kibana, and other observability tools Deep understanding of microservices architecture and distributed systems Proficiency in Python for scripting and automation Solid scripting and automation skills (e.g., Bash, Terraform, Ansible) Proven problem-solving, communication, and collaboration skills Preferred Qualifications: Certifications in AWS, Kubernetes, or healthcare IT (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) Experience with security and compliance in healthcare environments
Posted 2 weeks ago
5.0 - 9.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Kubernetes, Linux, Cloud, Ansible
1. System Administration and Automation: Develop and maintain Bash scripts to automate routine tasks and improve operational efficiency. Ensure the stability and reliability of IPTV servers and infrastructure.
2. Configuration Management: Deploy, configure, and manage IPTV services using Ansible, ensuring consistent and efficient operations. Monitor and maintain configuration standards across the environment.
3. Container Orchestration: Manage and maintain Kubernetes clusters for containerized IPTV applications. Ensure high availability, scalability, and efficient resource utilization of IPTV platforms.
4. Troubleshooting and Support: Diagnose and resolve technical issues, including IPTV system errors and network performance challenges. Provide timely support to internal teams to ensure a positive viewing experience.
5. System Monitoring and Optimization: Utilize monitoring tools to track IPTV system performance and identify potential bottlenecks.
Kubernetes, Linux Administration, Ansible, Python
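The cluster-health checks this role automates often parse tool output directly. A hedged sketch, assuming a simplified `kubectl get pods`-style table (pod names and statuses are hypothetical):

```python
# Simplified, assumed output format; real kubectl output has more columns
# and is better consumed as JSON (`kubectl get pods -o json`).
SAMPLE = """\
NAME                  READY   STATUS             RESTARTS   AGE
iptv-origin-7d4f9     1/1     Running            0          3d
iptv-packager-5c2b1   0/1     CrashLoopBackOff   12         3d
iptv-edge-9a8e2       1/1     Running            1          12h
"""

def unhealthy_pods(output):
    """Return (name, status) for every pod not in the Running state."""
    bad = []
    for line in output.splitlines()[1:]:  # skip the header row
        name, ready, status = line.split()[:3]
        if status != "Running":
            bad.append((name, status))
    return bad

problems = unhealthy_pods(SAMPLE)
```

Wrapped in a cron job or a monitoring probe, the same check becomes the kind of routine-task automation the posting describes.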
Posted 2 weeks ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing!! Role: Azure DevOps Experience Required: 5 to 8 yrs Work Location: Hyderabad/Pune/Bangalore Required Skills: Azure DevOps, Terraform, Bash/PowerShell/Python, Prometheus/Grafana. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 2 weeks ago
3.0 - 5.0 years
3 - 8 Lacs
Noida
Work from Office
Role: Azure DevOps Engineer Skillset: Azure DevOps, CI/CD, Kubernetes, Terraform Experience: 4-6 years Location: Noida and Chennai
Requirements: Automation-first mindset and hands-on in primary knowledge domain. Deep understanding of the DevOps framework, tools, and metrics for build, test, and deployment automation – configuration management, scalable infrastructure deployment, and best practices. Able to assess a client's existing maturity in the Cloud and DevOps space, perform gap analysis, and provide a baseline solution. Hands-on with languages like Unix shell scripting or PowerShell. Hands-on experience with CI/CD pipelines and proficient with tools like GitLab, Git, Azure DevOps, etc. Hands-on experience with configuration management tools like Ansible/Puppet. Hands-on experience with containerized deployment and orchestration using AKS (Azure Kubernetes Service). Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation, or ARM templates, etc.
Previous application development (Java/.NET) and system administration experience in Linux. Strong experience with project management methodologies and frameworks (Waterfall, Scrum, Kanban). Able to perform client-facing technical consultancy roles, contribute to the knowledge base, work in a Team Lead role for small-medium sized teams, etc. Experience implementing monitoring and logging solutions for cloud environments, including containerized environments.
Nice to have: Certifications in DevOps and Kubernetes technologies. DevOps principles and source code branching strategies. Scripting languages – Bash, PowerShell, etc. Docker. Native Kubernetes and a managed Kubernetes service (AKS). IaC tools – Terraform, ARM Templates, CloudFormation. Configuration management tools – Ansible/Puppet. CI/CD – GitLab, Azure Pipelines. Linux – Linux commands and directory structure. System administration experience in Linux. Code quality tools – SonarQube. Knowledge of Azure cloud services like Storage, Compute, Networking, and Database.
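One concrete piece of the branching strategies mentioned above is enforcing a naming convention in the pipeline. A hedged sketch (the pattern is a common convention, not one prescribed by this posting):

```python
import re

# Hypothetical convention: <type>/<kebab-case-description>.
BRANCH_RE = re.compile(r"^(feature|bugfix|hotfix|release)/[a-z0-9._-]+$")

def valid_branch(name):
    """True if the branch name follows the assumed convention."""
    return bool(BRANCH_RE.match(name))

ok = valid_branch("feature/login-cache")
bad = valid_branch("my-random-branch")  # no type prefix, so rejected
```

Run as an early pipeline step (e.g., against the built-in branch-name variable in Azure Pipelines or GitLab CI), this fails fast before any build resources are spent.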
Posted 2 weeks ago
4.0 - 9.0 years
10 - 15 Lacs
Pune
Work from Office
DevOps Engineer (Google Cloud Platform) About Us IntelligentDX is a dynamic and innovative company dedicated to changing the Software landscape in the Healthcare industry. We are looking for a talented and experienced DevOps Engineer to join our growing team and help us build and maintain our scalable, reliable, and secure cloud infrastructure on Google Cloud Platform. Job Summary We are seeking a highly skilled DevOps Engineer with 4 years of hands-on experience, specifically with Google Cloud technologies. The ideal candidate will be responsible for designing, implementing, and maintaining our cloud infrastructure, ensuring the scalability, reliability, and security of our microservices-based software services. You will play a crucial role in automating our development and deployment pipelines, managing cloud resources, and supporting our engineering teams in delivering high-quality applications. Responsibilities Design, implement, and manage robust, scalable, and secure cloud infrastructure on Google Cloud Platform (GCP). Implement and enforce best practices for GCP Identity and Access Management (IAM) to ensure secure access control. Deploy, manage, and optimize applications leveraging Google Cloud Run for serverless deployments. Configure and maintain Google Cloud API Gateway for efficient and secure API management. Implement and monitor security measures across our GCP environment, including network security, data encryption, and vulnerability management. Manage and optimize cloud-based databases, primarily Google Cloud SQL, ensuring data integrity, performance, and reliability. Lead the setup and implementation of new applications and services within our GCP environment. Troubleshoot and resolve issues related to Cross-Origin Resource Sharing (CORS) configurations and other API connectivity problems. Provide ongoing API support to development teams, ensuring smooth integration and operation. 
Continuously work on improving the scalability and reliability of our software services, which are built as microservices. Develop and maintain CI/CD pipelines to automate software delivery and infrastructure provisioning. Monitor system performance, identify bottlenecks, and implement solutions to optimize resource utilization. Collaborate closely with development, QA, and product teams to ensure seamless deployment and operation of applications. Participate in on-call rotations to provide timely support for critical production issues. Qualifications Required Skills & Experience Minimum of 4 years of hands-on experience as a DevOps Engineer with a strong focus on Google Cloud Platform (GCP). Proven expertise in GCP services, including: GCP IAM: Strong understanding of roles, permissions, service accounts, and best practices. Cloud Run: Experience deploying and managing containerized applications. API Gateway: Experience in setting up and managing APIs. Security: Solid understanding of cloud security principles, network security (VPC, firewall rules), and data protection. Cloud SQL: Hands-on experience with database setup, management, and optimization. Demonstrated experience with the setup and implementation of cloud-native applications. Familiarity with addressing and resolving CORS issues. Experience providing API support and ensuring API reliability. Deep understanding of microservices architecture and best practices for their deployment and management. Strong commitment to building scalable and reliable software services. Proficiency in scripting languages (e.g., Python, Bash) and automation tools. Experience with Infrastructure as Code (IaC) tools (e.g., Terraform, Cloud Deployment Manager). Familiarity with containerization technologies (e.g., Docker, Kubernetes). Excellent problem-solving skills and a proactive approach to identifying and resolving issues. Strong communication and collaboration abilities. 
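The CORS troubleshooting this role calls for usually reduces to inspecting the response headers a browser would check. A hedged sketch using a plain dict as a stand-in for a real preflight response (the header name is the standard CORS one; the origins are hypothetical):

```python
def cors_allows(response_headers, origin):
    """True if the response's Access-Control-Allow-Origin permits `origin`."""
    allowed = response_headers.get("Access-Control-Allow-Origin", "")
    return allowed == "*" or allowed == origin

# Simulated response from a preflight (OPTIONS) request.
resp = {"Access-Control-Allow-Origin": "https://app.example.com"}
same = cors_allows(resp, "https://app.example.com")
other = cors_allows(resp, "https://evil.example.net")
```

A fuller diagnostic would also compare `Access-Control-Allow-Methods` and `Access-Control-Allow-Headers` against the intended request, which is where API Gateway misconfigurations commonly surface.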
Preferred Qualifications GCP certification (e.g., Professional Cloud DevOps Engineer, Professional Cloud Architect). Experience with monitoring and logging tools (e.g., Cloud Monitoring, Cloud Logging, Prometheus, Grafana). Knowledge of other cloud platforms (AWS, Azure) is a plus. Experience with Git and CI/CD platforms (e.g., GitLab CI, Jenkins, Cloud Build). What We Offer Health insurance, paid time off, and professional development opportunities. Fun working environment Flattened hierarchy, where everyone has a say Free snacks, games, and happy hour outings If you are a passionate DevOps Engineer with a proven track record of building and managing robust systems on Google Cloud Platform, we encourage you to apply!
Posted 2 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Pune, Bengaluru, Hinjewadi
Work from Office
Software Required Skills: Deep experience with Murex (version 3.1 or higher) in a production environment, focusing on the Reports and Datamart modules. Strong SQL proficiency for data querying, issue analysis, and troubleshooting. Shell scripting (Bash/sh) skills supporting issue investigation and automation. Use of incident management tools such as ServiceNow or JIRA for tracking and reporting issues. Familiarity with report development and data analysis in financial contexts.
Preferred Skills: Experience with other reporting tools or frameworks, such as Tableau, Power BI, or QlikView. Knowledge of data warehousing concepts and architecture. Basic scripting knowledge in other languages (Python, Perl) for automation.
Overall Responsibilities: Lead and oversee the support activities for Murex Datamart and reporting modules, ensuring operational stability and accuracy. Provide L2/L3 technical support for report-related incidents, resolve complex issues, and perform root cause analysis. Monitor report generation, data extraction, and reconciliation processes, ensuring timely delivery. Collaborate with business stakeholders to address reporting queries, anomalies, and data discrepancies. Support and coordinate system upgrades, patches, and configuration changes affecting reporting modules. Maintain comprehensive documentation of system configurations, incident resolutions, and process workflows. Lead problem resolution initiatives, including performance tuning and automation opportunities. Manage support teams during shifts (24x5/24x7), ensuring effective incident escalation and stakeholder communication. Drive continuous improvement initiatives to enhance report accuracy, data quality, and operational efficiency.
Strategic objectives: Maximize report availability, accuracy, and reliability. Reduce incident resolution times and recurring issues. Strengthen reporting processes through automation and data quality enhancements.
Performance outcomes: Minimal unplanned downtime of reporting systems. High stakeholder satisfaction with timely, accurate reporting. Clear documentation and proactive communication with stakeholders.
Technical Skills (By Category)
Reporting & Data Analysis (Essential): Extensive experience supporting Murex Datamart, reports, and related workflows. SQL proficiency for data extraction, troubleshooting, and validation. Understanding of report structures for P&L, MV, Accounting, Risk, etc.
Scripting & Automation (Essential): Shell scripting (Bash/sh) for issue diagnosis and process automation. Experience automating routine report checks and data validations.
Databases & Data Management (Essential): Relational database management, data querying, and reconciliation. Knowledge of data warehousing concepts and architecture.
Support Tools & Incident Management (Essential): Hands-on experience with ServiceNow, JIRA, or similar platforms.
Advanced & Cloud (Preferred): Familiarity with cloud data hosting, deployment, or cloud-based reporting solutions. Experience with other programming languages (Python, Perl) for automation.
Experience: Over 10 years supporting Murex production environments with a focus on Datamart and reporting modules. Proven expertise in resolving complex report issues, data discrepancies, and interface problems. Demonstrated leadership, with experience managing or supporting L2/L3 teams. Support experience in high-pressure environments, including escalations. Industry experience within financial services, especially trading, risk, or accounting, is preferred. Alternative experience pathways: extensive scripting, data support, and operational expertise supporting financial reports may qualify candidates with fewer years but equivalent depth of knowledge.
Day-to-Day Activities: Monitor system dashboards, reports, and logs for anomalies or failures. Troubleshoot report data issues, interface failures, and system errors. Lead incident investigations, perform root cause analysis, and document resolutions. Collaborate with business units to clarify reporting needs and resolve discrepancies. Support deployment, configuration changes, and upgrades affecting the Report and Datamart modules. Automate repetitive tasks, batch jobs, and data validation workflows. Create and maintain documentation, runbooks, and best practices. Conduct shift handovers, incident reviews, and process improvement sessions. Proactively identify improvement opportunities in reporting reliability and performance.
Qualifications: Bachelor's degree in Computer Science, Finance, Data Management, or a related discipline. Strong expertise in SQL, shell scripting, and report troubleshooting. Deep understanding of financial reporting, P&L, MV, Risk, and accounting data flows. Support experience in high-availability, high-pressure settings. Willingness to work shifts, including nights, weekends, or holidays as needed.
Professional Competencies: Strong analytical and problem-solving skills for resolving complex issues. Excellent communication skills for engaging with technical teams, business stakeholders, and vendors. Leadership qualities to support and mentor support teams. Ability to work independently and prioritize effectively under pressure. Adaptability to evolving systems and technological environments. Focus on continuous improvement and operational excellence.
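The Datamart reconciliation work described above often amounts to comparing per-book totals between two report runs and flagging breaks beyond a tolerance. A hedged sketch (book names, amounts, and tolerance are all illustrative, not Murex-specific):

```python
def pnl_breaks(run_a, run_b, tolerance=0.01):
    """Return {book: (a_total, b_total)} where the runs differ beyond tolerance."""
    return {
        book: (run_a.get(book, 0.0), run_b.get(book, 0.0))
        for book in run_a.keys() | run_b.keys()
        if abs(run_a.get(book, 0.0) - run_b.get(book, 0.0)) > tolerance
    }

# Hypothetical end-of-day vs. intraday P&L extracts.
eod = {"RATES_DESK": 125000.50, "FX_DESK": -8300.25}
intraday = {"RATES_DESK": 125000.50, "FX_DESK": -8312.75}
breaks = pnl_breaks(eod, intraday)  # FX_DESK differs by 12.50
```

In a real workflow the two dicts would come from SQL extracts of the two report runs, and any non-empty result would raise a ticket in ServiceNow or JIRA for investigation.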
Posted 2 weeks ago