7.0 - 12.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in DevOps, with at least 2 years in a leadership role.
- Proficiency in CI/CD tools such as Jenkins, GitLab, Azure DevOps, or similar.
- Experience with infrastructure-as-code (IaC) tools such as Terraform, Ansible, or CloudFormation.
- Expertise in cloud platforms (AWS, Azure, GCP).
- Strong knowledge of containerization and orchestration tools (Docker, Kubernetes).
- Familiarity with monitoring tools (Prometheus, Grafana, ELK Stack, etc.).
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Certifications in cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Expert).
- Hands-on experience with microservices architecture and serverless frameworks.
- Knowledge of agile and DevSecOps methodologies.
Posted 1 week ago
6.0 - 11.0 years
7 - 17 Lacs
Hyderabad
Work from Office
In this role, you will:
- Manage, coach, and develop a team or teams of experienced engineers and engineering managers in roles with moderate complexity and risk, responsible for building high-quality capabilities with modern technology.
- Ensure adherence to the Banking Platform Architecture and meet non-functional requirements with each release.
- Partner with, engage, and influence architects and experienced engineers to incorporate Wells Fargo Technology technical strategies, while understanding next-generation domain architecture and enabling application migration paths to target architecture (for example, cloud readiness, application modernization, data strategy).
- Function as the technical representative for the product during cross-team collaborative efforts and planning.
- Identify and recommend opportunities for driving escalated resolution of technology roadblocks, including code, build, and deployment, while also managing the overall software development cycle and security standards.
- Determine appropriate strategy and actions to act as an escalation partner for scrum masters and teams to meet moderate- to high-risk deliverables; help remove impediments, obstacles, and friction while encouraging constant learning, experimentation, and continual improvement.
- Build engineering skills side-by-side in the codebase, conduct peer reviews to evaluate quality and solution alignment to technical direction, and guide design as needed.
- Interpret, develop, and ensure security, stability, and scalability within functions of technology with moderate complexity, and identify, manage, and mitigate technology and enterprise risk.
- Collaborate with, partner with, and influence Product Managers/Product Owners to drive user satisfaction, influence technology requirements and priorities in the product roadmap, promote innovative and intelligent solutions, generate corporate value, and articulate technical strategy, while being a solid advocate of agile and DevOps practices.
- Interact directly with third-party vendors and technology service providers.
- Manage allocation of people and financial resources to ensure commitments are met and align with strategic objectives in technology engineering.
- Hire, build, and guide a culture of talent development so teams have the skills required to effectively design and deliver innovative solutions for product areas and products that meet business objectives and strategy, and conduct performance management for engineers and managers.

Required Qualifications:
- 6+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
- 3+ years of management or leadership experience.

Desired Qualifications:
- Strong people management experience; proven ability and experience directly managing a diverse set of technology delivery resources with a formal line of accountability (at least 30+ team members).
- Ability to conduct research into emerging technologies, trends, standards, and products as required.
- Ability to present ideas in user-friendly language.
- Able to prioritize and execute tasks in a high-pressure environment.
- Experience working in a team-oriented, collaborative environment.
- Ability to provide consultation on the use of re-engineering techniques to improve process performance for greater efficiencies.
- Ability to work in a fast-paced environment.
- Experienced in strategic process design for enterprise-scale development organizations, driving speed, stability, and quality of delivery.
- Experience working in a matrix structure across both global and regional stakeholders in an enterprise-scale setup.
- Container technologies such as Docker, Kubernetes, OpenStack.
- Cloud monitoring tools such as Splunk, AppDynamics, Dynatrace, Prometheus, Grafana, Elastic, ThousandEyes, etc.
- Web development technologies and frameworks: Core Java, Java Enterprise Edition (JSP, RESTful web services), Spring MVC, Spring Boot, Spring Cloud (Config bus, Zipkin), Hibernate, Maven, MQ, JUnit, AngularJS, React JS, jQuery, HTML, XML, CSS, Oracle, JNDI, JAAS.
- Good understanding and hands-on exposure with MongoDB, Kafka, Redis.

Job Expectations:
- Experience in application development, with at least 5+ years in senior roles participating in and driving transformation for a global organization.
- 5+ years implementing SRE concepts and leading teams toward SRE maturity and reduced toil.
- 5+ years in the observability domain, with hands-on knowledge of metrics, traces, logs, and events.
- 5+ years in application performance engineering.
- 3+ years of experience with Docker and data streaming tools.
- Experienced and well versed in Agile and CI/CD tools and practices; good exposure to tools such as JIRA, Jenkins, GitHub, Artifactory, Sonar, etc.
- Experience designing and implementing APIs, including a deep understanding of REST, SOAP, HTTP, etc.; API lifecycle exposure designing APIs (OpenAPI/Swagger), developer platforms, and other API gateway capabilities.
- Strong technical background with experience managing engineering/development teams across geographies.
- Strong knowledge and implementation experience of high-resiliency/high-availability applications, including cloud migration and vertical and horizontal scaling, will be an advantage.
- Experienced and well versed in Agile and Waterfall project management practices; good exposure to tools such as JIRA and Confluence to drive project releases from inception to deployment.
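This listing asks for SRE maturity and hands-on observability work. As a hypothetical illustration of the arithmetic behind one core SRE concept, the error budget (this sketch is not part of the posting; the numbers are invented):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent for an availability SLO.

    slo: target availability, e.g. 0.999 for "three nines".
    The budget is the number of failures the SLO permits in the window.
    """
    budget = (1.0 - slo) * total_requests  # allowed failures in this window
    if budget == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / budget)

# Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leaves 75% of the budget unspent.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 2))  # 0.75
```

Teams leading an SRE practice typically gate risky releases on a number like this: once the remaining budget approaches zero, feature rollouts pause in favor of reliability work.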
Posted 1 week ago
4.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Android Middleware/Framework:
- Proficiency in problem solving and troubleshooting technical issues.
- Willingness to take ownership and strive for the best solutions.
- Experience using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace.
- Strong understanding of Android architecture, memory management, and threading.
- Strong understanding of Android HALs, Car Framework, the Android graphics pipeline, DRM, and codecs.
- Good knowledge of hardware abstraction layers in Android and/or Linux.
- Good understanding of Git and CI/CD workflows; experience in agile-based projects.
- Experience with Linux as a development platform and target.
- Extensive experience with Jenkins and GitLab CI systems; hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch.
- Experience with different testing frameworks and their implementation in a CI system.
- Programming using C/C++ and Java/Kotlin on Linux; Yocto and its use in CI environments.
- Familiarity with ASPICE.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
Posted 1 week ago
6.0 - 10.0 years
6 - 11 Lacs
Mumbai
Work from Office
Primary Skills:
- Google Cloud Platform (GCP): Expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub). Strong experience with Cloud Build for CI/CD, automating deployments and managing artifacts efficiently.
- Terraform: Skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources. Proficient in modules for reusable infrastructure, state management (remote state, locking), and provider configuration. Experience in CI/CD integration with Terraform Cloud and automation pipelines.
- YAML: Proficient in writing Kubernetes manifests for deployments, services, and configurations. Experience with Cloud Build pipelines, automating builds and deployments. Strong understanding of configuration management using YAML in GitOps workflows.
- PowerShell: Expert in scripting for automation, managing GCP resources, and interacting with APIs. Skilled in cloud resource management, automating deployments, and optimizing cloud operations.

Secondary Skills:
- CI/CD Pipelines: GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build
- Kubernetes (K8s): Helm, Ingress, RBAC, cluster administration
- Monitoring & Logging: Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana
- Security & IAM: GCP IAM policies, service accounts, Workload Identity
- Networking: VPC, firewall rules, load balancers, Cloud DNS
- Linux & Shell Scripting: Bash scripting, system administration
- Version Control: Git, GitHub, GitLab, Bitbucket
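As a hypothetical sketch of the Terraform-on-GCP work this listing describes (the bucket name and helper are invented for illustration): Terraform accepts JSON-syntax configuration alongside HCL, so resource definitions can be generated programmatically and written to a `.tf.json` file:

```python
import json

def gcs_bucket_tf(name: str, location: str = "ASIA-SOUTH1") -> dict:
    """Build a Terraform JSON-syntax document declaring one GCS bucket.

    Equivalent to a `resource "google_storage_bucket"` block in HCL.
    """
    return {
        "resource": {
            "google_storage_bucket": {
                name: {
                    "name": name,
                    "location": location,
                    "uniform_bucket_level_access": True,
                }
            }
        }
    }

doc = gcs_bucket_tf("example-artifacts")  # hypothetical bucket name
print(json.dumps(doc, indent=2))
# Written out as main.tf.json, `terraform plan` would pick this up
# exactly as it would an HCL file.
```

Generating JSON like this is one common pattern for the "reusable infrastructure" and automation-pipeline integration the listing mentions, since the documents are easy to template and diff in CI.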
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Mumbai, Pune, Bengaluru
Work from Office
We are seeking a PostgreSQL Database Administrator with 4+ years of hands-on experience to manage, maintain, and optimize our PostgreSQL database environments. The ideal candidate will be responsible for ensuring high availability, performance, and security of our databases while supporting development and operations teams.

Responsibilities:
- Install, configure, and upgrade PostgreSQL database systems.
- Monitor database performance and implement tuning strategies.
- Perform regular database maintenance tasks such as backups, restores, and indexing.
- Ensure database security, integrity, and compliance with internal and external standards.
- Automate routine tasks using scripting (e.g., Bash, Python).
- Troubleshoot and resolve database-related issues in a timely manner.
- Collaborate with development teams to optimize queries and database design.
- Implement and maintain high availability and disaster recovery solutions.
- Maintain documentation related to database configurations, procedures, and policies.
- Participate in an on-call rotation and provide support during off-hours as needed.

Primary skills:
- 4+ years of experience as a PostgreSQL DBA in production environments.
- Strong knowledge of PostgreSQL architecture, replication, and performance tuning.

Secondary skills:
- Proficiency in writing complex SQL queries and PL/pgSQL procedures.
- Familiarity with Linux/Unix systems and shell scripting.
- Experience with monitoring tools like Prometheus, Grafana, or Nagios.
- Understanding of database security best practices and access control.
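Performance monitoring of the kind this role describes often starts from PostgreSQL's `pg_stat_database` counters. A small hypothetical helper (the counter values below are illustrative, not from any real system) computing the buffer cache hit ratio a DBA would typically watch:

```python
def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Buffer cache hit ratio from pg_stat_database counters.

    blks_hit: blocks found in shared buffers; blks_read: blocks read from disk.
    A healthy OLTP database usually stays above roughly 0.99.
    """
    total = blks_hit + blks_read
    if total == 0:
        return 1.0  # no traffic yet, so nothing has missed the cache
    return blks_hit / total

# Illustrative counters, as a DBA might fetch with:
#   SELECT blks_hit, blks_read FROM pg_stat_database
#   WHERE datname = current_database();
print(round(cache_hit_ratio(990_000, 10_000), 3))  # 0.99
```

A ratio trending downward is a common cue to revisit `shared_buffers` sizing or to look for queries doing large sequential scans.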
Posted 1 week ago
10.0 - 14.0 years
13 - 18 Lacs
Pune
Work from Office
Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
- Design and manage CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps)
- Automate infrastructure with Terraform, Ansible, or CloudFormation
- Implement Docker and Kubernetes for containerization and orchestration
- Monitor systems using Prometheus, Grafana, and ELK
- Collaborate with dev teams to embed DevOps best practices
- Ensure security and compliance, and support production issues

Your Profile
- 6-14 years in DevOps or related roles
- Strong CI/CD and infrastructure automation experience
- Proficient in Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP)
- Skilled in monitoring tools and problem-solving
- Excellent team collaboration

What you'll love about working with us
- Flexible work options: remote and hybrid
- Competitive salary and benefits package
- Career growth with SAP and cloud certifications
- Inclusive and collaborative work environment
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Pune
Hybrid
Job Description: EMS and Observability Consultant
Location: Bangalore

Job Summary: We are seeking a skilled IT Operations Consultant specializing in Monitoring and Observability to design, implement, and optimize monitoring solutions for our customers. The ideal candidate will have a minimum of 7 years of relevant experience, with a strong background in monitoring, observability, and IT service management, and will be responsible for ensuring system reliability, performance, and availability by creating robust observability architectures and leveraging modern monitoring tools.

Qualification/Experience needed:
• Minimum 7 years of working experience in Cyber Security Consulting or Advisory.

Primary Responsibilities:
• Design end-to-end monitoring and observability solutions to provide comprehensive visibility into infrastructure, applications, and networks.
• Implement monitoring tools and frameworks (e.g., Prometheus, Grafana, OpsRamp, Dynatrace, New Relic) to track key performance indicators and system health metrics.
• Integrate monitoring and observability solutions with IT service management tools.
• Develop and deploy dashboards, alerts, and reports to proactively identify and address system performance issues.
• Architect scalable observability solutions to support hybrid and multi-cloud environments.
• Collaborate with infrastructure, development, and DevOps teams to ensure seamless integration of monitoring systems into CI/CD pipelines.
• Continuously optimize monitoring configurations and thresholds to minimize noise and improve incident detection accuracy.
• Automate alerting, remediation, and reporting processes to enhance operational efficiency.
• Utilize AIOps and machine learning capabilities for intelligent incident management and predictive analytics.
• Work closely with business stakeholders to define monitoring requirements and success metrics.
• Document monitoring architectures, configurations, and operational procedures.
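The threshold-tuning responsibility above (minimizing noise while improving detection accuracy) can be sketched with a toy example — purely hypothetical and not part of the posting — of a rolling z-score gate that only fires on statistically unusual samples:

```python
import statistics

def should_alert(history: list[float], value: float, z_threshold: float = 3.0) -> bool:
    """Fire only when `value` deviates from recent history by more than
    `z_threshold` standard deviations -- a simple alert-noise filter.
    """
    if len(history) < 2:
        return False  # not enough data to judge normal variation
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

latencies = [102.0, 98.0, 101.0, 99.0, 100.0]  # illustrative p99 latency samples (ms)
print(should_alert(latencies, 100.5))  # False: within normal variation
print(should_alert(latencies, 250.0))  # True: clear outlier
```

Static thresholds tend to page on benign daily variation; a relative gate like this is one simple way to cut that noise before reaching for the AIOps-style anomaly detection the listing also mentions.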
Required Skills:
• Strong understanding of infrastructure and platform development principles, with experience in languages/tools such as Python and Ansible for developing custom scripts.
• Strong knowledge of monitoring frameworks, logging systems (ELK Stack, Fluentd), and tracing tools (Jaeger, Zipkin), along with open-source solutions like Prometheus and Grafana.
• Extensive experience with monitoring and observability solutions such as OpsRamp, Dynatrace, and New Relic; must have worked with ITSM integration (e.g., integration with ServiceNow, BMC Remedy, etc.).
• Working experience with RESTful APIs and an understanding of API integration with monitoring tools.
• Familiarity with AIOps and machine learning techniques for anomaly detection and incident prediction.
• Knowledge of ITIL processes and service management frameworks.
• Familiarity with security monitoring and compliance requirements.
• Excellent analytical and problem-solving skills; ability to debug and troubleshoot complex automation issues.

About Mphasis: Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide a hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps 'shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, and speed and innovation with domain expertise and specialization, are key to building strong relationships with marquee clients.
Skills:
- Primary competency: Tools; primary skill: Dynatrace (51%)
- Secondary competency: Tools; secondary skill: New Relic (25%)
- Tertiary competency: Tools; tertiary skill: Automation tools (Chef/Puppet/Ansible/SaltStack) (24%)
Posted 1 week ago
4.0 - 6.0 years
4 - 7 Lacs
Gurugram
Work from Office
GreensTurn is seeking a highly skilled DevOps Engineer to manage and optimize our cloud infrastructure, automate deployment pipelines, and enhance the security and performance of our web-based platform. The ideal candidate will be responsible for ensuring high availability, scalability, and security of the system while working closely with developers, security teams, and product managers.

Key Responsibilities:
- Cloud Infrastructure Management: Deploy, configure, and manage cloud services on AWS or Azure for scalable, cost-efficient infrastructure.
- CI/CD Implementation: Develop and maintain CI/CD pipelines for automated deployments using GitHub Actions, Jenkins, or GitLab CI/CD.
- Containerization & Orchestration: Deploy and manage applications using Docker, Kubernetes (EKS/AKS), and Helm.
- Monitoring & Performance Optimization: Implement real-time monitoring, logging, and alerting using Prometheus, Grafana, CloudWatch, or ELK Stack.
- Security & Compliance: Ensure best practices for IAM (Identity & Access Management), role-based access control (RBAC), encryption, firewalls, and vulnerability management.
- Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, AWS CloudFormation, or Azure Bicep.
- Networking & Load Balancing: Set up VPCs, security groups, load balancers (ALB/NLB), and CDNs (CloudFront/Azure CDN).
- Disaster Recovery & Backup: Implement automated backups, failover strategies, and disaster recovery plans.
- Database Management: Optimize database performance, backup policies, and replication for MongoDB.
- Collaboration & Documentation: Work with development teams to integrate DevOps best practices and maintain proper documentation for infrastructure and deployment workflows.
Posted 1 week ago
7.0 - 11.0 years
9 - 12 Lacs
Mumbai, Bengaluru, Delhi
Work from Office
Experience: 7.00+ years
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Must-have skills: DevOps, PowerShell, CLI, Amazon AWS, Java, Scala, Go (Golang), Terraform

Opportunity Summary: We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer in the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems, an excellent understanding of key SaaS technologies, and a high degree of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's products and influence decisions concerning solutions and techniques within their discipline.

What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.

What are we looking for?
Experience: Overall, 7-9 years of total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.
Technical Skills: To be a part of this journey, you should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.

Primary Skills:
- Public cloud providers, AWS: solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web servers, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, and Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route 53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, and CloudWatch monitoring/alerting.
- Automating existing manual workloads such as reporting and patching/updating servers by writing scripts, Lambda functions, etc.
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools such as Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.

Secondary Skills: It would be advantageous if the candidate also has the following:
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.

Soft Skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
Experience with modernizing legacy applications and improving deployment processes. Excellent problem-solving skills and the ability to work under remote supervision. Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.
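The Upland listing above mentions automating manual workloads such as patching servers via scripts or Lambda functions. A hypothetical sketch of the selection step such a job might perform (the tag name `last_patched` and instance shapes are invented for illustration; a real script would query the cloud API):

```python
from datetime import date, timedelta

def instances_due_for_patching(instances: list[dict],
                               today: date,
                               max_age_days: int = 30) -> list[str]:
    """Return IDs of instances whose `last_patched` tag is missing or older
    than `max_age_days` -- the selection step of a patching automation job.
    """
    cutoff = today - timedelta(days=max_age_days)
    due = []
    for inst in instances:
        last = inst.get("last_patched")
        if last is None or date.fromisoformat(last) < cutoff:
            due.append(inst["id"])
    return due

fleet = [
    {"id": "i-aaa", "last_patched": "2024-05-01"},
    {"id": "i-bbb", "last_patched": "2024-06-20"},
    {"id": "i-ccc"},  # never patched
]
print(instances_due_for_patching(fleet, date(2024, 7, 1)))  # ['i-aaa', 'i-ccc']
```

Keeping the selection logic pure like this makes it easy to unit-test before wiring it to a scheduler or a Lambda trigger.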
Posted 1 week ago
4.0 - 8.0 years
10 - 12 Lacs
Pune
Work from Office
We are seeking a skilled and motivated DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in CI/CD pipelines, cloud infrastructure, containerization, and automation, along with basic programming knowledge.
Posted 1 week ago
5.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Remote
- Cloud Support Operations - SaaS and AWS (Storage, Databases, IAM, ECS, EKS, and CloudWatch) - Cloud Observability and Monitoring (Datadog, Splunk, Grafana, and Prometheus) - Infrastructure Management - Kubernetes and Containerization
Posted 1 week ago
6.0 - 11.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 11+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement disaster recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2-4 pm.
Posted 1 week ago
8.0 - 12.0 years
8 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
**Job Title:** Confluent Kafka Engineer (Azure & GCP Focus) **Location:** [Bangalore or Hyderabad] **Role Overview** We are seeking an experienced **Confluent Kafka Engineer** with hands-on expertise in deploying, administering, and securing Kafka clusters in **Microsoft Azure** and **Google Cloud Platform (GCP)** environments. The ideal candidate will be skilled in cluster administration, RBAC, cluster linking and setup, and monitoring using Prometheus and Grafana, with a strong understanding of cloud-native best practices. **Key Responsibilities** - **Kafka Cluster Administration (Azure & GCP):** - Deploy, configure, and manage Confluent Kafka clusters on Azure and GCP virtual machines or managed infrastructure. - Plan and execute cluster upgrades, scaling, and disaster recovery strategies in cloud environments. - Set up and manage cluster linking for cross-region and cross-cloud data replication. - Monitor and maintain the health and performance of Kafka clusters, proactively identifying and resolving issues. - **Security & RBAC:** - Implement and maintain security protocols, including SSL/TLS encryption and role-based access control (RBAC). - Configure authentication and authorization (Kafka ACLs) across Azure and GCP environments. - Set up and manage **Active Directory (AD) plain authentication** and **OAuth** for secure user and application access. - Ensure compliance with enterprise security standards and cloud provider best practices. - **Monitoring & Observability:** - Set up and maintain monitoring and alerting using Prometheus and Grafana, integrating with Azure Monitor and GCP-native monitoring as needed. - Develop and maintain dashboards and alerts for Kafka performance and reliability metrics. - Troubleshoot and resolve performance and reliability issues using cloud-native and open-source monitoring tools.
- **Integration & Automation:** - Develop and maintain automation scripts (Bash, Python, Terraform, Ansible) for cluster deployment, scaling, and monitoring. - Build and maintain infrastructure as code for Kafka environments in Azure and GCP. - Configure and manage **Kafka connectors** for integration with external systems, including **BigQuery Sync connectors** and connectors for Azure and GCP data services (such as Azure Data Lake, Cosmos DB, BigQuery). - **Documentation & Knowledge Sharing:** - Document standard operating procedures, architecture, and security configurations for cloud-based Kafka deployments. - Provide technical guidance and conduct knowledge transfer sessions for internal teams. **Required Qualifications** - Bachelor's degree in Computer Science, Engineering, or a related field. - 5+ years of hands-on experience with Confluent Platform and Kafka in enterprise environments. - Demonstrated experience deploying and managing Kafka clusters on **Azure** and **GCP** (not just using pre-existing clusters). - Strong expertise in cloud networking, security, and RBAC in Azure and GCP. - Experience configuring **AD plain authentication** and **OAuth** for Kafka. - Proficiency with monitoring tools (Prometheus, Grafana, Azure Monitor, GCP Monitoring). - Hands-on experience with Kafka connectors, including BQ Sync connectors, Schema Registry, KSQL, and Kafka Streams. - Scripting and automation skills (Bash, Python, Terraform, Ansible). - Familiarity with infrastructure-as-code practices. - Excellent troubleshooting and communication skills. **Preferred Qualifications** - Confluent Certified Developer/Admin certification. - Experience with cross-cloud Kafka streaming and integration scenarios. - Familiarity with Azure and GCP data services (Azure Data Lake, Cosmos DB, BigQuery). - Experience with other streaming technologies (e.g., Spark Streaming, Flink). - Experience with data visualization and analytics tools.
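As a toy illustration of one Kafka concept this role touches — spreading topic partitions across a consumer group — here is a simplified round-robin sketch (illustrative only; it is not Confluent's actual partition assignor, which handles rebalancing, stickiness, and multiple topics):

```python
def assign_partitions(partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Spread a topic's partitions across a consumer group round-robin,
    so load differs by at most one partition per consumer.
    """
    assignment: dict[str, list[int]] = {c: [] for c in consumers}
    for p in range(partitions):
        owner = consumers[p % len(consumers)]
        assignment[owner].append(p)
    return assignment

# 6 partitions over 2 consumers: each consumer owns 3 partitions.
print(assign_partitions(6, ["c1", "c2"]))  # {'c1': [0, 2, 4], 'c2': [1, 3, 5]}
```

The balance property is also why partition count bounds consumer-group parallelism: with more consumers than partitions, the extras sit idle — a sizing consideration for the cluster administration work described above.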
Posted 1 week ago
4.0 - 6.0 years
10 - 20 Lacs
Pune
Work from Office
Role Overview We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications. Key Responsibilities Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP. Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline. Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker. Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure. Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST. Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk. Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools. Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks. Collaborate with development, security, and operations teams to streamline the DevSecOps process. Must-Have Skills 3+ years of experience in DevOps, cloud infrastructure, or platform engineering. Expertise in at least one major cloud provider: AWS, Azure, or GCP. Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies. Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools. 
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.
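For a sense of the scripting-for-automation skill this listing asks for, here is a minimal, hypothetical sketch: a Python check that a CI/CD pipeline declares its stages in a sane order before anything runs. The stage names and ordering rule are invented for illustration, not taken from any specific CI tool.

```python
# Hypothetical sketch: validate that a CI/CD pipeline declares required
# stages in order (extra stages allowed). Stage names are illustrative.
REQUIRED_ORDER = ["build", "test", "scan", "deploy"]

def validate_pipeline(stages):
    """Return True if all required stages appear, in the required order."""
    positions = []
    for name in REQUIRED_ORDER:
        if name not in stages:
            return False
        positions.append(stages.index(name))
    return positions == sorted(positions)

print(validate_pipeline(["build", "test", "scan", "deploy"]))          # True
print(validate_pipeline(["build", "deploy", "test", "scan"]))          # False
print(validate_pipeline(["lint", "build", "test", "scan", "deploy"]))  # True
```

A real pipeline linter would parse the tool's own config format (GitHub Actions or GitLab CI YAML, for example); the ordering check itself stays this simple.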
Posted 1 week ago
2.0 - 4.0 years
6 - 7 Lacs
Mumbai Suburban
Work from Office
We are the PERFECT match if you...
Are a graduate with 2-4 years of technical product support experience and the following skills:
- Clear, logical thinking and good communication skills. We believe in individuals who are high on ownership and like to operate with minimal management.
- An ability to "understand" data and analyze logs to help investigate production issues and incidents.
- Hands-on experience with cloud platforms (GCP/AWS).
- Experience creating dashboards and alerts with tools like Metabase, Grafana, and Prometheus.
- Hands-on experience writing SQL queries.
- Hands-on experience with log monitoring tools (Kibana, Stackdriver, CloudWatch).
- Knowledge of a scripting language like Elixir or Python is a plus.
- Experience with Kubernetes/Docker is a plus.
- Have actively worked on documenting RCAs and creating incident reports.
- Good understanding of APIs, with hands-on experience using tools like Postman or Insomnia.
- Knowledge of ticketing tools such as Freshdesk or GitLab.

Here's what your day would look like...
- Defining monitoring events for IDfy's services and setting up the corresponding alerts.
- Responding to alerts by triaging, investigating, and resolving issues.
- Learning about various IDfy applications and understanding the events they emit.
- Creating analytical dashboards for service performance and usage monitoring.
- Responding to incidents and customer tickets in a timely manner.
- Occasionally running service recovery scripts.
- Helping improve the IDfy Platform by providing insights based on investigations and root cause analysis.

Get in touch with ankit.pant@idfy.com
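The log-analysis part of this role often reduces to exactly this kind of computation: deriving per-service error rates from raw log lines before deciding what to escalate. A minimal sketch, with an invented "service status-code" log format purely for illustration:

```python
# Hypothetical sketch: count 5xx responses per service from structured
# log lines of the form "service status". The format is invented.
from collections import Counter

def error_rates(log_lines):
    """Return {service: fraction_of_requests_that_were_5xx}."""
    totals, errors = Counter(), Counter()
    for line in log_lines:
        service, status = line.split()
        totals[service] += 1
        if int(status) >= 500:
            errors[service] += 1
    return {s: errors[s] / totals[s] for s in totals}

logs = ["auth 200", "auth 500", "kyc 200", "kyc 200", "auth 200", "kyc 503"]
print(error_rates(logs))  # one error out of three requests for each service
```

In practice the same aggregation would be expressed as a Kibana/CloudWatch query rather than hand-rolled Python, but the underlying arithmetic is identical.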
Posted 1 week ago
5.0 - 9.0 years
16 - 20 Lacs
Pune
Work from Office
Job Summary
Synechron is seeking an experienced Site Reliability Engineer (SRE) / DevOps Engineer to lead the design, implementation, and management of reliable, scalable, and efficient infrastructure solutions. This role is pivotal in ensuring optimal performance, availability, and security of our applications and services through advanced automation, continuous deployment, and proactive monitoring. The ideal candidate will collaborate closely with development, operations, and security teams to foster a culture of continuous improvement and technological innovation.

Software
Required Skills:
- Proficiency with cloud platforms such as AWS, GCP, or Azure
- Expertise with container orchestration tools like Kubernetes and Docker
- Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation
- Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or similar
- Strong scripting skills in Python, Bash, or similar languages
Preferred Skills:
- Familiarity with monitoring and logging tools like Prometheus, Grafana, and the ELK stack
- Knowledge of configuration management tools such as Ansible, Chef, or Puppet
- Experience implementing security best practices in cloud environments
- Understanding of microservices architecture and service mesh frameworks like Istio or Linkerd

Overall Responsibilities
- Lead the development, deployment, and maintenance of scalable, resilient infrastructure solutions.
- Automate routine tasks and processes to improve efficiency and reduce manual intervention.
- Implement and refine monitoring, alerting, and incident response strategies to maintain high system availability.
- Collaborate with software development teams to integrate DevOps best practices into product development cycles.
- Guide and mentor team members on emerging technologies and industry best practices.
- Ensure compliance with security standards and manage risk through security controls and assessments.
- Stay abreast of the latest advancements in SRE, cloud computing, and automation technologies to recommend innovative solutions aligned with organizational goals.

Technical Skills (By Category)
Cloud Technologies:
- Essential: AWS, GCP, or Azure (both infrastructure management and deployment)
- Preferred: Multi-cloud management, cloud cost optimization
Containers and Orchestration:
- Essential: Docker, Kubernetes
- Preferred: Service mesh frameworks like Istio, Linkerd
Automation & Infrastructure as Code:
- Essential: Terraform, CloudFormation, or similar
- Preferred: Ansible, SaltStack
Monitoring & Logging:
- Essential: Prometheus, Grafana, ELK Stack
- Preferred: Datadog, New Relic, Splunk
Security & Compliance:
- Knowledge of identity and access management (IAM), encryption, and vulnerability management
Development & Scripting:
- Essential: Python, Bash scripting
- Preferred: Go, PowerShell

Experience
- 5-9 years of experience in software engineering, systems administration, or DevOps/SRE roles.
- Proven track record in designing and deploying large-scale, high-availability systems.
- Hands-on experience with cloud infrastructure automation and container orchestration.
- Past roles leading incident management, performance tuning, and security enhancements.
- Experience working with cross-functional teams using Agile methodologies.
- Bonus: Experience with emerging technologies like Blockchain, IoT, or AI integrations.

Day-to-Day Activities
- Architect, deploy, and maintain cloud infrastructure and containerized environments.
- Develop automation scripts and frameworks to streamline deployment and operations.
- Monitor system health, analyze logs, and troubleshoot issues proactively.
- Conduct capacity planning and performance tuning.
- Collaborate with development teams to integrate new features into production with zero downtime.
- Participate in incident response, post-mortem analysis, and continuous improvement initiatives.
- Document procedures, guidelines, and best practices for the team.
- Stay updated on evolving SRE technologies and industry trends, applying them to enhance our infrastructure.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Engineer, Google Professional Cloud Engineer) are preferred.
- Additional certifications in Kubernetes, Terraform, or security are advantageous.

Professional Competencies
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Leadership qualities with an ability to mentor junior team members.
- Ability to work under pressure and manage multiple priorities.
- Commitment to best practices around automation, security, and reliability.
- Eagerness to learn emerging technologies and adapt to evolving workflows.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
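A candidate for an SRE role like this one will be expected to reason about availability targets in concrete terms. The arithmetic behind an error budget is worth internalizing; a minimal sketch (the SLO figures below are illustrative, not from this posting):

```python
# Hypothetical sketch of SRE error-budget math: how many minutes of
# downtime a given availability SLO permits over a window.
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Minutes of allowed downtime in a `days`-long window for an SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

print(round(error_budget_minutes(0.999), 1))   # "three nines" ~ 43.2 min/month
print(round(error_budget_minutes(0.9999), 2))  # "four nines"  ~ 4.32 min/month
```

The budget, once spent, is what gates risky deploys in most SRE practice: the tighter the SLO, the less room for change.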
Posted 1 week ago
10.0 - 15.0 years
15 - 30 Lacs
Thiruvananthapuram
Work from Office
Job Summary:
We are seeking an experienced DevOps Architect to drive the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate should have deep expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across multiple cloud environments, along with strong leadership and mentoring capabilities.

Job Duties and Responsibilities
- Lead and manage the DevOps team to ensure reliable infrastructure and automated deployment processes.
- Design, implement, and maintain highly available, scalable, and secure cloud infrastructure (AWS, Azure, GCP, etc.).
- Develop and optimize CI/CD pipelines for multiple applications and environments.
- Drive Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible.
- Oversee monitoring, logging, and alerting solutions to ensure system health and performance.
- Collaborate with Development, QA, and Security teams to integrate DevOps best practices across the SDLC.
- Lead incident management and root cause analysis for production issues.
- Ensure robust security practices for infrastructure and pipelines (secrets management, vulnerability scanning, etc.).
- Guide and mentor team members, fostering a culture of continuous improvement and technical excellence.
- Evaluate and recommend new tools, technologies, and processes to improve operational efficiency.

Required Qualifications
Education
- Bachelor's degree in Computer Science, IT, or a related field; Master's preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA, Terraform).
Experience:
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.
Knowledge, Skills & Abilities
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficient in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Preferred Skills
- Experience with serverless architecture and microservices deployment.
- Experience with security tools and best practices (e.g., IAM, VPNs, firewalls, cloud security posture management).
- Exposure to hybrid cloud or multi-cloud environments.
- Knowledge of cost optimization and cloud governance strategies.
- Experience working in Agile teams and managing infrastructure in production-grade environments.
- Relevant certifications (AWS Certified DevOps Engineer, Azure DevOps Expert, CKA, etc.).

Working Conditions
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: A light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST as needed.

Living AOT's Values
Our values guide how we work, collaborate, and grow as a team. Every role at AOT is expected to embody and promote these values:
- Innovation: We pursue true innovation by solving problems and meeting unarticulated needs.
- Integrity: We hold ourselves to high ethical standards and never compromise.
- Ownership: We are all responsible for our shared long-term success.
- Agility: We stay ready to adapt to change and deliver results.
- Collaboration: We believe collaboration and knowledge-sharing fuel innovation and success.
- Empowerment: We support our people so they can bring the best of themselves to work every day.
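The "vulnerability scanning in pipelines" responsibility above usually takes the form of a severity gate: the build fails when scan findings exceed an agreed threshold. A minimal, hypothetical sketch (severity levels and the threshold policy are invented for illustration; real gates consume scanner output such as SARIF or a tool's JSON report):

```python
# Hypothetical sketch: fail a pipeline when any scan finding is more
# severe than the allowed maximum. Severity names are illustrative.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, max_allowed="medium"):
    """Return (passed, blocking_findings) for a list of severity strings."""
    limit = SEVERITY_RANK[max_allowed]
    blocking = [f for f in findings if SEVERITY_RANK[f] > limit]
    return (len(blocking) == 0, blocking)

ok, blocked = gate(["low", "medium", "high"], max_allowed="high")
print(ok)            # True: nothing above "high"
ok, blocked = gate(["low", "critical"], max_allowed="medium")
print(ok, blocked)   # False ['critical']
```

Wired into CI, a non-zero exit on a failed gate is what actually blocks the deploy stage.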
Posted 1 week ago
3.0 - 7.0 years
3 - 7 Lacs
Mohali
Work from Office
The Cloud Computing Training Expert will be responsible for delivering high-quality training sessions, developing curriculum, and guiding students toward industry certifications and career opportunities.

Key Responsibilities
1. Training Delivery
- Design, develop, and deliver high-quality cloud computing training through courses, workshops, boot camps, and webinars.
- Cover a broad range of cloud topics, including but not limited to:
  - Cloud Fundamentals (AWS, Azure, Google Cloud)
  - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Serverless Computing
  - Cloud Security, Identity & Access Management (IAM), Compliance
  - DevOps & CI/CD Pipelines (Jenkins, Docker, Kubernetes, Terraform, Ansible)
  - Networking in the Cloud, Virtualization, and Storage Solutions
  - Multi-cloud Strategies & Cost Optimization
2. Curriculum Development
- Develop and continuously update training materials, hands-on labs, and real-world projects.
- Align the curriculum with cloud certification programs (AWS Certified Solutions Architect, Azure Administrator, Google Cloud Professional, etc.).
3. Training Management
- Organize and manage cloud computing training sessions, ensuring smooth delivery and active student engagement.
- Track student progress and provide guidance, feedback, and additional learning resources.
4. Technical Support & Mentorship
- Assist students with technical queries and troubleshooting related to cloud platforms.
- Provide career guidance, helping students pursue cloud certifications and job placements in cloud computing and DevOps roles.
5. Industry Engagement
- Stay updated on emerging cloud technologies, trends, and best practices.
- Represent ASB at cloud computing conferences, industry events, and tech forums.
6. Assessment & Evaluation
- Develop and administer hands-on labs, quizzes, and real-world cloud deployment projects.
- Evaluate learner performance and provide constructive feedback.
Required Qualifications & Skills
> Educational Background
- Bachelor's or Master's degree in Computer Science, Information Technology, Cloud Computing, or a related field.
> Hands-on Cloud Experience
- 3+ years of experience in cloud computing, DevOps, or cloud security roles.
- Strong expertise in AWS, Azure, and Google Cloud, including cloud architecture, storage, and security.
- Experience in Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines.
> Teaching & Communication Skills
- 2+ years of experience in training, mentoring, or delivering cloud computing courses.
- Ability to explain complex cloud concepts in a clear and engaging way.
> Cloud Computing Tools & Platforms
- Experience with AWS services (EC2, S3, Lambda, RDS, IAM, CloudWatch, etc.).
- Hands-on experience with Azure and Google Cloud solutions.
- Familiarity with DevOps tools (Jenkins, GitHub Actions, Kubernetes, Docker, Prometheus, Grafana, etc.).
> Passion for Education
- A strong desire to train and mentor future cloud professionals.

Preferred Qualifications
> Cloud Certifications (AWS, Azure, Google Cloud)
- AWS Certified Solutions Architect, AWS DevOps Engineer, Azure Administrator, Google Cloud Professional Architect, or similar.
> Experience in Online Teaching
- Prior experience delivering online training (Udemy, Coursera, or LMS platforms).
> Knowledge of Multi-Cloud & Cloud Security
- Understanding of multi-cloud strategies, cloud cost optimization, and cloud-native security practices.
> Experience in Hybrid Cloud & Edge Computing
- Familiarity with hybrid cloud deployment, cloud automation, and emerging edge computing trends.
Posted 1 week ago
8.0 - 10.0 years
25 - 30 Lacs
Bengaluru, Indiranagar
Work from Office
Years of Experience: 8 to 10 years
PD1
Any Project-specific Prerequisite Skills: Candidate will work from the customer location, Bangalore (Indiranagar)
No. of Contractors Required: 1

Detailed JD
- Extensive hands-on experience with OpenShift (Azure Red Hat OpenShift): installation, upgrades, administration, and troubleshooting.
- Strong expertise in Kubernetes, containerization (Docker), and cloud-native development.
- Deep knowledge of Terraform for infrastructure automation and ArgoCD for GitOps workflows.
- Experience in CI/CD pipelines, automation, and security integration within a DevSecOps framework.
- Strong understanding of cybersecurity principles, including vulnerability management, policy enforcement, and access control.
- Proficiency in Microsoft Azure and its services related to networking, security, and compute.
- Hands-on experience with monitoring and observability tools (Splunk, Prometheus, Grafana, or similar).
- Agile mindset, preferably with SAFe Agile experience.
- Strong communication skills and ability to work with global teams across time zones.
- Experience with Helm charts and Kubernetes operators.
- Knowledge of Service Mesh (Istio, Linkerd) for OpenShift (Azure Red Hat OpenShift) environments, preferred.
- Hands-on exposure to Terraform Cloud & Enterprise features.
- Prior experience in automotive embedded software environments.
Posted 1 week ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Hybrid
Hi all, we are looking to fill a DevOps Engineer role.
Experience: 3-6 years
Notice period: Immediate to 15 days
Location: Bengaluru

Description:
Job Title: DevOps Engineer (4+ years of experience)
Job Summary
We're looking for a dynamic DevSecOps Engineer to lead the charge in embedding security into our DevOps lifecycle. This role focuses on implementing secure, scalable, and observable cloud-native systems, leveraging Azure, Kubernetes, GitHub Actions, and security tools like Black Duck, SonarQube, and Snyk.

Key Responsibilities
• Architect, deploy, and manage secure Azure infrastructure using Terraform and Infrastructure as Code (IaC) principles
• Build and maintain CI/CD pipelines in GitHub Actions, integrating tools such as Black Duck, SonarQube, and Snyk
• Operate and optimize Azure Kubernetes Service (AKS) for containerized applications
• Configure robust monitoring and observability stacks using Prometheus, Grafana, and Loki
• Implement incident response automation with PagerDuty
• Manage and support MS SQL databases and perform basic operations on Cosmos DB
• Collaborate with development teams to promote security best practices across the SDLC
• Identify vulnerabilities early and respond to emerging security threats proactively

Required Skills
• Deep knowledge of Azure services, AKS, and Terraform
• Strong proficiency with Git, GitHub Actions, and CI/CD workflow design
• Hands-on experience integrating and managing Black Duck, SonarQube, and Snyk
• Proficiency in setting up monitoring stacks: Prometheus, Grafana, and Loki
• Familiarity with PagerDuty for on-call and incident response workflows
• Experience managing MS SQL and understanding Cosmos DB basics
• Strong scripting ability (Python, Bash, or PowerShell)
• Understanding of DevSecOps principles and secure coding practices
• Familiarity with Helm, Bicep, container scanning, and runtime security solutions
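The Prometheus-style alerting this posting mentions hinges on one idea: an alert fires only after a condition holds for a sustained window (the `for` clause), not on a single spike. A minimal sketch of that semantics in plain Python, with invented metric values:

```python
# Hypothetical sketch of a Prometheus-style "for" clause: the alert
# fires only if every sample in the trailing window breaches the
# threshold. Metric values are invented for illustration.
def alert_fires(samples, threshold, for_samples):
    """True if the last `for_samples` values all exceed `threshold`."""
    window = samples[-for_samples:]
    return len(window) == for_samples and all(v > threshold for v in window)

cpu = [0.42, 0.91, 0.95, 0.97]
print(alert_fires(cpu, threshold=0.9, for_samples=3))  # True: sustained breach
print(alert_fires(cpu, threshold=0.9, for_samples=4))  # False: 0.42 breaks it
```

The real rule would be written in PromQL with a duration (e.g. a `for:` of a few minutes); the windowed-AND logic is the same.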
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Skills (Must have):
- 3+ years of DevOps experience.
- Expertise in Kubernetes, Docker, and CI/CD tools (Jenkins, GitLab CI).
- Hands-on experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP).
- Proficiency in scripting (Bash, Python).
- Good troubleshooting, analytical, and communication skills.
- Willingness to explore frontend tech (ReactJS, NodeJS, Angular) is a plus.

Skills (Good to have):
- Experience with Helm charts and service meshes (Istio, Linkerd).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK).
- Experience with security best practices for cloud and container environments.
- Contributions to open-source projects or a strong personal portfolio.

Role & Responsibilities:
- Manage and optimize Kubernetes clusters, including deployments, scaling, and troubleshooting.
- Develop and maintain Docker images and containers, ensuring security best practices.
- Design, implement, and maintain cloud-based infrastructure (AWS, Azure, or GCP) using Infrastructure-as-Code (IaC) principles (e.g., Terraform).
- Monitor and troubleshoot infrastructure and application performance, proactively identifying and resolving issues.
- Contribute to the development and maintenance of internal tools and automation scripts.

Qualification: B.Tech/B.E./M.E./M.Tech in Computer Science or equivalent.

Additional Information: We offer a competitive salary and excellent benefits that are above industry standard. Do check our impressive growth rate and ratings. Please submit your resume in the standard 1-page or 2-page format. Please hear from our employees. Colleagues interested in internal mobility, please contact your HRBP in confidence.
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: AI Observability Tools Engineer
Experience: 5-7 years
Location: Hyderabad (work from office)
Shift: Rotational
Notice Period: 30 days

Key Responsibilities:
- Implement observability tools like Prometheus, Grafana, Datadog, Splunk, LogicMonitor, and ThousandEyes for AI/ML environments.
- Monitor model performance, setting up monitoring thresholds, synthetic test plans, data pipelines, and inference systems.
- Ensure visibility across infrastructure, application, and network layers.
- Collaborate with SRE, DevOps, and Data Science teams to build proactive alerting and RCA systems.
- Drive real-time monitoring and AIOps integration for AI workloads.
- Integrate with ITSM solutions like ServiceNow.

Skills Required
- Experience with tools: Datadog, Prometheus, Grafana, Splunk, OpenTelemetry.
- Solid understanding of networking concepts (TCP/IP, DNS, load balancers).
- Knowledge of AI/ML infrastructure and observability metrics.
- Scripting: Python, Bash, or Go.
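"Setting up monitoring thresholds" for model performance often starts with a statistical baseline rather than a fixed number. A minimal sketch of one common approach (flag a sample more than three standard deviations above the recent baseline); the latency figures are invented for illustration:

```python
# Hypothetical sketch of a baseline-relative observability check for
# AI workloads: flag a model-latency sample as anomalous if it exceeds
# mean + k * stdev of recent samples. Data is invented.
from statistics import mean, stdev

def is_anomalous(baseline, sample, k=3.0):
    """True if `sample` sits more than k sigmas above the baseline mean."""
    return sample > mean(baseline) + k * stdev(baseline)

latencies_ms = [120, 118, 125, 122, 119, 121]
print(is_anomalous(latencies_ms, 124))  # False: within normal variation
print(is_anomalous(latencies_ms, 180))  # True: well above the band
```

Tools like Datadog and Dynatrace ship this kind of anomaly detection built in; the value of the hand calculation is knowing what the alert actually means.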
Posted 1 week ago
7.0 - 12.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 12+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker.
- Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies.
- Use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuration and management of databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 PM
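The backup-automation responsibility above (RDS snapshots, S3 lifecycle rules) boils down to a retention policy: keep the newest N snapshots, expire the rest. A minimal sketch of that selection logic, with an invented ISO-date naming scheme; a real implementation would list snapshots via the AWS APIs rather than strings:

```python
# Hypothetical sketch of snapshot retention: keep the newest `keep_last`
# daily snapshots, return the ones to expire. Dates are illustrative.
def snapshots_to_delete(snapshot_dates, keep_last=7):
    """snapshot_dates: ISO date strings; returns expired ones, oldest first."""
    ordered = sorted(snapshot_dates, reverse=True)  # newest first
    return sorted(ordered[keep_last:])

dates = [f"2024-06-{d:02d}" for d in range(1, 11)]  # ten daily snapshots
print(snapshots_to_delete(dates, keep_last=7))
# the oldest three expire: ['2024-06-01', '2024-06-02', '2024-06-03']
```

S3 lifecycle rules express the same policy declaratively by object age; the explicit version is useful when retention depends on more than age (e.g. keeping month-end snapshots longer).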
Posted 1 week ago
4.0 - 6.0 years
6 - 9 Lacs
Ahmedabad
Work from Office
Role Overview:
As a DevOps Engineer at ChartIQ, you'll play a critical role not only in building, maintaining, and scaling the infrastructure that supports our Development and QA needs, but also in driving new, exciting cloud-based solutions that will add to our offerings. Your work will ensure that the platforms used by our team remain available, responsive, and high-performing. In addition to maintaining the current infrastructure, you will contribute to the development of new cloud-based solutions, helping us expand and enhance our platform's capabilities to meet the growing needs of our financial services customers. You will also contribute light JavaScript programming, assist with QA testing, and troubleshoot production issues. Working in a fast-paced, collaborative environment, you'll wear multiple hats and support the infrastructure for a wide range of development teams.

This position is based in Ahmedabad, India, and will require working overlapping hours with teams in the US. The preferred working hours will be until 12 noon EST to ensure effective collaboration across time zones.

Key Responsibilities:
- Design, implement, and manage infrastructure using Terraform or other Infrastructure-as-Code (IaC) tools.
- Leverage AWS or equivalent cloud platforms to build and maintain scalable, high-performance infrastructure that supports data-heavy applications and JavaScript-based visualizations.
- Understand component-based architecture and cloud-native applications.
- Implement and maintain site reliability practices, including monitoring and alerting using tools like DataDog, ensuring the platform's availability and responsiveness across all environments.
- Design and deploy high-availability architecture to support continuous access to alerting engines.
- Support and maintain Configuration Management systems like ServiceNow CMDB.
- Manage and optimize CI/CD workflows using GitHub Actions or similar automation tools.
- Work with OIDC (OpenID Connect) integrations across Microsoft, AWS, GitHub, and Okta to ensure secure access and authentication.
- Contribute to QA testing (both manual and automated) to ensure high-quality releases and stable operation of our data visualization tools and alerting systems.
- Participate in light JavaScript programming tasks, including HTML and CSS fixes for our charting library.
- Assist with deploying and maintaining mobile applications on the Apple App Store and Google Play Store.
- Troubleshoot and manage network issues, ensuring smooth data flow and secure access to all necessary environments.
- Collaborate with developers and other engineers to troubleshoot and optimize production issues.
- Help with the deployment pipeline, working with various teams to ensure smooth software releases and updates for our library and related services.

Required Qualifications:
- Proficiency with Terraform or other Infrastructure-as-Code tools.
- Experience with AWS or other cloud services (Azure, Google Cloud, etc.).
- Solid understanding of component-based architecture and cloud-native applications.
- Experience with site reliability tools like DataDog for monitoring and alerting.
- Experience designing and deploying high-availability architecture for web-based applications.
- Familiarity with ServiceNow CMDB and other configuration management tools.
- Experience with GitHub Actions or other CI/CD platforms to manage automation pipelines.
- Strong understanding and practical experience with OIDC integrations across platforms like Microsoft, AWS, GitHub, and Okta.
- Solid QA testing experience, including manual and automated testing techniques (beginner/intermediate).
- JavaScript, HTML, and CSS skills to assist with troubleshooting and web app development.
- Experience deploying and maintaining mobile apps on the Apple App Store and Google Play Store that utilize web-based charting libraries.
- Basic network management skills, including troubleshooting and ensuring smooth network operations for data-heavy applications.
- Knowledge of package publishing tools such as Maven, Node, and CocoaPods to ensure seamless dependency management and distribution across platforms.

Additional Skills and Traits for Success in a Startup-Like Environment:
- Ability to wear multiple hats: adapt to the ever-changing needs of a startup environment within a global organization.
- Self-starter with a proactive attitude, able to work independently and manage your time effectively.
- Strong communication skills to work with cross-functional teams, including engineering, QA, and product teams.
- Ability to work in a fast-paced, high-energy environment.
- Familiarity with agile methodologies and working in small teams with a flexible approach to meeting deadlines.
- Basic troubleshooting skills to resolve infrastructure or code-related issues quickly.
- Knowledge of containerization tools such as Docker and Kubernetes is a plus.
- Understanding of DevSecOps and basic security practices is a plus.

Preferred Qualifications:
- Experience with CI/CD pipeline management, automation, and deployment strategies.
- Familiarity with serverless architectures and AWS Lambda.
- Experience with monitoring and logging frameworks such as Prometheus, Grafana, or similar.
- Experience with Git, version control workflows, and source code management.
- Security-focused mindset, with experience in vulnerability scanning and managing secure application environments.
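Package publishing with Maven, Node (npm), and CocoaPods all revolves around semver-style version numbers, and the release step in a pipeline usually automates the bump. A minimal, hypothetical sketch of that bump logic (release-type names are illustrative; real tools like `npm version` do this for you):

```python
# Hypothetical sketch of the semver bump step in a package-publishing
# pipeline: major resets minor+patch, minor resets patch.
def bump(version: str, part: str) -> str:
    major, minor, patch = map(int, version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(bump("1.4.2", "patch"))  # 1.4.3
print(bump("1.4.2", "minor"))  # 1.5.0
print(bump("1.4.2", "major"))  # 2.0.0
```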
Posted 1 week ago
5.0 - 10.0 years
15 - 20 Lacs
Pune
Hybrid
Team: SRE & Operations
Duration: 12 months
Shift: General shift, 9:00 AM - 5:00 PM
Location: Pune
Interviews: 2 rounds
Years of Experience: 5-7 (4 relevant)
Notes: Immediate joiners or those on a 15-day notice period preferred

Top Skills
- Splunk: queries, dashboards, and application creation
- Grafana dashboards, Prometheus, data visualization
- OpenTelemetry, Grafana, Prometheus
- Dynatrace and Datadog are good to have
- Some infrastructure knowledge of servers, storage, and web application infrastructure
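Splunk queries and Grafana dashboard panels alike are mostly time-bucketed aggregations over raw events. A minimal sketch of that core operation in plain Python (epoch-second timestamps are invented for illustration; in Splunk this is a `timechart`, in PromQL a range query):

```python
# Hypothetical sketch of the aggregation behind a dashboard panel:
# bucket raw event timestamps (epoch seconds) into fixed intervals
# and count events per bucket. Timestamps are illustrative.
from collections import Counter

def bucket_counts(timestamps, bucket_seconds=60):
    """Map each bucket start time to the number of events in it."""
    return dict(Counter(t - t % bucket_seconds for t in timestamps))

events = [0, 10, 59, 60, 61, 125]
print(bucket_counts(events))  # {0: 3, 60: 2, 120: 1}
```

Everything a visualization layer then does (rates, percentiles, stacked series) builds on this same group-by-time-bucket step.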
Posted 1 week ago