3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Requirements:
- Experienced as an OpsRamp Developer/Architect; hands-on experience with Prometheus and OpenTelemetry
- Experience with data pipelines and redirecting Prometheus metrics to OpsRamp
- Proficiency in scripting and programming languages such as Python, Ansible, and Bash
- Familiarity with CI/CD deployment pipelines (Ansible, Git)
- Strong knowledge of performance monitoring, metrics, capacity planning, and management
- Excellent communication skills with the ability to articulate technical details to different audiences
- Experience with application onboarding, capturing requirements, understanding data sources, and architecture diagrams
- Works collaboratively with clients and the team, adhering to critical timelines and deliverables

The general scope of the work for this position is as follows:
- Design, implement, and optimize OpsRamp solutions in a multi-tenant model
- Implement and configure OpsRamp components: gateway, discovery, OpsRamp agents, instrumentation via Prometheus, etc.
- Use OpsRamp for infrastructure, network, and application observability, and for OpsRamp event management
- Create and maintain comprehensive documentation for OpsRamp configurations and processes
- Ensure seamless integration between OpsRamp and other element monitoring tools and ITSM platforms
- Develop and maintain advanced dashboards and visualizations

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
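The Prometheus-to-OpsRamp metric redirection this role describes can be sketched in Python. This is a minimal illustration only: the parser handles the plain Prometheus text exposition format, and the OpsRamp payload shape (`resource`, `metricName`, `tags`) is an assumed schema for the sketch, not the documented OpsRamp ingestion API.

```python
# Sketch: parse Prometheus exposition-format samples and reshape them for a
# hypothetical OpsRamp-style ingestion payload. The payload schema below is
# an assumption for illustration; the real OpsRamp API may differ.

def parse_prometheus_line(line: str):
    """Parse one 'metric{label="v"} value' exposition line into a dict."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # skip comments, HELP/TYPE lines, and blanks
    name_part, _, value = line.rpartition(" ")
    if "{" in name_part:
        name, _, labels_raw = name_part.partition("{")
        labels = dict(
            pair.split("=", 1) for pair in labels_raw.rstrip("}").split(",")
        )
        labels = {k: v.strip('"') for k, v in labels.items()}
    else:
        name, labels = name_part, {}
    return {"metric": name, "labels": labels, "value": float(value)}

def to_opsramp_payload(lines, resource_id="demo-host"):
    """Turn scraped exposition lines into the assumed ingestion payload."""
    samples = [s for s in (parse_prometheus_line(l) for l in lines) if s]
    return [
        {
            "resource": resource_id,
            "metricName": s["metric"],
            "tags": s["labels"],
            "value": s["value"],
        }
        for s in samples
    ]

scrape = [
    "# HELP http_requests_total Total requests.",
    'http_requests_total{method="get",code="200"} 1027',
    "node_load1 0.45",
]
payload = to_opsramp_payload(scrape)
```

In a real pipeline the scrape would come from an exporter's `/metrics` endpoint and the payload would be POSTed to the OpsRamp gateway; only the reshaping step is shown here.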
Posted 2 months ago
7.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Job Title: DevOps Lead
Experience: 7-10 Years
Location: Bengaluru

Requirements:
- Overall 7-10 years of experience in IT
- In-depth knowledge of GCP services and resources to design, deploy, and manage cloud infrastructure efficiently; certification is a big plus
- Proficiency in Java, Shell, or Python scripting
- Develop, maintain, and optimize Infrastructure as Code scripts and templates using tools like Terraform and Ansible, ensuring resource automation and consistency
- Strong expertise in Kubernetes using Helm, HAProxy, and containerization technologies
- Manage and fine-tune databases, including Neo4j, MySQL, PostgreSQL, and Redis Cache Clusters, to ensure performance and data integrity
- Skill in managing and optimizing Apache Kafka and RabbitMQ to facilitate efficient data processing and communication
- Design and maintain Virtual Private Cloud (VPC) network architecture for secure and efficient data transmission
- Implement and maintain monitoring tools such as Prometheus, Zipkin, Loki, and Grafana
- Utilize Helm charts and Kubernetes (K8s) manifests for containerized application management
- Proficient with Git, Jenkins, and ArgoCD to set up and enhance CI/CD pipelines
- Utilize Google Artifact Registry and Google Container Registry for artifact and container image management
- Familiarity with CI/CD practices, version control and branching, and DevOps methodologies
- Strong understanding of cloud network design, security, and best practices
- Strong Linux and network debugging skills

Primary Skills:
- Strong Kubernetes (GKE clusters)
- Grafana, Prometheus
- Terraform and Ansible (good working knowledge)
- DevOps

Why Join Us:
- Opportunity to work in a fast-paced and innovative environment
- Collaborative team culture with continuous learning and growth opportunities
Posted 2 months ago
8.0 - 12.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Title: Performance Testing
Experience: 8-12 Years
Location: Bangalore
JMeter (min 5+ yrs)

- 8+ years of strong experience in performance testing; the candidate should be able to code and design performance test scripts
- Able to set up, maintain, and execute performance test scripts from scratch
- Good in JMeter, Azure DevOps, Grafana (good to have)
- Excellent business communication skills
- Experience in performance testing for both web and API
- Experience in Agile methodology and process
- Customer interaction, and able to work independently on daily tasks
- Good in team handling, effort estimation, performance metrics tracking, and task management
- Should be proactive, a solution provider, and good at status reporting
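The performance metrics tracking this role calls for boils down to arithmetic like the following sketch, which summarizes a set of response times the way a JMeter aggregate report does. Toy data and a nearest-rank percentile are used for illustration.

```python
# Sketch: summarize latency samples (ms) into the figures a performance
# tester reports: average, p95, and throughput. Data is illustrative.
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

def summarize(latencies_ms, duration_s):
    """Aggregate one test run into average, p95, and requests/second."""
    return {
        "avg_ms": round(statistics.mean(latencies_ms), 2),
        "p95_ms": percentile(latencies_ms, 95),
        "throughput_rps": round(len(latencies_ms) / duration_s, 2),
    }

# 10 sampled response times from a 5-second run (toy numbers)
latencies = [120, 135, 110, 480, 150, 140, 125, 900, 130, 145]
report = summarize(latencies, duration_s=5)
```

Note how the p95 surfaces the slow outliers (480 ms, 900 ms) that an average alone would hide, which is why percentile targets rather than means are the usual SLA currency.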
Posted 2 months ago
3.0 - 5.0 years
13 - 15 Lacs
Gurugram
Work from Office
A skilled DevOps Engineer to manage and optimize both on-premises and AWS cloud infrastructure. The ideal candidate will have expertise in DevOps tools, automation, system administration, and CI/CD pipeline management while ensuring security, scalability, and reliability.

Key Responsibilities:
1. AWS & On-Premises Solution Architecture:
- Design, deploy, and manage scalable, fault-tolerant infrastructure across both on-premises and AWS cloud environments.
- Work with AWS services like EC2, IAM, VPC, CloudWatch, GuardDuty, AWS Security Hub, Amazon Inspector, AWS WAF, and Amazon RDS with Multi-AZ.
- Configure Auto Scaling Groups (ASG) and implement load balancing techniques such as ALB and NLB.
- Optimize cost and performance leveraging Elastic Load Balancing and EFS.
- Implement logging and monitoring with CloudWatch, CloudTrail, and on-premises monitoring solutions.
2. DevOps Automation & CI/CD:
- Develop and maintain CI/CD pipelines using Jenkins and GitLab for seamless code deployment across cloud and on-premises environments.
- Automate infrastructure provisioning using Ansible and CloudFormation.
- Implement CI/CD pipeline setups using GitLab, Maven, and Gradle, and deploy on Nginx and Tomcat.
- Ensure code quality and coverage using SonarQube.
- Monitor and troubleshoot pipelines and infrastructure using Prometheus, Grafana, Nagios, and New Relic.
3. System Administration & Infrastructure Management:
- Manage and maintain Linux and Windows systems across cloud and on-premises environments, ensuring timely updates and security patches.
- Configure and maintain application servers like Apache Tomcat and web servers like Nginx and Node.js.
- Implement robust security measures, SSL/TLS configurations, and secure communications.
- Configure DNS and SSL certificates.
- Maintain and optimize on-premises storage, networking, and compute resources.
4. Collaboration & Documentation:
- Collaborate with development, security, and operations teams to optimize deployment and infrastructure processes.
- Provide best practices and recommendations for hybrid cloud and on-premises architecture, DevOps, and security.
- Document infrastructure designs, security configurations, and disaster recovery plans for both environments.

Required Skills & Qualifications:
- Cloud & On-Premises Expertise: Extensive knowledge of AWS services (EC2, IAM, VPC, RDS, etc.) and experience managing on-premises infrastructure.
- DevOps Tools: Proficiency in SCM tools (Git, GitLab), CI/CD (Jenkins, GitLab CI/CD), and containerization.
- Code Quality & Monitoring: Experience with SonarQube, Prometheus, Grafana, Nagios, and New Relic.
- Operating Systems: Experience managing Linux/Windows servers and working with CentOS, Fedora, Debian, and Windows platforms.
- Application & Web Servers: Hands-on experience with Apache Tomcat, Nginx, and Node.js.
- Security & Networking: Expertise in DNS configuration, SSL/TLS implementation, and AWS security services.
- Soft Skills: Strong problem-solving abilities, effective communication, and proactive learning.

Preferred Qualifications:
- AWS certifications (Solutions Architect, DevOps Engineer) and a bachelor's degree in Computer Science or a related field.
- Experience with hybrid cloud environments and on-premises infrastructure automation.
Posted 2 months ago
5.0 - 8.0 years
30 - 32 Lacs
Gurugram, Sector-39
Work from Office
Responsibilities: As a Senior DevOps Engineer at SquareOps, you'll be expected to:
- Drive the scalability and reliability of our customers' cloud applications.
- Work directly with clients, engineering, and infrastructure teams to deliver high-quality solutions.
- Design and develop various systems from scratch with a focus on scalability, security, and compliance.
- Develop deployment strategies and build configuration management systems.
- Lead a team of junior DevOps engineers, providing guidance and support on day-to-day activities.
- Drive innovation within the team, promoting the adoption of new technologies and practices to improve project outcomes.
- Demonstrate ownership and accountability for project implementations, ensuring projects are delivered on time and within budget.
- Act as a mentor to junior team members, fostering a culture of continuous learning and growth.

The Ideal Candidate:
- A proven track record in architecting complex production systems with multi-tier application stacks.
- Expertise in designing solutions tailored to industry-specific requirements such as SaaS, AI, Data Ops, and highly compliant enterprise architectures.
- Extensive experience working with Kubernetes, various CI/CD tools, and cloud service providers, preferably AWS.
- Proficiency in automating cloud infrastructure management, primarily with tools like Terraform, shell scripting, AWS Lambda, and EventBridge.
- Solid understanding of cloud financial management strategies to ensure cost-effective use of cloud resources.
- Experience in setting up high availability and disaster recovery for cloud infrastructure.
- Strong problem-solving skills with an innovative mindset.
- Excellent communication skills, capable of effectively liaising with clients, engineering, and infrastructure teams.
- The ability to lead and mentor a team, guiding them to achieve their objectives.
- High levels of empathy and emotional intelligence, with a talent for managing and resolving conflict.
- An adaptable nature, comfortable working in a fast-paced, dynamic environment.

At SquareOps, we believe in the power of diversity and inclusion. We encourage applicants of all backgrounds, experiences, and perspectives to apply.
Posted 2 months ago
7.0 - 10.0 years
13 - 23 Lacs
Bengaluru
Work from Office
Title: DevOps Engineer
Location: Bangalore Office (4 Days WFO)
Exp: 7 to 10 Years
Skills: DevOps, Kubernetes, CI/CD, Prometheus or Grafana, AWS, Basic SRE
Posted 2 months ago
8.0 - 10.0 years
13 - 15 Lacs
Pune
Work from Office
We are seeking a hands-on Lead Data Engineer to drive the design and delivery of scalable, secure data platforms on Google Cloud Platform (GCP). In this role you will own architectural decisions, guide service selection, and embed best practices across data engineering, security, and performance disciplines. You will partner with data modelers, analysts, security teams, and product owners to ensure our pipelines and datasets serve analytical, operational, and AI/ML workloads with reliability and cost efficiency. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities:
- Lead end-to-end development of high-throughput, low-latency data pipelines and lakehouse solutions on GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Composer, Dataplex, etc.).
- Define reference architectures and technology standards for data ingestion, transformation, and storage.
- Drive service-selection trade-offs (cost, performance, scalability, and security) across streaming and batch workloads.
- Conduct design reviews and performance tuning sessions; ensure adherence to partitioning, clustering, and query-optimization standards in BigQuery.
- Contribute to long-term cloud data strategy, evaluating emerging GCP features and multi-cloud patterns (Azure Synapse, Data Factory, Purview, etc.) for future adoption.
- Lead code reviews and oversee the development activities delegated to data engineers.
- Implement best practices recommended by Google Cloud.
- Provide effort estimates for data engineering activities.
- Participate in discussions to migrate existing Azure workloads to GCP; provide solutions to migrate the workloads for selected data pipelines.

Must-Have Skills:
- 8-10 years in data engineering, with 3+ years leading teams or projects on GCP.
- Expert in GCP data services (BigQuery, Dataflow/Apache Beam, Dataproc/Spark, Pub/Sub, Cloud Storage) and orchestration with Cloud Composer or Airflow.
- Proven track record designing and optimizing large-scale ETL/ELT pipelines (streaming + batch).
- Strong fluency in SQL and one major programming language (Python, Java, or Scala).
- Deep understanding of data lake / lakehouse architectures, dimensional and data-vault modeling, and data governance frameworks.
- Excellent communication and stakeholder-management skills; able to translate complex technical topics to non-technical audiences.

Nice-to-Have Skills:
- Hands-on experience with Microsoft Azure data services (Azure Synapse Analytics, Data Factory, Event Hub, Purview).
- Experience integrating ML pipelines (Vertex AI, Dataproc ML) or real-time analytics (BigQuery BI Engine, Looker).
- Familiarity with open-source observability stacks (Prometheus, Grafana) and FinOps tooling for cloud cost optimization.

Preferred Certifications:
- Google Professional Data Engineer (strongly preferred) or Google Professional Cloud Architect
- Microsoft Certified: Azure Data Engineer Associate (nice to have)

Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
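The streaming-versus-batch trade-offs this role weighs often come down to windowed aggregation. A tumbling-window count, the basic building block of a Dataflow/Beam streaming pipeline, can be sketched in plain Python (no Beam dependency; purely illustrative):

```python
# Sketch: a tumbling-window aggregation of the kind a Dataflow/Beam
# streaming pipeline performs, written in plain Python for illustration.
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Group (timestamp_s, key) events into fixed windows and count per key.

    Each event falls into exactly one window: [0, 60), [60, 120), ...
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Toy event stream: (timestamp in seconds, event type)
events = [(5, "click"), (30, "click"), (65, "view"), (70, "click"), (130, "view")]
windows = tumbling_window_counts(events, window_s=60)
```

In a real pipeline the same grouping is expressed declaratively (Beam's `FixedWindows` plus a `Count` transform), and late data is handled with watermarks and triggers rather than this eager dictionary.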
Posted 2 months ago
2.0 - 4.0 years
11 - 12 Lacs
Bengaluru
Work from Office
Employment Type: Contract

iSource Services is hiring for one of their clients for the position of Commerce - DevOps - Engineer II.

About the Role: We are looking for a skilled DevOps Engineer (Level II) to support our Commerce platform. The ideal candidate will have 2-4 years of experience with a strong foundation in DevOps practices, CI/CD pipelines, and solid exposure to React.js, Node.js, and MongoDB for build and deployment automation.

Key Responsibilities:
- Manage CI/CD pipelines and deployment automation for commerce applications
- Collaborate with development teams using React.js, Node.js, and MongoDB
- Monitor system performance, automate infrastructure, and troubleshoot production issues
- Maintain and improve infrastructure as code using tools like Terraform, Ansible, or similar
- Ensure security, scalability, and high availability of environments
- Participate in incident response and post-mortem analysis

Qualifications:
- 2-4 years of hands-on experience in DevOps engineering
- Proficiency in CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI)
- Working knowledge of React.js, Node.js, and MongoDB
- Experience with containerization (Docker, Kubernetes)
- Familiarity with monitoring tools (e.g., Prometheus, Grafana, ELK stack)
- Good scripting skills (Shell, Python, or similar)
Posted 2 months ago
3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
Role Overview: Join our Pune AI Center of Excellence to drive software and product development in the AI space. As an AI/ML Engineer, you'll build and ship core components of our AI products, owning end-to-end RAG pipelines, persona-driven fine-tuning, and scalable inference systems that power next-generation user experiences.

Key Responsibilities:
- Model Fine-Tuning & Persona Design: Adapt and fine-tune open-source large language models (LLMs) (e.g., CodeLlama, StarCoder) to specific product domains. Define and implement "personas" (tone, knowledge scope, guardrails) at inference time to align with product requirements.
- RAG Architecture & Vector Search: Build retrieval-augmented generation systems: ingest documents, compute embeddings, and serve with FAISS, Pinecone, or ChromaDB. Design semantic chunking strategies and optimize context-window management for product scalability.
- Software Pipeline & Product Integration: Develop production-grade Python data pipelines (ETL) for real-time vector indexing and updates. Containerize model services in Docker/Kubernetes and integrate into CI/CD workflows for rapid iteration.
- Inference Optimization & Monitoring: Quantize and benchmark models for CPU/GPU efficiency; implement dynamic batching and caching to meet product SLAs. Instrument monitoring dashboards (Prometheus/Grafana) to track latency, throughput, error rates, and cost.
- Prompt Engineering & UX Evaluation: Craft, test, and iterate prompts for chatbots, summarization, and content extraction within the product UI. Define and track evaluation metrics (ROUGE, BLEU, human feedback) to continuously improve the product's AI outputs.

Must-Have Skills:
- ML/AI Experience: 3-4 years in machine learning and generative AI, including 18 months on LLM-based products.
- Programming & Frameworks: Python, PyTorch (or TensorFlow), Hugging Face Transformers.
- RAG & Embeddings: Hands-on with FAISS, Pinecone, or ChromaDB and semantic chunking.
- Fine-Tuning & Quantization: Experience with LoRA/QLoRA, 4-bit/8-bit quantization, and Model Context Protocol (MCP).
- Prompt & Persona Engineering: Deep expertise in prompt-tuning and persona specification for product use cases.
- Deployment & Orchestration: Docker, Kubernetes fundamentals, CI/CD pipelines, and GPU setup.

Nice-to-Have:
- Multi-modal AI combining text, images, or tabular data.
- Agentic AI systems with reasoning and planning loops.
- Knowledge-graph integration for enhanced retrieval.
- Cloud AI services (AWS SageMaker, GCP Vertex AI, or Azure Machine Learning).
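The retrieval step of the RAG pipelines this role owns can be sketched with toy bag-of-words vectors and cosine similarity. A real system would use learned embeddings and a vector store such as FAISS, Pinecone, or ChromaDB; this stdlib-only version just shows the shape of the computation.

```python
# Sketch: RAG retrieval with toy term-frequency "embeddings" and cosine
# similarity. Illustrative only; production systems use learned embeddings
# and an approximate nearest-neighbour index (FAISS, Pinecone, ChromaDB).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Kubernetes schedules containers across a cluster",
    "FAISS performs nearest neighbour search over embeddings",
    "Prometheus scrapes metrics from instrumented services",
]
top = retrieve("nearest neighbour search embeddings", chunks)
```

The retrieved chunk(s) would then be stuffed into the LLM's context window, which is where the semantic chunking and context-window management mentioned above come into play.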
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring an SRE to improve the reliability and scalability of production systems. Ideal for engineers passionate about automation, monitoring, and performance optimization.

Key Responsibilities:
- Design and implement SLOs, SLAs, and alerting systems
- Automate operational tasks and incident responses
- Build robust observability into all services
- Conduct post-incident reviews and root cause analysis

Required Skills & Qualifications:
- Strong coding/scripting skills (Python, Go, Bash)
- Experience with cloud services and Kubernetes
- Knowledge of monitoring/logging tools (Datadog, Prometheus, ELK)
- Bonus: Background in performance engineering or chaos testing

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
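The SLO and alerting work this role describes rests on error-budget arithmetic like the following sketch (figures are illustrative):

```python
# Sketch: error-budget and burn-rate arithmetic behind SLO-based alerting.
# A burn rate above 1.0 means the budget will be exhausted before the
# SLO window ends; high burn rates are what page an on-call SRE.

def error_budget(slo: float, window_requests: int) -> float:
    """Allowed failures in the window for a given availability SLO."""
    return (1 - slo) * window_requests

def burn_rate(failed: int, slo: float, window_requests: int) -> float:
    """How fast the budget is being consumed (1.0 = exactly on budget)."""
    budget = error_budget(slo, window_requests)
    return failed / budget if budget else float("inf")

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures.
budget = error_budget(0.999, 1_000_000)
# 2,500 failures so far means the budget is burning 2.5x too fast.
rate = burn_rate(2_500, 0.999, 1_000_000)
```

Multi-window burn-rate alerts (e.g. paging when both the 1-hour and 5-minute burn rates exceed a threshold) build directly on this calculation.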
Posted 2 months ago
1.0 - 6.0 years
2 - 6 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Design, deploy, and manage OpenShift container platforms in on-premises and cloud environments.
- Configure and optimize OpenShift clusters to ensure high availability and scalability.
- Implement CI/CD pipelines and automation for containerized applications.
- Monitor and troubleshoot OpenShift environments, identifying and resolving issues proactively.
- Work closely with development teams to support containerized application deployment and orchestration.
- Manage security policies, access controls, and compliance for OpenShift environments.
- Perform upgrades, patches, and maintenance of OpenShift infrastructure.
- Develop and maintain documentation for OpenShift architecture, configurations, and best practices.
- Stay updated with industry trends and emerging technologies in containerization and Kubernetes.
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
Posted 2 months ago
3.0 - 8.0 years
15 - 20 Lacs
Pune
Work from Office
About the job: Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website.

What You'll Do:
- Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets
- Tune Kafka for performance, reliability, and cost-efficiency
- Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC
- Automate deployments across AWS, GCP, or Azure
- Set up monitoring and alerting with Prometheus, Grafana, and JMX Exporter
- Integrate Kafka ecosystem components: Connect, Streams, Schema Registry
- Define autoscaling, resource limits, and network policies for Kubernetes workloads
- Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows

You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage)
- Proficiency in Kafka setup, tuning, scaling, and topic/partition management
- Skill in managing Kafka on Kubernetes using Strimzi, Helm, and Terraform
- Experience with CI/CD, containerization, and GitOps workflows
- Monitoring expertise using Prometheus, Grafana, and JMX
- Experience on EKS, GKE, or AKS preferred
- Strong troubleshooting and incident response mindset
- High sense of ownership and automation-first thinking
- Excellent collaboration with SREs, developers, and platform teams
- Clear communicator, documentation-driven, and eager to mentor and share knowledge

Why Join Sarvaha?
- Top notch remuneration and excellent growth opportunities - An excellent, no-nonsense work environment with the very best people to work with - Highly challenging software implementation problems - Hybrid Mode. We offered complete work from home even before the pandemic.
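The topic/partition management the posting above calls for relies on deterministic key-to-partition mapping: records with the same key always land on the same partition, which preserves per-key ordering. A sketch of the idea follows; note that Kafka's default partitioner actually uses murmur2 hashing, so `zlib.crc32` here is a stand-in for illustration only.

```python
# Sketch: how a Kafka producer maps a record key to a partition
# (hash of key modulo partition count). Kafka's default partitioner uses
# murmur2; zlib.crc32 is used here purely as an illustrative stand-in.
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministically map a record key to a partition index."""
    return zlib.crc32(key) % num_partitions

# The same key always routes to the same partition, so all events for
# "order-42" are consumed in order by a single consumer.
p1 = partition_for(b"order-42", 6)
p2 = partition_for(b"order-42", 6)
```

This determinism is also why increasing a topic's partition count after the fact breaks key-to-partition affinity, a classic operational gotcha when scaling Kafka.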
Posted 2 months ago
1.0 - 2.0 years
6 - 8 Lacs
Bengaluru
Work from Office
CI/CD Developer || 1-2 years exp || Bangalore || Work from office

Roles & Responsibilities:
- Automate and optimize CI/CD workflows to enhance efficiency and developer productivity
- Design, implement, and maintain automated CI/CD pipelines for seamless code testing, building, and deployment
- Integrate automated testing (unit, integration, performance) to ensure code quality before deployment
- Manage and monitor CI/CD/DevOps infrastructure to ensure high availability
- Embed security best practices in the DevOps pipeline, addressing vulnerabilities early and ensuring compliance
- Oversee monitoring, logging, root cause analysis, and preventive measures for system failures
- Manage user roles and permissions, and enforce security policies across environments
- Generate actionable insights through interactive reports and visualizations using Power BI
- Collaborate with development teams to understand CI/CD needs and deliver effective solutions
- Possess strong analytical, technical, and problem-solving skills with a research-driven approach
- Be a self-starter, contributing to the adoption of DevOps/CI/CD practices
- Research and evaluate new DevOps tools for continuous improvement
- Document CI/CD/DevOps infrastructure, workflows, and automation processes

Technical Skills:
- Programming and automation: Python, Windows batch scripts/PowerShell
- Good knowledge of the Windows platform
- Build tool: Jenkins
- Version control: Subversion
- Visualization and reporting: Power BI
- Cloud computing, containerization orchestration

You are best equipped for this role if you have:
- Expertise and working knowledge of Agile software development methodology
- Expert knowledge and hands-on experience in scripting (PowerShell/batch/Python), automation, and DevOps tools and methodologies
- Expert knowledge and working experience in build automation using Jenkins
- Hands-on experience in creating and managing Jenkins pipelines
- Skill in Jenkins server administration
- Hands-on experience with version control tools: Subversion (SVN), Git
- Skill in administering version control tools on the server: Subversion (SVN), Git
- The ability to use and integrate different industry-standard tools that fit the different parts of the SDLC
- Knowledge of Power BI for visualization and reporting
- Knowledge of cloud computing and containerization orchestration
- A team-player attitude with good communication skills

Nice to Have:
- Knowledge of and exposure to containerization using Docker, Kubernetes, OpenShift
- Knowledge of and exposure to monitoring and logging using Prometheus, Grafana
- Understanding of the complete software development life cycle (SDLC)
Posted 2 months ago
5.0 - 7.0 years
35 - 40 Lacs
Mumbai, Pune, Gurugram
Work from Office
Must have 5+ years of experience. Implement and maintain Kubernetes clusters, ensuring high availability and scalability. Establish real-time monitoring with Grafana, Prometheus, and CloudWatch. Night shift. Location: Mumbai, Gurugram, Chennai, Indore, Remote, Bangalore, Delhi, Kolkata.
Posted 2 months ago
4.0 - 8.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Roles & Responsibilities:
- Working closely with the CTO and members of technical staff to meet deadlines.
- Working with an agile team to set up and configure GitOps (CI/CD) based pipelines on GitLab
- Create and deploy Edge AIoT pipelines using AWS Greengrass or Azure IoT
- Design and develop secure cloud system architectures in accordance with enterprise standards
- Package and automate deployment of releases using Helm charts
- Analyze and optimize resource consumption of deployments
- Integrate with Prometheus, Grafana, Kibana, etc. for application monitoring
- Adhere to best practices to deliver secure and robust solutions

Requirements:
- Experience with Kubernetes and AWS
- Knowledge of cloud architecture concepts (IaaS, PaaS, SaaS)
- Knowledge of Docker and Linux bash scripting
- Strong desire to expand knowledge in modern cloud architectures
- Knowledge of system security concepts (SAST, DAST, penetration testing, vulnerability analysis)
- Familiarity with version control concepts (Git)
Posted 2 months ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About the Role: We are seeking a skilled and motivated Cloud Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining our cloud infrastructure. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud-based applications and services.

Key Responsibilities:

Cloud Infrastructure Design & Implementation:
- Design and implement cloud infrastructure solutions using AWS, Azure, or GCP.
- Configure and manage virtual machines, storage, networking, and other cloud resources.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Design and deploy scalable and highly available cloud architectures.

Cloud Operations & Maintenance:
- Monitor cloud infrastructure performance and identify potential issues.
- Troubleshoot and resolve cloud-related incidents.
- Perform routine maintenance tasks, such as patching and upgrades.
- Implement and maintain backup and disaster recovery solutions.

Automation & Scripting:
- Automate cloud infrastructure provisioning and management tasks using scripting languages (e.g., Python, Bash).
- Develop and maintain automation scripts for CI/CD pipelines.
- Implement configuration management using tools like Ansible or Chef.

Security & Compliance:
- Implement and maintain cloud security best practices.
- Ensure compliance with industry standards and regulations (e.g., SOC 2, GDPR).
- Implement security monitoring and alerting.
- Implement IAM best practices.

Containerization & Orchestration:
- Deploy and manage containerized applications using Docker and Kubernetes.
- Implement and maintain container orchestration solutions.
- Manage and implement Helm charts.

Monitoring & Logging:
- Implement and maintain monitoring and logging solutions using tools like Prometheus, Grafana, and the ELK stack.
- Configure alerts and notifications for critical events.
- Utilize cloud-native monitoring tools.

Collaboration & Communication:
- Collaborate with development and operations teams to ensure smooth application deployments.
- Communicate effectively with stakeholders regarding cloud infrastructure status and issues.
- Document cloud infrastructure designs and procedures.

Required Technical Skills:
- Cloud Platforms: Proficiency in AWS, Azure, or GCP; knowledge of core cloud services (EC2, S3, VPC, Azure VMs, Azure Storage, GCP Compute Engine, GCP Storage).
- Infrastructure as Code (IaC): Experience with Terraform or CloudFormation.
- Containerization & Orchestration: Proficiency in Docker and Kubernetes; experience with Helm.
- Scripting & Automation: Proficiency in Python or Bash scripting; experience with Ansible or Chef.
- Monitoring & Logging: Experience with Prometheus, Grafana, and the ELK stack; experience with cloud-native monitoring tools.
- Networking: Understanding of networking concepts and protocols (TCP/IP, DNS, VPN).
- Security: Knowledge of cloud security best practices and IAM.
- Operating Systems: Proficiency in Linux or Windows Server administration.
- Version Control: Experience with Git.

Required Experience:
- 3-5 years of experience in cloud engineering or related roles.
- Proven experience in designing and implementing cloud infrastructure.
- Experience with automating cloud operations.

Soft Skills:
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- Strong attention to detail.
- Strong desire to learn new technologies.

Certifications (Preferred):
- AWS Certified Solutions Architect - Associate
- Microsoft Certified: Azure Administrator Associate
- Google Cloud Certified Professional Cloud Architect
- Certified Kubernetes Administrator (CKA)

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
Posted 2 months ago
4.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer (Python)
Experience: 4-5 Years
Location: Bangalore

About the Role: We are seeking a highly motivated and skilled DevOps Engineer with strong Python programming skills to join our team. In this role, you will be responsible for automating and streamlining our software development and deployment processes, ensuring efficient and reliable software delivery.

Key Responsibilities:
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Azure DevOps.
- Automate infrastructure provisioning and management using tools like Terraform, Ansible, or Puppet.
- Develop and maintain Python scripts for various DevOps tasks, such as:
  1. Automating deployments
  2. Monitoring and alerting
  3. Data analysis and reporting
  4. System administration tasks
- Troubleshoot and resolve infrastructure and deployment issues.
- Collaborate with development teams to improve software delivery processes.
- Stay abreast of the latest DevOps tools, technologies, and best practices.

Required Skills (Mandatory):
- Strong Python programming skills
- Experience with CI/CD pipelines and tools (Jenkins, GitLab CI/CD, Azure DevOps)
- Experience with infrastructure automation tools (Terraform, Ansible, Puppet)
- Experience with cloud platforms (AWS, Azure, GCP)
- Experience with containerization technologies (Docker, Kubernetes)
- Experience with scripting languages (Bash, Shell)
- Strong understanding of Linux/Unix systems
- Excellent problem-solving and analytical skills
- Strong communication and collaboration skills

Desired Skills (Optional):
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack)
- Experience with configuration management tools (Chef, SaltStack)
- Experience with security best practices and tools
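A retry-with-backoff helper is a typical example of the Python DevOps scripting this role lists (deployment health checks, flaky API calls). A minimal sketch follows; the failing service below is simulated for illustration.

```python
# Sketch: retry with exponential backoff, a common building block in
# Python DevOps automation. The "flaky" health check is simulated here.
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}

def flaky_health_check():
    """Simulated endpoint that fails twice before recovering."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service warming up")
    return "healthy"

status = retry(flaky_health_check)
```

Production versions usually add jitter to the delay and retry only on specific exception types so that genuine configuration errors fail fast instead of being retried.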
Posted 2 months ago
2.0 - 5.0 years
11 - 15 Lacs
Ahmedabad
Work from Office
Designation: Senior DevOps Engineer / DevOps Engineer
Key Responsibilities:
CI/CD Pipeline Development and Management:
- Design, build, and maintain CI/CD pipelines using Jenkins, GitLab CI, or similar tools.
- Automate deployment processes for microservices and containerized applications across multiple environments.
- Ensure high availability and rollback capabilities for production deployments.
Infrastructure as Code (IaC):
- Develop and maintain infrastructure provisioning scripts using tools like Terraform or CloudFormation.
- Implement configuration management solutions with Ansible, Puppet, or Chef.
- Ensure infrastructure scalability, reliability, and security for on-prem and cloud environments.
Scripting and Automation:
- Write and optimize scripts using Python, Bash, or PowerShell for automating operational tasks.
- Build custom tools to streamline repetitive DevOps workflows.
- Implement monitoring and alerting automation to proactively address system issues.
Database Management:
- Collaborate with database administrators to manage and optimize SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Implement automated database backup, restoration, and performance monitoring solutions.
- Ensure secure handling of database credentials and access through tools like HashiCorp Vault.
Performance Monitoring and Optimization:
- Integrate monitoring tools like Prometheus, Grafana, or the ELK stack for observability.
- Conduct root-cause analysis for incidents and implement fixes to avoid recurrence.
- Optimize application performance by fine-tuning DevOps processes and infrastructure.
Collaboration and Team Support:
- Partner with development, QA, and operations teams to align DevOps practices with business goals.
- Support developers by troubleshooting build and deployment issues.
- Share best practices and mentor junior team members in DevOps methodologies.
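The rollback capability listed under CI/CD responsibilities can be illustrated with a minimal Python sketch. The health check is a stand-in predicate and the version names are hypothetical; a real pipeline would probe the deployed service instead.

```python
# Minimal sketch of deploy-with-rollback logic: promote a release only if it
# passes a health check, otherwise keep serving the last known-good version.
def deploy(history, new_version, healthy):
    """Record new_version if healthy; otherwise keep the last known-good."""
    history.append(new_version)
    if healthy(new_version):
        return new_version
    history.pop()       # discard the failed release
    return history[-1]  # keep serving the last known-good version

releases = ["v1.0"]
ok = lambda v: v != "v2.0"           # assume only v2.0 fails its health check
print(deploy(releases, "v1.1", ok))  # v1.1 - healthy, promoted
print(deploy(releases, "v2.0", ok))  # v1.1 - unhealthy, rolled back
```

Keeping the release history as an explicit list is what makes rollback cheap: the last known-good version is always one lookup away.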
Technical Skills and Qualifications:
Education:
- Bachelor's degree in Computer Science, IT, or a related field.
Core Skills:
- Messaging Queues: Proficiency with Kafka and other messaging queue systems for real-time data streaming and high-throughput data pipelines.
- CI/CD Tools: Expertise in Jenkins, GitLab CI/CD, or similar tools for automation pipelines.
- Scripting: Strong proficiency in Python, Bash, or PowerShell scripting for automation.
- Cloud Platforms: Hands-on experience with AWS, Azure, or Google Cloud.
- Containerization: Proficiency with Docker and Kubernetes for managing containerized applications.
- IaC Tools: Expertise in Terraform, CloudFormation, or similar tools for infrastructure provisioning.
- Monitoring: Experience with Prometheus, Grafana, the ELK stack, or equivalent monitoring solutions.
- Database Management: Familiarity with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Knowledge of Helm charts for Kubernetes application deployments.
- Experience with MLOps pipelines for AI/ML workload integration.
- Familiarity with GitOps tools like ArgoCD or FluxCD for declarative infrastructure management.
- Proficiency in implementing service meshes like Istio for microservices.
Soft Skills:
- Strong analytical and troubleshooting skills.
- Excellent communication abilities to collaborate with cross-functional teams.
- Commitment to continuous learning and knowledge sharing.
Posted 2 months ago
4.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Position: Java Backend Developer | Location: Chennai | Job Type: Full-time
Job Summary: We are looking for a proficient Java Backend Developer with 4 to 8 years of experience to join our team. The ideal candidate will have hands-on experience in building high-performance, scalable, enterprise-grade applications. The role involves working with Java, Spring Boot, Kafka, and other modern technologies to deliver robust backend services. You will work closely with cross-functional teams to design and implement backend systems and integrate them with front-end components.
Key Responsibilities:
- Design, develop, and maintain backend services using Java and Spring Boot.
- Implement and manage distributed systems with Kafka for real-time data processing.
- Develop RESTful APIs and microservices to support front-end functionality.
- Ensure high performance and responsiveness of applications.
- Troubleshoot and optimize backend systems to ensure reliability and scalability.
- Collaborate with front-end developers, DevOps, and QA teams to ensure seamless integration.
- Write clean, scalable, and maintainable code following best practices.
- Implement monitoring solutions using tools like Kibana, Prometheus, and Grafana.
Primary Skills:
- Strong proficiency in Java and the Spring Boot framework.
- Experience with Kafka for messaging and stream processing.
- Familiarity with RESTful API design and microservices architecture.
- Understanding of the software development lifecycle (SDLC), design patterns, and best coding practices.
Secondary Skills:
- Experience with monitoring and visualization tools like Kibana, Prometheus, and Grafana.
- Knowledge of databases, including MySQL and NoSQL databases.
- Hands-on experience with cloud technologies (preferably AWS).
- Exposure to containerization and orchestration tools like Kubernetes.
- Familiarity with CI/CD pipelines and DevOps practices.
Posted 2 months ago
8.0 - 12.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Position: Sr DevOps Engineer | Location: Hyderabad | Immediate Joiner
Position Overview: As a DevOps Engineer, you will play a critical role in ensuring the smooth operation and maintenance of our vendor applications. You will be responsible for scripting, supporting Java applications, and working with SQL databases. Your expertise will help us maintain high availability, performance, and security of our applications.
Key Responsibilities:
- Collaborate with development and operations teams to support and maintain vendor applications.
- Develop and maintain scripts for automation, deployment, and monitoring.
- Provide support for Java-based applications, including troubleshooting and performance tuning.
- Manage and optimize SQL databases, ensuring data integrity and availability.
- Implement and maintain CI/CD pipelines to streamline the software development lifecycle.
- Monitor application performance and system health, proactively identifying and resolving issues.
- Participate in on-call rotations to provide 24/7 support for critical systems.
- Document processes, procedures, and best practices to ensure knowledge sharing and consistency.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a DevOps Engineer or in a similar role.
- Strong scripting skills (e.g., Python, Bash, PowerShell).
- Experience supporting Java applications, including troubleshooting and performance tuning.
- Proficiency in SQL and experience managing SQL databases.
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI).
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with configuration management tools (e.g., Ansible, Chef, Puppet).
- Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with Agile and DevOps methodologies.
Posted 2 months ago
6.0 - 11.0 years
18 - 22 Lacs
Chennai, Bengaluru
Work from Office
Who We Are: Applied Materials is the global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips - the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world - like AI and IoT. If you want to work beyond the cutting edge, continuously pushing the boundaries of science and engineering to make possible the next generations of technology, join us to Make Possible® a Better Future.
What We Offer: Location: Bangalore, IND; Chennai, IND. At Applied, we prioritize the well-being of you and your family and encourage you to bring your best self to work. Your happiness, health, and resiliency are at the core of our benefits and wellness programs. Our robust total rewards package makes it easier to take care of your whole self and your whole family. We're committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits. You'll also benefit from a supportive work culture that encourages you to learn, develop and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible - while learning every day in a supportive leading global company. Visit our Careers website to learn more about careers at Applied.
About Applied: Applied Materials is the leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. Our expertise in modifying materials at atomic levels and on an industrial scale enables customers to transform possibilities into reality. At Applied Materials, our innovations make possible the technology shaping the future.
Our Team: Our team is developing a high-performance computing solution for low-latency and high-throughput image processing and deep-learning workloads that will enable our chip manufacturing process control equipment to offer differentiated value to our customers.
Your Opportunity: As an HPC Architect, you will get the opportunity to architect high-performance computing solutions from scratch and design/optimize all aspects (compute, memory, networking, storage) for a better cost of ownership.
Roles and Responsibility: As an architect, you will be responsible for designing HPC infrastructure solutions, including compute, networking, storage, and workload management components. You will work closely with cross-functional teams, including hardware, software, product management, and business stakeholders, to understand compute workloads and translate them into platform architecture and designs that meet business needs. You will create and maintain detailed system architecture diagrams and specifications. You will evaluate and select appropriate hardware and software components for HPC environments. You will install, configure, and maintain HPC systems, including hardware, software, and networking components. You will develop and implement automation scripts for system management and deployment. You will be a subject matter expert to unblock dependent teams in the HPC domain. You will be expected to develop system benchmarks, profile systems to understand bottlenecks, and optimize workflows and processes to improve cost of ownership. Identify and mitigate technical risks and issues throughout the HPC development life cycle. Ensure that the compute cluster is resilient, reliable, and maintainable.
You will be expected to stay abreast of the latest HPC technologies, including hardware, software and networking solutions. Your primary focus will be to understand the compute workload and design an HPC cluster with the right combination of nodes, CPU/GPU, memory, interconnects and storage to achieve optimum performance at minimum cost of ownership.
Our Ideal Candidate: Someone who has the drive and passion to learn quickly, with the ability to multi-task and switch contexts based on business needs.
Qualifications:
- In-depth experience with Linux system administration and hardware/software configuration.
- Strong knowledge of HPC technologies including cluster computing, high-speed interconnects (InfiniBand, RoCE), and parallel filesystems (Lustre, GPFS, BeeGFS, etc.)
- Experience in creating and maintaining operating system images with different installation and boot schemes.
- Extremely good with automation tools like Ansible, Chef, SaltStack and scripting languages (Python and Bash).
- Experience in creating and maintaining storage solutions with different RAID configurations; ability to design storage solutions for different IOPS and access patterns (random vs. sequential read/write) and tune storage and filesystems for better performance.
- Good knowledge of networking concepts including IP addressing, routing, protocols, switch configuration for RDMA, VLAN configuration, network bonding, etc.
- Good knowledge of virtualization, hardware and software hypervisors.
- Good knowledge of containerization technologies like Docker and Singularity.
- Experience in software-defined networking and storage.
- Experience in setting up remote management protocols like IPMI, Redfish, etc.
- Experience in setting up and using monitoring systems like Prometheus and Grafana.
- Experience in system profiling and custom tuning for target workloads for higher performance and low cost of ownership.
- Very good written and verbal communication skills.
Very good at technical documentation meant to serve as manuals for non-experts in the field.
Additional Qualifications:
- Experience in HPC cluster management and workload orchestration software (e.g. SLURM, Torque, LSF)
- Experience in setting up deep-learning training/inference solutions.
- Experience in private cloud infrastructure like Kubernetes, OpenStack, CloudStack, etc.
- Experience in distributed high-performance computing and parallel programming frameworks.
- Good knowledge of low-latency and high-throughput data transfer technologies (RDMA on RoCE, InfiniBand)
Education: Bachelor's Degree or higher in Computer Science or related disciplines.
Applied Materials is committed to diversity in its workforce including Equal Employment Opportunity for Minorities, Females, Protected Veterans and Individuals with Disabilities.
Additional Information: Time Type: Full time. Employee Type: Assignee / Regular. Travel: Relocation Eligible: No.
Applied Materials is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.
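The benchmarking and profiling duties this role describes can be hinted at with a crude single-node sketch in Python: estimating memory-copy bandwidth. Real HPC work would use STREAM, OSU micro-benchmarks, or similar; the buffer size here is an arbitrary illustrative choice.

```python
import time

# Crude memory-copy bandwidth estimate: time one full copy of a buffer.
# Python adds interpreter overhead, so treat this as a lower bound and an
# illustration of the measure-then-tune loop, not a production benchmark.
def copy_bandwidth_gbs(n_bytes=64 * 1024 * 1024):
    """Return an estimated memory-copy bandwidth in GB/s."""
    src = bytearray(n_bytes)
    t0 = time.perf_counter()
    dst = bytes(src)                      # one full copy of the buffer
    elapsed = time.perf_counter() - t0
    assert len(dst) == n_bytes            # the copy really happened
    return (n_bytes / elapsed) / 1e9

print(f"~{copy_bandwidth_gbs():.1f} GB/s")
```

Profiling a node with a sweep of buffer sizes (cache-resident vs. DRAM-resident) is the usual next step when hunting the bottlenecks mentioned above.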
Posted 2 months ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
About NetApp: NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer. No matter the data type, workload or environment, we help our customers identify and realize new business possibilities. And it all starts with our people. If this sounds like something you want to be part of, NetApp is the place for you. You can help bring new ideas to life, approaching each challenge with fresh eyes. Of course, you won't be doing it alone. At NetApp, we're all about asking for help when we need it, collaborating with others, and partnering across the organization and beyond.
Job Summary: The NetApp Keystone team is responsible for cutting-edge technologies that enable NetApp's pay-as-you-go offering. Keystone helps customers manage data on prem or in the cloud and have invoices that are charged in a subscription manner.
Job Requirements - Role & Responsibilities: As a Go Lang Engineer for Keystone, you'll have the opportunity to:
- Enjoy working on customer issues that no one has solved yet
- Influence engineering teams to suggest improvement ideas on features
- Learn storage as a subscription service
- Work with other engineers to deliver the best customer experience for Keystone
Key Skills:
- Strong knowledge of the Go programming language: paradigms, constructs, and idioms
- Bachelor's/Master's degree in computer science, information technology, or engineering
- Knowledge of various Go frameworks and tools; experience working with the Go programming language
- Strong written and communication skills with proven fluency in English
- Familiarity with database technologies such as NoSQL, Prometheus and MongoDB
- Hands-on experience with code versioning tools like Git
- Passionate about learning new tools, languages, philosophies, and workflows
- Working with generated code and code generation techniques
- Working with document databases and Golang ORM libraries
- Knowledge of programming methodologies: Object
Oriented/Functional/Design Patterns
- Knowledge of software development methodologies: SCRUM/AGILE/LEAN
- Knowledge of software deployment: Docker/Kubernetes
- Knowledge of software team tools: GIT/JIRA/CICD
Education: IC - typically requires a minimum of 5-8 years of related experience with a bachelor's/master's degree.
At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.
Equal Opportunity Employer: NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.
Why NetApp: We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.
Submitting an application: To ensure a streamlined and fair hiring process for all
candidates, our team only reviews applications submitted through our company website. This practice allows us to track, assess, and respond to applicants efficiently. Emailing our employees, recruiters, or Human Resources personnel directly will not influence your application.
Posted 2 months ago
7.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
About The Role: Job Title: Production Specialist, AVP. Location: Pune, India.
Role Description: Our organization within Deutsche Bank is AFC Production Services. We are responsible for providing technical L2 application support for business applications. The AFC (Anti-Financial Crime) line of business has a current portfolio of 25+ applications. The organization is in the process of transforming itself using Google Cloud and many new technology offerings. As an Assistant Vice President, your role will include hands-on production support, and you will be actively involved in technical issue resolution across multiple applications. You will also work as application lead and will be responsible for technical & operational processes for all applications you support. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy.
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive Hospitalization Insurance for you and your dependents
- Accident and Term Life Insurance
- Complimentary health screening for 35 yrs. and above
Your key responsibilities:
- Provide technical support by handling and consulting on BAU, incidents/emails/alerts for the respective applications.
- Perform post-mortem and root cause analysis using ITIL standards of Incident Management, Service Request fulfillment, Change Management, Knowledge Management, and Problem Management.
- Manage the regional L2 team and vendor teams supporting the application. Ensure the team is up to speed and picks up the support duties.
- Build up technical subject matter expertise on the applications being supported, including business flows, application architecture, and hardware configuration.
- Define and track KPIs, SLAs and operational metrics to measure and improve application stability and performance.
- Conduct real-time monitoring to ensure application SLAs are achieved and maximum application availability (uptime) using an array of monitoring tools.
- Build and maintain effective and productive relationships with the stakeholders in business, development, infrastructure, and third-party systems / data providers & vendors.
- Assist in the process to approve application code releases as well as tasks assigned to support.
- Keep key stakeholders informed using communication templates.
- Approach support with a proactive attitude, a desire to seek root cause, in-depth analysis, and strive to reduce inefficiencies and manual efforts.
- Mentor and guide junior team members, fostering technical upskilling and knowledge sharing.
- Provide strategic input into disaster recovery planning, failover strategies and business continuity procedures.
- Collaborate and deliver on initiatives, and embed these initiatives to drive stability in the environment.
- Perform reviews of all open production items with the development team and push for updates and resolutions to outstanding tasks and recurring issues.
- Drive service resilience by implementing SRE (site reliability engineering) principles, ensuring proactive monitoring, automation and operational efficiency.
- Ensure regulatory and compliance adherence, managing audits, access reviews, and security controls in line with organizational policies.
The candidate will have to work in shifts as part of a rota covering APAC and EMEA hours between 07:00 IST and 09:00 PM IST (2 shifts). In the event of major outages or issues we may ask for flexibility to help provide appropriate cover. Weekend on-call coverage needs to be provided on a rotational/need basis.
Your skills and experience:
- 9-15 years of experience in providing hands-on IT application support.
- Experience in managing vendor teams providing 24x7 support.
- Preferred: team lead role experience; experience in an investment bank or financial institution.
- Bachelor's degree from an accredited college or university with a concentration in Computer Science or an IT-related discipline (or equivalent work experience/diploma/certification).
- Preferred: ITIL v3 foundation certification or higher.
- Knowledgeable in cloud products like Google Cloud Platform (GCP) and hybrid applications.
- Strong understanding of ITIL/SRE/DevOps best practices for supporting a production environment.
- Understanding of KPIs, SLO, SLA and SLI.
- Monitoring tools: knowledge of Elastic Search, Control-M, Grafana, Geneos, OpenShift, Prometheus, Google Cloud Monitoring, Airflow, Splunk.
- Working knowledge of creating dashboards and reports for senior management.
- Red Hat Enterprise Linux (RHEL): professional skill in searching logs, process commands, starting/stopping processes, and using OS commands to aid in tasks needed to resolve or investigate issues. Shell scripting knowledge a plus.
- Understanding of database concepts and exposure to working with Oracle, MS SQL, BigQuery etc. databases.
- Ability to work across countries, regions, and time zones with a broad range of cultures and technical capability.
Skills That Will Help You Excel:
- Strong written and oral communication skills, including the ability to communicate technical information to a non-technical audience, and good analytical and problem-solving skills.
- Proven experience in leading L2 support teams, including managing vendor teams and offshore resources.
- Able to train, coach, and mentor, and know where each technique is best applied.
- Experience with GCP or another public cloud provider to build applications.
- Experience in an investment bank, financial institution or large corporation using enterprise hardware and software.
- Knowledge of Actimize, Mantas, and case management software is good to have.
- Working knowledge of Big Data, Hadoop/Secure Data Lake is a plus.
- Prior experience in automation projects is great to have.
- Exposure to Python, shell, Ansible or other scripting languages for automation and process improvement.
- Strong stakeholder management skills ensuring seamless coordination between business, development, and infrastructure teams.
- Ability to manage high-pressure issues, coordinating across teams to drive swift resolution.
- Strong negotiation skills with interface teams to drive process improvements and efficiency gains.
How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
Posted 2 months ago
7.0 - 12.0 years
32 - 37 Lacs
Bengaluru
Work from Office
About The Role: Job Title: Site Reliability Engineer. Location: Bangalore, India. Corporate Title: AVP.
Role Description: You will work closely with application teams to ensure stable, well monitored applications that are resilient to faults. You will agree and review Service Level Objectives (SLOs) to achieve high availability for applications based on their criticality. You will maintain Error Budgets for the application teams and prevent releases in the event of production instability and reduced availability. You will focus on reducing manual toil, improving operational reliability and driving automation-first practices. This is a hands-on role with a strong focus on implementing SRE practices and reducing toil for Developer Tools.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive Hospitalization Insurance for you and your dependents
- Accident and Term Life Insurance
- Complimentary health screening for 35 yrs. and above
Your key responsibilities:
- Drive stability, performance and reliability improvements for TDI Engineering applications.
- Build monitoring and alerting solutions that alert in the event of failures/performance issues across TDI Engineering applications, helping us provide the optimum service level to users.
- Provide feedback loops to continually improve application resilience across multiple application teams.
- Collaborate with product owners and the engineering team to prioritize reliability and stability of these applications.
- Define, measure and maintain SLOs and Error Budgets to ensure availability for end users and to achieve appropriate levels of application stability.
- Identify opportunities for automation and self-service capabilities and implement them to eliminate toil for both the application teams and the SRE team, to optimise effectiveness.
- Manage outage resolution and agree actions to reduce the likelihood of failure happening in future by owning RCA and conducting blameless postmortems.
Your skills and experience:
- Bachelor's degree from an accredited college or university with a concentration in Computer Science or an IT-related discipline (or equivalent work experience or diploma).
- 8+ years of experience in IT in large corporate environments, specifically in controlled production environments.
- Demonstrable Site Reliability Engineering experience of at least 3+ years.
- Excellent analytical and problem-solving skills.
- Experience in implementing observability solutions using any industry-standard tools.
- Scripting skills (Groovy, shell, Bash, cron or any equivalent).
- Experience in mid-range technologies and platforms, i.e. UNIX/Linux, Oracle database and Nginx.
- Good to have: understanding and experience with Developer Tools (Jira, Confluence, Bitbucket, TeamCity, Artifactory, uDeploy) as an enterprise-level administrator experienced in managing applications with a large user base.
- Knowledge and experience of observability tools like Grafana and Prometheus.
How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
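The SLO and Error Budget responsibilities mentioned in this role reduce to simple arithmetic on the availability target. A minimal sketch, with an illustrative 30-day window and figures:

```python
# An error budget is the fraction of a time window in which the service is
# allowed to be unavailable under its SLO. Window length is an assumption here.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Minutes of error budget left after observed downtime."""
    return error_budget_minutes(slo, window_days) - downtime_minutes

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 30.0), 1))  # 13.2
```

When `budget_remaining` approaches zero, the release-freeze behaviour the role describes kicks in: the budget is spent, so risky deployments wait.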
Posted 2 months ago
7.0 - 11.0 years
0 - 1 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
- Bachelor's degree in Computer Science Engineering, or a related field.
- 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, or GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies.
- Cross-functional collaboration: work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and Mongo.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- The technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for those residing in Hyderabad / Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2 - 4pm (Monday, 26th May to Friday, 30th May).
Posted 2 months ago