7.0 - 10.0 years
38 - 40 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Description We are seeking an experienced DevOps Engineer with a strong background in GitHub Actions, Azure Kubernetes Service (AKS), and ArgoCD to join our dynamic team in India. The ideal candidate will have extensive experience in automating deployment processes and managing containerized applications. Responsibilities Design, implement, and maintain CI/CD pipelines using GitHub Actions. Manage and orchestrate containerized applications using Azure Kubernetes Service (AKS). Automate deployment processes and ensure reliable release management with ArgoCD. Monitor system performance and troubleshoot issues in collaboration with development teams. Implement best practices for infrastructure as code and continuous integration and delivery. Collaborate with cross-functional teams to understand requirements and provide technical solutions. Skills and Qualifications 7-10 years of experience in DevOps or related fields. Strong proficiency in GitHub Actions for CI/CD workflows. Hands-on experience with Azure Kubernetes Service (AKS) for container orchestration. Experience with ArgoCD for continuous delivery and GitOps practices. Solid understanding of cloud services, particularly Azure. Knowledge of scripting languages such as Bash, Python, or PowerShell. Familiarity with monitoring tools and practices, such as Prometheus, Grafana, or Azure Monitor. Strong problem-solving skills and the ability to work in a fast-paced environment. Excellent communication skills and ability to work collaboratively in a team.
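For illustration only: the posting above centers on GitHub Actions pipelines deploying to AKS with ArgoCD. Below is a minimal, hedged sketch of the kind of post-deployment verification step such a pipeline might run, using the official Kubernetes Python client; the deployment and namespace names are hypothetical, and a real GitOps setup would usually rely on ArgoCD's own sync and health status rather than a standalone script.

```python
# Sketch: verify that an AKS deployment has fully rolled out after a pipeline run.
# Assumes kubeconfig access to the cluster; "web-api" and "prod" are hypothetical names.
from kubernetes import client, config

def deployment_ready(name: str, namespace: str) -> bool:
    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment_status(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"{namespace}/{name}: {ready}/{desired} replicas ready")
    return ready == desired

if __name__ == "__main__":
    raise SystemExit(0 if deployment_ready("web-api", "prod") else 1)
```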
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Your role and responsibilities Grit, drive, and a deep feeling of ownership. BS or MS in Computer Science or a related technical discipline. Strong experience in Python or an equivalent language. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Minimum 5+ years of experience in DevOps or equivalent. Strong experience with Bash scripting, Kubernetes, Docker, and Terraform; experience with DevOps in Azure cloud. Preferred technical and professional experience An understanding of application security and information security controls. A good understanding of large-scale distributed systems in practice, including multi-tier architectures, application security, monitoring, and storage systems. Working knowledge of GitHub Actions, Azure DevOps, Jenkins (or other similar toolsets)
Posted 2 weeks ago
3.0 - 7.0 years
5 - 12 Lacs
Hyderabad
Hybrid
Azure DevOps Engineer Location: Hyderabad Work Mode: Hybrid (3 Days per week from Office) What You Will Do: We are seeking a highly skilled and experienced Azure Cloud engineer to join our team. The ideal candidate will be responsible for the further development and implementation of advanced cloud-based solutions on Legal and General's Microsoft Azure platform. The candidate also needs to be proficient with DevOps and DevSecOps processes and practices, with a strong knowledge of GitHub, including GitHub Actions and infrastructure as code scaffolding, particularly Terraform. Key Responsibilities: Lead the design and deployment of enterprise-wide Azure solutions, ensuring they meet both functional and non-functional requirements, policies, principles, and standards, and are secure, scalable, and reliable. Oversee the integration of security tooling into Azure deployments to enhance security posture and compliance (knowledge of Wiz will be beneficial, but not essential). Utilize GitHub for source control management and collaborate with development teams to implement GitHub Actions for automating workflows. Drive cloud migration strategies, including assessment, planning, and execution, with a focus on security and best practices. Stay abreast of the latest Azure features and capabilities, incorporating them into solution designs as appropriate. Collaborate with cross-functional teams to ensure the security, scalability, and performance of Azure infrastructure. Excellent oral and written communication skills; an active team player. Experience Range: 3-7 years Professional Attributes You Possess: Effective communication skills Strong analytical and problem-solving skills Excellent organizational skills and attention to detail Ability to function well in a high-paced environment Should be a team player and self-starter Should be a quick learner
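Purely as an illustrative sketch of working programmatically with Azure alongside Terraform and GitHub Actions, the snippet below lists resource groups with the Azure SDK for Python, the sort of inventory or tagging check a DevSecOps pipeline might run. The subscription ID comes from an environment variable; everything else here is an assumption, not part of the role description.

```python
# Sketch: enumerate resource groups in a subscription, e.g. as an inventory/tagging check.
# Assumes the azure-identity and azure-mgmt-resource packages and a configured credential.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed to be injected by the pipeline
rm_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

for rg in rm_client.resource_groups.list():
    owner = (rg.tags or {}).get("owner", "untagged")  # hypothetical tagging convention
    print(f"{rg.name}\t{rg.location}\t{owner}")
```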
Posted 2 weeks ago
5.0 - 8.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Develop EDS-compatible templates/components, implement reusable structures, integrate Adobe Commerce data, support content migration to AEM Assets, configure EDS pipelines, resolve rendering issues, and follow Adobe best practices. Required Candidate profile 5+ years of front-end and Adobe EDS/Cloud Migration experience; expertise in HTML/CSS/JS; GraphQL & Adobe Commerce integration; AEM Assets handling; Adobe App Builder/API experience is a plus.
Posted 2 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Pune
Work from Office
We are hiring a DevOps / Site Reliability Engineer for a 6-month full-time onsite role in Pune (with possible extension). The ideal candidate will have 6-9 years of experience in DevOps/SRE roles with deep expertise in Kubernetes (preferably GKE), Terraform, Helm, and GitOps tools like ArgoCD or Flux. The role involves building and managing cloud-native infrastructure, CI/CD pipelines, and observability systems, while ensuring performance, scalability, and resilience. Experience in infrastructure coding, backend optimization (Node.js, Django, Java, Go), and cloud architecture (IAM, VPC, CloudSQL, Secrets) is essential. Strong communication and hands-on technical ability are musts. Immediate joiners only.
Posted 2 weeks ago
5.0 - 7.0 years
16 - 20 Lacs
Noida, Pune, Gurugram
Work from Office
Below are the key skills and qualifications we are looking for: Over 4 years of software development experience, with expertise in Python and familiarity with other programming languages such as Java and JavaScript. A minimum of 2 years of significant hands-on experience with AWS services, including Lambda and Step Functions. Domain knowledge in invoicing or billing is preferred, with experience on the Zuora platform (Billing and Revenue) being highly desirable. At least 2 years of working knowledge in SQL. Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena. Experience with continuous integration/delivery (CI/CD) tools like Jenkins and Terraform. Excellent communication skills are essential. Design and implement backend services and APIs using Python. Build and maintain CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, or Jenkins. Optimize performance, scalability, and security of cloud applications. Implement logging, monitoring, and alerting for production workloads.
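As a hedged sketch of the Python-on-AWS work described above (Lambda plus S3), here is a minimal handler that reads an object dropped into a bucket. The event shape follows the standard S3 notification format; the processing step is a placeholder, not the team's actual invoicing logic.

```python
# Sketch: minimal AWS Lambda handler that reads an S3 object from an S3 event notification.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]                 # standard S3 event notification structure
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    # Real logic (e.g., invoice parsing or a Step Functions hand-off) would go here.
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key, "bytes": len(body)})}
```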
Posted 2 weeks ago
4.0 - 9.0 years
6 - 12 Lacs
Bengaluru
Hybrid
What you'll be doing As an SRE in the demo engineering team, you will be critical in building and securing Okta's demonstration infrastructure. Your expertise and contributions will directly impact our sales team's ability to demonstrate the value of our product. Specifically, your responsibilities will include: Developing, operating, and maintaining critical infrastructure (AWS, Lambda, DynamoDB, Azure). Integration with 3rd-party tools and other infrastructure in the Okta environment Evangelising security best practices, leading initiatives to strengthen our security posture for critical infrastructure and managing security & compliance requirements. Developing and maintaining technical documentation, runbooks, and procedures Triaging and troubleshooting production issues to ensure reliability and performance Identifying and automating manual processes for scaling, onboarding and offboarding. Promoting and applying best practices for building scalable and reliable services across engineering Supporting a 24x7 customer-facing environment, managing incidents and determining how we can prevent them in the future as part of an on-call rotation What you'll bring to the role 4+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment Experience with the deployment of production workloads on public cloud infrastructure (AWS and Azure) Strong experience in configuration management using IaC tools such as Terraform and CloudFormation Strong experience in security practices and network engineering in AWS Experience managing CI/CD infrastructures, with a strong proficiency in platforms like GitHub Actions to streamline deployment pipelines and ensure efficient software delivery Strong proficiency in Node.js for backend systems, demonstrating the ability to develop and maintain robust, scalable, and efficient software components essential for the reliability and performance of the infrastructure. Excellent problem-solving skills and a detail-oriented mindset. Ability to work independently with minimal supervision and guidance. Strong communication and collaboration abilities to work effectively within a team. "This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment."
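To make the infrastructure portion above concrete, here is a small, hedged boto3 sketch of reading and writing items in a DynamoDB table, the way a demo-environment registry might. The table name, key schema, and attributes are all hypothetical, and the role's actual backend is described as Node.js; this is only an illustration of the pattern in Python.

```python
# Sketch: track demo environments in DynamoDB. Table name and "env_id" key are hypothetical.
import boto3

table = boto3.resource("dynamodb").Table("demo-environments")

def register_env(env_id: str, owner: str) -> None:
    # Record a newly provisioned demo environment and its owner.
    table.put_item(Item={"env_id": env_id, "owner": owner, "status": "provisioning"})

def get_env(env_id: str):
    # Returns the stored item, or None if the environment is unknown.
    return table.get_item(Key={"env_id": env_id}).get("Item")

if __name__ == "__main__":
    register_env("demo-001", "sales-engineer@example.com")
    print(get_env("demo-001"))
```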
Posted 2 weeks ago
5.0 - 10.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Job Duties and Responsibilities Drive the long-term architecture of our CI/CD platform and developer tools; be the thought leader in optimizing for developer productivity. Design CI and automation components for the scale of Okta, which, at its peak, runs over 55 million tests a day. Set direction and influence the development of tools that support the end-to-end lifecycle of code management Act as an innovator by advising, recommending, and managing the introduction of new technology and practices Consult with other architects across the R&D and Infrastructure Engineering teams to ensure our solution is addressing the top concerns of engineers Build high-quality tools and automation for internal use to support continuous integration, continuous delivery, and developer productivity. Design software or customize software for engineering use with the aim of optimizing operational efficiency. Provide technical input by implementing proofs of concept, influence the choice of the right technology, contribute to existing frameworks, and review design and code. Roll out deliverables to internal customers in phases, monitor adoption, collect feedback, and fine-tune the project to respond to internal customers' needs. Support pre-prod infrastructure in the cloud--monitoring, backup and restore, SLA, cost control, deployment. Minimum REQUIRED Knowledge, Skills, and Abilities: In-depth understanding of application development, microservices architecture, and successful elements of a multi-service ecosystem In-depth knowledge of large-scale and high-transactional continuous integration systems aimed at performance, accuracy, and stability. Expert at Git, Maven, build automation tools (e.g., Jenkins, CircleCI, GitHub Actions) Experience with public cloud (AWS), its services, and its supporting tools (cost control, reporting, environment management). Experience working with Gradle, Bazel, Artifactory, Docker Registry, npm registry, and GitHub administration. Experience in Kubernetes is a plus. Experience in developing/managing infrastructure as a service is a plus. Education and Training: B.S. in CS or equivalent
Posted 2 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Ahmedabad
Work from Office
This open position is for Armanino India LLP, which is located in India. Armanino India LLP is a fully owned subsidiary of Armanino. Job Description: We are seeking a highly skilled and motivated Cloud Development Manager (Azure) to oversee the development and deployment of cloud-based applications within the Microsoft Azure ecosystem. This role demands a hands-on technical expert, a strategic thinker, and a strong leader to drive innovation, efficiency, and best practices in cloud architecture, security, and scalability. The Cloud Development Manager (Azure) will collaborate closely with the Armanino product team, ensuring the delivery of high-quality solutions while adhering to industry best practices. Additionally, the Cloud Development Manager (Azure) will play a key role in building and maintaining a dynamic, efficient, and skilled development team. Responsibilities: Lead and mentor Azure development teams to ensure seamless execution and high-quality delivery. Oversee the design, development, and deployment of scalable cloud-based applications. Drive Azure best practices, ensuring compliance, security, and performance optimization. Implement and manage Azure DevOps, CI/CD pipelines, and automation strategies to enhance efficiency. Collaborate with stakeholders, architects, and engineers to align technical solutions with business objectives. Work closely with the Armanino product team to align development efforts with organizational goals. Monitor cloud infrastructure, troubleshoot issues, and lead continuous improvements. Stay ahead of emerging Azure technologies, driving innovation and strategic adoption. Oversee developer operations, ensuring adherence to best practices and coding standards. Take ownership of the quality of work delivered by the development team. Assist in building and maintaining a cohesive, skilled, and motivated development team. Requirements: Bachelor's/Master's degree in Computer Science, Engineering, or a related field. 8+ years of hands-on experience in Azure cloud development and architecture, with technical expertise in Azure services. Deep understanding of serverless architecture principles, including event-driven programming, function as a service (FaaS), and designing scalable, stateless microservices. Experience in designing, developing, and deploying microservices using C#, Java, and related frameworks. Familiarity with DevOps methodologies and tools for continuous integration (CI) and continuous deployment (CD) pipelines such as Azure DevOps, GitHub Actions, etc. Understanding of and expertise in security best practices in serverless architectures, including identity and access management (IAM), encryption, network security, and compliance standards. Demonstrated experience in leading and managing development teams. Ability to work independently and make sound decisions. Excellent written and verbal communication skills. Certifications in Microsoft Azure (e.g., Azure Solutions Architect, Azure DevOps Lead) and/or related technologies preferred. Experience in microservices, containerization, and serverless computing preferred. Knowledge of AI/ML integration within Azure environments preferred. Experience with agile development methodologies (e.g., Scrum, POD) preferred. Strong problem-solving, technical expertise, and analytical skills preferred. Knowledge of emerging trends and technologies in cloud computing preferred.
Compensation and Benefits: Compensation: Commensurate with industry standards. Other Benefits: Provident Fund, Gratuity, Medical Insurance, Group Personal Accident Insurance, and other employment benefits depending on the position.
Posted 2 weeks ago
6.0 - 9.0 years
18 - 20 Lacs
Pune
Work from Office
Timings: Full Time (As per company timings) Notice Period: (Immediate Joiner - Only) Duration: 6 Months (Possible Extension) Shift Timing: 11:30 AM - 9:30 PM IST About the Role We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills. Key Responsibilities: Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps). Configure and maintain observability tools (logs, metrics, traces, alerts). Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java). Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools. Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets). Lead or participate in incident response and root cause analysis using logs, traces, and dashboards. Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments. Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices. Required Skills & Experience: 5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles. Strong coding/scripting skills and ability to review production-grade backend code. Hands-on experience with Kubernetes in production, preferably on GKE. Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux). Deep knowledge of Cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management). Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies. Strong communication and documentation skills, capable of driving improvements through PRs and design reviews. Tech Stack & Tools Cloud & Orchestration: GKE, Kubernetes IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux Monitoring & Alerting: Datadog, PagerDuty Databases & Networking: CloudSQL, Cloudflare Security & Access Control: Secret Management, IAM Driving Results: A good individual contributor and a good team player. Flexible attitude towards work, as per the needs. Proactively identify & communicate issues and risks. Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Manage a collaborative and analytical approach. Self-confident and humble. Open to continuous learning Intelligent, rigorous thinker who can operate successfully amongst bright people
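One of the responsibilities above is reducing configuration drift; as a hedged illustration, the sketch below wraps `terraform plan -detailed-exitcode` in Python to surface drift for a module directory (exit code 2 means pending changes). The module path and how this hooks into CI are assumptions, not part of the posting.

```python
# Sketch: detect Terraform drift for a module directory.
# terraform plan -detailed-exitcode returns 0 (no changes), 1 (error), 2 (changes pending).
import subprocess
import sys

def detect_drift(module_dir: str) -> int:
    cmd = ["terraform", f"-chdir={module_dir}", "plan", "-detailed-exitcode", "-input=false", "-no-color"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 2:
        print(f"DRIFT detected in {module_dir}:\n{result.stdout}")
    elif result.returncode == 1:
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(detect_drift(sys.argv[1] if len(sys.argv) > 1 else "."))
```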
Posted 2 weeks ago
6.0 - 11.0 years
6 - 11 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Key Responsibilities GitHub Actions Development: Design, implement, and optimize CI/CD workflows using GitHub Actions to support multi-environment deployments. Leverage GitHub Actions for automated builds, tests, and deployments, ensuring integration with Azure services. Create reusable GitHub Actions templates and libraries for consistent DevOps practices. GitHub Repository Administration: Manage GitHub repositories, branching strategies, and access permissions. Implement GitHub features like Dependabot, code scanning, and security alerts to enhance code quality and security. Azure DevOps Integration: Utilize Azure Pipelines in conjunction with GitHub Actions to orchestrate complex CI/CD workflows. Configure and manage Azure services such as: Azure Kubernetes Service (AKS) for container orchestration. Azure Application Gateway and Azure Front Door for load balancing and traffic management. Azure Monitoring, Azure App Insights, and Azure KeyVault for observability, diagnostics, and secure secrets management. Helm charts and Microsoft Bicep for Infrastructure as Code. Automation & Scripting: Develop robust automation scripts using PowerShell, Bash, or Python to streamline operational tasks. Automate monitoring, deployments, and environment management workflows. Infrastructure Management: Oversee and maintain cloud environments with a focus on scalability, security, and reliability. Implement containerization strategies using Docker and orchestration via AKS. Collaboration: Partner with cross-functional teams to align DevOps practices with business objectives while maintaining compliance and security standards. Monitoring & Optimization: Deploy and maintain monitoring and logging tools to ensure system performance and uptime. Optimize pipeline execution times and infrastructure costs. Documentation & Best Practices: Document GitHub Actions workflows, CI/CD pipelines, and Azure infrastructure configurations. Advocate for best practices in version control, security, and DevOps methodologies. Qualifications Education: Bachelor's degree in Computer Science, Information Technology, or related field (preferred). Experience: 3+ years of experience in DevOps engineering with a focus on GitHub Actions and Azure DevOps tools. Proven track record of designing CI/CD workflows using GitHub Actions in production environments. Extensive experience with Azure services, including AKS, Azure Front Door, Azure Application Gateway, Azure KeyVault, Azure App Insights, and Azure Monitoring. Hands-on experience with Infrastructure as Code tools, including Microsoft Bicep and Helm charts. Technical Skills: GitHub Actions Expertise: Deep understanding of GitHub Actions, workflows, and integrations with Azure services. Scripting & Automation: Proficiency in PowerShell, Bash, and Python for creating automation scripts and custom GitHub Actions. Containerization & Orchestration: Experience with Docker and Kubernetes, including Azure Kubernetes Service (AKS). Security Best Practices: Familiarity with securing CI/CD pipelines, secrets management, and cloud environments. Monitoring & Optimization: Hands-on experience with Azure Monitoring, App Insights, and logging solutions to ensure system reliability. Soft Skills: Strong problem-solving and analytical abilities. Excellent communication and collaboration skills, with the ability to work in cross-functional and global teams. Detail-oriented with a commitment to delivering high-quality results.
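As a small, hedged illustration of the automation and secrets-management items above, the snippet below fetches a secret from Azure Key Vault with Python. The vault URL and secret name are placeholders, and in a GitHub Actions workflow this would typically sit behind an OIDC or service-principal login step rather than local credentials.

```python
# Sketch: read a secret from Azure Key Vault for use in an automation script.
# Vault URL and secret name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://example-vault.vault.azure.net"   # placeholder vault
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

secret = client.get_secret("db-connection-string")    # placeholder secret name
print(f"Fetched secret '{secret.name}' (value deliberately not printed)")
```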
Preferred Qualifications Experience in DevOps practices within the financial or tax services industries. Familiarity with advanced GitHub features such as Dependabot, Security Alerts, and CodeQL. Knowledge of additional CI/CD platforms like Jenkins or CircleCI.
Posted 2 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Engineer who will play a critical role in ensuring the accuracy, reliability, and performance of our enterprise data platforms. Reporting to the Test Automation Engineering Manager, you will work at the intersection of data engineering, quality assurance, and automation to validate complex data flows across pipelines, with a special focus on semantic layer validation and GraphQL API testing. This is a hands-on role that demands deep technical proficiency in both manual and automated data validation. You will be responsible for understanding the business logic behind data transformations, validating the flow of data through various systems, and ensuring that semantic and API layers deliver consistent and contract-compliant outputs. You will actively participate in the design and development of automation frameworks, collaborating closely with the QA Manager and engineering teams to ensure testability and maintainability are built into the system from the start. You will also contribute to test strategy, execution planning, and defect lifecycle management, working across cross-functional teams to maintain high standards for data quality. This role is ideal for someone who is passionate about data quality, has hands-on experience with ETL/ELT pipelines, is comfortable working with cloud-native data platforms (AWS, Databricks, etc.), and has a strong grasp of testing best practices, including API schema validation, CI/CD integration, and semantic layer testing. You'll have the opportunity to shape and contribute to a modern data quality engineering practice, ensuring that downstream consumers such as analytics teams, business stakeholders, and machine learning models can fully trust the data they rely on. Roles & Responsibilities: Collaborate with the Test Automation Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation. Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification. Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines. Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection. Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations. Track, manage, and report defects using tools like JIRA, ensuring proper prioritization, root cause analysis, and resolution. Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and promote shift-left testing practices. Ensure comprehensive test coverage across the data lifecycle, including data ingestion, transformation, delivery, and consumption. Participate actively in QA ceremonies (daily standups, sprint planning, retrospectives), and continuously drive improvements to QA processes and culture. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation Contributions to internal quality dashboards or data observability systems Awareness of metadata-driven testing approaches and lineage-based validations Experience working with agile Testing methodologies such as Scaled Agile.
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Must-Have Skills: 6-9 years of experience in QA roles, with at least 3+ years focused on ETL/Data Pipeline Testing in cloud-native environments. Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries, automation scripts, and custom data validation logic. Practical experience with manual validation of data pipelines, including source-to-target testing and business rule verification. Proven ability to support, maintain, and enhance automation test suites and contribute to framework improvements in collaboration with QA leadership. Experience validating GraphQL APIs, semantic layers (e.g., Looker, dbt), and ensuring schema/data contract compliance. Familiarity with data platforms and tools such as Databricks, AWS Glue, Redshift, Athena, or BigQuery. Strong understanding of QA methodologies, including test planning, test case design, test data management, and defect lifecycle tracking. Proficiency in tools like JIRA, TestRail, or Zephyr for test case and defect management. Skilled in building and automating data quality checks: schema validation, null checks, duplicates, threshold alerts, and data transformation validation. Hands-on with API testing using tools like Postman, pytest, or custom automation frameworks built in Python. Experience working in Agile/Scrum environments, actively participating in QA ceremonies and sprint cycles. Exposure to integrating automated tests into CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Education and Professional Certifications Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered. Master's degree and 6+ years of experience, or Bachelor's degree and 8+ years. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills.
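A hedged sketch of the kind of automated validation this role describes: PySpark checks for null values and duplicate keys on a batch dataset. The input path and the "order_id" key column are hypothetical, and a production framework would add schema, threshold, and reconciliation rules around this core.

```python
# Sketch: basic data-quality checks (null counts, duplicate keys) on a batch dataset.
# The S3 path and "order_id" key column are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/orders/")

# Count nulls per column in a single pass.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).collect()[0].asDict()

# Count rows that share a business key.
duplicate_rows = df.count() - df.dropDuplicates(["order_id"]).count()

print("null counts per column:", null_counts)
assert duplicate_rows == 0, f"{duplicate_rows} duplicate order_id rows found"
```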
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE Role Description: We are seeking a highly experienced and hands-on Test Automation Engineering Manager with strong leadership skills and deep expertise in Data Integration, Data Quality, and automated data validation across real-time and batch pipelines. In this strategic role, you will lead the design, development, and implementation of scalable test automation frameworks that validate data ingestion, transformation, and delivery across diverse sources into AWS-based analytics platforms, leveraging technologies like Databricks, PySpark, and cloud-native services. As a lead, you will drive the overall testing strategy, lead a team of test engineers, and collaborate cross-functionally with data engineering, platform, and product teams. Your focus will be on delivering high-confidence, production-grade data pipelines with built-in validation layers that support enterprise analytics, ML models, and reporting platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Define and drive the test automation strategy for data pipelines, ensuring alignment with enterprise data platform goals. Lead and mentor a team of data QA/test engineers, providing technical direction, career development, and performance feedback. Own delivery of automated data validation frameworks across real-time and batch data pipelines using Databricks and AWS services. Collaborate with data engineering, platform, and product teams to embed data quality checks and testability into pipeline design. Design and implement scalable validation frameworks for data ingestion, transformation, and consumption layers. Automate validations for multiple data formats including JSON, CSV, Parquet, and other structured/semi-structured file types during ingestion and transformation. Automate data testing workflows for pipelines built on Databricks/Spark, integrated with AWS services like S3, Glue, Athena, and Redshift. Establish reusable test components for schema validation, null checks, deduplication, threshold rules, and transformation logic. Integrate validation processes with CI/CD pipelines, enabling automated and event-driven testing across the development lifecycle. Drive the selection and adoption of tools/frameworks that improve automation, scalability, and test efficiency. Oversee testing of data visualizations in Tableau, Power BI, or custom dashboards, ensuring backend accuracy via UI and data-layer validations. Ensure accuracy of API-driven data services, managing functional and regression testing via Postman, Python, or other automation tools. Track test coverage, quality metrics, and defect trends, providing regular reporting to leadership and ensuring continuous improvement. Establish alerting and reporting mechanisms for test failures, data anomalies, and governance violations. Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle. Lead test automation initiatives by implementing best practices and scalable frameworks, embedding test suites into CI/CD pipelines to enable automated, continuous validation of data workflows, catalog changes, and visualization updates. Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence.
Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines. Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high-quality standards. Must-Have Skills: Hands-on experience with Databricks and Apache Spark for building and validating scalable data pipelines Strong expertise in AWS services including S3, Glue, Athena, Redshift, and Lake Formation Proficient in Python, PySpark, and SQL for developing test automation and validation logic Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro In-depth understanding of data integration workflows including batch and real-time (streaming) pipelines Strong ability to define and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation Experience designing modular, reusable automation frameworks for large-scale data validation Skilled in integrating tests with CI/CD tools like GitHub Actions, Jenkins, or Azure DevOps Familiarity with orchestration tools such as Apache Airflow, Databricks Jobs, or AWS Step Functions Hands-on experience with API testing using Postman, pytest, or custom automation scripts Proven track record of leading and mentoring QA/test engineering teams Ability to define and own test automation strategy and roadmap for data platforms Strong collaboration skills to work with engineering, product, and data teams Excellent communication skills for presenting test results, quality metrics, and project health to leadership Contributions to internal quality dashboards or data observability systems Awareness of metadata-driven testing approaches and lineage-based validations Experience working with agile Testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation Understanding of DataOps methodologies and practices Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch Experience building or maintaining test data generators Education and Professional Certifications Bachelor's/Master's degree in Computer Science and Engineering preferred Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills.
Posted 2 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad
Work from Office
Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation. Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification. Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines. Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection. Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations. Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data. Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution. Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices. Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption. Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture. Experience building or maintaining test data generators Contributions to internal quality dashboards or data observability systems Awareness of metadata-driven testing approaches and lineage-based validations Experience working with agile Testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Must-Have Skills: 6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation. Strong in SQL, Python, and optionally PySpark; comfortable with writing complex queries and validation scripts. Practical experience with manual validation of data pipelines and source-to-target testing. Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance. Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery. Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation. Experience working in Agile/Scrum environments with standard QA processes. Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation. Proven experience in manual and automated testing of batch and real-time data pipelines. Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift). Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr. Ability to troubleshoot data issues independently and collaborate with engineering for root cause analysis. Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions). Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation Hands-on experience with API testing using Postman, pytest, or custom automation scripts Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch Education and Professional Certifications Bachelor's/Master's degree in Computer Science and Engineering preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills.
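To illustrate the GraphQL and contract-compliance validation mentioned above, here is a hedged pytest-style sketch that checks a response against an expected field contract using `requests`. The endpoint, query, and field names are hypothetical, not part of any real platform described in the posting.

```python
# Sketch: contract-style check of a GraphQL API response with pytest + requests.
# Endpoint URL, query, and expected fields are hypothetical.
import requests

GRAPHQL_URL = "https://data-platform.example.internal/graphql"
QUERY = "query { orders(limit: 5) { id amount currency } }"

def test_orders_contract():
    resp = requests.post(GRAPHQL_URL, json={"query": QUERY}, timeout=30)
    assert resp.status_code == 200
    payload = resp.json()
    assert "errors" not in payload, payload.get("errors")
    for order in payload["data"]["orders"]:
        assert set(order) == {"id", "amount", "currency"}   # exact field contract
        assert order["amount"] is None or order["amount"] >= 0
```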
Posted 2 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
What you will do Amgen India is a key contributor to the company's global digital transformation initiatives, delivering secure, compliant, and user-friendly digital solutions. As part of the Global Medical Technology Medical Information team, you will play a critical role in ensuring quality, compliance, and performance across Amgen's Medical Information platforms. In this role, you will be responsible for ensuring the quality and compliance of enterprise applications, with a strong focus on test planning, execution, defect management, and test automation. You will work closely with product owners, developers, and QA teams to validate systems against business and regulatory requirements. Your expertise in automation testing, along with manual testing knowledge, will contribute to delivering robust solutions across platforms such as Medical Information Websites and other validated systems. This role is critical in ensuring that applications meet performance, security, and compliance expectations across the development lifecycle, especially in GxP-regulated environments. Roles & Responsibilities: Design and develop manual and automated test cases based on functional and non-functional requirements. Create and maintain the Requirement Traceability Matrix (RTM) to ensure full test coverage. Assist in User Acceptance Testing (UAT), working closely with business stakeholders. Set up test data based on preconditions identified in test cases. Execute tests including Dry Runs, Operational Qualification (OQ), and regression testing. Capture test evidence and document test results in accordance with validation standards. Log and track defects using tools such as JIRA or HP ALM, ensuring clear documentation and traceability. Ensure defects are re-tested post-fix and verified for closure in collaboration with development teams. Execute test scripts provided by analysts, focusing on accuracy and completeness of testing. Collaborate with developers to ensure all identified bugs and issues are addressed and resolved effectively. Support automation testing efforts by building and maintaining scripts in tools like Selenium, TestNG, or similar. Maintain detailed documentation of test cycles, supporting audit readiness and compliance. Ability to perform responsibilities in compliance with GxP and Computer System Validation (CSV) regulations, ensuring proper documentation, execution, and adherence to regulatory standards. What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience Preferred Qualifications: BS/MS in Computer Science, Information Systems, or related field 3+ years of experience in application testing and automation frameworks Strong knowledge of test lifecycle documentation including test plans, RTMs, OQ protocols, and summary reports Hands-on experience with Selenium, TestNG, Postman, JMeter, or similar tools Experience in validated (GxP) environments and with CSV practices Familiarity with Agile/Scrum methodologies and continuous testing practices Experience working in cross-functional teams, especially in healthcare or pharmaceutical domains Knowledge of test management and defect tracking tools (e.g., HP ALM, JIRA, qTest) Must-Have Skills: Strong experience in designing and executing both manual and automated test cases based on functional and non-functional requirements. Hands-on expertise in building and maintaining automated functional, regression, and integration test scripts using tools such as Selenium, TestNG, Postman, or similar, enabling scalable and reusable test automation. Proficient in using tools like JIRA, HP ALM, or qTest for defect logging, tracking, test execution, and traceability, ensuring audit readiness and compliance with validation standards. Familiarity with Jenkins, GitHub Actions, or similar tools for integrating testing into DevOps pipelines Develop, enhance, and maintain test automation scripts using data-driven, keyword-driven, or hybrid frameworks, enabling reusable, scalable, and maintainable test coverage across Salesforce and integrated systems. Proficient in creating test lifecycle documentation (RTMs, OQ protocols, test plans, reports) and supporting end-to-end testing within regulated GxP and CSV-compliant computer systems, ensuring adherence to validation, audit, and documentation standards Good-to-Have Skills: Familiarity with performance testing (e.g., JMeter) and API testing using tools like Postman to validate service-level functionality and performance. Experience working within Agile or Scrum frameworks and supporting continuous testing practices across sprint cycles and iterative releases. Proven ability to work effectively with cross-functional teams including developers, QA, and business stakeholders to support UAT, validate fixes, and ensure issue resolution throughout the development lifecycle. Background in testing applications in the healthcare or pharmaceutical industry, with an understanding of compliance, patient safety, and regulatory constraints. Ability to manage test data setup and maintain thorough documentation of test evidence to support compliance, audit preparation, and traceability. Experience collaborating with cross-functional teams, particularly within healthcare or pharmaceutical environments.
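As a hedged example of the Selenium-based automation listed above: a minimal pytest-style check against a hypothetical Medical Information search page. The URL and selectors are assumptions only; a validated (GxP) suite would additionally capture test evidence and trace each check back to an RTM entry.

```python
# Sketch: minimal Selenium check of a search workflow. URL and selectors are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_medical_information_search():
    driver = webdriver.Chrome()            # assumes a local Chrome/driver setup
    try:
        driver.get("https://medinfo.example.com")
        driver.find_element(By.NAME, "q").send_keys("Product A dosing")
        driver.find_element(By.ID, "search-button").click()
        assert "Results" in driver.title   # expected-result step from the test case
    finally:
        driver.quit()
```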
Professional Certifications ISTQB Foundation or Advanced Level Certification Certified Automation Tester (e.g., Selenium WebDriver Certification) SAFe Agile or Scrum Certification (optional) Certified CSV/Validation Professional (optional) Soft Skills: Strong attention to detail and documentation quality Excellent analytical and problem-solving skills Good verbal and written communication skills Ability to work both independently and within a team High accountability and ownership of assigned tasks Adaptability to fast-paced and evolving environments Strong collaboration skills across functional and technical teams
Posted 2 weeks ago
0.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Work from Office
The ideal candidate will have a deep understanding of automation, configuration management, and infrastructure-as-code principles, with a strong focus on Ansible. You will work closely with developers, system administrators, and other collaborators to automate infrastructure-related processes, improve deployment pipelines, and ensure consistent configurations across multiple environments. The Infrastructure Automation Engineer will be responsible for developing innovative self-service solutions for our global workforce and further enhancing our self-service automation built using Ansible. As part of a scaled Agile product delivery team, the Developer works closely with product feature owners, project collaborators, operational support teams, peer developers and testers to develop solutions to enhance self-service capabilities and solve business problems by identifying requirements, conducting feasibility analysis, proof of concepts and design sessions. The Developer serves as a subject matter expert on the design, integration and operability of solutions to support innovation initiatives with business partners and shared services technology teams. Please note, this is an onsite role based in Hyderabad. Key Responsibilities: Automating repetitive IT tasks - Collaborate with multi-functional teams to gather requirements and build automation solutions for infrastructure provisioning, configuration management, and software deployment. Configuration Management - Design, implement, and maintain code including Ansible playbooks, roles, and inventories for automating system configurations and deployments and ensuring consistency. Ensure the scalability, reliability, and security of automated solutions. Troubleshoot and resolve issues related to automation scripts, infrastructure, and deployments. Perform infrastructure automation assessments and implementations, providing solutions to increase efficiency, repeatability, and consistency. DevOps - Facilitate continuous integration and deployment (CI/CD). Orchestration - Coordinate multiple automated tasks across systems. Develop and maintain clear, reusable, and version-controlled playbooks and scripts. Manage and optimize cloud infrastructure using Ansible and Terraform automation (AWS, Azure, GCP, etc.). Continuously improve automation workflows and practices to enhance speed, quality, and reliability. Ensure that infrastructure automation adheres to best practices, security standards, and regulatory requirements. Document and maintain processes, configurations, and changes in the automation infrastructure. Participate in design reviews, client requirements sessions, and development team activities to deliver features and capabilities supporting automation initiatives Collaborate with product owners, collaborators, testers and other developers to understand, estimate, prioritize and implement solutions Design, code, debug, document, deploy and maintain solutions in a highly efficient and effective manner Participate in problem analysis, code review, and system design Remain current on new technology and apply innovation to improve functionality Collaborate closely with collaborators and team members to configure, improve and maintain current applications Work directly with users to resolve support issues within product team responsibilities Monitor health, performance and usage of developed solutions What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor's degree and 0 to 3 years of computer science, IT, or related field experience OR Diploma and 4 to 7 years of computer science, IT, or related field experience Deep hands-on experience with Ansible including playbooks, roles, and modules Proven experience as an Ansible Engineer or in a similar automation role Scripting skills in Python, Bash, or other programming languages Proficiency in Terraform and CloudFormation for AWS infrastructure automation Experience with other configuration management tools (e.g., Puppet, Chef). Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (GitHub Actions, CodePipeline, etc.) Familiarity with monitoring tools (e.g., Dynatrace, Prometheus, Nagios) Working in an Agile (SAFe, Scrum, and Kanban) environment Preferred Qualifications: Red Hat Certified Specialist in Developing with Ansible Automation Platform Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform Red Hat Certified System Administrator AWS Certified Solutions Architect Associate or Professional AWS Certified DevOps Engineer Professional Terraform Associate Certification Good-to-Have Skills: Experience with Kubernetes (EKS) and service mesh architectures. Knowledge of AWS Lambda and event-driven architectures. Familiarity with AWS CDK, Ansible, or Packer for cloud automation. Exposure to multi-cloud environments (Azure, GCP) Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.) Soft Skills: Strong analytical and problem-solving skills. Effective communication and collaboration with multi-functional teams. Ability to work in a fast-paced, cloud-first environment. Shift Information: This position is an onsite role and may require working during later hours to align with business hours. Candidates must be willing and able to work outside of standard hours as required to meet business needs.
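Since the role above is Ansible-centric, here is a hedged sketch of a small Python wrapper that dry-runs a playbook with `--check --diff` before a real apply; the playbook and inventory paths are placeholders, and teams often use ansible-runner or AWX instead of a raw subprocess call like this.

```python
# Sketch: dry-run an Ansible playbook (--check --diff) before applying changes.
# Playbook and inventory paths are placeholders.
import subprocess
import sys

def check_playbook(playbook: str = "site.yml", inventory: str = "inventories/prod") -> int:
    cmd = ["ansible-playbook", playbook, "-i", inventory, "--check", "--diff"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(check_playbook())
```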
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Position Summary The F5 NGINX Business Unit is seeking a DevOps Software Engineer III based in India. As a DevOps engineer, you will be an integral part of a development team delivering high-quality features for exciting next-generation NGINX SaaS products. In this position, you will play a key role in building automation, standardization, operations support, and tools to implement and support world-class products; design, build, and maintain infrastructure, services and tools used by our developers, testers and CI/CD pipelines. You will champion efforts to improve reliability and efficiency in these environments and explore and lead efforts towards new strategies and architectures for pipeline services, infrastructure, and tooling. When necessary, you are comfortable wearing a developer hat to build a solution. You are passionate about automation and tools. You'll be expected to handle most development tasks independently, with minimal direct supervision. Primary Responsibilities Collaborate with a globally distributed team to design, build, and maintain tools, services, and infrastructure that support product development, testing, and CI/CD pipelines for SaaS applications hosted on public cloud platforms. Ensure DevOps infrastructure and services maintain the required level of availability, reliability, scalability, and performance. Diagnose and resolve complex operational challenges involving network, security, and web technologies. This includes troubleshooting problems with HTTP load balancers, API gateways (e.g., NGINX proxies), and related systems. Take part in product support, bug triaging, and bug-fixing activities on a rotating schedule to ensure the SaaS service meets its SLA commitments. Consistently apply forward-thinking concepts relating to automation and CI/CD processes. Skills Experience with deploying infrastructure and services in one or more cloud environments such as AWS, Azure, Google Cloud. Experience with configuration management and deployment automation tools, such as Terraform, Ansible, Packer. Experience with observability platforms like Grafana, Elastic Stack, etc. Experience with source control and CI/CD tools like Git, GitLab CI, GitHub Actions, AWS CodePipeline, etc. Proficiency in scripting languages such as Python and Bash. Solid understanding of Unix OS Familiarity or experience with container orchestration technologies such as Docker and Kubernetes. Good understanding of computer networking (e.g., DNS, DHCP, TCP, IPv4/v6) Experience with network service technologies (e.g., HTTP, gRPC, TLS, REST APIs, OpenTelemetry). Qualifications Bachelor's or advanced degree; and/or equivalent work experience. 5+ years of experience in relevant roles.
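As a hedged illustration of the operations-support side of this role (diagnosing issues behind HTTP load balancers and API gateways), the snippet below probes a set of endpoints and reports status and latency. The URLs are placeholders; real monitoring would live in the observability stack named above rather than an ad hoc script.

```python
# Sketch: probe service endpoints behind an NGINX/API gateway and report status + latency.
# Endpoint URLs are placeholders.
import time
import requests

ENDPOINTS = ["https://api.example.com/healthz", "https://gateway.example.com/status"]

def probe(url: str) -> None:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        print(f"{url} -> {resp.status_code} in {time.monotonic() - start:.3f}s")
    except requests.RequestException as exc:
        print(f"{url} -> FAILED ({exc})")

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        probe(endpoint)
```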
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Hyderabad
Work from Office
Shift timings: US Shift (6:00 PM to 4:00 AM IST) Purpose We are looking for a seasoned Senior QA Engineer with a strong foundation in both manual and automation testing. The ideal candidate will be highly detail-oriented, capable of understanding business requirements, and able to contribute to product quality through both hands-on testing and thoughtful feedback. You will work closely with product managers and developers to ensure high-quality releases by creating effective test cases, executing comprehensive test plans, and implementing automation for key workflows. Strong communication skills are essential, as you'll be responsible for clearly reporting QA status, identifying issues early, and proposing areas of improvement in the product. Key Responsibilities Analyze business requirements and functional specs to create test cases and identify potential risks or improvements. Perform manual testing for feature validation, regression testing, and exploratory testing. Design, develop, and maintain automated test scripts using tools such as Selenium. Participate in all phases of the software development lifecycle to ensure quality is embedded throughout the process. Collaborate with developers and product owners to reproduce and debug issues, ensuring proper resolution. Execute tests in CI/CD pipelines and maintain automation scripts in branching workflows. Provide clear and concise QA status updates using metrics and dashboards for each release. Document test results, defects, and maintain accurate test records. Suggest functional and UX improvements based on user flows and business context. Required Qualifications 8+ years of experience in software quality assurance, with a solid mix of manual and automation testing. Proficiency in automated testing frameworks using Selenium. Experience working with version control and CI tools (e.g., Git, Jenkins, GitHub Actions). Ability to analyze business flows, ask the right questions, and identify edge cases. Strong experience in functional, regression, and integration testing. Clear understanding of test planning, test case design, and bug lifecycle management. Excellent written and verbal communication skills, with an ability to convey quality issues clearly and constructively. Strong attention to detail and commitment to product quality.
Posted 2 weeks ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Principal Consultant - Python Full Stack Developers We are looking for a talented and motivated Python Developer to join our development team. The ideal candidate will be responsible for building high-quality applications, writing clean and efficient code, and collaborating with cross-functional teams to develop innovative software solutions. Responsibilities . Proficiency with server-side languages such as Python, GoLang, and web frameworks. . Experience with database technology such as MongoDB, Redis and PostgreSQL . Proficiency with fundamental front-end languages such as HTML, CSS, and Python . Proficiency with Django framework. . Familiarity with JavaScript frameworks such as Angular JS, React . Proficiency with Container technologies like Docker . Experience in containerizing Django application with databases (MongoDB and PostgreSQL) . Exposure to CI/CD frameworks is an added advantage . Familiarity with running and orchestrating Docker images with Kubernetes . Understanding of technical debt Qualifications we seek in you! Minimum Qualifications . Bachelor's degree Preferred Qualifications/ Skills Experience with Docker, Kubernetes, or cloud services (AWS, GCP, Azure). Familiarity with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI. Exposure to data processing libraries (e.g., Pandas, NumPy). Understanding of asynchronous programming (asyncio, Celery). Experience with unit testing frameworks (pytest, unittest). Interest or experience in machine learning or data engineering. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
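Purely as a hedged sketch tied to the asynchronous-programming item above, here is a minimal Celery task definition. The broker URL and task body are assumptions for illustration, not details of the role.

```python
# Sketch: minimal Celery app and task. Broker URL and task logic are assumptions.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def normalize_record(record: dict) -> dict:
    """Lowercase keys and strip string values before persisting."""
    return {k.lower(): (v.strip() if isinstance(v, str) else v) for k, v in record.items()}

# Usage (with a worker running): normalize_record.delay({"Name": " Ada "})
```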
Posted 2 weeks ago
2.0 - 4.0 years
3 - 5 Lacs
Bengaluru
Work from Office
Job Skills:
Strong proficiency in React.js and TypeScript.
Experience integrating frontend applications with JavaScript SDKs.
Knowledge of React Query to enhance SDK-based data fetching and state synchronization.
Understanding of state management using Zustand or local component state.
Proficiency in building real-time data dashboards and network security visualizations.
Familiarity with performance optimization techniques (React Profiler, lazy loading, memoization).
Knowledge of frontend security best practices.
Ability to work with Git, GitHub Actions, and CI/CD.
Role:
We are looking for a UI Engineer to build and optimize the frontend for our API-driven cybersecurity platform. You will work with a JavaScript SDK to consume APIs, ensuring seamless integration, real-time security dashboards, and user-friendly interfaces. We prioritize simplicity, speed, and efficiency, leveraging React.js, out-of-the-box UI solutions like Material-UI, and modern visualization tools to keep development agile and competitive. Since our backend services are exposed via an SDK, you will focus entirely on UI development, integrating the SDK efficiently and ensuring optimal frontend performance.
Responsibilities:
Develop and maintain React.js applications consuming APIs via a JavaScript SDK.
Build data-heavy security dashboards with interactive visualizations (Recharts or ECharts).
Use React Query to enhance SDK-based data fetching where needed.
Implement state management using Zustand or local component state when necessary.
Optimize UI for performance, accessibility, and responsiveness.
Develop WebSockets or SDK event listeners for real-time updates.
Write unit and integration tests using Jest, Cypress, and React Testing Library.
Collaborate with backend engineers to improve SDK usability and frontend integration.
Design, develop, and test APIs for the UI using necessary technologies, including but not limited to GraphQL, Node.js, and/or Java Spring.
Participate in code reviews, security audits, and UI performance testing.
Preferred Qualifications:
Experience in cybersecurity, network security, or data visualization.
Prior work on real-time data dashboards or security monitoring tools.
Familiarity with UI/UX design tools (Figma or Zeplin).
Knowledge of Storybook for UI documentation and component testing.
Understanding of WebSockets or real-time event-driven UI updates.
Posted 2 weeks ago
6.0 - 9.0 years
18 - 20 Lacs
Pune
Work from Office
Notice Period: Immediate Joiners Only
Duration: 6 Months (Possible Extension)
Shift Timing: 11:30 AM to 9:30 PM IST
About the Role
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.
Key Responsibilities:
Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
Configure and maintain observability tools (logs, metrics, traces, alerts).
Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.
Required Skills & Experience:
5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
Strong coding/scripting skills and the ability to review production-grade backend code.
Hands-on experience with Kubernetes in production, preferably on GKE.
Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management).
Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
Strong communication and documentation skills; capable of driving improvements through PRs and design reviews.
Tech Stack & Tools
Cloud & Orchestration: GKE, Kubernetes
IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
Monitoring & Alerting: Datadog, PagerDuty
Databases & Networking: CloudSQL, Cloudflare
Security & Access Control: Secret Management, IAM
Driving Results:
A good individual contributor and a good team player.
Flexible attitude towards work, as per the needs.
Proactively identify and communicate issues and risks.
Other Personal Characteristics:
Dynamic, engaging, self-reliant developer.
Ability to deal with ambiguity.
Maintain a collaborative and analytical approach.
Self-confident and humble.
Open to continuous learning.
Intelligent, rigorous thinker who can operate successfully amongst bright people.
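Purely as an illustration of the post-deploy verification that typically backs canary or blue-green rollouts in a role like this, here is a small Python smoke-check sketch; the endpoint, latency threshold, and retry counts are assumptions, not details from the posting.

```python
# Hypothetical post-rollout smoke check: poll a service health endpoint and exit non-zero
# so a CI/CD or GitOps step can trigger a rollback. Endpoint and limits are placeholders.
import sys
import time

import requests

HEALTH_URL = "https://canary.example.internal/healthz"  # assumed endpoint
MAX_ATTEMPTS = 10
MAX_LATENCY_SECONDS = 0.5


def check_once() -> bool:
    start = time.monotonic()
    try:
        resp = requests.get(HEALTH_URL, timeout=2)
    except requests.RequestException:
        return False
    latency = time.monotonic() - start
    return resp.status_code == 200 and latency <= MAX_LATENCY_SECONDS


def main() -> int:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if check_once():
            print(f"healthy after {attempt} attempt(s)")
            return 0
        time.sleep(3)
    print("canary unhealthy; signal rollback", file=sys.stderr)
    return 1


if __name__ == "__main__":
    sys.exit(main())
```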
Posted 2 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will leverage domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans for the software development teams. This role involves working closely with business collaborators, data engineers, and AI/ML engineers to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence, and to identify test cases and scenarios that ensure the quality and performance of IT systems. You will collaborate with the Product Owner and developers to maintain an efficient and consistent process, ensuring quality deliverables from the team.
Roles & Responsibilities:
Collaborate with System Architects and Product Owners to manage business analysis activities, ensuring alignment with engineering and product goals.
Monitor, troubleshoot, and resolve issues related to case intake and case processing across multiple systems.
Work with Product Owners and customers to define scope and value for new developments.
Stay focused on software development to ensure it meets requirements, providing proactive feedback to collaborators.
Design, implement, and maintain automated CI/CD pipelines for seamless software integration and deployment.
Collaborate with developers to enhance application reliability and scalability.
Troubleshoot deployment and infrastructure issues, ensuring high availability.
Collaborate with business subject matter experts, testing teams, and Product Management to prioritize release scopes and groom the product backlog.
Maintain and ensure the quality of documented user stories/requirements in tools like Jira.
Basic Qualifications:
Master's degree and 1 to 3 years of Life Science/Biotechnology/Pharmacology/Information Systems experience, OR
Bachelor's degree and 3 to 5 years of Life Science/Biotechnology/Pharmacology/Information Systems experience, OR
Diploma and 7 to 9 years of Life Science/Biotechnology/Pharmacology/Information Systems experience.
Preferred Qualifications:
Functional Skills: Must-Have
Experienced in MuleSoft, Java, J2EE, and database programming.
Demonstrated expertise in monitoring, troubleshooting, and resolving data and system issues.
Proficiency in CI/CD tools (Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps).
Hands-on experience with the ITIL framework and methodologies like Scrum.
Knowledge of the SDLC process, including requirements, design, testing, data analysis, and change control.
Functional Skills: Good to Have
Experience in managing GxP systems and implementing GxP projects.
Knowledge of Artificial Intelligence (AI), Robotic Process Automation (RPA), Machine Learning (ML), Natural Language Processing (NLP), and Natural Language Generation (NLG) automation technologies, including building business requirements for them.
Knowledge of cloud technologies such as AWS.
Excellent communication skills and the ability to communicate with Product Managers and business collaborators to define scope and value for new developments.
Experience with DevOps, Continuous Integration, and Continuous Delivery methodology, and CRM systems.
Soft Skills:
Excellent analytical and troubleshooting skills.
Able to work under minimal supervision.
Strong verbal and written communication skills.
High degree of initiative and self-motivation.
Team-oriented, with a focus on achieving team goals.
Ability to manage multiple priorities successfully.
Ability to deal with ambiguity and think on their feet.
Shift Information:
This position may require you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Posted 2 weeks ago
7.0 - 12.0 years
18 - 30 Lacs
South Goa, Pune
Hybrid
We are looking for a DevOps leader with deep experience in the Python build/CI/CD ecosystem for an exciting and cutting-edge stealth startup in Silicon Valley.
Responsibilities:
Design and implement complex CI/CD pipelines in Python, leveraging cutting-edge Python packaging, dependency management, and CI/CD practices.
Optimize the speed and reliability of builds.
Define test automation tools, architecture, and integration with CI/CD platforms, and drive test automation implementation in Python.
Implement configuration management to set standards and best practices.
Manage and optimize cloud infrastructure resources: GCP, AWS, or Azure.
Collaborate with development teams to understand application requirements and optimize deployment processes.
Work closely with operations teams to ensure a smooth transition of applications into production.
Develop and maintain documentation for system configurations, processes, and procedures.
Eligibility:
5-12 years of experience in DevOps, with a minimum of 2-5 years of experience in the Python build ecosystem.
Python packaging, distribution, concurrent builds, dependencies, environments, test framework integrations, linting: pip, poetry, uv, flint.
CI/CD: pylint, coverage.py, cProfile, Python scripting, Docker, Kubernetes, IaC (Terraform, Ansible, Puppet, Helm).
Platforms: TeamCity (preferred), Jenkins, GitHub Actions, CircleCI, or TravisCI.
Test Automation: pytest, unittest, integration tests, plyright (preferred).
Cloud platforms: AWS, Azure, or GCP, plus platform-specific CI/CD services and tools.
Familiarity with logging and monitoring tools (e.g., Prometheus, Grafana).
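As a small, hedged example of the pytest-based test automation a role like this would wire into CI, the sketch below shows a parametrized unit test; the function under test and its behavior are invented for illustration, and in a real pipeline the suite would typically run under coverage (for example via pytest-cov).

```python
# Hypothetical unit under test plus a parametrized pytest case.
import pytest


def normalize_version(tag: str) -> str:
    # Strip a leading "v" so "v1.2.3" and "1.2.3" compare equal in build tooling.
    return tag[1:] if tag.startswith("v") else tag


@pytest.mark.parametrize(
    "tag,expected",
    [("v1.2.3", "1.2.3"), ("1.2.3", "1.2.3"), ("v0.1", "0.1")],
)
def test_normalize_version(tag, expected):
    assert normalize_version(tag) == expected
```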
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
About the Role:
Grade Level (for internal use): 11
S&P Global Mobility
The Role: Senior Data Engineer (AWS Cloud, Python)
We are seeking a Senior Data Engineer with deep expertise in AWS Cloud Development to join our fast-paced data engineering organization. This role is critical to both the development of new data products and the modernization of existing platforms. The ideal candidate is a seasoned data engineer with hands-on experience designing, building, and optimizing large-scale data pipelines and architectures in both on-premises (e.g., Oracle) and cloud environments (especially AWS). This individual will also serve as a Cloud Development expert, mentoring and guiding other data engineers as they enhance their cloud skillsets.
Responsibilities
Data Engineering & Architecture
Design, build, and maintain scalable data pipelines and data products.
Develop and optimize ELT/ETL processes using a variety of data tools and technologies.
Support and evolve data models that drive operational and analytical workloads.
Modernize legacy Oracle-based systems and migrate workloads to cloud-native platforms.
Cloud Development & DevOps (AWS-Focused)
Build, deploy, and manage cloud-native data solutions using AWS services (e.g., S3, Lambda, Glue, EMR, Redshift, Athena, Step Functions).
Implement CI/CD pipelines, IaC (e.g., Terraform or CloudFormation), and monitor cloud infrastructure for performance and cost optimization.
Ensure data platform security, scalability, and resilience in the AWS cloud.
Technical Leadership & Mentoring
Act as a subject matter expert on cloud-based data development and DevOps best practices.
Mentor data engineers on AWS architecture, infrastructure as code, and cloud-first design patterns.
Participate in code and architecture reviews, enforcing best practices and high-quality standards.
Cross-functional Collaboration
Work closely with product managers, data analysts, software engineers, and other stakeholders to understand business needs and deliver end-to-end solutions.
Support and evolve the roadmap for data platform modernization and new product delivery.
What We're Looking For:
Required Qualifications
7+ years of experience in data engineering or an equivalent technical role.
5+ years of hands-on experience with AWS Cloud Development and DevOps.
Strong expertise in SQL, data modeling, and ETL/ELT pipelines.
Deep experience with Oracle (PL/SQL, performance tuning, data extraction).
Proficiency in Python and/or Scala for data processing tasks.
Strong knowledge of cloud infrastructure (networking, security, cost optimization).
Experience with infrastructure as code (Terraform).
Familiarity with CI/CD pipelines and DevOps tooling (e.g., Jenkins, GitHub Actions).
Preferred (Nice to Have)
Experience with Google Cloud Platform (GCP), Snowflake.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Experience with modern orchestration tools (e.g., Airflow, dbt).
Exposure to data cataloging, governance, and quality tools.
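To give a flavor of the AWS-side pipeline code this kind of role involves, here is a minimal boto3 sketch that lists newly landed CSV objects under an S3 prefix for a downstream ETL step; the bucket and prefix names are hypothetical and not part of the posting.

```python
# Hypothetical extract step: enumerate CSV files in an S3 prefix so a downstream job
# (Glue, Lambda, or a container task) can process them. Names are placeholders.
import boto3


def list_new_csv_keys(bucket: str, prefix: str) -> list[str]:
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".csv"):
                keys.append(obj["Key"])
    return keys


if __name__ == "__main__":
    # Example invocation with assumed bucket/prefix values.
    print(list_new_csv_keys("example-data-lake", "landing/orders/"))
```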
Posted 2 weeks ago
2.0 - 4.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Role Overview:
We are looking for a passionate and driven Software Engineer with 2-4 years of experience in Python and Java. The ideal candidate will have hands-on experience in containerization, cloud platforms (AWS or GCP), and microservices architecture, and be proficient at debugging. You will work as part of an agile development team to build and maintain scalable, high-performance systems. Strong team collaboration skills are crucial to success in this role.
Key Responsibilities:
Develop and maintain robust backend services and applications using Python and Java.
Work with microservices architecture to design, implement, and deploy scalable solutions.
Use containerization with Docker and work with Kubernetes for orchestration and deployment.
Work hands-on with AWS or Google Cloud Platform (GCP), utilizing cloud-native services and resources.
Troubleshoot, debug, and optimize application code and systems for performance and reliability.
Write clean, maintainable, and efficient code, following industry best practices.
Required Qualifications:
2-4 years of professional experience in Python and Java development.
Familiarity with containerization technologies (e.g., Docker) and orchestration tools like Kubernetes.
Experience with deploying and managing applications on AWS or Google Cloud Platform (GCP).
Understanding of microservices architecture and how to build and maintain distributed systems.
Strong debugging skills and the ability to solve complex technical issues in large systems.
Experience with version control tools (e.g., Git) and CI/CD tools (e.g., Jenkins, GitLab CI, GitHub Actions).
Knowledge of RESTful APIs and how to build scalable backend services.
Strong communication skills and ability to collaborate in a team environment.
Ability to adapt to changing requirements and contribute in an Agile, fast-paced development environment.
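For illustration, a minimal Flask sketch of the kind of REST-style backend service described here; the routes and payloads are invented, and in practice such a service would typically be containerized with Docker and deployed behind Kubernetes.

```python
# Hypothetical microservice endpoints; routes and data are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/v1/health")
def health():
    # Simple liveness endpoint for container orchestration probes.
    return jsonify(status="ok")


@app.route("/api/v1/items/<int:item_id>")
def get_item(item_id: int):
    # A real service would fetch from a database or another microservice here.
    return jsonify(id=item_id, name=f"item-{item_id}")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```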
Posted 3 weeks ago