Jobs
Interviews

64 Deployment Automation Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 1.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Roles and Responsibilities
Design, develop, test, and deploy AI-powered solutions using technologies such as AI/ML, deep learning, and machine learning. Collaborate with cross-functional teams to identify business problems and implement automation solutions using tools like Blue Prism, UiPath, or Automation Anywhere. Develop data analysis skills to extract insights from large datasets and provide recommendations for process improvements. Troubleshoot issues related to deployment automation and ensure seamless integration with existing systems.

Desired Candidate Profile
0-1 year of experience in Artificial Intelligence Engineering or a related field. Strong understanding of AI, machine learning, and deep learning algorithms and concepts. Proficiency in programming languages such as Python or Java; knowledge of C++ is an added advantage.

Posted 1 day ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Oracle Health & Analytics is a rapidly growing organization that leverages Oracle's cloud technologies to modernize and automate healthcare. Our mission is to improve the quality of life by delivering better, more secure experiences and easier access to health and research data for patients and providers. As a new line of business, we foster a creative, entrepreneurial environment unencumbered by legacy systems and value expertise that helps us create a world-class engineering center focused on excellence.

Required Qualifications
BS or MS in Computer Science or equivalent domain experience. 4-6 years of relevant SRE or cloud engineering experience, operating independently on senior projects. Experience deploying and managing large-scale, customer-facing web services in a public cloud infrastructure (e.g., OCI, AWS, Azure). Expertise in automated deployment and configuration management tools (Terraform, Kubernetes, Ansible, etc.). Hands-on experience with CI/CD for data workflows, DataOps orchestration, and automated data pipeline management. Familiarity with observability tools and methodologies: monitoring, alerting, logging, and performance tuning. Proficient with scripting and programming languages (Python, Bash, etc.) for automation and system integration. Track record of incident management/troubleshooting and root cause analysis in distributed systems. Strong written and verbal communication skills, able to clearly present complex technical information to diverse audiences. US citizenship and eligibility for federal security clearance (if applicable).

Preferred Qualifications
Knowledge of healthcare data management, compliance, and governance. Experience with data migration, modernization, and control plane architecture.

As a Site Reliability Engineer, you will play a critical role in building and operating the control plane for Oracle Health's modern cloud-based SI platform, with an emphasis on Observability & Scaling.
You will design, implement, and automate processes and systems that ensure mission-critical data workflows are secure, reliable, resilient, and highly available. This role presents an opportunity to solve complex problems involving large-scale distributed systems, data pipeline management, and automation, all in a highly collaborative, agile environment.

Key Responsibilities
Design, implement, and operate the control plane that ensures observability and scaling for data-centric services. Lead efforts in automated data pipeline management, including CI/CD for data workflows, data migration, and modernization. Develop and maintain robust monitoring, alerting, and observability tooling to ensure system performance, reliability, and rapid incident response. Partner with development teams to implement improvements in service architecture, focusing on automation, self-healing, and real-time monitoring. Build and operate DataOps automation and orchestration platforms, including onboarding and bootstrapping automation for new services and tenants. Participate in incident management, troubleshooting, and root cause analysis for issues impacting data pipelines, access, or system availability. Support data access control and governance by designing solutions that meet strict security and compliance requirements. Define and improve KPIs, SLOs, and metrics for data platforms and services. Contribute to technology strategy, including data modernization, automation frameworks, and integration of new technologies. Collaborate in cross-functional teams and communicate complex technical concepts to stakeholders in clear, concise ways.
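The SLO and error-budget work this posting describes can be illustrated with a small sketch. The function name and the "three nines" target are illustrative, not from the posting:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Return the fraction of the error budget left for a service.

    slo_target: availability objective, e.g. 0.999 for "three nines".
    """
    if total_requests == 0:
        return 1.0
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
# 250 failures leaves about 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

In practice the counts would come from the monitoring stack (Prometheus-style counters); the arithmetic is the same.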

Posted 3 days ago

Apply

3.0 - 5.0 years

7 - 11 Lacs

Chennai

Work from Office

Primary skills: Jenkins, Chef, SaltStack, Habitat, Ansible, Artifactory, Groovy scripts, Bitbucket, Git, Maven, XLR, Apmate, Docker, Kubernetes. Secondary skills: Java/JEE.

Job Description
Extensive experience with infrastructure automation tools including Ansible, Chef, SaltStack, and Habitat. Extensive experience building cookbooks/templates/plans to automate provisioning of physical and virtual hosts, as well as operational run activities. Strong background in and understanding of PaaS and container orchestration systems such as Cloud Foundry and Kubernetes. Good experience with deployment automation concepts and tooling, including Jenkins, Artifactory, Groovy scripts, Bitbucket, Git, Maven, XLR, Apmate, Dev Cloud setup, and advanced branching strategies. Proven track record of leading and executing enterprise-level infrastructure automation initiatives with demonstrated business results. Mandatory Skills: DevOps. Experience: 3-5 years.
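The cookbook/template style of provisioning mentioned above comes down to rendering per-host configuration from a shared template plus per-node attributes. A minimal stand-in using only the standard library (the template text and attribute names are invented for illustration):

```python
from string import Template

# A Chef-style template: placeholders are filled from per-node attributes.
nginx_template = Template(
    "server_name $fqdn;\n"
    "worker_processes $workers;\n"
)

nodes = [
    {"fqdn": "web1.example.com", "workers": 4},
    {"fqdn": "web2.example.com", "workers": 8},
]

# Render one config per node, keyed by hostname.
rendered = {n["fqdn"]: nginx_template.substitute(n) for n in nodes}
print(rendered["web1.example.com"])
```

Real tools add dependency ordering, idempotent apply, and convergence on top, but the template-plus-attributes core is the same.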

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced AWS Cloud Architect at our IT Services company based in Chennai, Tamil Nadu, India, you will be responsible for designing and implementing AWS architectures for complex, enterprise-level applications. You will be involved in the deployment, automation, management, and maintenance of AWS cloud-based production systems to ensure smooth operation of applications. Your role will also include configuring and fine-tuning cloud infrastructure systems, deploying and configuring AWS services according to best practices, and monitoring application performance to optimize AWS services for improved efficiency and reduced costs. In this position, you will implement auto-scaling and load balancing mechanisms to handle varying workloads and ensure the availability, performance, security, and scalability of AWS production systems. You will manage the creation, release, and configuration of production systems, as well as build and set up new development tools and infrastructure. Troubleshooting system issues and resolving problems across various application domains and platforms will also be part of your responsibilities. Additionally, you will maintain reports and logs for AWS infrastructure, implement and maintain security policies using AWS security tools and best practices, and monitor AWS infrastructure for security vulnerabilities to address them promptly. Ensuring data integrity and privacy by implementing encryption and access controls will be crucial, along with developing Terraform scripts for automating infrastructure provisioning and setting up automated CI/CD pipelines using Kubernetes, Helm, Docker, and CircleCI. Your role will involve providing backup and long-term storage solutions for the infrastructure, setting up monitoring and log aggregation dashboards, and alerts for AWS infrastructure. 
You will work towards maintaining application reliability and uptime throughout the application lifecycle, identifying technical problems, and developing software updates and fixes. Leveraging best practices and cloud security solutions, you will provision critical system security and provide recommendations for architecture and process improvements. Moreover, you will define and deploy systems for metrics, logging, and monitoring on the AWS platform, design, maintain, and manage tools for automating different operational processes, and collaborate with a team to ensure the smooth operation of AWS cloud solutions. Your qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience), along with relevant AWS certifications such as AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Proficiency in Infrastructure as Code tools, cloud security principles, AWS services, scripting skills, monitoring, and logging tools for AWS, as well as problem-solving abilities and excellent communication skills will be essential for success in this role.
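The auto-scaling behaviour described above can be sketched as a simple target-tracking rule: size the group so average CPU moves toward a target, clamped to group bounds. The function, thresholds, and defaults are illustrative, not AWS API calls:

```python
import math

def desired_capacity(current, avg_cpu, target_cpu=50.0, min_size=2, max_size=20):
    """Target-tracking style scaling: choose a group size that would bring
    average CPU toward target_cpu, clamped to [min_size, max_size]."""
    if avg_cpu <= 0:
        return min_size
    # If CPU is double the target, roughly double the fleet (rounded up).
    desired = math.ceil(current * avg_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

print(desired_capacity(4, 80))   # scale out: 7
print(desired_capacity(4, 20))   # scale in: 2
```

AWS target-tracking policies apply essentially this proportion, with cooldowns and warm-up periods layered on.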

Posted 1 week ago

Apply

3.0 - 12.0 years

0 Lacs

Karnataka

On-site

As an engineering manager at Ingka Group, you will play a crucial role in transforming the digital landscape to support the organization efficiently and inspiringly. With more than 180,000 co-workers worldwide, we aim to revolutionize the way our teams connect to enhance customer interactions and keep IKEA at the forefront in a fast-paced environment. You will drive the development, provision, and operation of digital products and services using cutting-edge technology and agile delivery methods to ensure rapid delivery. Additionally, you will foster a culture of continuous learning and growth in digital skills to maintain and enhance our digital capabilities. We are looking for a resilient and empathetic individual who can lead the transformation of our current landscape into the next evolution of IKEA. Your ability to build teams with a DevOps mindset, create scalable enterprise technology, and prioritize customer outcomes will be paramount. It is essential that you embody IKEA's values of togetherness, simplicity, and leading by example, while possessing the drive to challenge conventions and foster enthusiasm and innovation within your team. With a minimum of three years of experience as an engineering manager, a background in Computer Science or a related field, and expertise in software development and technology architecture, you will be well-equipped to succeed in this role. Your responsibilities will include implementing best practices such as version control, deployment automation, continuous integration, and test automation to drive the development of a modern technology stack. You will have the opportunity to lead technical decisions and build products that enable us to meet customer demands effectively and provide reliable and secure services. As part of a dynamic and diverse team at Ingka Group, you will collaborate with professionals from various backgrounds to create a better everyday life for our customers. 
In this role, you will report to the Product Engineering Manager of the Order Management sub-domain and be based in Bangalore, India. If you are passionate about technology, innovation, and leading talented teams, we invite you to apply and join us in shaping the future of IKEA.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chandigarh

On-site

We are seeking a Cloud Transition Engineer to implement cloud infrastructure and services as per approved architecture designs, ensuring a smooth transition of these services into operational support. You will play a crucial role in bridging the gap between design and operations, ensuring the efficient, secure, and fully supported delivery of new or modified services. Collaborating closely with cross-functional teams, you will validate infrastructure builds, coordinate deployment activities, and ensure all technical and operational requirements are met. This position is vital in maintaining service continuity and enabling scalable, cloud-first solutions across the organization. Your responsibilities in this role will include implementing Azure infrastructure and services based on architectural specifications, building, configuring, and validating cloud environments to meet project and operational needs, collaborating with various teams to ensure smooth service transitions, creating and maintaining user documentation, conducting service readiness assessments, facilitating knowledge transfer and training for support teams, identifying and mitigating risks related to service implementation and transition, ensuring compliance with internal standards, security policies, and governance frameworks, supporting automation and deployment using tools like ARM templates, Bicep, or Terraform, and participating in post-transition reviews and continuous improvement efforts. 
To be successful in this role, you should possess a Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent experience), proven experience in IT infrastructure or cloud engineering roles with a focus on Microsoft Azure, demonstrated experience in implementing and transitioning cloud-based solutions in enterprise environments, proficiency in infrastructure-as-code tools such as ARM templates, Bicep, or Terraform, hands-on experience with CI/CD pipelines and deployment automation, and a proven track record of working independently on complex tasks while effectively collaborating with cross-functional teams. Preferred qualifications that set you apart include strong documentation, troubleshooting, and communication skills, Microsoft Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect), and experience mentoring junior engineers or leading technical workstreams. At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives because we believe that great ideas come from great teams. Our commitment to ongoing career development and cultivating an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams working together are key to driving growth and delivering business results.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

12 - 20 Lacs

Pune

Hybrid

Database Developer
Company: Kiya.ai
Work Location: Pune
Work Mode: Hybrid

JD: Strong knowledge of and hands-on development experience in Oracle PL/SQL, in particular SQL analytic functions. Experience developing complex, numerically intense business logic. Good knowledge of and experience in database performance tuning. Fluency in UNIX scripting.

Good-to-have: Knowledge of/experience in any of Python, Hadoop/Hive/Impala, horizontally scalable databases, columnar databases. Oracle certifications. Any of the DevOps tools/techniques: CI/CD, Jenkins/GitLab, source control/Git, deployment automation such as Liquibase. Experience with production issues/deployments.

**Interested candidates, drop your resume to saarumathi.r@kiya.ai**
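The analytic (window) functions called out above compute aggregates over partitions without collapsing rows; Oracle's `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` works the same way as this small SQLite illustration (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, amount INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("A", 100), ("A", 50), ("B", 200)])

# Running total per account: an aggregate over an ordered partition,
# while every input row is preserved in the output.
rows = conn.execute("""
    SELECT account, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY amount) AS running
    FROM trades ORDER BY account, amount
""").fetchall()
print(rows)
```

The same query shape in Oracle gives per-account running totals without a self-join or GROUP BY, which is why analytic functions matter for numerically intense logic.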

Posted 3 weeks ago

Apply

7.0 - 12.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Role Overview: We are hiring a Sr DevOps Engineer who will improve and maintain software development, test, and live infrastructure and services.

About the Role: Support development teams on various dev tools like Jira, Confluence, GitHub, SVN, Artifactory, Jenkins, SonarQube, etc. Follow DevOps practices, automate infrastructure activities, and document standards and procedures. Drive existing automation frameworks forward to benefit automation across multiple products and, if needed, create new frameworks. Automate software development processes, including build, test, and deployment. Code using Python and AWS CloudFormation. Ability to learn new tools and quickly become a subject matter expert. Additional duties as assigned.

About You: 7+ years of experience as a Developer or Tools Engineer. Strong systems analysis and programming skills. Extensive scripting experience with Python. Proficiency in Linux. Proficiency with Jenkins Pipeline, Groovy, GitHub, SVN, Artifactory. Experience with continuous integration and deployment automation tools such as Salt, Puppet, Chef, Ansible. Understanding of the principles of CI and CD. Experience integrating code quality analysis tools like SonarQube. Experience in IaC (CloudFormation, Terraform). Able to migrate development environments from one platform to another (SVN to GitHub).
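Python automation of the kind described (polling Jenkins, Artifactory, and similar services) usually needs retry-with-backoff around flaky network calls; a generic sketch (the decorator, its parameters, and the fake build call are illustrative):

```python
import time

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky call, doubling the delay after each failure."""
    def wrap(fn):
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** i)
        return inner
    return wrap

calls = []

@retry(attempts=3)
def flaky_build_status():
    # Stand-in for an HTTP call to a CI server that fails twice, then works.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("build server not ready")
    return "SUCCESS"

print(flaky_build_status())
```

In real tooling the decorated function would wrap the Jenkins or Artifactory REST call, and the delays would be seconds rather than milliseconds.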

Posted 3 weeks ago

Apply

4.0 - 10.0 years

20 - 45 Lacs

Hyderabad, Telangana, India

On-site

Total Exp - 4+ yrs
Relevant Exp in Pega CDH - 4 yrs
Location - Hyderabad, Bangalore
Notice Period - Immediate Joiners
Mandatory Certification - CDH, CSSA

Description We are urgently hiring for the position of Pega CDH Developer in India. The ideal candidate will have 4-10 years of experience in developing and implementing Pega solutions, specifically focusing on Customer Decision Hub. You will work closely with business teams to deliver high-quality applications that drive decision-making processes. Responsibilities Design and implement Pega Customer Decision Hub (CDH) solutions to meet business requirements. Collaborate with business analysts and stakeholders to gather and analyze requirements. Develop Pega applications, including workflows, rules, and data models. Perform testing and debugging of Pega applications to ensure quality and performance. Provide support and maintenance for existing Pega applications. Stay updated with Pega best practices and new features to enhance application performance. Skills and Qualifications 4-10 years of experience in Pega development with a focus on Customer Decision Hub (CDH). Strong understanding of the Pega platform and its features. Experience with Pega 8.x or higher is preferred. Proficiency in Pega rules, workflows, and integrations. Knowledge of data modeling and decisioning capabilities in Pega. Familiarity with Java and SQL for backend development. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

As a ServiceMax Release Manager at Johnson Controls, you will be responsible for managing and overseeing the end-to-end release management process for ServiceMax and its integrations with Salesforce. Your role will involve collaborating with various technical and business teams to plan, schedule, and execute releases while ensuring smooth, controlled, and timely deployments of new ServiceMax features, updates, and patches. Your main responsibilities will include leading the coordination and execution of ServiceMax releases, ensuring stakeholder alignment, developing detailed release schedules and documentation, coordinating with development, QA, and business teams to define release scope and objectives, managing release dependencies, overseeing the deployment process, tracking and reporting on release status, and addressing any issues promptly. To excel in this role, you must ensure that all releases follow proper change management procedures, identify and mitigate risks associated with each release, provide regular status updates to stakeholders, implement best practices to improve efficiency, stay updated with new ServiceMax features, facilitate post-release reviews, and foster collaboration among cross-functional teams. The ideal candidate will possess a Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field, along with 5+ years of experience in release management, with at least 2 years focused on ServiceMax and/or Salesforce-based solutions. Strong experience in coordinating and managing the release lifecycle in a complex environment, an excellent understanding of ServiceMax functionality, and proficiency in release management tools, deployment automation tools, Agile/Scrum methodologies, and SDLC processes are required.
Preferred qualifications include Salesforce certifications, experience with version control systems, familiarity with ITIL or other IT service management frameworks, and prior experience in managing releases in regulated industries. At Johnson Controls, we are dedicated to shaping a safer, more comfortable, and sustainable world by providing innovative solutions that make cities more connected, buildings more intelligent, and vehicles more efficient. We are looking for individuals who are passionate about creating a better future through bold ideas, entrepreneurial thinking, and collaboration. Join us on this journey to improve the way the world lives, works, and plays. Your career should be focused on tomorrow because tomorrow needs you.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Oracle Cloud Migration Engineer - OCI, you will be responsible for executing the migration of VUX systems from on-premises infrastructure to Oracle Cloud Infrastructure (OCI). Your key tasks will include utilizing Terraform for Infrastructure as Code (IaC) to automate provisioning and manage OCI resources, as well as applying Ansible for configuration management and deployment automation throughout the migration process. It will be crucial for you to ensure that migration processes are not only efficient and secure but also aim to minimize downtime. Collaboration with cross-functional teams will be essential as you work together to plan, design, and implement effective migration strategies. As issues may arise during both the migration and post-migration phases, your expertise in troubleshooting will be vital in ensuring a smooth transition. Additionally, you will be responsible for creating and maintaining technical documentation related to all migration activities. This role offers the flexibility to work either from the Hyderabad office or remotely, depending on project requirements.

Posted 4 weeks ago

Apply

5.0 - 8.0 years

3 - 15 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: This position is responsible for managing the release and deployment lifecycle of mission-critical global applications. The role involves supporting production systems, improving continuous integration and deployment processes, collaborating with cross-functional teams, and administering source control and automation tools.

Key Responsibilities: Manage release cycles and deployment processes across environments. Troubleshoot production issues and provide break-fix support. Improve CI/CD processes to streamline deployments. Ensure high availability and performance of 24/7 global applications and services. Collaborate with software engineering, QA, infrastructure, and support teams to meet environment needs. Work closely with enterprise architecture and engineering leadership to optimize system architecture. Administer source control systems and manage branching/merging strategies. Maintain and enhance build automation tools. Optimize and manage build/deployment pipelines. Document development and release processes using Atlassian tools.
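Release numbering in a pipeline like this is often automated rather than hand-edited; a minimal semantic-version bump helper (the function name and the MAJOR.MINOR.PATCH convention are the standard SemVer idea, shown here as an illustrative sketch):

```python
def bump(version, part):
    """Bump one component of a MAJOR.MINOR.PATCH version string,
    resetting the lower-order components to zero."""
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("2.7.1", "minor"))  # 2.8.0
```

A release job would typically call something like this, tag the commit with the result, and feed the tag into the build/deployment pipeline.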

Posted 4 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. You will integrate and extract data from various Guidewire modules (PolicyCenter, BillingCenter, ClaimCenter) to ensure data quality, integrity, and consistency. Building reusable components for data ingestion, transformation, and orchestration across Guidewire and Azure ecosystems will be a key part of your role. Your responsibilities will also include optimizing ADF pipelines for performance, scalability, and cost-efficiency while following industry-standard DevOps and CI/CD practices. Collaborating with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions will be crucial. You will conduct thorough unit testing, data validation, and error handling across all data transformation steps and participate in end-to-end data lifecycle management. Providing technical documentation, pipeline monitoring dashboards, and ensuring production readiness will be part of your responsibilities. You will support data migration projects involving legacy platforms to Azure cloud environments and follow Agile/Scrum practices, contributing to sprint planning, retrospectives, and stand-ups with strong ownership of deliverables. **Mandatory Skills:** - 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services. - Hands-on experience in building ADF pipelines integrating with Guidewire Insurance Suite. - Proficiency in data transformation using SQL, Stored Procedures, and Data Flows. - Experience working on Guidewire data models and understanding of PC/Billing/Claim schema and business entities. - Strong understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices. 
- Clear experience in integrating Guidewire systems with downstream reporting and analytics platforms. - Excellent debugging skills to resolve complex data transformation and pipeline performance issues. **Preferred Skills:** - Prior experience in the Insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter. - Familiarity with Power BI, Databricks, or Synapse Analytics. - Working knowledge of Git-based source control, CI/CD pipelines, and deployment automation. **Additional Requirements:** - Work Mode: 100% Onsite at Hyderabad office (No remote/hybrid flexibility). - Strong interpersonal and communication skills to work effectively with cross-functional teams and client stakeholders. - Self-starter mindset with a high sense of ownership, capable of thriving under pressure and tight deadlines.
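The data validation and error handling described above often starts with simple source-vs-target reconciliation after a pipeline run; a sketch using only the standard library (the `policy_id` key and the record layout are invented for illustration, not Guidewire schema):

```python
from collections import Counter

def reconcile(source_rows, target_rows, key="policy_id"):
    """Compare per-key row counts between a source extract and a target load."""
    src = Counter(r[key] for r in source_rows)
    tgt = Counter(r[key] for r in target_rows)
    missing = {k: n - tgt.get(k, 0) for k, n in src.items() if tgt.get(k, 0) < n}
    extra = {k: n - src.get(k, 0) for k, n in tgt.items() if src.get(k, 0) < n}
    return {"missing_in_target": missing, "extra_in_target": extra}

source = [{"policy_id": "P1"}, {"policy_id": "P1"}, {"policy_id": "P2"}]
target = [{"policy_id": "P1"}, {"policy_id": "P2"}, {"policy_id": "P3"}]
print(reconcile(source, target))
```

In an ADF pipeline the equivalent check would run as a validation activity comparing extract counts against loaded counts, alerting when either dictionary is non-empty.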

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As an experienced Python backend developer with over 4 years of professional experience, you will be responsible for designing, developing, and maintaining backend applications using FastAPI, Flask, and Django. Your key responsibilities will include building and managing RESTful APIs with a strong focus on scalability, performance, and security. Additionally, you will work with API gateways such as Kong, AWS API Gateway, and NGINX for routing, versioning, and access control. You will play a crucial role in contributing to the design and architecture of backend systems and application interfaces. Collaboration with cross-functional teams is essential to ensure the delivery of well-integrated solutions. Furthermore, you will be tasked with managing and optimizing relational and NoSQL databases like PostgreSQL, MySQL, and MongoDB, as well as implementing background task processing using tools like Celery. Ensuring quality through thorough testing (unit/integration) and code reviews will be a key part of your responsibilities. It is imperative that you follow best practices for version control, CI/CD, and deployment automation to maintain efficiency and consistency in the development process. To excel in this role, you must possess proficiency in FastAPI, Flask, and Django, along with hands-on experience in API gateways and managing API lifecycles. A good understanding of system design principles, application architecture, RESTful APIs, ORMs, and database design is crucial. Familiarity with Docker, Git, CI/CD pipelines, and cloud environments is highly desirable. Strong knowledge of security practices, including authentication and rate limiting, is essential. Your ability to write clean, efficient, and well-documented code will be instrumental in your success in this position. This is a full-time position based in Jp Nagar, Bengaluru, with a requirement for in-person work. The ideal candidate should be an immediate joiner with a proactive approach to their work. 
If you meet these requirements and are looking for a challenging opportunity in backend development, we would love to hear from you.

Application Questions:
- What is your expected CTC?
- What is your notice period?
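Rate limiting, listed above among the required security practices, is commonly implemented as a token bucket; a minimal in-memory sketch (in production this would usually live in the API gateway or a shared store such as Redis, not in process memory):

```python
import time

class TokenBucket:
    """Allow `capacity` requests per `period` seconds, refilled continuously."""

    def __init__(self, capacity, period, now=time.monotonic):
        self.capacity = capacity
        self.rate = capacity / period   # tokens added per second
        self.tokens = float(capacity)
        self.now = now
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, period=60)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

A middleware would keep one bucket per client (API key or IP) and return HTTP 429 when `allow()` is False.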

Posted 1 month ago

Apply

4.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a SAP Cloud & On-Premise Specialist, your primary responsibility will be to support SAP operations, deployments, and optimizations in both traditional data centers and cloud environments. You should have expertise in managing hybrid SAP landscapes and possess deep knowledge of cloud tools, networking, and platform integration. Your key responsibilities will include managing SAP infrastructure in both on-premises and cloud environments, providing operational support, upgrades, and issue resolution across hybrid setups, integrating on-premise systems securely with cloud services, and ensuring compliance with high availability/disaster recovery, performance, and security standards. To excel in this role, you must have in-depth experience in SAP Basis across S/4HANA and NetWeaver platforms, a solid understanding of both cloud and on-premise SAP hosting models, and hands-on expertise with GCP networking, security, IAM, and deployment automation. Preferred skills for this position include experience in integrating SAP with hybrid services like VPN and Direct Connect, as well as familiarity with monitoring and performance tools across different environments. In addition to technical skills, soft skills such as strong multi-environment coordination, efficient troubleshooting abilities, and analytical thinking to manage priorities in complex systems will be crucial for your success in this role. Joining this role offers you the opportunity to support resilient, hybrid SAP deployments for global clients, work at the forefront of on-premises and cloud infrastructure innovation, and play a significant role in hybrid cloud transformation programs. Apply now and leverage your expertise in S/4HANA, GCP security, cloud technologies, SAP, monitoring tools, Direct Connect, networking, IAM, VPN, NetWeaver, and other relevant skills to make a valuable contribution as a SAP Cloud & On-Premise Specialist in our dynamic team.

Posted 1 month ago

Apply

1.0 - 5.0 years

2 - 5 Lacs

Chennai

Work from Office

We are seeking a skilled DevOps Engineer to manage and optimize the cloud infrastructure, deployment pipelines, and operational tooling for Bytize. You will work closely with development, QA, and product teams to ensure rapid, secure, and scalable delivery of services.

Key Responsibilities
Design, implement, and maintain CI/CD pipelines using tools like Jenkins and GitHub Actions. Containerize microservices and manage deployments using Docker and Kubernetes (EKS/AKS). Manage cloud infrastructure (preferably AWS, Azure, or GCP) using Infrastructure as Code (IaC) tools such as Terraform or ARM templates. Ensure high availability, scalability, and monitoring using tools like Prometheus, Grafana, and the ELK stack. Implement and enforce DevSecOps practices: security scanning, vulnerability assessment, secrets management. Set up automated testing and deployment strategies across staging and production environments. Monitor and troubleshoot infrastructure and deployment issues proactively. Support disaster recovery planning and failover automation.

Required Skills & Qualifications
Bachelor's in Computer Science, Engineering, or a related field. Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions). Proficiency in Docker, Kubernetes, and container orchestration. Experience with at least one cloud provider (AWS, Azure, GCP). Expertise in IaC tools (Terraform, Ansible, Bicep, etc.). Familiarity with monitoring/logging tools: ELK, Prometheus/Grafana, CloudWatch. Experience with Git and branching strategies (GitFlow, trunk-based). Scripting skills (Bash, Python, or similar). Working knowledge of networking, security best practices, and performance tuning. Strong communication and collaboration skills.
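The rolling deployments Kubernetes performs for the staging/production strategies above are governed by surge and unavailability budgets; a sketch of the arithmetic, loosely mirroring the `maxSurge`/`maxUnavailable` idea (percentage rounding follows the Kubernetes convention: surge rounds up, unavailable rounds down):

```python
import math

def rollout_budget(replicas, max_surge_pct, max_unavailable_pct):
    """How many extra pods may be created, and how many may be down,
    at any moment during a rolling update of `replicas` pods."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# 10 replicas with 25%/25%: up to 3 extra pods, at most 2 down at once.
print(rollout_budget(10, 25, 25))  # (3, 2)
```

These two numbers bound how aggressively the controller can replace old pods while keeping the service within its availability budget.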

Posted 1 month ago

Apply

3.0 - 6.0 years

22 - 27 Lacs

Pune

Work from Office

We are growing and seeking a skilled DevOps Engineer to join our DevOps engineering team. You'll be responsible for building and maintaining scalable cloud infrastructure across cloud and bare-metal environments, automating deployment pipelines, and ensuring system reliability.

What You'll Do:
- Monitor and Optimize: Set up and maintain observability tools (logging, alerting, metrics) to detect and resolve performance bottlenecks
- Implement Scalability Solutions: Create programmatic scaling and load balancing strategies to support usage growth
- Develop Automation Systems: Write production-grade code for CI/CD pipelines, deployment automation, and infrastructure tooling to accelerate shipping
- Migrate services to Kubernetes; improve performance and security of the clusters
- Improve data and ML pipelines; work with EMR clusters

What You'll Need:
- Deep experience in infrastructure, DevOps, or platform engineering roles
- Deep expertise with cloud platforms (AWS preferred; GCP/Azure also welcome) and Linux environments
- Experience with Terraform
- Proficiency with CI/CD systems and deployment automation (Jenkins, ArgoCD preferred)
- Experience with container orchestration using Kubernetes and Helm for application deployments
- Strong scripting capabilities in Python and Bash for automation and tooling
- Experience implementing secure systems at scale, including IAM and network security controls
- Familiarity with monitoring and observability stacks like Prometheus, Grafana, Loki
- Experience with configuration management tools: Ansible, Puppet, Chef
- Strong problem-solving skills with a bias toward resilience and scalability
- Excellent communication and collaboration across engineering teams

Shift Timing: The regular hours for this position will cover a combination of business hours in the US and India, typically 2pm-11pm IST. Occasionally, later hours may be required for meetings with teams in other parts of the world.
Additionally, for the first 4-6 weeks of onboarding and training, US Eastern time hours (IST -9:30) may be required.

Benefits:
- Medical insurance coverage for employees and their dependants, 100% covered by Comscore
- Provident Fund borne by Comscore, provided over and above the gross salary
- 26 annual leave days per annum, divided into 8 casual leave days and 18 privilege leave days
- A paid "Recharge Week" over the Christmas and New Year period, so that you can start the new year fresh
- 10 public holidays, 12 sick leave days, 5 paternity leave days, and 1 birthday leave day
- Flexible work arrangements
- "Summer Hours" from March to May: employees may work more hours from Monday to Thursday and offset the hours on Friday from 2:00pm onwards
- Eligibility to participate in Comscore's Sodexo meal scheme and enjoy its tax benefits

About Comscore: At Comscore, we're pioneering the future of cross-platform media measurement, arming organizations with the insights they need to make decisions with confidence. Central to this aim are our people, who work together to simplify the complex on behalf of our clients & partners. Though our roles and skills are varied, we're united by our commitment to five underlying values: Integrity, Velocity, Accountability, Teamwork, and Servant Leadership. If you're motivated by big challenges and interested in helping some of the largest and most important media properties and brands navigate the future of media, we'd love to hear from you. Comscore (NASDAQ: SCOR) is a trusted partner for planning, transacting and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top and theatrical viewership intelligence with advanced audience insights, Comscore allows media buyers and sellers to quantify their multiscreen behavior and make business decisions with confidence. A proven leader in measuring digital and set-top box audiences and advertising at scale, Comscore is the industry's emerging, third-party source for reliable and comprehensive cross-platform measurement. To learn more about Comscore, please visit Comscore.com. Comscore is committed to creating an inclusive culture, encouraging diversity.
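The programmatic scaling work mentioned in this posting often reduces to the same proportional rule that Kubernetes' Horizontal Pod Autoscaler applies. A brief Python sketch of that rule (utilization is passed as a percentage here; the numbers are illustrative, not a production policy):

```python
import math

def desired_replicas(current_replicas, current_util_pct, target_util_pct):
    """Proportional scaling rule (the formula the Kubernetes HPA uses):
    scale the replica count by the ratio of observed to target utilization."""
    ratio = current_replicas * current_util_pct / target_util_pct
    return max(1, math.ceil(ratio))

# 4 pods running at 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))  # 6
```

Rounding up with `ceil` biases toward over-provisioning, which is usually the safer failure mode; real autoscalers add tolerances and cooldowns on top to avoid flapping.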

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

As a Data Engineer, you will design, develop, and maintain robust ETL pipelines using Azure Data Factory (ADF) to support complex insurance data workflows. Your role will involve integrating and extracting data from various Guidewire modules such as PolicyCenter, BillingCenter, and ClaimCenter, ensuring data quality, integrity, and consistency.

Responsibilities:
- Build reusable components for data ingestion, transformation, and orchestration across Guidewire and Azure ecosystems
- Optimize ADF pipelines for performance, scalability, and cost-efficiency, following industry-standard DevOps and CI/CD practices
- Collaborate with solution architects, data modelers, and Guidewire functional teams to translate business requirements into scalable ETL solutions
- Conduct thorough unit testing, data validation, and error handling across all data transformation steps
- Own the end-to-end data lifecycle, from requirement gathering through deployment and post-deployment support
- Provide technical documentation and pipeline monitoring dashboards, and ensure production readiness
- Support data migration projects from legacy platforms to Azure cloud environments
- Follow Agile/Scrum practices, contributing to sprint planning, retrospectives, and stand-ups with strong ownership of deliverables

Mandatory skills:
- 6+ years of experience in data engineering with expertise in Azure Data Factory, Azure SQL, and related Azure services
- Hands-on experience building ADF pipelines that integrate with the Guidewire Insurance Suite
- Proficiency in data transformation using SQL, stored procedures, and Data Flows
- Experience with Guidewire data models and understanding of the PolicyCenter/BillingCenter/ClaimCenter schema and business entities
- Solid understanding of cloud-based data warehousing concepts, data lake patterns, and data governance best practices
- Experience integrating Guidewire systems with downstream reporting and analytics platforms
- Excellent debugging skills for resolving complex data transformation and pipeline performance issues

Preferred skills:
- Prior experience in the Insurance (P&C preferred) domain or implementing Guidewire DataHub and/or InfoCenter
- Familiarity with tools like Power BI, Databricks, or Synapse Analytics

Work mode: this position requires 100% onsite presence at the Hyderabad office, with no remote or hybrid flexibility. Strong interpersonal and communication skills are essential, as you will work with cross-functional teams and client stakeholders. A self-starter mindset and a high sense of ownership are crucial, as you must thrive under pressure and tight deadlines.
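Data validation gates like the ones this role calls for can be prototyped in plain Python before being wired into an ADF Data Flow. A minimal sketch, assuming a hypothetical two-column policy feed (the column names and rules are illustrative, not the actual Guidewire schema):

```python
def validate_rows(rows, required=("policy_id", "premium")):
    """Split rows into valid and rejected batches, mimicking an ETL
    data-quality gate: required fields present and premium non-negative."""
    valid, rejected = [], []
    for row in rows:
        ok = all(row.get(col) is not None for col in required)
        if ok and row["premium"] >= 0:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

batch = [
    {"policy_id": "P-100", "premium": 1200.0},
    {"policy_id": None, "premium": 800.0},    # missing key -> rejected
    {"policy_id": "P-101", "premium": -5.0},  # bad value -> rejected
]
valid, rejected = validate_rows(batch)
print(len(valid), len(rejected))  # 1 2
```

Routing rejected rows to a quarantine table rather than failing the whole pipeline is the usual design choice: it keeps the load running while preserving the bad records for investigation.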

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Role & responsibilities

Backend Engineering:
- Design and develop scalable, secure RESTful APIs using Node.js, Express, and MongoDB
- Implement reusable service modules and data models, adhering to clean code principles
- Optimize server-side logic for performance, fault tolerance, and data integrity
- Integrate third-party APIs, payment gateways, notification services, and platform tools
- Collaborate with front-end and mobile teams to ensure seamless data exchange and performance

DevOps & Infrastructure:
- Set up and maintain CI/CD pipelines for staging and production using GitHub Actions / Jenkins
- Manage cloud infrastructure (AWS / GCP / DigitalOcean), including EC2, RDS, S3, Lambda
- Implement containerization and orchestration using Docker and Kubernetes (optional)
- Monitor and automate server health, logging, and alerts using tools like Prometheus, Grafana, CloudWatch, or similar
- Own system security, access controls, and cost optimization initiatives

Preferred candidate profile:
- 5-8 years of experience in backend development with solid hands-on DevOps exposure
- Strong problem-solving mindset with experience managing production-grade systems
- Proven ability to automate deployment, testing, and monitoring pipelines
- Comfortable owning infrastructure and server-side logic end-to-end
- Experience working in a product-led or fast-paced startup environment is a big plus

Key Skills:
- Programming languages: Node.js (must-have), JavaScript, TypeScript (optional)
- Frameworks: Express.js, NestJS (good to have)
- Databases: MongoDB, PostgreSQL (optional)
- DevOps tools: Git, Docker, CI/CD, AWS CLI
- Cloud platforms: AWS (preferred), GCP, DigitalOcean
- Other: REST APIs, JWT, OAuth, WebSockets, load balancing, server security, system scaling

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Microsoft Power Apps
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Key Responsibilities:
- Develop innovative virtual assistants for both voice and text interactions
- Guide the team in leveraging the full capabilities of the Copilot Studio platform
- Integrate banking features into virtual assistants to enhance customer service and functionality
- Collaborate with cross-functional teams to define project requirements and deliver high-quality solutions

Required Skills:
- Power Platform experience with solutions and deployments across Power Platform environments (preferably automated deployments)
- Extensive experience in Power Automate
- Good experience in Power Virtual Agents (aka Copilot)
- Good to have: experience in PowerApps (at least Canvas Apps)
- Good to have: experience in Power BI
- Basic knowledge and understanding of GenAI and its implementation in Copilot
- Good to have: experience in Dataverse (tables, security roles, etc.)
- Good understanding of Azure App Registrations and authentication/authorization using the same
- Good understanding of Azure DevOps with respect to the SCRUM methodology
- Good to have: CI/CD pipeline experience in Azure DevOps
- Proficiency in Power Platform and Copilot Studio
- Experience with Azure DevOps pipelines and Azure
- Strong knowledge of Generative AI technologies

Desired Attributes:
- Yes, you have talent. You know the game; development in an Agile way of working is your cup of tea
- You possess a relevant HBO (Bachelor) or university education (Master)
- Critical thinking and initiative: hands-on and innovative
- Communication is key: you can articulate (and translate) technical jargon to everyone in the team
- Customer is king: you have an unstoppable drive to create the best solutions for our customers
- Ownership and responsibility: you test the functions and are responsible for them when they go live
- You are fluent in English

Company Values and Culture: we value diversity and inclusion, fostering a culture where every team member can thrive. We are committed to innovation and excellence, ensuring that our solutions meet the highest standards for our customers.

Qualification: 15 years full time education

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad

Remote

Job Summary: We are looking for a passionate and motivated Salesforce Developer with a basic understanding of Salesforce components and web technologies. This role is ideal for candidates with foundational knowledge of Salesforce and an eagerness to grow in a dynamic CRM environment.

Key Responsibilities:
- Assist in the development, configuration, and maintenance of Salesforce applications
- Work on customizations using Apex, Visualforce, and Lightning Components (Aura/LWC)
- Support Salesforce Sales Cloud and Service Cloud implementations
- Create and maintain process automation using Flows, Process Builder, and Workflows
- Collaborate with senior developers and stakeholders to gather and implement requirements
- Participate in unit testing and troubleshooting of Salesforce applications
- Help with basic deployment tasks and sandbox management

Required Skills:
- Basic understanding of the Salesforce platform and CRM concepts
- Hands-on experience or knowledge of: Apex and Visualforce; Lightning Components (Aura/LWC); SOQL/SOSL; Salesforce Sales Cloud and/or Service Cloud; basic process automation (Flows, Validation Rules)
- Familiarity with Java, HTML, and JavaScript is a plus
- Understanding of Salesforce testing and debugging

Nice to Have:
- Exposure to Community Cloud, Marketing Cloud, or Salesforce CPQ
- Knowledge of Velocity or deployment automation tools like ANT, Copado, or Git
- Salesforce certification (ADM 201, PD1) is an added advantage

Perks and Benefits: Joining bonus is available. For any queries, please feel free to contact us at the following email address: info@heyroot.com

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Why this job matters
The Software Engineering Associate 3 assists the team in executing routine software engineering activities, well covered by existing procedures and processes, supporting the delivery of the engineering strategy and roadmap that underpins BT's commercial strategy through cross-functional project and technical delivery, as part of a team that pursues innovation as well as engineering excellence.

What you'll be doing
1. Assists with routine work in the implementation of technical solutions for both customers and colleagues and supports the resolution of inter-system issues, working within cross-functional squads to help create and implement technical solutions for a domain or cross-domain activity within a specific technology area
2. Performs technical reviews to continually update knowledge and skills in software engineering principles and practices
3. Assists with the design of technical specifications and development of software solutions for smaller and/or less complex initiatives in partnership with the team, documenting the quality of delivery
4. Conducts routine coding activities such as writing, testing, refining and rewriting code as necessary, under close supervision, and communicates with engineering professionals and colleagues involved in the project
5. Supports the integration of existing software products and problem solving to enable incompatible platforms to work together
6. Supports the maintenance of systems by monitoring and correcting software defects
7. Assists with the implementation of new architectures, standards, and methods for large-scale enterprise systems

The skills you'll need: Troubleshooting; Agile Development; Database Design/Development; Debugging; Programming/Scripting; Microservices/Service-Oriented Architecture; Version Control; IT Security; Cloud Computing; Software Testing; Continuous Integration/Continuous Deployment; Automation & Orchestration; Application Development; Algorithm Design; Software Development Lifecycle; Decision Making; Growth Mindset; Inclusive Leadership

Our leadership standards
Looking in:
- Leading inclusively and safely: I inspire and build trust through self-awareness, honesty and integrity.
- Owning outcomes: I take the right decisions that benefit the broader organisation.
Looking out:
- Delivering for the customer: I execute brilliantly on clear priorities that add value to our customers and the wider business.
- Commercially savvy: I demonstrate strong commercial focus, bringing an external perspective to decision-making.
Looking to the future:
- Growth mindset: I experiment and identify opportunities for growth for both myself and the organisation.
- Building for the future: I build diverse future-ready teams where all individuals can be at their best.

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 11 Lacs

Mumbai

Work from Office

Primary Skills:
- Google Cloud Platform (GCP): expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub); strong experience with Cloud Build for CI/CD, automating deployments and managing artifacts efficiently
- Terraform: skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources; proficient in modules for reusable infrastructure, state management (remote state, locking), and provider configuration; experience with CI/CD integration via Terraform Cloud and automation pipelines
- YAML: proficient in writing Kubernetes manifests for deployments, services, and configurations; experience with Cloud Build pipelines, automating builds and deployments; strong understanding of configuration management using YAML in GitOps workflows
- PowerShell: expert in scripting for automation, managing GCP resources, and interacting with APIs; skilled in cloud resource management, automating deployments, and optimizing cloud operations

Secondary Skills:
- CI/CD pipelines: GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build
- Kubernetes (K8s): Helm, Ingress, RBAC, cluster administration
- Monitoring & logging: Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana
- Security & IAM: GCP IAM policies, service accounts, Workload Identity
- Networking: VPC, firewall rules, load balancers, Cloud DNS
- Linux & shell scripting: Bash scripting, system administration
- Version control: Git, GitHub, GitLab, Bitbucket
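Automation scripts that drive cloud APIs, whether written in PowerShell or Python, conventionally wrap calls in retries with exponential backoff, since transient errors (rate limits, brief outages) are routine at scale. A hedged Python sketch of the pattern, with the failing cloud call simulated rather than real:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable cloud API error (e.g. HTTP 429/503)."""

def with_backoff(op, max_attempts=5, base_delay=0.01):
    """Retry a flaky operation, sleeping base_delay * 2**attempt between tries;
    re-raise once the attempt budget is exhausted."""
    for attempt in range(max_attempts):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated deployment call that fails twice before succeeding
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("quota exceeded, retry later")
    return "deployed"

print(with_backoff(flaky_deploy))  # deployed
```

Production variants add jitter to the delay so many clients retrying at once do not synchronize into retry storms.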

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Pune

Work from Office

Primary/Essential Duties and Key Responsibilities:
- Research, design, test, and evaluate technologies for building reactive, event-driven systems
- Create architectural and technical designs for complex features
- Maximise the maintainability and extensibility of solutions
- Improve the developer experience for product development teams
- Mentor developers on the team, fostering an environment of continued learning and improvement
- Empower the team to deliver quality software in a timely manner and continuously improve the development process
- Write high-quality code supported by an appropriate level of testing and metrics
- Hold a high bar for yourself and others when working with production systems
- Stay intellectually curious, adapting to changing technologies, platforms, and environments
- Enjoy working in a collaborative environment with a diverse group of people, partnering effectively with team members, partners, and customers

Qualifications:
- Bachelor's degree or equivalent in Computer Science or related field
- At least 5-8 years of industry experience
- Excellent knowledge of API management concepts
- Knowledge of REST APIs, the SOAP framework, XML, and web service design
- Broad experience and in-depth skills with C#.NET, JavaScript, Angular, jQuery, TypeScript, MongoDB, and SQL databases
- Experience with engineering practices such as code refactoring, design patterns, design-driven development, continuous integration, and building highly scalable applications with security
- Experience creating interfaces for upstream/downstream applications
- Experience with cloud providers (e.g., GCP) and containerisation (e.g., Docker)
- Strong knowledge of deployment automation
- Experience writing WCF, web services, and RESTful services
- Strong experience working with HTML and CSS
- Good experience reviewing code and ensuring code quality
- Flexibility to understand and adopt pre-existing/legacy code
- Team player with strong analytical, problem-solving, debugging, and troubleshooting skills
- Working experience with Git, Bitbucket, and TeamCity/Jenkins
- Familiarity with JIRA/TFS
- Demonstrated ability to work in a cross-geographical team

UKG is on the cusp of something truly special.

Posted 1 month ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Kolkata

Remote

Senior DevOps Engineer - Infrastructure & Platform Specialist
Department: Product and Engineering
Location: Remote / Kolkata, WB (On-site)

Job Summary: A Senior DevOps Engineer is responsible for designing, implementing, and maintaining the operational aspects of cloud infrastructure. The goal is to ensure high availability, scalability, performance, and security of cloud-based systems.

Key Responsibilities:
- Design and maintain scalable, reliable, and secure cloud infrastructure
- Address integration challenges and data consistency
- Choose appropriate cloud services (e.g., compute, storage, networking) based on business needs
- Define architectural best practices and patterns (e.g., microservices, serverless, containerization)
- Ensure version control and repeatable deployments of infrastructure
- Automate cloud operations tasks (e.g., deployments, patching, backups)
- Implement CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, etc.
- Design and implement cloud monitoring and alerting systems (e.g., CloudWatch, Azure Monitor, Prometheus, Datadog, ManageEngine)
- Optimize performance, resource utilization, and cost across environments
- Plan capacity, resources, and deployments (HW, SW, Capex), including financial forecasting and tracking and management of the allotted budget
- Optimize cost through proper architecture and open-source technologies
- Ensure cloud systems follow security best practices (e.g., encryption, IAM, zero-trust principles, VAPT)
- Implement compliance controls (e.g., HIPAA, GDPR, ISO 27001) and conduct regular security audits and assessments
- Build systems for high availability, failover, disaster recovery, and business continuity
- Participate in incident response and post-mortems
- Implement and manage Service Level Objectives (SLOs) and Service Level Indicators (SLIs)
- Work closely with development, security, and IT teams to align cloud operations with business goals
- Define governance standards for cloud usage, billing, and resource tagging
- Provide guidance and mentorship to DevOps and engineering teams
- Keep infrastructure/deployment documents up to date
- Interact with prospective customers in pre-sales meetings to showcase the product's architecture and security layer and answer questions

Key Skills & Qualifications:

Technical skills:
- VM provisioning and infrastructure ops on AWS, GCP, or Azure
- Experience with API gateways (Kong, AWS API Gateway, NGINX)
- Experience managing MySQL and MongoDB on self-hosted infrastructure
- Operational expertise with Elasticsearch or Solr
- Proficiency with Kafka, RabbitMQ, or similar message brokers
- Hands-on experience with Airflow, Temporal, or other workflow orchestration tools
- Familiarity with Apache Spark, Flink, Confluent/Debezium, or similar streaming frameworks
- Strong skills in Docker, Kubernetes, and deployment automation
- Experience writing IaC with Terraform, Ansible, or CloudFormation
- Building and maintaining CI/CD pipelines (GitLab, GitHub Actions, Jenkins)
- Experience with monitoring/logging stacks like Prometheus, Grafana, ELK, or Datadog
- Sound knowledge of networking fundamentals (routing, DNS, VPN, TLS/SSL, firewalls)
- Experience designing and managing HA/DR/BCP infrastructure

Bonus skills:
- Prior involvement in SOC 2 / ISO 27001 audits or documentation
- Hands-on experience with VAPT processes, especially working directly with clients or security partners
- Scripting in Go, in addition to Bash/Python
- Exposure to service mesh tools like Istio or Linkerd

Experience: Must have 7+ years of experience as a DevOps Engineer
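Managing SLOs and SLIs, as this role requires, usually starts with turning raw request counts into a remaining error budget: the budget is the fraction of requests the SLO permits to fail, and each failure consumes one unit of it. A minimal Python sketch (the 99.9% target is illustrative):

```python
def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Fraction of the error budget left. The budget is (1 - SLO) of all
    requests; returns 0.0 when the budget is exhausted or undefined."""
    budget = (1 - slo_target) * total_requests
    if budget <= 0:
        return 0.0
    return max(0.0, 1 - failed_requests / budget)

# 1,000,000 requests at a 99.9% SLO allow ~1,000 failures;
# 250 failures leave about 75% of the budget
print(round(error_budget_remaining(1_000_000, 250), 4))  # 0.75
```

Teams typically alert on the burn rate of this budget rather than on individual failures, which is what ties SLIs to actionable paging policy.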

Posted 1 month ago

Apply