
1135 Puppet Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Source: Naukri

SUMMARY: At Surgical Information Systems (SIS), the DevOps Engineer will manage infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential. The DevOps Engineer will need a high aptitude for learning new technologies and processes and must deliver against the overall strategy across a wide variety of development environments, including public, private, infrastructure-as-a-service, and platform-as-a-service cloud operations. You will design mission-critical services with a focus on security, resiliency, scale, and performance. You need a solid understanding of automation and orchestration principles and should be eager to automate wherever and whenever possible. ESSENTIAL DUTIES/RESPONSIBILITIES: Work with the DevOps team to design and implement build, test, deployment, and configuration management workflows and pipelines. Work with the Security team to implement DevSecOps where needed. Build and test automation tools for infrastructure provisioning. Handle code deployments in all environments. Monitor metrics and develop ways to improve insight into pipeline, software, and environment performance. Test implementation designs and consult with peers for feedback during testing stages. Build, maintain, and monitor configuration standards. Contribute to day-to-day management and administration of projects. Create, customize, and manage CI/CD pipelines. Document and design various processes; update existing processes. Improve infrastructure development and application development. Assist in troubleshooting and root cause failure analysis for product enhancement. Follow all best practices and procedures as established by SIS. EDUCATION DESIRED: B.E/B.Tech/MCA/Any graduate. SPECIFIC KNOWLEDGE & SKILLS REQUIRED: 3+ years of experience in development and operations, or related IT, computer, or operations fields. Previous experience with software development, infrastructure development, or development and operations. Experience with Windows infrastructures, databases (MS SQL), CI/CD tools, and scripting. Experience with automation tools (Ansible, Puppet, Chef, Python, Jenkins, Terraform, Azure DevOps Pipelines). Monitoring tools (Splunk, ELK, Nagios). At least 2 years' experience with PowerShell script writing is required. Containerization technologies (Docker, Kubernetes, Rancher). Public cloud experience, preferably Azure. Good interpersonal skills and communication with all levels of management. Able to multitask, prioritize, and manage time efficiently. High-level technical aptitude and the ability to problem-solve in a logical manner. Ability to work effectively in a team environment. SUPERVISORY RESPONSIBILITIES: None. PHYSICAL REQUIREMENTS: Requires ability to use a telephone. Requires ability to use a computer. Most work will be spent in a seated, climate-controlled office.
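Illustrative sketch (not part of the listing): the "build and test automation tools for infrastructure provisioning" duty could start as a small Python wrapper around the Terraform CLI. The per-environment directory layout and environment names below are hypothetical assumptions, not taken from the job post.

```python
"""Minimal provisioning helper, assuming Terraform configs live in
per-environment directories (e.g. envs/dev, envs/prod) -- hypothetical layout.
Usage: python provision.py dev [--apply]"""
import subprocess
import sys

ENVIRONMENTS = {"dev": "envs/dev", "prod": "envs/prod"}  # hypothetical paths


def run_terraform(env_dir: str, apply: bool = False) -> None:
    """Run `terraform init` and then `plan` (or `apply`) inside the given directory."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=env_dir, check=True)
    action = ["apply", "-auto-approve"] if apply else ["plan"]
    subprocess.run(["terraform", *action, "-input=false"], cwd=env_dir, check=True)


if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "dev"
    apply_changes = "--apply" in sys.argv[2:]
    run_terraform(ENVIRONMENTS[env], apply=apply_changes)
```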

Posted Just now

Apply

9.0 - 11.0 years

37 - 40 Lacs

Ahmedabad, Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Dear Candidate, We are seeking a DevOps Engineer to streamline our development and deployment processes. Ideal for professionals passionate about automation and infrastructure. Key Responsibilities: Implement and manage CI/CD pipelines Monitor system performance and troubleshoot issues Automate infrastructure provisioning and configuration Ensure system security and compliance Required Skills & Qualifications: Experience with tools like Jenkins, Docker, and Kubernetes Proficiency in scripting languages like Bash or Python Familiarity with cloud platforms (AWS, Azure, or GCP) Bonus: Knowledge of Infrastructure as Code (Terraform, Ansible) Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy Delivery Manager Integra Technologies

Posted Just now

Apply

3.0 years

6 - 7 Lacs

Bengaluru

Remote

Source: Glassdoor

DevOps Engineer Req ID: 55469 Location: Bangalore, IN Sapiens is on the lookout for a DevOps Engineer to become a key player in our Bangalore team. If you're a seasoned DevOps pro and ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit. Location: Bangalore Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens’ L&P division; for more information about it, click here: https://sapiens.com/solutions/life-and-pension-software/ What you’ll do: We are looking for a professional Sr DevOps Engineer who is passionate about building next-generation software and digital solutions for the Sapiens CoreSuite product, supporting and improving our software test, development, and live infrastructure with security in mind and on the cloud. You should have a passion for excellence and a willingness to deal with ambiguity; experience with cloud platforms (Azure/AWS) and open-source projects is a plus. You will work with cross-functional teams to define, design, and deliver DevOps infrastructure and adopt IaC best practices to guarantee a robust and stable CI/CD process, increase efficiency, and achieve 100% automation. At least 3+ years of professional DevOps and development experience in C++/Java/Go/Python. Strong knowledge of DevOps tools and cloud technologies: Azure/AWS, PaaS, containers, microservices, Kubernetes, Puppet/Ansible, Docker, Service Fabric. Strong scripting knowledge in PowerShell, Python, Go, Shell. CI/CD/CF: Jenkins, Octopus Deploy, SVN/Git & Maven. Experience in database management is an added plus. Strong Linux skills on CentOS/RedHat 8 (preferred). Basic general knowledge of and troubleshooting on Windows. Tomcat/Spring Boot knowledge. Strong knowledge of JFrog & SonarQube. Familiarity with agile development processes and experience working with development teams throughout the software development lifecycle. Bachelor's/Master’s degree in information technology or related fields such as MIS or CSC; candidates from premier technology institutes are preferred. Must-have skills for this position: Excellent written and oral communication skills. Excellent interpersonal skills. Highly self-motivated and directed. Keen attention to detail. Proven analytical, evaluative, and problem-solving abilities. Exceptional customer service orientation. Ability to motivate and inspire team members. Adaptable to a changing environment. Ability to effectively prioritize and execute tasks in a fast-paced environment. Disclaimer: Sapiens India does not authorise any third parties to release employment offers or conduct recruitment drives via a third party. Hence, beware of inauthentic and fraudulent job offers or recruitment drives from any individuals or websites purporting to represent Sapiens. Further, Sapiens does not charge any fee or other emoluments for any reason (including, without limitation, visa fees) or seek compensation from educational institutions to participate in recruitment events. Accordingly, please check the authenticity of any such offers before acting on them; where acted upon, you do so at your own risk.
Sapiens shall neither be responsible for honouring or making good the promises made by fraudulent third parties, nor for any monetary or any other loss incurred by the aggrieved individual or educational institution. In the event that you come across any fraudulent activities in the name of Sapiens, please feel free to report the incident to sharedservices@sapiens.com.

Posted 2 hours ago

Apply

4.0 years

0 Lacs

Noida

On-site

Source: Glassdoor

As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and develop work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design. Required Skills: ● Design, develop, and support data pipelines and related data products and platforms. ● Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms. ● Perform application impact assessments, requirements reviews, and develop work estimates. ● Develop test strategies and site reliability engineering measures for data products and solutions. ● Participate in agile development and solution reviews. ● Mentor junior Data Engineers. ● Lead the resolution of critical operations issues, including post-implementation reviews. ● Perform technical data stewardship tasks, including metadata management, security, and privacy by design. ● Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies. ● Demonstrate SQL and database proficiency in various data engineering tasks. ● Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect. ● Develop Unix scripts to support various data operations. ● Model data to support business intelligence and analytics initiatives. ● Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation. ● Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have). Qualifications: ● Bachelor’s degree in Software Engineering, Computer Science, Business, Mathematics, or a related field. ● 4+ years of data engineering experience. ● 2 years of data solution architecture and design experience. ● GCP Certified Data Engineer (preferred). Job Type: Full-time Schedule: Day shift Location: Noida, Uttar Pradesh (Required) Work Location: In person
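Illustrative sketch (not part of the listing): the "automate data workflows by setting up DAGs" responsibility could look like a minimal Apache Airflow DAG. The DAG id, schedule, and task bodies below are hypothetical; Airflow 2.x is assumed.

```python
"""Minimal Airflow 2.x DAG sketch for an extract-and-load workflow.
DAG id, schedule, and task logic are hypothetical placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # In a real pipeline this would read from an on-prem database or GCS bucket.
    print("extracting source data")


def load(**context):
    # In a real pipeline this would load staged data into a warehouse table.
    print("loading into staging")


with DAG(
    dag_id="example_elt_pipeline",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```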

Posted 3 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role Description High-Value Professional Experience and Skills: Design of infrastructure migration projects from on-prem to cloud Proven expert in partnering and leading technology resources in solving complex business needs Cloud architecture design and implementation to solve key business needs and meet team goals In-depth knowledge of AWS solutions Infrastructure-as-Code (IaC) tools (Prefer Terraform; Related: Ansible, Puppet, ARM templates) Automated CI/CD pipelines (Prefer GitHub Actions; Related: Jenkins, Argo CD) Containerized workloads (Prefer AKS & Helm; Related: EKS, other K8s distributions, Docker, JFrog) Serverless solutions (e.g. Logic Apps, Function Apps, Functions, WebJobs, AWS Lambda) Logging and monitoring tools (e.g., Amazon CloudWatch, AWS CloudTrail or Fluentd, Prometheus, Grafana) Other Desirable Professional Experience And Skills Strong and enthusiastic technologist, able to demonstrate broad technical cloud knowledge Ability to act as a point of expertise, sharing knowledge and advising on best practices Strong budgeting/finance skills and experience with cost management Multi-component system integration and troubleshooting Performance analysis and tuning Kubernetes service meshes (Prefer Linkerd; Related: Istio, Traefik Mesh) Coding/scripting (e.g., Linux/Bash/Sh, Windows/PowerShell/Batch, Python, Java) Load balancing and service proxies (e.g., Nginx, Traefik, HAProxy, F5) Other products in use: Jira, Confluence, MySQL Workbench, Maven Skills: AWS, Terraform, PowerShell, GitHub, Prometheus/Grafana/CloudWatch
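Illustrative sketch (not part of the listing): the logging/monitoring item (Amazon CloudWatch) can be exercised from Python with boto3 by publishing a custom metric. The namespace, metric name, and region below are hypothetical assumptions; AWS credentials are assumed to be configured in the environment.

```python
"""Publish a hypothetical custom metric to Amazon CloudWatch with boto3."""
import boto3

# Region is an assumption; use whatever region the workload runs in.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="MyApp/Provisioning",  # hypothetical namespace
    MetricData=[
        {
            "MetricName": "DeploymentDurationSeconds",  # hypothetical metric
            "Value": 42.0,
            "Unit": "Seconds",
        }
    ],
)
```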

Posted 3 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Sapiens is on the lookout for a DevOps Engineer to become a key player in our Bangalore team. If you're a seasoned DevOps pro and ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit. Location: Bangalore Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens’ L&P division; for more information about it, click here: https://sapiens.com/solutions/life-and-pension-software/ What You’ll Do We are looking for a professional Sr DevOps Engineer who is passionate about building next-generation software and digital solutions for the Sapiens CoreSuite product, supporting and improving our software test, development, and live infrastructure with security in mind and on the cloud. You should have a passion for excellence and a willingness to deal with ambiguity; experience with cloud platforms (Azure/AWS) and open-source projects is a plus. You will work with cross-functional teams to define, design, and deliver DevOps infrastructure and adopt IaC best practices to guarantee a robust and stable CI/CD process, increase efficiency, and achieve 100% automation. At least 3+ years of professional DevOps and development experience in C++/Java/Go/Python. Strong knowledge of DevOps tools and cloud technologies: Azure/AWS, PaaS, containers, microservices, Kubernetes, Puppet/Ansible, Docker, Service Fabric. Strong scripting knowledge in PowerShell, Python, Go, Shell. CI/CD/CF: Jenkins, Octopus Deploy, SVN/Git & Maven. Experience in database management is an added plus. Strong Linux skills on CentOS/RedHat 8 (preferred). Basic general knowledge of and troubleshooting on Windows. Tomcat/Spring Boot knowledge. Strong knowledge of JFrog & SonarQube. Familiarity with agile development processes and experience working with development teams throughout the software development lifecycle. Bachelor's/Master’s degree in information technology or related fields such as MIS or CSC; candidates from premier technology institutes are preferred. Must-have skills for this position: Excellent written and oral communication skills. Excellent interpersonal skills. Highly self-motivated and directed. Keen attention to detail. Proven analytical, evaluative, and problem-solving abilities. Exceptional customer service orientation. Ability to motivate and inspire team members. Adaptable to a changing environment. Ability to effectively prioritize and execute tasks in a fast-paced environment. Disclaimer: Sapiens India does not authorise any third parties to release employment offers or conduct recruitment drives via a third party. Hence, beware of inauthentic and fraudulent job offers or recruitment drives from any individuals or websites purporting to represent Sapiens. Further, Sapiens does not charge any fee or other emoluments for any reason (including, without limitation, visa fees) or seek compensation from educational institutions to participate in recruitment events. Accordingly, please check the authenticity of any such offers before acting on them; where acted upon, you do so at your own risk.
Sapiens shall neither be responsible for honouring or making good the promises made by fraudulent third parties, nor for any monetary or any other loss incurred by the aggrieved individual or educational institution. In the event that you come across any fraudulent activities in the name of Sapiens, please feel free to report the incident to sharedservices@sapiens.com.

Posted 3 hours ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Work as part of the Automation Engineering team focused on delivering value by reducing manual efforts. Focus on automating internal services and improving internal customer experiences. Drive standards and best practices to ensure quality services and sustainability. Provide critical infrastructure automation and support for critical automation services. A fast-paced environment with shifting priorities to meet internal customer requirements. Primary Responsibilities For This Role Include Develops and maintains infrastructure-related automation spanning networking, services, and systems focus areas Creation of automation projects that follow REST methodologies Supports automation platforms and services Contributes to documentation efforts to enable quicker support and knowledge transfer Provides testing and quality assurance on automation applications Provides bug fixes and new feature development on projects Follows SDLC methodologies and team standards Additional Responsibilities Provides quality support to our internal customers Able to understand the difference between what is requested and how to fulfill a request Willingness to proactively improve processes and identify gaps in existing processes Participation in weekly code reviews Essential Knowledge, Skills, and Abilities: Bachelor's degree in computer science or related quantitative field Experience with Linux operating systems Knowledge of Python and Django, FastAPI or Flask Experience debugging and troubleshooting system interoperability Additional Experience creating, running, debugging containers and container orchestration platforms (Kubernetes, Rancher) Knowledge of Puppet, Ansible, and/or Terraform Knowledge of cloud technologies such as Microsoft Azure Strong verbal and written communication skills; intuitive thinking skills Ability to interface with an operating system, such as Windows, Mac OS and iOS and the respective editor; ability to write scripts Ability to train others Ability to work well under pressure Ability to work well with others and to problem-solve Ability to independently learn how to use new facilities quickly from supplied documentation Experience working with change management procedures Facilitation skills including meeting content/agenda and proactive/creative management of issues Ability to handle multiple projects at the same time Ability to manage projects and interface with employees of varying skillsets in a high-pressure environment SAS only sends emails from verified “sas.com” email addresses and never asks for sensitive, personal information or money. If you have any doubts about the authenticity of any type of communication from, or on behalf of SAS, please contact Recruitingsupport@sas.com. #SAS
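Illustrative sketch (not part of the listing): an internal automation service "following REST methodologies" built with Python and FastAPI (one of the frameworks named above) might expose an endpoint like the one below. The route, module name, and payload fields are hypothetical.

```python
"""Minimal FastAPI sketch for an internal automation endpoint.
Run with: uvicorn automation_api:app --reload  (module name is hypothetical)."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Internal Automation API")


class ProvisionRequest(BaseModel):
    hostname: str
    environment: str = "dev"  # hypothetical field


@app.post("/provision")
def provision(request: ProvisionRequest) -> dict:
    # A real service would enqueue a provisioning job here (e.g. via Ansible or Terraform).
    return {
        "status": "accepted",
        "hostname": request.hostname,
        "environment": request.environment,
    }
```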

Posted 3 hours ago

Apply

0.0 - 2.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and develop work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design. Required Skills: ● Design, develop, and support data pipelines and related data products and platforms. ● Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms. ● Perform application impact assessments, requirements reviews, and develop work estimates. ● Develop test strategies and site reliability engineering measures for data products and solutions. ● Participate in agile development and solution reviews. ● Mentor junior Data Engineers. ● Lead the resolution of critical operations issues, including post-implementation reviews. ● Perform technical data stewardship tasks, including metadata management, security, and privacy by design. ● Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies. ● Demonstrate SQL and database proficiency in various data engineering tasks. ● Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect. ● Develop Unix scripts to support various data operations. ● Model data to support business intelligence and analytics initiatives. ● Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation. ● Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have). Qualifications: ● Bachelor’s degree in Software Engineering, Computer Science, Business, Mathematics, or a related field. ● 4+ years of data engineering experience. ● 2 years of data solution architecture and design experience. ● GCP Certified Data Engineer (preferred). Job Type: Full-time Schedule: Day shift Location: Noida, Uttar Pradesh (Required) Work Location: In person

Posted 5 hours ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Bengaluru

Remote

Source: Naukri

Hiring for a USA-based multinational company (MNC). The Cloud Engineer is responsible for designing, implementing, and managing cloud-based infrastructure and services. This role involves working with cloud platforms such as AWS, Microsoft Azure, or Google Cloud to ensure scalable, secure, and efficient cloud environments that meet the needs of the organization. Design, deploy, and manage cloud infrastructure in AWS, Azure, GCP, or hybrid environments. Automate cloud infrastructure provisioning and configuration using tools like Terraform, Ansible, or CloudFormation. Ensure cloud systems are secure, scalable, and reliable through best practices in architecture and monitoring. Work closely with development, operations, and security teams to support cloud-native applications and services. Monitor system performance and troubleshoot issues to ensure availability and reliability. Manage CI/CD pipelines and assist in DevOps practices to streamline software delivery. Implement and maintain disaster recovery and backup procedures. Optimize cloud costs and manage billing/reporting for cloud resources. Ensure compliance with data security standards and regulatory requirements. Stay current with new cloud technologies and make recommendations for continuous improvement. Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. 3+ years of experience working with cloud platforms such as AWS, Azure, or Google Cloud. Proficiency in infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation). Experience with CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong scripting skills (e.g., Python, Bash, PowerShell). Solid understanding of networking, security, and identity management in the cloud. Excellent problem-solving and communication skills. Ability to work independently and as part of a collaborative team.

Posted 5 hours ago

Apply

4.0 - 8.0 years

7 - 11 Lacs

Pune

Work from Office

Source: Naukri

About Persistent We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com About The Position We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely. What You’ll Do Collaborate with coworkers to conceptualize, develop, and release software Conduct quality assurance to ensure that the software meets prescribed guidelines Roll out fixes and upgrades to software, as needed Secure software to prevent security breaches and other vulnerabilities Collect and review customer feedback to enhance user experience Suggest workflow alterations to improve efficiency and success Pitch ideas for projects based on gaps in the market and technological advancements Expertise You’ll Bring 4 to 8 years of professional experience as a DevOps Engineer who has worked in and advocated for agile environments. Proficiency: Groovy / fluent in Python and Python testing best practices Jenkins configuration using Groovy or Python Various managed and self-hosted CI/CD tooling (Jenkins) SCM tools (Git, Perforce, etc.) Containers (Docker, Kubernetes) Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment • We offer hybrid work options and flexible working hours to accommodate various needs and preferences. • Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
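Illustrative sketch (not part of the listing): "Python testing best practices" for pipeline utilities could look like a small pytest module. The helper function under test and the tag format are hypothetical.

```python
"""Minimal pytest sketch for a hypothetical pipeline utility.
Run with: pytest test_version_utils.py"""
import pytest


def parse_build_number(tag: str) -> int:
    """Hypothetical helper: extract the numeric build id from a tag like 'release-123'."""
    prefix, _, number = tag.partition("-")
    if prefix != "release" or not number.isdigit():
        raise ValueError(f"unexpected tag: {tag}")
    return int(number)


def test_parse_valid_tag():
    assert parse_build_number("release-123") == 123


def test_parse_invalid_tag_raises():
    with pytest.raises(ValueError):
        parse_build_number("hotfix-9")
```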

Posted 6 hours ago

Apply

6.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

DevOps Engineer Join us as a DevOps Engineer at Dedalus, one of the world’s leading healthcare technology companies. Be a part of our team in Noida , India, and contribute to delivering high-quality software solutions that make a profound impact in providing better care for a healthier planet. What you’ll achieve As a DevOps Engineer , you will play a crucial role in designing, implementing, and maintaining the infrastructure and automation tools that support our development and deployment processes. You will collaborate with cross-functional teams to ensure the reliability, security, and scalability of our applications, making a profound impact throughout the healthcare sector. You will: Develop and maintain CI/CD pipelines to ensure fast, reliable, and consistent delivery of software. Automate infrastructure provisioning, configuration management, and application deployment using tools like Terraform, Ansible, Puppet, or Chef. Implement and manage monitoring, logging, and alerting systems using tools such as Prometheus, Grafana, ELK Stack, or Splunk. Work closely with development, QA, and operations teams to streamline workflows and improve communication and efficiency. Lead incident response efforts , conduct root cause analysis, and implement preventative measures to ensure system stability and performance. Provide mentorship and guidance to junior DevOps engineers and other team members. Take the next step towards your dream career At Dedalus, Life flows through our software . Every day, we help caregivers and health professionals deliver better care to their communities. Take the next step in your career that will make a profound impact. Here’s what you’ll need to succeed: Essential Requirements 6 to 8 years of experience in a DevOps, Site Reliability Engineer (SRE), or related role . Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience). Extensive experience with cloud platforms (AWS, Azure, Google Cloud). Proficiency with Infrastructure as Code (IaC) and configuration management tools (Terraform, Ansible, Puppet, Chef). Strong experience with CI/CD tools (Jenkins, GitLab CI, CircleCI). Proficiency in scripting languages (Python, Bash, PowerShell). Hands-on experience with containerization and orchestration technologies (Docker, Kubernetes). Desirable Requirements Experience with networking concepts and security best practices . Knowledge of service mesh technologies (Istio, Linkerd, Consul). Familiarity with serverless computing and edge computing architectures . Exposure to compliance frameworks (ISO 27001, SOC 2, HIPAA). We are Dedalus, come join us Dedalus is committed to providing an engaging and rewarding work experience that reflects the passion our employees bring to our mission of helping clinicians and nurses deliver better care to their communities. Our company fosters a culture of innovation, learning, and collaboration , enabling clinical cooperation and efficiency while making a meaningful difference for millions of people worldwide. Each person is at the heart of our activities, making a real impact every day. With 7,600 employees in more than 40 countries , Dedalus continues to drive healthcare innovation globally. We are the people of Dedalus. Application closing date: 30th of June 2025 Our diversity & inclusion commitment – Dedalus Global Dedalus is dedicated to ensuring respect, inclusion, and success for all colleagues and the communities we serve. 
We are committed to fostering an inclusive and diverse workplace and continuously strive to improve and grow in this journey. Life Flows Through Our Software.

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Hi All, Greetings from Live Connections! We have an urgent requirement for a Wintel VMware Engineer role with one of our MNC clients in Pune, MH. Please find the job description below and kindly share your updated CV with sharmila@liveconnections.in. Position Title: Wintel VMware Engineer. Experience Level: 5-8 Years. Duration: Full Time. Location: Pune, MH. Notice Period: Immediate to 1 Month. Work from Office. Skills: Wintel + VMware. Technical Knowledge / Skill Sets / Competencies: Server/Virtualization (RHEL, AIX and Solaris); Server OS: Win 2003, 2008, 2012, 2016, 2019; Scripting: Windows PowerShell, Unix shell scripting; Automation skills: Ansible, Puppet, etc. (preferred); Business Continuity/Disaster Recovery: BC/DR; Cluster technologies – Microsoft cluster; System Management Tools – SCCM, SCOM; Microsoft Services – AD, DNS, File server, DHCP, IIS, etc.; SSL cert management. Regards, Sharmila sharmila@liveconnections.in

Posted 8 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka, India

On-site

Source: LinkedIn

Who You’ll Work With NIKE, Inc. does more than outfit the world's best athletes. It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At Nike, it’s about each person bringing skills and passion to a challenging and constantly evolving game. This role is part of the Enterprise Architecture and Developer Platforms (EADP) organization and works with the Electronic Data Interchange (EDI) part of the Data Integration team. Who We Are Looking For We are looking for a Software Engineer II (EDI, EADP) who excels in team environments and is excited about building cloud-native platforms that can scale with the demand of our business. This role is part of EDI within EADP, which aggressively innovates solutions to drive growth while creating and implementing tools that help make everything else in the company possible. The candidate needs a strong understanding of technical concepts; excellent attention to detail, data accuracy, and data analysis; strong verbal and written communication skills; and must be self-motivated and operate with a high sense of urgency and a high level of integrity. Key Skills & Traits Bachelor’s degree in computer science or related/equivalent on-the-job experience Minimum of 4-8 years of applicable experience Detailed understanding and experience with Sterling B2Bi apps and ETL Proficient understanding of code versioning using Git (GitHub) Good hands-on experience in Linux Strong debugging skills Communicating with vendors and customers in a courteous and professional manner Good understanding of AWS technologies including but not limited to EC2, Lambda, S3, RDS (Oracle) Strong Shell and/or Python development experience, including expertise in REST/JSON APIs Good to have: experience with CI/CD tools like Jenkins, Puppet, etc. Hands-on experience with RedHat Great communication skills Experience with Apex reporting framework Previous work as a Sterling B2Bi Administrator/Engineer and certification What You’ll Work On As a Software Engineer II, you will play a crucial role in shaping, modernizing, and scaling Nike’s EDI platform while also driving innovation and automation within our ecosystem. Platform Responsibilities Recommend and implement the automation of different administration tasks via UNIX shell scripts, command-line utilities, or other applications/scripting languages Work with the Sterling B2Bi vendor on support tickets and bug fixes Full understanding of the Sterling B2Bi Admin console and Sterling Control Center applications Familiar with source and target systems that include OpenText and Axway Managed File Transfer, Oracle, SQL Server, and numerous file types such as EDI delimited, XML, JSON, and flat files Able to perform Disaster Recovery exercises Willingness for on-call support as needed throughout assigned projects Willingness to work early/late for sync-up calls with offshore counterparts Knowledge of how to perform Sterling B2Bi migrations Familiar with Git (GitHub) Familiar with AWS infrastructure Familiar with ServiceNow and JIRA

Posted 9 hours ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Role Description Engineer a best-in-class Azure Cloud platform, with a focus on PaaS services. Design and integrate cloud solutions and services following industry best practices with scalability, fault tolerance, resilience, security, observability, and simplicity in mind. Run proof of concepts (POCs) for new cloud services and third-party cloud tooling. Collaborate with InfoSec teams to review and enhance Azure security posture. Solve complex technical problems involving distributed systems, scale, and security, and translate solutions into designs and implementations. Work with peers to refine the cloud strategy, adoption plan, and migration roadmap. Partner with development teams to support their cloud adoption journey by identifying requirements, designing solutions, and driving implementation. Continuously improve the platform through automation, reliability enhancements, and better developer experience. Tackle new challenges weekly alongside a skilled team and a modern tech stack. Proactively identify and resolve issues before they impact business productivity. Develop fully deployable cloud services using Infrastructure as Code, integrated into the CI/CD toolchain. The Knowledge, Experience, And Qualifications You Need Strong hands-on experience with Azure PaaS services, including design, engineering, and implementation. Deep technical knowledge of: Kubernetes (AKS) Cosmos DB, App Service Environment (ASE) Cognitive Services, Data Factory, Event Grid Log Analytics, SQL, Blob/Table/Queue Storage, Azure Sentinel, Security Center Experience in cloud transformation and change programs across large technology organizations. Strong foundational knowledge across the infrastructure stack: virtualization, Windows/Linux environments, storage, databases, and networking. Hands-on experience with modern DevOps tools: Git, Azure DevOps, Terraform, ARM templates, Jenkins, Ansible, Puppet, Docker, Kubernetes. Deep understanding of PaaS, Infrastructure-as-Code, and Compliance-as-Code approaches and when to apply them. Experience with modern agile development practices and shift-left CI/CD. Passionate about building highly automated services using APIs. Proficiency in scripting or programming languages (e.g., Python, .NET, PowerShell, Node.js, Ruby, Java). Strong collaboration skills with multi-disciplinary technical teams. In-depth understanding of the broader cloud ecosystem, including cloud computing technologies, business drivers, and emerging trends. Excellent interpersonal and communication skills; ability to self-manage effectively. The Knowledge, Experience, And Qualifications That Will Help Bachelor's degree in Information Technology, Computer Science, or a related discipline. Azure Cloud certifications (Associate, Expert, or Specialty level). Basic understanding of AWS and cross-cloud capabilities. Familiarity with the strengths and capabilities of AWS, Azure, Google Cloud (GCP), and Alibaba Cloud. Experience with Power BI and the Azure Power Platform. Skills Azure Cloud,Azure Paas,Devops Tools
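Illustrative sketch (not part of the listing): "fully deployable cloud services using Infrastructure as Code" are normally defined in Terraform or ARM/Bicep, but the same provisioning step can be sketched from Python with the Azure SDK. The resource group name and region below are hypothetical; azure-identity and azure-mgmt-resource are assumed to be installed, with credentials and a subscription available in the environment.

```python
"""Minimal sketch: create (or update) an Azure resource group with the Azure SDK for Python.
Assumes `az login` (or another credential source) and AZURE_SUBSCRIPTION_ID are set."""
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

client = ResourceManagementClient(credential, subscription_id)

# Hypothetical resource group; create_or_update is idempotent, so this is safe to re-run.
rg = client.resource_groups.create_or_update(
    "rg-platform-sandbox",        # hypothetical name
    {"location": "westeurope"},   # hypothetical region
)
print(f"Provisioned resource group: {rg.name} in {rg.location}")
```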

Posted 9 hours ago

Apply

4.0 - 6.0 years

0 Lacs

Greater Chennai Area

On-site

Source: LinkedIn

Greetings from DSRC!!! DSRC provides competitive compensation that is revised purely on performance , flexible work hours & friendly work environment . At DSRC you will have opportunity not only to learn but also explore your area of interest with respect to technology and also effectively use the skills acquired over few years of IT experience. Experience: 4 to 6 years Requirement: Linux System Administrator Working from home will be available on an optional basis. Key Responsibilities Linux System Administration Install, configure, manage and maintain Linux servers (Ubuntu, RHEL, CentOS, or similar) in production and staging environments. Perform server migrations, system upgrades, patches, and kernel tuning. Troubleshoot performance issues and resolve system-related incidents. Manage user accounts, permissions, and access control mechanisms in Linux environments. Virtualization and Containerization Management Deploy, Manage and maintain virtualization infrastructure such as VMware, Hyper-V etc. Perform capacity planning, resource allocation, and performance tuning for virtual machines and containers. Build, deploy, and orchestrate containerized applications using Docker and Kubernetes. Ensure seamless orchestration and scaling of container workloads in production environments. Design scalable container infrastructure with Helm charts, namespaces, and network policies. Networking Expertise Configure and manage networking services and protocols including TCP/IP, SSH, HTTP/HTTPS, FTP, NFS, SMB, DNS, DHCP, VPN. Configure and manage firewalls and routing. Troubleshoot network-related issues impacting Linux systems and virtual environments. Diagnose and resolve network connectivity issues and implement firewall rules and NAT. Security and Hardening Implement robust security measures including OS hardening, patch monitoring and management, firewall configuration, intrusion detection/prevention, and compliance adherence. Conduct regular security audits and vulnerability assessments. Harden operating systems using industry best practices (CIS benchmarks, SELinux, AppArmor). Implement and manage security tools like Fail2Ban, auditd, and antivirus solutions. Conduct regular vulnerability assessments and ensure system compliance. Automation and Scripting Develop and maintain automation scripts using Bash, Python, Ansible, Perl or similar tools to automate system installations, configuration, monitoring, provisioning virtual machines with required software and reporting. Streamline operational workflows through scripting and configuration management tools. Develop and maintain deployment pipelines for system provisioning and software rollouts. Backup and Disaster Recovery Design, implement, and test backup and disaster recovery strategies to ensure data integrity, high availability and business continuity. Monitoring & Logging Setup, Deploy and manage monitoring tools to track system and application performance. Analyze metrics and optimize resource utilization. Monitor and troubleshoot issues related to hardware, software, network protocols, and storage systems in multi-layered environments. Proactively monitor infrastructure health and implement solutions to ensure system reliability and uptime. Collaborate with cross-functional teams to assess system capacity, conduct performance tuning, and support application scalability. Provide technical support and root cause analysis for system-related incidents. 
Required Skills And Qualifications Bachelor’s degree in engineering. 4 years of professional experience in Linux system administration. Strong proficiency with multiple Linux distributions (Ubuntu, RHEL, CentOS, etc.). Extensive experience with virtualization technologies (VMware, KVM, libvirt, QEMU, etc.). Proven expertise with container technologies: Docker and Kubernetes. Solid understanding of file systems, storage environments, and network protocols (TCP/IP, SSH, HTTP/HTTPS, FTP, DNS, DHCP, VPN). Solid experience in Linux security best practices, OS hardening, and compliance. Hands-on scripting experience with shell scripting or Python; experience with automation tools like Ansible or Puppet. Familiarity with system monitoring and logging tools like the ELK Stack, Fluentd, Graylog, etc. Experience with backup and disaster recovery planning. Experience in configuring monitoring tools like Nagios, Prometheus, Grafana, Zabbix, etc. Experience in system documentation, providing technical support to users, and collaborating with IT teams to improve infrastructure and processes. Knowledge of cloud platforms (AWS, Azure, GCP) is a plus. Strong analytical and troubleshooting skills. Experience in database tuning and capacity planning is a plus. Excellent verbal and written communication skills. Self-motivated, organized, and capable of managing multiple priorities simultaneously.
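Illustrative sketch (not part of the listing): the automation-and-scripting duties (Bash/Python scripts for monitoring and reporting) could start with a disk-usage check like the one below, of the kind typically scheduled via cron. The mount points and alert threshold are hypothetical.

```python
"""Minimal disk-usage monitor a Linux administrator might schedule via cron.
Mount points and the alert threshold below are hypothetical."""
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]  # hypothetical mounts
THRESHOLD_PERCENT = 85                 # hypothetical alert threshold


def usage_percent(path: str) -> float:
    """Return used space on the filesystem containing `path` as a percentage."""
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100


if __name__ == "__main__":
    for mount in MOUNT_POINTS:
        pct = usage_percent(mount)
        flag = "ALERT" if pct >= THRESHOLD_PERCENT else "ok"
        print(f"{mount}: {pct:.1f}% used [{flag}]")
```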

Posted 10 hours ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra

On-site

Source: Indeed

Mumbai, Maharashtra Full Time 8+ years Job Description Key Responsibilities Develop, design, and implement IT infrastructure solutions to meet business needs, focusing on scalability, reliability, and security. Lead the architecture of complex systems and infrastructure, including servers, storage, networks, and cloud environments. Oversee the daily management, monitoring, and maintenance of IT systems, including servers, storage, and network devices. Ensure high availability, redundancy, and disaster recovery capabilities for critical systems. Required Skills Strong experience in system administration and architecture, including expertise in operating systems (Linux, Windows Server), storage, and virtualization. In-depth knowledge of networking fundamentals (TCP/IP, DNS, VLANs) and security practices. Proficiency in scripting languages (e.g., Python, PowerShell, Bash) and experience with automation tools (e.g., Ansible, Puppet, Chef). Experience with cloud platforms (AWS, Azure, Google Cloud) and hybrid infrastructure setups. Strong troubleshooting skills, with experience managing high-availability environments and complex infrastructure. Preferred Qualifications Bachelor’s degree in computer science, Information Technology, or a related field (or equivalent experience). Industry certifications (e.g., CompTIA, RHCE, AWS Certified Solutions) You can also send your resume to hr@pertsol.com

Posted 10 hours ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Source: Naukri

Warm Greetings from SP Staffing!! Role: AWS DevOps. Experience required: 6 to 10 yrs. Work location: Bangalore/Pune/Hyderabad. Required skills: AWS, Terraform, Ansible/Puppet. Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 22 hours ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Position Overview We are seeking an exceptional Senior Network Engineer with deep expertise in Software-Defined Networking (SDN) and cloud infrastructure. This role requires a unique blend of advanced networking knowledge and programming skills to architect, implement, and maintain complex cloud networking solutions. The ideal candidate will be proficient in modern networking technologies including OVN, OpenVSwitch, and various tunneling protocols while possessing the coding abilities to automate and optimize network operations. Key Responsibilities: (a) Network Architecture & Design ● Design and implement scalable cloud network architectures using SDN principles ● Architect multi-tenant networking solutions with proper isolation and security controls ● Plan and deploy network virtualization strategies for hybrid and multi-cloud environments ● Create comprehensive network documentation and architectural diagrams (b) SDN & Cloud Networking Implementation ● Deploy and manage Open Virtual Network (OVN) and OpenVSwitch environments ● Configure and optimize virtual networking components including logical switches, routers, and load balancers ● Implement network overlays using VXLAN, GRE, and other tunneling protocols ● Manage distributed virtual routing and switching in cloud environments (c) VPN & Connectivity Solutions ● Design and implement site-to-site and point-to-point VPN solutions ● Configure IPSec, WireGuard, and SSL VPN technologies ● Establish secure connectivity between on-premises and cloud environments ● Optimize network performance across WAN and internet connections (d) Programming & Automation ● Develop network automation scripts using Python, Go, or similar languages ● Create Infrastructure as Code (IaC) solutions using tools like Terraform or Ansible ● Build monitoring and alerting systems for network infrastructure ● Integrate networking solutions with CI/CD pipelines and DevOps practices (e) Troubleshooting & Optimization ● Perform deep packet analysis and network troubleshooting ● Optimize network performance and resolve complex connectivity issues ● Monitor network health and implement proactive maintenance strategies ● Conduct root cause analysis for network incidents and outages Required Qualifications (a) Technical Expertise ● 5+ years of enterprise networking experience with strong TCP/IP fundamentals ● 3+ years of hands-on experience with Software-Defined Networking (SDN) ● Expert-level knowledge of OVN (Open Virtual Network) and OpenVSwitch preferred ● Proficiency in programming languages: Python or Go required ● Deep understanding of network protocols: BGP, OSPF, VXLAN, GRE, IPSec ● Experience with cloud platforms: AWS, Azure, GCP, or OpenStack ● Strong knowledge of containerization and orchestration (Docker, Kubernetes) (b) Networking Protocols & Technologies ● Layer 2/3 switching and routing protocols ● Network Address Translation (NAT) and Port Address Translation (PAT) ● Quality of Service (QoS) implementation and traffic shaping ● Network security principles and micro-segmentation ● Load balancing and high availability networking ● DNS, DHCP, and network services management (c) Cloud & Virtualization ● Virtual private clouds (VPC) design and implementation ● Hybrid cloud connectivity and network integration ● VMware NSX, Cisco ACI, or similar SDN platforms ● Container networking (CNI plugins, service mesh) ● Network Function Virtualization (NFV) (d) Programming & Automation Skills ● Network automation frameworks (Ansible, Puppet, Chef) ● Infrastructure as Code (Terraform, CloudFormation) ● API integration and REST/GraphQL proficiency ● Version control systems (Git) and collaborative development ● Linux system administration and shell scripting (e) Preferred Qualifications ● Bachelor's degree in Computer Science, Network Engineering, or related field ● Industry certifications: CCIE, JNCIE, or equivalent expert-level certifications ● Experience with network telemetry and observability tools ● Knowledge of service mesh technologies (Istio, Linkerd) ● Experience with network security tools and intrusion detection systems ● Familiarity with agile development methodologies
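Illustrative sketch (not part of the listing): a first step toward the "network automation scripts using Python" item could be a small TCP reachability checker run before and after a change window. The host and port list below is hypothetical.

```python
"""Minimal TCP reachability check across a hypothetical set of hosts and ports,
of the kind a network automation script might run before and after a change."""
import socket

TARGETS = [("10.0.0.1", 22), ("10.0.0.2", 443)]  # hypothetical hosts and ports


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host, port in TARGETS:
        status = "open" if is_reachable(host, port) else "unreachable"
        print(f"{host}:{port} -> {status}")
```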

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. About The Organization Einstein products & platform democratize AI and transform the way Salesforce builds trusted machine learning and AI products - in days instead of months. It augments the Salesforce Platform with the ability to easily create, deploy, and manage Generative AI and Predictive AI applications across all clouds using Agentforce platform. We achieve this vision by providing unified, configuration-driven, and fully orchestrated machine learning APIs, customer-facing declarative interfaces and various microservices for the entire machine learning lifecycle including Data, Training, Predictions/scoring, Orchestration, Model Management, Model Storage, Experimentation etc. We are already producing over a billion predictions per day, Training 1000s of models per day along with 10s of different Large Language models, serving thousands of customers. We are enabling customers' usage of leading large language models (LLMs), both internally and externally developed, so they can leverage it in their Salesforce use cases. Along with the power of Data Cloud, this platform provides customers an unparalleled advantage for quickly integrating AI in their applications and processes. About The Team Join the AI Cloud Infra Engineering team, and become a specialist on Salesforce's AI Platform and Agentforce! You'll get to work with latest technology in the AI Infrastructure space, and collaborate with the team and cloud to identify and solve infrastructure challenges at massive scale planned for this year. We are a diverse team of curious minds that specialize in distributed systems, cloud based infrastructure, continuous delivery, security research, and innovative tool development. We evaluate a broad range of technologies including distributed processing, virtualized environments, micro-services and automated tools. Outside of work, we also focus on volunteering and live the 1:1:1 model! We are looking for Engineering leaders to help us take us to the next level, and build an infrastructure platform that can host and scale to hundreds of thousands of customers, and hundreds of billions of predictions per day and works on bleeding edge technologies on model training, model inferencing and Generative AI. 
Responsibilities Drive the execution and delivery of features by collaborating with many cross functional teams, architects, product owners and engineers Make critical decisions that attribute to the success of the product Proactive in foreseeing issues and resolve it before it happens Daily management of standups as the ScrumMaster for engineering teams Partner with the program team to align with objectives, priorities , tradeoffs and risk Ensuring the team has clear priorities and adequate resources Empowering the delivery team to self organize Be a multiplier and have a passion for team and team members’ success Providing technical guidance, career development, and mentoring to team members Maintaining high morale and motivating the delivery team to go above and beyond Vocally advocating for technical excellence and helping the teams make good decisions Participating in architecture discussions and planning Participating in cross-functional coordination, planning, and reviews with leads from other engineering teams Maintaining and fostering our culture by interviewing and hiring only the most qualified individuals Be passionate about automation and to avoid doing things manually Occasionally contributing to development tasks such as scripting and feature verifications to assist teams with release commitments, to gain an understanding of the deeply technical product as well as to keep your technical acumen sharp Required Skills Masters / Bachelors degree required in Computer Science, Software Engineering, or equivalent experience 5+ years experience leading software, DevOps or system engineering teams with a distinguished track record on technically demanding projects Strong verbal and written communication skills, organizational and time management skills Ability to be nimble, proactive, comfortable working with minimal specifications Experience in hiring, mentoring and managing engineers Championing a culture and work environment that promotes diversity and inclusion. Working experience of software engineering best practices including coding standards, code reviews, CI, build processes, testing, and operations Experience with AI technology stack like sagemaker, bedrock or similar other LLMs hosting. Experience with Agile development methodologies. ScrumMaster experience required Experience in communicating with users, other technical teams, and product management to understand requirements, describe product features, and technical designs Prior experience in any of the following languages: Go, Python, Ruby, Java Experience working with source control, continuous integration, and testing pipelines Experience building large scale distributed, fault-tolerant systems Experience with container orchestration systems such as Kubernetes, Docker, Helios, Fleet Public cloud engineering on AWS (Amazon Web Services), GCP (Google Cloud Platform), or Azure platforms Experience in configuration management technologies such as Chef, Puppet, Ansible, Terraform Preferred Skills Masters Degree in Computer Science Experience with building large scale Search cluster using a technology like Elastic Search Understanding of fundamental network technologies like DNS, Load Balancing, SSL, TCP/IP, SQL, HTTP Understand cloud security and best practices Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form. 
Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Core Networking Engineer Are you passionate about building secure, scalable, and high-performance network infrastructures? We're looking for a Core Networking Engineer with expertise in firewalls, WAF, cloud infrastructure, and DevOps to join our dynamic team. Key Responsibilities Design & deploy enterprise-level networks (LAN, WAN, VPN, SD-WAN) Manage and optimize network performance, availability & security Configure and maintain firewalls (Palo Alto, Fortinet, Cisco ASA, Check Point) Implement and manage Web Application Firewalls (AWS WAF, Azure WAF, F5, Imperva) Administer and automate secure cloud infrastructure (AWS, Azure, GCP) Collaborate with DevOps for network automation using Terraform, Ansible, Puppet, etc. Secure and support containerized environments (Docker, Kubernetes) Conduct regular audits, penetration testing, and compliance assessments Qualifications Any professional degree (B.E/B.Tech/MCA) Preferred Certifications: CCNA, CCNP, AWS Certified Solutions Architect, Azure Admin Tech Areas You'll Work With TCP/IP, BGP, OSPF, DNS, DHCP IDS/IPS, NAT, VPNs Terraform, Ansible, CloudFormation DevOps + Security integration (ref:hirist.tech)

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job ID 2025-14178 Date posted 30/05/2025 Location Bengaluru, India Category IT Staff Automation Engineer As a Staff Automation Engineer, you will be at the forefront of crafting and implementing innovative IT automation solutions that drive our infrastructure and development processes. You will use your expertise in innovative automation technologies to improve our provisioning, configuration management, image management, secret management, and Continuous Integration/Continuous Deployment (CI/CD) pipelines. This role is critical for ensuring our systems are efficient, scalable, and secure. Responsibilities Develop and implement automated solutions for infrastructure provisioning and config management using tools like Terraform, CloudFormation, Ansible, Puppet, Chef. Design, configure, and maintain robust CI/CD pipelines configured via GitLab CI, GitHub Actions, Cloudbees/Jenkins, AWS CodePipeline or Azure DevOps. Collaborate with DevOps and Engineering teams to continuously improve and optimize engineering workflows, to enhance efficiency and reduce time-to-market. Standardize and automate configuration processes to ensure consistency and compliance across environments. Resolve configuration issues, ensuring systems are up to date, secure, and optimized from a cost-management perspective. Required Skills And Experience A proven track record of success in automation engineering or a similar role. A solid understanding of the DevOps approach, demonstrated by expertise in managing and provisioning infrastructure through code (IaC). A high level of proficiency with automation tools and technologies such as Terraform, Ansible, Docker, Kubernetes, Jenkins, and Vault. Deep understanding of on-premise and cloud platforms (AWS, Azure, GCP), and experience with cloud-native architectures. Good communication, partnership, and problem-solving skills, with the ability to work successfully in a team-oriented environment. Experience with testing frameworks, including infrastructure testing and application testing frameworks. “Nice To Have” Skills And Experience Certification in relevant technologies (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) is a plus. In Return We offer exciting and interesting work in a diverse team. Arm's growth trajectory will ensure career progression and the opportunity to have a significant impact on our success! Accommodations at Arm At Arm, we want to build extraordinary teams. If you need an adjustment or an accommodation during the recruitment process, please email Hybrid Working at Arm Arm’s approach to hybrid working is designed to create a working environment that supports both high performance and personal wellbeing. We believe in bringing people together face to face to enable us to work at pace, whilst recognizing the value of flexibility. Within that framework, we empower groups/teams to determine their own hybrid working patterns, depending on the work and the team’s needs. Details of what this means for each role will be shared upon application. In some cases, the flexibility we can offer is limited by local legal, regulatory, tax, or other considerations, and where this is the case, we will collaborate with you to find the best solution. Please talk to us to find out more about what this could look like for you. Equal Opportunities at Arm Arm is an equal opportunity employer, committed to providing an environment of mutual respect where equal opportunities are available to all applicants and colleagues.
We are a diverse organization of dedicated and innovative individuals, and don’t discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 2 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure. Your technical acumen will be put to use, as you support end users and application developers in solving complex problems related to our database systems. As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed - you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Experience: Having 6+ years of experience as a SQL and AWS Engineer. Develop and maintain SQL queries and scripts for database management, monitoring, and optimization. Design, implement, and manage database solutions using AWS services such as Amazon RDS, Amazon Aurora, and Amazon Redshift. Work closely with development, QA, and operations teams to ensure smooth and reliable database operations. Implement and manage monitoring and logging solutions to ensure database health and performance. Use tools like AWS CloudFormation, Terraform, or Ansible to manage database infrastructure. 
Ensure the security of databases and applications by implementing best practices and conducting regular audits. Identify and resolve issues related to database performance, deployment, and infrastructure. Preferred Technical and Professional Experience: Proficiency in the AWS cloud platform, SQL database management, and scripting languages (e.g., Python, Bash). Experience with Infrastructure as Code (IaC) tools such as Terraform and configuration management tools (e.g., Ansible, Puppet). Strong analytical and problem-solving skills, particularly in optimizing SQL queries and database performance. Excellent communication and collaboration skills. Relevant certifications in AWS cloud technologies or SQL database management. Previous experience in a SQL and AWS engineering role or related field. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 2 days ago

Apply

0 years

4 - 8 Lacs

Gurgaon

On-site


Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India; Gurugram, Haryana, India. Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, or related technical field, or equivalent practical experience in Software Engineering. Experience with front end technologies like Angular, React, TypeScript, etc. and one or more back end programming languages such as Java, Python, Go, or similar. Experience in maintaining internet-facing production-grade applications. Experience troubleshooting problems across an array of services and functional areas, and experience with relational databases (e.g., PostgreSQL, Db2, etc.). Preferred qualifications: Experience developing scalable applications using Java, Python, or similar, including data structures, algorithms, and software design. Experience working with different types of databases (e.g., SQL, NoSQL, Graph, etc.). Experience with unit or automated testing tools such as Junit, Selenium, Jest, etc. Experience with DevOps practices, including infrastructure as code, continuous integration, and automated deployment. Experience with deployment and orchestration technologies (e.g., Puppet, Chef, Salt, Ansible, Docker, Kubernetes, Mesos, OpenStack, Jenkins). Understanding of open source server software (e.g., NGINX, RabbitMQ, Redis, Elasticsearch). About the job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. In this role, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. You will work with key strategic Google Cloud customers. Alongside the team, you will support customer application development leveraging Google Cloud products, architecture guidance, best practices, troubleshooting, monitoring, and more. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Be a trusted technical advisor to customers and design and build complex applications. Be versatile and enthusiastic to take on new problems across the full stack as we continue to push technology forward. Maintain highest levels of development practices (e.g., technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution) writing clean, modular and self-sustaining code, with repeatable quality and predictability. Create and deliver best practices recommendations, tutorials, blog articles, sample code, and technical presentations adapting to different levels of key business and technical stakeholders.
Travel up to 30% of the time depending on the region. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 days ago

Apply

0 years

1 - 3 Lacs

India

On-site


Job Title: Account Executive Company: Box Puppet Entertainment Pvt Ltd Location: Marol Andheri, Mumbai Job Type: Full-time About Us: Box Puppet Entertainment is a creative leader in the entertainment industry, producing innovative content that captivates audiences worldwide. We’re looking for a detail-oriented Accounts professional to join our team and manage day-to-day operations of both our studio and café. *Key Responsibilities*: - Manage accounts payable/receivable, invoicing, and basic financial reporting. - Oversee and ensure the smooth functioning of our studio and café, including coordinating schedules, staff, and supplies. - Assist with scheduling, maintaining office supplies, and other administrative tasks. - Manage general office upkeep and filing. *Qualifications*: - Strong organizational skills and attention to detail. - Proficiency in Microsoft Office Suite and basic accounting software. *How to Apply*: Please submit your resume. We look forward to hearing from you! Job Types: Full-time, Permanent Pay: ₹14,000.00 - ₹25,000.00 per month Benefits: Cell phone reimbursement, Internet reimbursement, Food provided Schedule: Day shift Language: English (Preferred) Work Location: In person Expected Start Date: 25/06/2025

Posted 2 days ago

Apply

6.0 - 8.0 years

4 - 7 Lacs

Ahmedabad

On-site


REQUIRED SKILLS: 6-8 years of experience with Java development and architecture. Experience with Java 8 is a must. Mandatory skills: Java Spring, AWS S3, AWS API Gateway, AWS Beanstalk, Cognito and other AWS services, Git, CI/CD pipeline Day-to-day roles - Coding and Development, System Design and Architecture, Code Review and Collaboration, Good client communication Extensive experience using open source software libraries Experience with Spring/Maven/Jersey and building REST APIs. Must have built end-to-end continuous integration and deployment infrastructure for microservices Strong commitment to good engineering discipline and process, including code reviews and delivering unit tests in conjunction with feature delivery Must possess excellent communication and teamwork skills. Strong presentation and facilitation skills are required. Self-starter who is results-focused with the ability to work independently and in teams. GOOD TO HAVE: ReactJS Prior experience building modular, common, and scalable services Experience using Chef, Puppet, or other deployment automation tools Experience working within a distributed engineering team, including offshore Bonus points if you have contributed to an open source project Familiarity and experience with the agile (Scrum) development process Proven track record of identifying and championing new technologies that enhance the end-user experience, software quality, and developer productivity

Posted 2 days ago

Apply

Exploring Puppet Jobs in India

The demand for professionals skilled in Puppet configuration management software is on the rise in India. Puppet is widely used in the IT industry for automating infrastructure management tasks, making it an essential skill for job seekers in the technology sector.
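For readers new to the tool, a Puppet manifest declares the desired end state of a system, and the Puppet agent converges each node to that state on every run. Below is a minimal sketch of the widely used package/config/service pattern; the class name, module name, and file paths are illustrative and not taken from any posting on this page:

    # Minimal sketch: manage a package, its configuration file, and its service.
    # demo_nginx and the source path are hypothetical names for illustration.
    class demo_nginx {
      package { 'nginx':
        ensure => installed,
      }

      file { '/etc/nginx/nginx.conf':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        source  => 'puppet:///modules/demo_nginx/nginx.conf',
        require => Package['nginx'],   # install the package before managing its config
        notify  => Service['nginx'],   # restart the service when the config changes
      }

      service { 'nginx':
        ensure => running,
        enable => true,
      }
    }

Because the class only declares state, repeated runs with puppet apply or a Puppet agent make no further changes once the node already matches it.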

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT industries and have a high demand for Puppet professionals.

Average Salary Range

The average salary range for Puppet professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries in the range of INR 10-15 lakhs per annum.

Career Path

In the field of Puppet, a typical career path may involve starting as a Junior Puppet Developer, advancing to a Senior Puppet Developer, and eventually becoming a Puppet Tech Lead. With experience and expertise, professionals can also explore roles such as Puppet Architect or Puppet Consultant.

Related Skills

In addition to Puppet expertise, professionals in this field are often expected to have knowledge of related tools and technologies such as Ansible, Chef, Docker, Kubernetes, and scripting languages like Python or Ruby.

Interview Questions

  • What is Puppet and how does it differ from other configuration management tools? (basic)
  • Explain the Puppet architecture and its components. (medium)
  • How do you handle dependencies in Puppet manifests? (medium)
  • What are Puppet facts and how are they useful in Puppet manifests? (basic)
  • Explain the role of Puppet modules in Puppet configuration management. (medium)
  • How do you enforce idempotency in Puppet manifests? (advanced; a short manifest sketch follows this list)
  • Can you explain the difference between Puppet apply and Puppet agent? (basic)
  • How do you manage secrets or sensitive data in Puppet manifests? (medium)
  • What is Hiera and how is it used in Puppet for data separation? (medium; see the parameter-lookup sketch after this list)
  • How do you test Puppet manifests before applying them to production environments? (medium)
  • Explain the Puppet Forge and its importance in the Puppet ecosystem. (basic)
  • How do you handle errors or failures in Puppet runs? (medium)
  • Can you explain the purpose of resource collectors in Puppet manifests? (advanced)
  • How do you monitor Puppet infrastructure for performance and reliability? (medium)
  • What are Puppet environments and how are they used in Puppet deployments? (basic)
  • Explain how Puppet manages package installations across different operating systems. (medium)
  • How do you troubleshoot Puppet agent connectivity issues? (medium)
  • What are some best practices for Puppet module development? (medium)
  • How do you handle Puppet code deployments across multiple nodes? (medium)
  • Explain how Puppet manages file resources and permissions. (medium)
  • How do you integrate Puppet with version control systems like Git? (medium)
  • What are Puppet reports and how do you use them for auditing Puppet runs? (medium)
  • Can you explain the differences between Puppet standalone and Puppet client-server setups? (advanced)
  • How do you handle Puppet upgrades and migrations in a production environment? (advanced)
  • Explain how Puppet manages service resources and ensures service availability. (medium)
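Two of the questions above lend themselves to short illustrations. Most built-in resource types are idempotent because they converge to a declared state, but exec resources need an explicit guard such as creates or unless; and Hiera supplies class parameter values through automatic parameter lookup, keeping data out of the code. Both sketches below are for illustration only, with hypothetical class names and paths:

    # Idempotency sketch: guard an exec so it only runs while its work is missing.
    exec { 'extract_app_bundle':
      command => '/bin/tar -xzf /opt/app.tar.gz -C /opt/app',
      creates => '/opt/app/bin/start.sh',   # once this file exists, the command is skipped
      path    => ['/bin', '/usr/bin'],
    }

    # Hiera sketch: the value of ntp::servers can come from Hiera data
    # (for example a common.yaml entry) instead of being hard-coded here.
    class ntp (
      Array[String] $servers = ['0.pool.ntp.org'],
    ) {
      file { '/etc/ntp.conf':
        ensure  => file,
        content => epp('ntp/ntp.conf.epp', { 'servers' => $servers }),
      }
    }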

Conclusion

As the demand for Puppet professionals continues to grow in India, job seekers can enhance their career prospects by acquiring proficiency in Puppet and related technologies. By preparing effectively and showcasing their skills confidently during job interviews, individuals can secure rewarding opportunities in the dynamic field of Puppet configuration management. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies