
1971 Puppet Jobs - Page 35

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position Overview
We are seeking an exceptional Senior Network Engineer with deep expertise in Software-Defined Networking (SDN) and cloud infrastructure. This role requires a unique blend of advanced networking knowledge and programming skills to architect, implement, and maintain complex cloud networking solutions. The ideal candidate will be proficient in modern networking technologies including OVN, OpenVSwitch, and various tunneling protocols, while possessing the coding abilities to automate and optimize network operations.
Key Responsibilities:
(a) Network Architecture & Design
● Design and implement scalable cloud network architectures using SDN principles
● Architect multi-tenant networking solutions with proper isolation and security controls
● Plan and deploy network virtualization strategies for hybrid and multi-cloud environments
● Create comprehensive network documentation and architectural diagrams
(b) SDN & Cloud Networking Implementation
● Deploy and manage Open Virtual Network (OVN) and OpenVSwitch environments
● Configure and optimize virtual networking components including logical switches, routers, and load balancers
● Implement network overlays using VXLAN, GRE, and other tunneling protocols
● Manage distributed virtual routing and switching in cloud environments
(c) VPN & Connectivity Solutions
● Design and implement site-to-site and point-to-point VPN solutions
● Configure IPSec, WireGuard, and SSL VPN technologies
● Establish secure connectivity between on-premises and cloud environments
● Optimize network performance across WAN and internet connections
(d) Programming & Automation
● Develop network automation scripts using Python, Go, or similar languages (a sketch follows this listing)
● Create Infrastructure as Code (IaC) solutions using tools like Terraform or Ansible
● Build monitoring and alerting systems for network infrastructure
● Integrate networking solutions with CI/CD pipelines and DevOps practices
(e) Troubleshooting & Optimization
● Perform deep packet analysis and network troubleshooting
● Optimize network performance and resolve complex connectivity issues
● Monitor network health and implement proactive maintenance strategies
● Conduct root cause analysis for network incidents and outages
Required Qualifications
(a) Technical Expertise
● 5+ years of enterprise networking experience with strong TCP/IP fundamentals
● 3+ years of hands-on experience with Software-Defined Networking (SDN)
● Expert-level knowledge of OVN (Open Virtual Network) and OpenVSwitch preferred
● Proficiency in programming languages: Python or Go required
● Deep understanding of network protocols: BGP, OSPF, VXLAN, GRE, IPSec
● Experience with cloud platforms: AWS, Azure, GCP, or OpenStack
● Strong knowledge of containerization and orchestration (Docker, Kubernetes)
(b) Networking Protocols & Technologies
● Layer 2/3 switching and routing protocols
● Network Address Translation (NAT) and Port Address Translation (PAT)
● Quality of Service (QoS) implementation and traffic shaping
● Network security principles and micro-segmentation
● Load balancing and high availability networking
● DNS, DHCP, and network services management
(c) Cloud & Virtualization
● Virtual private clouds (VPC) design and implementation
● Hybrid cloud connectivity and network integration
● VMware NSX, Cisco ACI, or similar SDN platforms
● Container networking (CNI plugins, service mesh)
● Network Function Virtualization (NFV)
(d) Programming & Automation Skills
● Network automation frameworks (Ansible, Puppet, Chef)
● Infrastructure as Code (Terraform, CloudFormation)
● API integration and REST/GraphQL proficiency
● Version control systems (Git) and collaborative development
● Linux system administration and shell scripting
(e) Preferred Qualifications
● Bachelor's degree in Computer Science, Network Engineering, or related field
● Industry certifications: CCIE, JNCIE, or equivalent expert-level certifications
● Experience with network telemetry and observability tools
● Knowledge of service mesh technologies (Istio, Linkerd)
● Experience with network security tools and intrusion detection systems
● Familiarity with agile development methodologies
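As a minimal illustration of the Python-driven OVS automation this posting describes, the sketch below adds a VXLAN tunnel port to an Open vSwitch bridge by shelling out to ovs-vsctl. The bridge name and remote VTEP address are hypothetical placeholders, and ovs-vsctl is assumed to be installed on the host.

```python
"""Sketch: create a VXLAN tunnel port on an Open vSwitch bridge (assumes ovs-vsctl is installed)."""
import subprocess

BRIDGE = "br-int"           # hypothetical integration bridge
REMOTE_VTEP = "192.0.2.10"  # placeholder address from the documentation range


def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout.strip()


# Add a VXLAN port to the bridge, pointing at the remote tunnel endpoint.
run([
    "ovs-vsctl", "add-port", BRIDGE, "vxlan0",
    "--", "set", "interface", "vxlan0",
    "type=vxlan", f"options:remote_ip={REMOTE_VTEP}",
])

# List the ports now attached to the bridge to confirm the change.
print(run(["ovs-vsctl", "list-ports", BRIDGE]))
```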

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you've come to the right place. About The Organization Einstein products & platform democratize AI and transform the way Salesforce builds trusted machine learning and AI products - in days instead of months. It augments the Salesforce Platform with the ability to easily create, deploy, and manage Generative AI and Predictive AI applications across all clouds using the Agentforce platform. We achieve this vision by providing unified, configuration-driven, and fully orchestrated machine learning APIs, customer-facing declarative interfaces, and various microservices for the entire machine learning lifecycle including data, training, predictions/scoring, orchestration, model management, model storage, experimentation, etc. We already produce over a billion predictions per day and train thousands of models per day, along with tens of different large language models, serving thousands of customers. We are enabling customers' usage of leading large language models (LLMs), both internally and externally developed, so they can leverage them in their Salesforce use cases. Along with the power of Data Cloud, this platform provides customers an unparalleled advantage for quickly integrating AI in their applications and processes. About The Team Join the AI Cloud Infra Engineering team, and become a specialist on Salesforce's AI Platform and Agentforce! You'll get to work with the latest technology in the AI infrastructure space, and collaborate with the team and cloud to identify and solve infrastructure challenges at the massive scale planned for this year. We are a diverse team of curious minds that specialize in distributed systems, cloud-based infrastructure, continuous delivery, security research, and innovative tool development. We evaluate a broad range of technologies including distributed processing, virtualized environments, micro-services and automated tools. Outside of work, we also focus on volunteering and live the 1:1:1 model! We are looking for engineering leaders to help take us to the next level and build an infrastructure platform that can host and scale to hundreds of thousands of customers and hundreds of billions of predictions per day, working on bleeding-edge technologies in model training, model inferencing and Generative AI.
Responsibilities Drive the execution and delivery of features by collaborating with many cross-functional teams, architects, product owners and engineers Make critical decisions that contribute to the success of the product Be proactive in foreseeing issues and resolving them before they happen Daily management of standups as the ScrumMaster for engineering teams Partner with the program team to align on objectives, priorities, tradeoffs and risk Ensure the team has clear priorities and adequate resources Empower the delivery team to self-organize Be a multiplier and have a passion for team and team members' success Provide technical guidance, career development, and mentoring to team members Maintain high morale and motivate the delivery team to go above and beyond Vocally advocate for technical excellence and help the teams make good decisions Participate in architecture discussions and planning Participate in cross-functional coordination, planning, and reviews with leads from other engineering teams Maintain and foster our culture by interviewing and hiring only the most qualified individuals Be passionate about automation and about avoiding manual work Occasionally contribute to development tasks such as scripting and feature verifications to assist teams with release commitments, to gain an understanding of the deeply technical product as well as to keep your technical acumen sharp Required Skills Master's/Bachelor's degree required in Computer Science, Software Engineering, or equivalent experience 5+ years of experience leading software, DevOps or systems engineering teams with a distinguished track record on technically demanding projects Strong verbal and written communication skills, organizational and time management skills Ability to be nimble, proactive, and comfortable working with minimal specifications Experience in hiring, mentoring and managing engineers Championing a culture and work environment that promotes diversity and inclusion Working experience of software engineering best practices including coding standards, code reviews, CI, build processes, testing, and operations Experience with an AI technology stack such as SageMaker, Bedrock, or similar LLM hosting Experience with Agile development methodologies; ScrumMaster experience required Experience in communicating with users, other technical teams, and product management to understand requirements, describe product features, and technical designs Prior experience in any of the following languages: Go, Python, Ruby, Java Experience working with source control, continuous integration, and testing pipelines Experience building large-scale distributed, fault-tolerant systems Experience with container orchestration systems such as Kubernetes, Docker, Helios, Fleet Public cloud engineering on AWS (Amazon Web Services), GCP (Google Cloud Platform), or Azure platforms Experience in configuration management technologies such as Chef, Puppet, Ansible, Terraform Preferred Skills Master's Degree in Computer Science Experience with building a large-scale search cluster using a technology like Elasticsearch Understanding of fundamental network technologies like DNS, Load Balancing, SSL, TCP/IP, SQL, HTTP Understand cloud security and best practices Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.
Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Core Networking Engineer Are you passionate about building secure, scalable, and high-performance network infrastructures? We're looking for a Core Networking Engineer with expertise in firewalls, WAF, cloud infrastructure, and DevOps to join our dynamic team. Key Responsibilities Design & deploy enterprise-level networks (LAN, WAN, VPN, SD-WAN) Manage and optimize network performance, availability & security Configure and maintain firewalls (Palo Alto, Fortinet, Cisco ASA, Check Point) Implement and manage Web Application Firewalls (AWS WAF, Azure WAF, F5, Imperva) Administer and automate secure cloud infrastructure (AWS, Azure, GCP) Collaborate with DevOps for network automation using Terraform, Ansible, Puppet, etc. Secure and support containerized environments (Docker, Kubernetes) Conduct regular audits, penetration testing, and compliance assessments Qualifications Any professional degree (B./B.Tech/MCA) Preferred Certifications: CCNA, CCNP, AWS Certified Solutions Architect, Azure Admin Tech Areas You'll Work With TCP/IP, BGP, OSPF, DNS, DHCP IDS/IPS, NAT, VPNs Terraform, Ansible, CloudFormation DevOps + Security integration (ref:hirist.tech)
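As a minimal illustration of the kind of perimeter check that sits alongside the firewall and audit work described above, the sketch below probes TCP reachability of a few services. The hostnames and ports are hypothetical placeholders.

```python
"""Sketch: quick TCP reachability audit for a few services (hosts/ports are placeholders)."""
import socket

TARGETS = [
    ("app.example.com", 443),      # HTTPS behind the WAF
    ("bastion.example.com", 22),   # SSH bastion
    ("db.example.com", 5432),      # PostgreSQL, should NOT be reachable externally
]


def is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host, port in TARGETS:
    state = "open" if is_open(host, port) else "closed/filtered"
    print(f"{host}:{port} -> {state}")
```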

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job ID 2025-14178 Date posted 30/05/2025 Location Bengaluru, India Category IT Staff Automation Engineer As a Staff Automation Engineer, you will be at the forefront of crafting and implementing innovative IT automation solutions that drive our infrastructure and development processes. You will use your expertise in innovative automation technologies to improve our provisioning, configuration management, image management, secret management, and Continuous Integration/Continuous Deployment (CI/CD) pipelines. This role is critical for ensuring our systems are efficient, scalable, and secure. Responsibilities Develop and implement automated solutions for infrastructure provisioning and configuration management using tools like Terraform, CloudFormation, Ansible, Puppet, Chef. Design, configure, and maintain robust CI/CD pipelines configured via GitLab CI, GitHub Actions, Cloudbees/Jenkins, AWS CodePipeline or Azure DevOps. Collaborate with DevOps and Engineering teams to continuously improve and optimize engineering workflows, to enhance efficiency and reduce time-to-market. Standardize and automate configuration processes to ensure consistency and compliance across environments. Resolve configuration issues, ensuring systems are up to date, secure, and optimized from a cost-management perspective. Required Skills And Experience Validated history of success in automation engineering or a similar role. Solid understanding of the DevOps approach, demonstrated by expertise in managing and provisioning infrastructure through code (IaC). High level of proficiency with automation tools and technologies such as Terraform, Ansible, Docker, Kubernetes, Jenkins, and Vault. Deep understanding of on-premise and cloud platforms (AWS, Azure, GCP), and experience with cloud-native architectures. Good communication, partnership and problem-solving skills, with the ability to work successfully in a team-oriented environment. Experience with testing frameworks, including infrastructure testing and application testing frameworks. “Nice To Have” Skills And Experience Certification in relevant technologies (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) is a plus. In Return We offer exciting and interesting work in a diverse team. Arm's growth trajectory will ensure career progression and the opportunity to have a significant impact on our success! Accommodations at Arm At Arm, we want to build extraordinary teams. If you need an adjustment or an accommodation during the recruitment process, please email Hybrid Working at Arm Arm's approach to hybrid working is designed to create a working environment that supports both high performance and personal wellbeing. We believe in bringing people together face to face to enable us to work at pace, whilst recognizing the value of flexibility. Within that framework, we empower groups/teams to determine their own hybrid working patterns, depending on the work and the team's needs. Details of what this means for each role will be shared upon application. In some cases, the flexibility we can offer is limited by local legal, regulatory, tax, or other considerations, and where this is the case, we will collaborate with you to find the best solution. Please talk to us to find out more about what this could look like for you. Equal Opportunities at Arm Arm is an equal opportunity employer, committed to providing an environment of mutual respect where equal opportunities are available to all applicants and colleagues.
We are a diverse organization of dedicated and innovative individuals, and don’t discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
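As a small illustration of the IaC-driven automation this role describes, the sketch below wraps `terraform plan -detailed-exitcode` as a CI drift check (exit code 2 signals pending changes). It assumes Terraform is installed and the working directory has already been initialized; the directory name is a placeholder.

```python
"""Sketch: a CI step that flags Terraform drift (assumes terraform is installed and `terraform init` has run)."""
import subprocess
import sys

WORKDIR = "infra/network"  # hypothetical Terraform root module

proc = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
    cwd=WORKDIR,
)

if proc.returncode == 0:
    print("No drift: live infrastructure matches the code.")
elif proc.returncode == 2:
    print("Drift detected: plan shows pending changes; review before apply.")
    sys.exit(2)  # fail the pipeline stage so the change gets reviewed
else:
    print("terraform plan failed; see output above.")
    sys.exit(1)
```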

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure. Your technical acumen will be put to use, as you support end users and application developers in solving complex problems related to our database systems. As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed - you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Experience: Having 6+ years of experience as a SQL and AWS Engineer. Develop and maintain SQL queries and scripts for database management, monitoring, and optimization. Design, implement, and manage database solutions using AWS services such as Amazon RDS, Amazon Aurora, and Amazon Redshift. Work closely with development, QA, and operations teams to ensure smooth and reliable database operations. Implement and manage monitoring and logging solutions to ensure database health and performance. Use tools like AWS CloudFormation, Terraform, or Ansible to manage database infrastructure. 
Ensure the security of databases and applications by implementing best practices and conducting regular audits. Identify and resolve issues related to database performance, deployment, and infrastructure. Preferred Technical and Professional Experience: Proficiency in the AWS cloud platform, SQL database management, and scripting languages (e.g., Python, Bash). Experience with Infrastructure as Code (IaC) tools such as Terraform and configuration management tools (e.g., Ansible, Puppet). Strong analytical and problem-solving skills, particularly in optimizing SQL queries and database performance. Excellent communication and collaboration skills. Relevant certifications in AWS cloud technologies or SQL database management. Previous experience in a SQL and AWS engineering role or related field. Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
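As a minimal illustration of the RDS monitoring work this role describes, the sketch below lists database instances and their status via boto3. It assumes boto3 is installed and AWS credentials plus a default region are already configured in the environment.

```python
"""Sketch: list Amazon RDS instances and their status (assumes boto3 and configured AWS credentials)."""
import boto3

rds = boto3.client("rds")

# Paginate in case the account has more instances than one API page returns.
paginator = rds.get_paginator("describe_db_instances")
for page in paginator.paginate():
    for db in page["DBInstances"]:
        print(
            f"{db['DBInstanceIdentifier']}: engine={db['Engine']} "
            f"status={db['DBInstanceStatus']} class={db['DBInstanceClass']}"
        )
```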

Posted 1 month ago

Apply

0 years

4 - 8 Lacs

Gurgaon

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India; Gurugram, Haryana, India. Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, or related technical field, or equivalent practical experience in Software Engineering. Experience with front-end technologies like Angular, React, TypeScript, etc. and one or more back-end programming languages such as Java, Python, Go, or similar. Experience in maintaining internet-facing production-grade applications. Experience troubleshooting problems across an array of services and functional areas, and experience with relational databases (e.g., PostgreSQL, Db2, etc.). Preferred qualifications: Experience developing scalable applications using Java, Python, or similar, including data structures, algorithms, and software design. Experience working with different types of databases (e.g., SQL, NoSQL, Graph, etc.). Experience with unit or automated testing tools such as JUnit, Selenium, Jest, etc. Experience with DevOps practices, including infrastructure as code, continuous integration, and automated deployment. Experience with deployment and orchestration technologies (e.g., Puppet, Chef, Salt, Ansible, Docker, Kubernetes, Mesos, OpenStack, Jenkins). Understanding of open source server software (e.g., NGINX, RabbitMQ, Redis, Elasticsearch). About the job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. In this role, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. You will work with key strategic Google Cloud customers. Alongside the team, you will support customer application development leveraging Google Cloud products, architecture guidance, best practices, troubleshooting, monitoring, and more. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Be a trusted technical advisor to customers and design and build complex applications. Be versatile and enthusiastic to take on new problems across the full stack as we continue to push technology forward. Maintain the highest levels of development practice (e.g., technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution), writing clean, modular and self-sustaining code, with repeatable quality and predictability. Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Travel up to 30% of the time depending on the region. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
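As a minimal illustration of the automated testing this posting lists (JUnit/Jest-style), the sketch below uses Python's built-in unittest. The helper under test is hypothetical and included only to keep the example self-contained.

```python
"""Sketch: an automated unit test with Python's unittest (the helper function is hypothetical)."""
import unittest


def paginate(items, page_size):
    """Split a list into fixed-size pages (hypothetical app helper)."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]


class PaginateTests(unittest.TestCase):
    def test_even_split(self):
        self.assertEqual(paginate([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_remainder_goes_to_last_page(self):
        self.assertEqual(paginate([1, 2, 3], 2), [[1, 2], [3]])

    def test_rejects_non_positive_page_size(self):
        with self.assertRaises(ValueError):
            paginate([1], 0)


if __name__ == "__main__":
    unittest.main()
```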

Posted 1 month ago

Apply

0 years

1 - 3 Lacs

India

On-site

Job Title: Account Executive Company: Box Puppet Entertainment Pvt Ltd Location: Marol Andheri, Mumbai Job Type: Full-time About Us: Box Puppet Entertainment is a creative leader in the entertainment industry, producing innovative content that captivates audiences worldwide. We're looking for a detail-oriented accounts professional to join our team and manage the day-to-day operations of both our studio and café. *Key Responsibilities*: - Manage accounts payable/receivable, invoicing, and basic financial reporting. - Oversee and ensure the smooth functioning of our studio and café, including coordinating schedules, staff, and supplies. - Assist with scheduling, maintaining office supplies, and other administrative tasks. - Manage general office upkeep and filing. *Qualifications*: - Strong organizational skills and attention to detail. - Proficiency in Microsoft Office Suite and basic accounting software. *How to Apply*: Please submit your resume. We look forward to hearing from you! Job Types: Full-time, Permanent Pay: ₹14,000.00 - ₹25,000.00 per month Benefits: Cell phone reimbursement, Internet reimbursement, Food provided Schedule: Day shift Language: English (Preferred) Work Location: In person Expected Start Date: 25/06/2025

Posted 1 month ago

Apply

6.0 - 8.0 years

4 - 7 Lacs

Ahmedabad

On-site

REQUIRED SKILLS: 6-8 years of experience with Java development and architecture. Experience with Java 8 is a must. Mandatory skills: Java Spring, AWS S3, AWS API Gateway, AWS Beanstalk, Cognito and other AWS services, Git, CI/CD pipelines Day-to-day roles - Coding and Development, System Design and Architecture, Code Review and Collaboration, Good client communication Extensive experience using open source software libraries Experience with Spring/Maven/Jersey and building REST APIs. Must have built end-to-end continuous integration and deployment infrastructure for microservices Strong commitment to good engineering discipline and process, including code reviews and delivering unit tests in conjunction with feature delivery Must possess excellent communication and teamwork skills. Strong presentation and facilitation skills are required. Self-starter that is results-focused with the ability to work independently and in teams. GOOD TO HAVE: ReactJS Prior experience building modular, common and scalable services Experience using Chef, Puppet or other deployment automation tools Experience working within a distributed engineering team including offshore Bonus points if you have contributed to an open source project Familiarity and experience with agile (scrum) development process Proven track record of identifying and championing new technologies that enhance the end-user experience, software quality, and developer productivity

Posted 1 month ago

Apply

4.0 years

10 Lacs

Ahmedabad

On-site

About Us We are an Offshore Software Development Company in the business since 1999. We specialize in providing Software Development Solutions, Next-Gen Technology Services, Web Development, Mobile App Development, UI/UX Design and QA & Testing Services to Innovators, Startups, Small, Medium and Large Enterprises. We are one of the pioneers in Software Outsourcing Services and have carved a niche space with our 'Going Beyond' approach. Our 150+ skilled developers available for hire, with extensive knowledge and experience of diverse technologies and industry verticals, are passionate about delighting our Clients. We serve multiple industries globally and have successfully conceptualized and delivered 3500+ projects. Our happy Clients have helped us to be recognized as one of the most Reliable IT Outsourcing Partners across USA, Canada, Europe and ANZ. Job Description Key Deliverables: Help our team establish a DevOps practice. Work closely with the project lead / CTO to identify and establish DevOps practices in the company. Establish configuration management, automate our infrastructure, implement continuous integration, and train the team in DevOps best practices to achieve a continuously deployable system for the in-house cloud. Help us build a scalable, efficient AWS / Azure cloud infrastructure with monitoring for automated system health checks. Build our CI pipeline, and train and guide the team in DevOps practices. Exceptional interpersonal skills. Very good analytical skills. Able to conduct technical sessions for the team. Any certification in AWS DevOps or Azure DevOps is a must. Keep up with the fast-paced tooling landscape in the industry, working on the cutting edge and knowing about the bleeding edge. Comfortable working in a fast-paced Agile delivery environment. Very good communication and co-ordination skills. Architect systems knowing what "good" looks like – someone who can understand, design, and implement operational tools for build, test and deployment, monitoring, alerting, recovery, backups, etc. Requirements: Who Should Apply? You have a minimum of 4+ years of DevOps and Cloud engineering experience. You have a strong Linux and Windows system administration background with excellent knowledge of Unix commands. You have configuration management experience, such as Ansible, Chef, Puppet, or similar. You have experience managing production infrastructure with Terraform, CloudFormation, etc. You should be able to code Terraform scripts. You have hands-on experience using version control and build management tools like Maven / Gradle / Jenkins & Git, and CI/CD pipeline tooling ranging from Jenkins to modern PaaS ones like GitLab, GitHub, Azure DevOps, AWS CodeCommit. You should be able to advise clients on the principles of Continuous Delivery and Continuous Deployment, DevOps and Agile delivery. You should have scripting and automation skills (e.g. shell, Python, PowerShell, Javascript). You have good knowledge of various security aspects of Cloud infrastructure. You can design and create visual infra diagrams and present them to the team. You are able to recommend various cloud services and estimate costing for a given business use case. Exposure to Elasticsearch, Logstash, Prometheus, Grafana, etc. is an added advantage. Benefits: What is in it for you? At WeblineIndia, in addition to competitive salary, we also provide an extensive benefits package which includes: 5-days working. On-site international work opportunities. Creative freedom to work. Work-life balance. Festive Holidays. Paid Leaves.
Monetary incentives. Referral benefits. Various Awards & Recognition programs. Sponsored learning & certification programs. Fun events. And so much more... Job Type: Full-time Pay: Up to ₹90,000.00 per month Benefits: Leave encashment Schedule: Monday to Friday Experience: DevOps: 1 year (Preferred) Location: Ahmedabad, Gujarat (Preferred) Work Location: In person Speak with the employer +91 9033341252
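As a minimal illustration of the automated system health checks mentioned in this posting, the sketch below probes a couple of HTTP endpoints and reports their status. The URLs are hypothetical placeholders.

```python
"""Sketch: automated HTTP health check across service endpoints (URLs are placeholders)."""
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.example.com/healthz",
    "https://app.example.com/healthz",
]


def check(url, timeout=5):
    """Return (ok, detail) for a single endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, f"HTTP {resp.status}"
    except urllib.error.URLError as exc:
        return False, str(exc.reason)


for url in ENDPOINTS:
    ok, detail = check(url)
    print(f"{'OK ' if ok else 'FAIL'} {url} ({detail})")
```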

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Network Automation & Orchestration: Develop low-level automation using Ansible, Terraform, Python, and Puppet to configure routers, switches, firewalls (Cisco, Arista, Palo Alto), and enforce policy-driven changes across hybrid environments. Core Network Engineering: Hands-on expertise in Layer 2/3 protocols (MPLS, BGP, OSPF, VRRP, STP, VLANs) and physical/logical infrastructure design for large-scale, multi-region enterprise networks. Infrastructure as Code (IaC): Implement version-controlled, repeatable network provisioning using Terraform modules, YAML-based configs, and Git workflows to manage complete network lifecycles. Cloud Network Provisioning: Automate and manage cloud-native networking (VPCs, subnets, route tables, VPNs, Direct Connect, Transit Gateway, firewalls, NACLs, Security Groups) in AWS, Azure, and GCP. CI/CD for Network Ops: Integrate network changes into CI/CD pipelines (Jenkins, GitLab CI) for test-driven deployments with linting, validation, rollback, and minimal operational downtime. Internet & Perimeter Security: Configure and manage internet-edge security layers (firewall policies, DDoS protection, IPS/IDS, web proxies) and ensure secure ingress/egress traffic flows. Monitoring, Telemetry & Auto-Remediation: Deploy NMS/telemetry tools (SolarWinds, Prometheus, InfluxDB) and write custom scripts for real-time alerting, anomaly detection, and event-driven remediation. Troubleshooting & Packet Analysis: Perform low-level debugging using packet captures (Wireshark/tcpdump), flow telemetry, syslog, and routing table state analysis to resolve incidents in multi-vendor networks. Cross-Functional Collaboration & Documentation: Collaborate with CloudOps, DevOps, Security, and platform teams to align network architecture with application needs. Maintain up-to-date HLD/LLD, runbooks, topology maps, and compliance records. Innovation & Optimization: Evaluate SDN, SASE, Zero Trust, and AI-based networking solutions to continuously improve agility, reliability, and performance of the network ecosystem.
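As a minimal illustration of the YAML-based, intent-driven provisioning this role describes, the sketch below renders vendor-agnostic VLAN stanzas from a small YAML intent document. PyYAML is assumed to be installed; the device and VLAN values are hypothetical.

```python
"""Sketch: render VLAN config stanzas from a YAML intent document (assumes PyYAML; values are placeholders)."""
import yaml

INTENT = """
device: dist-sw-01
vlans:
  - {id: 110, name: users,   svi: 10.10.110.1/24}
  - {id: 120, name: servers, svi: 10.10.120.1/24}
"""

data = yaml.safe_load(INTENT)

# Emit a simple, vendor-agnostic configuration block per VLAN.
print(f"! rendered for {data['device']}")
for vlan in data["vlans"]:
    print(f"vlan {vlan['id']}")
    print(f" name {vlan['name']}")
    print(f"interface Vlan{vlan['id']}")
    print(f" ip address {vlan['svi']}")
```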

Posted 1 month ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 12+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
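As a minimal illustration of the Prometheus-based monitoring this role describes, the sketch below queries a Prometheus server's HTTP API for the instant value of a PromQL expression. The server URL and the expression are hypothetical placeholders.

```python
"""Sketch: instant query against the Prometheus HTTP API (server URL and PromQL expression are placeholders)."""
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.example.com:9090"
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'

url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

# Each result of an instant query carries a [timestamp, value] pair.
for series in payload["data"]["result"]:
    ts, value = series["value"]
    print(f"{series['metric']} -> {value} (at {ts})")
```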

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 7-12 years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 3+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 7-12 years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
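As a minimal illustration of the Kubernetes cluster upkeep this role describes, the sketch below surfaces pods that are not in the Running phase for a namespace. It assumes kubectl is installed with a kubeconfig/context already set; the namespace is a placeholder.

```python
"""Sketch: list pods that are not Running in a namespace (assumes kubectl and a configured context)."""
import subprocess

NAMESPACE = "payments"  # hypothetical namespace

result = subprocess.run(
    [
        "kubectl", "get", "pods",
        "-n", NAMESPACE,
        "--field-selector=status.phase!=Running",
        "-o", "name",
    ],
    capture_output=True, text=True, check=True,
)

stuck = [line for line in result.stdout.splitlines() if line]
if stuck:
    print(f"{len(stuck)} pod(s) not Running in {NAMESPACE}:")
    for pod in stuck:
        print(f"  {pod}")
else:
    print(f"All pods Running in {NAMESPACE}.")
```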

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We have an immediate opportunity for a Site Reliability Engineer, 5 to 9 years. Synechron – Pune Job Role: SRE (Senior Site Reliability Engineer) Job Location: Pune About Synechron We began life in 2001 as a small, self-funded team of technology specialists. Since then, we've grown our organization to 14,500+ people, across 58 offices, in 21 countries, in key global markets. Innovative tech solutions for business We're now a leading global digital consulting firm, providing innovative technology solutions for business. As a trusted partner, we're always at the forefront of change as we lead digital optimization and modernization journeys for our clients. Customized end-to-end solutions Our expertise in AI, Consulting, Data, Digital, Cloud & DevOps and Software Engineering, delivers customized, end-to-end solutions that drive business value and growth. For more information on the company, please visit our website or LinkedIn community. Job Description Title: SRE - Cloud, DevOps Synechron Level Years of Experience: 5 to 12 years Location: Pune, Bangalore Job Description Base Skills: Experience with cloud platforms (AWS, GCP, Azure). Utilize Infrastructure as Code (IaC) tools like Terraform or CloudFormation to manage cloud infrastructure. Hands-on working experience in any one of the cloud platforms is mandatory. Base Skills: Cloud (AWS/Azure/GCP), DevOps, and CI/CD tools. JavaScript or TypeScript Common Soft Skills Experience independently executing a customer-facing role: understanding the SRE requirements, assisting in building the team, and driving the implementation Hands-on experience of working on RFPs/proposals Excellent communication and business presentation skills Must-Have Skills: Monitoring, observability, OpenTelemetry: using tools like Splunk, AppD, Prometheus, Fluentd, ELK (Elasticsearch, Logstash, Kibana), TIG (Telegraf, InfluxDB, Grafana), DataDog, NewRelic Concepts of SLI, SLO, SLA: define SLIs (Service Level Indicators), SLOs (Service Level Objectives), error budgets, and toil Writing complex PromQL or related queries for dashboards. Prepare SLA compliance monitoring dashboards Software Engineering and Development skills: .NET, Go, Python, C++, Ruby or Java, or software delivery platforms such as Puppet, Chef, Ansible, and/or Spinnaker. Being able to instrument services; write exporters and collectors, etc. (60% or above exposure to any one coding language is a must) Experience with building and running microservices at scale, REST API integration, detailed solution design Optional Skills Incident Management Framework, L1/L2/L3, Facilitate blameless post-mortems to identify root causes of incidents and implement preventative measures. Experience with Kubernetes. Good experience of automation across applications/services and infrastructure management. QUALIFICATION: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. If you find this opportunity interesting, kindly share your updated profile on Pravin.Chauhan@synechron.com with the below details (Mandatory): Total Experience Experience in Site Reliability Current CTC Expected CTC Notice period Current Location Available for face-to-face interview? Ready to relocate to Pune? Ready to relocate to Bangalore? Have you gone through any interviews at Synechron before? If yes, when? Regards, Pravin Chauhan Pravin.Chauhan@synechron.com Hp & WhatsApp # 8956217056
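As a minimal illustration of the SLI/SLO/error-budget concepts this posting lists, the sketch below works through the arithmetic for an availability SLO. The target and request counts are hypothetical example numbers.

```python
"""Sketch: error-budget arithmetic for an availability SLO (all numbers are hypothetical examples)."""
SLO_TARGET = 0.999           # 99.9% availability objective
total_requests = 12_500_000  # requests served in the window
failed_requests = 9_800      # requests that violated the SLI

sli = 1 - failed_requests / total_requests         # measured availability
budget_total = (1 - SLO_TARGET) * total_requests   # allowed bad requests in the window
budget_used = failed_requests / budget_total       # fraction of the budget consumed

print(f"SLI: {sli:.5%}")
print(f"Error budget: {budget_total:,.0f} bad requests allowed")
print(f"Budget consumed: {budget_used:.1%}")
```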

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 315647BR Job Type Full Time Your role Do you have a knack for technology? Are you at your best when supporting others? We're looking for someone like that to: Provide project support for the Red Hat Enterprise Linux and Solaris distributed infrastructure, preferably with experience in Storage Migration (especially NAS), DC Exit, OS Migration, Unix estate-wide remediation projects, and BCM/isolation tests and power-down activities. Provide global production support for the Unix distributed infrastructure while on shift. Address incidents picked up by the monitoring systems alerting on the global Unix infrastructure in a timely manner, resolve problem tickets allocated to the team on central management tools (e.g. SNOW), and implement requests for service as generated in manual and automated ticketing systems (e.g. SNOW). Regular verbal and written communication with business partners and other support teams to clarify details of incidents or service requests, to maintain production stability and meet established SLAs. Operational knowledge of key technologies, such as RHEL 6/7/8, Oracle Solaris 10/11, SAN/NAS storage, autofs, NFS, SVM/LVM/VxVM, VCS/Sun Cluster/RHCS, Commvault, Blade technology Your team You will be a member of a 24x7 operational support group supporting the global Unix environment within the bank, involving weekends and possibly night shifts. The team is responsible for maintaining the quality and stability of the Unix/Linux environment and supporting business managers in planning IT project requirements. Your expertise University degree with 5+ years' experience in supporting a UNIX-based distributed computing environment with a large number of systems (2000+), a solid understanding of a UNIX-based operating system, preferably Solaris 10/11 and Red Hat Enterprise Linux 7/8, knowledge of Veritas Cluster/Volume Manager concepts and operations, knowledge of Veritas Cluster Server. Basic understanding of the Banking and Finance industries, with previous job experience a plus. Exposure to an enterprise environment, ITIL-based processes know-how, issue management and effective escalation management, global user support exposure Familiar with Agile and SRE practices Experience in automation, toil reduction and observability dashboards using tools like Amelia, Power BI, etc. Understanding and working knowledge of cloud/virtualization technologies and cloud concepts, preferably in Azure. Strong OS troubleshooting skills in a distributed computing environment Solid knowledge of NFS, networking, SAN/NAS technologies. Strong in infrastructure-as-code technologies: Puppet/Chef/Ansible. Basic configuration knowledge of Oracle SPARC, IBM blade, Lenovo, Dell and HP servers Can code in languages like Python, Java scripting, shell scripting, following SDLC Usage of tools like GitLab, Jenkins, ServiceNow Understanding and working knowledge of authentication and naming services, including NIS, LDAP with Kerberos, Centrify, CyberArk, Power Broker, DNS, NTP, etc. Should be able to collaborate globally with the business, vendor consultants, support groups, IT Infrastructure departments and other UBS business units About Us UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
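As a minimal illustration of the NAS/NFS triage work this role touches on, the sketch below lists NFS mounts on a Linux host by reading /proc/mounts. It is Linux-only; the output simply reflects whatever the host has mounted.

```python
"""Sketch: list NFS mounts on a Linux host by parsing /proc/mounts (Linux-only)."""
with open("/proc/mounts") as mounts:
    nfs_mounts = [
        line.split()
        for line in mounts
        if line.split()[2].startswith("nfs")  # matches nfs and nfs4
    ]

if not nfs_mounts:
    print("No NFS mounts found.")

# /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
for device, mountpoint, fstype, options, *_ in nfs_mounts:
    print(f"{mountpoint} <- {device} ({fstype}, {options.split(',')[0]} ...)")
```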

Posted 1 month ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 312691BR Job Type Full Time

Your role
Do you have a knack for technology? Are you at your best when supporting others? Are you known for your troubleshooting skills? Do you have strong Linux/Unix experience? Have you led a team in an enterprise environment? We’re looking for someone like that to: Support the Red Hat Linux distributed server environment, liaising with regional and global counterparts as needed. Understand business priorities and prioritize work accordingly to meet project objectives. Drive improvements and implement them at regional or global scale. Communicate and collaborate with other internal partners for planning and coordination of implementation to ensure work is completed in a timely manner. You will be expected to drive the execution and to participate in weekend and evening work.

Your team
You will be a member of a 24x7 operational support group supporting the global Unix environment within the bank, involving weekend and evening work. The team is responsible for maintaining the quality and stability of the Unix/Linux environment and for supporting business managers in planning IT project requirements.

Your expertise
University degree with 12+ years' experience supporting a UNIX-based distributed computing environment with a large number of systems (2000+); a solid understanding of a UNIX-based operating system, preferably Solaris 10/11 and Red Hat Enterprise Linux 7/8; knowledge of Veritas Cluster Volume Manager concepts and operations; knowledge of Veritas Cluster Server. Basic understanding of the banking and finance industries, with previous job experience a plus. Exposure to enterprise environments, ITIL-based process know-how, issue management and effective escalation management, and global user support exposure. Familiar with Agile and SRE practices. A team player and an accomplished communicator at executive level. Experience in automation, toil reduction and observability dashboards using tools like Amelia, Power BI, etc. Understanding and working knowledge of cloud/virtualization technologies and cloud concepts, preferably in Azure. Strong OS troubleshooting skills in a distributed computing environment. Solid knowledge of NFS, networking, SAN/NAS technologies and Puppet/Chef/Ansible. Basic configuration knowledge of Oracle SPARC, IBM blade, Lenovo, Dell and HP servers. Able to code in languages like Python, Java, and shell scripting, following the SDLC. Usage of tools like GitLab, Jenkins and ServiceNow. Understanding and working knowledge of authentication and naming services, including NIS, LDAP with Kerberos, Centrify, CyberArk, Power Broker, DNS, NTP, etc.

About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us
At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone.
We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
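Since this role also mentions ServiceNow and observability dashboards, here is a hedged sketch of pulling a team's open incidents from a ServiceNow instance through its publicly documented Table API. The instance URL, credentials and assignment-group name are placeholders, and the query fields should be verified against the actual instance before use.

```python
"""Hypothetical sketch: list a team's open ServiceNow incidents for a dashboard feed."""
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance
AUTH = ("api_user", "api_password")           # placeholder credentials


def open_incidents(assignment_group):
    """Return open incidents for one assignment group, newest first."""
    response = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        params={
            "sysparm_query": f"active=true^assignment_group.name={assignment_group}^ORDERBYDESCopened_at",
            "sysparm_fields": "number,short_description,priority,opened_at",
            "sysparm_limit": 50,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]


if __name__ == "__main__":
    for incident in open_incidents("unix-global-support"):  # placeholder group name
        print(incident["number"], incident["priority"], incident["short_description"])
```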

Posted 1 month ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 12+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
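For the Grafana/Prometheus monitoring work described in this posting, a common building block is exposing a custom metric for Prometheus to scrape. A minimal sketch using the prometheus_client library follows; the metric name, port and the value being measured are placeholders, not anything specified by the role.

```python
"""Illustrative sketch: expose a custom metric that Prometheus can scrape and Grafana can chart."""
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauge; in a real deployment this would track queue depth,
# request latency, or another signal worth alerting on.
QUEUE_DEPTH = Gauge("demo_queue_depth", "Number of jobs waiting in the demo queue")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 100))  # stand-in for a real measurement
        time.sleep(15)
```

Point a Prometheus scrape job at port 8000 and the gauge becomes chartable and alertable in Grafana.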

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response. Requirements Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 3+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills. Preferred Qualifications Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux). Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
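This listing repeats the Kubernetes and troubleshooting requirements of the previous one, so here is a different illustrative sketch: listing pods that are not Running or Succeeded, cluster-wide, with the official Kubernetes Python client. It assumes a kubeconfig with read access; everything it prints comes from the cluster.

```python
"""Hypothetical troubleshooting helper: list unhealthy pods across all namespaces."""
from kubernetes import client, config


def unhealthy_pods():
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, pod.status.phase


if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```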

Posted 1 month ago

Apply

9.0 - 15.0 years

11 - 17 Lacs

Pune

Work from Office

Join us as a Vice President Senior Site Reliability Engineer (Linux and KDB) at Barclays. The ideal candidate should possess strong Linux expertise and foundational knowledge of KDB, will be working with front office trading systems, and will be responsible for ensuring the stability and efficiency of our trading infrastructure. The candidate is expected to manage and maintain Linux servers, ensuring optimal performance and uptime; support and maintain KDB databases, including data ingestion, query optimization and troubleshooting; implement and maintain monitoring tools to proactively identify and resolve issues; develop and deploy automation scripts to streamline operations and reduce manual intervention; respond to and resolve incidents, ensuring minimal impact on trading activities, while working closely with development and trading teams to understand requirements and provide reliable solutions; and create and maintain comprehensive documentation for systems and processes.

To be successful as a Vice President Senior Site Reliability Engineer (Linux and KDB), you should have experience with:
Linux Expertise: strong knowledge of Linux system administration, including shell scripting and performance tuning.
KDB Basics: understanding of KDB database architecture, the query language (q), and data management.
Front Office Trading Systems: experience with front office trading platforms and their operational requirements.
Problem-Solving: excellent troubleshooting skills and the ability to work under pressure.
Communication: strong verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.

Some other highly valued skills may include:
Experience with APIs: familiarity with REST APIs and Python APIs for data integration.
Cloud Services: knowledge of AWS or other cloud platforms for data storage and processing.
Automation Tools: experience with automation tools such as Ansible, Puppet, or Chef.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Develop processing and analysis algorithms fit for the intended data complexity and volumes. Collaborate with data scientists to build and deploy machine learning models.

Vice President Expectations
To contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures. If managing a team, they define jobs and responsibilities, plan for the department’s future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes.
They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. For an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction. They will lead collaborative, multi-year assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions. Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment. Manage and mitigate risks through assessment, in support of the control and governance agenda. Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does. Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies. Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives; in-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions. Adopt and include the outcomes of extensive research in problem-solving processes. Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
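The monitoring and automation responsibilities described in this role can be pictured with a very small proactive check. The sketch below measures TCP connect latency to a service endpoint; the hostnames and port are made-up placeholders, and it deliberately avoids any kdb+-specific client library so it can probe any TCP service.

```python
"""Illustrative monitoring sketch: measure TCP connect latency to critical service ports."""
import socket
import time

# Placeholder endpoints; in practice these might be kdb+ processes or any trading service.
CHECKS = [("kdb-prod-01.example.com", 5010), ("kdb-prod-02.example.com", 5010)]


def connect_latency_ms(host, port, timeout=2.0):
    """Return TCP connect time in milliseconds, or None if the port is unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None


if __name__ == "__main__":
    for host, port in CHECKS:
        latency = connect_latency_ms(host, port)
        status = f"{latency:.1f} ms" if latency is not None else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```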

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Noida

Work from Office

About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service, all backed by TELUS, our multi-billion dollar telecommunications parent.

Required Skills:
Minimum 6 years of experience in architecting, designing and building data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms. Perform application impact assessments, requirements reviews, and develop work estimates. Develop test strategies and site reliability engineering measures for data products and solutions. Lead agile development "scrums" and solution reviews. Mentor junior Data Engineering Specialists. Lead the resolution of critical operations issues, including post-implementation reviews. Perform technical data stewardship tasks, including metadata management, security, and privacy by design. Demonstrate expertise in SQL and database proficiency in various data engineering tasks. Automate complex data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect. Develop and manage Unix scripts for data engineering tasks. Intermediate proficiency in infrastructure-as-code tools like Terraform, Puppet, and Ansible to automate infrastructure deployment. Proficiency in data modeling to support analytics and business intelligence. Working knowledge of MLOps to integrate machine learning workflows with data pipelines. Extensive expertise in GCP technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, Dataproc (good to have), and BigTable. Advanced proficiency in programming languages (Python).

Qualifications:
Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or related field. Analytics certification in BI or AI/ML. 6+ years of data engineering experience. 4 years of data platform solution architecture and design experience. GCP Certified Data Engineer (preferred).
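The workflow-automation requirement above (setting up DAGs in Apache Airflow and similar tools) can be pictured with a minimal Airflow DAG. This is a hypothetical example in Airflow 2.x style (the `schedule` argument assumes Airflow 2.4 or later); the DAG id, task names and what each task prints are placeholders.

```python
"""Minimal Apache Airflow DAG sketch: extract, load, then transform, run daily."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source files from the hypothetical landing bucket")


def load():
    print("load staged files into the hypothetical staging dataset")


def transform():
    print("run SQL transforms to publish curated tables")


with DAG(
    dag_id="daily_sales_pipeline",  # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_load >> t_transform
```

The same dependency chain could equally be expressed as a Control-M job flow or a Prefect flow; the DAG structure is the transferable part.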

Posted 1 month ago

Apply

2.0 - 5.0 years

3 - 7 Lacs

Mumbai

Work from Office

Job Description
We are searching for a highly skilled Linux System Engineer to join our team at Algoquant Fintech Limited. As a Linux System Engineer, you will play a critical role in designing, implementing, and maintaining the Linux-based systems that underpin our high-frequency trading infrastructure.

Responsibilities
Linux System Administration: Deploy, configure, and maintain Linux servers and workstations to support high-frequency trading operations. Perform system upgrades, patch management, and security hardening to ensure system integrity and compliance. Monitor system performance and resource utilisation, troubleshooting and optimising as needed to maintain optimal performance.
Scripting and Automation: Develop and maintain automation scripts using Bash, Python, or other scripting languages to streamline routine system administration tasks. Automate deployment processes, configuration management, and monitoring tasks to enhance efficiency and reliability.
Systems Engineering: Design, architect, and implement Linux-based systems and solutions to meet the performance, scalability, and reliability requirements of high-frequency trading. Evaluate and recommend hardware and software technologies to optimise system performance and meet business objectives. Implement and maintain systems monitoring, logging, and alerting solutions to ensure proactive detection and resolution of issues.
Security and Compliance: Implement security controls and best practices to protect Linux systems from cyber threats and unauthorised access. Perform security assessments, vulnerability scans, and audits to identify and mitigate security risks. Ensure compliance with industry regulations and standards such as PCI-DSS, GDPR, and SOC 2.
Incident Response and Disaster Recovery: Develop and maintain incident response plans and procedures to address security incidents and system outages. Participate in incident response activities, including root cause analysis, remediation, and post-incident reviews. Implement disaster recovery solutions and conduct regular tests to ensure business continuity.

Qualifications
Bachelor's degree in Computer Science, Information Technology, or related field. 5+ years of experience in Linux system administration and systems engineering roles, preferably in the financial industry or HFT firms. Proficiency in Bash or Python scripting for automation and system administration tasks. Deep understanding of Linux operating system fundamentals, including kernel internals, file systems, and the network stack. Experience with configuration management tools such as Ansible, Puppet, or Chef. Strong knowledge of networking protocols, security principles, and best practices. Excellent troubleshooting skills and attention to detail. Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
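The scripting, hardening and monitoring duties described above often reduce to small audit scripts. Here is a hypothetical sketch that compares a few kernel and network sysctl values against an expected baseline; the chosen keys and baseline values are illustrative, not a compliance standard from the posting.

```python
"""Hypothetical hardening-audit sketch: check sysctl values against an expected baseline."""
from pathlib import Path

# Illustrative baseline; a real one would come from the firm's hardening policy.
EXPECTED = {
    "net.ipv4.ip_forward": "0",
    "net.ipv4.conf.all.accept_redirects": "0",
    "kernel.randomize_va_space": "2",
}


def current_value(key):
    """Read a sysctl value directly from /proc/sys (dots become path separators)."""
    return Path("/proc/sys", *key.split(".")).read_text().strip()


if __name__ == "__main__":
    for key, expected in EXPECTED.items():
        actual = current_value(key)
        flag = "OK" if actual == expected else f"MISMATCH (expected {expected})"
        print(f"{key} = {actual} {flag}")
```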

Posted 1 month ago

Apply

2.0 - 6.0 years

9 - 12 Lacs

Nagar

Work from Office

Summary
We are seeking a Junior DevOps Engineer to join our team. The ideal candidate will be responsible for supporting the development and deployment of software applications. The Junior DevOps Engineer will work closely with the development team to ensure that software is deployed in a timely and efficient manner. The candidate should have a strong understanding of software development and deployment processes.

Responsibilities
Work with the development team to ensure that software is deployed in a timely and efficient manner. Develop and maintain deployment scripts and automation tools. Monitor and troubleshoot production systems. Collaborate with other teams to ensure that software is deployed in a consistent and reliable manner. Participate in the design and implementation of new systems and processes. Continuously improve the deployment process to increase efficiency and reduce downtime.

Qualifications
Bachelor's degree in Computer Science or related field. Strong understanding of software development and deployment processes. Experience with automation tools such as Jenkins. Experience with Proxmox Virtual Environment is a must. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with containerization technologies such as Docker or Kubernetes. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills.
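As a concrete picture of the "deployment scripts and automation tools" this junior role mentions, below is a hypothetical post-deployment smoke test that polls a health endpoint until it returns HTTP 200. The URL and retry budget are placeholders that a pipeline such as Jenkins would pass in.

```python
"""Hypothetical post-deployment smoke test: wait for a service health endpoint to go green."""
import sys
import time

import requests


def wait_for_healthy(url, attempts=10, delay=6):
    """Return True once the endpoint answers 200, False if it never does."""
    for attempt in range(1, attempts + 1):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                print(f"healthy after {attempt} attempt(s)")
                return True
        except requests.RequestException as exc:
            print(f"attempt {attempt}: {exc}")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    ok = wait_for_healthy("https://app.example.internal/healthz")  # placeholder URL
    sys.exit(0 if ok else 1)
```

A non-zero exit code fails the pipeline stage, which is usually all the integration a CI tool needs.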

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Noida

Work from Office

About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service, all backed by TELUS, our multi-billion dollar telecommunications parent.

Required Skills:
Design, develop, and support data pipelines and related data products and platforms. Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms. Perform application impact assessments, requirements reviews, and develop work estimates. Develop test strategies and site reliability engineering measures for data products and solutions. Participate in agile development "scrums" and solution reviews. Mentor junior Data Engineers. Lead the resolution of critical operations issues, including post-implementation reviews. Perform technical data stewardship tasks, including metadata management, security, and privacy by design. Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies. Demonstrate SQL and database proficiency in various data engineering tasks. Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect. Develop Unix scripts to support various data operations. Model data to support business intelligence and analytics initiatives. Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation. Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion and Dataproc (good to have).

Qualifications:
Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or related field. 4+ years of data engineering experience. 2 years of data solution architecture and design experience. GCP Certified Data Engineer (preferred).
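The BigQuery and SQL proficiency this listing asks for can be illustrated with a parameterised query through the google-cloud-bigquery client. This is a hedged sketch: the project, dataset and table names are made up, and application default credentials are assumed to be configured.

```python
"""Illustrative sketch: run a parameterised BigQuery query with google-cloud-bigquery."""
from google.cloud import bigquery


def rows_loaded_since(client, load_date):
    """Count rows loaded on or after load_date in a hypothetical staging table."""
    query = """
        SELECT COUNT(*) AS row_count
        FROM `example_project.staging.orders`      -- placeholder table
        WHERE load_date >= @load_date
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("load_date", "DATE", load_date)]
    )
    result = client.query(query, job_config=job_config).result()
    return next(iter(result)).row_count


if __name__ == "__main__":
    print(rows_loaded_since(bigquery.Client(), "2024-01-01"))
```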

Posted 1 month ago

Apply

3.0 years

10 - 25 Lacs

Faridabad, Haryana, India

Remote

Experience: 3.00+ years. Salary: INR 1000000-2500000 / year (based on experience). Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full time Permanent Position. (*Note: This is a requirement for one of Uplers' clients - FourKites, Inc.)

What do you need for this opportunity? Must-have skills required: container security, Cloud Security, Security Automation, Vulnerability Assessment, Security Information and Event Management (SIEM).

FourKites, Inc. is looking for: At FourKites we have the opportunity to tackle complex challenges with real-world impacts. Whether it’s medical supplies from Cardinal Health or groceries for Walmart, the FourKites platform helps customers operate global supply chains that are efficient, agile and sustainable. Join a team of curious problem solvers that celebrates differences, leads with empathy and values inclusivity. We are seeking an experienced Security Engineer with a strong background in DevOps, DevSecOps, and cloud infrastructure management. The ideal candidate will have hands-on expertise in AWS, GCP, Azure, and microservices architecture, combined with a deep understanding of security principles and best practices. You will be responsible for implementing and securing cloud-based environments, deploying infrastructure with automation tools, and ensuring that security is embedded throughout the development lifecycle.

What you’ll be doing:
Cloud Infrastructure & Security: Architect and secure highly available, scalable, and fault-tolerant systems across AWS, GCP, and Azure environments. Design and implement cloud security solutions, focusing on compute, network, storage, content delivery, administration, and security. Implement security controls for Kubernetes clusters, containerized applications, and cloud-native services.
DevOps & Automation: Leverage automation technologies (Ansible, Chef, Puppet, Jenkins, Docker) to manage infrastructure and deployment pipelines. Develop, deploy, and maintain infrastructure-as-code solutions with tools such as CloudFormation, Terraform, and the AWS/GCP/Azure CLI. Enable CI/CD pipelines for secure application delivery while ensuring security is integrated into the build and deployment processes.
Programming & Application Security: Implement and secure microservices architecture using tools such as AWS Lambda, Docker, Kubernetes, and serverless technologies. Develop and maintain secure, scalable applications using programming languages such as C++, C#, Java, and Python.
Monitoring & Threat Detection: Continuously monitor cloud environments to identify and mitigate security threats and vulnerabilities. Conduct risk assessments and threat modeling for cloud applications and infrastructure. Use monitoring tools (e.g., AWS CloudWatch, GCP Stackdriver, Azure Monitor) to detect and respond to potential security incidents.
Collaboration & Reporting: Collaborate with cross-functional teams including business leaders, engineers, and other security professionals to design and implement security solutions. Communicate security risks, mitigations, and incident reports to both technical and non-technical stakeholders. Produce detailed documentation of security policies, procedures, and technical implementations.

Who you are: 3+ years of IT experience with a strong focus on DevOps, DevSecOps, and cloud security engineering.
Strong hands-on experience with cloud platforms such as AWS, GCP, and Azure, and familiarity with their foundational services (e.g., EC2, DynamoDB, API Gateway, RDS, Lambda, CloudFront, etc.). Strong experience in Kubernetes security controls is a must; CKA/CKAD/CKS preferred. In-depth knowledge of Kubernetes, microservices, container orchestration, and security controls. Experience designing, deploying, and securing cloud-native applications with a focus on scalability, high availability, and load balancing. CISSP (Certified Information Systems Security Professional) or equivalent industry-recognized security certifications, or AWS Associate or higher certifications (e.g., AWS Certified Solutions Architect – Associate), or equivalent certifications.

Technical Skills: Expertise in implementing security best practices in cloud environments and DevOps pipelines. Familiarity with container security tools and methodologies. Strong analytical, troubleshooting, and problem-solving skills with the ability to quickly identify and address security threats. Excellent verbal and written communication skills to effectively engage with stakeholders at all levels. Strong teamwork orientation, collaborating with multidisciplinary teams to achieve organizational goals.

Additional Requirements: Ability to work in a fast-paced environment and manage multiple tasks concurrently. A proactive approach to learning new technologies and staying up to date with industry trends in cloud security.

Who we are: FourKites® is the #1 supply chain visibility platform in the world, extending visibility beyond transportation into yards, warehouses, stores and beyond. Tracking more than 2.5 million shipments daily across road, rail, ocean, air, parcel and courier, and reaching over 185 countries, FourKites combines real-time data and powerful machine learning to help companies digitize their end-to-end supply chains. More than 1,000 of the world’s most recognized brands — including 9 of the top-10 CPG and 18 of the top-20 food and beverage companies — trust FourKites to transform their business and create more agile, efficient and sustainable supply chains.

Benefits: Medical benefits start on the first day of employment. 36 PTO days (sick, casual and earned), five recharge days, two volunteer days. Home office setup and technology reimbursement. Lifestyle and family benefits. Ongoing learning and development opportunities (professional development program, Toastmasters club, etc.). One can also apply through this link - https://job-boards.greenhouse.io/fourkites/jobs/6986116

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
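Among the security-automation skills this listing asks for, a typical starter task is scanning for security groups open to the internet. The following boto3 sketch is illustrative only: it assumes credentials with the ec2:DescribeSecurityGroups permission and a single region, and flags any ingress rule whose CIDR is 0.0.0.0/0.

```python
"""Hypothetical cloud-security sketch: flag AWS security groups open to 0.0.0.0/0."""
import boto3


def world_open_rules(region="us-east-1"):
    """Yield (group_id, from_port, to_port) for ingress rules open to the whole internet."""
    ec2 = boto3.client("ec2", region_name=region)
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for permission in group.get("IpPermissions", []):
            for ip_range in permission.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    yield group["GroupId"], permission.get("FromPort"), permission.get("ToPort")


if __name__ == "__main__":
    for group_id, from_port, to_port in world_open_rules():
        print(f"{group_id}: ports {from_port}-{to_port} open to the internet")
```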

Posted 1 month ago

Apply