Jobs
Interviews

2636 Helm Jobs

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 7.0 years

14 - 17 Lacs

Pune

Work from Office

Critical Skills to Possess:
- Strong experience with Azure Cloud
- Working knowledge of Kubernetes, Jenkins, and Terraform
- Working knowledge of the Packer and Flux tools
- Good exposure to Linux and Windows administration
- Strong knowledge of YAML scripting
- Good exposure to Helm
- Knowledge of setting up pipelines in Jenkins
- Ability to manage CI/CD automation, with knowledge of Bitbucket/Jenkins integration
- Interpersonal communication skills to interface with customers, peers, and management

Preferred Qualifications:
- Bachelor's degree in computer science or a related field (or equivalent work experience)

Roles and Responsibilities:
- Design, implement, and maintain the organization's continuous integration and delivery (CI/CD) pipelines to automate software build, test, and deployment processes.
- Collaborate with development teams to understand their requirements and provide technical guidance on building scalable and reliable infrastructure.
- Develop and maintain infrastructure as code (IaC) using tools like Ansible, Puppet, or Terraform to enable automated provisioning and configuration management.
- Manage and monitor cloud-based infrastructure (such as AWS, Azure, or Google Cloud) to ensure high availability, scalability, and performance of applications.
- Implement and maintain monitoring and logging systems to proactively identify and resolve performance bottlenecks and security vulnerabilities.
- Troubleshoot issues related to application deployment, performance, and reliability, working closely with development and operations teams to ensure timely resolution.
- Implement and enforce security best practices for infrastructure and applications, including access control, data encryption, and vulnerability scanning.
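Helm and YAML scripting both appear in the required skills above. Helm's core idea is rendering manifest templates from a values file; a highly simplified sketch of that idea using Python's stdlib `string.Template` (Helm itself uses Go templates, and the chart fields shown here are made up for illustration):

```python
from string import Template

# A stripped-down analogue of `helm template`: placeholders in a manifest
# are filled from a values mapping, the way templates/*.yaml is rendered
# against values.yaml.
MANIFEST = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $release-web
spec:
  replicas: $replicas
  template:
    spec:
      containers:
        - name: web
          image: $image:$tag
""")

def render(values: dict) -> str:
    """Render the manifest from a flat values mapping."""
    return MANIFEST.substitute(values)

manifest = render({"release": "demo", "replicas": 3,
                   "image": "nginx", "tag": "1.27"})
print(manifest)
```

Real Helm adds much more (value scoping, helpers, hooks, release tracking), but the values-in, manifest-out flow is the same.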

Posted 9 hours ago

Apply

4.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

As a Sr. DevOps Engineer, you will be responsible for automating all aspects of application development related to building, testing, deploying, monitoring, and scaling. You'll work closely with development teams to improve software delivery processes through continuous integration, automated testing, and deployment pipelines.

About the role:
- Design, build, and maintain CI/CD pipelines using industry-standard tools such as Jenkins, GitHub Actions, Azure DevOps, etc.
- Implement automated unit tests and end-to-end functional testing for web applications.
- Deploy web applications to cloud platforms like AWS or Azure.
- Monitor production environments using monitoring tools like Datadog.
- Implement best practices in system reliability engineering, including blue-green deployments, canary releases, A/B testing, and feature flags.
- Work with development teams to implement infrastructure as code using Terraform.
- Collaborate with development teams on improving software delivery processes by implementing feedback mechanisms, performance optimization techniques, and scalable architectures.
- Provide technical guidance and support to other team members.
- Document procedures and create documentation around deployment strategies, troubleshooting guides, and best practices.
- Continuously learn new technologies and trends in software development and DevOps.

About you:
- Bachelor's degree in computer science, information technology, or a related field; master's degree preferred.
- At least 4-7 years of experience working as a DevOps engineer, or development experience in a known technology such as .NET, Java, or Python.
- Proficiency in programming languages such as C#, Python, Java, or Node.js.
- Experience with containerization tools like Docker, Kubernetes, and Helm charts.
- Knowledge of version control systems like Git and branch management workflows.
- Expertise in CI/CD pipeline automation using Jenkins, GitHub Actions, Azure DevOps, or similar tools.
- Strong understanding of cloud computing concepts and of the services offered by the major public clouds (AWS, Azure, Google Cloud Platform).
- Familiarity with microservices architecture patterns and design principles.
- Experience with automated testing frameworks such as JUnit, Pytest, or Robot Framework.
- Knowledge of system reliability engineering best practices, including monitoring, logging, alerting, and incident management.
- Familiarity with configuration management tools like Chef, Ansible, Puppet, or SaltStack is a plus.
- Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders.

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry-Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals.
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
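The posting above lists canary releases and feature flags among its reliability practices. The underlying idea is to route a deterministic fraction of users to the new version so an assignment is stable across requests; a toy sketch (illustrative only; production systems use a service mesh or a feature-flag platform, and the function names here are made up):

```python
import hashlib

def bucket(user_id: str) -> int:
    """Deterministically map a user to a bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def serve_canary(user_id: str, rollout_percent: int) -> bool:
    """True if this user should receive the canary build."""
    return bucket(user_id) < rollout_percent

# At 0% nobody sees the canary; at 100% everyone does, and each
# user's assignment is stable for a given rollout percentage.
print(serve_canary("user-42", 10))
```

Ramping the rollout percentage (e.g. 1% → 10% → 50% → 100%) while watching error rates is the canary release pattern in miniature.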

Posted 11 hours ago

Apply

7.0 - 14.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Type: Full time
Type of Hire: Experienced (relevant combination of work and education)
Education Desired: Bachelor of Computer Engineering
Travel Percentage: 0%

Are you curious, motivated, and forward-thinking? At FIS you'll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and, above all, fun.

About The Team
This role is part of our OPF team. FIS Open Payment Framework (OPF) is a set of reusable and extensible components, frameworks, and technical services which can be assembled in different configurations to build a personalized payment processing system. From the Open Payment Framework, FIS has created predefined solutions around the bank payment hub, including Domestic & International Payments (XCT), SEPA Direct Debits & Credit Transfers (SEPA), SCT Inst, UK Faster Payments, Immediate Payments, eBanking (EBK), Business Payments (BP), NPP, BACS, and US ACH.

What You Will Be Doing
- Develop application code for Java programs
- Design, implement, and maintain Java application phases
- Design, code, debug, and maintain Java/J2EE application systems
- Apply object-oriented analysis and design (OOA and OOD)
- Evaluate and identify new technologies for implementation
- Convert business requirements into executable code solutions
- Provide leadership to the technical team

What You Bring
- 7 to 14 years of experience in Java technologies
- Experience in the banking domain
- Proficiency in Core Java, J2EE, ANSI SQL, XML, Struts, Hibernate, Spring, and Spring Boot
- Good experience with database concepts (Oracle/DB2), Docker (Helm), Kubernetes, core Java language features (collections, concurrency/multi-threading, localization, JDBC), and microservices
- Hands-on experience with web technologies (Spring or Struts, Hibernate, JSP, HTML/DHTML, REST web services, JavaScript)
- Knowledge of at least one J2EE application server, e.g. WebSphere Process Server, WebLogic, or JBoss
- Working knowledge of JIRA or equivalent

What We Offer You
- An exciting opportunity to be part of the world's leading FinTech product MNC
- Membership in a vibrant team and a career in the core banking/payments domain
- Competitive salary and attractive benefits, including GHMI/hospitalization coverage for the employee and direct dependents
- A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities

Privacy Statement
FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice.

Sourcing Model
Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company.

Posted 13 hours ago

Apply

2.0 years

1 - 6 Lacs

Hyderābād

Remote

Software Engineer
Hyderabad, Telangana, India

Date posted: Jul 28, 2025
Job number: 1844928
Work site: Up to 100% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Are you ready to be at the cutting edge of technology and make a global impact? At Microsoft, we're not just about cloud computing: we're about revolutionizing how people and organizations thrive through advanced technology. Join our Azure Specialized team in India and be part of a vibrant group of innovators shaping the future of cloud infrastructure. From building and offering specialized workloads, bare-metal, and software capabilities on Azure (involving large-scale specialized solutions like VMware, SAP, Oracle, Epic healthcare systems, etc.) to pioneering AI infrastructure, your work will push boundaries and redefine possibilities. We're looking for dynamic, customer-centric engineers eager to solve complex problems across computer science domains such as hardware, operating systems, networking, security, and distributed design. If you're passionate about sustainability and quality, and ready to advocate for and revolutionize customer experiences, you belong here with us. Dive into a role where you break down barriers, spearhead groundbreaking solutions, and lead initiatives that ensure exceptional service. Let's empower every person and organization on the planet to achieve more, together.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Required/Minimum Qualifications:
- Bachelor's degree in computer science or a related technical discipline with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR equivalent experience.
- 2+ years of professional experience designing, developing, and shipping software.
- Excellent design, coding, debugging, teamwork, and communication skills.

Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: the Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Additional or Preferred Qualifications:
- Bachelor's degree in computer science or a related technical field AND 1+ year(s) of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR a master's degree in computer science or a related technical field with proven experience coding in such languages; OR equivalent experience.
- A customer-focused innovation mindset.
- Passion for craftsmanship in engineering.
- Proven ability to solve complex technical issues for running online services.

Responsibilities
- Collaborate with internal business units and stakeholders to understand the requirements for efficient and effective delivery.
- Write clean, robust, and well-thought-out code with an emphasis on performance, simplicity, durability, scalability, and maintainability.
- Independently develop a product, service, or feature, taking code reusability, quality, and security into consideration.
- Develop and implement a testing strategy, including unit testing, functional testing, and end-to-end testing, using industry-standard testing tools and frameworks.
- Contribute to the architecture and design of the products and services.
- Use the debugging and analysis tools at your disposal to root-cause issues and provide a viable, permanent solution.
- Show flexibility and confidence to pick up any new programming language or tech stack based on the needs of the feature/project.
- Take the helm in ensuring seamless service operations by addressing real-time challenges as they emerge, directly enhancing service reliability and customer satisfaction.
- Help create a diverse and inclusive culture where everyone can bring their full and authentic self, where all voices are heard, and where we do our best work as a result.
- Embody Microsoft culture and values.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
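The responsibilities above call for a testing strategy spanning unit, functional, and end-to-end tests. A minimal pytest-style unit test looks like the sketch below (the function under test and its behaviour are hypothetical, chosen only to show the shape of such a test):

```python
# Function under test: an exponential backoff schedule, a common utility
# in service code. Each retry waits twice as long as the previous one.
def retry_delays(attempts: int, base: float = 1.0) -> list:
    """Return the backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(attempts)]

# pytest discovers test_* functions automatically; each one makes a
# single, focused assertion about the unit's behaviour.
def test_schedule_length():
    assert len(retry_delays(4)) == 4

def test_schedule_doubles():
    assert retry_delays(3) == [1.0, 2.0, 4.0]

# Here we simply call them directly instead of running pytest.
test_schedule_length()
test_schedule_doubles()
print("all tests passed")
```

Functional and end-to-end tests follow the same assert-on-behaviour pattern, just exercised through the service's public interface rather than a single function.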

Posted 13 hours ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1435552
Services | Gurgaon
Posted On: 28 Jul 2025
End Date: 11 Sep 2025
Required Experience: 5 - 10 Years

Basic Section
Number of Positions: 1
Band: C1
Band Name: Manager
Cost Code: D013514
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 1500000.0000 - 2500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: Analytics
Sub Group: Analytics - UK & Europe
Organization: Services
LOB: Analytics - UK & Europe
SBU: Analytics
Country: India
City: Gurgaon
Center: EXL - Gurgaon Center 38
Skills: Java, HTML
Minimum Qualification: B.COM
Certification: No data available

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About the Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications, with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities

Front-End Development
- Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
- Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
- Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
- Ensure cross-browser compatibility and optimize for performance and accessibility.
- Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development
- Develop RESTful and GraphQL APIs with Django/DRF and Node.js/Express.
- Design and implement microservices and event-driven architectures.
- Optimize server performance and ensure secure API integrations.

Database & Data Management
- Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB).
- Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
- Implement sharding, clustering, caching, and replication strategies for scalability.
- Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization
- Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
- Build live features (e.g., notifications, chat, analytics) using WebSockets and Server-Sent Events (SSE).
- Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment
- Deploy applications on cloud platforms (AWS, Azure, GCP).
- Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
- Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
- Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good to Have: AI & Advanced Capabilities
- Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
- Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
- Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications

Core Stack
- Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
- Back-End: Python (Django/DRF), Node.js/Express
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma)
- APIs: REST, GraphQL, gRPC

State-of-the-Art & Advanced Tools
- Streaming: Apache Kafka, Apache Pulsar, Redis Streams
- Visualization: D3.js, Highcharts, Plotly, Deck.gl
- Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
- Cloud: AWS Lambda, Azure Functions, Google Cloud Run
- Monitoring: Prometheus, Grafana, OpenTelemetry

Workflow
Workflow Type: Back Office
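The posting above lists vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search. At their core, these systems rank stored embedding vectors by similarity to a query vector; a toy in-memory version using cosine similarity (illustrative only; real vector databases use approximate nearest-neighbour indexes, and the document ids here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, corpus, k=2):
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Tiny corpus of (id, embedding) pairs; embeddings would normally come
# from a text-embedding model.
corpus = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.9, 0.1, 0.0]),
    ("doc-c", [0.0, 0.0, 1.0]),
]
print(top_k([1.0, 0.05, 0.0], corpus, k=2))
```

The same query-by-similarity step is what a RAG pipeline performs before handing retrieved documents to a language model.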

Posted 13 hours ago

Apply

18.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Organization:
At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Chief Engineer
Location: Bangalore

Business & Team:
In Business Banking Technology (BBT) we have a vision of becoming the leading business bank, powered by the next horizon of technology. We're delivering on this by working hand-in-hand with our business colleagues to jointly solve problems with customer centricity and technical innovation, cultivating a world-class team of empowered people, and building technology solutions for the future. We put the customer at the center of everything we do and measure our performance against the group's external customer satisfaction measures. The Business Banking Technology team within the Technology function manages the end-to-end technology needs for Business Banking (BB) within the CBA Group. Our team is composed of engineers and technology leaders who bring the right mix of skills to enable this transformation. We also work very closely with our business and operations colleagues to support these services, which are critical to the Australian and global economy. Working as part of the Payments Senior Leadership team, you will be accountable for prioritizing, coordinating, and leading the execution of technical platform delivery and service management across the BB domain.

Impact & Contribution:
This position will be responsible for working with a team of engineers across Payments Technology and partnering with stakeholders to build, design, and deliver solutions. As the Chief Engineer, you will lead a large multi-disciplinary function responsible for defining, designing, and building the highest-quality reusable frameworks for Payments Technology. You will define the capability uplift approach for the Chapter Area and drive uplift, in partnership with the Practice and peers (where applicable) and in line with industry. As important as ensuring quality outcomes, your role is focused on continually refining implementation standards, accelerating delivery through process improvements, and building a world-class team culture.

Reporting Lines:
- Direct line reporting into the Executive Manager, Payments Technology
- Functional (dotted line) reporting into the General Manager, Payments Technology

Roles & Responsibilities:
- Play a pivotal role in delivering on Payments Technology's purpose through quality execution and delivery.
- Leverage metrics and data to inform continuous improvement opportunities that increase the effectiveness of the Chapter.
- Implement, evolve, and support consistent ways of working across the Chapter and the CBA business that align to industry standards.
- Uplift the Chapter's maturity by fostering a culture of sharing, learning, and problem solving across Chapter Areas.
- Facilitate the constructive resolution of conflicts which may arise both internally and externally to the Chapter.
- Participate in and contribute to the Practice and/or Chapter, including maintaining technical experience commensurate with the specific Chapter.
- Provide oversight and leadership of data projects with strategic business and customer value for Business Banking.
- Optimize the resource model for efficiency and scale, with resources located onshore and offshore.
- Build world-class team culture and engagement with a strong focus on career development.
- Adhere to the Code of Conduct, which sets the standards of behaviour, actions, and decisions we expect from our people.

Essential Skills:
- An experienced senior leader with demonstrated success achieving measurable performance improvements across large and complex businesses.
- Well experienced with the "everything as code" development approach and related tooling such as Ansible, Terraform, and Python.
- 18+ years' experience, preferably gained in banking and finance.
- Hands-on experience with the following technologies: Java, TypeScript, PySpark or Python; AWS cloud and data platforms.
- AI: exposure to AI tools in engineering, e.g. GitHub Copilot, Route Load.
- Proven ability to design, implement, and manage CI/CD/CT pipelines using GitHub Actions.
- Expertise with microservices, REST API integration, and detailed solution design.
- Sound knowledge and experience working with AWS services (such as EKS, Helm charts, and Lambda, among others).
- Ability to drive and influence senior-level stakeholder engagement across business and technology.
- Leadership experience across multi-disciplinary teams.
- Excellent communication skills, especially verbal communication, and proven presentation skills to large audiences.
- An experienced technologist who understands data platforms and their capabilities.

Education Qualifications:
Bachelor's or master's degree in engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest and x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 09/08/2025
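CI/CD pipelines with GitHub Actions and Helm charts on EKS are both called out in the skills above. One small, common pipeline gate for Helm charts is refusing to publish unless the chart version was bumped; a sketch of that check (assumes plain `MAJOR.MINOR.PATCH` versions with no pre-release tags, and the function names are illustrative):

```python
def parse_semver(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints for comparison."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

def requires_bump(published: str, candidate: str) -> bool:
    """Gate: the candidate chart version must be strictly newer
    than the last published version."""
    return parse_semver(candidate) > parse_semver(published)

# A pipeline step would read `published` from the chart repository and
# `candidate` from Chart.yaml, then fail the build if this returns False.
print(requires_bump("1.4.2", "1.5.0"))  # minor bump: allowed
print(requires_bump("1.4.2", "1.4.2"))  # same version: rejected
```

Tuple comparison gives the correct precedence ordering (major, then minor, then patch) without any custom logic.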

Posted 13 hours ago

Apply

3.0 - 5.0 years

5 - 5 Lacs

Bengaluru

Work from Office

Role Proficiency:
Acts under minimal guidance of a DevOps Architect to set up and manage DevOps tools and pipelines.

Outcomes:
- Interpret the DevOps tool/feature/component design and develop/support it in accordance with specifications
- Follow, and contribute to, existing SOPs to troubleshoot issues
- Adapt existing DevOps solutions for new contexts
- Code, debug, test, and document; communicate the status of DevOps development and support issues
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Support users, onboarding them onto existing tools with guidance from DevOps leads
- Work with diverse teams using Agile methodologies
- Facilitate cost-saving measures through automation
- Mentor A1 and A2 resources
- Participate in the team's code reviews

Measures of Outcomes:
- Schedule adherence
- Quality of the code
- Defect injection at various stages of the lifecycle
- SLAs related to level 1 and level 2 support
- Number of domain/product certifications obtained
- Savings facilitated through automation

Outputs Expected:
- Automated components: deliver components that automate installation/configuration of software/tools on-premises and in the cloud, and components that automate parts of the build/deploy for applications
- Configured components: configure a CI/CD pipeline that can be used by application development/support teams
- Scripts: develop/support scripts (e.g. PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks
- Onboarding: onboard and extend existing tools to new app dev/support teams
- Mentoring: mentor and provide guidance to peers
- Stakeholder management: guide the team in preparing status updates and keep management informed of progress
- Database: data insertion, update, deletion, and view creation

Skill Examples:
- Install, configure, and troubleshoot CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Integrate with code/test quality analysis tools like SonarQube/Cobertura/Clover
- Integrate build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python, Linux Shell, Perl, Groovy, PowerShell)
- Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
- Build automation scripts: Maven/Ant
- Artifact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS/Azure/Google)
- Migration of applications from on-premises to cloud infrastructure
- Working with Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration)
- Strong debugging skills in C#/.NET
- Basic working knowledge of databases

Knowledge Examples:
- Installation/configuration/build/deploy tools and DevOps processes
- IaaS cloud providers (AWS/Azure/Google, etc.) and their tool sets
- The application development lifecycle
- Quality assurance processes
- Quality automation processes and tools
- Agile methodologies
- Security policies and tools

Additional Comments:
Must Have
- DevOps: Git, Jenkins pipelines, test automation
- Orchestration: Docker, Kubernetes
- Scripting: Python, Linux Shell
- OS: Linux (in-depth), Windows
- Networking: network configuration and debugging
- Cloud: Azure/OpenShift, public cloud management
- Security practices: knowledge of critical cyber security controls

Would Be Nice
- Cloud: M365, AWS, etc.
- Orchestration: Terraform, Cloudify, other IaC
- SRE: Flux, Grafana, Splunk
- Agile: Jira
- Products: BigID

Professional Skills
- Experience working within Agile teams
- Experience in product integration, e.g. taking open-source products/tools and deploying/integrating them into an enterprise environment using DevOps methodologies
- Knowledge of IT Service Management (ITIL)
- Ability to quickly learn and understand proprietary technologies in a complex regulated environment
- Self-starter with proven and demonstrable experience in technical and application support of enterprise systems
- Excellent verbal and written communication skills coupled with a collaborative approach
- An automation/orchestration mindset, enabling the product squads to spend more time coding and less time on manual processes

Attributes
- Passionate about automation, DevOps, and SRE
- Comfortable with frequent, incremental code testing and deployment
- Takes a hands-on approach to implementing DevOps processes, from requirements analysis through test design, automation, and analysis
- Owns the quality and timeliness of delivery
- Communicates key issues and progress updates in a regular, accurate, and timely fashion

Additional Notes (3-6 years; one position):
- Hands-on experience with Docker containers and images (3 months to 2 years)
- Kubernetes is a must: very strong fundamentals, basics, and real hands-on experience; native Kubernetes on AKS, AWS, on-premises, or Red Hat OpenShift is all acceptable
- Linux and shell scripting: moderate-level experience required
- Observability: minimum experience with Grafana, Elasticsearch, or similar
- Automated pipelines: real hands-on experience deploying with Jenkins required
- Awareness of basic networking concepts
- Awareness of Kubernetes and cloud certifications is desirable, though certification itself is not mandatory
- Helm/Helm chart experience is preferred; orchestration-wise, Kubernetes is required
- Hands-on deployment of microservices; any cloud (AWS/Azure) is fine
- SSL concepts and moderate security knowledge
- ITSM concepts (incident, problem, change, and release) are desirable

Required Skills: Kubernetes, DevOps, Docker

Posted 13 hours ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Are you ready to be at the cutting edge of technology and make a global impact? At Microsoft, we’re not just about cloud computing—we’re about revolutionizing how people and organizations thrive through advanced technology. Join our Azure Specialized team in India and be a part of a vibrant group of innovators shaping the future of cloud infrastructure. From building and offering specialized workloads, bare-metal and software capabilities on Azure, involving large-scale specialized solutions like VMWare, SAP, Oracle, Epic Healthcare systems etc. to pioneering in AI infrastructure, your work will push boundaries and redefine possibilities. We’re looking for dynamic, customer-centric engineers eager to solve complex problems across various computer science domains such as hardware, operating systems, networking, security, and distributed design. If you’re passionate about sustainability and quality, and ready to advocate for and revolutionize customer experiences, you belong here with us. Dive into a role where you break down barriers, spearhead groundbreaking solutions, and lead initiatives that ensure exceptional service. Let’s empower every person and organization on the planet to achieve more, together. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Collaborate with internal business units and stakeholders to understand the requirements for efficient and effective delivery. Write clean, robust, and well-thought-out code with an emphasis on performance, simplicity, durability, scalability, and maintainability. 
Independently develop a product, service, or feature, taking code reusability, quality, and security into consideration. Develop and implement a testing strategy, including unit testing, functional testing, and end-to-end testing, using industry-standard testing tools and frameworks. Contribute to the architecture and design of the products and services. Use the debugging and analysis tools at your disposal to root-cause issues and provide a viable and permanent solution. Show flexibility and confidence to pick up any new programming language or tech stack based on the needs of the feature/project. Take the helm in ensuring seamless service operations by addressing real-time challenges as they emerge, empowering you to directly enhance service reliability and customer satisfaction. Help create a diverse and inclusive culture where everyone can bring their full and authentic self, where all voices are heard, and where we do our best work as a result. Embody Microsoft culture and values. Qualifications Required/Minimum Qualifications: Bachelor's Degree in Computer Science or a related technical discipline with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience. 2+ years of professional experience in designing, developing, and shipping software. Excellent design, coding, debugging, teamwork, and communication skills. Other Requirements: Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Additional Or Preferred Qualifications Bachelor's Degree in Computer Science OR related technical field AND 1+ year(s) technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, OR Python OR Master's Degree in Computer Science or related technical field with proven experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. Have a customer focused innovation mindset. Passionate about craftsmanship in engineering. Proven ability to solve complex technical issues for running online services. #azurecorejobs Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 13 hours ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Bengaluru

Work from Office

Role Proficiency: Acts under the guidance of DevOps leadership; leads more than one Agile team.

Outcomes:
- Interprets the DevOps tool/feature/component design to develop/support it in accordance with specifications
- Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts
- Codes, debugs, tests, documents, and communicates the stages/status of DevOps development and support issues
- Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Optimises the efficiency, cost, and quality of DevOps process, tool, and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs
- Designs, installs, and troubleshoots CI/CD pipelines and software
- Automates infrastructure provisioning on cloud/on-premises with the guidance of architects
- Provides guidance to DevOps engineers so that they can support existing components
- Good understanding of Agile methodologies; able to work with diverse teams
- Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source)

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA/KPI for onboarding projects or applications
- Stakeholder management
- Percentage achievement of specification/completeness/on-time delivery

Outputs Expected:
- Automated components: deliver components that automate the installation/configuration of software/tools on-premises and on cloud, and components that automate parts of the build/deploy for applications
- Configured components: configure tools and the automation framework into the overall DevOps design
- Scripts: develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation/configuration/build/deployment tasks
- Training/SOPs: create training plans/SOPs to help DevOps engineers with DevOps activities and with onboarding users
- Measure process efficiency/effectiveness: deployment frequency, innovation and technology changes
- Operations: change lead time/volume, failed deployments, defect volume and escape rate, mean time to detection and recovery

Skill Examples:
- Experience in designing, installing, configuring, and troubleshooting CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
- Integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell)
- Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
- Repository management/migration automation: Git, Bitbucket, GitHub, ClearCase
- Build automation scripts: Maven, Ant
- Artefact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS, Azure, Google)
- Migration of applications from on-premises to cloud infrastructure
- Azure DevOps, ARM (Azure Resource Manager) and DSC (Desired State Configuration), with strong debugging skills in C#/.NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Containerization tools such as Docker and Kubernetes

Knowledge Examples:
- Installation/config/build/deploy processes and tools
- IaaS cloud providers (AWS, Azure, Google, etc.) and their tool sets
- The application development lifecycle
- Quality assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build and release; branching/merging
- Containerization
- Agile methodologies
- Software security compliance (GDPR/OWASP) and tools (Black Duck/Veracode/Checkmarx)

Additional Comments:
Must Have
- DevOps: Git, Jenkins Pipeline, test automation
- Orchestration: Docker, Kubernetes
- Scripting: Python, Linux Shell
- OS: Linux (in-depth), Windows
- Networking: network configuration and debugging
- Cloud: AWS/Azure
- Cloud: Azure/OpenShift, public cloud management
- Security practices: knowledge of critical cyber security controls

Would Be Nice
- Cloud: M365, AWS, etc.
- Orchestration: Terraform, Cloudify, other IaC
- SRE: Flux, Grafana, Splunk
- Agile: Jira
- Products: BigID

Professional Skills
- Experience working within Agile teams
- Experience in product integration, e.g., taking open-source products/tools and deploying/integrating them into an enterprise environment using DevOps methodologies
- Knowledge of IT Service Management (ITIL)
- Ability to quickly learn and understand proprietary technologies in a complex regulated environment
- Self-starter with proven, demonstrable experience in technical and application support of enterprise systems
- Excellent verbal and written communication skills coupled with a collaborative approach
- An automation/orchestration mindset, enabling the product squads to spend more time coding and less time on manual processes

Attributes
- Passionate about automation, DevOps and SRE
- Comfortable with frequent, incremental code testing and deployment
- Takes a hands-on approach to implementing DevOps processes, from requirements analysis through test design, automation and analysis
- Owns the quality and timeliness of delivery
- Communicates key issues and progress updates in a regular, accurate, timely fashion

Additional Notes:
- 6-12 years; one position at lead level, able to guide juniors (more than 12 years not needed)
- Docker: hands-on experience with containers and images (3 months to 2 years)
- Kubernetes is a must; candidates should be very strong on fundamentals and basics, with real hands-on experience
- Any Kubernetes flavour is fine: AKS, AWS, on-premises, Red Hat OpenShift
- Linux and shell scripting: moderate experience required
- Observability: minimum experience with Grafana, Elasticsearch, or similar
- Automated pipelines: real hands-on experience deploying with Jenkins is required
- Awareness of basic networking concepts
- Awareness of Kubernetes and cloud certifications is helpful, though certification is not mandatory
- Helm / Helm chart experience is preferred
- Orchestration-wise, Kubernetes is a must
- Hands-on deployment of microservices
- Comfortable with any cloud (AWS/Azure)
- SSL concepts and moderate security knowledge
- ITSM concepts (incident, problem, change and release) are desirable

Required Skills: Kubernetes, DevOps, Docker

Posted 14 hours ago

Apply

6.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Job Requirements

Job Description
- Development experience in Java and Node.js/TypeScript
- Must have hands-on experience in Kubernetes or Helm
- Basic knowledge of AWS cloud infrastructure and services (optional)
- Develop and maintain CI/CD pipelines using tools like Jenkins
- Automate deployment and configuration management using tools such as Terraform and Ansible
- Monitor system performance and troubleshoot issues to ensure high availability and reliability
- Collaborate with development teams to integrate DevOps practices into the software development lifecycle
- Implement security best practices for cloud environments
- Maintain documentation of systems, processes, and procedures
- Stay updated on industry trends and emerging technologies in DevOps and cloud computing

Required Skills and Experience
- Strong development background in Java and Node.js or TypeScript
- Must have hands-on experience in Kubernetes or Helm
- Expertise in cloud technologies like AWS (optional)
- Expertise in shell scripting (optional)
- Expertise in developing and managing pipelines
- Expertise in source code management using Git/Bitbucket
- Experience in container technologies like Docker
- Experience in build/release management
- Good understanding of Agile processes
- Cloud platforms: extensive experience on the AWS platform - EC2, EKS, ECS, S3, RDS, IAM, Kubernetes, Helm (desirable)
- Monitoring tools: knowledge of monitoring and observability tools such as Grafana (optional)
- Automation: comfortable with infrastructure-as-code (e.g., Terraform, Ansible) (optional)
- Problem-solving: strong analytical skills to troubleshoot complex issues
- Deployment know-how: CI/CD, pod management, SonarQube, Git, etc.
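To give a flavour of the Helm-driven deployment automation this role calls for, here is a minimal Python sketch that assembles a `helm upgrade --install` command line as an argument list; the release name, chart path, and values shown are hypothetical placeholders, not part of any real project.

```python
import shlex


def build_helm_upgrade(release, chart, namespace, values=None):
    """Assemble a `helm upgrade --install` command as an argument list.

    Building the command as a list (rather than one shell string) avoids
    quoting bugs when it is later handed to subprocess.run().
    """
    cmd = ["helm", "upgrade", "--install", release, chart,
           "--namespace", namespace, "--create-namespace"]
    for key, value in (values or {}).items():
        cmd += ["--set", f"{key}={value}"]
    return cmd


# Hypothetical release/chart names for illustration only.
cmd = build_helm_upgrade("payments", "./charts/payments", "prod",
                         {"image.tag": "1.4.2"})
print(shlex.join(cmd))
```

In a pipeline step this list would typically be passed to `subprocess.run(cmd, check=True)` so a failed rollout fails the Jenkins stage.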

Posted 15 hours ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Engineering Services Practitioner. Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management. Must-have skills: 5G Wireless Networks & Technologies. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education. Job Title: 5G Core Network Ops Senior Engineer. Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination. Roles and Responsibilities: 1. Fault Handling & Troubleshooting: Provide Level 2 (L2) support for 5G Core SA network functions in a production environment. Nokia EDR operations and support: monitor and maintain the health of Nokia EDR systems. Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery. Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.). Validate EDR formats and schemas against 3GPP and Nokia specifications. NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform. Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments. Analyze alarms from NetAct/Mantaray or external monitoring tools. Correlate events using Netscout, Mantaray, and PM/CM data.
Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handovers. Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload). Perform packet tracing (Wireshark) or core tracing (PCAP, logs), and Nokia PCMD trace capture and analysis. Perform root cause analysis (RCA) and implement corrective actions. Handle escalations from Tier-1 support and provide timely resolution. 2. Automation & Orchestration: Automate deployment, scaling, healing, and termination of network functions using NCOM. Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD). Integrate NCOM with third-party systems using open APIs and custom plugins. 3. Session & Service Investigation: Trace subscriber issues (5G attach, PDU session, QoS). Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM). Correlate user-plane drops, abnormal releases, and bearer QoS mismatches. Work with the L1 team on preventive measures for health checks and backups. 4. Configuration and Change Management: Create a MOP for required changes; validate the MOP with Ops teams and stakeholders before rollout/implementation. Maintain detailed documentation of network configurations, incident reports, and operational procedures. Support software upgrades, patch management, and configuration changes. Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs). Audit NRF/PCF/UDM configuration and databases. Validate policy rules, slicing parameters, and DNN/APN settings. Support integration of new 5G Core nodes and features into the live network. 5. Performance Monitoring: Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization. Proactively detect degrading KPI trends. 6. Security & Access Support: Application support for Nokia EDR and CrowdStrike.
Assist with certificate renewal, firewall/NAT issues, and access failures. 7. Escalation & Coordination: Escalate unresolved issues to L3 teams, Nokia TAC, and OSS/Core engineering. Work with L3 and the care team on issue resolution. Ensure compliance with SLAs and contribute to continuous service improvement. 8. Reporting: Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance. Technical Experience and Professional Attributes: 5-9 years of hands-on experience in the telecom industry. Mandatory experience with the Nokia 5G Core SA platform. Hands-on experience with Nokia EDR operations and support: monitoring and maintaining the health of Nokia EDR systems, and performing log analysis and troubleshooting of issues related to EDR generation, parsing, and delivery. Experience operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform, including deployment, scaling, healing, and termination of network functions using NCOM. Solid understanding of 5G packet core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues. Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning, and Flowone. In-depth understanding of 3GPP call flows for 5G SA and 5G NSA, call routing, number analysis, system configuration, and data roaming, plus knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI. Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based). Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces. Strong analytical and troubleshooting skills. Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana). Knowledge of network protocols and security (TLS, IPsec).
Excellent communication and documentation skills. Educational Qualification: BE / BTech 15 Years Full Time Education Additional Information: Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes). Experience in Nokia Platform 5G Core, NCOM, NCS, Nokia Private cloud and Public Cloud (AWS preferred), cloud-native environments (Kubernetes, Docker, CI/CD pipelines). Cloud Certifications (AWS)/ Experience on AWS Cloud
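The KPI monitoring described in this listing (e.g., registration success rate and degrading-trend detection) reduces to simple arithmetic on counters. A minimal illustrative Python sketch follows; the counter values and the 98% threshold are assumptions for illustration, not Nokia or 3GPP defaults.

```python
def registration_success_rate(attempts, successes):
    """Registration success rate as a percentage; 0.0 when there were no attempts."""
    if attempts <= 0:
        return 0.0
    return round(100.0 * successes / attempts, 2)


def kpi_degraded(samples, threshold=98.0):
    """Flag a degrading KPI trend: every sample in the window is below threshold."""
    return bool(samples) and all(s < threshold for s in samples)


# Three polling intervals of (attempts, successes) counters - invented numbers.
window = [registration_success_rate(a, s)
          for a, s in [(10_000, 9_790), (10_000, 9_760), (10_000, 9_742)]]
print(window, kpi_degraded(window))
```

A real deployment would pull these counters from the PM pipeline and feed the flag into a dashboard alert rather than printing it.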

Posted 15 hours ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

#Work from office in Gurgaon #Immediate joiners only We are seeking a skilled DevSecOps Engineer with 3–5 years of hands-on experience to join our growing team. The ideal candidate will be responsible for embedding security into every phase of the development lifecycle, automating infrastructure, and ensuring observability and performance across cloud-native environments. Key Responsibilities: Security Integration: Integrate security controls into CI/CD pipelines using tools like Jenkins to enable secure delivery of applications. Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, Ansible, or similar tools. Monitoring & Observability: Deploy and manage monitoring and logging tools like Prometheus, Grafana, CloudWatch, and Azure Application Insights. Containerization & Orchestration: Build and manage containerized applications using Docker and Kubernetes, including Helm chart creation. Scripting & Automation: Write automation scripts using Bash, Shell, or similar to streamline operational tasks. Security Audits & Compliance: Perform regular audits and assessments to ensure systems meet internal and external security standards. Collaboration & Knowledge Sharing: Work closely with development and operations teams to advocate secure coding practices and support incident response readiness. Telemetry & Dashboards: Configure telemetry in Azure for diagnostics and usage insights, build proactive dashboards, and create alerts to detect anomalies and bottlenecks. Qualifications & Skills: Bachelor’s degree in computer science, Engineering, or a related field. 3–5 years of experience in DevSecOps, DevOps, or Cloud Infrastructure roles. Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions). Hands-on expertise in Terraform, Ansible, or other IAC tools. Proficiency in Docker, Kubernetes, and Helm. Familiarity with monitoring tools such as Prometheus, Grafana, and Azure Application Insights. 
Solid understanding of security frameworks and compliance standards. Excellent scripting skills in Bash/Shell. Good communication and cross-functional collaboration skills.
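As an illustration of the proactive alerting this listing mentions ("create alerts to detect anomalies and bottlenecks"), here is a minimal sketch of a rolling-window z-score check over telemetry samples. The latency numbers and the 3-sigma threshold are illustrative assumptions, not any product's actual alert rule.

```python
from statistics import mean, pstdev


def anomalies(samples, window=5, z=3.0):
    """Return indices of samples more than z standard deviations away from
    the mean of the preceding window - a crude stand-in for an alert rule."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged


# Invented latency samples (ms); the spike at index 5 should be flagged.
latencies = [120, 118, 121, 119, 122, 480, 121]
print(anomalies(latencies))
```

Production systems would express the same idea as a Prometheus alerting rule or an Azure Monitor alert instead of inline Python, but the arithmetic is the same.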

Posted 15 hours ago

Apply

9.0 - 14.0 years

6 - 9 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate about working on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions.

About the role:
- Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas
- Mentor and lead junior engineers; drive the team to meet and beat expected outcomes
- Continuously look for optimization in automation cycles and come up with solutions for gap areas
- Work closely with developers/architects and set the quality bar for the team; identify critical issues and communicate them effectively in a timely manner
- Familiarity with bug-tracking platforms such as JIRA, Bugzilla, etc. is a must; filing defects effectively, i.e., noting all the relevant details to reduce back-and-forth and aid quick turnaround on bug fixing, is an essential trait for this job
- Automate using popular frameworks suitable for backend code, APIs, and frontend; hands-on experience with automation programming languages (Python, Go, Java, etc.) is a must
- Execute, monitor, and debug automation runs
- Be willing to explore and deepen understanding of cloud/on-prem infrastructure

About you:
- 9-15 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required
- Ability to quickly learn a product or concept, viz. its feature set, capabilities, and functionality
- Solid fundamentals in any programming language (preferably Python or Go)
- Sound knowledge of popular automation frameworks such as Selenium, Playwright, Postman, PyTest, etc.
- Hands-on experience with any of the popular CI/CD tools such as TeamCity or Jenkins is a must
- RESTful API testing using tools such as Postman or similar is a must
- Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is an added advantage
- Exposure to Kubernetes, Docker, Helm, and GitOps is a must
- Strong foundational knowledge of working on Linux-based systems
- Hands-on experience with non-functional testing, such as performance and load, is desirable
- Some proficiency with Prometheus, Grafana, service metrics, and analysis is highly desirable
- Understanding of cyber security concepts would be helpful

Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
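A flavour of the pytest-style API testing this role asks for: a self-contained sketch that validates a hypothetical `/health` response payload. The field names and schema here are invented for illustration; in a real suite the payload would come from an HTTP client such as `requests`, and pytest would collect the `test_*` functions automatically.

```python
def validate_health_payload(payload):
    """Return a list of schema problems in a /health response; empty means valid.

    The field names below are illustrative, not a real product schema.
    """
    problems = []
    if payload.get("status") not in {"ok", "degraded"}:
        problems.append("status must be 'ok' or 'degraded'")
    uptime = payload.get("uptime_s")
    if not isinstance(uptime, (int, float)) or uptime < 0:
        problems.append("uptime_s must be a non-negative number")
    for svc in payload.get("services", []):
        if "name" not in svc:
            problems.append("each service entry needs a 'name'")
    return problems


# In a real suite the payload would come from requests.get(...).json();
# pytest collects functions named test_* and reports assertion failures.
def test_health_payload_valid():
    payload = {"status": "ok", "uptime_s": 3600, "services": [{"name": "api"}]}
    assert validate_health_payload(payload) == []


def test_health_payload_rejects_bad_status():
    assert validate_health_payload({"status": "down", "uptime_s": 1}) != []
```

Keeping validation in a plain function like this makes the same check reusable from Postman-exported collections, CI smoke tests, and local debugging.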

Posted 16 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Lead Software Engineer Overview We are looking for a Lead Software Engineer to join an award-winning team with a proven track record of combining data science techniques with an intimate knowledge of payments data to aid Financial Institutions in their fight against money laundering and fraud. We craft bespoke services that help our clients gain an understanding of the underlying criminal behaviour that drives financial crime, empowering them to take action. As part of the application development team, your role will focus on creating and maintaining products across the whole lifecycle. Role Establish and enforce best practices for microservices architecture, ensuring scalability, reliability, and maintainability of our solutions. Collaborate with cross-functional teams to define project requirements and deliver scalable solutions. Mentor team members on microservices design principles, patterns and technologies. Take personal responsibility for creating and maintaining microservices, primarily in Golang. Iterate design and build to solve bugs, improve performance, and add new features. Containerise your services and make ready for deployment onto a k8s environment using helm charts. Ensure resilience and reliability of services. Develop a complete understanding of end-to-end technical architecture and dependency systems. Apply that understanding in code. Write tests with high coverage including unit, contract, e2e, and integration. 
Version control code with git and build, test and deploy using ci/cd pipelines. Build and test remotely on your own machine and deploy to low-level envs. Review team members code, identifying errors and improving performance and readability. Drive code design and process trade-off discussions within team when required. Report status and manage risks within your primary application/service. Perform demos and join acceptance discussions with analysts, developers and product owners. Assist in task planning and review as part of a sprint-based workflow. Estimate and own delivery tasks (design, dev, test, deployment, configuration, documentation) to meet the business requirements. The role is hybrid, and the expectation is that you attend the office according to Mastercard policy. All About You First and foremost, you enjoy building products to solve real, pressing problems for your customers. You enjoy working in a team and have an interest in data science and how advanced algorithms may be deployed as product offerings. You are detail oriented and enjoy writing and reviewing code to a high standard with tests to prove it. Demonstrable ability to write Golang, Python and SQL in a production context. You are happy to learn new programming languages and frameworks as necessary. Experience with large volumes of data and high throughput, low latency solutions. You have experience with, and are interested in, contemporary approaches to service design, including the use of containers and container orchestration technologies, streaming data platforms, APIs, and in-memory/NoSQL stores. You have experience in resolving different solutions and approaches to problems and can choose between pragmatic and rigorous solutions depending on the situation. You are comfortable working in a devops-based software development workflow, including building, testing, and continuous integration/deployment. 
You are also happy to evolve along with the development process and contribute to its success. You are comfortable communicating with a range of stakeholders, including subject matter experts, data scientists, software engineers, devops and security professionals. You have the ability to engage with best practices for code review, version control, and change control, balancing the need for a quality codebase with the unique and particular demands of scale-up stage software engineering. You have experience optimising solution performance with a constrained set of technologies. You have experience with, or are keen to engage with, productionising machine learning technologies. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard's security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Posted 17 hours ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

ClockHash Technologies is looking for an experienced Senior Backend Developer with strong expertise in Python or Node.js. You will be part of a dedicated R&D team from France, working on cutting-edge solutions that drive innovation in network management systems. Our dynamic team thrives on collaboration, autonomy, and continuous growth. Education: Bachelor's degree in a relevant field (ICT, Computer Engineering, Computer Science, or Information Systems preferred). Experience: Minimum 5 years working with modern WebGUI technologies based on Python, Node.js, or MVC frameworks. Work Location: Bangalore. Work mode: Hybrid, 2 days per week in the office. Preferred Skills: Primary Technologies: Strong expertise in Python or Node.js, with a deep understanding of backend development, API design, and system architecture. Microservices & Cloud: Hands-on experience with microservices architecture, container-based deployments, and RESTful APIs. Deployment & Orchestration: Proficiency in using Helm for Kubernetes deployments. Operating Systems: Strong knowledge of Linux concepts. Database: Experience with MySQL databases. Soft Skills: Autonomous, proactive, and curious personality; strong communication and collaboration skills. Language: Fluency in English, both oral and written. Key Responsibilities: Design, develop, and maintain web applications for large-scale data handling. Ensure application performance, security, and reliability. Develop and deploy microservices-based applications using containerization technologies. Ensure proper deployment of container-based applications with Docker Swarm or Kubernetes, providing necessary artifacts and documentation, and manage Kubernetes deployments using Helm. Work with RESTful APIs for seamless system integrations. Maintain and optimize MySQL database solutions. Participate in Agile processes, including sprint planning, code reviews, and daily stand-ups. Troubleshoot, debug, and enhance existing applications.
Nice to Have Experience with modern Web UI frameworks and libraries. Familiarity with Laravel and other MVC frameworks for structured web development. Exposure to CI/CD pipelines and DevOps practices. Experience with cloud platforms like AWS, GCP, or Azure. Knowledge of message queue systems like RabbitMQ or Kafka. Knowledge of front-end technologies such as React or Vue.js . Networking: Familiarity with networking technologies is appreciated. What We Offer Friendly environment with good work-life balance. Opportunity to grow and visibility for your work. Health Insurance. Work from Home support (covering Internet Bill, Gym, or Recreational activities costs). Educational allowances (Certification fees reimbursement). Rich engagement culture with regular team events. ClockHash Technologies is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, age, marital status, disability, or status as a protected veteran. Please note: The initial screening call will be conducted by our AI assistant.
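The Helm work this role asks for amounts to packaging Kubernetes manifests as templated charts. A minimal sketch of a chart's values file and the templated Deployment that consumes it (chart, registry, and image names are hypothetical, not from any posting):

```yaml
# values.yaml — hypothetical defaults for a backend-service chart
replicaCount: 2
image:
  repository: registry.example.com/backend-api
  tag: "1.4.2"
service:
  port: 8000
---
# templates/deployment.yaml — Helm substitutes the values above at install time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-api
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-api
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```

Deploying or upgrading such a chart is then typically a single command: `helm upgrade --install backend ./chart -f values.yaml`.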

Posted 17 hours ago

Apply

7.0 - 11.0 years

0 Lacs

Surat, Gujarat

On-site

As a Senior Full Stack Developer at Sakrat, you will be an integral part of our team, contributing to both the frontend and backend of robust, scalable platforms. You will work with React/TypeScript on the frontend and Python/FastAPI/Celery on the backend, ensuring seamless integration between the two layers. You will play a key role in developing the visual editor using react-flow, enhancing backend logic with Celery for asynchronous job queues, and optimizing REST APIs for efficient performance. You will also containerize the system for streamlined multi-user deployments using Docker and Kubernetes.

Backend:
- Strong proficiency in Python (3.10+).
- Hands-on experience building REST APIs with FastAPI/Uvicorn.
- Familiarity with Celery, Redis, and RabbitMQ for job queues and worker scaling.
- Database skills covering PostgreSQL, SQLite, and ORMs such as SQLAlchemy or Tortoise.
- Knowledge of Docker, Docker Compose, and container-based deployments; exposure to Kubernetes or Helm is advantageous.
- Well-versed in environment-based configuration, .env patterns, and secure secrets management.

Frontend:
- Expertise in React (functional components, hooks) and TypeScript.
- Experience using react-flow to build visual editors.
- Good understanding of frontend build tools (npm, webpack) and CSS frameworks such as Tailwind CSS.
- Ability to create responsive, accessible, and dynamic UI components that contribute to an intuitive user experience.

In addition to these technical skills, you will be expected to architect clean frontend-backend integrations, deploy full-stack applications in production environments, and understand CI/CD pipelines, versioning, and testing. Experience with multi-user architecture, session handling, and security best practices will be beneficial. Bonus skills such as familiarity with LangChain, RAG, or agent-based LLM pipelines, contributions to open-source projects, or prior experience with flow-based editors or chat widgets will be considered a plus.

About Sakrat: Sakrat is a product engineering and digital transformation partner focused on building high-performance software systems for startups, scaleups, and enterprises. You will collaborate closely with founders, CTOs, and product leaders to deliver clean MVPs, modernize legacy platforms, and optimize cloud infrastructure. Our projects are led by experts and senior engineers with extensive experience in platform development, SaaS, AI, and enterprise systems. We prioritize secure, scalable, and well-documented systems that avoid technical debt by following clean architecture, agile practices, and automated pipelines.
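The FastAPI + Celery + Redis stack described above is commonly wired together with Docker Compose for local development. A minimal sketch, assuming a conventional project layout (service names, module paths, and the `app.main:app` / `app.worker` entry points are hypothetical):

```yaml
# docker-compose.yml — hypothetical three-service sketch of the stack
services:
  api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports: ["8000:8000"]
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
    depends_on: [redis]
  worker:
    build: .
    # same image as the API; runs the Celery worker instead of the web server
    command: celery -A app.worker worker --loglevel=info
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
    depends_on: [redis]
  redis:
    image: redis:7-alpine
```

The API enqueues jobs to Redis and the worker consumes them, so the two can be scaled independently (e.g., `docker compose up --scale worker=4`).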

Posted 19 hours ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior DevOps Engineer (SRE2)
Location: Gurugram
Experience: 3+ years

About HaaNaa: HaaNaa is a skill-based opinion trading platform that lets users trade their opinions on diverse topics using simple Yes/No choices. From politics, crypto, and finance to sports, entertainment, and current affairs, HaaNaa transforms opinions into assets. With a gamified interface, users get rewarded for informed predictions while tracking real-time trends, analyzing insights, and engaging with a vibrant community.

Role Overview: We are looking for a Senior DevOps Engineer (SRE2) to lead and scale our infrastructure as we grow our real-time trading platform. This role demands a mix of hands-on DevOps skills and strong ownership of system reliability, scalability, and observability.

Key Responsibilities:
- Design, deploy, and manage scalable, secure, and resilient infrastructure on AWS, focusing on EKS (Elastic Kubernetes Service) for container orchestration.
- Implement and manage a service mesh using Istio, enabling traffic control, observability, and security across microservices.
- Drive Infrastructure as Code (IaC) using Terraform for consistent, repeatable provisioning of cloud resources.
- Build and maintain robust CI/CD pipelines (GitHub Actions, Jenkins, or CircleCI) to ensure efficient, automated delivery workflows.
- Ensure high system availability, performance, and reliability, taking ownership of SLIs/SLOs/SLAs, alerts, and dashboards.
- Implement observability practices using tools like Prometheus, Grafana, ELK/EFK, or OpenTelemetry.
- Manage incident response and root cause analysis (RCA), and drive a postmortem culture.
- Collaborate with cross-functional teams (engineering, QA, product) to ensure DevOps and SRE best practices are followed.
- Harden the platform against security threats (including DDoS) using Cloudflare, Akamai, or equivalent.
- Automate repetitive tasks using scripting (Python, Bash) and tools like Ansible.
- Contribute to platform cost optimization, auto-scaling, and multi-region failover strategies.

Requirements:
- 3+ years of hands-on DevOps/SRE experience, including team mentorship or leadership.
- Proven expertise in managing AWS cloud-native architecture, especially EKS, IAM, VPC, ALB/NLB, S3, RDS, and CloudWatch.
- Hands-on experience with Istio for service mesh and microservice observability/security.
- Deep experience with Terraform for managing cloud infrastructure.
- Proficiency in CI/CD and automation tools (GitHub Actions, Jenkins, CircleCI, Ansible).
- Strong scripting skills in Python, Bash, or equivalent.
- Familiarity with Kubernetes administration, Helm charts, and container orchestration.
- Strong understanding of monitoring, alerting, and logging systems.
- Experience handling DDoS mitigation, WAF rules, and CDN configuration.
- Excellent problem-solving and incident management skills with a proactive mindset.
- Strong collaboration and communication skills.

Nice to Have:
- Experience in high-growth startups or gaming platforms.
- Understanding of security best practices, IAM policies, and compliance frameworks (SOC 2, ISO, etc.).
- Experience in backend performance tuning, horizontal scaling, and chaos engineering.
- Familiarity with progressive delivery techniques such as canary deployments or blue/green strategies.

Why Join HaaNaa?
- Ownership: play a key role in shaping the platform's infrastructure and reliability.
- Innovation: work on scalable, low-latency systems powering real-time gamified trading.
- Teamwork: join a dynamic, talented team solving complex engineering challenges.
- Growth: be part of a rapidly expanding company with leadership growth opportunities.
- Perks & benefits: competitive salary, health insurance, and the freedom to experiment with the latest cloud-native tools.

Skills: devops, terraform, ci/cd, cloudformation, go, networking, datadog, aws, grafana, sre, kubernetes, azure, security, prometheus, infrastructure-as-code, gcp, bash, docker, python, linux system administration, elk stack
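Ownership of SLIs/SLOs with alerting, as this role describes, is commonly codified as Prometheus alerting rules. A minimal availability-SLO sketch (the metric names, threshold, and durations are illustrative placeholders, not HaaNaa's actual configuration):

```yaml
# slo-alerts.yaml — hypothetical error-rate SLO alert for an API service
groups:
  - name: api-slo
    rules:
      - alert: HighErrorRate
        # Ratio of 5xx responses to all responses over a 5-minute window
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.01
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error-rate SLO burn: >1% 5xx responses for 10 minutes"
```

The `for: 10m` clause keeps short blips from paging; tighter SLOs are usually expressed as multi-window burn-rate rules instead of a single threshold.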

Posted 20 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Your journey at Crowe starts here: At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you're trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for career growth and leadership. Over our 80-year history, delivering excellent service through innovation has been a core part of our DNA across our audit, tax, and consulting groups. That's why we continuously invest in innovative ideas, such as AI-enabled insights and technology-powered solutions, to enhance our services. Join us at Crowe and embark on a career where you can help shape the future of our industry.

About the Role: As an ML Release Engineer, you will manage the release process for machine learning solutions, ensuring that updates are deployed seamlessly to both test and production environments. Your role will focus on automating processes, improving deployment methodologies, and ensuring compliance with security and regulatory standards.

Responsibilities:
- Own and manage release checklists for deploying version updates of ML solutions, ensuring compliance with SOC standards and conducting thorough security checks.
- Deploy updated model versions through CI/CD pipelines in a GitLab environment, ensuring smooth transitions and minimal downtime.
- Manage documentation for the Change Review Board (CRB) and represent the Applied AI and Machine Learning team at CRB meetings to ensure visibility, alignment, and approval for releases.
- Oversee CI/CD pipelines and the deployment process, identifying opportunities for automation and process improvements to enhance efficiency and reliability.
- Collaborate with partner teams to coordinate release timing and manage dependencies, ensuring effective communication and synchronization across projects.

Required Skills:
- Proficiency in managing containers and understanding containerization as it relates to deployment processes (Kubernetes, Helm, Docker).
- Strong knowledge of compliance requirements and experience implementing compliance checks within the release process.
- Experience with build tooling, including Git and package management systems, to manage version control and dependencies.
- Experience working in GitHub or a similar development platform (we use GitLab).

Preferred Skills:
- Experience with automation tools and scripting to streamline deployment processes.
- Solid communication skills, capable of effectively coordinating with multiple teams and stakeholders.
- A proactive problem-solving attitude, with a focus on continuous improvement and innovation in release management practices.
- You enjoy machine learning and have working knowledge of common machine learning models beyond ChatGPT.

We expect the candidate to uphold Crowe's values of Care, Trust, Courage, and Stewardship. These values define who we are. We expect all of our people to act ethically and with integrity at all times.

Our Benefits: At Crowe, we know that great people are what makes a great firm. We value our people and offer employees a comprehensive benefits package. Learn more about what working at Crowe can mean for you!

How You Can Grow: We will nurture your talent in an inclusive culture that values diversity. You will have the chance to meet on a consistent basis with your Career Coach, who will guide you in your career goals and aspirations. Learn more about where talent can prosper!

More about Crowe: Crowe Horwath IT Services Private Ltd. is a wholly owned subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting, and technology firm with offices around the world. Crowe LLP is an independent member firm of Crowe Global, one of the largest global accounting networks in the world. The network consists of more than 200 independent accounting and advisory firms in more than 130 countries.

Crowe does not accept unsolicited candidates, referrals, or resumes from any staffing agency, recruiting service, sourcing entity, or other third-party paid service at any time. Any referrals, resumes, or candidates submitted to Crowe, or to any employee or owner of Crowe, without a pre-existing agreement signed by both parties covering the submission will be considered the property of Crowe, free of charge.
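A release pipeline of the sort described, with test, security-scan, and gated deploy stages in GitLab CI, might be sketched as follows. The job names, scanner, and chart path are hypothetical illustrations, not Crowe's actual pipeline:

```yaml
# .gitlab-ci.yml — hypothetical sketch of a gated model-release pipeline
stages: [test, security, deploy]

unit-tests:
  stage: test
  script:
    - pytest tests/

security-scan:
  stage: security
  script:
    # placeholder image scanner; any compliance check could slot in here
    - trivy image "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-model:
  stage: deploy
  when: manual            # manual gate mirrors a change-review-board approval
  environment: production
  script:
    - helm upgrade --install model-api ./chart
        --set image.tag="$CI_COMMIT_SHORT_SHA"
```

The `when: manual` gate is one way to encode a CRB-style approval step directly in the pipeline, so nothing reaches production without an explicit sign-off.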

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

As a Senior Infrastructure Specialist in the IT department, you will lead and manage scalable infrastructure and container-based environments. Your focus will be on Kubernetes orchestration, automation, and the security, reliability, and efficiency of platform services. Your role is crucial in modernizing infrastructure through DevOps practices and promoting the adoption of containerization and cloud-native technologies within the organization.

Key Responsibilities:
- Design, automate, and maintain CI/CD pipelines.
- Manage hypervisor templates and scale container platforms.
- Administer Kubernetes clusters and enhance solutions related to container platforms, edge computing, and virtualization.
- Lead the transition from VMware OVAs to a Kubernetes-based virtualization architecture.
- Prioritize platform automation using Infrastructure as Code to minimize manual tasks.
- Ensure security hardening and compliance for all infrastructure components.
- Collaborate closely with development, DevOps, and security teams to drive container adoption and lifecycle management.

Requirements:
- At least 8 years of infrastructure engineering experience.
- Deep expertise in Kubernetes architecture.
- Strong Linux systems administration skills.
- Proficiency with cloud platforms such as AWS, Azure, and GCP.
- Hands-on experience with Infrastructure as Code tools such as Terraform and Ansible.
- Familiarity with CI/CD tooling such as GitLab, Jenkins, and ArgoCD.

Key skills: Kubernetes management, containerization, cloud-native infrastructure, Linux system engineering, Infrastructure as Code, DevOps, automation tools, and security and compliance for container platforms.

Soft skills such as a proactive, solution-oriented mindset, strong communication and collaboration abilities, analytical thinking, and time management are also essential for this role. Preferred qualifications include CKA/CKAD certification, cloud certifications, experience with container security and compliance tools, exposure to GitOps tools, and monitoring and alerting experience.

Your success in this role will be measured by the uptime and reliability of container platforms, reduction in manual deployment tasks, successful Kubernetes migration, cluster performance, security compliance, and team enablement and automation adoption. Alignment with the competency framework includes mastery of Kubernetes, infrastructure automation, containerization leadership, strategic execution, and collaboration with various teams and external partners.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About the Role: We are looking for a Senior Full Stack Developer who can build end-to-end web applications, with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities

Front-End Development:
- Build responsive, user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
- Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
- Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
- Ensure cross-browser compatibility and optimize for performance and accessibility.
- Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development:
- Develop RESTful and GraphQL APIs with Django/DRF and Node.js/Express.
- Design and implement microservices and event-driven architectures.
- Optimize server performance and ensure secure API integrations.

Database & Data Management:
- Work with structured (PostgreSQL, MySQL) and unstructured (MongoDB, Cassandra, DynamoDB) databases.
- Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
- Implement sharding, clustering, caching, and replication strategies for scalability.
- Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization:
- Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
- Build live features (e.g., notifications, chat, analytics) using WebSockets and Server-Sent Events (SSE).
- Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment:
- Deploy applications on cloud platforms (AWS, Azure, GCP).
- Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
- Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
- Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.

Good to Have (AI & Advanced Capabilities):
- Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
- Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
- Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications

Core stack:
- Front-end: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
- Back-end: Python (Django/DRF), Node.js/Express
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB; vector databases (Pinecone, Milvus, Weaviate, Chroma)
- APIs: REST, GraphQL, gRPC

State-of-the-art and advanced tools:
- Streaming: Apache Kafka, Apache Pulsar, Redis Streams
- Visualization: D3.js, Highcharts, Plotly, Deck.gl
- Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
- Cloud: AWS Lambda, Azure Functions, Google Cloud Run
- Monitoring: Prometheus, Grafana, OpenTelemetry
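A CI/CD pipeline of the kind listed above (GitHub Actions building a container image, then deploying via Helm) can be sketched roughly as follows; the registry, image, and chart names are hypothetical:

```yaml
# .github/workflows/deploy.yml — hypothetical build-and-deploy workflow
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t registry.example.com/webapp:${GITHUB_SHA} .
          docker push registry.example.com/webapp:${GITHUB_SHA}
      - name: Deploy with Helm
        run: |
          helm upgrade --install webapp ./chart \
            --set image.tag=${GITHUB_SHA}
```

Tagging the image with the commit SHA keeps each deploy traceable to an exact source revision, which also makes rollbacks a one-line `helm rollback`.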

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer
Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About the Role: We are seeking a Senior MLOps Engineer with deep expertise in machine learning operations, data engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment:
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor production ML systems for drift, bias, performance degradation, and anomalies.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration:
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure:
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, and Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities:
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Integrate vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability:
- Implement robust access control, encryption, and compliance with SOC 2/GDPR/ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus.
- MLOps frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data engineering tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as code: Terraform, Pulumi, Ansible.
- Cloud platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & real-time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & logging: Prometheus, Grafana, ELK, OpenTelemetry.
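An end-to-end training pipeline of the kind this role orchestrates can be expressed as an Argo Workflows manifest with sequential steps. A minimal sketch, assuming a shared job image and script-per-step layout (the image and script names are hypothetical):

```yaml
# ml-pipeline.yaml — hypothetical Argo Workflow: preprocess → train → evaluate
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: train-model-
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      steps:              # each inner list runs sequentially after the last
        - - name: preprocess
            template: run-step
            arguments: {parameters: [{name: script, value: preprocess.py}]}
        - - name: train
            template: run-step
            arguments: {parameters: [{name: script, value: train.py}]}
        - - name: evaluate
            template: run-step
            arguments: {parameters: [{name: script, value: evaluate.py}]}
    - name: run-step      # reusable step: run one script in the job image
      inputs: {parameters: [{name: script}]}
      container:
        image: registry.example.com/ml-jobs:latest
        command: [python, "{{inputs.parameters.script}}"]
```

Submitted with `argo submit ml-pipeline.yaml`, each step runs as its own pod, so a failed `train` step can be retried without re-running `preprocess`.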

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role: We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments, both on AWS and on-premise (open-source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management:
- Design and implement cloud-native architectures on AWS, and manage on-premise open-source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management:
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secrets across environments.

Containerization & Orchestration:
- Manage Docker-based containerization and Kubernetes clusters (EKS or self-managed).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability:
- Implement observability (logging, metrics, tracing) using open-source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps:
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications
- Cloud & infra: strong expertise in AWS services (EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.); ability to set up and manage on-premise or hybrid environments using open-source tools.
- DevOps & automation: hands-on experience with Terraform/CloudFormation; strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.
- Containerization & orchestration: expertise with Docker and Kubernetes (EKS or self-hosted); familiarity with Helm charts and service mesh (Istio/Linkerd).
- Monitoring/observability: experience with Prometheus, Grafana, the ELK/EFK stack, and CloudWatch; knowledge of distributed tracing tools like Jaeger or OpenTelemetry.
- Security & compliance: understanding of cloud security best practices; familiarity with tools like Vault and AWS Secrets Manager.
- Model DevOps/MLOps (preferred): experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B); exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- The opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open-source on-prem).
- Competitive salary, benefits, and a growth-oriented culture.
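Service-mesh traffic control of the kind mentioned above (Istio) is usually expressed as a VirtualService. A minimal canary-split sketch, with a hypothetical host name and subset labels:

```yaml
# canary.yaml — hypothetical 90/10 traffic split between two releases
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api
spec:
  hosts: ["api.example.internal"]
  http:
    - route:
        - destination:
            host: api
            subset: stable    # current release
          weight: 90
        - destination:
            host: api
            subset: canary    # new release under observation
          weight: 10
```

A companion DestinationRule would map the `stable` and `canary` subsets to pod labels; promoting the canary is then just shifting the weights toward 100/0.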

Posted 1 day ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate about working on on-prem/cloud products covering SIEM, EDR, and XDR technologies. The job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more, working smartly with cutting-edge technologies and AI-driven solutions.

About the role:
- Champion a quality-first mindset throughout the entire software development lifecycle.
- Develop and implement comprehensive test strategies and plans for a complex hybrid application, considering the unique challenges of both on-premise and cloud deployments.
- Collaborate with architects and development teams to understand system architecture, design, and new features to define optimal test approaches.
- Read requirements documents thoroughly and design test cases that cover new product functionality and the impacted areas.
- Design, develop, and maintain robust, scalable, high-performance automated test frameworks and tools from scratch, using industry-standard programming languages (e.g., Python, Java, Go).
- Manage and maintain test environments, including setting up and configuring both on-premise and cloud instances for testing.
- Execute new-feature and regression cases manually, as needed for a product release.
- File defects effectively: familiarity with bug-tracking platforms such as JIRA or Bugzilla is essential, and noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixes is an essential trait for this job.
- Identify automatable cases and, within this scope, segregate high-ROI cases from low-impact areas to improve testing efficiency.
- Analyze test results, identify defects, and work closely with development teams to ensure timely resolution.
- Be willing to explore and deepen understanding of cloud/on-prem infrastructure.

About you:
- 1-4 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
- Ability to quickly learn a product or concept: its feature set, capabilities, and functionality.
- Solid fundamentals in any programming language (preferably Python or Go) and OOP concepts; hands-on experience with a popular CI/CD tool such as TeamCity, Jenkins, or similar is a must.
- RESTful API testing using tools such as Postman or similar is a must.
- Familiarity with AWS and its offerings (S3, EC2, EBS, EKS, IAM, etc.) is required; exposure to Docker, Helm, and GitOps is an added advantage.
- Extensive experience designing, developing, and maintaining automated test frameworks (e.g., Playwright, Selenium, Cypress, TestNG, JUnit, pytest).
- Experience with API testing tools and frameworks (e.g., Postman, REST Assured, OpenAPI/Swagger).
- Good foundational knowledge of Linux-based systems, including setting up git repos, user management, network configuration, and use of package managers.
- Hands-on functional and non-functional testing (performance and load) is desirable.
- Any level of proficiency with Prometheus, Grafana, and service metrics is nice to have.
- An understanding of cybersecurity concepts would be helpful.

Posted 1 day ago

Apply

6.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Job Title: Senior SDET

Role Overview: Trellix is looking for SDETs who are self-driven and passionate about working on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more.

About the role:
- Read requirements documents thoroughly and design test cases that cover new product functionality and the impacted areas.
- Execute new-feature and regression cases manually, as needed for a product release.
- Identify critical issues and communicate them effectively in a timely manner.
- File defects effectively: familiarity with bug-tracking platforms such as JIRA or Bugzilla is helpful, and noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixes is an essential trait for this job.
- Identify automatable cases and, within this scope, segregate high-ROI cases from low-impact areas to improve testing efficiency.
- Execute, monitor, and debug automation runs; hands-on experience with automation programming languages such as Python or Java is advantageous.
- Author automation code to improve coverage across the board, reduce repetitive tasks, and improve regression coverage.
- Lead fellow team members and own aspects of the product end-to-end, considering enhancements, automation, performance, and more.

About you:
- 6-9 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
- Ability to quickly learn a product or concept: its feature set, capabilities, functionality, and nitty-gritty.
- Solid fundamentals in any programming language (preferably Python) and OOP concepts; hands-on CI/CD experience with Jenkins or similar is a must.
- RESTful API testing using tools such as Postman or similar is desired.
- Knowledge of design patterns is desirable.
- Strong foundational knowledge of Linux-based systems and their administration is needed, including setting up git repos, user management, network configuration, and use of package managers.
- Proficiency with Kubernetes and AWS and its offerings (S3, EC2, EBS, EKS, IAM, etc.) is highly desired; exposure to Docker, Helm, and Argo CD is an added advantage.
- Exposure to non-functional testing (performance and load) is desired; hands-on experience with tools such as Locust and/or JMeter would be a huge advantage.
- Any level of proficiency with Prometheus, Grafana, and service metrics is desired.
- Understanding of endpoint security concepts around Endpoint Detection and Response (EDR) and hands-on experience with SaaS-based applications and platforms would be a plus.
- Proven track record of taking ownership and driving aspects of product enhancements end-to-end.

Posted 1 day ago

Apply

6.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office


Posted 1 day ago

Apply

Exploring Helm Jobs in India

Helm is a popular package manager for Kubernetes that simplifies the deployment and management of applications. In India, the demand for professionals with expertise in Helm is on the rise as more companies adopt Kubernetes for their container orchestration needs.
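To make that concrete: a Helm chart is a versioned bundle of templated Kubernetes manifests, described by a `Chart.yaml` at its root. A minimal sketch (the chart name and version numbers here are hypothetical, not from any real chart):

```yaml
# Chart.yaml — metadata file at the root of every Helm chart
apiVersion: v2          # chart API version used by Helm 3
name: my-service        # hypothetical chart name
description: A chart packaging a simple web service
type: application
version: 0.1.0          # version of the chart itself
appVersion: "1.16.0"    # version of the application the chart deploys
```

Installing this chart with `helm install <release-name> ./my-service` creates a release; the same chart can back many independently named releases, which is the chart-vs-release distinction interviewers often probe.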

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi NCR

Average Salary Range

The average salary range for Helm professionals in India varies by experience level. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can command salaries upwards of INR 15 lakhs per annum.

Career Path

Typically, a career in Helm progresses as follows:

  1. Junior Helm Engineer
  2. Helm Engineer
  3. Senior Helm Engineer
  4. Helm Architect
  5. Helm Specialist
  6. Helm Consultant

Related Skills

In addition to proficiency in Helm, professionals in this field are often expected to have knowledge of:

  • Kubernetes
  • Docker
  • Containerization
  • DevOps practices
  • Infrastructure as Code (IaC)
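These skills intersect in Helm's core mechanic: Go-templated Kubernetes manifests rendered from a `values.yaml`, which is what lets charts serve as Infrastructure as Code. A hedged sketch with illustrative names and values (two chart files shown in one fence, separated by `---`):

```yaml
# values.yaml — overridable defaults for the chart
replicaCount: 2
image:
  repository: nginx
  tag: "1.27"
---
# templates/deployment.yaml (excerpt) — rendered by Helm's template engine;
# {{ .Values.* }} pulls from values.yaml, {{ .Release.* }} from the release
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm template ./chart` renders these manifests locally, which is a quick way to debug templating before installing anything into a cluster.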

Interview Questions

  • What is Helm and how does it simplify Kubernetes deployments? (basic)
  • Can you explain the difference between a Chart and a Release in Helm? (medium)
  • How would you handle secrets management in Helm charts? (medium)
  • What are the limitations of Helm and how would you work around them? (advanced)
  • How do you troubleshoot Helm deployment failures? (medium)
  • Explain the concept of Helm Hooks and when they are triggered during the deployment lifecycle. (medium)
  • How do you version and manage Helm charts in a production environment? (medium)
  • What are the best practices for Helm chart organization and structure? (basic)
  • Describe a scenario where you used Helm to deploy a complex application and the challenges you faced. (advanced)
  • How do you manage dependencies between Helm charts? (medium)
  • Explain the difference between Helm 2 and Helm 3. (basic)
  • How do you perform a rollback of a Helm release? (medium)
  • What security considerations should be taken into account when using Helm? (advanced)
  • How do you customize Helm charts for different environments (dev, staging, production)? (medium)
  • Can you automate the deployment of Helm charts using CI/CD pipelines? (medium)
  • What is Tiller in Helm and why was it removed in Helm 3? (advanced)
  • How do you manage upgrades of Helm releases without causing downtime? (medium)
  • Explain how you would handle configuration management in Helm charts. (medium)
  • What are the advantages of using Helm over manual Kubernetes manifests? (basic)
  • How do you ensure the idempotency of Helm deployments? (medium)
  • How do you perform linting and testing of Helm charts? (basic)
  • Can you explain the concept of Helm repositories and how they are used? (medium)
  • How would you handle versioning of Helm charts to ensure compatibility with different Kubernetes versions? (medium)
  • Describe a situation where you had to troubleshoot a Helm chart that was failing to deploy. (advanced)
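Several of the questions above (environment customization, rollbacks, linting) share one underlying pattern: a base `values.yaml` plus small per-environment override files layered on top at install time. A sketch with illustrative keys:

```yaml
# values.yaml — shared defaults for all environments
replicaCount: 1
ingress:
  enabled: false
---
# values-production.yaml — applied on top of the defaults;
# when multiple -f files are passed, later files win on conflicting keys
replicaCount: 4
ingress:
  enabled: true
```

Deploying production would then look like `helm upgrade --install my-app ./chart -f values-production.yaml`; `helm lint ./chart` catches chart errors beforehand, and `helm rollback my-app <revision>` reverts a bad release.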

Closing Remark

As the demand for Helm professionals continues to grow in India, it is important for job seekers to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a valuable asset to organizations looking to leverage Helm for their Kubernetes deployments. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies