
17693 Terraform Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Madurai, Tamil Nadu, India

On-site

Position: Azure DevOps Engineer
Location: Madurai, Tamil Nadu, India (on-site)
Experience: 3-5 years

About KoinBX
KoinBX is one of the leading FIU-registered centralized cryptocurrency exchanges founded in India, operating successfully for over six years. We’ve grown steadily with a clear vision: to make crypto trading secure, simple, and accessible to users around the world. Our platform supports a wide range of digital assets and is known for its strong security framework, intuitive user experience, and commitment to transparency. With a growing global user base, KoinBX is building a trusted ecosystem where traders and investors can confidently engage with the future of finance. As we continue to lead the charge in the Web3 revolution, we’d love to have you on board! Join our team of passionate innovators who are pushing boundaries and shaping the future of Web3. Together, we’ll simplify the complex, unlock the inaccessible, and turn the impossible into reality.

Inside KoinBX’s Engineering Team
Our Engineering team is the backbone of KoinBX’s cutting-edge products and platforms. We take on complex challenges to develop scalable, secure, and high-performance solutions that drive the future of digital finance. If you’re an engineer passionate about innovation and solving real-world problems, come be a part of the team shaping the future of Web3 technology.

You’ll be diving into these tasks:
- Design, implement, and maintain scalable, secure, and highly available cloud infrastructure using Microsoft Azure
- Develop CI/CD pipelines to enable efficient deployment processes and seamless software delivery
- Automate infrastructure provisioning, configuration management, and application deployments
- Monitor system performance, troubleshoot issues, and ensure uptime and reliability of services
- Implement robust security measures to safeguard infrastructure and comply with industry standards
- Collaborate with development, QA, and operations teams to streamline workflows and optimize resource utilization
- Manage containerized environments using tools such as Docker and Kubernetes
- Conduct system backups, disaster recovery, and failover procedures
- Optimize cost and resource allocation within the Azure environment
- Stay up to date with the latest trends and advancements in DevOps practices and tools

Bring these HODL-worthy skills to the table (mandatory):
- Hands-on experience with Azure cloud services (VMs, Azure Kubernetes Service, App Services, Azure DevOps, etc.)
- Proven experience in designing and managing CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitHub Actions
- Strong expertise in Infrastructure as Code (IaC) tools such as Terraform, Ansible, or ARM templates
- Proficiency in containerization technologies like Docker and Kubernetes
- Strong knowledge of Linux and Windows system administration
- Familiarity with monitoring tools like Prometheus, Grafana, Azure Monitor, or equivalent
- Experience with scripting languages such as Python, Bash, or PowerShell for automation
- Solid understanding of security best practices in a cloud environment
- Problem-solving skills and the ability to work in a fast-paced, dynamic environment

Could you be the key element our team needs?
- You have an insatiable curiosity for Web3 and VDAs, constantly exploring new trends and insights
- The fast-paced crypto space energizes you and keeps you motivated to learn and grow
- You’re proactive by nature, always aiming to make meaningful contributions
- Collaboration is at your core—you value shared success over individual credit
- You see change not as a challenge, but as an opportunity to innovate and evolve
- You’re a creative thinker who thrives on pushing limits and redefining what’s possible

Why Join KoinBX?
- Be part of India’s rapidly growing blockchain technology company
- Contribute to the evolution of the cryptocurrency industry
- Develop customer-facing technology products for global users
- Work in a performance-driven environment that values ownership and innovation
- Gain exposure to cutting-edge technologies with a steep learning curve
- Experience a meritocratic, transparent, and open work culture
- High visibility in the global blockchain ecosystem

KoinBX Interview Process:
1. Initial Screening – telephonic or in-person interview
2. Technical Assessment – evaluating core competencies
3. Final Interview – with the department head and key stakeholders

Perks & Benefits at KoinBX:
- Exciting and challenging work environment
- Opportunity to work with highly skilled professionals
- Team events and celebrations
- A dynamic and growth-oriented career path

Join us and be a part of the revolution in the cryptocurrency industry!
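Listings like this one pair CI/CD pipelines with monitoring; in practice the two meet in automated promotion gates that scripts evaluate before a deployment proceeds. A minimal Python sketch of such a gate (the function name and thresholds are illustrative assumptions, not from the posting):

```python
# Hypothetical CI/CD promotion gate: decide whether a canary deployment is
# healthy enough to promote, as a pipeline step in Azure DevOps or Jenkins
# might before rolling a release out fully.

def promotion_decision(error_rate: float, p95_latency_ms: float,
                       max_error_rate: float = 0.01,
                       max_p95_ms: float = 500.0) -> str:
    """Return 'promote' when the canary is within budget, else 'rollback'."""
    if error_rate <= max_error_rate and p95_latency_ms <= max_p95_ms:
        return "promote"
    return "rollback"

print(promotion_decision(0.002, 310.0))  # healthy canary
print(promotion_decision(0.050, 310.0))  # error rate over budget
```

In a real pipeline the metrics would come from a monitoring backend such as Prometheus or Azure Monitor rather than hard-coded values.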

Posted 14 hours ago

Apply

4.0 - 7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

A Senior Infrastructure Automation Analyst, responsible for the development, maintenance, and continuous improvement of infrastructure automations. As a member of the Infrastructure Automation team, the successful candidate will contribute to discussing and designing new automations and will troubleshoot and support existing automations across the tech stack. The Infrastructure Services team is responsible for approximately 3,000 Windows and Linux servers across multiple data centres globally and within AWS Cloud. The team’s responsibilities include server hosting, storage, and backup/DR & recoveries, all managed for strict compliance with enterprise security standards. The role involves working as a member of the automation team, developing and maintaining automation solutions, and working closely with operations and project teams throughout the wider Technology team to identify opportunities for automation and drive an automation mindset. Although the role’s primary function is Infrastructure Operations automation, it also involves developing automation solutions for other Technology teams when requested.

Key Responsibilities
- Scripting: proficient in PowerShell and Python
- Automation tools: experience with tooling such as Ansible Automation Platform
- CI/CD pipelines: knowledge of continuous integration and continuous deployment practices and tooling, particularly Jenkins
- Knowledge of DevOps and IaC concepts and tooling, particularly Terraform
- Operating systems: strong knowledge of Windows Server and Red Hat Linux
- API integration: proficiency in automation that leverages APIs and web services
- Git / Atlassian Bitbucket
- Cloud services: experience working with AWS cloud solutions

Required Qualifications
Bachelor’s/Master’s degree in Computer Science/Information Systems or equivalent, plus 4-7 years of experience in relevant disciplines, including:
- Excellent teamwork; able to collaborate with peers, business partners, project managers and leaders
- Problem solving; able to diagnose issues, identify solutions and implement effective fixes
- Attention to detail; precision in writing code and catching errors and bugs
- Adaptability; able to adjust to changes in project demands, technologies, and team dynamics
- Creativity; innovative thinking that leads to unique solutions to existing challenges
- A self-motivated technologist keen to learn new technologies and skills
- A methodical and analytical approach to tasks
- Inquisitiveness – questioning existing processes and identifying opportunities for automation
- Builds strong working relationships with global and regional teams
- An excellent communicator, able to convey ideas clearly and concisely
- Able to work collaboratively with others and discuss and share ideas
- Strong documentation skills

Preferred Qualifications
Candidates who have used (or are familiar with) the following tools will have an added advantage:
- VMware vSphere
- Red Hat Linux
- Power BI
- System Center Configuration Manager
- ServiceNow Automation
- System Center Operations Manager
- Microsoft Active Directory
- Tidal Enterprise Scheduler
- SQL
- JavaScript
- Sumo Logic
- AWS CloudFormation

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you’re talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP.

Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00 pm - 10:30 pm
India Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
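A typical glue task for the scripting skills this role lists (PowerShell and Python alongside Ansible) is reshaping a flat server inventory into automation-ready groups. A minimal Python sketch, with hypothetical hostnames and a hypothetical helper name:

```python
# Hypothetical inventory helper: turn flat (hostname, os) records into
# Ansible-style OS groups, the kind of small automation script an
# infrastructure-automation analyst writes routinely.
from collections import defaultdict

def group_by_os(servers):
    """servers: iterable of (hostname, os) pairs -> {os: sorted hostnames}."""
    groups = defaultdict(list)
    for host, os_name in servers:
        groups[os_name.lower()].append(host)  # normalize OS names
    return {os_name: sorted(hosts) for os_name, hosts in groups.items()}

inventory = [("web01", "Windows"), ("db01", "RHEL"), ("web02", "windows")]
print(group_by_os(inventory))
```

The output dictionary maps directly onto inventory groups that an Ansible playbook or a Jenkins job could consume.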

Posted 14 hours ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Quranium
In a world where rapid innovation demands uncompromising security, Quranium stands as the uncrackable foundation of the digital future. With its quantum-proof hybrid DLT infrastructure, Quranium is redefining what's possible, ensuring data safety and resilience against current and future threats, today. No other blockchain can promise this level of protection and continuous evolution. Quranium is more than a technology—it's a movement. Empowering developers and enterprises to build with confidence, it bridges the gaps between Web2 and Web3, making digital adoption seamless, accessible, and secure for all. As the digital superhighway for a better future, Quranium is setting the standard for progress in an ever-evolving landscape.

Role Overview
We are hiring a DevOps Engineer to architect and maintain the infrastructure supporting our blockchain nodes and Web3 applications. The ideal candidate has deep experience working with GCP, Azure, AWS, and modern hosting platforms like Vercel, and is capable of deploying, monitoring, and scaling blockchain-based systems with a security-first mindset.

Key Responsibilities

Blockchain Infrastructure
- Deploy, configure, and maintain core blockchain infrastructure such as full nodes, validator nodes, and indexers (e.g., Ethereum, Solana, Bitcoin)
- Monitor node uptime, sync health, disk usage, and networking performance
- Set up scalable RPC endpoints and archive nodes for dApps and internal use
- Automate blockchain client upgrades and manage multi-region redundancy

Web3 Application DevOps
- Manage the deployment and hosting of Web3 frontends, smart contract APIs, and supporting services
- Create and maintain CI/CD pipelines for frontend apps, smart contracts, and backend services
- Integrate deployment workflows with Vercel, GCP Cloud Run, AWS Lambda, or Azure App Services
- Securely handle smart contract deployment keys and environment configurations

Cloud Infrastructure
- Design and manage infrastructure across AWS, GCP, and Azure based on performance, cost, and scalability considerations
- Use infrastructure as code (e.g., Terraform, Pulumi, CDK) to manage provisioning and automation
- Implement cloud-native observability solutions: logging, tracing, metrics, and alerts
- Ensure high availability and disaster recovery for critical blockchain and app services

Security, Automation, and Compliance
- Implement DevSecOps best practices across cloud, containers, and CI/CD
- Set up secrets management and credential rotation workflows
- Automate backup, restoration, and failover for all critical systems
- Ensure infrastructure meets required security and compliance standards

Preferred Skills and Experience
- Experience running validators or RPC services for proof-of-stake networks (Ethereum 2.0, Solana, Avalanche, etc.)
- Familiarity with decentralized storage systems like IPFS, Filecoin, or Arweave
- Understanding of indexing protocols such as The Graph or custom off-chain data fetchers
- Hands-on experience with Docker, Kubernetes, Helm, or similar container orchestration tools
- Working knowledge of EVM-compatible toolkits like Foundry, Hardhat, or Truffle
- Experience with secrets management (Vault, AWS SSM, GCP Secret Manager)
- Previous exposure to Web3 infrastructure providers (e.g., Alchemy, Infura, QuickNode)

Tools and Technologies
- Cloud providers: AWS, GCP, Azure, Vercel
- DevOps stack: Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
- Monitoring: Prometheus, Grafana, CloudWatch, Datadog
- Blockchain clients: Geth, Nethermind, Solana, Erigon, Bitcoin Core
- Web3 APIs: Alchemy, Infura, Chainlink, custom RPC providers
- Smart contracts: Solidity, EVM, Hardhat, Foundry

Requirements
- 3+ years in DevOps or Site Reliability Engineering
- Experience deploying and maintaining Web3 infrastructure or smart contract systems
- Strong grasp of CI/CD pipelines, container management, and security practices
- Demonstrated ability to work with multi-cloud architectures and optimize for performance, cost, and reliability
- Strong communication and collaboration skills

What You'll Get
- The opportunity to work at the intersection of blockchain infrastructure and modern cloud engineering
- A collaborative environment where your ideas impact architecture from day one
- Exposure to leading decentralized technologies and smart contract systems
- Flexible work setup and a focus on continuous learning and experimentation
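Node sync-health monitoring of the kind this role describes is commonly scripted against a client's JSON-RPC interface. As a minimal sketch, the helper below (its name is hypothetical) interprets the result of an Ethereum `eth_syncing` call, which returns `false` for a synced node and an object of hex-encoded block counters while syncing:

```python
# Hypothetical monitoring helper: classify an Ethereum node's sync state
# from the result of an `eth_syncing` JSON-RPC call. In production the
# result would come from an HTTP request to the node's RPC endpoint.

def sync_status(rpc_result):
    """Map an eth_syncing result to a human-readable status string."""
    if rpc_result is False:
        return "synced"
    # While syncing, the client reports hex-encoded block numbers.
    current = int(rpc_result["currentBlock"], 16)
    highest = int(rpc_result["highestBlock"], 16)
    return f"syncing: {highest - current} blocks behind"

print(sync_status(False))
print(sync_status({"currentBlock": "0x3e8", "highestBlock": "0x7d0"}))
```

Wired to an alerting rule (e.g., in Prometheus), the "blocks behind" count is the usual signal for paging on a stalled node.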

Posted 14 hours ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Quranium
In a world where rapid innovation demands uncompromising security, Quranium stands as the uncrackable foundation of the digital future. With its quantum-proof hybrid DLT infrastructure, Quranium is redefining what's possible, ensuring data safety and resilience against current and future threats, today. No other blockchain can promise this level of protection and continuous evolution. Quranium is more than a technology—it's a movement. Empowering developers and enterprises to build with confidence, it bridges the gaps between Web2 and Web3, making digital adoption seamless, accessible, and secure for all. As the digital superhighway for a better future, Quranium is setting the standard for progress in an ever-evolving landscape.

Role Overview
We are hiring a DevOps Engineer to architect and maintain the infrastructure supporting our blockchain nodes and Web3 applications. The ideal candidate has deep experience working with GCP, Azure, AWS, and modern hosting platforms like Vercel, and is capable of deploying, monitoring, and scaling blockchain-based systems with a security-first mindset.

Key Responsibilities

Blockchain Infrastructure
- Deploy, configure, and maintain core blockchain infrastructure such as full nodes, validator nodes, and indexers (e.g., Ethereum, Solana, Bitcoin)
- Monitor node uptime, sync health, disk usage, and networking performance
- Set up scalable RPC endpoints and archive nodes for dApps and internal use
- Automate blockchain client upgrades and manage multi-region redundancy

Web3 Application DevOps
- Manage the deployment and hosting of Web3 frontends, smart contract APIs, and supporting services
- Create and maintain CI/CD pipelines for frontend apps, smart contracts, and backend services
- Integrate deployment workflows with Vercel, GCP Cloud Run, AWS Lambda, or Azure App Services
- Securely handle smart contract deployment keys and environment configurations

Cloud Infrastructure
- Design and manage infrastructure across AWS, GCP, and Azure based on performance, cost, and scalability considerations
- Use infrastructure as code (e.g., Terraform, Pulumi, CDK) to manage provisioning and automation
- Implement cloud-native observability solutions: logging, tracing, metrics, and alerts
- Ensure high availability and disaster recovery for critical blockchain and app services

Security, Automation, and Compliance
- Implement DevSecOps best practices across cloud, containers, and CI/CD
- Set up secrets management and credential rotation workflows
- Automate backup, restoration, and failover for all critical systems
- Ensure infrastructure meets required security and compliance standards

Preferred Skills and Experience
- Experience running validators or RPC services for proof-of-stake networks (Ethereum 2.0, Solana, Avalanche, etc.)
- Familiarity with decentralized storage systems like IPFS, Filecoin, or Arweave
- Understanding of indexing protocols such as The Graph or custom off-chain data fetchers
- Hands-on experience with Docker, Kubernetes, Helm, or similar container orchestration tools
- Working knowledge of EVM-compatible toolkits like Foundry, Hardhat, or Truffle
- Experience with secrets management (Vault, AWS SSM, GCP Secret Manager)
- Previous exposure to Web3 infrastructure providers (e.g., Alchemy, Infura, QuickNode)

Tools and Technologies
- Cloud providers: AWS, GCP, Azure, Vercel
- DevOps stack: Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
- Monitoring: Prometheus, Grafana, CloudWatch, Datadog
- Blockchain clients: Geth, Nethermind, Solana, Erigon, Bitcoin Core
- Web3 APIs: Alchemy, Infura, Chainlink, custom RPC providers
- Smart contracts: Solidity, EVM, Hardhat, Foundry

Requirements
- 3+ years in DevOps or Site Reliability Engineering
- Experience deploying and maintaining Web3 infrastructure or smart contract systems
- Strong grasp of CI/CD pipelines, container management, and security practices
- Demonstrated ability to work with multi-cloud architectures and optimize for performance, cost, and reliability
- Strong communication and collaboration skills

What You'll Get
- The opportunity to work at the intersection of blockchain infrastructure and modern cloud engineering
- A collaborative environment where your ideas impact architecture from day one
- Exposure to leading decentralized technologies and smart contract systems
- Flexible work setup and a focus on continuous learning and experimentation

Posted 15 hours ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Site Reliability Engineering (SRE) at Equifax is a discipline that combines software and systems engineering for building and running large-scale, distributed, fault-tolerant systems. SRE ensures that internal and external services meet or exceed reliability and performance expectations while adhering to Equifax engineering principles. SRE is also an engineering approach to building and running production systems – we engineer solutions to operational problems. Our SREs are responsible for overall system operation, and we use a breadth of tools and approaches to solve a broad set of problems. Our practices include limiting time spent on operational work, blameless postmortems, and proactive identification and prevention of potential outages. Our SRE culture of diversity, intellectual curiosity, problem solving and openness is key to its success. Equifax brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn, grow and take pride in our work.

What You’ll Do
- Work in a DevSecOps environment responsible for building and running large-scale, massively distributed, fault-tolerant systems
- Work closely with development and operations teams to build highly available, cost-effective systems with extremely high uptime metrics
- Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot
- Create new tools and scripts for auto-remediation of incidents and establish end-to-end monitoring and alerting on all critical aspects
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, and programming with cloud SDKs)
- Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management

What Experience You Need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience
- 2-5 years of experience in software engineering, systems administration, database administration, and networking
- 1+ years of experience developing and/or administering software in a public cloud
- Experience monitoring infrastructure and application uptime and availability to ensure functional and performance objectives
- Experience in languages such as Python, Bash, Java, Go, JavaScript and/or Node.js
- Demonstrable cross-functional knowledge of systems, storage, networking, security and databases
- System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible and/or containers (Docker, Kubernetes, etc.)
- Proficiency with continuous integration and continuous delivery tooling and practices
- Cloud certification strongly preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
- DevSecOps – uses knowledge of DevSecOps operational practices and applies engineering skills to improve the resilience of products/services. Designs, codes, verifies, tests, documents and modifies programs/scripts and integrated software services. Applies agreed SRE standards and tools to achieve a well-engineered result.
- Operational Excellence – prioritizes and organizes one’s own work. Monitors and measures systems against key metrics to ensure availability of systems. Identifies new ways of working to make processes run smoother and faster.
- Systems Thinking – uses knowledge of best practices and how systems integrate with others to improve their own work. Understands technology trends and uses that knowledge to identify factors that achieve the defined expectations of systems availability.
- Technical Communication/Presentation – explains technical information and its impacts to stakeholders and articulates the case for action. Demonstrates strong written and verbal communication skills.
- Troubleshooting – applies a methodical approach to routine issue definition and resolution. Monitors actions to investigate and resolve problems in systems, processes and services. Determines fixes/remedies, assists with the implementation of agreed remedies and preventative measures, and analyzes patterns and trends.
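SRE practices like those above (monitoring against key metrics, limiting operational work) are usually quantified with an error budget: a 99.9% availability SLO over a window allows 0.1% of requests to fail, and teams track how much of that allowance has been spent. A minimal sketch of the arithmetic, with illustrative numbers:

```python
# SRE-style error-budget arithmetic: given an SLO target and observed
# request counts, what fraction of the error budget has been consumed?

def error_budget_spent(total_requests: int, failed_requests: int,
                       slo: float = 0.999) -> float:
    """Fraction of the error budget consumed (1.0 = budget exhausted)."""
    allowed_failures = total_requests * (1 - slo)
    if allowed_failures == 0:
        return float("inf") if failed_requests else 0.0
    return failed_requests / allowed_failures

# 1M requests at a 99.9% SLO allow 1,000 failures; 250 failures = 25% spent.
print(error_budget_spent(1_000_000, 250))
```

Crossing 1.0 here is the conventional trigger for freezing risky releases until reliability recovers.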

Posted 15 hours ago

Apply

3.0 - 6.0 years

2 - 4 Lacs

Hyderābād

On-site

Experience: 3 to 6 years

Overview
We are seeking a motivated and technically skilled Cloud Security Engineering Analyst with at least 3 years of experience in AWS cloud security. The role involves leading the design, testing, deployment, and compliance validation of AWS security policies and controls. You will be responsible for integrating cloud-native and custom guardrails, performing risk assessments, managing policy exceptions, and collaborating with cross-functional teams to enforce security-by-default principles. This position requires a strong understanding of AWS-native security services and the ability to develop scalable policy enforcement strategies across multiple accounts.

Key Responsibilities
- Design, develop, and deploy custom and AWS-native security policies (e.g., SCPs, IAM policies, AWS Config rules) across AWS accounts
- Perform pre-deployment compliance assessments and identify non-compliant configurations in AWS environments
- Collaborate with application and infrastructure teams to remediate misconfigurations and implement secure-by-design practices
- Validate and monitor policy effectiveness post-deployment using tools like AWS Config, Security Hub, CloudTrail, and GuardDuty
- Own and manage the AWS policy exemption workflow — review exception requests, conduct risk assessments, and track approvals
- Maintain detailed documentation on policy changes, enforcement status, and exception decisions
- Participate in tool evaluations and implementations that support cloud security posture management and automation
- Support continuous improvement of cloud security posture through quarterly reviews, metrics, and tuning recommendations

Required Qualifications
- Minimum 3 years of hands-on experience in AWS cloud security or policy enforcement
- Strong working knowledge of AWS security services: IAM, SCPs, AWS Config, Security Hub, CloudTrail, GuardDuty, KMS, etc.
- Experience with cloud compliance standards (e.g., CIS AWS Foundations Benchmark, NIST, ISO 27001, HIPAA)
- Proficiency in writing and troubleshooting IAM policies, JSON/YAML templates, Lambda functions, and scripts (Python/Bash)
- Familiarity with DevSecOps practices and Infrastructure as Code (IaC) tools such as Terraform or CloudFormation

Preferred Certifications
- AWS Certified Security – Specialty
- AWS Certified Solutions Architect – Associate or Professional

Soft Skills
- Excellent communication and stakeholder collaboration skills
- Strong analytical thinking and problem-solving abilities
- Ability to manage multiple tasks and priorities in a fast-paced environment
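As an illustration of the SCP-style guardrails this role designs, the sketch below generates a region-lock policy document using the standard IAM/SCP JSON grammar and the `aws:RequestedRegion` condition key. The function name, statement Sid, and region list are illustrative assumptions, not the employer's actual policy:

```python
# Hypothetical guardrail generator: a minimal AWS SCP-style policy document
# that denies all actions outside an allow-listed set of regions.
import json

def region_lock_scp(allowed_regions):
    """Build a deny-by-default region guardrail as a policy dict."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny any request whose target region is not allow-listed.
                "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
            },
        }],
    }

print(json.dumps(region_lock_scp(["ap-south-1", "us-east-1"]), indent=2))
```

In practice a policy like this would be version-controlled (e.g., in Terraform) and attached to an organizational unit, with global services exempted via additional conditions.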

Posted 15 hours ago

Apply

3.0 years

3 - 5 Lacs

Hyderābād

On-site

DESCRIPTION
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

BASIC QUALIFICATIONS
- 3+ years of experience in cloud architecture and implementation
- Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience
- Experience in large-scale application/server migration from on-premises to the cloud
- Good knowledge of compute, storage, security and networking technologies
- Good understanding of and experience with firewalls, VPCs, network routing, identity and access management, and security implementation

PREFERRED QUALIFICATIONS
- AWS experience preferred, with proficiency in a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation)
- AWS Professional-level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional) preferred
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
- Experience assessing a source architecture and mapping it to a relevant target architecture in the cloud environment, with knowledge of capacity and performance management

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
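Large-scale migrations like those this role describes are commonly planned in dependency-ordered waves, so that each server moves only after everything it depends on has moved. A minimal Python sketch using the standard library's topological sorter (server names and dependency map are hypothetical):

```python
# Hypothetical migration-wave planner: order servers so each moves to the
# cloud only after its dependencies have moved, using a topological sort.
from graphlib import TopologicalSorter  # Python 3.9+

def migration_waves(deps):
    """deps: {server: set of servers it depends on} -> list of waves."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())   # everything unblocked right now
        waves.append(sorted(ready))    # one wave = one migration batch
        ts.done(*ready)                # unblock their dependents
    return waves

deps = {"app": {"db", "cache"}, "cache": set(), "db": set(), "web": {"app"}}
print(migration_waves(deps))
```

Real assessments layer cost, capacity, and downtime constraints onto this ordering, but the dependency graph is the usual starting point.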

Posted 15 hours ago

Apply

0 years

2 - 4 Lacs

Hyderābād

On-site

DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. 
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
Providing technical guidance and troubleshooting support throughout project delivery
Collaborating with stakeholders to gather requirements and propose effective migration strategies
Acting as a trusted advisor to customers on industry trends and emerging technologies
Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams.
Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

BASIC QUALIFICATIONS
Experience in cloud architecture and implementation
Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience
Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment
Experience in applying best practices and evaluating alternative and complementary ML and foundation models suitable for given business contexts
Foundational knowledge of data modeling principles and statistical analysis methodologies, and a demonstrated ability to extract meaningful insights from complex, large-scale datasets
Experience in mentoring junior team members and guiding them on machine learning and data modeling applications

PREFERRED QUALIFICATIONS
AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation)
AWS Professional-level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred
Experience with automation and scripting (e.g., Terraform, Python)
Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
Strong communication skills, with the ability to explain technical concepts to both technical and non-technical audiences
Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems
Experience in developing and deploying end-to-end machine learning and deep learning solutions

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 15 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

OakNorth is a profitable business that has supported the growth of thousands of businesses. We help entrepreneurs scale quickly, realise their ambitions and make data-driven decisions. We're looking for engineers who are particularly passionate about data analytics and data engineering to join our team. You'd use both your generalist and specialist skills to better our products and our team. You'd join our data platform squad as an immediate contributor. The role 👋 As an Analytics Engineer you’ll work with our Finance teams to transform raw data into meaningful insights using tools like DBT, BigQuery, and Tableau Have 4-8 years of relevant hands-on experience Develop and maintain data models using DBT (Data Build Tool). Write and optimize SQL queries to transform raw data into structured formats. Develop and maintain interactive dashboards and reports in Tableau. Collaborate with stakeholders to gather requirements and translate them into analytical solutions. Work well cross-functionally and earn trust from co-workers at all levels Care deeply about mentorship and growing your colleagues Prefer simple solutions and designs over complex ones Enjoy working with a diverse group of people with different areas of expertise. You challenge the existing approach when you see the cliff edge racing towards us, but also get on board once the options have been debated and the team has made a decision You're comfortably organised amongst chaos You’re a broad thinker and have the capability to see the potential impact of decisions across the wider business What You’d Work On ⛏️ Our squads are cross-functional, mission driven, and autonomous, solving specific user and business problems. We have several product squads that you may rotate through but, initially you will be upskilling within Data Platform squad. The Data Platform squad look after all internal data products and the data warehouse. 
It’s a new team which is driving the bank’s data strategy and has many opportunities for exciting, greenfield projects. Technology 🧱 We're pragmatic about our technology choices. These are some of the things we use at the moment: 🗃 Python, DBT, Tableau 🗄️ PostgreSQL, BigQuery, MySQL 🔧 pytest ☁️ AWS, GCP 🚀 Docker, Terraform, GitHub, GIT How We Expect You To Work 👷‍♀️ We expect you to work in these ways, as well as encouraging and enabling these practices from others: Collaborate - We work in cross-functional, mission driven, autonomous squads that gel over time. We pair program to work better through shared experience and knowledge. Focus on outcomes over outputs - Solving a problem for users that translates to business results is our goal. Measurements focused on that goal help us to understand if we are succeeding. Practice continuous improvement - We optimize for feedback now, rather than presume what might be needed in the future and introduce complexity before it will be used. This means we learn faster. We share learnings in blame-free formats, so that we do not repeat things that have failed, but still have confidence to innovate. Seek to understand our users - We constantly seek understanding from data and conversations to better serve our users' needs, taking an active part in research to hear from them directly and regularly. Embrace and enable continuous deployment - Seamless delivery of changes into an environment - without manual intervention - is essential for us to ensure that we are highly productive; consider resiliency; and practice security by design. Test outside-in, test first - TDD keeps us confident in moving fast and deploying regularly. We want to solve user problems, and so we test with that mindset - writing scenarios first, then considering our solution, coupling tests to behavior, rather than implementation. You build it, you run it - We embrace DevOps culture and end-to-end ownership of products and features. 
Every engineer, regardless of their role, has the opportunity to lead delivery of features from start to finish. Be cloud native - We leverage automation and hosted services to deliver resilient, secure services quickly and consistently. Where SaaS tools help us achieve more productivity and better-quality results for a cheap price, we use these to automate low value tasks. How We Expect You To Behave ❤️ We embrace difference and know that when we can be ourselves at work, we are happier, more motivated and creative. We want to be able to bring our whole selves to work, have our own perspectives and know that we belong. As such, through your behaviours at work, we expect you to reflect and actively sustain a healthy engineering environment that looks like this: A wide range of voices heard to the benefit of all Teams that are clearly happy, engaged, and laugh together Perceivable safety to have an opinion or ask a question No egos - people listen to and learn from others at all levels, with strong opinions held loosely About Us We’re OakNorth Bank and we embolden entrepreneurs to realise their ambitions, understand their markets, and apply data intelligence to everyday decisions to scale successfully at pace. Banking should be barrier-free. It’s a belief at our very core, inspired by our entrepreneurial spirit, driven by the unmet financial needs of millions, and delivered by our data-driven tools. And for those who love helping businesses thrive? Our savings accounts help diversify the high street and create new jobs, all while earning savers some of the highest interest on the market. But we go beyond finance, to empower our people, encourage professional growth and create an environment where everyone can thrive. We strive to create an inclusive and diverse workplace where people can be themselves and succeed. Our story OakNorth Bank was built on the foundations of frustrations with old-school banking. 
In 2005, when our founders tried to get capital for their data analytics company, the computer said ‘no’. Unfortunately, all major banks in the UK were using the same computer – and it was broken. Why was it so difficult for a profitable business with impressive cashflow, retained clients, and clear commercial success to get a loan? The industry was backward-looking and too focused on historic financials, rather than future potential. So, what if there was a bank, founded by entrepreneurs, for entrepreneurs? One that offered a dramatically better borrowing experience for businesses? No more what ifs, OakNorth Bank exists. For more information regarding our Privacy Policy and practices, please visit: https://oaknorth.co.uk/legal/privacy-notice/employees-and-visitors/
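The posting above describes transforming raw data into structured formats with SQL models (DBT over BigQuery). As an illustrative sketch of what such a model does, here is the same idea using only Python's built-in sqlite3; the table and column names are invented for illustration, not from the posting:

```python
import sqlite3

# Hypothetical raw data: payments recorded in pence, one row per transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_payments (id INTEGER, customer TEXT, amount_pence INTEGER);
    INSERT INTO raw_payments VALUES
        (1, 'acme', 1250), (2, 'acme', 250), (3, 'globex', 900);
""")

# A DBT model is essentially a named SELECT; here we materialise it as a view
# that aggregates raw rows into a per-customer summary in pounds.
conn.execute("""
    CREATE VIEW customer_totals AS
    SELECT customer, SUM(amount_pence) / 100.0 AS total_gbp
    FROM raw_payments
    GROUP BY customer
    ORDER BY customer
""")

rows = conn.execute("SELECT * FROM customer_totals").fetchall()
print(rows)  # [('acme', 15.0), ('globex', 9.0)]
```

In DBT the SELECT would live in its own model file and the tool would handle materialisation and dependencies; the transformation logic is the same.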

Posted 15 hours ago

Apply

1.5 years

0 Lacs

Hyderābād

On-site

Overview: Job Purpose
Intercontinental Exchange, Inc. (ICE) presents a unique opportunity to work with cutting-edge technology and business challenges in the financial services sector. ICE team members work across departments and traditional boundaries to innovate and respond to industry demand. A successful candidate will be able to multitask in a dynamic team-based environment, demonstrating strong problem-solving and decision-making abilities and the highest degree of professionalism. We are seeking an experienced AWS solution design engineer/architect to join our infrastructure cloud team. The infrastructure cloud team is responsible for internal services that provide developer collaboration tools, the build and release pipeline, and a shared AWS cloud services platform. The infrastructure cloud team enables engineers to build product features and ship them into production efficiently and confidently.

Responsibilities
Develop utilities or further existing application and system management tools and processes that reduce manual effort and increase overall efficiency
Build and maintain Terraform/CloudFormation templates and scripts to automate and deploy AWS resources and configuration changes
Review and refine design and architecture documents presented by teams for operational readiness, fault tolerance, and scalability
Monitor and research cloud technologies and stay current with trends in the industry
Participate in an on-call rotation and identify opportunities for reducing toil and avoiding technical debt to reduce support and operations load

Knowledge and Experience
Essential
1.5+ years of experience in a DevOps (preferably DevSecOps) or SRE role in an AWS cloud environment.
1.5+ years’ strong experience configuring, managing, solutioning, and architecting with AWS (Lambda, EC2, ECS, ELB, EventBridge, Kinesis, Route 53, SNS, SQS, CloudTrail, API Gateway, CloudFront, VPC, Transit Gateway, IAM, Security Hub, Service Mesh)
Python or Golang proficiency
Proven background implementing continuous integration and delivery for projects
A track record of introducing automation to solve administrative and other business-as-usual tasks

Beneficial
Proficiency in Terraform, CloudFormation, or Ansible
A history of delivering services developed with an API-first approach
A system administration, network, or security background
Prior experience working with environments of significant scale (thousands of servers)

Intercontinental Exchange, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to legally protected characteristics.
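The responsibilities above include building CloudFormation templates to automate AWS resource deployment. A minimal, hedged sketch of generating such a template programmatically with Python's standard json module; the bucket's logical ID is illustrative:

```python
import json

def s3_bucket_template(logical_id: str, versioned: bool = True) -> str:
    """Return a minimal CloudFormation template (JSON) declaring one S3 bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Versioning is a common baseline for audit/backup buckets.
                    "VersioningConfiguration": {
                        "Status": "Enabled" if versioned else "Suspended"
                    }
                },
            }
        },
    }
    return json.dumps(template, indent=2)

# "AuditLogsBucket" is an invented logical ID for the example.
print(s3_bucket_template("AuditLogsBucket"))
```

Generating templates from code (rather than editing JSON by hand) keeps repeated resources consistent; in practice teams would reach for CDK or Terraform for the same goal.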

Posted 15 hours ago

Apply

2.0 years

1 - 9 Lacs

Hyderābād

On-site

JOB DESCRIPTION
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer at JPMorgan Chase within Consumer and Community Banking - Data Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Creates secure and high-quality production code and maintains algorithms that run synchronously within integrated systems
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
Contributes to software engineering communities of practice and events that explore new and emerging technologies
Adds to a team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 2+ years applied experience
Expertise in UI development using React, Node.js, and the Jest framework
Hands-on experience building or enhancing UI frameworks and contributing to Single Page Application (SPA) development
Thorough understanding of application security, especially around authentication and authorization
Hands-on experience building and consuming RESTful APIs using Java; AWS development with EAC, Terraform, cloud design patterns, distributed streaming (Kafka), OpenSearch, S3, and PostgreSQL
Deployed complex, highly available, scalable, and resilient apps on the AWS cloud
Serve as a team member in the delivery of high-quality, full-stack UI solutions using React JS, Jest, Java, and cloud-based technologies while actively contributing to the code base
Hands-on practical experience in system design, application development, testing, and operational stability
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)

ABOUT US

Posted 15 hours ago

Apply

5.0 years

2 - 3 Lacs

Hyderābād

On-site

Summary
Location: Hyderabad
To work in the AWS platform managing SAP workloads and develop automation scripts using AWS services. Support a 24x7 environment and be ready to learn newer technologies.

About the Role
Major Accountabilities
Solve incidents and perform changes in the AWS Cloud environment. Own and drive incidents to resolution.
Must have extensive knowledge of performance troubleshooting and capacity management.
Champion the standardization and simplification of AWS operations involving various services including S3, EC2, EBS, Lambda, networking, NACLs, Security Groups, and others.
Prepare and run internal AWS projects, identify critical integration points and dependencies, propose solutions for key gaps, and provide effort estimations while ensuring alignment with business and other teams.
Assure consistency and traceability between user requirements, functional specifications, Agile ways of working (adapting to DevSecOps), architectural roadmaps, regulatory/control requirements, and a smooth transition of solutions to operations.
Deliver assigned project work on time, within budget, and on quality, adhering to the release calendars.
Able to work in a dynamic environment and support users across the globe. Should be a team player. Weekend on-call duties would be applicable as needed.

Minimum Requirements
Bachelor’s degree in business/technical domains
AWS Cloud certifications/trainings
Able to handle OS security vulnerabilities and administer patches and upgrades
5+ years of relevant professional IT experience in the related technical area
Proven experience in handling AWS Cloud workloads, preparing Terraform scripts, and running pipelines
Excellent troubleshooting skills; independently able to solve P1/P2 incidents
Working knowledge of DR, clustering, SUSE Linux, and tools associated with the AWS ecosystem
Knowledge of handling SAP workloads would be an added advantage.
Extensive monitoring experience; should have worked in a 24x7 environment in the past
Experience installing and setting up SAP environments in the AWS Cloud: EC2 instance setup, EBS and EFS setup, S3 configuration, alert configuration in CloudWatch
Management of extending filesystems and adding new HANA instances
Capacity/consumption management; manage AWS Cloud accounts along with VPCs, subnets, and NAT
Good knowledge of NACLs and Security Groups; usage of CloudFormation and automation pipelines; Identity and Access Management; create and manage Multi-Factor Authentication
Good understanding of ITIL v4 principles; able to work in a complex 24x7 environment
Proven track record of broad industry experience and excellent understanding of complex enterprise IT landscapes and relationships

Why consider Novartis? Our purpose is to reimagine medicine to improve and extend people’s lives and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people. It is our associates that drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture

Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Join our Novartis Network: If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together?
https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division Operations Business Unit CTS Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Technology Transformation Job Type Full time Employment Type Regular Shift Work No
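The requirements above mention preparing Terraform scripts and running pipelines. A common pipeline step is gating on a plan export (`terraform show -json plan.out`); this hedged sketch checks a simplified plan structure for mandatory tags. The tag policy and the (much-flattened) plan shape are assumptions for illustration, not from the posting:

```python
# Assumed policy: every resource must carry these cost-allocation tags.
REQUIRED_TAGS = {"owner", "cost-center"}

def untagged_resources(plan: dict) -> list:
    """Return addresses of planned resources missing any required tag."""
    missing = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        if not REQUIRED_TAGS <= set(tags):
            missing.append(rc["address"])
    return missing

# Hand-built miniature plan standing in for `terraform show -json` output.
plan = {
    "resource_changes": [
        {"address": "aws_instance.web",
         "change": {"after": {"tags": {"owner": "sap-ops", "cost-center": "42"}}}},
        {"address": "aws_s3_bucket.logs",
         "change": {"after": {"tags": {"owner": "sap-ops"}}}},
    ]
}
print(untagged_resources(plan))  # ['aws_s3_bucket.logs']
```

In a real pipeline this check would run after `terraform plan` and fail the job (non-zero exit) when the list is non-empty; tools like OPA/Sentinel cover the same ground declaratively.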

Posted 15 hours ago

Apply

7.0 years

0 Lacs

India

On-site

Key Responsibilities:
Design and manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions)
Automate infrastructure provisioning (Terraform, Ansible, Pulumi)
Monitor and optimize cloud environments (AWS, GCP, Azure)
Implement containerization and orchestration (Docker, Kubernetes - EKS/GKE/AKS)
Maintain logging, monitoring, and alerting (ELK, Prometheus, Grafana, Datadog)
Ensure system security, availability, and performance tuning
Manage secrets and credentials (Vault, AWS Secrets Manager)
Troubleshoot infrastructure and deployment issues
Implement blue-green and canary deployments
Collaborate with developers to enhance system reliability and productivity

Required Qualifications & Skills:
7+ years in DevOps, SRE, or Infrastructure Engineering
Strong expertise in cloud (AWS/GCP/Azure) and Infrastructure-as-Code (Terraform, CloudFormation)
Proficient in Docker and Kubernetes
Hands-on with CI/CD tools and scripting (Bash, Python, or Go)
Strong knowledge of Linux, networking, and security best practices
Experience with monitoring and logging tools (ELK, Prometheus, Grafana)
Familiarity with GitOps, Helm charts, and automation

Preferred Skills:
Certifications (AWS DevOps Engineer, CKA/CKAD, Google Cloud DevOps Engineer)
Experience with multi-cloud, microservices, and event-driven systems
Exposure to AI/ML pipelines and data engineering workflows

Job Types: Full-time, Permanent
Benefits: Cell phone reimbursement, Health insurance, Provident Fund
Ability to commute/relocate: Palayam, Thiruvananthapuram, Kerala: Reliably commute or plan to relocate before starting work (Required)
Application Question(s): Current monthly salary? Lowest expected monthly salary? How early can you join?
Experience:
DevOps: 5 years (Required)
SRE: 5 years (Preferred)
Terraform: 4 years (Preferred)
Docker & Kubernetes: 4 years (Preferred)
CI/CD: 4 years (Preferred)
Linux: 5 years (Preferred)
Azure: 5 years (Required)
Language: Malayalam (Required)
Location: Palayam, Thiruvananthapuram, Kerala (Preferred)
Work Location: In person
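The responsibilities above include blue-green and canary deployments. A minimal, hedged sketch of the decision step in a canary rollout: route a slice of traffic to the new version and promote or roll back based on its observed error rate. The threshold is illustrative, not from the posting:

```python
def canary_decision(canary_errors: int, canary_requests: int,
                    max_error_rate: float = 0.01) -> str:
    """Decide the next step for a canary based on its error rate.

    Returns "hold" until there is traffic, "rollback" if the canary's
    error rate exceeds the threshold, otherwise "promote".
    """
    if canary_requests == 0:
        return "hold"  # no data yet; keep the canary at its current weight
    rate = canary_errors / canary_requests
    return "rollback" if rate > max_error_rate else "promote"

print(canary_decision(3, 1000))   # 0.3% error rate -> promote
print(canary_decision(25, 1000))  # 2.5% error rate -> rollback
```

Real systems (Argo Rollouts, Flagger, Spinnaker) compare the canary against the baseline over several analysis windows rather than a single snapshot, but the gate at each step reduces to this comparison.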

Posted 15 hours ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram

On-site

What you’ll do
Design, develop, test, deploy, maintain, and improve high-scale cloud-native applications across the full engineering stack using Java, Spring Boot, Angular, etc.
Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
Mentor high-performing software engineers to achieve business goals while reporting to a senior tech lead
Manage project priorities, deadlines, and deliverables
Participate in a tight-knit, globally distributed engineering team
Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and the impact on network or service operations and quality
Collaborate on scalability issues involving access to data and information
Actively participate in sprint planning, sprint retrospectives, and other team activities
Cloud certification strongly preferred

What experience you need
Bachelor's degree or equivalent experience
5+ years experience writing, debugging, and troubleshooting code in mainstream Java and Spring Boot
5+ years experience designing and developing microservices using Java, Spring Boot, GCP SDKs (or any cloud SDK), and GKE/Kubernetes
5+ years experience designing and developing cloud-native solutions
5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs
1+ years experience leading an engineering team

What could set you apart
Self-starter who identifies and responds to priority shifts with minimal supervision
Awareness of the latest technologies and trends
Good knowledge of software configuration management systems
UI development (e.g., HTML, JavaScript, Angular, and Bootstrap)
Source code control management systems (e.g., SVN/Git, GitHub) and build tools like Maven and Gradle
Agile environments (e.g., Scrum, XP)
Relational databases (e.g., SQL Server, MySQL)
Atlassian tooling (e.g., JIRA, Confluence, and GitHub)
Developing with a modern JDK (v1.7+)

Posted 15 hours ago

Apply

5.0 years

0 Lacs

Haryana

Remote

About Focuz Focuz is a well-funded early-stage startup on a mission to redefine video intelligence. We are building a next-generation platform that transforms raw video streams from any camera into structured, actionable insights. Our architecture is designed to be robust, scalable, and portable, capable of running both in the cloud and on-premise for our customers. We are a small, agile team of builders, and we're looking for foundational members who share our passion for solving hard problems in distributed systems and networking. About the Role As our foundational Senior DevOps Engineer, you will be the architect of our entire infrastructure, automation, and networking strategy. This is a high-impact role that goes far beyond traditional CI/CD and cloud management. Your primary mission will be to solve the complex networking challenges inherent in connecting thousands of on-premise devices to our cloud services, and to wrap those solutions in elegant, code-driven automation. You will be responsible for everything from deploying our core services to architecting the network pathways that allow real-time video to flow reliably across firewalls and private networks. You will own our "Infrastructure as Code" vision, our observability platform, and the deployment strategy for our on-site agents. If you are excited by the challenge of building a resilient, distributed system and are not afraid to dive deep into networking protocols, this role is for you. What You'll Do Lead our "Infrastructure as Code" (IaC) strategy from the ground up. You will write the code (using tools like Pulumi or Terraform) that defines and manages all our cloud and on-premise resources. Design and automate the deployment of critical networking infrastructure, including STUN/TURN servers for WebRTC NAT traversal and highly-available MQTT brokers for IoT communication. 
Solve real-world network complexity problems to ensure reliable connectivity between our on-premise agents and our cloud platform. Architect and implement a comprehensive observability platform. You will be responsible for gathering and correlating metrics, logs, and traces from every part of our system - from on-site camera stream health and network bandwidth to our cloud APIs and AI workflows. Develop and maintain robust CI/CD pipelines (e.g., using GitHub Actions) to automate the entire build, test, and deployment lifecycle. Own the containerization strategy for all our applications, creating efficient and secure Docker images designed to run on customer-managed hardware. Collaborate closely with the development team to diagnose connectivity issues, optimize data flow, and build a seamless, reliable developer experience. What you'll bring 5+ years of professional experience in a DevOps, SRE, or Infrastructure Engineering role with a strong software engineering mindset. Deep, hands-on expertise with Infrastructure as Code (IaC). You must have significant experience with modern IaC tools like Pulumi (highly preferred) or Terraform. A strong, practical understanding of networking principles and protocols (TCP/IP, UDP, DNS, firewalls, NAT). You must be able to reason about and solve complex connectivity issues. Expert-level knowledge of Docker and containerization. You should be comfortable deploying and managing Docker applications on bare-metal Linux servers. Proven experience building and maintaining CI/CD pipelines for a production environment. A strong passion for and practical experience with observability. You should have experience building monitoring and logging solutions (e.g., using the Prometheus/Grafana stack). Experience with a major cloud provider (AWS, GCP, or Azure). A proactive, ownership-driven mindset and the ability to drive complex technical projects from concept to completion. 
Nice to have Direct experience deploying and managing real-time communication infrastructure (STUN/TURN, MQTT, WebRTC gateways) is a massive advantage. Proficiency in a general-purpose programming language for infrastructure tasks (e.g., Python, Go, TypeScript), especially in the context of a tool like Pulumi. Experience with configuration management tools like Ansible. Familiarity with GitOps principles and tools (e.g., ArgoCD, Flux). What we offer A fully remote and flexible work environment. A foundational role with a massive impact on the product, architecture, and company culture. The opportunity to solve challenging networking and infrastructure problems that are core to our business. A key strategic position where you will build a modern, greenfield infrastructure with a "code-first" philosophy. #focuz
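The role above centres on keeping thousands of on-premise agents reliably connected to cloud brokers. One standard building block is capped exponential backoff for reconnects, so agents don't stampede the broker after an outage. A hedged sketch with illustrative values; real agents would add random jitter on top of each delay:

```python
def backoff_schedule(attempts: int, base: float = 1.0, cap: float = 60.0) -> list:
    """Delays (in seconds) before each of the first `attempts` reconnects.

    Doubles the wait each attempt, capped so agents never wait longer
    than `cap` seconds between tries.
    """
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

print(backoff_schedule(8))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

With jitter (e.g., `random.uniform(0, delay)` per attempt), a fleet that disconnects simultaneously spreads its reconnects across the window instead of hammering the STUN/TURN or MQTT endpoint in lockstep.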

Posted 15 hours ago

Apply

8.0 years

6 - 8 Lacs

Gurgaon

On-site

We are seeking a seasoned Site Reliability Engineer (SRE) with a solid background in payment systems and high-availability architectures. The ideal candidate will have hands-on experience managing large-scale, distributed systems in production, with a deep understanding of reliability, scalability, and performance tuning in the financial services or payments industry.

Key Responsibilities:
Design, build, and maintain scalable, resilient, and secure infrastructure for high-volume payment platforms
Ensure system uptime, reliability, and performance through effective monitoring, alerting, and incident response strategies
Collaborate with software engineering and DevOps teams to implement CI/CD pipelines and improve deployment efficiency
Automate infrastructure management tasks using Infrastructure-as-Code (IaC) tools (Terraform, Ansible, etc.)
Proactively identify and mitigate system bottlenecks, failures, and potential points of failure
Manage disaster recovery strategies, failover planning, and performance testing for critical payment services
Work with development teams to ensure services are designed for reliability, scalability, and observability from the ground up
Participate in root cause analysis and post-incident reviews to prevent future outages

Required Skills & Experience:
8+ years of overall experience in infrastructure engineering or SRE roles, with at least 3+ years in the payments/fintech domain
Strong understanding of payment protocols (UPI, IMPS, RTGS, NEFT, SWIFT, etc.) and transaction processing systems
Proven expertise in Linux systems administration, cloud platforms (AWS, GCP, or Azure), and container orchestration (Kubernetes)
Solid experience with monitoring/logging tools like Prometheus, Grafana, ELK Stack, Splunk, etc.
Proficiency in one or more scripting languages (Python, Shell, Go, etc.) for automation
Experience with incident management, SLAs, and system troubleshooting in high-pressure environments
Familiarity with security and compliance practices in the financial sector (e.g., PCI-DSS, ISO 27001). Preferred Qualifications: Previous experience supporting mission-critical applications in banking or financial services . Exposure to Kafka , Redis , or other real-time streaming and caching technologies. Experience with Site Reliability Engineering principles and implementing SLOs/SLIs . Understanding of the Error Budget (EL) concept and how it ties into availability and release decisions. Experience on any performance testing tool like K6, JMeter, LoadRunner . Familiarity with mocking tools like Mockito, WireMock, Microcks .

Posted 15 hours ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

StackBill is looking for a passionate and skilled Cloud Support Engineer to join our dynamic support team. This is an exciting opportunity to work on a leading Apache CloudStack-based cloud platform, supporting public cloud infrastructure across VMware and KVM environments. You’ll play a crucial role in maintaining high availability and performance for our clients, troubleshooting complex infrastructure issues, and contributing to a reliable and scalable cloud experience. 🔧 What You’ll Do Deliver L2/L3 technical support for customers on StackBill’s Apache CloudStack cloud platform Troubleshoot and resolve issues across compute, network, and storage layers (VMware ESXi, KVM, Ceph, NFS, etc.) Monitor system performance and ensure SLAs are consistently met Perform incident management, root cause analysis, and post-mortem reporting Collaborate with engineering teams to deploy, configure, and optimize CloudStack environments Guide customers on scaling strategies, best practices, and cloud performance tuning Create and maintain internal knowledge base and troubleshooting documentation 🧩 What You Bring 3–5 years of hands-on experience with VMware ESXi/vSphere and KVM Strong Linux administration skills (CentOS/Ubuntu) Good grasp of core networking: VLANs, VXLAN, SDN, Load Balancers, VPN Experience with storage technologies (Ceph, NFS, iSCSI) Sound troubleshooting abilities in complex infrastructure setups Familiarity with Apache CloudStack architecture is a strong advantage ⭐ Nice to Have Experience with DevOps tools: Ansible, Terraform, Jenkins, Git Familiarity with monitoring tools: Prometheus, Grafana, Zabbix Scripting (Bash/Python) for automation and orchestration Exposure to public cloud platforms (AWS, Azure, GCP) 🧠 Soft Skills Excellent written and verbal communication Strong analytical and problem-solving mindset A proactive, customer-first attitude Willingness to work in 24x7 rotational shifts

Posted 15 hours ago

Apply

0 years

2 - 7 Lacs

Haryāna

On-site

Job Description: MySQL DBA Lead (AWS/Azure Native MySQL) Role Overview: We are seeking an experienced MySQL DBA Lead with expertise in cloud-based database management (AWS, Azure) to lead and optimize our MySQL database environments. The ideal candidate will have extensive experience in MySQL performance tuning, high availability setups, backup and recovery strategies, and managing MySQL in cloud-native platforms like AWS RDS, Aurora, or Azure Database for MySQL. Key Responsibilities: Lead the design, implementation, and optimization of MySQL databases on AWS and Azure cloud environments. Manage cloud-native MySQL services such as AWS RDS, Aurora, and Azure Database for MySQL. Oversee database security, including user management, encryption, and backup strategies. Develop and implement performance tuning strategies, including query optimization, indexing, and hardware scaling. Design and manage high availability and disaster recovery strategies using replication, clustering, and automated backups. Automate routine DBA tasks using tools like Ansible, Python, or Shell scripting. Monitor MySQL database performance using cloud-native monitoring tools and third-party solutions. Troubleshoot and resolve database-related issues in a timely manner, ensuring high availability and minimal downtime. Lead and mentor a team of junior DBAs, ensuring effective collaboration with development and operations teams. Manage database migrations, upgrades, and capacity planning for future growth. Required Skills & Experience: Proven experience as a MySQL DBA, with a focus on cloud platforms like AWS (RDS, Aurora) and Azure (Azure Database for MySQL). Strong expertise in MySQL performance tuning, query optimization, and index management. Hands-on experience with high availability solutions (replication, clustering) and backup/recovery strategies. Expertise in cloud-native database management and deployment in AWS and Azure environments.
Proficient in database automation using scripting languages (Python, Bash, Ansible). Experience with monitoring tools (CloudWatch, Azure Monitor, Percona Monitoring, Nagios). Strong troubleshooting skills and ability to resolve complex database issues quickly. Experience with security management, including access control, encryption, and auditing. Familiarity with database migrations and upgrades in cloud environments. Preferred Qualifications: • MySQL certifications or cloud certifications (AWS Certified Database – Specialty, Azure Database certifications). Experience with Infrastructure as Code (Terraform, CloudFormation) for MySQL provisioning. Familiarity with DevOps and CI/CD processes in a database environment. Experience in managing MySQL in containerized environments (Docker, Kubernetes).
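As a flavor of the "automate routine DBA tasks" requirement above, here is a small, self-contained Python sketch that classifies replica lag into alert levels. The thresholds and function name are hypothetical, chosen only to illustrate the pattern of scripting a health-check decision:

```python
# Illustrative only: a tiny health-check helper of the kind a DBA might
# write when automating routine checks. Thresholds are hypothetical and
# would normally come from the team's alerting policy.

def classify_replication_lag(lag_seconds: float, warn_at: float = 30.0, page_at: float = 300.0) -> str:
    """Map a replica lag reading (in seconds) to an alert level."""
    if lag_seconds >= page_at:
        return "critical"   # page the on-call DBA
    if lag_seconds >= warn_at:
        return "warning"    # surface on a dashboard / ticket
    return "ok"
```

In practice the lag reading would be pulled from the replica's status output or a cloud monitoring metric and the result fed into the alerting pipeline.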

Posted 15 hours ago

Apply

7.0 - 9.0 years

0 Lacs

New Delhi, Delhi, India

On-site

The purpose of this role is to understand, model and facilitate change in a significant area of the business and technology portfolio either by line of business, geography or specific architecture domain whilst building the overall Architecture capability and knowledge base of the company. Job Description: Role Overview : We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs. Key Responsibilities : Data Engineering & Development : Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data. Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer. Develop and optimize data architectures that support real-time and batch data processing. Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build. Automate testing, integration, and deployment processes to ensure fast and reliable software delivery. Cloud Infrastructure Management : Manage and deploy GCP infrastructure components to enable seamless data workflows. Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices. 
Infrastructure Automation and Management: Design, deploy, and maintain scalable and secure infrastructure on GCP. Implement Infrastructure as Code (IaC) using tools like Terraform. Manage Kubernetes clusters (GKE) for containerized workloads. Collaboration and Stakeholder Engagement: Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals. Translate business requirements into scalable, technical solutions while collaborating with team members to ensure successful implementation. Quality Assurance & Optimization: Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations. Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines. Monitor and optimize pipeline performance to meet SLAs and minimize operational costs. Qualifications and Certifications: Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field. Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms. Proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer. Certifications: Google Cloud Professional Data Engineer certification preferred. Key Skills: Mandatory Skills: Advanced proficiency in Python for data pipelines and automation. Strong SQL skills for querying, transforming, and analyzing large datasets. Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine, and Kubernetes Engine (GKE). Hands-on experience with CI/CD tools such as Jenkins, GitHub, or Bitbucket.
Proficiency in Docker, Kubernetes, Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC) Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer Strong understanding of Agile/Scrum methodologies Nice-to-Have Skills: Experience with other cloud platforms like AWS or Azure. Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau). Understanding of machine learning workflows and their integration with data pipelines. Soft Skills : Strong problem-solving and critical-thinking abilities. Excellent communication skills to collaborate with technical and non-technical stakeholders. Proactive attitude towards innovation and learning. Ability to work independently and as part of a collaborative team. Location: Bengaluru Brand: Merkle Time Type: Full time Contract Type: Permanent
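To illustrate the kind of transform step the ETL/ELT pipelines described above apply, here is a small, self-contained Python sketch. The field names and cleaning rules are invented for illustration and are not tied to any specific GCP service or to this employer's schema:

```python
# Hypothetical transform step for a campaign-reporting pipeline:
# keep well-formed rows, normalize the campaign name, and derive a
# click-through-rate (CTR) field. All field names are illustrative.

def transform_campaign_rows(rows):
    """Filter and enrich raw campaign records before loading them."""
    out = []
    for row in rows:
        # Drop malformed rows rather than failing the whole batch.
        if not row.get("campaign") or row.get("impressions", 0) <= 0:
            continue
        clicks = row.get("clicks", 0)
        out.append({
            "campaign": row["campaign"].strip().lower(),
            "impressions": row["impressions"],
            "clicks": clicks,
            "ctr": clicks / row["impressions"],
        })
    return out
```

In a real pipeline this logic would typically run inside a Dataflow/Beam `DoFn` or a BigQuery SQL transform; the pure-function form shown here is what makes the step easy to unit-test.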

Posted 15 hours ago

Apply

5.0 years

9 - 13 Lacs

Gurgaon

On-site

Job Title: Senior Software Engineer – AI/ML (Tech Lead) Experience: 5+ Years Location: Gurugram Notice Period: Immediate Joiners Only Roles & Responsibilities Design, develop, and deploy robust, scalable AI/ML-driven products and features across diverse business verticals. Provide technical leadership and mentorship to a team of engineers, ensuring delivery excellence and skill development. Drive end-to-end execution of projects — from architecture and coding to testing, deployment, and post-release support. Collaborate cross-functionally with Product, Data, and Design teams to align technology efforts with product strategy. Build and maintain ML infrastructure and model pipelines, ensuring performance, versioning, and reproducibility. Lead and manage engineering operations — including monitoring, incident response, logging, performance tuning, and uptime SLAs. Take ownership of CI/CD pipelines, DevOps processes, and release cycles to support rapid, reliable deployments. Conduct code reviews, enforce engineering best practices, and manage team deliverables and timelines. Proactively identify bottlenecks or gaps in engineering or operations and implement process improvements. Stay current with trends in AI/ML, cloud technologies, and MLOps to continuously elevate team capabilities and product quality. Tools & Platforms Languages & Frameworks: Python, FastAPI, PyTorch, TensorFlow, Hugging Face Transformers MLOps & Infrastructure: MLflow, DVC, Airflow, Docker, Kubernetes, Terraform, AWS/GCP CI/CD & DevOps: GitHub, GitLab CI/CD, Jenkins Monitoring & Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Sentry Project & Team Management: Jira, Notion, Confluence Analytics: Mixpanel, Google Analytics Collaboration & Prototyping: Slack, Figma, Miro Job Type: Full-time Pay: ₹900,000.00 - ₹1,300,000.00 per year Application Question(s): How many years of experience do you have in developing AI/ML-based tools?
How many years of experience do you have in developing AI/ML projects? How many years of experience do you have in handling a team? Current CTC? Expected CTC? In how many days can you join us if shortlisted? Current location? Are you okay to work from the office (Gurugram, Sector 54)? Rate your English communication skills out of 10 (1 is lowest and 10 is highest)? Please mention all the tech skills that make you a fit for this role. Have you gone through the JD, and are you okay to perform all the roles and responsibilities? Work Location: In person

Posted 15 hours ago

Apply

0 years

2 - 4 Lacs

Gurgaon

On-site

DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. 
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. 10034 Key job responsibilities As an experienced technology professional, you will be responsible for: Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs Providing technical guidance and troubleshooting support throughout project delivery Collaborating with stakeholders to gather requirements and propose effective migration strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About the team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. 
Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. BASIC QUALIFICATIONS Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets Experience in mentoring junior team members, and guiding them on machine learning and data modeling applications PREFERRED QUALIFICATIONS AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both 
technical and non-technical audiences Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 15 hours ago

Apply

7.0 years

3 - 6 Lacs

Gurgaon

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. System Administrator - AI/ML Platform: We are looking for a detail-oriented and technically proficient AI/ML Cloud Platform Administrator to manage, monitor, and secure our cloud-based platforms supporting machine learning and data science workloads. This role requires deep familiarity with both AWS and Azure cloud services, and strong experience in platform configuration, resource provisioning, access management, and operational automation. You will work closely with data scientists, MLOps engineers, and cloud security teams to ensure high availability, compliance, and performance of our AI/ML platforms. Your responsibilities will include: Provision, configure, and maintain ML infrastructure on AWS (e.g., SageMaker, Bedrock, EKS, EC2, S3) and Azure (e.g., Azure Foundry, Azure ML, AKS, ADF, Blob Storage) Manage cloud resources (VMs, containers, networking, storage) to support distributed ML workflows Deploy and Manage the open source orchestration ML Frameworks such as LangChain and LangGraph Implement RBAC, IAM policies, Azure AD, and Key Vault configurations to manage secure access. Monitor security events, handle vulnerabilities, and ensure data encryption and compliance (e.g., ISO, HIPAA, GDPR) Monitor and optimize performance of ML services, containers, and jobs Set up observability stacks using Fiddler , CloudWatch, Azure Monitor, Grafana, Prometheus, or ELK . 
Manage and troubleshoot issues related to container orchestration (Docker, Kubernetes – EKS/AKS) Use Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Bicep to automate environment provisioning Collaborate with MLOps teams to automate deployment pipelines and model operationalization Implement lifecycle policies, quotas, and data backups for storage optimization Required Qualifications: Bachelor’s/Master’s in Computer Science, Engineering, or related discipline 7 years in cloud administration, with 2+ years supporting AI/ML or data platforms Proven hands-on experience with both AWS and Azure Proficient in Terraform, Docker, Kubernetes (AKS/EKS), Git, and Python or Bash scripting Security Practices: IAM, RBAC, encryption standards, VPC/network setup Requisition ID: 611331 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 15 hours ago

Apply

3.0 - 5.0 years

2 - 4 Lacs

Gurgaon

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com. Job Description: Senior Engineer - .NET with AWS. Mandatory: Microsoft .NET development technologies (C#, MVC, .NET 4, 4.5, 6/7, .NET Core), Web API, design patterns, knowledge of MS SQL Server and the ability to write stored procedures and debug SQL queries. Knowledge of full stack (frontend, backend, and database) is secondary. Good to Have: Knowledge of microservices and serverless architecture, AWS cloud-native development, and Terraform for AWS infrastructure deployments. Excellent communication and organizational skills. Drive to improve systems and processes through a sense of shared ownership. Performance parameters and measures: 1. Continuous Integration, Deployment & Monitoring of Software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan. 2. Quality & CSAT: on-time delivery, managing software, troubleshooting queries, customer experience, completion of assigned certifications for skill upgradation. 3. MIS & Reporting: 100% on-time MIS & report generation. Mandatory Skills: Full Stack Microsoft .NET Smart Web App. Experience: 3-5 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills.
We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 15 hours ago

Apply

0 years

6 - 12 Lacs

Delhi

Remote

Position- SRE Developer Exp- 10+ yrs Location- Remote Budget- 1.20 LPM Salary range: 1,00,000 INR Duration - 6 months (C2C) JD: Technical Skills: Programming: Proficiency in languages like Python, Bash, or Java is essential. Operating Systems: Deep understanding of Linux/Windows operating systems and networking concepts. Cloud Technologies: Experience with AWS & Azure, including services, architecture, and best practices. Containerization and Orchestration: Hands-on experience with Docker, Kubernetes, and related tools. Infrastructure as Code (IaC): Familiarity with tools like Terraform, CloudFormation, or Azure CLI. Monitoring and Observability: Experience with tools like Splunk, New Relic, or Azure Monitor. CI/CD: Experience with continuous integration and continuous delivery pipelines, GitHub, and GitHub Actions. Knowledge of supporting Azure ML, Databricks, and other related SaaS tools. Preferred Qualifications: Experience with specific cloud platforms (AWS, Azure). Certifications related to cloud engineering or DevOps. Experience with microservices architecture, including supporting AI/ML solutions. Experience with large-scale system management and configuration. Job Type: Full-time Pay: ₹50,000.00 - ₹100,000.00 per month Work Location: In person
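A recurring building block in the kind of SRE automation this role describes is capped exponential backoff when retrying cloud API calls or health probes. A minimal sketch follows; the function name and default parameters are illustrative, not a prescribed standard:

```python
# Hedged sketch: exponential backoff with a cap, a pattern that shows up
# constantly in SRE scripts that retry flaky cloud API calls. Real
# implementations usually also add random jitter to avoid thundering herds.

def backoff_delays(base: float = 1.0, cap: float = 30.0, attempts: int = 6):
    """Return the wait time before each retry: base * 2**n, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

# base=1, cap=30 gives [1, 2, 4, 8, 16, 30]
```

The caller would sleep for each delay in turn between attempts, giving up once the list is exhausted.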

Posted 15 hours ago

Apply

0 years

8 - 10 Lacs

Delhi Cantonment

Remote

ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. ABOUT THE TEAM Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business and enablement areas (e.g. Payments Services, Admin Services, Ongoing Monitoring, etc.). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. We work in small autonomous teams, grouped under common domains owning the full lifecycle of products and microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework. ABOUT THE ROLE As a Full Stack Engineer at Tide, you will be a key contributor to our engineering teams, working on designing, creating, and running the rich product catalogue across our business and enablement areas. 
You will have the opportunity to make a real difference by taking ownership of engineering practices and contributing to our event-driven Microservice Architecture, which currently consists of over 200 services owned by more than 40 teams. Design, build, run, and scale the services your team owns globally. You will define and maintain the services your team owns (you design it, you build it, you run it, you scale it globally) Work on both new and existing products, tackling interesting and complex problems. Collaborate closely with Product Owners to translate user needs, business opportunities, and regulatory requirements into well-engineered solutions. Define and maintain the services your team owns, exposing and consuming RESTful APIs with a focus on good API design. Learn and share knowledge with fellow engineers, as we believe in experimentation and collaborative learning for career growth. Have the opportunity to join our Backend and Web Community of Practices, where your input on improving processes and maintaining high quality will be valued. WHAT ARE WE LOOKING FOR A sound knowledge of a backend framework such as Spring/Spring Boot, with experience in writing microservices that expose and consume RESTful APIs. While Java experience is not mandatory, a willingness to learn is essential as most of our services are written in Java. Experience in engineering scalable and reliable solutions in a cloud-native environment, with a strong understanding of CI/CD fundamentals and practical Agile methodologies. Have some experience in web development, with a proven track record of building server-side applications, and detailed knowledge of the relevant programming languages for your stack. Strong knowledge of Semantic HTML, CSS3, and JavaScript (ES6). Solid experience with Angular 2+, RxJS, and NgRx. A passion for building great products in small, autonomous, agile teams. 
Experience building sleek, high-performance user interfaces and complex web applications that have been successfully shipped to customers. A mindset of delivering secure, well-tested, and well-documented software that integrates with various third-party providers. Solid experience using testing tools such as Jest, Cypress, or similar. A passion for automation tests and experience writing testable code. OUR TECH STACK Java 17 , Spring Boot and JOOQ to build the RESTful APIs of our microservices Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users) Angular 15+ (including NgRx and Angular Material) Nrwl Nx to manage them as mono repo Storybook as live components documentation Node.js, NestJs and PostgreSQL to power up the BFF middleware Contentful to provide some dynamic content to the apps Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform DataDog, ElasticSearch/Fluentd/Kibana, Semgrep, LaunchDarkly, and Segment to help us safely track, monitor and deploy GitHub with GitHub actions for Sonarcloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines WHAT YOU WILL GET IN RETURN Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their home country. Additionally, you can work from a different country for up to 90 days a year. Plus, you'll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 25 Annual leaves Family & Friendly Leaves TIDEAN WAYS OF WORKING At Tide, we're Member First and Data Driven, but above all, we're One Team. Our Working Out of Office (WOO) policy allows you to work from anywhere in the world for up to 90 days a year. 
We are remote first, but when you do want to meet new people, collaborate with your team, or simply hang out with your colleagues, our offices are always available and equipped to the highest standard. We offer flexible working hours and trust our employees to do their work well, at times that suit them and their team.

TIDE IS A PLACE FOR EVERYONE

At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran status, neurodiversity status, or disability status. We believe it's what makes us awesome at solving problems! We celebrate diversity in our workforce as a cornerstone of our success; our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Tide Website: https://www.tide.co/en-in/
Tide LinkedIn: https://www.linkedin.com/company/tide-banking/mycompany/

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.

Posted 15 hours ago


Exploring Terraform Jobs in India

Terraform, an open-source infrastructure-as-code (IaC) tool developed by HashiCorp, is gaining popularity in the tech industry, especially in DevOps and cloud computing. In India, demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles in infrastructure automation and cloud management built around the tool.
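For readers new to the tool, a minimal configuration illustrates Terraform's declarative style: you describe the desired infrastructure, and Terraform computes and applies the changes. The provider version, region, and bucket name below are illustrative only:

```hcl
# Minimal illustrative configuration; names and versions are examples, not recommendations
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # Mumbai region, as an example
}

# Declares desired state; Terraform reconciles real infrastructure to match it
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345" # hypothetical, must be globally unique
  tags = {
    ManagedBy = "Terraform"
  }
}
```

In a working directory containing this file, `terraform init` downloads the provider, `terraform plan` previews the changes, and `terraform apply` enacts them.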

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies with experience. Entry-level positions typically earn around INR 5-8 lakhs per annum, while professionals with several years of hands-on experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
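These skills tend to intersect inside Terraform itself: a single project often targets more than one cloud by declaring multiple providers. The provider versions and regions below are illustrative only:

```hcl
# Illustrative multi-cloud setup: AWS and Azure providers pinned in one project
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # example region
}

provider "azurerm" {
  features {} # required block, even when empty
}

# Docker and Kubernetes typically enter via managed clusters (EKS, AKS)
# provisioned from modules, with scripting used for glue and automation.
```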

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
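Several of the questions above (remote state, backends, sensitive data) can be answered concretely with a short configuration sketch. The bucket, table, and variable names here are hypothetical:

```hcl
# Hypothetical remote-state backend: state stored in S3, locked via DynamoDB
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # hypothetical bucket name
    key            = "prod/network.tfstate" # path of the state object
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"      # enables state locking
    encrypt        = true                   # encrypt state at rest
  }
}

# Sensitive input: the value is redacted from `plan`/`apply` output
variable "db_password" {
  type      = string
  sensitive = true
}
```

Note that `sensitive = true` only redacts the value from CLI output; it is still stored in the state file, which is why the backend enables encryption and why access to remote state must itself be tightly controlled.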

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out from the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
