
136 CloudWatch Jobs

JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

2.0 - 7.0 years

5 - 15 Lacs

Chennai

Work from Office

About the Role
We are seeking a proactive and experienced DevOps Engineer to manage and scale our new AWS-based cloud architecture. This role is central to building a secure, fault-tolerant, and highly available environment that supports our Sun, Drive, and Comm platforms. You'll play a critical role in automation, deployment pipelines, monitoring, and cloud cost optimization.

Key Responsibilities
- Design, implement, and manage infrastructure using AWS services across multiple Availability Zones.
- Maintain and scale EC2 Auto Scaling Groups, ALBs, and secure networking layers (VPC, public/private subnets).
- Manage API Gateways, Bastion Hosts, and secure SSH/VPN access for developers and administrators.
- Set up and optimize Aurora SQL clusters with multi-AZ active-active failover and backup strategies.
- Implement and maintain observability using CloudWatch for centralized logging, metrics, and alarms.
- Enforce infrastructure-as-code practices using Terraform/CloudFormation.
- Configure and maintain CI/CD pipelines (e.g., GitHub Actions, Jenkins, or CodePipeline).
- Ensure backup lifecycle management using S3 tiering and retention policies.
- Collaborate with engineering teams to enable DevSecOps best practices and drive automation.
- Continuously optimize infrastructure for performance, resilience, and cost (e.g., Savings Plans, S3 lifecycle policies).

Must-Have Skills
- Strong hands-on experience with AWS core services: EC2 (Linux and Windows), ALB, VPC, S3, Aurora (MySQL/PostgreSQL), CloudWatch, API Gateway, IAM, VPN.
- Deep understanding of multi-AZ, high-availability, and auto-healing architectures.
- Experience with CI/CD tools and scripting (Bash, Python, or Shell).
- Working knowledge of networking and cloud security best practices (Security Groups, NACLs, IAM roles).
- Experience with Bastion architecture, Client VPNs, Route 53, and VPC peering.
- Familiarity with backup/restore strategies and monitoring/logging pipelines.

Good-to-Have
- Exposure to containerization (Docker/ECS/EKS) or readiness for future CloudFront/ElastiCache integration.
- Knowledge of cost management strategies on AWS (e.g., billing reports, Trusted Advisor).

Why Join Us?
- Work on a mission-critical mobility platform with a growing user base.
- Be a key part of transforming our legacy systems into a modern, scalable infrastructure.
- Collaborative and fast-paced environment with real ownership.
- Opportunity to drive automation and shape future DevSecOps practices.
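
For readers unfamiliar with the CloudWatch alarm work this posting describes, here is a minimal boto3 sketch that alarms on high CPU for an Auto Scaling Group; the group name, region, threshold, and SNS topic ARN are illustrative placeholders, not details from the posting.

```python
import boto3

# A minimal sketch: alarm when an Auto Scaling Group's average CPU
# stays above 80% for two consecutive 5-minute periods.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",                      # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                                        # seconds per datapoint
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder ARN
    AlarmDescription="Average CPU > 80% for 10 minutes",
)
```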

Posted 6 hours ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Thoucentric, the consulting arm of Xoriant, a renowned digital engineering services company with 5,000 employees, is looking for a skilled Integration Consultant with 5 to 6 years of experience to join their team. As part of the consulting business of Xoriant, you will be involved in Business Consulting, Program & Project Management, Digital Transformation, Product Management, Process & Technology Solutioning, and Execution across functional areas such as Supply Chain, Finance & HR, and Sales & Distribution in the US, UK, Singapore, and Australia.

Your role will involve designing, building, and maintaining data pipelines and ETL workflows using tools like AWS Glue, CloudWatch, PySpark, APIs, SQL, and Python. You will be responsible for creating and optimizing scalable data pipelines, developing ETL workflows, analyzing and processing data, monitoring pipeline health, integrating APIs, and collaborating with cross-functional teams to provide effective solutions.

**Key Responsibilities**
- **Pipeline Creation and Maintenance:** Design, develop, and deploy scalable data pipelines, ensuring data accuracy and integrity.
- **ETL Development:** Create ETL workflows using AWS Glue and PySpark, adhering to data governance and security standards.
- **Data Analysis and Processing:** Write efficient SQL queries and develop Python scripts to automate data tasks.
- **Monitoring and Troubleshooting:** Use AWS CloudWatch to monitor pipeline performance and resolve issues promptly.
- **API Integration:** Integrate and manage APIs to connect external data sources and services.
- **Collaboration:** Work closely with cross-functional teams to understand data requirements and communicate effectively with stakeholders.

**Required Skills and Qualifications**
- **Experience:** 5-6 years
- **o9 Solutions platform experience is mandatory**
- Strong expertise in AWS Glue, CloudWatch, PySpark, Python, and SQL.
- Hands-on experience in API integration, ETL processes, and pipeline creation.
- Strong analytical and problem-solving skills.
- Familiarity with data security and governance best practices.

**Preferred Skills**
- Knowledge of other AWS services such as S3, EC2, Lambda, or Redshift.
- Experience with PySpark, APIs, SQL optimization, and Python.
- Exposure to data visualization tools or frameworks.

**Education**
- Bachelor's degree in Computer Science, Information Technology, or a related field.

In this role at Thoucentric, you will have the opportunity to define your career path, work in a dynamic consulting environment, collaborate with Fortune 500 companies and startups, and be part of a supportive working environment that encourages personal development. Join us in the exciting growth story of Thoucentric in Bangalore, India. (Posting Date: 05/22/2025)
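
As an illustration of the Glue/PySpark pipeline work described above, here is a minimal PySpark sketch of one ETL step; the bucket paths and column names are hypothetical, and a real Glue job would wrap similar logic in a GlueContext.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: deduplicate, drop invalid rows, stamp the load time.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingested_at", F.current_timestamp())
)

# Load: write partitioned Parquet back to the curated zone.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```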

Posted 12 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformations, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have:
- Experience with AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Experience with compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Experience with storage and container services such as ECS, S3, DynamoDB, and RDS.
- Experience with management and governance services such as KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Experience with analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Experience with solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets/good to have:
- AWS certification.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically, delivering right the first time within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset, to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 14 hours ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DataOps Engineer, you will play a crucial role within our data engineering team, operating at the intersection of software engineering, DevOps, and data analytics. Your primary responsibility will be creating and managing secure, scalable, and production-ready data pipelines and infrastructure that support advanced analytics, machine learning, and real-time decision-making capabilities for our clientele.

Key duties:
- Design, develop, and oversee the implementation of robust, scalable, and efficient ETL/ELT pipelines leveraging Python and contemporary DataOps methodologies.
- Incorporate data quality checks, pipeline monitoring, and error handling mechanisms.
- Construct data solutions using cloud-native services on AWS such as S3, ECS, Lambda, and CloudWatch.
- Containerize applications using Docker and orchestrate them via Kubernetes for scalable deployments.
- Use infrastructure-as-code tools and CI/CD pipelines to automate deployments effectively.
- Design and optimize data models using PostgreSQL, Redis, and PGVector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications.

Beyond the technical responsibilities, you will drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will also review pull requests (PRs), conduct code reviews, and uphold security and performance standards. Collaboration with product owners, analysts, and architects will be essential in refining user stories and technical requirements.

Requirements:
- At least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products.
- Proficiency in Python, Docker, Kubernetes, and AWS (specifically S3 and ECS).
- Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis; experience with PGVector is advantageous.
- Deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices.
- Experience working in Agile/Scrum environments with excellent collaboration and communication skills.
- A passion for developing clean, well-documented, and scalable code in a collaborative setting, along with familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines.
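
To make the "data quality checks and error handling" concrete, here is a small, hypothetical Python quality gate that a pipeline task could run before loading a batch; the column names and staging path are assumptions, and the non-zero exit code lets an orchestrator mark the step failed.

```python
import sys
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dataops.quality")

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Return the list of failed checks for a batch (empty list = pass)."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():        # hypothetical key column
        failures.append("duplicate order_id values")
    if (df["amount"] < 0).any():                 # hypothetical measure column
        failures.append("negative amounts")
    return failures

if __name__ == "__main__":
    batch = pd.read_parquet("staging/orders.parquet")  # assumed staging file
    failed = quality_gate(batch)
    if failed:
        log.error("quality gate failed: %s", "; ".join(failed))
        sys.exit(1)  # non-zero exit fails the pipeline step
    log.info("quality gate passed (%d rows)", len(batch))
```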

Posted 16 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The ideal candidate for this role should possess the following technical skills:
- Proficiency in Java/J2EE, Spring/Spring Boot/Quarkus frameworks, microservices, Angular, Oracle, PostgreSQL, and MongoDB
- Experience with AWS services such as S3, Lambda, EC2, EKS, and CloudWatch
- Familiarity with event streaming using Kafka, Docker, and Kubernetes
- Knowledge of GitHub and experience with CI/CD pipelines

In addition to the above, it would be beneficial for the candidate to also have the following technical skills:
- Hands-on experience with cloud platforms like AWS, Azure, or GCP
- Understanding of CI/CD pipelines and tools like Jenkins and GitLab CI/CD
- Familiarity with monitoring and logging tools such as Prometheus and Grafana

Overall, the successful candidate will have a strong technical background across various technologies and platforms, along with the ability to adapt to new tools and frameworks as needed.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Manager at Autodesk, you will lead the BI and Data Engineering Team to develop and implement business intelligence solutions. Your role is crucial in empowering decision-makers through trusted data assets and scalable self-serve analytics. You will oversee the design, development, and maintenance of data pipelines, databases, and BI tools to support data-driven decision-making across the CTS organization. Reporting to the leader of the CTS Business Effectiveness department, you will collaborate with stakeholders to define data requirements and objectives.

Your responsibilities will include leading and managing a team of data engineers and BI developers, fostering a collaborative team culture, managing data warehouse plans, ensuring data quality, and delivering impactful dashboards and data visualizations. You will also collaborate with stakeholders to translate technical designs into business-appropriate representations, analyze business needs, and create data tools for analytics and BI teams. Staying up to date with data engineering best practices and technologies is essential to keep the company ahead of the industry.

To qualify for this role, you should have 3 to 5 years of experience managing data teams and a BA/BS in Data Science, Computer Science, Statistics, Mathematics, or a related field. Proficiency in Snowflake, Python, SQL, Airflow, Git, and big data environments like Hive, Spark, and Presto is required. Experience with workflow management, data transformation tools, and version control systems is preferred. Additionally, familiarity with Power BI, the AWS environment, Salesforce, and remote team collaboration is advantageous. The ideal candidate is a data ninja and leader who can derive insights from disparate datasets, understand Customer Success, tell compelling stories using data, and engage business leaders effectively.

At Autodesk, we are committed to creating a culture where everyone can thrive and realize their potential. Our values and ways of working help our people succeed, leading to better outcomes for our customers. If you are passionate about shaping the future and making a meaningful impact, join us in our mission to turn innovative ideas into reality.

Autodesk offers a competitive compensation package based on experience and location. In addition to base salaries, we provide discretionary annual cash bonuses, commissions, stock grants, and a comprehensive benefits package. If you are interested in a sales career at Autodesk or want to learn more about our commitment to diversity and belonging, please visit our website for more information.

Posted 1 day ago

Apply

6.0 - 11.0 years

18 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & Responsibilities

JD for Java + React + AWS

Experience: 6-10 years

Required Skills: Java, Spring, Spring Boot, React, microservices, JMS, ActiveMQ, Tomcat, Maven, GitHub, Jenkins, Linux/Unix, Oracle and PL/SQL, AWS EC2, S3, API Gateway, Lambda, Route 53, Secrets Manager, CloudWatch

Nice-to-have skills:
- Experience with rewriting legacy Java applications using Spring Boot & React
- Building serverless applications
- Ocean Shipping domain knowledge
- AWS CodePipeline

Responsibilities:
- Develop and implement front-end and back-end solutions using Java, Spring, Spring Boot, React, microservices, Oracle, PL/SQL, and AWS services.
- Work with business users to define processes and translate them into technical specifications.
- Design and develop user-friendly interfaces and ensure seamless integration between front-end and back-end components.
- Write efficient code following best practices and coding standards.
- Perform thorough testing and debugging of applications to ensure high-quality deliverables.
- Optimize application performance and scalability through performance tuning and code optimization techniques.
- Integrate third-party APIs and services to enhance application functionality.
- Build serverless applications.
- Deploy applications in the AWS environment.
- Perform code reviews.
- Pick up the production support engineer role when needed.
- Demonstrate an excellent grasp of application security concerns and remediation techniques.
- Bring a well-rounded technical background in current web and microservice technologies.
- Serve as an expert resource for architects in the development of target architectures, ensuring they can be properly designed and implemented through best practices.
- Work effectively in a fast-paced environment.
- Stay updated with the latest industry trends and emerging technologies to continuously improve skills and knowledge.

Posted 1 day ago

Apply

6.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

This is a job opportunity for a Back End Engineer position requiring 6-14 years of experience in technologies such as Java, Spring Boot, microservices, Python, AWS or cloud-native deployment, EventBridge, API Gateway, DynamoDB, and CloudWatch. The ideal candidate should have at least 7 years of experience in these technologies and be comfortable working with complex code and requirements.

The essential functions of this position involve working with a tech stack that includes Java, Spring Boot, microservices, Python, AWS, EventBridge, API Gateway, DynamoDB, and CloudWatch. The qualifications required for this role include expertise in Spring Boot (annotations, autowiring with reflection, Spring starters, auto-configuration vs. configuration), CI/CD tools, Gradle or Maven knowledge, Docker, containers, scale-up and scale-down, health checks, distributed tracing, exception handling in microservices, lambda expressions, threads, and streams.

Candidates with knowledge of GraphQL, prior experience working on projects with a lot of PII data, or experience in the financial services industry are preferred.

The job offers an opportunity to work on bleeding-edge projects and collaborate with a highly motivated team, along with a competitive salary, a flexible schedule, a benefits package including medical insurance, sports, and corporate social events, professional development opportunities, and a well-equipped office.

Grid Dynamics (NASDAQ: GDYN) is the company offering this job opportunity. They are a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. With a focus on solving technical challenges and enabling positive business outcomes for enterprise companies undergoing business transformation, Grid Dynamics has expertise in enterprise AI, data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

As an AWS Cloud Engineer at our company based in Kerala, you will play a crucial role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure solutions on AWS. Your primary responsibility will be to collaborate closely with developers, DevOps engineers, and security teams to support cloud-native applications and business services.

Key responsibilities:
- Design, deploy, and maintain cloud infrastructure using AWS services such as EC2, S3, RDS, Lambda, and VPC.
- Build and manage CI/CD pipelines and automate infrastructure provisioning using tools like Terraform or AWS CloudFormation.
- Monitor and optimize cloud resources through CloudWatch, CloudTrail, and other third-party tools.
- Manage user permissions and security policies using IAM, ensure compliance, and implement backup and disaster recovery plans.
- Troubleshoot infrastructure issues and respond to incidents promptly.
- Stay updated with AWS best practices and new service releases to enhance our overall cloud infrastructure.

Requirements:
- A minimum of 3 years of hands-on experience with AWS cloud services.
- A solid understanding of networking, security, and Linux system administration.
- Experience with DevOps practices and Infrastructure as Code (IaC).
- Proficiency in scripting languages such as Python and Bash.
- Familiarity with containerization tools like Docker and Kubernetes (EKS preferred).
- An AWS certification (e.g., AWS Solutions Architect Associate or higher) would be advantageous.

Nice to have:
- Experience with multi-account AWS environments.
- Exposure to serverless architecture (Lambda, API Gateway, Step Functions).
- Familiarity with cost optimization and the Well-Architected Framework.
- Previous experience in a fast-paced startup or SaaS environment.

Your expertise in AWS CloudFormation, Kubernetes (EKS), AWS services (EC2, S3, RDS, Lambda, VPC), CloudTrail, scripting (Python, Bash), CI/CD pipelines, CloudWatch, Docker, IAM, and Terraform will be invaluable in fulfilling the responsibilities of this role effectively.
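
As a sketch of the backup-retention side of this role, here is one way to express S3 tiering and expiry with boto3; the bucket name, prefix, and day counts are illustrative assumptions, not values from the posting.

```python
import boto3

s3 = boto3.client("s3")

# Tier backups to cheaper storage over time, then expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",           # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```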

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence.

You will design, build, and optimize cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems, implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap.

Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and using your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices.

To be successful in this role, you should have relevant experience (8-12 years) and a bachelor's degree in computer science or its equivalent. You should possess the ability to design and implement scalable solutions, hands-on experience in cloud (preferably AWS), Infrastructure as Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience in building cloud, big data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale.

Additional qualifications that would be advantageous include experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience in building or managing a cloud-based data platform. Cisco encourages individuals from diverse backgrounds to apply, as the company values the perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

Are you at an early stage of your career and looking to work in a practical domain? Do you have a desire to support scientists, researchers, students, and more by making relevant data accessible to them? If so, Elsevier is looking for individuals who bring accountability, innovation, and a strong sense of ownership to their work. We are committed to solving the world's most pressing information challenges and hiring individuals who truly care about delivering impactful solutions that benefit global research, healthcare, and education.

We are currently seeking a Senior Quality Engineer (SDET) who embodies our values of empowerment, collaboration, and continuous improvement. In this role, you will take end-to-end responsibility for product quality and serve as a Quality Engineering Subject Matter Expert. Your primary focus will be ensuring that our products meet the highest standards through strong technical practices, innovation, and proactive ownership.

As a Senior Quality Engineer, you will be responsible for developing and executing performance and automation tests. You will work closely with management to enhance quality and process standards, plan and execute effective test approaches, and ensure the on-time and efficient delivery of high-quality software products and/or data. This role requires an intermediate understanding of QA testing, including different testing methodologies and both legacy and new innovation/acquisition products.

Your responsibilities will include putting customers first by delivering solutions with reliability, performance, and usability at their core. You will own the quality lifecycle of product features, collaborate with cross-functional teams, drive test strategy, automation, and continuous validation across UI, API, and data layers, establish actionable metrics and feedback loops, champion the use of smart automation tools, mentor and uplift the quality engineering team, and drive continuous improvement through retrospectives and team ceremonies.

To qualify for this role, you should have a Bachelor's degree in Computer Science, any engineering discipline, or a related field, along with 6+ years of experience in software testing and automation in agile environments. You should have a deep understanding of testing strategies, QA patterns, and cloud-native application architectures, as well as hands-on knowledge of programming languages such as JavaScript, Python, or Java. Experience with UI automation frameworks, observability tools, API testing tools, CI/CD tools, and performance/load testing tools is also required.

At Elsevier, we promote a healthy work/life balance and provide various well-being initiatives to help you meet both your immediate responsibilities and long-term goals. We offer comprehensive benefits to support your health and well-being, including health insurance, life insurance, flexible working arrangements, employee assistance programs, medical screenings, family benefits, paid time-off options, and more.

Join us at Elsevier, a global leader in information and analytics, where your work contributes to addressing the world's grand challenges and fostering a more sustainable future. We harness innovative technologies to support science and healthcare, partnering for a better world.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 2 to 4 years of experience in Python scripting. Experience with SQL in Athena on AWS for querying and analyzing large datasets is also required. You should be proficient in SQL programming using Microsoft SQL Server, with the ability to create complex stored procedures, triggers, functions, views, and more. Experience with Crystal Reports and Jasper Reports would be an added advantage, and knowledge of AWS Lambda and CloudWatch Events is a plus.

The ideal candidate should be able to work both independently and collaboratively with teams. Good communication skills are essential for this role. Knowledge of ASP.NET and GitHub would be considered a plus.

Qualifications:
- UG: B.Tech/B.E. in any specialization
- PG: MCA in any specialization

Experience: 2 to 4 years
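
For context on the "SQL in Athena" requirement, here is a minimal boto3 sketch that runs an Athena query and prints the result rows; the region, database, table, and output bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Start the query; Athena writes results to the given S3 location.
qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM events GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},              # assumed database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; production code would add a timeout/backoff.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```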

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As an Automation Software Engineer, you will play a crucial role in designing and implementing robust automation frameworks and tools using your strong Python expertise and deep understanding of software engineering principles. This is not a traditional QA role but rather a position that focuses on engineering solutions to replace manual testing workflows with automation, leveraging advanced Python capabilities and test automation best practices.

Your responsibilities will include developing, enhancing, and maintaining test automation frameworks for complex systems, with a particular emphasis on leveraging pytest to create modular, reusable, and scalable automated test solutions. You will also automate end-to-end testing of APIs, databases, and system integrations, using pytest fixtures and hooks effectively. Additionally, you will implement advanced Python automation techniques, collaborate with development and product teams, use pytest parametrization to enhance test coverage, work with CI/CD pipelines, troubleshoot and debug automation frameworks, and continuously improve test strategies and workflows.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Software Engineering, or a related field, along with at least 5 years of hands-on software development experience with a strong emphasis on Python. Proficiency in pytest, a deep understanding of Python's advanced features, experience with REST API testing, JSON schema validation, and HTTP protocols, as well as familiarity with RDBMS concepts and version control systems are required. Strong problem-solving skills, the ability to work effectively in a collaborative, Agile development environment, and experience with CI/CD tools are also essential.

Preferred skills for this role include experience with Python concurrency, familiarity with cloud platforms and related tools, and experience working with containerized environments and testing distributed systems effectively.

If you are looking for a challenging opportunity where you can leverage your software development skills and Python expertise to drive automation solutions in a collaborative environment, this role might be the perfect fit for you. Immediate joiners are preferred. This is a contract position with a duration of at least 6 months, located in Kochi, Trivandrum, or Chennai, with a requirement to work from the office from day one.
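
Since the posting leans on pytest fixtures and parametrization, here is a small sketch of that pattern; the base URL, token, and endpoints are hypothetical stand-ins for a real service under test.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

@pytest.fixture(scope="session")
def api_session():
    """One authenticated session shared across the whole test run."""
    session = requests.Session()
    session.headers["Authorization"] = "Bearer <token>"  # placeholder credential
    yield session
    session.close()

@pytest.mark.parametrize(
    ("endpoint", "expected_status"),
    [
        ("/health", 200),
        ("/orders/unknown-id", 404),  # negative case reuses the same test body
    ],
)
def test_endpoint_status(api_session, endpoint, expected_status):
    response = api_session.get(f"{BASE_URL}{endpoint}", timeout=10)
    assert response.status_code == expected_status
```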

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are looking for a highly skilled and detail-oriented Technical Business Analyst to join our team in Bengaluru, India. As a Technical Business Analyst, you will be responsible for bridging the gap between business objectives and technical implementation. Your main tasks will include gathering and analyzing business requirements, translating them into technical specifications, and ensuring successful project execution. You will work closely with cross-functional teams, including business stakeholders, software developers, and quality assurance professionals, to deliver high-quality technical solutions, with a focus on cloud computing, infrastructure, AWS services like CloudWatch and SNS, and the AWS Cloud Development Kit (CDK).

In this role, your responsibilities will include:

Requirement Gathering and Analysis:
- Collaborating with business stakeholders to understand their needs, objectives, and challenges.
- Analyzing business processes and systems to identify improvement opportunities.
- Documenting and prioritizing business requirements, ensuring clarity and alignment with organizational goals.
- Performing gap analysis to propose solutions for discrepancies between current and desired states.

Technical Solution Design:
- Translating business requirements into clear technical specifications.
- Working with developers and architects to design scalable technical solutions.
- Ensuring proposed solutions align with industry best practices and security standards.
- Understanding and utilizing AWS services like CloudWatch and SNS for system monitoring and notifications.

Project Management and Coordination:
- Defining project scope, timelines, and resource requirements with project managers.
- Creating project plans, monitoring progress, and managing risks.
- Facilitating communication between business stakeholders and technical teams.
- Conducting status meetings and providing updates for project success.

Testing and Quality Assurance:
- Developing and executing test plans to validate technical solutions.
- Collaborating with QA professionals to identify and resolve defects.
- Testing cloud-based infrastructure and AWS services for functionality and reliability.
- Supporting user acceptance testing and production deployments.

Documentation and Training:
- Creating and maintaining comprehensive documentation and training materials.
- Documenting cloud computing infrastructure configurations and CDK deployments.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Business Administration, or a related field.
- Proven experience as a Business Analyst with a technical project focus.
- Strong analytical and problem-solving skills.
- Understanding of SDLC methodologies and Agile frameworks.
- Proficiency in requirements gathering, process modeling, and documentation.
- Experience with cloud computing concepts, particularly AWS.
- Familiarity with AWS services like CloudWatch, SNS, and the CDK.
- Knowledge of infrastructure-as-code principles and tools.
- Effective communication and collaboration skills.
- Experience with data analysis, SQL, and software testing.
- Familiarity with project management tools.
- Excellent written and verbal communication skills.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Node.js Developer at Fitelo, a fast-growing health and wellness platform, you will play a crucial role in leading the data strategy. Collaborating with a team of innovative thinkers, front-end experts, and domain specialists, you will be responsible for designing robust architectures, implementing efficient APIs, and ensuring that our systems are both lightning-fast and rock-solid. Your role goes beyond mere coding; it involves shaping the future of health and wellness technology by crafting elegant solutions, thinking creatively, and making a significant impact on our platform.

Your responsibilities will include taking complete ownership of designing, developing, deploying, and maintaining server-side components and APIs using Node.js. You will manage the database operations lifecycle with MongoDB and PostgreSQL, collaborate with front-end developers for seamless integration, optimize application performance and scalability, and implement security protocols to safeguard data integrity. Additionally, you will oversee the entire development process, conduct code reviews, maintain documentation, research and integrate new technologies, and drive collaboration across teams to ensure successful project delivery.

The ideal candidate will have at least 3 years of experience in backend development, primarily with Node.js. Advanced proficiency in JavaScript and TypeScript, along with experience in frameworks like Express.js or Nest.js, is required. A strong understanding of asynchronous programming, event-driven architecture, SQL and NoSQL databases, RESTful APIs, GraphQL services, microservices architecture, and front-end integration is essential. Proficiency in version control tools, CI/CD pipelines, and cloud platforms, as well as problem-solving skills, debugging, and testing frameworks, are also key qualifications for this role.

If you are passionate about technology, enjoy crafting innovative solutions, and want to contribute to the future of health and wellness tech, we welcome you to join our team at Fitelo.

Qualifications:
- Bachelor's degree in technology

This is a full-time position with a day shift schedule, based in Gurugram.

Posted 5 days ago

Apply

1.0 - 5.0 years

0 Lacs

Kochi, Kerala

On-site

As a Java Backend Developer in our IoT domain team based in Kochi, you will be responsible for designing, developing, and deploying scalable microservices using Spring Boot, SQL databases, and AWS services. Your role will involve leading the backend development team, implementing DevOps best practices, and optimizing cloud infrastructure.

Key responsibilities:
- Architect and implement high-performance, secure backend services using Java (Spring Boot).
- Develop RESTful APIs and event-driven microservices with a focus on scalability and reliability.
- Design and optimize SQL databases (PostgreSQL, MySQL).
- Deploy applications on AWS using services like ECS, Lambda, RDS, S3, and API Gateway.
- Implement CI/CD pipelines, monitor and improve backend performance, and ensure security best practices and authentication using OAuth, JWT, and IAM roles.

Required skills:
- Java (Spring Boot, Spring Cloud, Spring Security), microservices architecture, and API development
- SQL (PostgreSQL, MySQL) and ORM (JPA, Hibernate)
- DevOps tools (Docker, Kubernetes, Terraform, CI/CD, GitHub Actions, Jenkins)
- AWS cloud services (EC2, Lambda, ECS, RDS, S3, IAM, API Gateway, CloudWatch)
- Messaging systems (Kafka, RabbitMQ, SQS, MQTT)
- Testing frameworks (JUnit, Mockito, integration testing)
- Logging and monitoring tools (ELK Stack, Prometheus, Grafana)

Preferred skills:
- Experience in the IoT domain
- Work experience in startups
- Event-driven architecture using Apache Kafka
- Knowledge of Infrastructure as Code (IaC) with Terraform
- Exposure to serverless architectures

In return, we offer a competitive salary, performance-based incentives, the opportunity to lead and mentor a high-performing tech team, hands-on experience with cutting-edge cloud and microservices technologies, and a collaborative and fast-paced work environment. If you have experience in the IoT domain and are looking for a full-time role with a day shift schedule in an in-person work environment, we encourage you to apply for this exciting opportunity in Kochi.

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chandigarh

On-site

As a DevOps Engineer, you will be responsible for designing, implementing, and managing CI/CD pipelines to streamline software development and deployment processes. Your role will involve overseeing Jenkins management for continuous integration and automation, as well as deploying and managing cloud infrastructure using AWS services. Additionally, you will configure and optimize brokers such as RabbitMQ, Kafka, or similar messaging systems to ensure efficient communication between microservices. Monitoring, troubleshooting, and enhancing system performance, security, and reliability will also be key aspects of your responsibilities. Effective collaboration with developers, QA, and IT teams is essential to optimize development workflows.

Requirements:
- An AWS certification (preferably AWS Certified DevOps Engineer, Solutions Architect, or equivalent).
- Strong experience in CI/CD pipeline automation using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
- Proficiency in Jenkins management, including installation, configuration, and troubleshooting.
- Knowledge of brokers for messaging and event-driven architectures.
- Hands-on experience with containerization tools like Docker.
- Proficiency in scripting and automation (e.g., Python, Bash).
- Experience with monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or CloudWatch.
- An understanding of networking, security, and cloud best practices.

Preferred skills include experience in mobile and web application development environments and familiarity with Agile and DevOps methodologies.

This is a full-time position with benefits such as paid sick time, paid time off, and a performance bonus. The work schedule is the day shift, Monday to Friday, and the work location is in person.

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will be joining Wipro Limited, a leading technology services and consulting company focused on developing innovative solutions to meet the complex digital transformation needs of clients. With a vast portfolio of capabilities in consulting, design, engineering, and operations, you will play a crucial role in helping clients achieve their ambitious goals and build sustainable businesses. Wipro has a global presence with over 230,000 employees and business partners in 65 countries, committed to supporting customers, colleagues, and communities in navigating the ever-changing business landscape. For more information, please visit www.wipro.com.

As a Technical Lead specializing in AWS and DevOps, you are expected to have the following skills and certifications:
- Proficiency in Terraform, AWS, and DevOps
- AWS Certified Solutions Architect - Associate
- AWS Certified DevOps Engineer - Professional

Your responsibilities as an AWS/DevOps Analyst will include:
- More than 6 years of IT experience
- Setting up and maintaining ECS solutions
- Designing and building AWS solutions using services such as VPC, EC2, WAF, ECS, ALB, IAM, KMS, ACM, Secrets Manager, S3, and CloudFront
- Working with SNS, SQS, and EventBridge
- Setting up and maintaining databases such as RDS, Aurora DB, Postgres DB, DynamoDB, and Redis
- Configuring AWS Glue jobs and AWS Lambda
- Establishing CI/CD pipelines using Azure DevOps
- Using GitHub for source code management
- Building and maintaining cloud-native applications
- Working with container technologies like Docker
- Configuring logging and monitoring solutions like CloudWatch and OpenSearch
- Managing system configurations using Terraform and Terragrunt
- Ensuring system security through best-in-class security solutions
- Identifying and recommending process and architecture improvements
- Troubleshooting distributed systems effectively

In addition to technical skills, the following interpersonal skills are essential:
- Strong communication and collaboration abilities
- A team-player mentality
- Analytical and problem-solving skills
- Familiarity with Agile methodologies
- The ability to train others on procedural and technical topics

Mandatory Skills: Cloud App Dev Consulting
Experience Range: 5-8 Years

At Wipro, we are reinventing ourselves to meet the demands of the digital age. We are looking for individuals who are inspired by reinvention and committed to continuous personal and professional growth. Join us in building a modern Wipro that is a pioneer in digital transformation. We encourage individuals with disabilities to apply and be a part of our purpose-driven organization.

Posted 5 days ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

YASH Technologies is a leading technology integrator that specializes in assisting clients in reimagining operating models, enhancing competitiveness, optimizing costs, fostering exceptional stakeholder experiences, and driving business transformation. As part of our team, you will work with cutting-edge technologies alongside a group of talented individuals who are dedicated to making a positive impact in an increasingly virtual world.

We are currently seeking DevOps professionals to join us in various locations, including Hyderabad, Pune, and Indore. In this role, you will be responsible for designing, implementing, and managing robust and scalable cloud infrastructure on AWS and Azure, while adhering to best practices for security and cost optimization. You will also develop and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or Azure DevOps, automating build, test, and deployment processes. Additionally, you will implement and manage infrastructure as code (IaC) using tools such as Terraform, CloudFormation, or ARM Templates, as well as containerization and orchestration platforms like Docker and Kubernetes. Monitoring and logging systems, security best practices, and collaboration with development teams to troubleshoot and resolve issues will also be key aspects of the role.

To be successful in this position, you should have 6-8 years of experience in DevOps engineering or a related field, a strong understanding of cloud computing concepts, and proficiency in scripting languages like Python and Bash. Experience with CI/CD tools, IaC tools, containerization and orchestration technologies, monitoring and logging tools, networking concepts, and Linux/Unix operating systems is also required. Knowledge of security best practices in cloud environments, excellent communication skills, and strong problem-solving abilities are essential. Preferred skills include experience with serverless technologies, Agile methodologies, and certifications in AWS or Azure.

At YASH, we offer a supportive and inclusive team environment where you can create a career path that aligns with your goals. Our Hyperlearning workplace is built upon principles of flexibility, agility, support, and stability, ensuring that you have the resources and opportunities needed to succeed in a rapidly evolving industry.

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Tech Lead, Data Architecture at Fiserv, you will play a crucial role in our data warehousing strategy and implementation. Your responsibilities will include designing, developing, and leading the adoption of Snowflake-based solutions to ensure efficient and secure data systems that drive our business analytics and decision-making processes. Collaborating with cross-functional teams, you will define and implement best practices for data modeling, schema design, and query optimization in Snowflake. Additionally, you will develop and manage ETL/ELT workflows to ingest, transform, and load data from various sources into Snowflake, integrating data from diverse systems such as databases, APIs, flat files, and cloud storage.

Monitoring and tuning Snowflake performance, you will manage caching, clustering, and partitioning to enhance efficiency while analyzing and resolving query performance bottlenecks. You will work closely with data analysts, data engineers, and business users to understand reporting and analytics needs, ensuring seamless integration with BI tools like Power BI. Your role will also involve collaborating with the DevOps team on automation, deployment, and monitoring, as well as planning and executing strategies for scaling Snowflake environments as data volume grows. Keeping up to date with emerging trends and technologies in data warehousing and data management is essential, along with providing technical support, troubleshooting, and guidance to users accessing the data warehouse.

To be successful in this role, you must have 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica. Experience with monitoring tools like Dynatrace and Splunk, Kubernetes cluster management, and Linux OS is required. Additionally, familiarity with containerization technologies, cloud services, and CI/CD pipelines, as well as banking or financial services experience, would be advantageous.

Thank you for considering employment with Fiserv. To apply, please use your legal name, complete the step-by-step profile, and attach your resume. Fiserv is committed to diversity and inclusion and does not accept resume submissions from agencies outside of existing agreements. Beware of fraudulent job postings not affiliated with Fiserv, and protect your personal information and financial security.
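
As a sketch of the Snowflake ingestion work this role centers on, here is a minimal load using the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders, and COPY INTO is Snowflake's standard bulk-load path for staged files.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholders throughout; real credentials would come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345",
    user="ETL_USER",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Bulk-load staged Parquet files into a raw table.
    cur.execute("""
        COPY INTO raw.orders
        FROM @orders_stage
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    for result in cur.fetchall():  # one row of load status per staged file
        print(result)
finally:
    conn.close()
```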

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for working with the AWS CDK (using TypeScript) and CloudFormation templates to manage various AWS services such as Redshift, Glue, IAM roles, KMS keys, Secrets Manager, Airflow, SFTP, AWS Lambda, S3, and EventBridge. Your tasks will include executing grants, stored procedures, and queries, using Redshift Spectrum to query S3, defining execution roles, debugging jobs, creating IAM roles with fine-grained access, integrating and deploying services, managing KMS keys, configuring Secrets Manager, creating Airflow DAGs, executing serverless AWS Lambda functions, debugging Lambda functions, managing S3 object storage (including lifecycle configuration, resource-based policies, and encryption), and setting up event triggers using EventBridge rules with Lambda.

You should have knowledge of the AWS Redshift SQL workbench for executing grants and a strong understanding of networking concepts, security, and cloud architecture. Experience with monitoring tools like CloudWatch and familiarity with containerization tools like Docker and Kubernetes would be beneficial. Strong problem-solving skills and the ability to thrive in a fast-paced environment are essential.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. With a global team of 27,000 professionals, Virtusa is committed to supporting your growth by providing exciting projects, opportunities to work with cutting-edge technologies, and a collaborative team environment that encourages the exchange of ideas and excellence. At Virtusa, you will have the chance to work with great minds and unleash your full potential in a dynamic and innovative workplace.
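
The posting's CDK work is in TypeScript; to keep the examples on this page in its predominant language, here is the equivalent shape in the CDK's Python binding: a Lambda function triggered by an EventBridge schedule. The stack name, handler layout, and schedule are hypothetical.

```python
from aws_cdk import App, Stack, Duration, aws_lambda as _lambda, aws_events as events, aws_events_targets as targets
from constructs import Construct

class NightlyJobStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda whose code lives in ./lambda/index.py (assumed layout).
        fn = _lambda.Function(
            self, "NightlyFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(5),
        )

        # EventBridge rule: run every night at 02:00 UTC.
        rule = events.Rule(
            self, "NightlySchedule",
            schedule=events.Schedule.cron(hour="2", minute="0"),
        )
        rule.add_target(targets.LambdaFunction(fn))

app = App()
NightlyJobStack(app, "NightlyJobStack")
app.synth()
```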

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As a Lead Engineer, you will be responsible for designing, analyzing, developing, and deploying new features for the product. Your role will involve managing tasks in a sprint, reviewing the code of team members, and ensuring first-time quality of code. You will actively participate in Agile ceremonies such as sprint planning, story grooming, daily scrums, retrospective meetings, and sprint reviews.

Your responsibilities will include connecting with stakeholders to understand requirements and producing technical specifications based on business needs. You will write clean, well-designed code and follow technology best practices, including a modern agile-based development process with automated unit testing. You will take complete ownership of tasks and user stories committed by yourself or the team. It is essential to understand and adhere to the development processes agreed upon at the organization/client level, actively participating in optimizing and evolving these processes for improved project execution.

Your role will involve understanding user stories, translating them into technical specifications, and converting them into working code. You will troubleshoot, test, and maintain core product software and databases to ensure strong optimization and functionality, contributing to all phases of the development lifecycle. You should stay updated on industry trends and tools, pilot them, and ensure that the team can scale up technically to adopt best practices over time. Initiative in suggesting and implementing best practices in respective technology areas is encouraged.

Required skills and experience:
- Expertise in developing Java frameworks with an RDBMS or NoSQL database back end.
- Strong skills in Java, REST, Spring Boot, and microservices.
- Proven expertise in Java 21, Spring Boot MVC, Spring Data, Hibernate, PostgreSQL, and REST APIs.
- Knowledge of object-oriented concepts and design patterns.
- Exposure to the AWS ecosystem and services, along with experience in Docker, AWS S3, AWS Secrets Manager, and CloudWatch.
- Understanding of Angular concepts and exposure to web and JavaScript technologies is advantageous.
- Experience in writing unit test cases using Jasmine/Karma or Jest is a plus.
- Demonstrated willingness to learn and develop with new/unfamiliar technologies.
- Understanding of the impact of performance-based design, accessibility standards, and security compliance in development.
- Good knowledge of project tracking tools like JIRA and Azure DevOps, and project collaboration tools like Confluence.
- Excellent communication skills to convey ideas with clarity, depth, and detail.
- Understanding of Continuous Integration and Continuous Delivery best practices, along with experience in setting up CI/CD pipelines using Jenkins, GitHub, and plugins.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a solid understanding of object-oriented programming and software design patterns. Your responsibilities will include designing, developing, and maintaining web, service, and desktop applications using .NET, .NET Core, and React.js. You will also work on React.js/MVC front-end development; AWS services such as ASG, EC2, S3, Lambda, IAM, AMI, and CloudWatch; Jenkins; and RESTful API development and integration. Familiarity with database technologies such as SQL Server is important, as is ensuring the performance, quality, and responsiveness of applications.

Collaboration with cross-functional teams to define, design, and ship new features will be crucial. Experience with version control systems like Git/TFS is required, with a preference for Git. Excellent communication and teamwork skills are essential, along with familiarity with Agile/Scrum development methodologies.

This is a full-time position with benefits including cell phone reimbursement. The work location is in person.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an AWS DevOps & System Administrator, you will be responsible for managing cloud infrastructure, deployment pipelines, and system administration tasks. Your role will involve ensuring high availability, scalability, and security of systems hosted on AWS while streamlining DevOps practices across teams.

Key responsibilities:
- Design, implement, and manage AWS cloud infrastructure.
- Maintain and optimize CI/CD pipelines.
- Perform system administration tasks for Linux/Unix-based environments.
- Implement infrastructure as code using tools like Terraform or AWS CloudFormation.
- Automate provisioning, configuration, and monitoring.
- Monitor system performance and troubleshoot issues.
- Implement security best practices.
- Collaborate with development teams and create and maintain infrastructure documentation.

To be successful in this role, you should have at least 5 years of experience in DevOps/system administration roles, strong hands-on experience with core AWS services, proficiency in Linux system administration, experience with IaC tools like Terraform or CloudFormation, familiarity with CI/CD tools such as Jenkins or GitHub Actions, scripting experience with Bash or Python, an understanding of networking, knowledge of containerization technologies, experience with monitoring/logging tools, excellent problem-solving skills, and the ability to work independently.

Preferred qualifications include AWS Certified SysOps Administrator or DevOps Engineer Professional certification, experience with hybrid cloud/on-prem environments, and exposure to compliance frameworks.

In return, you will have the opportunity to work on cutting-edge technologies and impactful projects, opportunities for career growth and development, a collaborative and inclusive work environment, and a competitive salary and benefits package.

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Full Stack Developer at our company, you will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. You will stay ahead of the technology curve and receive continuous training to become a Polyglot developer. Your role will involve solving end-customer challenges through design and coding, making end-to-end contributions to technology-oriented development projects, providing solutions in Agile mode, and collaborating with Power Programmers and the open source community.

You will have the opportunity to work on custom development of new platforms and solutions, large-scale digital platforms, and marketplaces. Additionally, you will work on complex engineering projects using cloud-native architecture and collaborate with innovative Fortune 500 companies on cutting-edge technologies. Your responsibilities will include co-creating and developing new products and platforms for our clients, contributing to open source projects, and continuously upskilling in the latest technology areas.

To excel in this role, you should have a minimum of 8 years of overall experience and possess the following skills:
- Solid experience in developing serverless applications on the AWS platform.
- In-depth knowledge of AWS services such as Lambda, API Gateway, DynamoDB, S3, IAM, CloudFormation, and CloudWatch.
- Proficiency in React.js, Node.js, JavaScript, and/or TypeScript, along with experience with the AWS Serverless Application Model (SAM).
- Experience with serverless deployment strategies, monitoring, and troubleshooting, as well as a solid understanding of RESTful APIs and best practices for designing scalable APIs.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
- Strong problem-solving and analytical skills with a focus on delivering high-quality code and solutions.
- Excellent communication skills to collaborate effectively and convey technical concepts to both technical and non-technical stakeholders.
- Experience with Azure DevOps or similar CI/CD tools and an understanding of Agile/Scrum methodologies in an Agile development environment.

Posted 1 week ago

Apply