
414 Amazon CloudWatch Jobs

JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5.0 - 10.0 years

3 - 7 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Your Role

We are seeking a highly skilled Java AWS Developer with hands-on experience in AWS Lambda and other AWS services. The ideal candidate will be responsible for designing, developing, and deploying serverless applications and microservices using Java and AWS technologies. Design and develop scalable, event-driven applications using Java and AWS Lambda. Integrate AWS services such as API Gateway, S3, DynamoDB, SNS/SQS, and CloudWatch. Build and maintain RESTful APIs and backend services. Optimize application performance and ensure high availability. Collaborate with DevOps on CI/CD pipelines using tools like CodePipeline, CodeBuild, and CloudFormation.

Your Profile

5+ years of Java development experience, with 2+ years in AWS Lambda and serverless architecture. Proficient in AWS services including S3, DynamoDB, API Gateway, IAM, and CloudWatch. Experienced in microservices architecture and event-driven systems. Skilled in CI/CD pipelines, Git, Docker, and infrastructure as code (CloudFormation/Terraform). Strong problem-solving and communication skills, with knowledge of JSON and YAML.

What you'll love about working here

You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders, or create solutions to overcome societal and environmental challenges.

Equal Opportunities at frog

frog and Capgemini Invent are Equal Opportunity Employers encouraging diversity in the workplace.
All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law.
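The posting above asks for Java, but the Lambda handler contract is language-agnostic. As a minimal sketch (in Python for brevity), here is the shape of a serverless handler behind an API Gateway proxy integration; the request fields ("id") are invented for illustration, and a real service would call DynamoDB or SNS/SQS where noted:

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway (v1 proxy) event."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid JSON body"})}
    item_id = body.get("id")  # "id" is an illustrative field name
    if item_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing 'id'"})}
    # A real service would persist to DynamoDB or publish to SNS/SQS here;
    # this sketch simply echoes the item back.
    return {"statusCode": 200,
            "body": json.dumps({"id": item_id, "status": "accepted"})}
```

The same request/response contract carries over directly to a Java `RequestHandler` implementation.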

Posted 12 hours ago

Apply

1.0 - 2.0 years

6 - 10 Lacs

Mumbai, Hyderabad, Chennai

Work from Office

Your Role

You will work on Enterprise Data Management Consolidation (EDMCS), Enterprise Profitability & Cost Management Cloud Services (EPCM), and Oracle Integration Cloud (OIC), across full life-cycle Oracle EPM Cloud implementations. Responsibilities include creating forms, OIC integrations, and complex business rules; understanding dependencies and interrelationships between the various components of Oracle EPM Cloud; keeping abreast of the Oracle EPM roadmap and key functionality to identify opportunities to enhance the current process within the entire Financials ecosystem; collaborating with FP&A to facilitate the planning, forecasting, and reporting process for the organization; and creating and maintaining system documentation, both functional and technical.

Your Profile

Experience implementing EDMCS modules. Proven ability to collaborate with internal clients in an agile manner, leveraging design-thinking approaches. Experience with Python and AWS Cloud (Lambda, Step Functions, EventBridge, etc.) is preferred.

What you'll love about Capgemini

You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

Location: Hyderabad, Chennai, Mumbai, Pune, Bengaluru

Posted 12 hours ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Noida, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 10

S&P Global Mobility

The Role: Cloud DevOps Engineer

The Impact: This role is crucial to the business, directly contributing to the development and maintenance of cloud-based DevOps solutions on the AWS platform.

What's in it for you:

Drive Innovation: Join a dynamic and forward-thinking organization at the forefront of the automotive industry. Contribute to shaping our cloud infrastructure and drive innovation in cloud-based solutions on the AWS platform.

Technical Growth: Gain valuable experience and enhance your skills by working with a team of talented cloud engineers. Take on challenging projects and collaborate with cross-functional teams to define and implement cloud infrastructure strategies.

Impactful Solutions: Contribute to the development of solutions that directly impact the scalability, reliability, and security of our cloud infrastructure. Play a key role in delivering high-quality products and services to our clients.

We are seeking a highly skilled and driven Cloud DevOps Engineer to join our team. The candidate should have experience developing and deploying cloud-native solutions, and a passion for container-based technologies, immutable infrastructure, and continuous-delivery practices for deploying global software.
Responsibilities:

Deploy scalable, highly available, secure, and fault-tolerant systems on AWS for the development and test lifecycle of AWS cloud-native solutions. Configure and manage AWS environments for use with web applications. Engage with development teams to document and implement best-practice (low-maintenance) cloud-native solutions for new products. Focus on containerizing application components and integrating with AWS ECS. Contribute to application design and architecture, especially as it relates to AWS services. Manage AWS security groups. Collaborate closely with the Technical Architects by providing input into the overall solution architecture. Implement DevOps technologies and processes, i.e., containerization, CI/CD, infrastructure as code, metrics, monitoring, etc. Apply experience with networks, security, load balancers, DNS, and other infrastructure components to cloud (AWS) environments. Bring a passion for solving challenging issues, and promote cooperation and commitment within a team to achieve common goals.

What you will need:

Understanding of networking, infrastructure, and applications from a DevOps perspective. Infrastructure as code (IaC) using Terraform and CloudFormation. Deep knowledge of AWS, especially services like ECS/Fargate, ECR, S3/CloudFront, Load Balancing, Lambda, VPC, Route 53, RDS, CloudWatch, EC2, and AWS Security Center. Experience managing AWS security groups and building scalable infrastructure in AWS. Experience with one or more AWS SDKs and/or the CLI. Experience in automation, CI/CD pipelines, and DevOps principles. Experience with container platforms, with operational tools, and the ability to apply best practices for infrastructure and software deployment. Software design fundamentals in data structures, algorithm design, and performance analysis. Experience working in an Agile development environment. Strong written and verbal communication and presentation skills.

Education and Experience:

Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program. 2-3 years of experience managing AWS application environments and deployments. 5+ years of experience working in a development organization.

Statement: S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments, and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility uses invaluable insights captured from automotive data to help our clients understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility

At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.

Health & Wellness: Health care coverage designed for the mind and body.

Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.

Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.

Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.

Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

SWP Priority Ratings - (Strategic Workforce Planning)
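The "infrastructure as code (IaC) using Terraform and CloudFormation" requirement in this role can be made concrete with a small sketch: building a minimal CloudFormation template as a Python dict and emitting it as JSON. The resource names ("AppBucket", "WebSg") and the bucket/port values are invented for illustration; a real stack would be deployed via the CloudFormation CLI or Terraform rather than hand-rolled dicts.

```python
import json

def make_template(bucket_name: str, allowed_port: int) -> dict:
    """Build a minimal CloudFormation template: one S3 bucket, one
    security group with a single ingress rule (illustrative only)."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
            "WebSg": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Allow inbound app traffic",
                    "SecurityGroupIngress": [{
                        "IpProtocol": "tcp",
                        "FromPort": allowed_port,
                        "ToPort": allowed_port,
                        "CidrIp": "0.0.0.0/0",
                    }],
                },
            },
        },
    }

# Serialize to the JSON form CloudFormation accepts.
template_json = json.dumps(make_template("demo-app-bucket", 443), indent=2)
```

The point of IaC is exactly this: the environment becomes a reviewable, versionable artifact instead of console clicks.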

Posted 13 hours ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you: Be part of a global company and deliver solutions at enterprise scale. Collaborate with a hands-on, technically strong team (including leadership). Solve high-complexity, high-impact problems end-to-end. Build, test, deploy, and maintain production-ready pipelines from ideation through deployment.

Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production. Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring). Lead critical stages of the data engineering lifecycle, including: end-to-end delivery of complex extraction, transformation, and ML deployment projects; scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS); designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration; implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback); and writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage). Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts. Define and evolve platform standards and best practices for code, testing, and deployment. Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs. Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines.

Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs. Hands-on experience with task queues and orchestration: Celery, Redis, Airflow. Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch. Containerization and orchestration. Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints). Proficient in writing tests (unit, integration, load) and enforcing high coverage. Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines. Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB). Strong debugging, performance tuning, and automation skills. Openness to evaluate and adopt emerging tools and languages as needed.

Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field. 2-6 years of relevant experience in data engineering, automation, or ML deployment. Prior contributions on GitHub, technical blogs, or open-source projects. Basic familiarity with GenAI model integration (calling LLM or embedding APIs).

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
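The Celery/Redis task-queue responsibility above largely comes down to resilient, retryable task execution. Here is a hedged sketch of that retry-with-backoff semantics (roughly what Celery's `max_retries`/`countdown` give you) in plain Python, with no Celery dependency; the function names and the flaky extraction step are illustrative only.

```python
import time

def run_with_retries(fn, max_retries=3, base_delay=0.01):
    """Call fn(); on failure, retry up to max_retries times with
    exponential backoff (Celery-style semantics, sketched by hand)."""
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # exhausted retries: surface the error
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Illustrative "extraction" step that fails twice, then succeeds,
# simulating a transient upstream error.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return {"records": 10}
```

In a real pipeline this logic lives in the task decorator (and a dead-letter path handles permanent failures) rather than in application code.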

Posted 13 hours ago

Apply

6.0 - 9.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Job Title: AWS Admin/AWS Cloud Engineer
Experience: 6-9 Years
Location: Bangalore
Skills: AWS Admin, AWS Cloud Engineer, AWS services, CloudWatch/EC2/S3

Posted 14 hours ago

Apply

6.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: MS SQL Server & MongoDB Database Administrator (AWS Cloud)
Experience: Years
Location: Bangalore

Key Responsibilities:

Hands-on expertise in data migration between databases, on-prem to MongoDB Atlas. Experience in creating clusters and databases and creating users.

SQL Server Administration: Install, configure, upgrade, and manage SQL Server databases hosted on AWS EC2 and RDS.

AWS Cloud Integration: Design, deploy, and manage SQL Server instances using AWS services like RDS, EC2, S3, CloudFormation, and IAM.

Performance Tuning: Optimize database performance through query tuning, indexing strategies, and resource allocation within AWS environments.

High Availability and Disaster Recovery: Implement and manage HA/DR solutions such as Always On Availability Groups, Multi-AZ deployments, or read replicas on AWS.

Backup and Restore: Configure and automate backup strategies using AWS services like S3 and Lifecycle Policies while ensuring database integrity and recovery objectives.

Security and Compliance: Manage database security, encryption, and compliance standards (e.g., GDPR, HIPAA) using AWS services like KMS and GuardDuty.

Monitoring and Automation: Monitor database performance using AWS CloudWatch, SQL Profiler, and third-party tools. Automate routine tasks using PowerShell, AWS Lambda, or AWS Systems Manager.

Collaboration: Work closely with development, DevOps, and architecture teams to integrate SQL Server solutions into cloud-based applications.

Documentation: Maintain thorough documentation of database configurations, operational processes, and security procedures.

Required Skills and Experience:

6+ years of experience in SQL Server database administration and 3+ years of experience in MongoDB administration. Extensive hands-on experience with AWS cloud services (e.g., RDS, EC2, S3, VPC, IAM). Proficiency in T-SQL programming and query optimization. Strong understanding of SQL Server HA/DR configurations in AWS (Multi-AZ, read replicas). Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or third-party solutions. Knowledge of cloud cost management and database scaling strategies. Familiarity with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Strong scripting skills with PowerShell, Python, or similar languages.

Preferred Skills and Certifications:

Knowledge of database migration tools like AWS DMS or native backup/restore processes for cloud migrations. Understanding of AWS security best practices and tools such as KMS, GuardDuty, and AWS Config. Certifications such as AWS Certified Solutions Architect, AWS Certified Database Specialty, or Microsoft Certified: Azure Database Administrator Associate.

Educational Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field.
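The "Backup and Restore" duty above typically reduces to an S3 lifecycle policy on the backup prefix. A hedged sketch of building that configuration as a dict, in the shape boto3's `put_bucket_lifecycle_configuration` expects; the prefix and day counts are assumptions for illustration, not a recommended retention policy.

```python
def backup_lifecycle(prefix: str, glacier_after_days: int,
                     expire_after_days: int) -> dict:
    """Lifecycle config: transition backups under `prefix` to Glacier
    after N days, expire them after M days."""
    assert glacier_after_days < expire_after_days, "archive before expiry"
    return {
        "Rules": [{
            "ID": f"retain-{prefix}",          # rule name is illustrative
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": glacier_after_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_after_days},
        }]
    }
```

Applying it is one boto3 call against the backup bucket; keeping the rule in code makes the retention objective auditable alongside the RDS snapshot schedule.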

Posted 14 hours ago

Apply

7.0 - 12.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: Sr Software Engineer
Experience: 7-14 Years
Location: Bangalore

Proficient in software design and development, and familiar with technologies such as Java, Java/J2EE, Spring Boot, Ajax, REST APIs, and microservices. Working knowledge of a SQL database (MySQL, Oracle, etc.) and a NoSQL database (MongoDB or DynamoDB). Experience in unit testing using JUnit or Mockito. Experience in designing and architecting systems with high scalability and performance requirements. Ability to design infrastructure for performance evaluation and reporting of cloud-based services, namely AWS. In-depth knowledge of key AWS services like EC2, S3, Lambda, and CloudWatch; AWS architecture certification desirable. Excellent communication skills: ability to effectively articulate technical challenges and solutions, skilled in interfacing with internal and external technical resources, and good at debugging problems and mentoring teams on the technical front.

Roles and Responsibilities: Participate and contribute in platform requirements/story development. Contribute to design and coding for the requirements/stories, and participate in code and design reviews. Develop use cases, write unit test cases, and execute them as part of the continuous integration pipeline.

Process Skills: Agile Scrum and Test-Driven Development
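The "performance evaluation and reporting" skill above usually means publishing custom metrics to CloudWatch. A sketch of shaping that payload (the dict matches what boto3's `put_metric_data` accepts); the namespace and metric/dimension names are hypothetical, and actually publishing would require an AWS client and credentials.

```python
from datetime import datetime, timezone

def latency_metric(service: str, latency_ms: float) -> dict:
    """Shape one custom latency datapoint for CloudWatch PutMetricData."""
    return {
        "Namespace": "Demo/Performance",  # hypothetical namespace
        "MetricData": [{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    }
```

With such a metric flowing, a CloudWatch alarm on `RequestLatency` gives the reporting and alerting the posting asks for.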

Posted 14 hours ago

Apply

8.0 - 12.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Job Title: Oracle & MS-SQL and PostgreSQL DBA
Experience: Years
Location: Bangalore

Technical Skills:

Hands-on expertise in data migration between databases, on-prem to AWS cloud RDS. Experience using AWS Database Migration Service (DMS). Experience in export/import of very large database schemas, full load plus CDC. Knowledge and experience of Unix commands and writing task-automation shell scripts. Knowledge and experience of different backup/restore methods and backup tools. Hands-on experience in core DBA skills: database monitoring, performance tuning, and DB refresh. Hands-on experience in database support. Hands-on expertise in data migration between databases, on-prem to MongoDB Atlas. Experience in creating clusters and databases and creating users. Hands-on expertise in data migration between databases, on-prem to AWS cloud RDS (PostgreSQL).

SQL Server Administration: Install, configure, upgrade, and manage SQL Server databases hosted on AWS EC2 and RDS.

AWS Cloud Integration: Design, deploy, and manage SQL Server instances using AWS services like RDS, EC2, S3, CloudFormation, and IAM.

Performance Tuning: Optimize database performance through query tuning, indexing strategies, and resource allocation within AWS environments.

High Availability and Disaster Recovery: Implement and manage HA/DR solutions such as Always On Availability Groups, Multi-AZ deployments, or read replicas on AWS.

Backup and Restore: Configure and automate backup strategies using AWS services like S3 and Lifecycle Policies while ensuring database integrity and recovery objectives.

Security and Compliance: Manage database security, encryption, and compliance standards (e.g., GDPR, HIPAA) using AWS services like KMS and GuardDuty.

Monitoring and Automation: Monitor database performance using AWS CloudWatch, SQL Profiler, and third-party tools. Automate routine tasks using PowerShell, AWS Lambda, or AWS Systems Manager.

Collaboration: Work closely with development, DevOps, and architecture teams to integrate SQL Server solutions into cloud-based applications.

Documentation: Maintain thorough documentation of database configurations, operational processes, and security procedures.

Non-Technical Skills:

A good team player with ownership skills: an individual performer able to take on deliverables and handle fresh challenges. Service/customer orientation and strong oral and written communication skills are mandatory. Confident and capable of speaking with clients and onsite teams. Effective interpersonal, team-building, and communication skills. Ability to collaborate: communicate clearly and concisely both to laypeople and peers, follow instructions, and make a team stronger for your presence, not weaker. Ready to work in rotating shifts (morning, general, and afternoon). Ability to see the bigger picture and differing perspectives; to compromise, balance competing priorities, and prioritize the user. Desire for continuous improvement of the worthy sort: always learning and seeking improvement, avoiding change aversion and excessive conservatism, while equally avoiding harmful perfectionism, "not-invented-here" syndrome, and damaging pursuit of the bleeding edge for its own sake. Learns things quickly, even when working outside the area of expertise. Able to analyze a problem and recognize exactly what will be affected by even the smallest change made in the database. Ability to communicate complex technology to a non-technical audience in a simple and precise manner.

Posted 14 hours ago

Apply

8.0 - 13.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Job Title: Oracle & MS-SQL and PostgreSQL DBA
Experience: 8-16 Years
Location: Bangalore

Must have a 4-year degree (Computer Science, Information Systems, or equivalent), 8+ years overall IT experience (5+ years as a DBA).

Technical Skills:

Hands-on expertise in data migration between databases, on-prem to AWS cloud RDS. Experience using AWS Database Migration Service (DMS). Experience in export/import of very large database schemas, full load plus CDC. Knowledge and experience of Unix commands and writing task-automation shell scripts. Knowledge and experience of different backup/restore methods and backup tools. Hands-on experience in core DBA skills: database monitoring, performance tuning, and DB refresh. Hands-on experience in database support. Hands-on expertise in data migration between databases, on-prem to MongoDB Atlas. Experience in creating clusters and databases and creating users. Hands-on expertise in data migration between databases, on-prem to AWS cloud RDS (PostgreSQL).

SQL Server Administration: Install, configure, upgrade, and manage SQL Server databases hosted on AWS EC2 and RDS.

AWS Cloud Integration: Design, deploy, and manage SQL Server instances using AWS services like RDS, EC2, S3, CloudFormation, and IAM.

Performance Tuning: Optimize database performance through query tuning, indexing strategies, and resource allocation within AWS environments.

High Availability and Disaster Recovery: Implement and manage HA/DR solutions such as Always On Availability Groups, Multi-AZ deployments, or read replicas on AWS.

Backup and Restore: Configure and automate backup strategies using AWS services like S3 and Lifecycle Policies while ensuring database integrity and recovery objectives.

Security and Compliance: Manage database security, encryption, and compliance standards (e.g., GDPR, HIPAA) using AWS services like KMS and GuardDuty.

Monitoring and Automation: Monitor database performance using AWS CloudWatch, SQL Profiler, and third-party tools. Automate routine tasks using PowerShell, AWS Lambda, or AWS Systems Manager.

Collaboration: Work closely with development, DevOps, and architecture teams to integrate SQL Server solutions into cloud-based applications.

Documentation: Maintain thorough documentation of database configurations, operational processes, and security procedures.

Non-Technical Skills:

A good team player with ownership skills: an individual performer able to take on deliverables and handle fresh challenges. Service/customer orientation and strong oral and written communication skills are mandatory. Confident and capable of speaking with clients and onsite teams. Effective interpersonal, team-building, and communication skills. Ability to collaborate: communicate clearly and concisely both to laypeople and peers, follow instructions, and make a team stronger for your presence, not weaker. Ready to work in rotating shifts (morning, general, and afternoon). Ability to see the bigger picture and differing perspectives; to compromise, balance competing priorities, and prioritize the user. Desire for continuous improvement of the worthy sort: always learning and seeking improvement, avoiding change aversion and excessive conservatism, while equally avoiding harmful perfectionism, "not-invented-here" syndrome, and damaging pursuit of the bleeding edge for its own sake. Learns things quickly, even when working outside the area of expertise. Able to analyze a problem and recognize exactly what will be affected by even the smallest change made in the database. Ability to communicate complex technology to a non-technical audience in a simple and precise manner.

Skills:
Primary competency: Data Engineering - Oracle Apps DBA (51%)
Secondary competency: Big Data Technologies - PostgreSQL (39%)
Tertiary competency: Data Engineering - Microsoft SQL Server Apps DBA (10%)
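The "full load plus CDC" pattern named above (as practiced with AWS DMS) can be illustrated in miniature: start from a full-load snapshot keyed by primary key, then replay ordered change records. The `(op, key, row)` record format below is a simplified assumption for illustration, not DMS's actual wire format.

```python
def apply_cdc(snapshot: dict, changes: list) -> dict:
    """Full load: copy the snapshot. CDC: replay ordered change records
    (insert/update/delete) on top of it, returning the final table state."""
    table = dict(snapshot)  # copy, so the original snapshot is preserved
    for op, key, row in changes:
        if op in ("insert", "update"):
            table[key] = row          # upsert the new row image
        elif op == "delete":
            table.pop(key, None)      # tolerate deletes of missing keys
        else:
            raise ValueError(f"unknown CDC operation: {op}")
    return table
```

The ordering guarantee is the crucial part: replaying the same changes in a different order can yield a different final state, which is why CDC streams are consumed sequentially per key.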

Posted 14 hours ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: DevOps – AWS Glue, KMS, ALB, ECS and Terraform/Terragrunt | Experience: 5-10 Years | Location: Bangalore
Skills: DevOps, AWS, Glue, KMS, ALB, ECS, Terraform, Terragrunt

Posted 14 hours ago

Apply

6.0 - 11.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: Java AWS Developer | Experience: 6-12 Years | Location: Bangalore
Experience in Java, J2EE, and Spring Boot. Experience in design, Kubernetes, and AWS (EKS, EC2) is needed. Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle. Experience with SOA web services (SOAP as well as RESTful with JSON formats) and messaging (Kafka). Hands-on with development and test-automation tools/frameworks (e.g. BDD and Cucumber).

Posted 14 hours ago

Apply

7.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Experience configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project.
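One recurring task in roles like this is scheduled start/stop automation for non-production EC2 instances. A minimal sketch of the selection step such a Lambda might perform is below; the data shape mirrors boto3's `describe_instances` output, while the tag names (`Schedule` / `office-hours`) are purely illustrative assumptions.

```python
def instances_to_stop(reservations, tag_key="Schedule", tag_value="office-hours"):
    """Pick running instance IDs that carry the scheduling tag.

    `reservations` mirrors the shape of boto3 EC2 describe_instances()
    output (a list of reservations, each holding "Instances"). The tag
    key/value here are hypothetical, chosen for illustration.
    """
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            # Only running instances are candidates for a scheduled stop.
            if inst.get("State", {}).get("Name") != "running":
                continue
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(tag_key) == tag_value:
                ids.append(inst["InstanceId"])
    return ids
```

A scheduler (e.g. an EventBridge rule) would invoke the Lambda outside office hours and pass the returned IDs to a stop call; separating "which instances" from "stop them" keeps the logic unit-testable.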

Posted 16 hours ago

Apply

4.0 - 7.0 years

5 - 10 Lacs

Bengaluru

Work from Office

1. Have a good understanding of AWS services, specifically RDS, S3, EC2, VPC, KMS, ECS, Lambda, AWS Organizations and IAM policy setup, with Python as a main skill.
2. Architect, design and code database infrastructure deployment using Terraform; should be able to write Terraform modules that deploy database services in AWS.
3. Provide automation solutions using Python Lambdas for repetitive tasks such as running quarterly audits and daily health checks on RDS across multiple accounts.
4. Have a fair understanding of Ansible to automate Postgres infrastructure deployment and to automate repetitive tasks for on-prem servers.
5. Knowledge of Postgres and PL/pgSQL functions.
6. Hands-on experience with Ansible and Terraform and the ability to contribute to ongoing projects with minimal coaching.
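The "daily health checks in RDS" automation mentioned above usually boils down to collecting instance state and flagging anything unhealthy. A hedged sketch of that reporting step follows; the simplified input dicts (an `id`, a `status`, and a `free_storage_pct` an audit Lambda could assemble from `describe_db_instances` plus CloudWatch `FreeStorageSpace`) and the 20% threshold are assumptions for illustration.

```python
def rds_health_report(instances, min_free_pct=20):
    """Flag RDS instances that are not 'available' or low on storage.

    Each entry is a simplified summary dict; the field names and the
    free-storage threshold are illustrative, not a boto3 response shape.
    Returns a list of (instance_id, reason) pairs.
    """
    issues = []
    for db in instances:
        if db["status"] != "available":
            issues.append((db["id"], f"status={db['status']}"))
        elif db["free_storage_pct"] < min_free_pct:
            issues.append((db["id"], f"low storage: {db['free_storage_pct']}% free"))
    return issues
```

Run across accounts (e.g., via assumed roles), the resulting pairs can be posted to SNS or a dashboard; an empty list means the daily check passed.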

Posted 17 hours ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

The application engineer is a member of the CRM organization. This person contributes to bookings growth and customer success through participation in CRM business teams as an application developer, building business processes, framework features and supporting functions based upon identified business requirements and use cases, and working in Scrum teams with other professionals focused on building, maintaining, and supporting solution frameworks for the CRM industry. Pega is changing the way the world builds software, and our goal is to be the no. 1 CRM SaaS company in the world. In this role, you'll help us design, develop, and implement new enhancements for the applications. What You'll Do At Pega: Develop the world's best CRM applications. Adhere to Pega development best practices. Work as part of a collaborative Agile team in a Scrum model, surrounded by fun-loving, talented engineers. Technologies you will work on: AWS, React, Node.js, REST services, DynamoDB, S3, CloudWatch. Take ownership of components/tasks and make sure they are delivered with great quality. Exhibit thought leadership and be ready to suggest product and process improvements. Resolve customer issues either by providing technical guidance or by issuing formal fixes. Who You Are: You are an experienced professional with a strong commitment to customer success without compromising integrity. You are a problem-solver who thrives in a collaborative team environment and wants to focus on building next-generation solutions. You are skilled in both frontend technologies and AWS cloud services. What You've Accomplished: 4+ years of software development and design experience in AWS and UI technologies. Prominent development experience in JavaScript/TypeScript, Node.js, React, and REST APIs; GraphQL experience is optional but highly desired.
Deep understanding of and hands-on experience with AWS: Amplify, API Gateway, DynamoDB, S3, CloudWatch, Lambda, CodePipeline, SQS, and the AWS IaC framework CDK. Experience with CI/CD, Git, and debugging. Bachelor's degree in engineering or a similar field. Very good presentation and communication skills. Excellent problem-solving skills. Passionate about learning new technologies, with a constant desire for innovation. Able to take ownership of assigned deliverables and deliver with no or minimal guidance. Partner with internal clients, like Product Managers, to deliver world-class software.

Posted 19 hours ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Jalandhar

Work from Office

o Correspond with multiple sources to negotiate payment schedules that suit the customer's current financial situation while still satisfying the debt.
o Keep track of the portfolio for specific buckets for the assigned area and control the delinquency of the area, bucket-wise and DPD-wise, with a focus on non-starters.
o Provide efficient customer service regarding collection issues: process customer refunds, process and review account adjustments, and resolve client discrepancies and short payments.
o Monitor and maintain customer account details for non-payments, delayed payments and other irregularities, making customer calls, account adjustments, small-balance write-offs, and customer reconciliations, and processing credit memos where necessary.
o Ensure customer files are updated, recording the times and dates contact has been made and noting information customers have received about their debt.
o Trace defaulters and assets in coordination with the agency's tracing team and suggest a remedial course of action.
o Identify defaulting accounts and investigate reasons for default while continuing to make efforts to maintain a healthy relationship with the customer.
o Enlist the efforts of sales and senior management when necessary to accelerate the collection process, including supporting the collection manager (court receiver) in repossessing assets and seeking legal and police support where required.
o Ensure compliance with all audit/regulatory bodies as well as the policies and procedures of the company.
Qualification: Graduate

Posted 20 hours ago

Apply

10.0 - 15.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Description: Boomi India Lab (11013020)
Requirements: AWS (VPC/ECS/EC2/CloudFormation/RDS), Artifactory. Some knowledge of CircleCI/SaltStack is preferred but not required.
Responsibilities: Manage containerized applications using Kubernetes, Docker, etc. Automate builds/deployments (CI/CD) and other repetitive tasks using shell/Python scripts or tools like Ansible, Jenkins, etc. Coordinate with development teams to fix issues and release new code. Set up configuration management using tools like Ansible. Implement highly available, auto-scaling, fault-tolerant, secure setups. Implement automated jobs/tasks like backups, cleanup, start-stop, and reports. Configure monitoring alerts/alarms and act on any outages/incidents. Ensure that the infrastructure is secured and can be accessed only from limited IPs and ports. Understand client requirements, propose solutions, and ensure delivery. Innovate and actively look for improvements in the overall infrastructure.
Must Have: Bachelor's degree, with at least 7+ years of experience in DevOps. Should have worked on various DevOps tools like GitLab, Jenkins, SonarQube, Nexus, Ansible, etc. Should have worked on various AWS services: EC2, S3, RDS, CloudFront, CloudWatch, CloudTrail, Route 53, ECS, ASG, etc. Well-versed in shell/Python scripting and Linux. Well-versed in web servers (Apache, Tomcat, etc.). Well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes). Have worked on configuration-management tools like Puppet, Ansible, etc. Have experience in CI/CD implementation (Jenkins, Bamboo, etc.). Self-starter with the ability to deliver under tight timelines.
Good to have: Exposure to tools like New Relic, ELK, Jira, Confluence, etc. Prior experience managing infrastructure for public-facing web applications. Prior experience handling client communications. Basic networking knowledge – VLAN, Subnet, VPC, etc. Knowledge of databases (PostgreSQL).
Key Skills (must have): Jenkins, Docker, Python, Groovy, shell scripting, Artifactory, GitLab, Terraform, VMware, PostgreSQL, AWS, Kafka.
What We Offer: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laidback environment — or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication-skills training (GL Vantage, Toastmasters), a stress-management program, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health-awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks and a GL Club where you can drink coffee or tea with your colleagues over a game, and we offer discounts at popular stores and restaurants!

Posted 1 day ago

Apply

1.0 - 5.0 years

8 - 12 Lacs

Ahmedabad

Work from Office

About the Company e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys and Naturium, high-performance, biocompatible, clinically-effective and accessible skincare. In our Fiscal year 24, we had net sales of $1 Billion and our business performance has been nothing short of extraordinary with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and are the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, and a hybrid 3 day in office, 2 day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us Job Summary: We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and deployment of advanced AI solutions across our enterprise. The ideal candidate will have a deep understanding of AI/ML algorithms, scalable systems, and data engineering best practices. Responsibilities Design and develop production-grade AI and machine learning models for real-world applications (e.g., recommendation engines, NLP, computer vision, forecasting). Lead model lifecycle management from experimentation and prototyping to deployment and monitoring. 
Collaborate with cross-functional teams (product, data engineering, MLOps, and business) to define AI-driven features and services. Perform feature engineering, data wrangling, and exploratory data analysis on large-scale structured and unstructured datasets. Build and maintain scalable AI infrastructure using cloud services (AWS, Azure, GCP) and MLOps best practices. Mentor junior AI engineers, guiding them in model development, evaluation, and deployment. Continuously improve model performance by leveraging new research, retraining on new data, and optimizing pipelines. Stay current with the latest developments in AI, machine learning, and deep learning through research, conferences, and publications. Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning or a related field. 14+ years of IT experience, with a minimum of 6+ years of AI engineering experience, particularly with building LLM-based applications and prompt-driven architectures. Solid understanding of Retrieval-Augmented Generation (RAG) patterns and vector databases (especially Qdrant). Hands-on experience in deploying and managing containerized services in AWS ECS and using CloudWatch for logs and diagnostics. Familiarity with AWS Bedrock and working with foundation models through its managed services. Experience working with AWS RDS (MySQL or MariaDB) for structured data storage and integration with AI workflows. Practical experience with LLM fine-tuning techniques, including full fine-tuning, instruction tuning, and parameter-efficient methods like LoRA or QLoRA. Strong understanding of recent AI advancements such as multi-agent systems, AI assistants, and orchestration frameworks. Proficiency in Python and experience working directly with LLM APIs (e.g., OpenAI, Anthropic, or similar). Comfortable working in a React frontend environment and integrating backend APIs. Experience with CI/CD pipelines and infrastructure as code (e.g., Terraform, AWS CDK).
Minimum Work Experience 6 Maximum Work Experience 15 This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered, as detailed description of all the work required inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors’ discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
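At the heart of the RAG patterns this role mentions is similarity search: rank stored document embeddings by closeness to a query embedding and retrieve the top matches. A vector database such as Qdrant does this at scale; the sketch below shows the underlying cosine-similarity ranking on plain Python lists, purely as an illustration of the concept (real systems use real embedding models and indexed search, not brute force).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """Rank (doc_id, vector) pairs by similarity to the query and
    return the k best ids -- the retrieval step of a RAG pipeline."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

In a full pipeline, the retrieved documents' text would be packed into the LLM prompt as grounding context before generation.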

Posted 1 day ago

Apply

3.0 - 8.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Responsible for IT infrastructure across cross-platform technology areas, demonstrating design and build expertise. Responsible for developing, architecting, and building AWS cloud services with best practices, blueprints, patterns, high availability and multi-region disaster recovery. Strong communication and collaboration skills. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5+ years of experience. Must have 3+ years of relevant experience in Python/Java, AWS, and Terraform/IaC. Experience in Kubernetes, Docker, and shell scripting. Experienced in scripting with Python (beyond small scripts). Preferred technical and professional experience: Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube. Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic. Experience using monitoring solutions like CloudWatch, ELK Stack, and Prometheus.

Posted 1 day ago

Apply

3.0 - 6.0 years

8 - 12 Lacs

Guwahati

Work from Office

Major Deliverables:
1. Handle collections for the assigned area and achieve collection targets on various parameters like resolution, flows, credit cost and roll rates (depending on the bucket).
2. Ensure that NPAs are kept within the assigned budget and active efforts are made to minimize them.
3. Increase the fee income/revenue and develop initiatives to control and reduce the amount of vendor payouts.
4. Conduct asset verifications and possession as per the SARFAESI / Section 9 process through court receivers.
5. Track and control the delinquency of the area (bucket- and DPD-wise) and focus on non-starters.
6. Ensure customer satisfaction by ensuring quick resolution of customer issues within the specified TAT.
7. Build relationships with key clients to ensure timely collections are made, and monitor defaulting customers through regular follow-up with critical/complex customers to identify reasons for defaulting.
8. Represent the organization before legal/statutory bodies as required by the legal team and ensure that the collection team adheres to the legal guidelines provided by the law in force.
9. Allocate work to the field executives and ensure that all the agencies in the location perform as per the defined SLA, ensuring payments and audit receipts get deposited within the defined SLA.
10. Ensure that there is adequate feet-on-street availability area-wise/bucket-wise/segment-wise, obtain daily updates from all collection executives on the delinquent portfolio, and initiate detailed account-level reviews of high-ticket accounts.
11. Ensure compliance with all audit/regulatory bodies as well as the policies and procedures of the company.
Educational Qualification: Post Graduate/Graduate in any discipline

Posted 1 day ago

Apply

2.0 - 6.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About The Role
Job Title: Area Resolutions Manager | Level: M4/5 | Division: RARD | Function: Recovery | Reporting Relationship: NRM | Average No. of Direct Reportees: 5 to 7
Job Role / KRAs: The job requires managing central agencies and adding new agencies; managing an independent team across all states; liaising with central and state teams; meeting monthly collection targets; and deep-bucket collections.
Job Requirements, Skills, Knowledge Prerequisites: Know-how of managing collection call centers, digital approaches, negotiation, skip tracing and managing a state-level team. Basic legal knowledge. Deep-bucket collections.
Educational Qualifications: Graduate and above
Experience Profile: 4-5 years of experience in managing a large collection call center.
Benchmark Companies: Bajaj, L&T, HDFC, Magma, ICICI and NBFCs

Posted 1 day ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Hyderabad

Work from Office

DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee). The impact you will have in this role : Being a member of IT Architecture and Enterprise services team, you will be responsible for ensuring that all applications and systems meet defined quality standards. You will develop, conduct, and evaluate testing processes, working closely with developers to remediate identified system defects. You will have in-depth knowledge of automated testing tools, and quality control and assurance approaches including the creation of reusable foundational test automation framework for the entire organization. You are also responsible for the validation of non-functional requirements for performance testing. Your Primary Responsibilities: Develop proven grasp of the DTCC Software Delivery process, Performance Test Engineering framework, CoE (Center of Excellence) Engagement process as well as understanding of tech-stack (tools and technologies) to perform day-to-day job Prepare, maintain, and implement performance test scenarios based on non-functional requirements and follow standard DTCC testing guidelines Automate performance test scenarios by using current automated functional and performance test scripts in compliance with the non-functional framework Maintain traceability across Non-Functional Requirements, Performance Test Scenarios and Defects Review of performance test scenarios from both a technical and business perspective with collaborators, such as development teams and business Track defects to closure, report test results, continuously supervise execution achievements and call out as required Contribute to the technical aspects of Delivery Pipeline adoption for performance testing and improve adoption Identify environmental and data requirements; collaborate with Development and Testing teams to manage and maintain environment and data Provide mentorship to team members 
related to performance test coverage, performance test scenarios, and non-functional requirements. Develop a basic understanding of the product being delivered, including architecture considerations and the technical design of the supported applications in relation to performance and scalability. Work with Development and Architecture teams to find opportunities to improve test coverage, and share suggestions for performance improvements. Align risk and control processes into day-to-day responsibilities to supervise and mitigate risk; escalate appropriately. Note: Responsibilities of this role are not limited to the details above. Qualifications: Minimum of 6 years of related experience. Bachelor's degree and/or equivalent experience. Talents Needed for Success: Expertise in Linux/Unix and shell scripting. Experience using JMS/IBM MQ messaging systems and administering MQ systems. Experience with multi-technology end-to-end testing (distributed and mainframe systems). Experience in performance engineering (analysis, testing, and tuning). Experience developing n-tier J2EE software applications. Experience working in Unix or Linux environments. Expertise in performance analysis of distributed platforms including Linux, Windows, AWS, containers and VMware (tools: Dynatrace, AppDynamics, Splunk, CloudWatch, TeamQuest). Extensive knowledge of the functionality and performance aspects of the above computing platforms. Experience in sophisticated statistical and analytical modeling. Excellent analytical skills, including data exploration, analysis and presentation applying descriptive statistics and graphical techniques, and modeling relationships between key performance and volume metrics. Understanding of queuing-network modeling and simulation-modeling concepts, and experience with one of the industry-standard analytic modeling tools: TeamQuest, Metron-Athene, HyPerformix, or BMC. Understanding of RESTful web services, JSON, and XML. Experience in relational databases, preferably
Oracle. Experience with AWS services (Kinesis, Elastic Beanstalk, CloudWatch, Lambda, etc.). Experience with CI/CD pipeline implementations, including testing, using Jenkins or a similar tool. Expert MS Office skills, including effective use of Excel statistical functions and sophisticated PowerPoint presentation skills. Experience working with Agile teams (preferably Scrum). Experience in the financial services industry is good to have.
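The queuing-network modeling mentioned above often starts from the simplest analytic model, the M/M/1 queue, which relates arrival rate and service rate to utilization, mean queue length, and mean response time via Little's law (L = λW). A small worked sketch, with illustrative rates:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Basic M/M/1 queue metrics used in analytic capacity models.

    Returns (utilization rho, mean number in system L, mean response
    time W). Standard results: rho = lambda/mu, L = rho/(1 - rho),
    W = 1/(mu - lambda); Little's law gives L = lambda * W.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return rho, L, W
```

For example, a server handling 100 requests/s that receives 50 requests/s runs at 50% utilization with a 20 ms mean response time; pushing arrivals toward the service rate makes W grow without bound, which is why capacity models flag high utilization long before saturation.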

Posted 1 day ago

Apply

6.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Experience configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Role Purpose: The purpose of this role is to perform the development of VLSI systems by defining the various functionalities, architecture, layout and implementation for a client.
Do:
1. Conduct verification of the module/IP functionality and provide customer support:
a. Understand the architecture of the module or IP and create the verification environment and development plan as per the Universal Verification Methodology
b. Create the test bench and code test cases for one or multiple modules
c. Write or review code as required
d. Execute the test cases and debug them if required
e. Conduct functional coverage analysis and document the test cases, including failures and debugging procedures, on SharePoint/JIRA or any other platform as directed
f. Test the entire IP functionality under regression testing and complete the documentation to publish to the client
g. Troubleshoot, debug and upgrade existing systems on time, with minimum latency and maximum efficiency
h. Write scripts for the IP
i. Comply with project plans and industry standards
2. Ensure reporting and documentation for the client:
a. Ensure weekly and monthly status reports for the clients as per requirements
b. Maintain documents and create a repository of all design changes, recommendations, etc.
c. Maintain time-sheets for the clients
d. Provide written knowledge transfer/history of the project
Mandatory Skills: VLSI Physical Place and Route. Experience: 3-5 Years

Posted 1 day ago

Apply

8.0 - 10.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Job Overview We are looking for a visionary Lead DevOps Engineer with a strong background in architecting scalable and secure cloud-native solutions on AWS. This leadership role will drive DevOps strategy, design cloud architectures, and mentor a team of engineers while ensuring operational excellence and reliability across infrastructure and deployments. The ideal candidate will: Architect and implement scalable, highly available, and secure infrastructure on AWS. Define and enforce DevOps best practices across CI/CD, IaC, observability, and container orchestration. Lead the adoption and optimization of Kubernetes for scalable microservices infrastructure. Develop standardized Infrastructure as Code (IaC) frameworks using Terraform or CloudFormation. Champion automation at every layer of infrastructure and application delivery pipelines. Collaborate with cross-functional teams (Engineering, SRE, Security) to drive cloud-native transformation. Provide technical mentorship to junior DevOps engineers, influencing design and implementation decisions. Primary Skills Bachelor's degree in Computer Science, Information Technology, or a related field. 7+ years of DevOps or Cloud Engineering experience with strong expertise in AWS. Proven experience designing and implementing production-grade cloud architectures. Hands-on experience with containerization and orchestration (Docker, Kubernetes). Proficient in building CI/CD workflows using Jenkins and/or GitHub Actions. Deep understanding of Infrastructure as Code using Terraform (preferred) or CloudFormation. Strong scripting/automation expertise in Python or Go. Familiarity with service mesh, secrets management, and policy as code (e.g., Istio, Vault, OPA). Strong problem-solving and architectural thinking skills. Excellent verbal and written communication skills with a track record of technical leadership. AWS Certified Solutions Architect (Professional/Associate), CKA/CKAD, or Terraform Associate is a plus. 
Good to Have Skills

  • Exposure to AI & ML.
  • Exposure to cloud cost optimization and FinOps practices.

Roles and Responsibilities

  • Lead the architecture and implementation of scalable, secure, and cost-efficient infrastructure solutions on AWS.
  • Define Kubernetes cluster architecture, implement GitOps/ArgoCD-based deployment models, and manage multi-tenant environments.
  • Establish and maintain standardized CI/CD pipelines with embedded security and quality gates.
  • Build and maintain reusable Terraform modules to enable infrastructure provisioning at scale across multiple teams.
  • Drive observability strategy across all services, including metric collection, alerting, and logging with tools like Prometheus, Datadog, CloudWatch, and ELK.
  • Automate complex operational workflows and disaster recovery processes using Python/Go scripts and native AWS services.
  • Review and approve high-level design documents and support platform roadmap planning.
  • Mentor junior team members and foster a culture of innovation, ownership, and continuous improvement.
  • Stay abreast of emerging DevOps and AWS trends, and drive adoption of relevant tools and practices.

Posted 1 day ago

Apply

5.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:

  • Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
  • Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
  • Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
  • Automate build, test, and deployment processes for Java applications.
  • Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
  • Containerize Java apps using Docker.
  • Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
  • Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
  • Manage access with IAM roles/policies.
  • Use AWS Secrets Manager / Parameter Store for managing credentials.
  • Enforce security best practices, encryption, and audits.
  • Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
  • Implement Disaster Recovery (DR) strategies.
  • Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
  • Document pipelines, architecture, and troubleshooting runbooks.
  • Monitor and optimize AWS resource usage.
  • Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:

  • Experience working on Linux-based infrastructure.
  • Excellent understanding of Ruby, Python, Perl, and Java.
  • Configuring and managing databases such as MySQL and MongoDB.
  • Excellent troubleshooting skills.
  • Selecting and deploying appropriate CI/CD tools.
  • Working knowledge of various tools, open-source technologies, and cloud services.
  • Awareness of critical concepts in DevOps and Agile principles.
  • Managing stakeholders and external interfaces.
  • Setting up tools and required infrastructure.
  • Defining and setting development, testing, release, update, and support processes for DevOps operation.
  • The technical skills to review, verify, and validate the software code developed in the project.
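As a concrete illustration of the backup-retention responsibility above, an S3 lifecycle rule can transition old backups to cheaper storage and expire them automatically. A minimal sketch that only builds the configuration document; the bucket layout and retention periods are hypothetical, and in practice you would pass the result to S3 via boto3's put_bucket_lifecycle_configuration:

```python
def backup_lifecycle_rule(prefix, glacier_after_days, expire_after_days):
    """Build one S3 lifecycle rule: move backups under `prefix` to
    Glacier after N days, then expire them after M days."""
    return {
        "ID": f"backup-retention-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }

# Hypothetical layout: nightly RDS snapshot exports land under backups/rds/
config = {"Rules": [backup_lifecycle_rule("backups/rds/", 30, 365)]}
print(config["Rules"][0]["ID"])  # → backup-retention-backups/rds
```

Keeping the rule as plain data like this makes it easy to version alongside Terraform or CloudFormation templates.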

Posted 3 days ago

Apply

Exploring Amazon CloudWatch Jobs in India

Amazon CloudWatch is a monitoring and observability service offered by Amazon Web Services (AWS) that provides real-time insights into the performance and health of applications and infrastructure. In India, the demand for professionals with expertise in Amazon CloudWatch is on the rise as more companies adopt cloud technologies for their operations.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Amazon CloudWatch professionals in India varies based on experience levels. Entry-level positions may start at around ₹6-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of ₹15-20 lakhs per annum.

Career Path

In the field of Amazon CloudWatch, a typical career path may include roles such as CloudWatch Analyst, CloudWatch Engineer, CloudWatch Architect, and CloudWatch Manager, with progression running from Junior CloudWatch Analyst to Senior CloudWatch Engineer and on to CloudWatch Architect or Tech Lead.

Related Skills

In addition to expertise in Amazon CloudWatch, professionals in this field are often expected to have knowledge and experience in related areas such as AWS services, cloud computing, monitoring tools, scripting languages (e.g., Python), and networking concepts.
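Since Python scripting is listed as a companion skill, it helps to understand the statistics CloudWatch computes over each period's metric datapoints (SampleCount, Sum, Minimum, Maximum, Average, and percentiles). A dependency-free sketch of those aggregations; the sample values are invented, and the percentile uses a simple nearest-rank method as an approximation of what CloudWatch computes server-side:

```python
import math

def aggregate(datapoints):
    """Compute CloudWatch-style statistics over one period's datapoints."""
    ordered = sorted(datapoints)

    def pct(p):
        # Nearest-rank percentile: the smallest value with at least
        # p percent of samples at or below it.
        rank = max(math.ceil(p / 100 * len(ordered)), 1)
        return ordered[rank - 1]

    return {
        "SampleCount": len(ordered),
        "Sum": sum(ordered),
        "Minimum": ordered[0],
        "Maximum": ordered[-1],
        "Average": sum(ordered) / len(ordered),
        "p95": pct(95),
    }

# Hypothetical CPUUtilization samples from one five-minute period
stats = aggregate([12.0, 18.5, 22.1, 75.3, 19.8])
print(stats["Average"], stats["Maximum"])
```

In real deployments the same numbers come back from the GetMetricStatistics or GetMetricData APIs rather than being computed client-side.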

Interview Questions

  • What is Amazon CloudWatch and how does it work? (basic)
  • How do you set up alarms in Amazon CloudWatch? (medium)
  • Explain the difference between Amazon CloudWatch Logs and Amazon CloudWatch Metrics. (medium)
  • How can you integrate Amazon CloudWatch with other AWS services? (medium)
  • What is the importance of monitoring and observability in cloud environments? (basic)
  • Describe a scenario where you had to troubleshoot performance issues using Amazon CloudWatch. (advanced)
  • How do you use Amazon CloudWatch to monitor EC2 instances? (medium)
  • What is the difference between Amazon CloudWatch Events and Amazon CloudWatch Alarms? (medium)
  • How do you create custom metrics in Amazon CloudWatch? (medium)
  • Explain the concept of CloudWatch Logs Insights. (advanced)
  • How can you automate responses to CloudWatch alarms? (advanced)
  • What is the significance of CloudWatch dashboards in monitoring AWS resources? (basic)
  • Describe a situation where you had to scale resources based on CloudWatch metrics. (advanced)
  • How would you optimize costs using Amazon CloudWatch? (advanced)
  • What are the different types of monitoring provided by Amazon CloudWatch? (basic)
  • How do you troubleshoot CloudWatch agent connectivity issues? (advanced)
  • Explain the concept of CloudWatch API and CLI. (medium)
  • How do you monitor AWS Lambda functions using Amazon CloudWatch? (medium)
  • What are the benefits of using CloudWatch Logs for log monitoring? (basic)
  • How do you set up notifications for CloudWatch alarms? (basic)
  • Describe a scenario where you had to analyze historical data in CloudWatch. (advanced)
  • How do you integrate CloudWatch with AWS CloudTrail for auditing purposes? (medium)
  • What are the limitations of Amazon CloudWatch? (medium)
  • How do you secure CloudWatch data and access controls? (medium)

Closing Remark

As you explore opportunities in Amazon CloudWatch jobs in India, remember to continuously upskill and stay updated with the latest trends and technologies in cloud monitoring. Prepare thoroughly for interviews by practicing common questions and showcasing your expertise confidently. Good luck on your job search journey!
