
7 AWS EC2 Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

20 - 30 Lacs

Pune, Chennai, Bengaluru

Work from Office

Source: Naukri

Mandatory key skills: Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront.

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
- Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
- Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
- Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
- Build data lakes and data warehouses using S3, Aurora, and Athena.
- Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
- Develop and maintain metadata, lineage, and data cataloging capabilities.
- Participate in data modeling exercises for both OLTP and OLAP environments.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
- Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
- Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
- Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Familiarity with modern data platform fundamentals and distributed data processing.
- Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
- Experience with orchestration and workflow management tools within AWS.
- Strong debugging and performance tuning skills across the data stack.
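To illustrate the kind of Glue/PySpark pipeline this role describes, here is a minimal sketch of a Glue ETL job that reads a catalog table, applies a column mapping, and writes Parquet back to S3 for querying in Athena. The catalog database, table, and bucket names are hypothetical, and the script only runs inside an AWS Glue job environment.

```python
# Minimal AWS Glue PySpark ETL sketch; database, table, and bucket names are hypothetical.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (e.g. by a crawler over S3).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",        # hypothetical catalog database
    table_name="orders_raw",  # hypothetical source table
)

# Simple column mapping / type casting as the transformation step.
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("order_ts", "string", "order_ts", "timestamp"),
    ],
)

# Write the curated output back to S3 as Parquet, queryable via Athena.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},  # hypothetical bucket
    format="parquet",
)
job.commit()
```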

Posted 23 hours ago


5.0 - 10.0 years

14 - 24 Lacs

Hyderabad

Remote

Source: Naukri

Role & responsibilities 1. Prepare Helm charts and package applications for deployment. 2. Create manifests and database tunnels for seamless development and testing. 3. Create and maintain development support tools and CI/CD pipelines for multiple projects. 4. Understand the product and create dependency maps to ensure smooth project workflows. 5. Maintain and optimize DevOps tools, including GitLab on-premises and GitPods. 6. Support and configure container registries, code scanners, and code reporting tools. 7. Integrate and execute testing processes within CI/CD pipelines. 8. Utilize Terraform for infrastructure provisioning and management. Operational 9. Gain expertise in databases, including backups, restores, high availability, and failover strategies. 10. Implement least privileged access and set up database tunneling for developer access. 11. Ensure comprehensive backups of repositories and branching structures. 12. Demonstrate proficiency in Kubernetes and Docker, with hands-on experience in CRDs, StatefulSets, PVs, PVCs, Docker volumes, and security contexts. 13. Experience with Helm for Kubernetes package management. 14. Utilize Ansible for configuration management. 15. Possess practical knowledge and experience in Infrastructure as Code (IAC), VMware vSphere, Linux, and configuration management. 16. Implement , provision, and monitor a fleet of servers. People 17. Monitor infrastructure using tools such as Prometheus, Grafana, and Alert Manager. 18. Work with logs aggregation systems, write queries, and set up log shipping. 19. Have hands-on experience with Python and Bash scripting to automate routine tasks. 20. Use practical knowledge of CNCF incubated tools like Longhorn, Velero, Kasten, Harbor, and Rancher to build and maintain private clouds. 21. Implement DevSecOps practices and security tools to enhance the security of infrastructure, network, and storage layers. Preferred candidate profile 1. Bachelor's degree in a related field or equivalent work experience 2. Proficiency with scripting languages (Python, Bash) for automation 3. Excellent understanding of GCP, AWS EC2, LInux, Kubernetes, Docker, Helm, Terraform, Ansible, Jenkins, Gitlab-ci, Gitlab runner, longhorn, k3s, velero backup, minio and other related technologies.

Posted 3 weeks ago


12.0 - 17.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Source: Naukri

The Role: Sr. Engineer, Database Engineering

The Team: We are looking for a highly self-motivated, hands-on Sr. Engineer, Database Engineering who will focus on our database infrastructure estate, automation, and DevOps engineering within our Enterprise Solutions division.

The Impact: This is an excellent opportunity to join Enterprise Solutions as we transform and harmonize our infrastructure into a unified place, while also developing your skills and furthering your career as we plan to power the markets of the future.

What's in it for you: This is the place to apply your existing database, infrastructure, DevOps, and leadership skills while gaining exposure to fresh and divergent technologies (e.g. AWS, Snowflake, Terraform, Python, CI/CD).

Responsibilities:
- Team Leadership: Lead and mentor a team of DBAs, fostering a collaborative and high-performance work environment. Assign tasks, manage workloads, and ensure team members meet project deadlines. Conduct performance reviews and identify training needs to enhance technical capabilities.
- Database Management: Oversee the installation, configuration, and maintenance of SQL Server, Oracle, and other database systems. Manage and optimize databases hosted on AWS RDS and AWS EC2 for performance, scalability, and security. Implement automated backup, restore, and recovery strategies for cloud-based databases. Manage database security policies, ensuring protection against unauthorized access.
- Performance & Optimization: Monitor database performance and proactively implement tuning strategies. Optimize AWS RDS instances and EC2-hosted databases for cost efficiency and performance. Analyze system logs, resolve issues, and ensure minimal downtime.
- Project & Change Management: Collaborate with development teams to support database design, deployment, and schema changes. Manage database migrations, upgrades, and patching processes, including AWS services.
- Incident & Problem Management: Act as an escalation point for critical database issues. Drive root cause analysis for incidents and ensure preventive measures are implemented.
- Documentation & Compliance: Maintain accurate documentation of database configurations, processes, and recovery procedures. Ensure compliance with data governance, security standards, and AWS best practices.

What We're Looking For:
- Technical Expertise: Proficient in SQL Server, Oracle, AWS RDS, and EC2 database environments.
- Cloud Knowledge: Strong understanding of AWS database services, including security, scaling, and cost optimization.
- Leadership Skills: Proven experience managing a DBA team or leading technical projects.
- Problem-Solving: Strong analytical skills with a proactive approach to troubleshooting.
- Communication: Excellent verbal and written communication skills for effective collaboration.
- Certifications: Preferred certifications include AWS Certified Database - Specialty, Microsoft Certified: Azure Database Administrator Associate, Oracle DBA certifications, or equivalent.

Experience Requirements:
- Minimum 12+ years of hands-on DBA experience.
- At least 2 years of experience in a leadership or team lead role.
- Experience working with AWS RDS, AWS EC2, and on-premises database environments.

Preferred Skills:
- Experience in PowerShell, T-SQL, and Python for automation.
- Knowledge of CI/CD pipelines and DevOps practices for database deployments.
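As an illustration of the backup and security automation this role covers, here is a minimal boto3 sketch that takes an on-demand RDS snapshot and flags unencrypted instances for review. The region and instance identifier are hypothetical, and it assumes AWS credentials are already configured.

```python
# Minimal boto3 sketch: on-demand RDS snapshot + encryption audit.
# Region and instance identifiers are hypothetical.
import datetime
import boto3

rds = boto3.client("rds", region_name="us-east-1")

def snapshot_instance(instance_id: str) -> str:
    """Take an on-demand snapshot of an RDS instance (supplementing automated backups)."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return snapshot_id

def list_unencrypted_instances() -> list[str]:
    """Flag RDS instances without storage encryption for security review."""
    paginator = rds.get_paginator("describe_db_instances")
    flagged = []
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            if not db.get("StorageEncrypted", False):
                flagged.append(db["DBInstanceIdentifier"])
    return flagged

if __name__ == "__main__":
    print(snapshot_instance("orders-prod"))   # hypothetical instance identifier
    print(list_unencrypted_instances())
```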

Posted 3 weeks ago


3.0 - 5.0 years

13 - 15 Lacs

Gurugram

Work from Office

Source: Naukri

A skilled DevOps Engineer to manage and optimize both on-premises and AWS cloud infrastructure. The ideal candidate will have expertise in DevOps tools, automation, system administration, and CI/CD pipeline management while ensuring security, scalability, and reliability.

Key Responsibilities:
1. AWS & On-Premises Solution Architecture:
   - Design, deploy, and manage scalable, fault-tolerant infrastructure across both on-premises and AWS cloud environments.
   - Work with AWS services like EC2, IAM, VPC, CloudWatch, GuardDuty, AWS Security Hub, Amazon Inspector, AWS WAF, and Amazon RDS with Multi-AZ.
   - Configure Auto Scaling groups (ASG) and implement load balancing techniques such as ALB and NLB.
   - Optimize cost and performance leveraging Elastic Load Balancing and EFS.
   - Implement logging and monitoring with CloudWatch, CloudTrail, and on-premises monitoring solutions.
2. DevOps Automation & CI/CD:
   - Develop and maintain CI/CD pipelines using Jenkins and GitLab for seamless code deployment across cloud and on-premises environments.
   - Automate infrastructure provisioning using Ansible and CloudFormation.
   - Implement CI/CD pipeline setups using GitLab, Maven, and Gradle, and deploy on Nginx and Tomcat.
   - Ensure code quality and coverage using SonarQube.
   - Monitor and troubleshoot pipelines and infrastructure using Prometheus, Grafana, Nagios, and New Relic.
3. System Administration & Infrastructure Management:
   - Manage and maintain Linux and Windows systems across cloud and on-premises environments, ensuring timely updates and security patches.
   - Configure and maintain application servers like Apache Tomcat and web servers like Nginx and Node.js.
   - Implement robust security measures, SSL/TLS configurations, and secure communications.
   - Configure DNS and SSL certificates.
   - Maintain and optimize on-premises storage, networking, and compute resources.
4. Collaboration & Documentation:
   - Collaborate with development, security, and operations teams to optimize deployment and infrastructure processes.
   - Provide best practices and recommendations for hybrid cloud and on-premises architecture, DevOps, and security.
   - Document infrastructure designs, security configurations, and disaster recovery plans for both environments.

Required Skills & Qualifications:
- Cloud & On-Premises Expertise: Extensive knowledge of AWS services (EC2, IAM, VPC, RDS, etc.) and experience managing on-premises infrastructure.
- DevOps Tools: Proficiency in SCM tools (Git, GitLab), CI/CD (Jenkins, GitLab CI/CD), and containerization.
- Code Quality & Monitoring: Experience with SonarQube, Prometheus, Grafana, Nagios, and New Relic.
- Operating Systems: Experience managing Linux/Windows servers and working with CentOS, Fedora, Debian, and Windows platforms.
- Application & Web Servers: Hands-on experience with Apache Tomcat, Nginx, and Node.js.
- Security & Networking: Expertise in DNS configuration, SSL/TLS implementation, and AWS security services.
- Soft Skills: Strong problem-solving abilities, effective communication, and proactive learning.

Preferred Qualifications:
- AWS certifications (Solutions Architect, DevOps Engineer) and a bachelor's degree in Computer Science or a related field.
- Experience with hybrid cloud environments and on-premises infrastructure automation.
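To illustrate the CloudWatch monitoring work mentioned above, here is a minimal boto3 sketch that creates a CPU-utilization alarm for an EC2 instance and routes it to an SNS topic. The region, instance ID, SNS topic ARN, and thresholds are all hypothetical.

```python
# Minimal boto3 sketch: CloudWatch CPU alarm for an EC2 instance.
# Region, instance ID, SNS topic ARN, and thresholds are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Alarm on sustained high CPU for a web server instance.
cloudwatch.put_metric_alarm(
    AlarmName="web-ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=3,      # breach for 15 minutes before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)
```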

Posted 1 month ago


4 - 6 years

6 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

We are looking for a Site Reliability Engineer!

You'll make a difference by: The SRE L1 Commander is responsible for ensuring the stability, availability, and performance of critical systems and services. As the first line of defense in incident management and monitoring, the role requires real-time response, proactive problem solving, and strong coordination skills to address production issues efficiently.

- Monitoring and alerting: proactively monitor system health, performance, and uptime using monitoring tools like Datadog and Prometheus.
- Serving as the primary responder for incidents, troubleshooting and resolving issues quickly to ensure minimal impact on end users.
- Accurately categorizing incidents, prioritizing them based on severity, and escalating to L2/L3 teams when necessary.
- Ensuring systems meet Service Level Objectives (SLOs) and maintain uptime as per SLAs.
- Collaborating with DevOps and L2 teams to automate manual processes for incident response and operational tasks.
- Performing root cause analysis (RCA) of incidents using log aggregators and observability tools to identify patterns and recurring issues.
- Following predefined runbooks/playbooks to resolve known issues and documenting fixes for new problems.

You'd describe yourself as:
- An experienced professional with 4 to 6 years of relevant experience in SRE, DevOps, or production support with monitoring tools (e.g., Prometheus, Datadog).
- Having working knowledge of Linux/Unix operating systems, basic scripting skills (Python, GitLab CI), and cloud platforms (AWS, Azure, or GCP).
- Familiar with container orchestration (Kubernetes, Docker, Helm charts) and CI/CD pipelines.
- Exposed to ArgoCD for implementing GitOps workflows and automated deployments for containerized applications.
- Experienced in monitoring (Datadog), infrastructure (AWS EC2, Lambda, ECS/EKS, RDS), networking (VPC, Route 53, ELB), and storage (S3, EFS, Glacier).
- Having strong troubleshooting and analytical skills to resolve production incidents effectively.
- Having a basic understanding of networking concepts (DNS, load balancers, firewalls).
- Having good communication and interpersonal skills for incident communication and escalation.
- Holding preferred certifications: AWS Certified SysOps Administrator - Associate, AWS Certified Solutions Architect - Associate, or AWS Certified DevOps Engineer - Professional.
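As a rough illustration of the SLO monitoring described above, here is a minimal Python sketch that checks request availability against the Prometheus HTTP API. The Prometheus URL, metric name, job label, and SLO target are assumptions for the sketch, not part of the listing.

```python
# Minimal SLO availability check against the Prometheus HTTP API.
# The Prometheus URL, metric/label names, and SLO target are hypothetical.
import requests

PROMETHEUS = "http://prometheus.internal:9090"
SLO_TARGET = 0.995  # 99.5% availability

def availability(window: str = "30d") -> float:
    """Ratio of successful (non-5xx) requests over the window."""
    query = (
        f'sum(rate(http_requests_total{{job="api",code!~"5.."}}[{window}]))'
        f' / sum(rate(http_requests_total{{job="api"}}[{window}]))'
    )
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    return float(resp.json()["data"]["result"][0]["value"][1])

if __name__ == "__main__":
    avail = availability()
    status = "OK" if avail >= SLO_TARGET else "BREACH - escalate to L2"
    print(f"30d availability: {avail:.4%} ({status})")
```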

Posted 1 month ago


7 - 9 years

25 - 32 Lacs

Chennai, Bengaluru

Work from Office

Source: Naukri

Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience with monolithic and microservices architectures are preferred.
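As an illustration of the migration scripting this role involves, here is a minimal boto3 sketch that inventories EC2 instances and their attached EBS volumes for a tagged migration wave. The tag key, region, and wave name are hypothetical.

```python
# Minimal migration-wave inventory sketch; tag key and region are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def wave_inventory(wave: str) -> list[dict]:
    """Collect instance IDs, types, and attached EBS volumes for a tagged migration wave."""
    paginator = ec2.get_paginator("describe_instances")
    inventory = []
    for page in paginator.paginate(
        Filters=[{"Name": "tag:MigrationWave", "Values": [wave]}]  # hypothetical tag key
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                inventory.append({
                    "InstanceId": inst["InstanceId"],
                    "InstanceType": inst["InstanceType"],
                    "Volumes": [
                        m["Ebs"]["VolumeId"]
                        for m in inst.get("BlockDeviceMappings", [])
                        if "Ebs" in m
                    ],
                })
    return inventory

if __name__ == "__main__":
    for item in wave_inventory("wave-1"):
        print(item)
```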

Posted 1 month ago


6 - 8 years

18 - 20 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

Source: Naukri

- 6+ years of hands-on experience with AWS services (Lambda, DynamoDB, SQS, SNS, S3, ECS, EC2); experience in each service is mandatory.
- Must have created Lambda functions and done scripting; deployment-only experience will not suffice.
- Hands-on with Java, Spring Boot, microservices, and Kafka.
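To illustrate the Lambda, SQS, and DynamoDB combination referenced above, here is a minimal handler sketch, written in Python for brevity (the role itself targets Java/Spring Boot). The table and field names are hypothetical.

```python
# Minimal Lambda handler sketch: consume SQS messages and persist to DynamoDB.
# Table and field names are hypothetical; the role itself uses Java/Spring Boot.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table

def handler(event, context):
    """Consume SQS-triggered records and persist each order to DynamoDB."""
    for record in event["Records"]:
        body = json.loads(record["body"])
        table.put_item(Item={
            "order_id": body["order_id"],
            "status": body.get("status", "RECEIVED"),
        })
    return {"processed": len(event["Records"])}
```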

Posted 1 month ago
