Jobs
Interviews

71 AWS Lambda Jobs - Page 2

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

15 - 20 Lacs

Hyderabad

Hybrid

Hello, urgent job openings for a Data Engineer role @ GlobalData (Hyderabad). The job description is given below; please go through it to understand the requirement. If the requirement matches your profile and you are interested in applying, please share your updated resume at m.salim@globaldata.com with the subject line "Applying for Data Engineer @ GlobalData (Hyd)".

Share the following details in the mail:
- Full Name:
- Mobile #:
- Qualification:
- Company Name:
- Designation:
- Total Work Experience (Years):
- Years of experience working on Snowflake/Google BigQuery:
- Current CTC:
- Expected CTC:
- Notice Period:
- Current Location / willing to relocate to Hyderabad?:

Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Apartments, Kondapur Road, Hyderabad, Telangana 500081.

Job Description: We are looking for a skilled and experienced Data Delivery Specification (DDS) Engineer to join our data team. The DDS Engineer will be responsible for designing, developing, and maintaining robust data pipelines and delivery mechanisms, ensuring timely and accurate data delivery to various stakeholders. This role requires strong expertise in cloud data platforms such as AWS, Snowflake, and Google BigQuery, along with a deep understanding of data warehousing concepts.

Key Responsibilities:
- Design, develop, and optimize data pipelines for efficient data ingestion, transformation, and delivery from various sources to target systems.
- Implement and manage data delivery solutions using cloud platforms such as AWS (S3, Glue, Lambda, Redshift), Snowflake, and Google BigQuery.
- Collaborate with data architects, data scientists, and business analysts to understand data requirements and translate them into technical specifications.
- Develop and maintain DDS documents outlining data sources, transformations, quality checks, and delivery schedules.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Monitor data pipelines, troubleshoot issues, and implement solutions to ensure continuous data flow.
- Optimize data storage and query performance on cloud data warehouses.
- Implement automation for data delivery processes and monitoring.
- Stay current with new data technologies and best practices in data engineering and cloud platforms.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field.
- 4+ years of experience in data engineering, with a focus on data delivery and warehousing.
- Proven experience with cloud data platforms, specifically: AWS (S3, Glue, Lambda, Redshift, or other relevant data services); Snowflake (strong experience with data warehousing, SQL, and performance optimization); Google BigQuery (experience with data warehousing, SQL, and data manipulation).
- Proficiency in SQL for complex data querying, manipulation, and optimization.
- Experience with scripting languages (e.g., Python) for data pipeline automation.
- Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.

Thanks & Regards, Salim (Human Resources)
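The quality checks and delivery validation described in the posting can be sketched in plain Python. This is only an illustration: the field names, null-ratio threshold, and record shape below are invented for the example, not taken from any actual delivery specification.

```python
# Hypothetical data-quality gate of the kind a DDS pipeline might run
# before handing a batch to the load step. All names are illustrative.

def validate_batch(records, required_fields, max_null_ratio=0.05):
    """Return (ok, issues) for a list of dict records."""
    issues = []
    if not records:
        return False, ["empty batch"]
    for field in required_fields:
        # Count rows where the required field is missing or empty.
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records)
        if ratio > max_null_ratio:
            issues.append(f"{field}: {ratio:.0%} null, above {max_null_ratio:.0%}")
    return not issues, issues

batch = [
    {"id": 1, "ticker": "GD", "close": 101.5},
    {"id": 2, "ticker": "GD", "close": None},
    {"id": 3, "ticker": None, "close": 99.0},
]
ok, issues = validate_batch(batch, ["id", "ticker", "close"])
print(ok, issues)
```

A real pipeline would typically log these issues and block delivery until the batch passes, which matches the "quality checks and delivery schedules" language in the DDS document description above.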

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Hyderabad

Work from Office

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues.

Required candidate profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools such as Airflow; a good understanding of the investment domain; experience with dbt, cloud platforms (AWS, Azure), and DevOps.

Posted 1 month ago

Apply

4.0 - 9.0 years

8 - 16 Lacs

Kolkata

Remote

Enhance/modify applications, configure existing systems, and provide user support. .NET full stack, or Angular 18+ developer with a .NET backend; SQL Server. Angular version 15+ is mandatory; version 18+ is nice to have.

Posted 1 month ago

Apply

6.0 - 10.0 years

30 - 40 Lacs

Hyderabad

Hybrid

About the Role: We are looking for a Full Stack Developer with strong expertise in Java Spring Boot, microservices architecture, PDF processing, and AWS DevOps. You will play a key role in building reliable and performant applications that power our AI-driven construction platform. Key responsibilities are listed below; relevant experience in those areas is required.

Key Responsibilities:
- Backend Development: Design and implement scalable microservices using Java Spring Boot, optimized for performance and maintainability.
- PDF Document Processing: Build and integrate modules for extracting, processing, and managing PDF documents such as construction plans, contracts, and specifications.
- Front-End Integration: Collaborate with frontend engineers to ensure seamless communication with backend services via REST APIs or GraphQL.
- Cloud Architecture & Deployment: Deploy and manage services on AWS using DevOps best practices, including containerization (Docker), orchestration (Kubernetes/ECS), and CI/CD pipelines (GitHub Actions, CodePipeline).
- Database & Data Flow: Design data models using PostgreSQL and MongoDB; manage data pipelines and integrations across services.
- Security & Scalability: Implement access controls, encryption standards, and secure API endpoints to support enterprise-level deployments.
- Cross-Team Collaboration: Work with AI/ML engineers, product managers, and domain experts to develop backend services that support AI features like document understanding and risk analysis.

Required Skills & Qualifications

Technical Skills:
- Strong programming skills in Java, with hands-on experience in Spring Boot and microservices architecture
- Experience processing and managing data from PDFs using tools like Apache PDFBox, iText, or similar libraries
- Proficiency in designing and consuming RESTful or GraphQL APIs
- Experience with AWS services such as EC2, S3, Lambda, API Gateway, CloudWatch, and RDS
- Hands-on experience with Docker, CI/CD pipelines, and infrastructure automation (e.g., Terraform, CloudFormation)
- Familiarity with PostgreSQL, MongoDB, and distributed caching mechanisms (e.g., Redis)
- Understanding of authentication and security principles (OAuth2, JWT, etc.)
- Exposure to AI/ML model consumption via APIs (e.g., OpenAI, SageMaker)

Soft Skills:
- Ability to work independently and take full ownership of backend services
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration in agile cross-functional teams
- Passion for delivering high-quality, reliable, and scalable solutions

Beyond Technical Skills: What We're Looking For. At Wyre AI, we're building more than just software. We're building a team that thrives in a fast-paced, high-ownership environment. Here's what we value deeply beyond strong technical capabilities.

Startup Readiness & Ownership:
- Bias for action: you ship fast, test quickly, and iterate with purpose.
- Comfort with ambiguity: you make decisions with limited information and adapt as things evolve.
- Ownership mindset: you treat the product as your own, not just a list of tickets to complete.
- Resourcefulness: you know when to hack something together to keep moving, and when it's time to build it right.

Product Thinking:
- User-centric approach: you care about the "why" behind what you're building and understand the user's perspective.
- Collaborative in shaping product: you're comfortable challenging and refining product specs instead of just executing them.
- Strategic trade-off awareness: you can navigate choices (speed vs. scalability, UX vs. tech debt, MVP vs. V1) with clarity.

Collaboration & Communication:
- Cross-functional comfort: you work well with product, design, and founders.
- Clear communicator: you can explain technical concepts in simple terms when needed.
- Feedback culture fit: you give and receive feedback without ego.

Growth Potential:
- Fast learner: startups change, and so do stacks; willingness to learn is gold.
- Long-term mindset: there is a lot of opportunity to scale.
- Mentorship readiness: if you can bring others up as the team scales, that's a win.

Startup Cultural Fit:
- Mission-driven: you care deeply about what you're building.
- Flexible work style: especially if remote, please be flexible.
- No big-company baggage: no expectations of layered teams or polished specs. We move fast and build together.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company. Note: this position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap).

About the Role: Join a high-impact global business team that is building cutting-edge B2B technology solutions. As part of a structured returnship program, this role is ideal for experienced professionals re-entering the workforce after a career break. You'll work on mission-critical data infrastructure in one of the world's largest cloud-based environments, helping transform enterprise procurement through intelligent architecture and scalable analytics. This role merges consumer-grade experience with enterprise-grade features to serve businesses worldwide. You'll collaborate across engineering, sales, marketing, and product teams to deliver scalable solutions that drive measurable value.

Key Responsibilities:
- Design, build, and manage scalable data infrastructure using modern cloud technologies
- Develop and maintain robust ETL pipelines and data warehouse solutions
- Partner with stakeholders to define data needs and translate them into actionable solutions
- Curate and manage large-scale datasets from multiple platforms and systems
- Ensure high standards for data quality, lineage, security, and governance
- Enable data access for internal and external users through secure infrastructure
- Drive insights and decision-making by supporting sales, marketing, and outreach teams with real-time and historical data
- Work in a high-energy, fast-paced environment that values curiosity, autonomy, and impact

Who You Are:
- 5+ years of experience in data engineering or related technical roles
- Proficient in SQL and familiar with relational database management
- Skilled in building and optimizing ETL pipelines
- Strong understanding of data modeling and warehousing
- Comfortable working with large-scale data systems and distributed computing
- Able to work independently, collaborate with cross-functional teams, and communicate clearly
- Passionate about solving complex problems through data

Preferred Qualifications:
- Hands-on experience with cloud technologies including Redshift, S3, AWS Glue, EMR, Lambda, Kinesis, and Firehose
- Familiarity with non-relational databases (e.g., object storage, document stores, key-value stores, column-family DBs)
- Understanding of cloud access control systems such as IAM roles and permissions

Returnship Benefits:
- Dedicated onboarding and mentorship support
- Flexible work arrangements
- Opportunity to work on meaningful, global-scale projects while rebuilding your career momentum
- Supportive team culture that encourages continuous learning and professional development

Top 10 Must-Have Skills:
1. SQL
2. ETL Development
3. Data Modeling
4. Cloud Data Warehousing (e.g., Redshift or equivalent)
5. Experience with AWS or similar cloud platforms
6. Working with Large-Scale Datasets
7. Data Governance & Security Awareness
8. Business Communication & Stakeholder Collaboration
9. Automation with Python/Scala (for ETL pipelines)
10. Familiarity with Non-Relational Databases
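The ETL pipeline work this posting describes can be illustrated with a minimal, self-contained sketch. SQLite stands in here for a cloud warehouse such as Redshift, and all table, column, and field names are invented for the example.

```python
# Minimal extract-transform-load sketch: clean raw records, then load them
# into a "warehouse" table. SQLite substitutes for a real cloud warehouse.
import sqlite3

def run_etl(raw_rows, conn):
    # Transform: normalise region names and drop rows without an amount.
    cleaned = [
        (r["order_id"], r["region"].strip().upper(), r["amount"])
        for r in raw_rows
        if r.get("amount") is not None
    ]
    # Load into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, region TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)
    return len(cleaned)

conn = sqlite3.connect(":memory:")
raw = [
    {"order_id": 1, "region": " apac ", "amount": 120.0},
    {"order_id": 2, "region": "emea", "amount": None},   # dropped by the transform
    {"order_id": 3, "region": "EMEA", "amount": 75.5},
]
loaded = run_etl(raw, conn)
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'EMEA'"
).fetchone()[0]
print(loaded, total)  # 2 75.5
```

In a production pipeline the same extract-transform-load shape would be driven by an orchestrator and write to S3/Redshift rather than an in-memory database.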

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

Job Title: Sr. Managed Services Engineer - AWS (L3)
Company: SHI | LOCUZ
Location: Hyderabad
Experience: 8+ Years
Level: L3 Managed Services
Shift: 24/7 Support (Rotational Shifts)
Notice Period: Immediate Joiners or Max 15 to 20 Days

About the Role: We are looking for a seasoned Sr. Managed Services Engineer - AWS (L3) to join our expert team supporting SHI Complete and expert-level AWS services. The ideal candidate will have strong hands-on experience with core AWS services and managed services delivery, and a passion for proactive monitoring and automation in cloud environments.

Key Responsibilities:
- Perform in-depth reviews of customer AWS environments
- Evaluate business requirements and develop tailored service delivery plans
- Configure, monitor, and maintain AWS infrastructure for performance and availability
- Handle L3-level escalations and troubleshoot complex customer incidents/tickets
- Conduct proactive system checks, health monitoring, and performance tuning
- Implement data backup and recovery best practices
- Maintain security compliance and ensure adherence to SLAs and KPIs
- Prepare AWS-level change roadmaps for continuous improvement
- Lead incident response and root cause analysis for critical issues
- Collaborate with L1, L2, and vendor support teams
- Mentor junior engineers and ensure knowledge transfer

Required Skills & Experience:
- 8+ years of IT experience, with strong exposure to managed services environments
- Deep hands-on experience with a wide range of AWS services, including but not limited to: CloudWatch, EC2, EBS, S3, RDS, EKS, Lambda, CloudFormation, CloudTrail, VPC, Route 53, Transit Gateway, IAM, Security Hub, GuardDuty, AWS Backup, WAF & Shield, ACM, FSx, EFS, Elastic Beanstalk, API Gateway, Amazon WorkSpaces, Control Tower
- Excellent understanding of AWS operational excellence and the Well-Architected Framework
- Experience with 24x7 production environments and ITIL-based service delivery
- Strong troubleshooting and analytical skills
- Excellent communication and documentation skills

Nice to Have:
- AWS certifications (e.g., Solutions Architect Associate/Professional, SysOps Administrator, DevOps Engineer)
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation
- Experience with monitoring/alerting via EventBridge, SNS, SQS, or third-party tools

Why Join Us?
- Work with leading-edge AWS technologies
- Be part of a high-performance managed services team
- Great learning opportunities and certifications
- Stable and growth-oriented career path in cloud infrastructure

Apply now and be part of our mission to deliver expert AWS support 24x7 for enterprise customers!

Posted 1 month ago

Apply

0.0 - 1.0 years

2 - 3 Lacs

Bengaluru

Work from Office

Responsibilities:
- Collaborate with cross-functional teams on project delivery.
- Develop backend solutions using Python, FastAPI, and AWS.
- Optimize performance through Redis and Nginx.

Food allowance.

Posted 1 month ago

Apply

5.0 - 9.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job Description: We are seeking a skilled Python Developer to join our team. The ideal candidate will have experience in designing, developing, and deploying scalable applications. You will work on backend services, data processing, APIs, and automation, ensuring performance and reliability.

Responsibilities:
- Design, develop, test, and deploy high-quality Python applications.
- Write clean, maintainable, and efficient code following best practices.
- Develop RESTful APIs and integrate with third-party services.
- Work with databases (SQL & NoSQL) to design efficient data storage solutions.
- Implement security, authentication, and authorization mechanisms.
- Optimize application performance and scalability.
- Collaborate with cross-functional teams, including frontend developers and DevOps.
- Debug, troubleshoot, and resolve software issues.
- Automate repetitive tasks using scripts and tools.

Requirements:
- Experience: 5 to 12 years of hands-on experience in Python development.
- Frameworks: proficiency in Django, Flask, or FastAPI.
- Databases: strong knowledge of MySQL (or any RDBMS) and MongoDB.
- APIs: experience in developing RESTful APIs and working with API documentation tools like Swagger/Postman.
- Cloud & DevOps: familiarity with AWS, Docker, Kubernetes, and CI/CD pipelines.
- Version Control: proficiency in Git and GitHub/GitLab.
- Testing: experience with unit-testing frameworks like PyTest or unittest.
- Messaging Queues: knowledge of RabbitMQ, Kafka, or Celery is a plus.
- Security & Best Practices: understanding of authentication (OAuth, JWT) and secure coding practices.
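The unit-testing requirement in this posting (PyTest or unittest) can be illustrated with the standard library's unittest. The pagination helper under test is a hypothetical example invented for the sketch, not part of any real codebase.

```python
# Testing a small, hypothetical helper that builds pagination parameters
# for a REST endpoint, using the stdlib unittest framework.
import unittest

def pagination_params(page, per_page, max_per_page=100):
    """Translate 1-based page numbers into SQL-style limit/offset."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    per_page = min(per_page, max_per_page)  # cap the page size
    return {"limit": per_page, "offset": (page - 1) * per_page}

class PaginationTests(unittest.TestCase):
    def test_first_page(self):
        self.assertEqual(pagination_params(1, 20), {"limit": 20, "offset": 0})

    def test_caps_per_page(self):
        self.assertEqual(pagination_params(2, 500)["limit"], 100)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            pagination_params(0, 10)

if __name__ == "__main__":
    # exit=False so the script continues after the test run; argv is pinned
    # so unittest does not pick up command-line arguments.
    unittest.main(exit=False, argv=["pagination"])
```

The same tests translate almost line-for-line into PyTest functions with plain `assert` statements.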

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Seeking a results-driven Python Developer with expertise in API development, AWS services, SQL, and raw queries. Must have a basic grasp of backend architecture and system design principles. A Python Developer Lead plays a crucial role in the software development lifecycle, combining deep technical expertise in Python with strong leadership and project management skills. They are responsible for guiding a team of Python developers, ensuring the delivery of high-quality, scalable, and efficient software solutions.

Job Summary: The Python Developer Lead will be responsible for overseeing the design, development, and deployment of robust, scalable, and performant Python applications. This role requires a blend of hands-on coding, architectural design, team leadership, and cross-functional collaboration. The Lead will mentor junior developers, establish best practices, ensure code quality, and contribute significantly to the overall technical strategy and success of our projects.

Key Responsibilities:

Technical Leadership & Architecture:
- Lead the design and development of complex Python-based systems, ensuring scalability, reliability, and maintainability.
- Define and enforce coding standards, design patterns, and architectural principles across the team.
- Conduct code reviews, provide constructive feedback, and ensure adherence to best practices.
- Stay abreast of emerging technologies, tools, and trends in the Python ecosystem and integrate relevant advancements.

Team Management & Mentorship:
- Manage and mentor a team of Python developers, fostering their technical growth and professional development.
- Assign tasks, monitor progress, and provide guidance to ensure efficient project execution.
- Facilitate knowledge sharing and encourage a collaborative team environment.
- Participate in the hiring process for new team members.

Software Development & Delivery:
- Develop, test, and deploy high-quality, efficient, and well-documented Python code for various applications and services.
- Work with cross-functional teams (Product, UI/UX, QA, DevOps) to translate business requirements into technical specifications and deliver effective solutions.
- Design and implement RESTful APIs, integrate with third-party services, and manage data pipelines.
- Troubleshoot and debug complex issues, ensuring low-latency and high-availability applications.
- Oversee the entire software development lifecycle, from conception to deployment and maintenance.

Posted 2 months ago

Apply

8.0 - 12.0 years

16 - 27 Lacs

Chennai, Bengaluru

Work from Office

Role & responsibilities:
- Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services
- Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet)
- Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL
- Implement data validation and quality checks, and ensure schema evolution across data sources
- Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch
- Collaborate with product owners, architects, and data scientists to deliver robust data workflows
- Tune job performance, manage partitioning strategies, and reduce job latency and cost
- Contribute to version control, CI/CD processes, and production support

Preferred candidate profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 5+ years of experience with PySpark, Spark SQL, RDDs, UDFs, and Spark optimization
- Strong experience in building ETL workflows for large-scale data processing
- Solid understanding of the AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, and Athena
- Proficiency in Python, SQL, and shell scripting
- Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC)
- Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest)
- Experience with Redshift, Snowflake, or other data warehouse platforms
- Exposure to data governance, cataloging, or data quality frameworks
- Terraform or infrastructure-as-code experience
- Understanding of Spark internals, DAGs, and caching strategies
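The partitioning strategies this posting mentions can be sketched without a Spark cluster: the plain-Python example below groups records into Hive-style `dt=` partition prefixes of the kind a PySpark job writes when it partitions Parquet output by date. The bucket path and field names are invented for illustration.

```python
# Group records by date into Hive-style partition key prefixes, mirroring
# how a Spark job partitions Parquet output (e.g. .write.partitionBy("dt")).
from collections import defaultdict

def partition_keys(records, bucket="s3://example-lake/events"):
    """Map each dt=YYYY-MM-DD partition prefix to its records."""
    parts = defaultdict(list)
    for rec in records:
        parts[f"{bucket}/dt={rec['event_date']}"].append(rec)
    return dict(parts)

records = [
    {"event_date": "2024-05-01", "user": "a"},
    {"event_date": "2024-05-01", "user": "b"},
    {"event_date": "2024-05-02", "user": "c"},
]
parts = partition_keys(records)
print(sorted(parts))
```

Choosing the partition column well (a date or another low-cardinality filter key) is what lets engines like Athena or Spark prune whole prefixes and so reduce job latency and cost, as the posting's tuning bullet describes.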

Posted 2 months ago

Apply

6.0 - 8.0 years

12 - 15 Lacs

Pune, Chennai

Work from Office

Required Skills: minimum 6 years of experience in data engineering / backend data processing; strong hands-on experience with Python for data processing; expertise in Apache Spark (PySpark preferred); advanced proficiency in SQL.

Posted 2 months ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.

Required candidate profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL, and CDC experience. Must know data lakes, warehousing, and orchestration tools.

Posted 2 months ago

Apply

5.0 - 8.0 years

66 - 108 Lacs

Kolkata

Work from Office

Seeking a results-driven Python Developer with expertise in API development, AWS services, SQL, and raw queries. Must have a basic grasp of backend architecture and system design principles.

Posted 2 months ago

Apply

2.0 - 5.0 years

6 - 17 Lacs

Noida

Work from Office

Responsibilities:
- Design, develop, test, and maintain Vue.js applications using TypeScript, Node.js, GraphQL, and AWS services.
- Collaborate with cross-functional teams on API Gateway integration and DynamoDB data management.

Food allowance.

Posted 2 months ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Noida, Pune, Gurugram

Hybrid

Job information:
Role: AWS Cloud Engineer
Shift Timings: 01:00 PM to 10:00 PM

Required Skills:
1. Design and manage cloud-based systems on AWS.
2. Develop and maintain backend services and APIs using Java, Python, or another language.
3. Basic knowledge of SQL and the ability to write SQL queries.
4. Good hands-on knowledge of DevOps, Dockerfiles, and multistage Docker builds.
5. Implement containerization using Docker and orchestration with ECS/Kubernetes.
6. Monitor and troubleshoot cloud infrastructure and application performance.
7. Collaborate with cross-functional teams to integrate systems seamlessly.
8. Document system architecture, configurations, and operational procedures.

Strong hands-on knowledge needed: ECS, ECR, NLB, ALB, ACM, IAM, S3, Lambda, RDS, KMS, API Gateway, Cognito, CloudFormation.

Good to have:
- Experience with AWS CDK for infrastructure as code.
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer).
- Python

Job Type: Full Time. If interested in the above job, please share your updated resume.

ABOUT IRIS SOFTWARE: Iris Software has been recognized as one of India's Top 100 Best Companies to Work For 2023 by the Great Place to Work® Institute. The best workplaces were selected after a rigorous evaluation of 1,200 companies in India from more than 21 industries. Iris Software Inc. is a CMMI ML5 professional software services organization offering high-quality, cost-effective solutions to businesses. It has helped meet the IT requirements of companies ranging from the Fortune 100 to medium-sized firms by utilizing best-of-breed technologies, rapidly deployable solutions, and flexible engagement models. With offices in India, the USA, Mexico, and Canada, we employ 4,000+ highly talented professionals and domain experts. We are strategic partners to over 25 Fortune 100 companies, and our consistent striving for high customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our partners. Over 93% of the customers we have worked with have chosen to re-engage with us. Iris is a quality- and security-conscious organization and has maintained steady progress in its quality management and information security management programs over the years: certified to ISO 9001:2008 since 2002 | CMMI ML5 | ISO 27001:2013. Visit us at www.irissoftware.com
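The "multistage Docker" skill called out above might look like the following sketch: a generic Python service whose dependencies are installed in a builder stage, with only the virtualenv and source copied into a slim runtime image. All image tags, paths, and the module name are illustrative assumptions, not a build file from this posting.

```dockerfile
# Stage 1: install dependencies into an isolated virtualenv.
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && \
    /venv/bin/pip install --no-cache-dir -r requirements.txt

# Stage 2: slim runtime image; build tools and pip caches stay behind.
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /venv /venv
COPY . .
ENV PATH="/venv/bin:$PATH"
CMD ["python", "-m", "app"]  # "app" is a placeholder module name
```

The payoff of the two-stage layout is a smaller, cleaner final image: only the second stage's layers are shipped.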

Posted 2 months ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Pune

Remote

We're Hiring! | Senior Data Engineer (Remote)
Location: Remote | Shift: US CST time | Department: Data Engineering

Are you a data powerhouse who thrives on solving complex data challenges? Do you love working with Python, AWS, and cutting-edge data tools? If yes, Atidiv wants YOU! We're looking for a Senior Data Engineer to build and scale data pipelines, transform how we manage data lakes and warehouses, and power real-time data experiences across our products.

What You'll Do:
- Architect and develop robust, scalable data pipelines using Python and PySpark
- Drive real-time and batch data ingestion from diverse data sources
- Build and manage data lakes and data warehouses using AWS (S3, Glue, Redshift, EMR, Lambda, Kinesis)
- Write high-performance SQL queries and optimize ETL/ELT jobs
- Collaborate with data scientists, analysts, and engineers to ensure high data quality and availability
- Implement monitoring, logging, and alerting for workflows
- Ensure top-tier data security, compliance, and governance

What We're Looking For:
- 5+ years of hands-on experience in data engineering
- Strong skills in Python, dbt, and SQL, and experience working with Snowflake
- Proven experience with Airflow, Kafka/Kinesis, and the AWS ecosystem
- Deep understanding of CI/CD practices
- Passion for clean code, automation, and scalable systems

Why Join Atidiv?
- 100% remote | flexible work culture
- Opportunity to work with cutting-edge technologies
- Collaborative, supportive team that values innovation and ownership
- Work on high-impact, global projects

Ready to transform data into impact? Send your resume to nitish.pati@atidiv.com

Posted 2 months ago

Apply

5.0 - 8.0 years

12 - 19 Lacs

Pune

Remote

5+ years of experience as a Performance Tester on AWS services. Experience with AWS services (EKS, Lambda, EC2, RDS), SRE experience (preferred), performance testing tools (JMeter, Gatling, LoadRunner), and CI/CD tools (Jenkins, GitLab CI) in an Agile environment.

Posted 2 months ago

Apply

9.0 - 11.0 years

15 - 30 Lacs

Noida, Bengaluru

Work from Office

Job description
Location: Noida/Bangalore
Experience: 9+ years

Position Overview: We are seeking a highly skilled Lead AWS DevOps Engineer with an emphasis on AWS cloud technologies and working knowledge of Azure. This role requires deep expertise in architecting, deploying, and managing cloud infrastructure for a variety of applications, with a strong focus on front-end workloads. Proficiency with critical AWS services, including VPC, Lambda, Elastic Load Balancing, Route 53, CloudFront, EC2, RDS, and S3, is essential. You will lead the design of scalable, secure, and resilient environments, mentor junior engineers, and collaborate across teams.

Key Responsibilities: Architect, implement, and maintain AWS infrastructure, focusing on:
- Elastic Load Balancing (ELB): configure and manage Application Load Balancers (ALB), Network Load Balancers (NLB), and Classic Load Balancers to distribute traffic efficiently, maximize application availability, and prevent server overloads.
- Amazon Route 53: manage DNS routing, domain registration, and health checks; implement advanced routing policies (weighted, latency-based, failover, geolocation) to ensure high availability, low latency, and seamless integration with AWS resources.
- CloudFront: optimize content delivery and caching for front-end applications.
- EC2: provision, monitor, and scale compute resources.
- RDS: manage and optimize relational databases, i.e. RDS Aurora, MySQL, and PostgreSQL.
- AWS Lambda: deep expertise with AWS Lambda, including Layers and ECR integrations.
- VPC: design secure, scalable network topologies, including subnets, security groups, and peering.

Additional responsibilities:
- Lead the support of existing applications and the migration, deployment, and scaling of front-end, back-end, and data applications in the cloud.
- Develop and maintain infrastructure as code (e.g., Terraform, CloudFormation).
- Automate CI/CD pipelines and streamline release processes.
- Monitor, troubleshoot, and optimize system performance and costs.
- Integrate and support Azure services as needed for hybrid or multi-cloud scenarios.
- Mentor and support DevOps team members, fostering a culture of continuous improvement.
- Stay current with cloud trends and emerging technologies to drive innovation.

Required Skills and Experience:
- 10+ years of hands-on DevOps experience, with at least 3 years in a lead or senior hands-on role.
- Deep expertise in AWS, including: Elastic Load Balancing (ALB, NLB, CLB) with advanced traffic routing, health checks, and high-availability configurations; Route 53 DNS management, custom routing policies, domain registration, health checks, and integration with other AWS services; CloudFront, EC2, RDS, VPC, and bastion hosts.
- Hosting front-end applications using EC2 and containers.
- Overseeing cost optimization for current applications and infrastructure.
- Working knowledge of Azure cloud services and integration/migration strategies.
- Strong proficiency with infrastructure-as-code tools (Terraform, AWS CloudFormation).
- Experience with CI/CD tools (e.g., Jenkins, GitHub Actions, AWS CodePipeline).
- Proficiency in scripting languages (Python, Bash, or similar).
- Solid understanding of networking, security best practices, and cloud cost optimization.
- Experience supporting front-end application deployments and troubleshooting performance issues.
- Excellent communication, leadership, and team collaboration skills.

Preferred Qualifications:
- AWS and/or Azure certifications (e.g., AWS Solutions Architect Professional, DevOps Engineer; any Azure certification is good to have) [must have].
- Experience with containerization (Docker, ECS, EKS, AKS) [must have].
- GitHub and SonarQube integration experience is required.
- Familiarity with monitoring and logging tools (CloudWatch, Datadog, Prometheus).
- Prior experience in a multi-cloud or hybrid environment [must have].

Education: Bachelor's degree in computer science, engineering, or a related field (or equivalent experience).

This role is ideal for a proactive DevOps leader who thrives in dynamic environments and is passionate about leveraging AWS and Azure technologies to deliver robust, scalable, and cost-effective solutions for business applications.
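The weighted routing policy described under Route 53 above might be expressed in Terraform as below: an 80/20 traffic split between two load balancers behind one DNS name. The zone variable, record name, and load-balancer resources are placeholders invented for the sketch, not infrastructure from this posting.

```hcl
# 80% of resolutions go to the primary ALB, 20% to the canary ALB.
resource "aws_route53_record" "app_primary" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "primary"

  weighted_routing_policy {
    weight = 80
  }

  alias {
    name                   = aws_lb.primary.dns_name
    zone_id                = aws_lb.primary.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "app_canary" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "canary"

  weighted_routing_policy {
    weight = 20
  }

  alias {
    name                   = aws_lb.canary.dns_name
    zone_id                = aws_lb.canary.zone_id
    evaluate_target_health = true
  }
}
```

With `evaluate_target_health = true`, Route 53 drops an unhealthy target from the weighted set, which is how the weighted and failover policies combine for high availability.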

Posted 2 months ago

Apply

9.0 - 10.0 years

5 - 7 Lacs

Noida, Bengaluru

Work from Office

Requirements:
- 5+ years of experience in DevOps or cloud engineering.
- Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, networking, etc.).
- Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep.
- Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with Kubernetes (EKS, AKS) and containerization (Docker).
- Knowledge of monitoring and logging tools like Prometheus, Grafana, the ELK Stack, CloudWatch, and Azure Monitor.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of security best practices in cloud environments.
- Experience with version control systems like Git.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.

Posted 2 months ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

Role:-AWS Devops Experience:- 8-12 Years Location:- Bangalore / Mumbai Work Mode:-Hybrid About the Role: We are seeking a skilled AWS Systems Engineer to design, deploy, and maintain scalable, secure, and highly available cloud infrastructure on Amazon Web Services (AWS). The ideal candidate will have hands-on experience with AWS EKS, AWS services, automation, and DevOps best practices to optimize performance, reliability, and cost-efficiency. Key Responsibilities Design & Deployment: Architect, implement, and manage AWS cloud infrastructure (EC2, S3, VPC, Lambda, RDS, ECS/EKS, etc.). Automation & DevOps: Develop Infrastructure as Code (IaC) using Terraform, CloudFormation, or CDK. Automate deployments with CI/CD pipelines (AWS CodePipeline, GitHub Actions, Jenkins). Monitoring & Performance: Configure and maintain monitoring tools (CloudWatch, Prometheus) to ensure system reliability and performance. Security & Compliance: Implement security best practices (IAM, KMS, Security Groups, WAF) and ensure compliance with industry standards (SOC2, HIPAA, GDPR). Troubleshooting & Optimization: Identify and resolve performance bottlenecks, optimize costs, and ensure high availability (HA) and disaster recovery (DR) strategies. Collaboration: Work closely with development, DevOps, and security teams to streamline cloud operations. Required Skills and Qualification Experience: 8+ years in cloud engineering with a focus on AWS.• Technical Skills:o Proficiency in AWS services (EC2, S3, Lambda, SQS, RDS, CloudFront, IAM, etc.) Strong scripting skills (Python, Bash Infrastructure as Code (Terraform, CloudFormation) Containerization & Orchestration (Docker, Kubernetes, ECS/EKS) CI/CD pipelines (Jenkins, GitHub Actions, AWS CodePipeline) Certifications (Preferred): AWS Certified Solutions Architect Associate/Professional AWS Certified DevOps Engineer Professional• Soft Skills: Strong problem-solving, communication, and teamwork abilities. 
Preferred Qualifications:
- Experience with ArgoCD, Kubecost, and Fluentd.
- Knowledge of serverless architectures and microservices.
- Familiarity with Linux/Windows administration and networking concepts.
- Graduate/post-graduate qualification: B.Tech./M.Tech./BCA/MCA.
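As a rough illustration of the serverless side of this role, here is a minimal AWS Lambda handler sketch for an S3 ObjectCreated event. The event shape follows the standard S3 notification format, but the bucket and key values in the usage example are invented; a real function would also fetch and process the object with boto3, which is omitted to keep the sketch self-contained.

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal Lambda handler sketch: summarize S3 event records.

    Extracts the bucket and key from each record. S3 keys arrive
    URL-encoded (spaces become '+'), so they are decoded before use.
    """
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = urllib.parse.unquote_plus(s3.get("object", {}).get("key", ""))
        processed.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(processed)}
```

Because the handler is a plain function, it can be exercised locally with a hand-built event dict before being wired to an S3 event notification.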

Posted 2 months ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Bengaluru

Hybrid

Role: AWS DevOps Engineer
Experience: 8-12 Years
Location: Bangalore / Mumbai
Work Mode: Hybrid

About the Role: We are seeking a skilled AWS Systems Engineer to design, deploy, and maintain scalable, secure, and highly available cloud infrastructure on Amazon Web Services (AWS). The ideal candidate will have hands-on experience with AWS EKS, core AWS services, automation, and DevOps best practices to optimize performance, reliability, and cost-efficiency.

Key Responsibilities:
- Design & Deployment: Architect, implement, and manage AWS cloud infrastructure (EC2, S3, VPC, Lambda, RDS, ECS/EKS, etc.).
- Automation & DevOps: Develop Infrastructure as Code (IaC) using Terraform, CloudFormation, or CDK. Automate deployments with CI/CD pipelines (AWS CodePipeline, GitHub Actions, Jenkins).
- Monitoring & Performance: Configure and maintain monitoring tools (CloudWatch, Prometheus) to ensure system reliability and performance.
- Security & Compliance: Implement security best practices (IAM, KMS, Security Groups, WAF) and ensure compliance with industry standards (SOC 2, HIPAA, GDPR).
- Troubleshooting & Optimization: Identify and resolve performance bottlenecks, optimize costs, and implement high availability (HA) and disaster recovery (DR) strategies.
- Collaboration: Work closely with development, DevOps, and security teams to streamline cloud operations.

Required Skills and Qualifications:
- Experience: 8+ years in cloud engineering with a focus on AWS.
- Technical Skills: Proficiency in AWS services (EC2, S3, Lambda, SQS, RDS, CloudFront, IAM, etc.); strong scripting skills (Python, Bash); Infrastructure as Code (Terraform, CloudFormation); containerization and orchestration (Docker, Kubernetes, ECS/EKS); CI/CD pipelines (Jenkins, GitHub Actions, AWS CodePipeline).
- Certifications (Preferred): AWS Certified Solutions Architect Associate/Professional, AWS Certified DevOps Engineer Professional.
- Soft Skills: Strong problem-solving, communication, and teamwork abilities.
Preferred Qualifications:
- Experience with ArgoCD, Kubecost, and Fluentd.
- Knowledge of serverless architectures and microservices.
- Familiarity with Linux/Windows administration and networking concepts.
- Graduate/post-graduate qualification: B.Tech./M.Tech./BCA/MCA.

Posted 2 months ago

Apply

5.0 - 9.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid

Primary skill: AWS, QuickSight
Secondary skill: AWS Glue, Lambda, Athena, Redshift, Aurora
Experience: 5-9 years
Location: Pune/Mumbai/Chennai/Noida/Bangalore/Coimbatore
Notice period: Immediate joiners

Responsibilities:
- Design, develop, and maintain large-scale data pipelines using AWS services such as Athena, Aurora, Glue, Lambda, and QuickSight.
- Develop complex SQL queries to extract insights from massive datasets stored in Amazon Redshift.
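To give a flavor of the transformation stage of such a pipeline, here is a toy aggregation step in plain Python. The record fields (`region`, `revenue`) are invented for illustration; in a real Glue job this logic would run over DynamicFrames or Spark DataFrames, with the result loaded into Redshift or a QuickSight-backed dataset rather than returned as a dict.

```python
from collections import defaultdict

def aggregate_revenue(records):
    """Toy pipeline step: sum revenue per region, skipping malformed rows.

    Stands in for the filter/aggregate stage of a Glue- or Lambda-based
    pipeline; malformed rows are dropped rather than failing the batch.
    """
    totals = defaultdict(float)
    for rec in records:
        region = rec.get("region")
        revenue = rec.get("revenue")
        if region is None or not isinstance(revenue, (int, float)):
            continue  # drop rows missing a region or with a non-numeric revenue
        totals[region] += revenue
    return dict(totals)
```

The same shape of logic (validate, filter, aggregate, emit) is what the SQL side of the role expresses as `GROUP BY` queries over Redshift tables.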

Posted 2 months ago

Apply

5.0 - 10.0 years

18 - 30 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Job Location: Delhi NCR/Bangalore/Hyderabad/Pune/Mumbai/Chennai
Shift timings: 1:30 PM - 11:30 PM
Work mode: Hybrid
Experience: 5-8 years

We are looking for AWS experts with the following experience:
- 5+ years of experience in AWS and cloud services.
- Experience in serverless application development.
- Python knowledge, specifically for scripting to handle infrastructure and manipulate AWS services. We are looking for candidates with experience in Python scripting for AWS services, not just web application development.

Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions.

Responsibilities:
- Design and implement cloud security solutions using AWS services, ensuring compliance with industry standards.
- Design and implement microservice-based solutions using serverless and containerization services.
- Develop and maintain automation scripts and tools using Python to streamline security processes and enhance operational efficiency.
- Collaborate with DevOps, development, and security teams to integrate security best practices into the software development lifecycle (SDLC).
- Monitor cloud environments for security incidents, respond to alerts, conduct investigations, and implement corrective actions as required.
- Stay up to date with the latest cloud security trends, threats, and best practices, and provide recommendations for continuous improvement.
- Create and maintain documentation related to security policies, procedures, and compliance requirements.
- Provide mentorship and guidance to junior engineers and team members on cloud security and compliance practices.
Key Skills:
- Bachelor's/master's degree in computer science, information technology, or a related field.
- 5+ years of experience in cloud engineering, with a focus on AWS services and cloud security.
- Strong proficiency in Python programming for automation and scripting.
- Hands-on experience with Python automation testing using unit tests and BDD frameworks.
- In-depth knowledge of AWS security services (AWS Lambda, AWS IAM, S3, CloudWatch, SNS, SQS, Step Functions) is a must.
- Experience with microservices and containerization in AWS using Amazon EKS is a plus.
- Experience with Infrastructure as Code (IaC) tools such as AWS CloudFormation.
- Strong understanding of networking, encryption, and security protocols in cloud environments.
- Basic understanding of tools like Jenkins and Artifactory is required.
- Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
- Relevant certifications (e.g., AWS Certified Solutions Architect Associate, AWS Certified Developer Associate) are a plus.
- Experience with AI services in the AWS environment is a plus.
- Excellent communication skills.

What you can expect: We're legendary for taking care of you and your family, and for helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
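The combination of security automation and unit testing this role asks for can be sketched with a small compliance helper plus a `unittest` case. The required-tag policy below is an assumed example, not an AWS default, and a production script would apply it to tags fetched via boto3 rather than hard-coded dicts.

```python
import unittest

REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # hypothetical tag policy

def missing_required_tags(resource_tags):
    """Return the policy tags absent from a resource's tag dict, sorted.

    A small compliance check of the kind a security-automation script
    would run against EC2 instances or S3 buckets.
    """
    return sorted(REQUIRED_TAGS - set(resource_tags))

class TestMissingRequiredTags(unittest.TestCase):
    def test_fully_tagged_resource_passes(self):
        tags = {"Owner": "data-team", "Environment": "prod", "CostCenter": "42"}
        self.assertEqual(missing_required_tags(tags), [])

    def test_untagged_resource_reports_all_policy_tags(self):
        self.assertEqual(
            missing_required_tags({}),
            ["CostCenter", "Environment", "Owner"],
        )
```

Running the suite with `python -m unittest` exercises both the compliant and non-compliant paths; the same structure extends naturally to BDD-style scenarios.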

Posted 2 months ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

We are hiring for "AWS Lambda + Python Programming + Shell Scripting + SQL" for one of our IT Services & Consulting MNC clients at the Bangalore/Mumbai location.
Experience: 7-14 Years
Mode: Permanent

Posted 2 months ago

Apply

5.0 - 10.0 years

1 - 4 Lacs

Pune, Bengaluru

Hybrid

Responsibilities:
- Troubleshoot and resolve complex customer issues.
- Proactively evaluate and address issues related to performance, stability, scalability, and extensibility of the systems.
- Develop and outline architecture and relationships between subsystems, participating in design reviews.
- Ensure that new code, features, or software products meet performance goals and metrics.
- Maintain high availability and a commitment to customer trust at all times.
- Work in an Agile environment, following Scrum methodologies.

Posted 2 months ago

Apply