
2 AWS S3 Jobs

JobPe aggregates results for easy access, but you apply directly on the original job portal.

7.0 - 10.0 years

7 - 10 Lacs

Gurgaon, Haryana, India

On-site

Responsibilities:
- Design, develop, and maintain scalable data pipelines and data assets using modern data engineering techniques
- Optimise code using Spark SQL and PySpark
- Apply AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, and CloudFormation
- Refactor legacy codebases to improve readability, maintainability, and performance
- Write tests before code to ensure functionality and catch bugs early
- Debug complex code and resolve performance, concurrency, or logic issues

Role Requirements and Qualifications:
- Minimum of 7 years of strong hands-on programming experience with PySpark, Python, and Boto3
- Experience using Python frameworks and libraries in line with Python best practices
- Understanding of code versioning tools (Git) and artifact repositories (e.g., JFrog Artifactory)
- Strong commitment to TDD (Test-Driven Development), unit testing, and participating in code reviews
- Excellent problem-solving skills, analytical thinking, and the ability to work independently

Posted 4 days ago

Apply
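
As an aside, here is a minimal sketch of the PySpark-plus-TDD workflow the listing above calls for, assuming PySpark 3.x and pytest; the function name, columns, and sample data are hypothetical, not from the posting:

```python
# Hypothetical example: a small PySpark transformation with its test written first (TDD).
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def daily_order_totals(orders: DataFrame) -> DataFrame:
    """Sum positive order amounts per calendar day (hypothetical transformation)."""
    return (
        orders
        .where(F.col("amount") > 0)                      # drop refunds / invalid rows
        .groupBy(F.to_date("created_at").alias("day"))   # one row per day
        .agg(F.sum("amount").alias("total_amount"))
    )


def test_daily_order_totals():
    """pytest-style unit test, written before the implementation in TDD fashion."""
    spark = SparkSession.builder.master("local[1]").appName("tdd-sketch").getOrCreate()
    orders = spark.createDataFrame(
        [("2024-01-01", 100.0), ("2024-01-01", -5.0), ("2024-01-02", 40.0)],
        ["created_at", "amount"],
    )
    rows = {r["day"].isoformat(): r["total_amount"]
            for r in daily_order_totals(orders).collect()}
    assert rows == {"2024-01-01": 100.0, "2024-01-02": 40.0}
```

Writing and running the test first (it fails until daily_order_totals exists and is correct) is the tests-before-code discipline the listing asks for.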

6.0 - 12.0 years

2 - 11 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Responsibilities:
- Develop and implement efficient data pipelines using Apache Spark (PySpark preferred) to process and analyze large-scale data.
- Design, build, and optimize complex SQL queries to extract, transform, and load (ETL) data from multiple sources.
- Orchestrate data workflows using Apache Airflow, ensuring smooth execution and error-free pipelines.
- Design, implement, and maintain scalable and cost-effective data storage and processing solutions on AWS using S3, Glue, EMR, and Athena.
- Leverage AWS Lambda and Step Functions for serverless compute and task orchestration in data pipelines.
- Work with AWS databases such as RDS and DynamoDB to ensure efficient data storage and retrieval.
- Monitor data processing and pipeline health using AWS CloudWatch and ensure smooth operation in production environments.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Perform performance tuning, optimize distributed data processing tasks, and handle scalability issues.
- Provide troubleshooting and support for data pipeline failures and ensure high availability and reliability.
- Contribute to the setup and maintenance of CI/CD pipelines for automated deployment and testing of data workflows.

Required Skills & Experience:
- Experience: Minimum of 6 years of hands-on experience in data engineering or big data development roles, with a focus on designing and building data pipelines and processing systems.
- Technical Skills:
  - Strong programming skills in Python, with hands-on experience in Apache Spark (PySpark preferred).
  - Proficient in writing and optimizing complex SQL queries for data extraction, transformation, and loading.
  - Hands-on experience with Apache Airflow for orchestration of data workflows and pipeline management.
  - In-depth understanding and practical experience with AWS services:
    - Data Storage & Processing: S3, Glue, EMR, Athena
    - Compute & Execution: Lambda, Step Functions
    - Databases: RDS, DynamoDB
    - Monitoring: CloudWatch
  - Experience with distributed data processing, parallel computing, and performance tuning techniques.
  - Strong analytical and problem-solving skills to troubleshoot and optimize data workflows and pipelines.
  - Familiarity with CI/CD pipelines and DevOps practices for continuous integration and automated deployments is a plus.

Preferred Qualifications:
- Familiarity with other cloud platforms (Azure, Google Cloud) and their data engineering services.
- Experience handling unstructured and semi-structured data and working with data lakes.
- Knowledge of containerization technologies such as Docker, or orchestration systems such as Kubernetes.
- Experience with NoSQL databases or data warehouses such as Redshift or BigQuery is a plus.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: Minimum of 6 years in a data engineering role with strong expertise in AWS and big data processing frameworks.

Posted 3 weeks ago

Apply
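
Similarly, here is a minimal sketch of the Airflow-on-AWS orchestration described in the listing above, assuming Airflow 2.x and boto3; the DAG id, bucket, and prefix are hypothetical placeholders:

```python
# Hypothetical example: a daily Airflow DAG that gates downstream processing
# on raw files being present in S3 (checked via boto3).
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def check_raw_files(**_context):
    """Fail the task (triggering retries) if the raw S3 prefix is empty."""
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="example-raw-bucket", Prefix="events/")  # hypothetical names
    if resp.get("KeyCount", 0) == 0:
        raise ValueError("no raw files found under events/; retrying later")


with DAG(
    dag_id="example_daily_etl",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(task_id="check_raw_files", python_callable=check_raw_files)
    # Downstream tasks (e.g., a GlueJobOperator run or an Athena query from the
    # Amazon provider package) would chain off this check with >> dependencies.
```

CloudWatch alarms on the DAG's task-failure metrics would cover the monitoring requirement, but that wiring is outside this sketch.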