
Python and PySpark Developer - 8 to 12 Years - Hyderabad - ASAP Joiner

8 years


Posted: 3 weeks ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Greetings! One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired the Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights!

🔍 We Are Hiring: Python, PySpark and SQL Developer (8-12 years)

Relevant Experience: 8-12 years

JD:
• Python, PySpark and SQL
• 8+ years of experience in Spark, Scala, and PySpark for big data processing
• Proficiency in Python programming for data manipulation and analysis
• Experience with Python libraries such as Pandas and NumPy
• Knowledge of Spark architecture and components (RDDs, DataFrames, Spark SQL)
• Strong knowledge of SQL for querying databases
• Experience with database systems such as Lakehouse, PostgreSQL, Teradata, and SQL Server
• Ability to write complex SQL queries for data extraction and transformation
• Strong analytical skills to interpret data and provide insights
• Ability to troubleshoot and resolve data-related issues
• Strong problem-solving skills to address data-related challenges
• Effective communication skills to collaborate with cross-functional teams

Role/Responsibilities:
• Work on development activities along with lead activities
• Coordinate with the Product Manager (PdM) and Development Architect (Dev Architect) and handle deliverables independently
• Collaborate with other teams to understand data requirements and deliver solutions
• Design, develop, and maintain scalable data pipelines using Python and PySpark
• Utilize PySpark and Spark scripting for data processing and analysis
• Implement ETL (Extract, Transform, Load) processes to ensure data is accurately processed and stored
• Develop and maintain Power BI reports and dashboards
• Optimize data pipelines for performance and reliability
• Integrate data from various sources into centralized data repositories
• Ensure data quality and consistency across different data sets
• Analyze large data sets to identify trends, patterns, and insights
• Optimize PySpark applications for better performance and scalability
• Continuously improve data processing workflows and infrastructure

Interested candidates, please share your updated resume along with the following details:
Total Experience:
Relevant Experience in Python, PySpark and SQL:
Current Location:
Current CTC:
Expected CTC:
Notice Period:

🔒 We assure you that your profile will be handled with strict confidentiality.
📩 Apply now and be part of this incredible journey.

Thanks,
Syed Mohammad
syed.m@anlage.co.in
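The "complex SQL queries for data extraction and transformation" skill called out above can be illustrated with a minimal, self-contained sketch. This uses Python's standard-library `sqlite3` rather than the role's actual stack (Spark/Teradata/PostgreSQL), and the `orders` table, its columns, and the revenue threshold are all hypothetical, chosen only to show a grouped, filtered aggregation:

```python
import sqlite3

# Hypothetical in-memory table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "APAC", 120.0), (2, "EMEA", 80.0), (3, "APAC", 200.0)],
)

# Extract + transform: total revenue per region, keeping only
# regions above an arbitrary threshold, sorted descending.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY revenue DESC
    """
).fetchall()
print(rows)  # → [('APAC', 320.0)]
```

The same aggregate-filter-sort pattern carries over directly to Spark SQL, where the query would run against a DataFrame registered as a temporary view instead of a SQLite table.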
