
3 PySpark Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

12 - 14 years

12 - 14 Lacs

Hyderabad, Bengaluru

Hybrid


Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of overall experience and 8+ years of relevant experience in Databricks, DLT, PySpark, and data modelling concepts, including dimensional data modelling (star schema, snowflake schema)
- Proficiency in programming languages such as Python, PySpark, Scala, and SQL
- Proficiency in DLT
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams
- Good to have: experience with containerization technologies such as Docker and Kubernetes
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines
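As an aside, the dimensional data modelling named in this posting (a fact table joined to dimension tables by surrogate key) can be sketched in a few lines. This is an illustrative example only, not part of the posting; all table and column names are hypothetical, and in PySpark the same shape would come from `fact.join(dim, on="key")`:

```python
# Star-schema join in plain Python: a fact table of sales references
# dimension tables (product, store) by surrogate key. Names are hypothetical.
dim_product = {1: {"name": "Widget", "category": "Tools"},
               2: {"name": "Gadget", "category": "Electronics"}}
dim_store = {10: {"city": "Hyderabad"}, 20: {"city": "Bengaluru"}}

fact_sales = [
    {"product_key": 1, "store_key": 10, "amount": 250.0},
    {"product_key": 2, "store_key": 20, "amount": 400.0},
    {"product_key": 1, "store_key": 20, "amount": 150.0},
]

def denormalize(facts, products, stores):
    # Resolve each fact row against its dimensions (a star-schema join).
    return [{**row,
             "product_name": products[row["product_key"]]["name"],
             "city": stores[row["store_key"]]["city"]}
            for row in facts]

rows = denormalize(fact_sales, dim_product, dim_store)

# A typical downstream aggregation over the denormalized rows.
total_by_city = {}
for r in rows:
    total_by_city[r["city"]] = total_by_city.get(r["city"], 0.0) + r["amount"]
```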

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 10 Lacs

Chennai, Bengaluru

Work from Office


Job Purpose:
We are seeking an experienced Azure Data Engineer with 4 to 13 years of proven expertise in Data Lakes, Lakehouse, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and Data Warehouses, with working experience in ETL, data catalogs, metadata, DWH, MPP systems, OLTP, and OLAP systems, and strong communication skills.

Key Responsibilities:
- Create data lakes from scratch, configure existing systems, and provide user support
- Understand different datasets and storage elements to bring in data
- Good knowledge and work experience in ADF and Synapse data pipelines
- Good knowledge of Python, PySpark, and Spark SQL
- Implement data security at the DB and data-movement layers
- Experience with CI/CD data pipelines
- Work with internal teams to design, develop, and maintain software

Qualifications & Key Skills Required:
- Expertise in Data Lakes, Lakehouse, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and Data Warehouses
- Hands-on experience in ETL and ELT, handling large volumes of data and files
- Working knowledge of JSON, Parquet, CSV, Excel, structured, unstructured, and other data sets
- Exposure to a source control management system such as TFS, Git, or SVN
- Understanding of non-functional requirements
- Proficiency in data catalogs, metadata, DWH, MPP systems, OLTP, and OLAP systems
- Experience with Azure Data Fabric, MS Purview, and MDM tools is an added advantage
- A good team player and excellent communicator
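As an aside, the JSON/CSV handling this posting asks for usually amounts to flattening nested records into tabular rows before loading. A minimal stdlib-only sketch (the field names and sample payload are hypothetical, not from the posting):

```python
import csv
import io
import json

# Hypothetical records as they might arrive in a raw JSON landing zone.
raw = '[{"id": 1, "meta": {"src": "oltp"}}, {"id": 2, "meta": {"src": "olap"}}]'

# Flatten nested JSON into flat rows -- the JSON-to-structured step
# an ETL pipeline performs before writing to a warehouse table.
rows = [{"id": r["id"], "src": r["meta"]["src"]} for r in json.loads(raw)]

# Emit the rows as CSV, the simplest structured target format.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "src"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```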

Posted 3 weeks ago

Apply

10 - 18 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid


Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
- Experience in Synapse with PySpark
- Knowledge of Big Data pipelines / data engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on experience in visualization tools such as Power BI
- Implement end-to-end data pipelines using Cosmos DB and Azure Data Factory
- Good analytical thinking and problem solving
- Good communication and coordination skills; able to work as an individual contributor
- Requirement analysis
- Create, maintain, and enhance Big Data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO, Git), CI/CD
- Marketing campaign experience; data platform product telemetry
- Data validation and data quality checks for new streams
- Monitoring of data pipelines created in Azure Data Factory
- Updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
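As an aside, the data-validation and quality-check duties this posting mentions typically reduce to per-record rule checks applied to a stream before loading. A minimal illustrative sketch (the rules and field names are hypothetical, not from the posting):

```python
# Per-record validation for an incoming stream: return a list of
# human-readable problems; an empty list means the record passes.
def validate(record, required=("event_id", "ts", "amount")):
    problems = [f"missing field: {f}" for f in required if f not in record]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        problems.append("amount is not numeric")
    return problems

stream = [
    {"event_id": "a1", "ts": 1700000000, "amount": 9.5},
    {"event_id": "a2", "ts": 1700000001},                 # missing amount
    {"event_id": "a3", "ts": 1700000002, "amount": "x"},  # non-numeric amount
]

# Collect failures keyed by event id; clean records are dropped from the map.
bad = {r["event_id"]: validate(r) for r in stream if validate(r)}
```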

Posted 1 month ago

Apply