Go Digital Technology Consulting
Posted: 2 weeks ago
Work from Office | Full Time
Location: Mumbai
Experience: 0-6 months
Technologies / Skills: Advanced SQL; Python and associated libraries such as Pandas and NumPy; PySpark; shell scripting; data modelling; big data (Hadoop, Hive); ETL pipelines.

Responsibilities:
- Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modelling decisions, and develop data engineering strategy.
- Work with business owners to define key business requirements and convert them into user stories with the required technical specifications.
- Communicate the results and business impact of insight initiatives to key stakeholders to collaboratively solve business problems.
- Work closely with the overall Enterprise Data & Analytics Architect and Engineering practice leads to ensure adherence to best practices and design principles.
- Ensure quality, security, and compliance requirements are met for the supported area.
- Design and create fault-tolerant data pipelines running on a cluster.
- Demonstrate excellent communication skills, with the ability to influence client business and IT teams.
- Design data engineering solutions end to end, coming up with scalable and modular solutions.

Required Qualification:
- 0-6 months of hands-on experience designing and developing data pipelines for data ingestion or transformation using Python (PySpark)/Spark SQL in the AWS cloud.
- Experience in the design and development of data pipelines and in processing data at scale.
- Advanced experience writing and optimizing efficient SQL queries with Python and Hive, handling large data sets in big-data environments.
- Experience debugging, tuning, and optimizing PySpark data pipelines.
- Hands-on implementation experience and good knowledge of PySpark data frames, joins, caching, memory management, partitioning, parallelism, etc.
- Understanding of the Spark UI, event timelines, the DAG, and Spark config parameters, in order to tune long-running data pipelines.
- Experience working in Agile implementations.
- Experience building data pipelines in both streaming and batch mode (see the sketches below).
- Experience with Git and CI/CD pipelines to deploy cloud applications.
- Good knowledge of designing Hive tables with partitioning for performance.

Desired Qualification:
- Experience in data modelling.
- Hands-on experience creating workflows in a scheduling tool such as Autosys or CA Workload Automation.
- Proficiency in using SDKs to interact with native AWS services.
- Strong understanding of ETL, ELT, and data modelling concepts.
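For illustration only, here is a minimal PySpark batch-pipeline sketch of the kind of work described above: dataframes, a join against a cached dimension, and a partitioned Hive write. All paths, table names, and column names are hypothetical, and the config value is a placeholder, not a recommendation.

```python
# Minimal PySpark batch pipeline sketch. All paths, tables, and columns
# below are hypothetical examples, not part of the actual role.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_ingestion_example")
    # Example of the Spark config tuning the posting mentions;
    # real values depend on cluster size and data volume.
    .config("spark.sql.shuffle.partitions", "200")
    .enableHiveSupport()
    .getOrCreate()
)

# Batch ingestion: read raw files and a dimension table.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path
customers = spark.read.table("dim.customers")                   # hypothetical Hive table

# Cache the smaller dimension, since it is reused across joins.
customers.cache()

enriched = (
    orders
    .join(customers, on="customer_id", how="left")
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Write to a Hive table partitioned by date, for query performance.
(
    enriched
    .repartition("order_date")  # align data layout with the partition column
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_enriched")  # hypothetical target table
)
```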
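And a minimal Structured Streaming counterpart for the streaming mode the posting mentions, reading JSON events from Kafka and appending them to object storage. The broker address, topic, schema, and checkpoint location are all hypothetical.

```python
# Minimal Structured Streaming sketch (streaming-mode counterpart).
# Requires the spark-sql-kafka connector package at runtime.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders_stream_example").getOrCreate()

# Hypothetical event schema for the incoming JSON messages.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/stream/orders/")      # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/chk/")  # hypothetical
    .outputMode("append")
    .start()
)
# query.awaitTermination()  # block until the stream is stopped
```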