Posted: 1 month ago
Work from Office
Full Time
This is an exciting opportunity for an experienced developer of large-scale data solutions. You will join a team delivering a transformative cloud-hosted data platform for a key Version 1 customer. The ideal candidate will have a proven track record as a senior, self-starting data engineer implementing data ingestion and transformation pipelines for large-scale organisations. We are seeking someone with deep technical skills in a variety of technologies, specifically Spark performance tuning/optimisation and Databricks, to play an important role in developing and delivering early proofs of concept and production implementations. You will ideally have experience building solutions using a variety of open-source tools and Microsoft Azure services, and a proven track record of delivering high-quality work to tight deadlines.

Your main responsibilities will be:
- Designing and implementing highly performant, metadata-driven data ingestion and transformation pipelines from multiple sources using Databricks and Spark
- Streaming and batch processes in Databricks
- Spark performance tuning/optimisation
- Providing technical guidance for complex geospatial problems and Spark DataFrames
- Developing scalable and re-usable frameworks for ingestion and transformation of large data sets
- Data quality system and process design and implementation
- Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times
- Working with other members of the project team to support delivery of additional project components (reporting tools, API interfaces, search)
- Evaluating the performance and applicability of multiple tools against customer requirements
- Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints

Required skills and experience:
- Direct experience of building data pipelines using Azure Data Factory and Databricks
- Building data integration with Python
- Databricks Engineer certification
- Microsoft Azure Data Engineer certification
- Hands-on experience designing and delivering solutions using the Azure Data Analytics platform
- Experience building data warehouse solutions using ETL/ELT tools such as Informatica and Talend
- Comprehensive understanding of data management best practices, including demonstrated experience with data profiling, sourcing, and cleansing routines utilising typical data quality functions involving standardisation, transformation, rationalisation, linking, and matching

Nice to have:
- Experience working in a DevOps environment with tools such as Microsoft Visual Studio Team Services, Chef, Puppet or Terraform
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j)
- Experience with Azure Event Hub, IoT Hub, Apache Kafka, or NiFi for use with streaming data / event-based data
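For illustration only, here is a minimal sketch of the metadata-driven ingestion pattern referred to in the responsibilities above, written in PySpark as it might run in a Databricks notebook. The paths, formats, database, and table names are hypothetical placeholders and are not details of this role.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingestion").getOrCreate()

# Hypothetical metadata: one entry per source feed to ingest.
# In practice this would typically come from a config table or file.
ingestion_config = [
    {"source_path": "/mnt/raw/orders/*.csv", "format": "csv", "target_table": "bronze.orders"},
    {"source_path": "/mnt/raw/customers/*.json", "format": "json", "target_table": "bronze.customers"},
]

for entry in ingestion_config:
    # Read each source using the format recorded in the metadata.
    df = (
        spark.read.format(entry["format"])
        .option("header", "true")
        .load(entry["source_path"])
    )
    # Append to a Delta table (assumes the target database already exists)
    # so downstream transformation jobs can pick the data up.
    df.write.format("delta").mode("append").saveAsTable(entry["target_table"])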
Version 1