Before applying for a job, select your preferred language from the options available at the top right of this page.
Explore your next opportunity at an organization ranked among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, self-drive, or leadership to guide teams, there are roles suited to your aspirations and your skills, today and tomorrow.
Job Description
Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She performs tasks within planned durations and established deadlines. This position collaborates with teams to ensure effective communication and support the achievement of objectives. He/She provides knowledge, development, maintenance, and support for applications.
Responsibilities
- Generates application documentation.
- Contributes to systems analysis and design.
- Designs and develops moderately complex applications.
- Contributes to integration builds.
- Contributes to maintenance and support.
- Monitors emerging technologies and products.
Technical Skills
- Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics).
- Data Processing: Databricks (PySpark, Spark SQL), Apache Spark.
- Programming Languages: Python, SQL.
- Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow.
- Other: Git, CI/CD.
Professional Experience
- Design and implement a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights.
- Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP).
- Apply strong SQL proficiency to develop stored procedures for data integrity, ensuring data accuracy and consistency across all layers.
- Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability.
- Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications.
- Implement Azure functions to trigger and manage data processing workflows.
- Design and implement data pipelines to integrate various data sources and manage Databricks workflows for efficient data processing.
- Conduct performance tuning and optimization of data processing workflows.
- Provide technical support and troubleshooting for data processing issues.
- Migrate legacy data infrastructure to Azure Databricks, improving scalability and reducing costs.
- Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis.
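The incremental-processing frameworks mentioned above typically rely on a watermark: each run loads only records newer than the last processed timestamp. A minimal sketch of that pattern in plain Python follows (no Spark dependency; the function and field names are illustrative, not from any actual UPS codebase):

```python
from datetime import datetime

def incremental_load(source_rows, last_watermark):
    """Return only rows newer than the stored watermark, plus the updated watermark.

    Mirrors the watermark-based incremental pattern commonly used with
    Azure Data Factory triggering Databricks jobs. `source_rows` is a list of
    dicts with an `updated_at` timestamp (illustrative schema).
    """
    # Keep only rows modified after the previous run's high-water mark.
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    # Advance the watermark to the newest row seen; keep the old one if empty.
    new_watermark = max((r["updated_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 1, 5)},
]
batch, wm = incremental_load(rows, datetime(2024, 1, 2))
# batch contains only row 2; wm advances to 2024-01-05
```

In a Databricks deployment, the filter would run as a Spark predicate pushdown and the watermark would be persisted (e.g., in a Delta table) between pipeline runs.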
- Effective oral and written management communication skills.
Qualifications
- Minimum 8 years of relevant experience
- Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field
Contract Type
Permanent (CDI)
At UPS, equal opportunities, fair treatment, and an inclusive work environment are key values to which we are committed.