Your Role
The Data Engineer is an individual contributor on the Azure Data Platform Team. They forge relationships with multiple product and platform teams and actively contribute to the implementation of data solutions that ensure organizational data is well structured, easily accessible, and efficiently stored, enabling the business to better exploit its data. They have expertise in both data engineering and modern cloud platform data services.
Main Responsibilities
- Develop and deploy scalable data solutions on the cloud platform.
- Build and maintain data pipelines to extract, transform, and load (ETL) data from various sources into Azure Data Platform services and other data repositories.
- Collaborate with cross-functional teams to understand data requirements, translate them into data pipelines, and implement technical solutions.
- Understand data models that represent the organization's data requirements and relationships, including conceptual, logical, and physical data models.
- Define and maintain clear documentation for data pipelines to serve as a reference for Data Owners, Data Stewards, and technical support teams; support the Data Architect in maintaining broader data architecture assets as required.
- Apply normalization techniques to eliminate data redundancy, ensure consistency, and improve data integrity.
- Ensure compliance with data governance and data management processes when building data models (e.g., data privacy and security considerations).
- Assess and address data performance issues related to data models and database designs.
- Continuously optimize data pipelines for performance, ensuring models support efficient data retrieval and manipulation.
To Succeed You Will Need
Must-Have Skills:
- 3 to 5 years of relevant experience in Azure data engineering roles.
- Extensive experience delivering data solutions using Azure Data Services.
- Experience with Extract, Transform, and Load (ETL) technologies and techniques.
- Hands-on experience with Python.
- Must be able to collaborate with colleagues with and without technical expertise.
- Must possess excellent written and verbal communication skills.
- A self-starter: able to proactively seek out answers and take responsibility for delivering results.
- Advanced skills in SQL and proficiency in programming languages such as Python & PySpark with clear experience in data warehouse design concepts.
- Hands-on experience with Azure data services such as Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, and Azure SQL Database.
- Demonstrable experience working with data warehouse, data lake, and big data platform technologies (e.g., SAP BW, Snowflake, Azure Data Platform, ETL technologies).
Nice-to-Have Skills
- Logic Apps, Function Apps, Stream Analytics, Purview
- Experience building algorithms and training models for machine learning projects
- Terraform & ARM Templates
- Azure DevOps
- Azure Monitoring and Alerting
In Return, We Offer You
- Plenty of opportunities to grow and develop.
- A culture known for respectful interaction, ethical behavior and integrity.
- Potential to see your ideas realized and to make an impact.
- New challenges and new things to learn every day.
- Access to global job opportunities, as part of the Atlas Copco Group.
- An excellent remuneration package.
- Support for maintaining a healthy work-life balance, including vacation, personal, and sick time.
Uniting curious minds
Behind every innovative solution, there are people working together to transform the future. With careers sparked by initiative and lifelong learning, we unite curious minds, and you could be one of them.