Posted: 3 weeks ago
On-site | Full Time
· Design and develop ETL pipelines in Azure Data Factory (ADF) for data ingestion and transformation.
· Build robust data solutions using Azure stack components such as Azure Data Lake and SQL Data Warehouse (SQL DW).
· Write SQL, Python, and PySpark code for efficient data processing and transformation.
· Understand and translate business requirements into technical designs.
· Develop mapping documents and transformation rules as per project scope.
· Communicate project status to stakeholders to ensure smooth project execution.
· 7-10 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
· Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, Azure SQL DB.
· Experience in SQL, Python, and PySpark for data transformation and processing.
· Familiarity with DevOps and CI/CD deployments.
· Strong communication skills and attention to detail in high-pressure situations.
· Experience in the insurance or financial industry is preferred.
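As a rough illustration of the transformation work this role involves (not part of the posting; all record fields and rules below are hypothetical), a minimal Python sketch of the kind of cleanse-and-reshape step a mapping document might specify could look like:

```python
from datetime import date

# Hypothetical raw policy records as they might land in a data lake ingest zone.
raw_rows = [
    {"policy_id": " P-001 ", "premium": "1200.50", "start": "2024-01-15"},
    {"policy_id": "P-002", "premium": "980", "start": "2024-03-01"},
]

def transform(rows):
    """Trim identifiers, cast types, and derive a start year -- the kind of
    transformation rule an ADF/PySpark pipeline would apply at scale."""
    out = []
    for r in rows:
        out.append({
            "policy_id": r["policy_id"].strip(),
            "premium": float(r["premium"]),
            "start_year": date.fromisoformat(r["start"]).year,
        })
    return out

clean = transform(raw_rows)
```

In a production pipeline the same logic would typically be expressed as PySpark DataFrame operations (e.g. column casts and derived columns) so it scales across the full dataset in Azure Databricks.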
ValueMomentum
Hyderabad, Telangana, India
Salary: Not disclosed