Posted: 2 days ago
On-site | Full Time
Responsibilities:
- Design and build efficient data pipelines using Azure Databricks (PySpark); an illustrative pipeline sketch follows this list.
- Implement business logic for data transformation and enrichment at scale.
- Manage and optimize Delta Lake storage solutions.
- Develop REST APIs using FastAPI to expose processed data (see the FastAPI sketch after this list).
- Deploy APIs on Azure Functions for scalable, serverless data access.
- Develop and manage Airflow DAGs to orchestrate ETL processes (see the DAG sketch after this list).
- Ingest and process data from various internal and external sources on a scheduled basis.
- Handle data storage and access using PostgreSQL and MongoDB.
- Write optimized SQL queries to support downstream applications and analytics.
- Work cross-functionally with other teams to deliver reliable, high-performance data solutions.
- Follow best practices for code quality, version control, and documentation.
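For illustration only, a minimal sketch of the kind of Databricks (PySpark) to Delta Lake pipeline described above. The storage paths, column names, and business rule are hypothetical placeholders, not part of this posting.

```python
# Minimal PySpark pipeline sketch: read raw events, apply a simple
# enrichment rule, and write the result as a partitioned Delta table.
# Paths and columns (raw_path, delta_path, amount, fx_rate, region)
# are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"               # assumed source
delta_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders_enriched/"  # assumed target

orders = spark.read.json(raw_path)

# Example business logic: normalize currency and flag high-value orders.
enriched = (
    orders
    .withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))
    .withColumn("is_high_value", F.col("amount_usd") > 10_000)
)

# Persist as Delta, partitioned by region; overwrite mode is just for the sketch.
(enriched.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("region")
    .save(delta_path))
```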
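Likewise, a minimal FastAPI sketch of the sort of endpoint that could expose processed data. The route, response model, and in-memory lookup are hypothetical stand-ins for a real query against the curated PostgreSQL or Delta layer.

```python
# Minimal FastAPI sketch exposing processed data. The /orders route,
# Order model, and _FAKE_STORE lookup are illustrative placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders API")

class Order(BaseModel):
    order_id: str
    amount_usd: float
    is_high_value: bool

# Stand-in for a real lookup against PostgreSQL or a Delta table.
_FAKE_STORE = {
    "o-123": Order(order_id="o-123", amount_usd=12500.0, is_high_value=True),
}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    order = _FAKE_STORE.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

In practice such an app would typically be wrapped for Azure Functions via its ASGI support; that deployment wiring is environment-specific and not shown here.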
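Finally, a minimal Airflow DAG sketch for the scheduled ETL orchestration mentioned above. The dag_id, schedule, and task callables are illustrative only.

```python
# Minimal Airflow DAG sketch for a daily ETL run. Task bodies are stubs;
# dag_id and schedule are assumptions for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull data from internal and external sources")

def transform(**_):
    print("apply business logic / enrichment")

def load(**_):
    print("write results to Delta Lake / PostgreSQL")

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```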
Requirements:
- 5+ years of hands-on experience as a Data Engineer.
- Strong experience with Azure cloud services.
- Proficient in Azure Databricks, PySpark, and Delta Lake.
- Solid experience with Python and FastAPI for API development.
- Experience with Azure Functions for serverless API deployments.
- Skilled in managing ETL pipelines using Apache Airflow.
- Hands-on experience with PostgreSQL and MongoDB.
- Strong SQL skills and experience handling large datasets.
SPEC INDIA
Ahmedabad, Gujarat, India
Salary: Not disclosed