
3 Dataflows Gen2 Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric. You will be expected to monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery.

Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will need to document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions.

To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads. Experience in optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric is also crucial.
Preferred skills for this role include experience in the Microsoft Fabric ecosystem, familiarity with OneLake, Delta Lake, and Lakehouse principles, expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks, and understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be considered a plus.

If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role as a Senior Data Engineer at Srijan Technologies PVT LTD could be the perfect fit for you. Apply now to be a part of a dynamic team driving innovation in data architecture and analytics.
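The batch ingestion work described above typically follows a watermark pattern: load only rows modified since the last run. A minimal pure-Python sketch of that pattern (the function and field names are illustrative, not Fabric APIs; in Fabric the source and sink would be Dataflows Gen2 or Lakehouse tables rather than in-memory lists):

```python
from datetime import datetime, timezone

def incremental_load(source_rows, last_watermark):
    """Return rows newer than the stored watermark, plus the new watermark.

    This is a hypothetical sketch of incremental batch ingestion; real
    pipelines would persist the watermark between runs.
    """
    new_rows = [r for r in source_rows if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
# Only rows modified after the stored watermark (Jan 2) are re-loaded.
loaded, wm = incremental_load(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
```

The same idea scales up unchanged: a PySpark version would filter on a `modified` column and write the result to a Lakehouse Delta table.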

Posted 12 hours ago


5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards.

Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives. Additionally, documenting pipeline logic, lakehouse architecture, and semantic layers clearly will be essential. Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services.

You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python, and experience in optimizing pipelines for cost-effective data processing in Azure/Fabric.
Preferred skills include experience in the Microsoft Fabric ecosystem, familiarity with OneLake, Delta Lake, and Lakehouse principles, expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks, as well as understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines would be advantageous.
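"Reliable data delivery" in orchestrated pipelines often comes down to simple controls such as bounded retries with exponential backoff for transient failures. A hedged pure-Python sketch of that pattern (the `activity` callable is hypothetical; a real Fabric pipeline would normally use the pipeline activity's built-in retry policy instead):

```python
import time

def run_with_retries(activity, max_attempts=3, base_delay=0.01):
    """Run a pipeline step, retrying transient failures with backoff.

    `activity` stands in for a pipeline step such as a notebook run;
    RuntimeError stands in for a transient failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except RuntimeError:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off, then retry

# Simulate a step that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky)
```

The backoff doubles on each attempt, which keeps retries cheap for short blips while still failing fast after a bounded number of attempts.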

Posted 1 day ago


7.0 - 11.0 years

0 Lacs

Haryana

On-site

As a Data Engineer at Srijan, a Material company, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your primary responsibilities will include optimizing data pipelines, collaborating with cross-functional teams, and ensuring documentation and knowledge sharing. You will work closely with the Data Architecture team to implement scalable and governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Your expertise in Microsoft Fabric will be utilized to build robust pipelines using both batch and real-time processing techniques, integrating with Azure Data Factory for seamless data movement.

Continuous monitoring, enhancement, and optimization of Fabric pipelines, notebooks, and lakehouse artifacts will be essential to ensure performance, reliability, and cost-efficiency. You will collaborate with analysts, BI developers, and data scientists to deliver high-quality datasets and enable self-service analytics via Power BI datasets connected to Fabric Lakehouses. Maintaining up-to-date documentation for all data pipelines, semantic models, and data products, as well as sharing knowledge of Fabric best practices with junior team members, will be an integral part of your role. Your expertise in SQL, data modeling, and cloud architecture design will be crucial in designing modern data platforms using Microsoft Fabric, OneLake, and Synapse.

To excel in this role, you should have at least 7 years of experience in the Azure ecosystem, with relevant experience in Microsoft Fabric, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory, advanced data engineering skills, and strong collaboration and communication abilities are also required. Additionally, knowledge of Azure Databricks, Power BI integration, DevOps practices, and familiarity with OneLake, Delta Lake, and Lakehouse architecture will be advantageous.
Join our awesome tribe at Srijan and leverage your expertise in Microsoft Fabric to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.
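Governed Lakehouse modeling of the kind described in these listings is commonly organized into bronze/silver/gold ("medallion") layers: raw ingestion, cleansing, then aggregation. A toy pure-Python sketch of that layering, with lists of dicts standing in for what would be Lakehouse Delta tables in Fabric:

```python
# Bronze: raw ingested rows, including a malformed record.
bronze = [
    {"order_id": "1", "amount": "10.5", "region": "north"},
    {"order_id": "2", "amount": "bad", "region": "north"},   # fails type checks
    {"order_id": "3", "amount": "4.0", "region": "south"},
]

def to_silver(rows):
    """Cleansing step: cast types and skip rows that fail validation."""
    silver = []
    for r in rows:
        try:
            silver.append({"order_id": int(r["order_id"]),
                           "amount": float(r["amount"]),
                           "region": r["region"]})
        except ValueError:
            continue  # in a real pipeline, quarantine for inspection
    return silver

def to_gold(rows):
    """Aggregation step: reporting-friendly totals per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))
```

Each layer only ever reads from the one below it, which is what makes the model governable: lineage is linear, and BI tools such as Power BI connect to the gold layer alone.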

Posted 1 week ago
