178 Azure Synapse Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

20 - 30 Lacs

Chennai

Remote

Source: Naukri

Job Summary: We are seeking a highly skilled Azure Solution Architect to design, implement, and oversee cloud-based solutions on Microsoft Azure. The ideal candidate will have a deep understanding of cloud architecture, a strong technical background, and the ability to align Azure capabilities with business needs. You will lead the architecture and design of scalable, secure, and resilient Azure solutions across multiple projects.

Role & responsibilities:
• Design end-to-end data architectures on Azure using Microsoft Fabric, Data Lake (ADLS Gen2), Azure SQL/Synapse, and Power BI.
• Lead the implementation of data integration and orchestration pipelines using Azure Data Factory and Fabric Data Pipelines.
• Architect Lakehouse/Data Warehouse solutions for both batch and real-time processing, ensuring performance, scalability, and cost optimization.
• Establish data governance, lineage, and cataloging frameworks using Microsoft Purview and other observability tools.
• Enable data quality, classification, and privacy controls aligned with compliance and regulatory standards.
• Drive adoption of event-driven data ingestion patterns using Event Hubs, Event Grid, or Stream Analytics (a minimal ingestion sketch follows this listing).
• Provide architectural oversight on reporting and visualization solutions using Power BI integrated with Fabric datasets and models.
• Define architecture standards, data models, and reusable components to accelerate project delivery.
• Collaborate with data stewards, business stakeholders, and engineering teams to define functional and non-functional requirements.
• Support CI/CD, infrastructure as code, and DevOps for data pipelines using Azure DevOps or GitHub Actions.
• Lead proofs of concept (PoCs) and performance evaluations for emerging Azure data services and tools.
• Monitor system performance, data flow, and health using Azure Monitor and Fabric observability capabilities.

Required Qualifications:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• 5+ years of experience as a data architect or solution architect in cloud data environments.
• 3+ years of hands-on experience designing and implementing data solutions on Microsoft Azure.
• Strong hands-on expertise with: Azure Data Factory; Microsoft Fabric (Data Engineering, Data Warehouse, Real-Time Analytics, Power BI); Azure Data Lake (ADLS Gen2), Azure SQL, and Synapse Analytics; Power BI for enterprise reporting and data modeling.
• Experience with data governance and cataloging tools, ideally Microsoft Purview.
• Proficiency in data modeling techniques (dimensional, normalized, or data vault).
• Strong understanding of security, RBAC, data encryption, Key Vault, and privacy requirements in Azure.

Preferred Qualifications:
• Microsoft Certified: Azure Solutions Architect Expert (AZ-305) or Azure Enterprise Data Analyst Associate (DP-500).
• Hands-on experience with end-to-end Microsoft Fabric implementation.
• Familiarity with medallion architecture, Delta Lake, and modern lakehouse principles.
• Experience in Agile/Scrum environments and stakeholder engagement across business and IT.
• Strong communication skills, with the ability to explain complex concepts to both technical and non-technical audiences.
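To make the event-driven ingestion bullet concrete, here is a minimal PySpark sketch, assuming a Databricks or Fabric runtime where Delta is available, that reads an Event Hubs stream through its Kafka-compatible endpoint and lands raw messages in a bronze Delta path. The namespace, topic, container names, and connection string are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhubs-bronze").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    # Event Hubs exposes a Kafka-compatible endpoint on port 9093.
    .option("kafka.bootstrap.servers", "myeventhubs.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    # The JAAS config carries the Event Hubs connection string (redacted here).
    # On Databricks the login module class is prefixed with "kafkashaded.".
    .option(
        "kafka.sasl.jaas.config",
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="$ConnectionString" password="<event-hubs-connection-string>";',
    )
    .load()
)

# Bronze layer: raw payload plus ingestion metadata, no transformation yet.
bronze = raw.selectExpr("CAST(value AS STRING) AS body", "timestamp AS ingested_at")

(
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "abfss://bronze@datalake.dfs.core.windows.net/_chk/telemetry")
    .start("abfss://bronze@datalake.dfs.core.windows.net/telemetry")
)
```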

Posted 7 hours ago

6.0 - 11.0 years

11 - 21 Lacs

Udaipur, Jaipur, Bengaluru

Work from Office

Source: Naukri

Data Architect

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Data Architect
Experience: 6-10 Yrs
Location: Udaipur, Jaipur, Bangalore
Domain: Telecom

Job Description: We are seeking an experienced Telecom Data Architect to join our team. In this role, you will be responsible for designing comprehensive data architecture and technical solutions specifically for telecommunications industry challenges, leveraging TM Forum frameworks and modern data platforms. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.

Key Responsibilities:
• Design and articulate enterprise-scale telecom data architectures incorporating TM Forum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map)
• Develop comprehensive data models aligned with TM Forum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management
• Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance
• Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics
• Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives
• Design and deliver proofs of concept (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges
• Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments
• Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs
• Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.)
• Stay current with the latest advancements in data technologies, including cloud services, data processing frameworks, and AI/ML capabilities
• Contribute to the development of best practices, reference architectures, and reusable solution components to accelerate proposal development

Required Skills:
• 6-10 years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry
• Deep knowledge of TM Forum frameworks, including SID, eTOM, and TAM, and their practical implementation in telecom data architectures
• Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives
• Hands-on experience building data models and platforms aligned with TM Forum standards and telecommunications business processes
• Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains
• Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts
• Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming
• Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements
• Knowledge of system monitoring and observability tools for telecommunications data infrastructure
• Experience implementing automated testing frameworks for telecom data platforms and pipelines
• Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications
• Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases
• Proficiency in programming languages commonly used in data processing (Python, Scala, SQL) with telecom domain applications
• Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws)
• Excellent communication and presentation skills, with the ability to explain complex technical concepts to telecom stakeholders
• Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges
• TM Forum certifications or telecommunications industry certifications are good to have
• Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus
• Willingness to travel as required

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Posted 8 hours ago

0.0 - 5.0 years

0 Lacs

Pune

Remote

Source: Naukri

The candidate must be proficient in Python and its common libraries and frameworks, and good with data modeling, PySpark, MySQL concepts, Power BI, and AWS/Azure concepts. Experience in optimizing large transactional databases, data visualization tools, Databricks, and FastAPI is expected.

Posted 1 day ago

4.0 - 9.0 years

10 - 20 Lacs

Kolkata, Pune, Bengaluru

Work from Office

Source: Naukri

About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: Azure Data Engineer
Qualification: Any Graduate or above
Relevant Experience: 4 to 10 yrs
Must-Have Skills: Azure, ADB (Azure Databricks), PySpark

Roles and Responsibilities:
• Strong experience in the implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
• Strong hands-on expertise with SQL, Python, Apache Spark, and Delta Lake.
• Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
• Demonstrable experience using Git and building CI/CD pipelines for code management.
• Develop and maintain technical documentation for the platform.
• Ensure the platform is developed with software engineering, data analytics, and data security practices in mind.
• Develop and optimize data processing and data storage systems, ensuring high performance, reliability, and security.
• Experience working in Agile methodology and well-versed in using ADO Boards for sprint deliveries.
• Excellent communication skills; able to communicate technical and business concepts clearly, both verbally and in writing.
• Ability to work in a team environment and collaborate effectively at all levels by sharing ideas and knowledge.

Location: Kolkata, Pune, Mumbai, Bangalore, BBSR
Notice Period: Immediate / 90 days
Shift Timing: General shift
Mode of Interview: Virtual
Mode of Work: WFO

Thanks & Regards,
Bhavana B
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Direct Number: 8067432454
bhavana.b@blackwhite.in | www.blackwhite.in

Posted 2 days ago

12.0 - 16.0 years

1 - 1 Lacs

Hyderabad

Remote

Source: Naukri

We're Hiring: Azure Data Factory (ADF) Developer, Hyderabad
Location: Onsite at Canopy One office, Hyderabad / Remote
Type: Full-time / Part-time / Contract | Offshore role | Must be available to work in the Eastern Time Zone (EST)

We're looking for an experienced ADF developer to join our offshore team supporting a major client. This role focuses on building robust data pipelines using Azure Data Factory (ADF) and working closely with client stakeholders on transformation logic and data movement.

Key Responsibilities:
• Design, build, and manage ADF data pipelines
• Implement transformations and aggregations based on the mappings provided (an equivalent PySpark sketch follows this listing)
• Work with data from the bronze (staging) area, pre-loaded via Boomi
• Collaborate with client-side data managers (based in EST) to deliver clean, reliable datasets

Requirements:
• Proven hands-on experience with Azure Data Factory
• Strong understanding of ETL workflows and data transformation
• Familiarity with data staging/bronze-layer concepts
• Willingness to work Eastern Time Zone (EST) hours

Preferred Qualifications:
• Knowledge of Kimball data warehousing (a huge advantage!)
• Experience working in an offshore coordination model
• Exposure to Boomi is a plus
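ADF mapping data flows are configured visually rather than in code, but the transformation logic they express is equivalent to a batch aggregation like the PySpark sketch below, shown here only to make the "mappings over a bronze area" idea concrete. All paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-aggregation").getOrCreate()

# Bronze/staging data pre-loaded (e.g., by Boomi); the path is a placeholder.
orders = spark.read.parquet("abfss://bronze@datalake.dfs.core.windows.net/orders")

# Example mapping: derive a date column, then aggregate per customer and day.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write the curated result for downstream consumption.
daily.write.mode("overwrite").parquet(
    "abfss://silver@datalake.dfs.core.windows.net/daily_orders"
)
```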

Posted 2 days ago

8.0 - 10.0 years

13 - 16 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Source: Naukri

Role: Azure Cloud & DevOps Engineer
Location: Remote / Pan India (Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad)
Experience: 8+ years

iSource Services is hiring for one of their clients for the position of Azure Cloud & DevOps Engineer.

About the role: We are looking for an Azure Cloud and DevOps Engineer to design, configure, and manage Azure services, including Azure Synapse, security, DNS, and App Gateway. You will develop Java-based applications, automate CI/CD pipelines using GitHub Actions or Azure DevOps, and manage infrastructure with Terraform. Additionally, you will handle identity management using Microsoft Entra ID and integrate with Office 365.

Job Responsibilities:
• Collaborate with customers to create scalable and secure Azure solutions.
• Develop and deploy Java applications on Azure with DevOps integration.
• Automate infrastructure provisioning using Terraform and manage CI/CD pipelines.
• Ensure system security and compliance in Azure environments.
• Provide expert guidance on Azure services, identity management, and DevOps best practices.
• Design, configure, and manage Azure services, including Azure Synapse, security, DNS, databases, App Gateway, Front Door, Traffic Manager, and Azure Automation.

Core Skills:
• Expertise in Azure services (Synapse, DNS, App Gateway, Traffic Manager, etc.).
• Experience with Java-based application deployment and CI/CD pipelines.
• Proficiency in Microsoft Entra ID, Office 365 integration, and Terraform.
• Strong knowledge of cloud security and DevOps best practices.

Posted 3 days ago

5.0 - 10.0 years

9 - 19 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Key Responsibilities:
• Work on client projects to deliver AWS, PySpark, and Databricks-based data engineering and analytics solutions.
• Build and operate very large data warehouses and data lakes.
• ETL optimization, design, coding, and tuning of big data processes using Apache Spark.
• Build data pipelines and applications to stream and process datasets at low latencies.
• Handle data efficiently: track data lineage, ensure data quality, and improve the discoverability of data.

Technical Experience:
• Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms, using PySpark, Databricks SQL, and data pipelines built on Delta Lake (a tuning sketch follows this listing).
• Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.

Email at: maya@mounttalent.com
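For a concrete picture of the Delta Lake pipeline and tuning work described above, here is a minimal PySpark sketch of a batch ETL step that writes a partitioned Delta table and then compacts it. Bucket names, paths, and columns are hypothetical, and the OPTIMIZE/ZORDER statement is Databricks-specific SQL.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-etl").getOrCreate()

# Hypothetical raw input on S3 (this listing is an AWS + Databricks stack).
events = spark.read.json("s3://raw-bucket/events/")

cleaned = (
    events
    .where(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Partitioning by date keeps scans narrow for typical time-range queries.
(
    cleaned.write.format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("s3://curated-bucket/events_delta")
)

# Compact small files and co-locate rows by a frequent filter column.
# OPTIMIZE ... ZORDER BY is available on Databricks runtimes.
spark.sql("OPTIMIZE delta.`s3://curated-bucket/events_delta` ZORDER BY (user_id)")
```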

Posted 3 days ago

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

About the Position: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
• Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
• Mentor junior data engineers within the organization.
• Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
• Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
• Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
• Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
• Ensure data quality and integrity through data validation techniques and frameworks.
• Develop and maintain documentation for data processes, configurations, and best practices.
• Monitor and troubleshoot data pipeline issues to ensure timely resolution.
• Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
• Manage the CI/CD process for deploying and maintaining data solutions.

Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrably high proficiency in programming fundamentals.
• At least 5 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes; 5-10 years of experience overall.
• Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Gen2.
• Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL).
• Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing).
• Experience with big data technologies (e.g., Spark).
• Strong problem-solving skills and attention to detail.
• Excellent communication and collaboration skills.

Preferred Qualifications:
• Learning agility.
• Technical leadership.
• Consulting on and managing business needs.
• Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is accepted.
• Experience building Spark applications using PySpark.
• Experience with file formats such as Parquet, Delta, and Avro.
• Experience efficiently querying API endpoints as a data source.
• Understanding of the Azure environment and related services, such as subscriptions and resource groups.
• Understanding of Git workflows in software development.
• Experience using Azure DevOps pipelines and repositories to deploy and maintain solutions.
• Understanding of Ansible and how to use it in Azure DevOps pipelines.

Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, work hours for employees will be aligned to support business requirements. The standard work week is Monday to Friday, with working hours of 8:00 am to 5:00 pm or 1:30 pm to 10:30 pm. Chevron participates in E-Verify in certain locations as required by law.

Posted 3 days ago

5.0 - 10.0 years

12 - 17 Lacs

Pune

Work from Office

Source: Naukri

What You'll Do: The Global Analytics and Insights (GAI) team is seeking an experienced Data Visualization Manager to lead our data-driven decision-making initiatives. The ideal candidate will have a background in Power BI, expert-level SQL proficiency to drive actionable insights, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. You will become an expert in Avalara's financial, marketing, sales, and operations data. This position reports to a Senior Manager.

What Your Responsibilities Will Be:
• Define and execute the organization's BI strategy, ensuring alignment with business goals.
• Lead, mentor, and manage a team of BI developers and analysts, fostering continuous learning.
• Develop and implement robust data visualization and reporting solutions using Power BI.
• Optimize data models, dashboards, and reports to provide meaningful insights and support decision-making.
• Collaborate with business leaders, analysts, and cross-functional teams to gather and translate requirements into actionable BI solutions.
• Be a trusted advisor to business teams, identifying opportunities where BI can drive efficiencies and improvements.
• Ensure data accuracy, consistency, and integrity across multiple data sources.
• Stay updated on the latest advancements in BI tools, SQL performance tuning, and data visualization best practices.
• Define and enforce BI development standards, governance, and documentation best practices.
• Work closely with data engineering teams to define and maintain scalable data pipelines.
• Drive automation and optimization of reporting processes to improve efficiency.

What You'll Need To Be Successful:
• 8+ years of experience in Business Intelligence, Data Analytics, or related fields.
• 5+ years of expert proficiency in Power BI, including DAX, Power Query, data modeling, and dashboard creation.
• 5+ years of strong SQL skills, with experience writing complex queries, performance tuning, and working with large datasets.
• Familiarity with cloud-based BI solutions (e.g., Azure Synapse, AWS Redshift, Snowflake) is a plus.
• Understanding of ETL processes and data warehousing concepts.
• Strong problem-solving, analytical thinking, and decision-making skills.

How We'll Take Care Of You:
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship. Learn more about our benefits by region here: Avalara North America.

What You Need To Know About Avalara: We're Avalara. We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we're not slowing down until we've achieved our mission to be part of every transaction in the world.

We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, which empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer: Supporting diversity and inclusion is a cornerstone of our company; we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national orientation, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.

Posted 3 days ago

6.0 - 10.0 years

20 - 25 Lacs

Pune, Mumbai (All Areas)

Work from Office

Source: Naukri

Position: Data Engineer
Experience: 6+ yrs
Job Location: Pune / Mumbai

Job Profile Summary:
• Azure Databricks and hands-on PySpark, including tuning
• Azure Data Factory pipelines for loading various data into ADB, with performance tuning
• Azure Synapse
• Azure Monitoring and Log Analytics (error handling in ADF pipelines and ADB)
• Logic Apps and Functions
• Performance tuning of Databricks, Data Factory, and Synapse
• Databricks data loading (layers) and export (choosing connection options and the best approach for fast reporting and access)

Posted 3 days ago

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

Source: Naukri

This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.

• Data Architecture: Develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
• Business Intelligence: Design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
• Data Engineering: Work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
• Data Governance: Establish and enforce data governance policies and standards.

Primary Skills:
• Experience: 15+ years of relevant experience in data warehousing, BI, and data governance.
• Proven track record of delivering successful data solutions on the Microsoft stack.
• Experience working with diverse teams and stakeholders.

Required Skills and Experience (Technical):
• Strong proficiency in data warehousing concepts and methodologies.
• Expertise in Microsoft Power BI.
• Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
• Knowledge of SQL and scripting languages (Python, PowerShell).
• Strong understanding of data modeling and ETL/ELT processes.

Secondary Skills (Soft Skills):
• Excellent communication and interpersonal skills.
• Strong analytical and problem-solving abilities.
• Ability to work independently and as part of a team.
• Strong attention to detail and organizational skills.

Posted 3 days ago

5.0 - 7.0 years

15 - 25 Lacs

Pune, Ahmedabad

Hybrid

Source: Naukri

Key Responsibilities:
• Design, develop, and optimize data pipelines and ETL/ELT workflows using Microsoft Fabric, Azure Data Factory, and Azure Synapse Analytics.
• Implement Lakehouse and Warehouse architectures within Microsoft Fabric, supporting medallion (bronze-silver-gold) data layers (a layering sketch follows this listing).
• Collaborate with business and analytics teams to build scalable and reliable data models (star/snowflake) using Azure SQL, Power BI, and DAX.
• Utilize Azure Analysis Services, Power BI semantic models, and Microsoft Fabric Dataflows for analytics delivery.
• Very good hands-on experience with Python for data transformation and processing.
• Apply CI/CD best practices and manage code through Git version control.
• Ensure data security, lineage, and quality using data governance best practices and Microsoft Purview (if applicable).
• Troubleshoot and improve the performance of existing data pipelines and models.
• Participate in code reviews, testing, and deployment activities.
• Communicate effectively with stakeholders across geographies and time zones.

Required Skills:
• Hands-on experience with Microsoft Fabric (Lakehouse, Warehouse, Dataflows, Pipelines).
• Strong knowledge of Azure Synapse Analytics, Azure Data Factory, Azure SQL, and Azure Analysis Services.
• Proficiency in Power BI and DAX for data visualization and analytics modeling.
• Strong Python skills for scripting and data manipulation.
• Experience in dimensional modeling, star/snowflake schemas, and Kimball methodologies.
• Familiarity with CI/CD pipelines, DevOps, and Git-based versioning.
• Understanding of data governance, data cataloging, and quality management practices.
• Excellent verbal and written communication skills.
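For readers unfamiliar with the medallion pattern referenced above, here is a compact PySpark sketch of bronze → silver → gold layering. It is illustrative only: table names, paths, and columns are hypothetical, and in Microsoft Fabric the same logic would typically live in a notebook writing to Lakehouse tables.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw files as-is, adding ingestion metadata.
bronze = (
    spark.read.option("header", True).csv("Files/raw/sales/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze_sales")

# Silver: clean and conform -- typed columns, deduplicated rows.
silver = (
    spark.table("bronze_sales")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("sale_date", F.to_date("sale_date"))
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_sales")

# Gold: business-level aggregates ready for Power BI.
gold = (
    spark.table("silver_sales")
    .groupBy("sale_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("order_id").alias("orders"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_revenue")
```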

Posted 4 days ago

5.0 - 10.0 years

11 - 21 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Source: Naukri

Job Title: Senior Data Engineer (ADF | Snowflake | DBT | Databricks)
Experience: 5 to 8 years
Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid)
Job Type: Full-time, Permanent

Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF), Snowflake, DBT, and Azure Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting.

Key Responsibilities:
• Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks
• Create parameterized, reusable components within ADF pipelines
• Perform data transformations and modeling in Snowflake using DBT (a hedged SQL sketch follows this listing)
• Use Databricks for data processing using PySpark/SQL
• Collaborate with stakeholders to define and implement data solutions
• Optimize data workflows for performance, scalability, and cost-efficiency
• Ensure data quality, governance, and documentation standards

Mandatory Skills:
• Azure Data Factory (ADF)
• Snowflake
• DBT (Data Build Tool)
• Azure Databricks
• Strong SQL and data modeling experience

Good-to-Have Skills:
• Azure Data Lake, Azure Synapse, Blob Storage
• CI/CD using Azure DevOps or GitHub
• Python scripting, PySpark
• Power BI/Tableau integration
• Experience with metadata/data governance tools

Role Requirements:
• Education: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field
• Certifications: Azure or Snowflake certification is a plus
• Strong problem-solving and communication skills

Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD
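As a hedged illustration of the Snowflake side of this stack, the sketch below uses the snowflake-connector-python package to run a simple ELT-style MERGE from a staging table into a target dimension. Account, credentials, and table names are placeholders, and in practice this transformation would usually be expressed as a dbt model rather than hand-run SQL.

```python
import snowflake.connector

# All connection values are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.core.dim_customer AS t
USING analytics.staging.customers_raw AS s
    ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET
    t.email = s.email,
    t.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
    VALUES (s.customer_id, s.email, CURRENT_TIMESTAMP());
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)   # upsert staged rows into the dimension
    print(cur.fetchone())    # MERGE reports rows inserted/updated
finally:
    conn.close()
```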

Posted 4 days ago

4.0 - 8.0 years

30 - 37 Lacs

Bengaluru

Work from Office

Source: Naukri

ECMS ID / Title: 525632
Number of Openings: 1
Duration of contract: 6
Years of experience (relevant): 4-8 years
Detailed job description / skill set: attached
Mandatory Skills: Azure Data Factory, PySpark notebooks, Spark SQL, and Python
Good-to-Have Skills: ETL processes, SQL, Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, Databricks, etc.
Vendor billing range: 9000-10000/day
Remote option available: Hybrid mode
Work location: Pune and Hyderabad preferred
Start date: Immediate
Client interview / F2F applicable: Yes
Background check process to be followed: Post onboarding (BGV agency)

Posted 4 days ago

9.0 - 14.0 years

35 - 55 Lacs

Noida

Hybrid

Source: Naukri

Looking for a better opportunity? Join us and make things happen with DMI, an Encora company!

Encora is seeking a full-time Lead Data Engineer with logistics domain expertise to support our large-scale manufacturing client in digital transformation. The Lead Data Engineer is responsible for the day-to-day leadership and guidance of the local, India-based data team. This role will be the primary interface with the client's management team and will work cross-functionally with various IT functions to streamline project delivery.

Minimum Requirements:
• 8+ years of overall experience in IT
• 5+ years of current experience on Azure Cloud as a Data Engineer
• 3+ years of current hands-on experience on Databricks/Azure Databricks
• Proficient in Python/PySpark
• Proficient in SQL/T-SQL
• Proficient in data warehousing concepts (ETL/ELT, Data Vault modelling, dimensional modelling, SCD, CDC)

Primary Skills: Azure Cloud, Databricks, Azure Data Factory, Azure Synapse Analytics, SQL/T-SQL, PySpark, Python, plus logistics domain expertise

Work Location: Noida, India (candidates open to immediate relocation can also apply)

Interested candidates can apply at nidhi.dubey@encora.com with their updated resume, specifying:
1. Total experience
2. Relevant experience in Azure Cloud
3. Relevant experience in Azure Databricks
4. Relevant experience in Azure Synapse
5. Relevant experience in SQL/T-SQL
6. Relevant experience in PySpark
7. Relevant experience in Python
8. Relevant experience in the logistics domain
9. Relevant experience in data warehousing
10. Current CTC
11. Expected CTC
12. Official notice period (if serving, please specify LWD)

Posted 4 days ago

10.0 - 14.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Source: Naukri

Skills: Cloudera, Big Data, Hadoop, Spark, Kafka, Hive, CDH clusters

• Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization.
• Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle.
• Monitor and troubleshoot platform performance, identifying and resolving issues promptly.
• Review and maintain the data ingestion and processing pipelines on the Cloudera platform (a streaming sketch follows this listing).
• Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval.
• Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption.
• Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies.
• Experience in creating technology roadmaps for the Cloudera platform.
• Stay up to date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform.
• Experience in planning, testing, and executing upgrades involving Cloudera components while ensuring platform stability and security.
• Document platform configurations, processes, and procedures, and provide training and support to other team members as needed.

Requirements:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Proven experience as a Cloudera platform engineer or in a similar role, with a strong understanding of Cloudera Manager and CDH clusters.
• Expertise in designing, implementing, and maintaining scalable, high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka.
• Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles.
• Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms.
• Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
• Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly.
• Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
• Experience with Azure Data Factory/Azure Databricks/Azure Synapse is a plus.

Timings: 10 am to 7:30 pm; 2 days WFO and 3 days WFH.
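To ground the Kafka/Spark pipeline responsibilities above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and computes windowed counts. Broker addresses, the topic name, and the JSON schema are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-windowed-counts").getOrCreate()

schema = StructType([
    StructField("host", StringType()),
    StructField("metric", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "platform-metrics")
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Late data beyond 10 minutes is dropped; counts roll up in 5-minute windows.
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "host")
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/chk/platform-metrics")
    .start()
)
query.awaitTermination()
```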

Posted 4 days ago

3.0 - 8.0 years

9 - 16 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Source: Naukri

Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Share CV at: neha.mandal@mounttalent.com

Summary: As an Application Lead for Packaged Application Development, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Snowflake Data Warehouse and collaborating with cross-functional teams to deliver high-quality solutions.

Roles & Responsibilities:
• Lead the design, development, and implementation of applications using Snowflake Data Warehouse.
• Collaborate with cross-functional teams to ensure the delivery of high-quality solutions that meet business requirements.
• Act as the primary point of contact for all application-related issues, providing technical guidance and support to team members.
• Ensure that all applications are designed and developed in accordance with industry best practices and standards.
• Provide technical leadership and mentorship to team members, ensuring that they have the necessary skills and knowledge to deliver high-quality solutions.

Professional & Technical Skills:
• Must-have: Strong experience in Snowflake Data Warehouse.
• Good-to-have: Experience with other data warehousing technologies such as Redshift, BigQuery, or Azure Synapse Analytics.
• Experience designing and developing applications using Snowflake Data Warehouse.
• Strong understanding of data warehousing concepts and best practices.
• Experience working with cross-functional teams to deliver high-quality solutions.
• Excellent communication and interpersonal skills.

Posted 5 days ago

4.0 - 5.0 years

17 - 25 Lacs

Pune

Work from Office

Source: Naukri

Seeking an experienced Azure Data Engineer to design, build, and maintain scalable data solutions using ADF, Databricks, Synapse, Azure SQL, and more. Strong Python/SQL skills, 4+ yrs exp, and Azure cloud expertise required.

Posted 5 days ago

4.0 - 8.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Source: Naukri

Years of experience (relevant): 4-8 years
Detailed job description / skill set: attached
Mandatory Skills: Power BI, DAX, Azure Data Factory, PySpark notebooks, Spark SQL, and Python
Good-to-Have Skills: Power BI, DAX, ETL processes, SQL, Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, Databricks, etc.

Posted 5 days ago

4.0 - 8.0 years

8 - 18 Lacs

Hyderabad

Hybrid

Source: Naukri

Azure Data Engineer | Hyderabad (Onsite)
Experience: 4 to 8 years
Job Type: Full-time
Timings: 9 AM to 6 PM IST

Roles & Responsibilities:
• Design and develop scalable, multi-terabyte data models and data marts.
• Build and maintain cloud-based analytics solutions using Azure services.
• Create and manage data pipelines and implement streaming ingestion methods.
• Analyze complex data sets and derive actionable insights.
• Develop solutions leveraging Azure Databricks, Synapse, SQL, and Data Lake.
• Ensure robust cloud infrastructure using Azure platform components.
• Follow DevOps processes, including CI/CD and Infrastructure as Code (IaC).
• Apply strong data warehouse modeling principles in solution architecture.
• Collaborate with stakeholders to understand analytics requirements.
• Work in agile teams to deliver projects efficiently and on time.
• Troubleshoot, debug, and optimize data engineering solutions.

Preferred Skills:
• Strong knowledge of the Azure ecosystem and data warehouse concepts.
• Scripting with Python/PowerShell; familiarity with Power BI.
• Added edge: KQL and LLM exposure.
• Quick learner, agile contributor, and problem-solver.

Apply now / share CVs: sirisha.nethi@quadranttechnologies.com | ajay.pesaru@quadranttechnologies.com

Posted 5 days ago

6.0 - 11.0 years

9 - 19 Lacs

Pune, Chennai, Bengaluru

Hybrid

Source: Naukri

Role & responsibilities: Looking for an Architect and Developers.
Primary Skills: Azure Synapse, Azure Data.
Nice to have: Amazon Redshift; Big Data - Hadoop; Big Data - Hive; Scala; Apache Spark; Snowflake; Databricks; Google Cloud Platform; SAP; Apache Cassandra; Flume; Big Data - HBase; Apache Impala; Apache NiFi; Apache Pig; Sqoop; PySpark; Python for Data Science; Dremio; Apache Hadoop

Posted 6 days ago

4.0 - 7.0 years

6 - 9 Lacs

Hyderabad, Bengaluru

Hybrid

Source: Naukri

Job Summary: We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities:
• Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
• Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS).
• Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
• Develop ETL/ELT processes and integrate data from multiple sources (an upsert sketch follows this listing).
• Monitor, debug, and optimize workflows for performance and cost-efficiency.
• Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills:
• 4+ years of total experience in data engineering.
• 2+ years of experience with Azure Databricks (PySpark, notebooks, Delta Lake).
• Strong experience with Azure Data Factory, Azure SQL, and ADLS.
• Proficient in writing SQL queries and Python/Scala scripting.
• Understanding of CI/CD pipelines and version control systems (e.g., Git).
• Solid grasp of data modeling and warehousing concepts.

Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
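As a rough sketch of the Delta Lake side of this role, the example below upserts a batch of changed records into a Delta table with the DeltaTable MERGE API. The table path and join key are hypothetical, and the extra session configs are only needed for local delta-spark runs (Databricks presets them).

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta-upsert")
    # Required for local delta-spark; preset on Databricks runtimes.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Incoming batch of changed customer rows (hypothetical source path).
updates = spark.read.parquet(
    "abfss://staging@lake.dfs.core.windows.net/customers_changes"
)

target = DeltaTable.forPath(
    spark, "abfss://silver@lake.dfs.core.windows.net/customers"
)

# Upsert: update rows with matching keys, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```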

Posted 6 days ago

4.0 - 8.0 years

5 - 15 Lacs

Hyderabad

Hybrid

Source: Naukri

Role: Azure Data Engineer
Job Type: Full-time
Job Location: Hyderabad
Level of Experience: 5-8 years

Job Description:
• Experience with Azure Databricks, Azure Synapse, Azure SQL, and Azure Data Lake is required.
• Experience in creating, designing, and developing data models for scalable, multi-terabyte data marts.
• Experience in designing and hands-on development of cloud-based analytics solutions.
• Should be able to analyze and understand complex data.
• Thorough understanding of Azure cloud infrastructure.
• Designing and building data pipelines and streaming ingestion methods.
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
• Strong experience in common data warehouse modelling principles.
• Knowledge of Power BI is desirable.
• Knowledge of PowerShell and work experience in Python or an equivalent programming language is desirable.
• Exposure to or knowledge of Kusto (KQL) is an added advantage.
• Exposure to or knowledge of LLM models is an added advantage.

Technical and Soft Skills:
• Strong customer engagement skills to fully understand customer needs for analytics solutions.
• Experience working in a fast-paced agile environment.
• Ability to grasp new technologies quickly and start delivering projects fast.
• Strong problem-solving and troubleshooting skills.

Posted 6 days ago

4.0 - 7.0 years

7 - 14 Lacs

Pune, Mumbai (All Areas)

Work from Office

Source: Naukri

Job Profile Description:
• Create and maintain highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory, Databricks, and Apache Spark/Scala.
• Responsible for managing a growing cloud-based data ecosystem and the reliability of our corporate data lake and analytics data mart.
• Contribute to the continued evolution of the corporate analytics platform and integrated data model.
• Be part of the Data Engineering team in all phases of work, including analysis, design, and architecture, to develop and implement cutting-edge solutions.
• Negotiate and influence changes outside of the team that continuously shape and improve the data strategy.

Requirements:
• 4+ years of experience implementing analytics data solutions leveraging Azure Data Factory, Databricks, Logic Apps, ML Studio, Data Lake, and Synapse.
• Working experience with Scala, Python, or R.
• Bachelor's degree or equivalent experience in Computer Science, Information Systems, or related disciplines.

Posted 6 days ago

3.0 - 5.0 years

9 - 17 Lacs

Bengaluru

Remote

Source: Naukri

Role Overview: We are looking for a highly skilled Azure Data Engineer or Power BI Analyst with 3 to 5 years of experience in building end-to-end data solutions on the Microsoft Azure platform. The ideal candidate should be proficient in data ingestion, transformation, modeling, and visualization using tools such as Azure Data Factory, Azure Databricks, SQL, Power BI, and Fabric.

Role & responsibilities:
• Design, develop, and maintain robust ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks
• Perform data ingestion from various on-prem/cloud sources into Azure Data Lake / Synapse / SQL
• Implement transformation logic using PySpark, SQL, and DataFrames (a star-schema sketch follows this listing)
• Create Power BI dashboards and reports using DAX and advanced visualization techniques
• Develop and manage tabular models, semantic layers, and data schemas (star/snowflake)
• Optimize Power BI datasets and performance tuning (e.g., dataset refresh time, PLT)
• Collaborate with stakeholders to gather reporting requirements and deliver insights
• Ensure data accuracy, security, and compliance across all stages
• Leverage Azure DevOps for version control and CI/CD pipelines
• Participate in Agile ceremonies (scrums, sprint reviews, demos)

Preferred candidate profile:
• 3+ years of experience with Azure Data Factory, Databricks, and Data Lake
• Proficient in Power BI, DAX, SQL, and Python
• Experience building and optimizing tabular models and semantic layers
• Hands-on with Azure Synapse, Fabric, and DevOps
• Solid understanding of data modeling, ETL, data pipelines, and business logic implementation
• Strong communication skills and ability to work in Agile teams
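To illustrate the star-schema modeling step mentioned above, here is a hedged PySpark sketch that splits a flat sales extract into a dimension and a fact table. Column names and the surrogate-key approach (monotonically_increasing_id) are illustrative only; production pipelines usually manage surrogate keys more carefully.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Flat extract landed by ADF (hypothetical path).
sales = spark.read.parquet("abfss://landing@lake.dfs.core.windows.net/sales_flat")

# Dimension: one row per customer, with a generated surrogate key.
dim_customer = (
    sales.select("customer_id", "customer_name", "segment")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures plus the foreign key, resolved by joining back to the dimension.
fact_sales = (
    sales.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
    .select("customer_sk", "order_id", "order_date", "quantity", "amount")
)

dim_customer.write.format("delta").mode("overwrite").save(
    "abfss://gold@lake.dfs.core.windows.net/dim_customer"
)
fact_sales.write.format("delta").mode("overwrite").save(
    "abfss://gold@lake.dfs.core.windows.net/fact_sales"
)
```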

Posted 6 days ago

Exploring Azure Synapse Jobs in India

The Azure Synapse job market in India is currently experiencing a surge in demand as organizations increasingly adopt cloud solutions for their data analytics and business intelligence needs. With the growing reliance on data-driven decision-making, professionals with expertise in Azure Synapse are highly sought after in the job market.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Azure Synapse professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

A typical career path in Azure Synapse may include roles such as Junior Developer, Senior Developer, Tech Lead, and Architect. As professionals gain experience and expertise in the platform, they can progress to higher-level roles with more responsibilities and leadership opportunities.

Related Skills

In addition to expertise in Azure Synapse, professionals in this field are often expected to have knowledge of SQL, data warehousing concepts, ETL processes, data modeling, and cloud computing principles. Strong analytical and problem-solving skills are also essential for success in Azure Synapse roles.

Interview Questions

  • What is Azure Synapse Analytics and how does it differ from Azure Data Factory? (medium)
  • Can you explain the differences between a Data Warehouse and a Data Lake? (basic)
  • How do you optimize data loading and querying performance in Azure Synapse? (advanced)
  • What is PolyBase in Azure Synapse and how is it used for data integration? (medium)
  • How do you handle security and compliance considerations in Azure Synapse? (advanced)
  • Explain the concept of serverless SQL pools in Azure Synapse (a worked sketch follows this list). (medium)
  • What are the different components of an Azure Synapse workspace? (basic)
  • How do you monitor and troubleshoot performance issues in Azure Synapse? (advanced)
  • Describe your experience with building data pipelines in Azure Synapse. (medium)
  • Can you walk us through a recent project where you used Azure Synapse for data analysis? (advanced)
  • How do you ensure data quality and integrity in Azure Synapse? (medium)
  • What are the key features of Azure Synapse Link for Azure Cosmos DB? (advanced)
  • How do you handle data partitioning and distribution in Azure Synapse? (medium)
  • Discuss a scenario where you had to optimize data storage and processing costs in Azure Synapse. (advanced)
  • What are some best practices for data security in Azure Synapse? (medium)
  • How do you automate data integration workflows in Azure Synapse? (advanced)
  • Can you explain the role of Azure Data Lake Storage Gen2 in Azure Synapse? (medium)
  • Describe a situation where you had to collaborate with cross-functional teams on a data project in Azure Synapse. (advanced)
  • How do you ensure data governance and compliance in Azure Synapse? (medium)
  • What are the advantages of using Azure Synapse over traditional data warehouses? (basic)
  • Discuss your experience with real-time analytics and streaming data processing in Azure Synapse. (advanced)
  • How do you handle schema evolution and versioning in Azure Synapse? (medium)
  • What are some common challenges you have faced while working with Azure Synapse and how did you overcome them? (advanced)
  • Explain the concept of data skew and how it can impact query performance in Azure Synapse. (medium)
  • How do you stay updated on the latest developments and best practices in Azure Synapse? (basic)
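To help prepare for the serverless SQL pool question above, here is a hedged Python example of querying Parquet files in ADLS Gen2 from a Synapse serverless SQL pool via pyodbc. The workspace name, storage URL, and credentials are placeholders; the core Synapse construct being illustrated is OPENROWSET with FORMAT = 'PARQUET', which reads files in the lake directly without loading them into a dedicated pool.

```python
import pyodbc

# Hypothetical serverless ("-ondemand") endpoint and credentials.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"
    "Database=master;"
    "UID=sqladminuser;PWD=********;Encrypt=yes;"
)

# Serverless SQL pools query lake files in place -- no data movement.
sql = """
SELECT TOP 10 region, SUM(amount) AS revenue
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/gold/fact_sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales
GROUP BY region;
"""

for row in conn.cursor().execute(sql):
    print(row.region, row.revenue)
```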

Closing Remark

As the demand for Azure Synapse professionals continues to rise in India, now is the perfect time to upskill and prepare for exciting career opportunities in this field. By honing your expertise in Azure Synapse and related skills, you can position yourself as a valuable asset in the job market and embark on a rewarding career journey. Prepare diligently, showcase your skills confidently, and seize the numerous job opportunities waiting for you in the Azure Synapse domain. Good luck!
