
375 Azure Synapse Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Azure Synapse Developer, you will design, develop, and optimize data solutions within Azure Synapse Analytics, leveraging its capabilities for data warehousing, data lakes, and big data processing. You will build scalable, efficient data pipelines in Azure Data Factory (ADF) for data ingestion, transformation, and orchestration, and apply data warehousing principles and data modeling techniques to design robust data structures. Working extensively with Snowflake, you will handle data loading, transformation, and query optimization, and you will develop and maintain complex stored procedures across database environments.

The role also involves implementing and enhancing ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for efficient data movement between disparate systems, ensuring adherence to data quality frameworks, and using monitoring tools to maintain data integrity and reliability. You will configure and manage job scheduling for automated workflows and pipeline execution, and you should be comfortable handling common data formats including JSON, XML, CSV, and Parquet. Active participation in an Agile environment, using tools like JIRA for task management and collaboration, is expected.

Required skills: solid experience with Azure Synapse Analytics, Azure Data Factory (ADF), and Snowflake; strong experience with stored procedures; and experience with ETL/ELT processes, data warehousing, data modeling, data quality frameworks, monitoring tools, job scheduling, and the data formats above. Fluent written and verbal English and strong presentation skills are essential for communicating technical concepts and solutions to team members and stakeholders in a formal and professional manner.
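
By way of illustration, here is a minimal PySpark sketch of the kind of multi-format ingestion this posting describes, as it might run in a Synapse or Databricks Spark session. The storage account, container paths, and column names are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multi-format-ingest").getOrCreate()

# Illustrative ADLS Gen2 paths; replace account, containers, and folders.
RAW = "abfss://raw@myaccount.dfs.core.windows.net"
CURATED = "abfss://curated@myaccount.dfs.core.windows.net"

# CSV and JSON sources side by side, per the posting's format list.
orders = (spark.read.option("header", True).option("inferSchema", True)
          .csv(f"{RAW}/orders/*.csv"))
events = spark.read.json(f"{RAW}/events/")

# Light transformation: normalize a column name, stamp the load time.
orders = (orders.withColumnRenamed("OrderID", "order_id")
          .withColumn("load_ts", F.current_timestamp()))

# Persist as Parquet, the columnar format called out above.
orders.write.mode("overwrite").parquet(f"{CURATED}/orders/")
events.write.mode("overwrite").parquet(f"{CURATED}/events/")
```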

Posted 3 days ago


8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Cloud Solution Architect, you will be the subject matter expert for Azure public cloud, including platform services such as cloud networking, storage, and PaaS, providing support and architecture guidance in these areas. With 8+ years of relevant experience, you will build solutions using Azure application services such as Azure Functions, Azure Web Apps, and Azure Logic Apps, and leverage integration services such as Azure Service Bus, Azure Event Hubs, Azure Synapse, and Azure Data Factory.

The role includes working with Azure Key Vault and implementing certificate/secret policies, optimizing resources to manage costs and budget effectively, and designing disaster recovery solutions, including documenting DR plans and runbooks. Experience with tools like Terraform, Azure Bicep, and Azure DevOps pipelines is essential for infrastructure-as-code deployment and management.

If you are ready to take on this challenging role, based in Hyderabad on a full-time basis with an immediate to 30-day notice period, we welcome your application.
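
As a small illustration of the Key Vault work mentioned above, here is a hedged Python sketch that fetches a secret at runtime with the Azure SDK; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves to a managed identity in Azure,
# or to your `az login` session when run locally.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",  # placeholder vault
    credential=credential,
)

# Fetch a connection string at runtime instead of hard-coding it in config.
secret = client.get_secret("sql-connection-string")  # placeholder secret name
connection_string = secret.value
```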

Posted 4 days ago


4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a talented Data Engineer with 4+ years of experience to join our team in Pune/Hyderabad/Gurgaon (hybrid). As a Data Engineer, you will convert and optimize SAS code into PySpark using in-house tools and participate in end-to-end migration projects. The role involves developing and maintaining data pipelines in PySpark, ensuring seamless integration with Azure cloud services, and rigorously testing and validating converted code against performance benchmarks and client requirements.

To excel in this role, you must have at least 4 years of experience in SAS and PySpark coding, with proven experience in migration projects, a strong grasp of testing and validating high-quality code, and experience with Azure cloud services such as Azure Synapse, Azure Data Lake, and Azure Databricks. Strong problem-solving skills and the ability to work independently with minimal guidance are essential, and good communication skills are important for engaging effectively with onshore clients in the services industry. Experience with CI/CD pipelines is preferred.

Your responsibilities will include collaborating with clients to understand project requirements, communicating progress, and addressing concerns. You will troubleshoot and resolve issues independently, identify and mitigate problems early in the development cycle, and maintain a proactive approach to improving code conversion and migration processes.

If you meet the requirements and are excited about this opportunity, please share your resume at nagapushpa@iflowonline.com.
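
To give a feel for the SAS-to-PySpark conversion work described here, below is a hedged sketch: a hypothetical SAS DATA step (in comments) and one possible PySpark equivalent. The dataset, columns, and tax factor are all invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

# SAS original (hypothetical):
#   data work.high_value;
#     set raw.transactions;
#     where amount > 1000;
#     total = amount * 1.18;  /* add tax */
#   run;

spark = SparkSession.builder.getOrCreate()

# Stand-in for `set raw.transactions`; a real migration would read a
# table or file instead (e.g. spark.read.parquet(...)).
transactions = spark.createDataFrame(
    [(1, 500.0), (2, 1500.0), (3, 2400.0)], ["txn_id", "amount"])

# PySpark equivalent of the DATA step: WHERE clause -> filter,
# derived column -> withColumn.
high_value = (transactions
              .filter(F.col("amount") > 1000)
              .withColumn("total", F.col("amount") * 1.18))

high_value.show()
```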

Posted 1 week ago


8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Senior Power BI Architect with over 8 years of experience, you will lead the design, implementation, and governance of Power BI solutions within the enterprise. In this leadership role, you will apply advanced knowledge of data architecture, Azure BI stack integration, and enterprise-scale reporting solutions, with ownership of architecture, stakeholder engagement, and technical mentorship across teams.

Key responsibilities include architecting scalable Power BI solutions aligned with data strategy and business objectives; leading the design of semantic models, data transformations, and DAX calculations with optimized performance across large data sets and distributed systems; defining and implementing Power BI governance strategies; integrating BI solutions with Azure services; contributing to enterprise data strategy decisions; and mentoring BI developers, analysts, and business users.

To excel in this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 8+ years of experience in Power BI architecture. Proficiency in advanced Power BI features such as DAX, Power Query, and dataflows, as well as experience with Azure Synapse, Azure Data Factory, Azure SQL, and Azure Purview, is essential. Strong skills in SQL, data modeling, Power BI governance, and stakeholder management are also required, and excellent communication and mentoring abilities are crucial for collaborating with senior executives, data owners, and cross-functional teams to deliver robust BI solutions aligned with strategic goals.

Preferred qualifications include Microsoft Certified Power BI or Data Engineering certifications, exposure to Power Platform, Python, or R for advanced analytics, and experience in Agile BI delivery models. If you are a seasoned Power BI professional looking to take on a challenging leadership role in designing and implementing enterprise-scale BI solutions, this position offers the opportunity to make a significant impact within the organization.
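
As one small, hedged example of the kind of Power BI automation an architect in this role might govern, the sketch below triggers a dataset refresh through the Power BI REST API. It assumes a service principal (or signed-in user) with refresh permissions on the workspace; the workspace and dataset GUIDs are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder GUIDs; substitute your workspace and dataset IDs.
GROUP_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"

token = DefaultAzureCredential().get_token(
    "https://analysis.windows.net/powerbi/api/.default")

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {token.token}"},
    json={"notifyOption": "NoNotification"},
    timeout=30,
)
resp.raise_for_status()  # service replies 202 Accepted; refresh runs asynchronously
```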

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Azure Data Engineer strong in Azure Synapse, you will play a crucial role in designing, developing, and maintaining data pipelines using Azure Data Factory (ADF) and Synapse. Your responsibilities will include working with SQL databases to optimize queries, developing data warehousing solutions, and providing production support for data pipelines and reports. You will collaborate with stakeholders to understand business requirements, ensure data quality, integrity, and governance, and stay current with industry best practices and emerging technologies in data engineering.

To excel in this role, you should have 3-5 years of experience in data engineering with a focus on Azure technologies. Hands-on experience with Azure Data Factory (ADF), Synapse, and data warehousing is a must, along with strong expertise in SQL development, query optimization, and database performance tuning. Your problem-solving skills, attention to detail, and excellent communication abilities will be key in translating business requirements into scalable data solutions and working effectively with cross-functional teams.

Preferred qualifications include experience with Power BI, Power Query, and report development, knowledge of data security best practices, exposure to Jet Analytics, and familiarity with CI/CD for data pipelines. By joining us, you will have the opportunity to work on cutting-edge Azure data engineering projects in a collaborative work environment with a global team. There is potential for long-term engagement and career growth, along with competitive compensation based on experience.

Posted 1 week ago


12.0 - 16.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Data Architect with over 12 years of expertise in data architecture, data engineering, and enterprise-scale data solutions. Your strong background in Microsoft Fabric data engineering, Azure Synapse, Power BI, and Data Lake will be instrumental in driving strategic data initiatives for our organization in Hyderabad, India.

In this role, you will design and implement scalable, secure, and high-performance data architecture solutions utilizing Microsoft Fabric and related Azure services. Your responsibilities will include defining data strategies aligned with business goals, architecting data pipelines and warehouses, collaborating with stakeholders to define data requirements, and providing technical leadership in data engineering best practices.

Your qualifications include 12+ years of experience in data engineering or related roles, proven expertise in Microsoft Fabric and Azure data services, hands-on experience in modern data platform design, proficiency in SQL, Python, Spark, and Power BI, and strong problem-solving and communication skills. Preferred qualifications include Microsoft certifications, experience with DevOps and CI/CD for data projects, exposure to real-time streaming and IoT data, and prior Agile/Scrum environment experience.

If you are passionate about driving innovation in data architecture, optimizing data performance, and leading data initiatives that align with business objectives, we encourage you to apply for this full-time Data Architect position in Hyderabad, India.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees across more than 30 countries, we are characterized by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, by leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist, specializing in Azure Generative AI and advanced analytics. We are seeking a highly skilled and experienced Data Scientist with hands-on expertise in Azure Generative AI, Document Intelligence, agentic AI, and advanced data pipelines. Your responsibilities will include developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. You will play a crucial role in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets, developing actionable insights that drive data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines for processing and analyzing large-scale datasets efficiently.
- Implement agentic AI techniques to develop intelligent, autonomous systems that can make decisions and take actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases.
- Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.
- Stay current with industry trends, tools, and technologies in data science, AI, and generative models, applying this knowledge to improve existing solutions and develop new ones.
- Mentor and guide junior team members, helping to develop their skills and contributing to their professional growth.
- Ensure model explainability, fairness, and compliance with responsible AI principles.

Minimum qualifications/skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Relevant experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Strong proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases such as FAISS and Pinecone, and knowledge-retrieval techniques.
- Knowledge of reinforcement learning from human feedback (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is a plus.

Preferred qualifications/skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

If you are passionate about leveraging your skills and expertise to drive AI-driven insights and automation in a dynamic environment, we invite you to apply for the role of Principal Consultant - Data Scientist at Genpact. Join us in shaping the future and creating lasting value for our clients.
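
As a hedged illustration of the retrieval side of the RAG work this role mentions, the sketch below embeds documents with an Azure OpenAI deployment and searches them with FAISS. The endpoint, API version, deployment name, and documents are placeholders, and a production system would likely use a managed vector store instead of an in-memory index.

```python
import numpy as np
import faiss
from openai import AzureOpenAI

# Endpoint, key, API version, and deployment name are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<key>",
    api_version="2024-02-01",
)

docs = ["Invoice total is due within 30 days of issue.",
        "The warranty covers parts and labour for 12 months."]

# Embed the documents with an Azure OpenAI embedding deployment.
emb = client.embeddings.create(model="text-embedding-3-small", input=docs)
vectors = np.array([d.embedding for d in emb.data], dtype="float32")

# Exact L2 index over the document vectors.
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

# Retrieve the nearest document for a query - the "R" in RAG.
q_emb = client.embeddings.create(model="text-embedding-3-small",
                                 input=["When is payment due?"])
q = np.array([q_emb.data[0].embedding], dtype="float32")
_, ids = index.search(q, 1)
print(docs[ids[0][0]])  # context passed to the generative model's prompt
```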

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Azure Data Engineer strong in Azure Synapse, you will be responsible for designing, developing, and maintaining data pipelines using Azure Data Factory (ADF) and Synapse. Your role will involve working with SQL databases to optimize queries and ensure efficient data processing, developing and managing data warehousing solutions to support analytics and reporting, and providing production support for data pipelines and reports. You will collaborate with stakeholders to understand business requirements and translate them into scalable data solutions, ensure data quality, integrity, and governance across all data pipelines, and stay current with industry best practices and emerging technologies in data engineering.

To excel in this role, you should have 3-5 years of experience in data engineering with a focus on Azure technologies. Hands-on experience with Azure Data Factory (ADF), Synapse, and data warehousing is essential, along with strong expertise in SQL development, query optimization, and database performance tuning. Experience providing production support for data pipelines and reports, strong problem-solving skills, and the ability to work independently are required.

Preferred qualifications include experience with Power BI, Power Query, and report development, knowledge of data security best practices, exposure to Jet Analytics, and familiarity with CI/CD for data pipelines. Joining us offers the opportunity to work on cutting-edge Azure data engineering projects in a collaborative work environment with a global team, with potential for long-term engagement, career growth, and competitive compensation based on experience.

Posted 1 week ago


6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure or AWS. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions.

Your responsibilities will include building, maintaining, and optimizing data workflows; integrating datasets from various sources; tuning pipelines for performance and scalability; implementing ETL/ELT processes using Spark and Databricks; ensuring data governance; collaborating with different teams; documenting data pipelines; and developing automated processes for continuous integration and deployment of data solutions.

To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer; expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks; proficiency in Python, Scala, or Java; advanced SQL skills; and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience with CI/CD pipelines for data engineering solutions.

Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will design, develop, and manage end-to-end data pipelines on Azure, with a primary focus on performance, scalability, and delivering business value through efficient data solutions. You will collaborate with various teams to define data requirements and implement data ingestion, transformation, and modeling pipelines supporting structured and unstructured data, working with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting.

Your role will involve optimizing data performance and cost through efficient architecture and coding practices, ensuring data security, privacy, and compliance with organizational policies, and monitoring, troubleshooting, and improving data workflows for reliability and performance.

To excel in this role, you should have 5 to 7 years of experience as a Data Engineer, with at least 2 years on the Azure data stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is crucial. Strong skills in data modeling, ETL/ELT design, and performance tuning are required, along with proficiency in SQL and Python/PySpark scripting. Experience with CI/CD pipelines and DevOps practices for data solutions, an understanding of data governance, security, and compliance frameworks, and excellent communication, problem-solving, and stakeholder management skills are essential. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

The Microsoft Azure Data Engineer certification (DP-203), experience with real-time streaming (e.g., Azure Stream Analytics or Event Hubs), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous. Join us to work with the latest in Microsoft's modern data stack, collaborate with a team of passionate data professionals, deliver enterprise-grade, large-scale data projects, and enjoy a fast-paced, learning-focused environment with immediate visibility and impact in key business decisions.

Posted 1 week ago


7.0 - 11.0 years

0 Lacs

Karnataka

On-site

Your Responsibilities
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL).
- Collaborate with solution teams and Data Architects to implement data strategies, build data flows, and develop logical/physical data models.
- Work with Data Architects to define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Engage in hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Proactively and independently address project requirements and articulate issues and challenges to reduce project delivery risks.
- Collaborate with the application development team to implement data strategies and create logical and physical data models using best practices, ensuring high data quality and reduced redundancy.
- Optimize, update, and maintain logical and physical data models to support new and existing projects, along with corresponding metadata.
- Develop best practices for standard naming conventions and coding practices to ensure data model consistency, and recommend opportunities for data model reuse in new environments.
- Perform reverse engineering of physical data models from databases and SQL scripts; evaluate data models and physical databases for variances and discrepancies.
- Validate business data objects for accuracy and completeness; analyze data-related system integration challenges and propose appropriate solutions.
- Develop data models according to company standards; guide System Analysts, Engineers, Programmers, and others on project limitations and capabilities, performance requirements, and interfaces.
- Review modifications to existing data models to improve efficiency and performance; examine new application designs and recommend corrections as needed.

Your Profile
- Bachelor's degree in computer/data science or related technical experience.
- 7+ years of hands-on relational, dimensional, and/or analytic experience utilizing RDBMS, dimensional, and NoSQL data platform technologies, plus ETL and data ingestion protocols.
- Demonstrated experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts.
- Proficiency in metadata management, data modeling, and related tools (e.g., Erwin, ER/Studio).
- Preferred: experience with Azure/Azure Databricks services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse, Azure Databricks); working experience with SAP Datasphere is a plus.
- Experience in team management, communication, and presentation.
- Understanding of agile delivery methodology and experience working in a scrum environment.
- Ability to translate business needs into data vault and dimensional data models supporting long-term solutions.

#IncludingYou
Diversity, equity, inclusion, and belonging are cornerstones of ADM's efforts to continue innovating, driving growth, and delivering outstanding performance. ADM is committed to attracting and retaining a diverse workforce and creating welcoming, inclusive work environments that enable every ADM colleague to feel comfortable, make meaningful contributions, and grow their career. ADM values the unique backgrounds and experiences that each person brings to the organization, understanding that diversity of perspectives makes us stronger together. For more information regarding ADM's efforts to advance Diversity, Equity, Inclusion & Belonging, please visit the website: Diversity, Equity and Inclusion | ADM.

About ADM
At ADM, the power of nature is unlocked to provide access to nutrition worldwide. With industry-advancing innovations, a comprehensive portfolio of ingredients and solutions catering to diverse tastes, and a commitment to sustainability, ADM offers customers an edge in addressing nutritional challenges. As a global leader in human and animal nutrition and the premier agricultural origination and processing company worldwide, ADM's capabilities in insights, facilities, and logistical expertise are unparalleled. From ideation to solution, ADM enriches the quality of life globally. Learn more at www.adm.com.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Do you have in-depth experience in Nat Cat models and tools? Do you enjoy being part of a distributed team of Cat Model specialists with diverse backgrounds, educations, and skills? Are you passionate about researching, debugging issues, and developing tools from scratch?

We are seeking a curious individual to join our NatCat infrastructure development team. As a Cat Model Specialist, you will collaborate with the Cat Perils Cat & Geo Modelling team to maintain the models, tools, and applications used in the NatCat costing process. Your responsibilities will include supporting model developers in validating their models, building concepts and tools for exposure reporting, and assisting in model maintenance and validation. You will be part of the Cat & Geo Modelling team based in Zurich and Bangalore, which specializes in natural science, engineering, and statistics. The team is responsible for Swiss Re's global natural catastrophe risk assessment and focuses on advancing innovative probabilistic and proprietary modelling technology for earthquake, windstorm, and flood hazards.

Main tasks/activities/responsibilities:
- Conceptualize and build NatCat applications using sophisticated analytical technologies
- Collaborate with model developers to implement and test models in the internal framework
- Develop and implement concepts to enhance the internal modelling framework
- Coordinate with various teams for successful model and tool releases
- Provide user support on model- and tool-related issues
- Install and maintain the Oasis setup and contribute to the development of new functionality
- Coordinate platform setup and maintenance with third-party vendors

About you:
- Graduate or post-graduate degree in mathematics, engineering, computer science, or equivalent quantitative training
- Minimum 5 years of experience in the Cat Modelling domain
- Reliable, committed, and hands-on, with experience in Nat Cat modelling
- Previous experience with catastrophe models or exposure reporting tools is a plus
- Strong programming skills in MATLAB, MS SQL, Python, PySpark, and R
- Experience in consuming WCF/RESTful services
- Knowledge of business intelligence, reporting, and data analysis solutions
- Experience in an agile development environment is beneficial
- Familiarity with Azure services like Storage, Data Factory, Synapse, and Databricks
- Good interpersonal skills, self-driven, and able to work in a global team
- Strong analytical and problem-solving skills

About Swiss Re:
Swiss Re is a leading provider of reinsurance, insurance, and insurance-based risk transfer solutions. With over 14,000 employees worldwide, we anticipate and manage various risks to make the world more resilient. We cover a wide range of risks, from natural catastrophes to cybercrime, offering solutions in both Property & Casualty and Life & Health. If you are an experienced professional returning to the workforce after a career break, we welcome you to apply for positions that match your skills and experience.

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

Telangana

On-site

We are looking for a Data Engineer to join our team in a remote position, with a notice period of immediate to 30 days. As a Data Engineer, you will design and develop data pipelines and data products on the Azure cloud platform, utilizing Azure Data Factory, Azure Synapse, Azure SQL Database, Azure Data Lake Storage, and other Azure services that make up the client's data platform infrastructure. Collaboration with cross-functional teams will be essential to ensure that data solutions meet business requirements. You will also implement best practices for data management, quality, and security, and optimize and troubleshoot data workflows to ensure performance and reliability.

The ideal candidate has expertise in Azure Data Factory, Azure Synapse, Azure SQL Database, and Azure Data Lake Storage, along with experience in data pipeline design and development, a strong understanding of data architecture and cloud-based data solutions, and proficiency in SQL and data modeling. Excellent problem-solving skills and attention to detail are highly valued.

Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, at least 3 years of experience as a Data Engineer or in a similar role, strong communication and teamwork skills, and the ability to manage multiple projects and meet deadlines effectively.
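
To illustrate how the ADF-centric orchestration described here can be driven programmatically, below is a hedged Python sketch that triggers a pipeline run with the Azure management SDK and polls its status. The subscription, resource group, factory, pipeline, and parameter names are all placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder subscription; the caller needs Data Factory contributor rights.
client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-client-platform",
    pipeline_name="pl_ingest_daily",
    parameters={"load_date": "2024-01-31"},
)

# Poll the run status (Queued / InProgress / Succeeded / Failed).
status = client.pipeline_runs.get(
    "rg-data-platform", "adf-client-platform", run.run_id)
print(status.status)
```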

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Project Lead (Data) at our organization, you will play a crucial role in translating business requirements into technical specifications and leading the design, development, and deployment of Business Intelligence (BI) solutions. Your responsibilities will include maintaining and supporting data analytics platforms, collaborating with cross-functional teams, executing database queries and analyses, creating visualizations, and updating technical documentation.

To excel in this role, you should have a minimum of 5 years of experience designing and implementing reports, dashboards, ETL processes, and data warehouses, plus at least 3 years of direct management experience. A strong understanding of data warehousing and database concepts is essential, along with expertise in BI fundamentals. Proficiency in tools such as Microsoft SQL Server, SSIS, SSRS, Azure Data Factory, Azure Synapse, and Power BI is highly advantageous.

Your role will involve defining software development standards, communicating concepts and guidelines effectively to the team, providing technical guidance and coaching, and overseeing report/dashboard development to ensure alignment with data warehouse and RDBMS design principles. Engaging with stakeholders to identify key performance indicators (KPIs) and presenting actionable insights through reports and dashboards will be a key aspect of your responsibilities.

The ideal candidate exhibits proven analytical and problem-solving abilities, excellent interpersonal and written communication skills, and works well in a collaborative environment. If you are passionate about leveraging data to drive business decisions and possess the requisite skills and experience, we invite you to join our dynamic team and contribute to our continued success as we serve our global clientele with end-to-end IT and ICT solutions.

Posted 1 week ago


4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Microsoft Fabric Consultant - Senior at EY, you will leverage your expertise to contribute to the design, development, and deployment of Business Intelligence (BI) solutions. With 4+ years of experience in Power BI, you will translate business requirements into technical specifications and create visualizations that support data-driven decision-making. Your primary responsibilities will include designing database architecture for dashboards, conducting unit testing, troubleshooting BI systems, and collaborating with global teams to integrate systems. Proficiency in Power BI tools such as DAX, Power Query, and SQL will be essential for developing and executing database queries, conducting analyses, and creating reports for various projects.

In addition to your technical skills, your soft skills will play a crucial role: excellent communication, being a team player, a self-starter, and a highly motivated individual, along with the ability to handle high-pressure, fast-paced situations. Strong presentation skills and experience working with globally distributed teams will be valuable in collaborating effectively and delivering exceptional BI solutions.

Experience with Azure Data Factory, Azure Synapse, Python/R, PowerApps, Power Automate, and design tools like Adobe XD or Figma is beneficial but not mandatory. Your focus will be on designing, building, and deploying BI solutions, evaluating and improving existing systems, and developing and updating technical documentation to ensure the success of projects.

At EY, we are committed to building a better working world by providing trust through assurance and helping clients grow, transform, and operate in over 150 countries. By joining our diverse team, you will have the opportunity to contribute your unique voice and perspective to drive innovation and create a positive impact on clients, people, and society. Join us at EY and embark on a rewarding journey to become the best version of yourself while contributing to a better working world for all.

Posted 1 week ago


2.0 - 6.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Databricks Engineer specializing in the Azure data platform, you will design, develop, and optimize scalable data pipelines within the Azure ecosystem. You should have hands-on experience with Python-based ETL development, lakehouse architecture, and building Databricks workflows using the bronze-silver-gold (medallion) data modeling approach.

Your key responsibilities will include developing and maintaining ETL pipelines using Python and Apache Spark in Azure Databricks; implementing and managing bronze, silver, and gold data lake layers using Delta Lake; and working with Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for end-to-end pipeline orchestration. It will be crucial to ensure data quality, integrity, and lineage across all layers of the data pipeline, optimize Spark performance, manage cluster configurations, and schedule jobs effectively in Databricks. Collaboration with data analysts, architects, and business stakeholders to deliver data-driven solutions will also be part of your role.

To be successful in this role, you should have 3+ years of experience with Python in a data engineering environment, 2+ years of hands-on experience with Azure Databricks and Apache Spark, and a strong background in building scalable data lake pipelines following the bronze-silver-gold architecture. In-depth knowledge of Delta Lake, Parquet, and data versioning is required, along with familiarity with Azure Data Factory, ADLS Gen2, and SQL. Experience with CI/CD pipelines and job orchestration tools such as Azure DevOps or Airflow would be advantageous, and excellent verbal and written communication skills are essential.

Nice-to-have qualifications include experience with data governance, security, and monitoring in Azure, exposure to real-time streaming or event-driven pipelines (Kafka, Event Hubs), and an understanding of MLflow, Unity Catalog, or other data cataloging tools. By joining our team, you will be part of high-impact, cloud-native data initiatives, work in a collaborative, growth-oriented team focused on innovation, and contribute to modern data architecture standards using the latest Azure technologies. If you are ready to advance your career as a Databricks Engineer on the Azure data platform, please send your updated resume to hr@vidhema.com.
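
For readers unfamiliar with the bronze-silver-gold approach mentioned above, here is a minimal PySpark/Delta Lake sketch of the pattern. It assumes a Databricks notebook, where `spark` is predefined and Delta Lake is available; all paths and column names are illustrative.

```python
from pyspark.sql import functions as F

# Bronze: land raw JSON as-is, stamped with ingestion time.
bronze = (spark.read.json("abfss://raw@acct.dfs.core.windows.net/events/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/lake/bronze/events")

# Silver: deduplicate and enforce basic quality rules.
silver = (spark.read.format("delta").load("/mnt/lake/bronze/events")
          .dropDuplicates(["event_id"])
          .filter(F.col("event_ts").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("customer_id").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/customer_activity")
```

Each layer adds guarantees: bronze preserves the raw feed for replay, silver carries cleaned and deduplicated records, and gold exposes aggregates shaped for consumption.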

Posted 1 week ago


7.0 - 11.0 years

0 Lacs

Chandigarh

On-site

As a Senior Azure Data Engineer at iO Associates in Mohali, you will build and optimize data pipelines, support data integration across systems, and enhance the Azure-based enterprise data platform (EDP). The client leads the real estate sector, with headquarters in Mohali and offices in the US and more than 17 other countries.

Your key responsibilities will include building and enhancing the Azure-based EDP using modern tools such as Databricks, Synapse, ADF, and ADLS Gen2; developing and maintaining ETL pipelines; collaborating with teams to deliver efficient data solutions; creating data products for enterprise-wide use; mentoring team members; promoting code reusability; and contributing to documentation, reviews, and architecture planning.

To excel in this role, you should have at least 7 years of experience in data engineering, with expertise in Databricks, Python, Scala, Azure Synapse, and ADF, a proven track record of building and managing ETL/data pipelines across various sources and formats, and strong skills in data modeling, warehousing, and CI/CD practices.

This is an excellent opportunity to join a company that values your growth, emphasizes work-life balance, and recognizes your contributions. If you are interested in this position, please email at [Email Address].

Posted 1 week ago


8.0 - 13.0 years

20 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Datawarehouse Database Architect - immediate hiring. We are currently looking for a Datawarehouse Database Architect for our client, a Fintech solutions company. Please let us know your interest and availability.

Experience: 10+ years
Location: Hybrid - any Accion office in India, preferably Bangalore, Pune, or Mumbai
Notice period: Immediate; 0-15 day joiners are preferred

Required skills (tools and technologies):
- Cloud platform: Azure (Databricks, DevOps, Data Factory, Azure Synapse Analytics, Azure SQL, Blob Storage, Databricks Delta Lake)
- Languages: Python, PL/SQL, SQL, C, C++, Java
- Databases: Snowflake, MS SQL Server, Oracle
- Design tools: Erwin and MS Visio
- Data warehouse tools: SSIS, SSRS, SSAS, Power BI, DBT, Talend Stitch, PowerApps, Informatica 9, Cognos 8, OBIEE
- Any cloud experience is good to have

Let's connect for more details. Please write to me at mary.priscilina@accionlabs.com with your CV and the best contact details for a quick discussion.

Regards,
Mary Priscilina

Posted 1 week ago


8.0 - 12.0 years

30 - 35 Lacs

Chennai

Remote

Job title: Sr. Python Data Engineer
Location: Chennai & Bangalore (remote)
Job type: Permanent employee
Experience: 8 to 12 years
Shift: 2-11 PM

Responsibilities:
- Design and develop data pipelines and ETL processes.
- Collaborate with data scientists and analysts to understand data needs.
- Maintain and optimize data warehousing solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Develop and implement data validation and cleansing routines.
- Work with large datasets from various sources.
- Automate repetitive data tasks and processes.
- Monitor data systems and troubleshoot issues as they arise.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role (minimum 6+ years).
- Strong proficiency in Python and PySpark.
- Excellent problem-solving abilities.
- Strong communication skills to collaborate with team members and stakeholders.
- Individual contributor role.

Technical skills required:
- Expert: Python, PySpark, and SQL/Snowflake
- Advanced: data warehousing and data pipeline design
- Advanced: data quality, data validation, and data cleansing
- Intermediate/basic: Microsoft Fabric, ADF, Databricks, master data management/data governance, data mesh, data lake/lakehouse architecture
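
As a hedged sketch of the validation-and-cleansing routines this role calls for, the PySpark snippet below tags each record with the first quality rule it fails and splits the clean rows from the rejects. The table, columns, and rules are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Tiny demo input; a real routine would read from a lake or warehouse table.
orders = spark.createDataFrame(
    [("o1", 120.0, "USD"), (None, 50.0, "EUR"), ("o3", -5.0, "XYZ")],
    ["order_id", "amount", "currency"],
)

# Flag each row with the first failed check, if any.
reason = (
    F.when(F.col("order_id").isNull(), "missing order_id")
     .when(F.col("amount") < 0, "negative amount")
     .when(~F.col("currency").isin("USD", "EUR", "INR"), "unknown currency")
)

flagged = orders.withColumn("_reject_reason", reason)
clean = flagged.filter(F.col("_reject_reason").isNull()).drop("_reject_reason")
rejects = flagged.filter(F.col("_reject_reason").isNotNull())

rejects.show()  # rejected rows, each with the reason it failed
```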

Posted 1 week ago


12.0 - 16.0 years

10 - 14 Lacs

Pune

Work from Office

The IT Manager, Data Engineering and Analytics will lead a team of data engineers and analysts responsible for designing, developing, and maintaining robust data systems and integrations. This role is critical for ensuring the smooth collection, transformation, integration, and visualization of data, making it easily accessible for analytics and decision-making across the organization. The Manager will collaborate closely with analysts, developers, business leaders, and other stakeholders to ensure that the data infrastructure meets business needs and is scalable, reliable, and efficient.

What You'll Do:

Team Leadership:
- Manage, mentor, and guide a team of data engineers and analysts, ensuring their professional development and optimizing team performance.
- Foster a culture of collaboration, accountability, and continuous learning within the team.
- Lead performance reviews, provide career guidance, and handle resource planning.

Data Engineering & Analytics:
- Design and implement data pipelines, data models, and architectures that are robust, scalable, and efficient.
- Develop and enforce data quality frameworks to ensure accuracy, consistency, and reliability of data assets.
- Establish and maintain data lineage processes to track the flow and transformation of data across systems.
- Ensure the design and maintenance of robust data warehousing solutions to support analytics and reporting needs.

Collaboration and Stakeholder Management:
- Collaborate with stakeholders, including functional owners, analysts, and business leaders, to understand business needs and translate them into technical requirements.
- Work closely with these stakeholders to ensure the data infrastructure supports organizational goals and provides reliable data for business decisions.
- Build and foster relationships with major stakeholders to keep the data strategy aligned with business objectives.

Project Management:
- Drive end-to-end delivery of analytics projects, ensuring quality and timeliness.
- Manage project roadmaps, prioritize tasks, and allocate resources effectively.
- Manage project timelines and mitigate risks to ensure timely delivery of high-quality data engineering projects.

Technology and Infrastructure:
- Evaluate and implement new tools, technologies, and best practices to improve the efficiency of data engineering processes.
- Oversee the design, development, and maintenance of data pipelines, ensuring that data is collected, cleaned, and stored efficiently.
- Ensure there are no data pipeline leaks and monitor production pipelines to maintain their integrity.
- Familiarity with reporting tools such as Superset and Tableau is beneficial for creating intuitive data visualizations and reports.

Machine Learning and GenAI Integration:
- Machine learning: knowledge of machine learning concepts and their integration with data pipelines is a plus, including how models can be used to enhance data quality, predict data trends, and automate decision-making processes.
- GenAI: familiarity with Generative AI (GenAI) concepts is advantageous, particularly enabling GenAI features on new datasets and leveraging GenAI with data pipelines to automate tasks, streamline workflows, and uncover deeper insights.

What You'll Bring:
- 12+ years of experience in data engineering, with at least 3 years in a managerial role.
- Technical expertise: strong knowledge of data engineering concepts, including data warehousing, ETL processes, and data pipeline design; proficiency in Azure Synapse or Data Factory, SQL, Python, and other data engineering tools.
- Data modeling: expertise in designing and implementing robust, scalable data models that support complex analytics and reporting needs; experience with data modeling frameworks and tools is highly valued.
- Leadership skills: proven ability to lead and motivate a team of engineers while managing cross-functional collaborations.
- Problem-solving: strong analytical and troubleshooting skills to address complex data-related challenges.
- Communication: excellent verbal and written communication skills to interact effectively with technical and non-technical stakeholders, motivate team members, provide regular constructive feedback, and facilitate open communication channels that keep the team aligned.
- Data architecture: experience designing scalable, high-performance data systems and understanding of cloud platforms such as Azure and Databricks.
- Machine learning and GenAI: knowledge of machine learning concepts and integration with data pipelines, as well as familiarity with GenAI, is a plus.
- Data governance: experience with data governance best practices is desirable.
- Open mindset: willingness to learn new technologies, processes, and methodologies, adapt quickly to the evolving data engineering landscape, and embrace innovative solutions.

Posted 1 week ago


8.0 - 12.0 years

14 - 20 Lacs

Bengaluru

Work from Office

Azure Data Engineer with experience in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
- Develop ETL/ELT processes using SSIS and/or Azure Data Factory.
- Design, implement, and build complex pipelines and dataflows in Azure Data Factory (ADF).
- Improve the functionality and performance of existing data pipelines, including performance tuning of processes dealing with very large data sets.
- Configure and deploy ADF packages; proficient in the use of ARM templates, Key Vault, and integration runtimes.
- Adaptable to ETL frameworks and standards.
- Strong analytical and troubleshooting skills to root-cause issues and find solutions.
- Propose innovative, feasible, and optimal solutions for business requirements.
- Knowledge of Azure services such as Blob Storage, ADLS, Logic Apps, Azure SQL, and WebJobs.
- Expert in ServiceNow, incident management, and JIRA.
- Exposure to agile methodology.
- Expert at understanding and building Power BI reports using current methodologies.

Posted 1 week ago


14.0 - 19.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Role & responsibilities

Eligibility criteria:
- Minimum 14 years of experience
- Experience with data analysis/data profiling and visualization tools (Power BI)
- Experience with database and data warehouse technologies (Azure Synapse, SQL Server, SAP HANA, MS Fabric)
- Experience in stakeholder management, requirement gathering, and the delivery cycle
- Bachelor's degree: Math/Statistics/Operations Research/Computer Science
- Master's degree: Business Analytics (with a background in Computer Science)

Primary responsibilities:
- Translate complex data analyses into clear, engaging narratives tailored to diverse audiences.
- Develop impactful data visualizations and dashboards using tools like Power BI or Tableau.
- Educate and mentor the team to develop insightful dashboards using multiple data storytelling methodologies.
- Collaborate with data analysts, data scientists, business analysts, and business stakeholders to uncover insights.
- Understand business goals and align analytics storytelling to drive strategic actions.
- Create presentations, reports, and visual content to communicate insights effectively.
- Maintain consistency in data communication and ensure data-driven storytelling best practices.

Mandatory skills:
- Data analysis skills and experience extracting information from databases; Office 365 proficiency; a proven data storyteller through BI
- Experience in Agile/Scrum processes and development using any tools
- Knowledge of SAP systems (SAP ECC T-codes and navigation)
- Proven ability to tell stories with data, combining analytical rigor with creativity
- Strong skills in data visualization tools (e.g., Tableau, Power BI) and presentation tools (e.g., PowerPoint, Google Slides)
- Proficiency in SQL; a basic understanding of statistical methods or Python/R is a plus
- Excellent communication and collaboration skills
- Ability to distill complex information into easy-to-understand formats

Desirable skills:
- Background in journalism, design, UX, or marketing alongside analytics
- Experience working in fast-paced, cross-functional teams
- Familiarity with data storytelling frameworks or narrative design

Expected outcomes:
1. Provide on-the-job training for leads on actionable insights.
2. Educate business partners on data literacy and actionable insights.
3. Lead change management initiatives (related to data storytelling and data literacy) in the organization.
4. Implement processes based on data storytelling concepts and establish a governance model to ensure dashboards are released with the appropriate insights.
5. Standardize dashboards and reports to provide actionable insights.
6. Utilize the most suitable data representation techniques.

Posted 1 week ago


8.0 - 13.0 years

25 - 40 Lacs

Mumbai, Hyderabad

Work from Office

Essential services: role and location fungibility. At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team. To achieve this, employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role description gives you an overview of the responsibilities; it is directional and guiding in nature.

About the role: As a Data Warehouse Architect, you will manage and enhance a data warehouse that handles large volumes of customer life-cycle data flowing in from various applications, within the guardrails of risk and compliance. You will manage the day-to-day operations of the data warehouse (Vertica), leading a team of data warehouse engineers across data modeling, ETL pipeline design, issue management, upgrades, performance fine-tuning, migration, governance, and the security framework of the warehouse. This role enables the Bank to maintain huge data sets in a structured manner amenable to data intelligence. The data warehouse supports numerous information systems used by various business groups to derive insights. As a natural progression, the data warehouse will gradually migrate to a data lake, enabling better analytical advantage, and the role holder will guide the team through this migration.

Key responsibilities:
- Data pipeline design: design and develop ETL data pipelines that help organize large volumes of data, using data warehousing technologies to keep the warehouse efficient, scalable, and secure.
- Issue management: ensure the data warehouse runs smoothly; monitor system performance, diagnose and troubleshoot issues, and make the changes needed to optimize performance.
- Collaboration: collaborate with cross-functional teams to implement upgrades, migrations, and continuous improvements.
- Data integration and processing: process, clean, and integrate large data sets from various sources to ensure the data is accurate, complete, and consistent.
- Data modeling: design and implement data modeling solutions so the organization's data is properly structured and organized for analysis.

Key qualifications and skills:
- Education: B.E./B.Tech. in Computer Science, Information Technology, or an equivalent domain, with 10 to 12 years of experience and at least 5 years of relevant work experience in data warehousing, mining, BI, or MIS.
- Experience in data warehousing: knowledge of ETL and data technologies, with the ability to outline a future vision in OLTP and OLAP (Oracle/MS SQL).
- Data modeling, data analysis, and visualization experience with analytical tools such as Power BI, SAS, QlikView, or Tableau.
- Good to have: exposure to Azure cloud data platform services such as Cosmos DB, Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Team synergy: regular interaction with business, product, and functional teams to create mobility solutions.
- Certification: Azure certifications DP-900, PL-300, DP-203, or other data platform/data analyst certifications.

About the business group: The Technology Group at ICICI Bank is at the forefront of our operations and offerings, which are focused on leveraging state-of-the-art technology to provide customer-centric solutions. This group plays a pivotal role in our vision of the transition from Bank to Bank Tech. Further, the group offers round-the-clock support to our entire banking ecosystem. In our persistent efforts to provide products and solutions that genuinely touch customers, unlocking the potential of technology in every single engagement would go a long way in creating customer delight. In this endeavor, we also tirelessly ensure all our processes, systems, and infrastructure are well within the guardrails of data security, privacy, and relevant regulations.

Posted 1 week ago


6.0 - 11.0 years

12 - 17 Lacs

Pune

Work from Office

Roles and responsibility: The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities:
- Lead the design and implementation of Databricks-based data solutions.
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key technical skills and responsibilities (a streaming sketch follows this list):
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations.
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implementation of Azure Key Vault and scoped credentials.
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, plus cost optimization and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and mentor junior engineers is a plus.
- Must have experience working with streaming data sources, with Kafka preferred.

Eligibility criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL
- Databricks certification (e.g., Certified Data Engineer Professional)
- Experience with machine learning and AI integration in Databricks
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Proven leadership experience in managing technical teams
- Excellent problem-solving and communication skills

Our offering: global, cutting-edge IT projects that shape the future of digital and have a positive impact on the environment; wellbeing programs and work-life balance, with integration and passion-sharing events; an attractive salary and company benefits; courses and conferences; and a hybrid work culture.
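
As referenced in the skills list above, here is a minimal, hedged sketch of the Kafka-to-Delta streaming ingestion this role describes. It assumes a Databricks (or Delta-enabled Spark) session where `spark` is predefined; the broker address, topic, and paths are placeholders.

```python
# Read the Kafka topic as a streaming source.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
       .option("subscribe", "events")                      # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers key/value as binary; cast to strings before parsing.
parsed = raw.selectExpr("CAST(key AS STRING) AS key",
                        "CAST(value AS STRING) AS payload")

# Append to a bronze Delta table; the checkpoint makes the stream restartable.
query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/lake/_chk/events")
         .outputMode("append")
         .start("/mnt/lake/bronze/events"))
```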

Posted 1 week ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Ahmedabad

Remote

We are seeking a talented and experienced Senior Data Engineer to join our team and contribute to building a robust data platform on Azure Cloud. The ideal candidate will have hands-on experience designing and managing data pipelines, ensuring data quality, and leveraging cloud technologies for scalable and efficient data processing. The Data Engineer will design, develop, and maintain scalable data pipelines and systems to support the ingestion, transformation, and analysis of large datasets. The role requires a deep understanding of data workflows, cloud platforms (Azure), and strong problem-solving skills to ensure efficient and reliable data delivery.

Key Responsibilities
  • Data Ingestion and Integration: Develop and maintain data ingestion pipelines using tools like Azure Data Factory, Databricks, and Azure Event Hubs. Integrate data from various sources, including APIs, databases, file systems, and streaming data (a streaming-ingestion sketch follows this listing).
  • ETL/ELT Development: Design and implement ETL/ELT workflows to transform and prepare data for analysis and storage in the data lake or data warehouse. Automate and optimize data processing workflows for performance and scalability.
  • Data Modeling and Storage: Design data models for efficient storage and retrieval in Azure Data Lake Storage and Azure Synapse Analytics. Implement best practices for partitioning, indexing, and versioning in data lakes and warehouses.
  • Quality Assurance: Implement data validation, monitoring, and reconciliation processes to ensure data accuracy and consistency. Troubleshoot and resolve issues in data pipelines to ensure seamless operation.
  • Collaboration and Documentation: Work closely with data architects, analysts, and other stakeholders to understand requirements and translate them into technical solutions. Document processes, workflows, and system configurations for maintenance and onboarding.
  • Cloud Services and Infrastructure: Leverage Azure services like Azure Data Factory, Databricks, Azure Functions, and Logic Apps to create scalable and cost-effective solutions. Monitor and optimize Azure resources for performance and cost management.
  • Security and Governance: Ensure data pipelines comply with organizational security and governance policies. Implement security protocols using Azure IAM, encryption, and Azure Key Vault.
  • Continuous Improvement: Monitor existing pipelines and suggest improvements for better efficiency, reliability, and scalability. Stay updated on emerging technologies and recommend enhancements to the data platform.

Skills
  • Strong experience with Azure Data Factory, Databricks, and Azure Synapse Analytics.
  • Proficiency in Python, SQL, and Spark.
  • Hands-on experience with ETL/ELT processes and frameworks.
  • Knowledge of data modeling, data warehousing, and data lake architectures.
  • Familiarity with REST APIs, streaming data (Kafka, Event Hubs), and batch processing.

Good To Have:
  • Experience with tools like Azure Purview, Delta Lake, or similar governance frameworks.
  • Understanding of CI/CD pipelines and DevOps tools like Azure DevOps or Terraform.
  • Familiarity with data visualization tools like Power BI.

Competency
  • Analytical Thinking
  • Clear and effective communication
  • Time Management
  • Team Collaboration
  • Technical Proficiency
  • Supervising Others
  • Problem Solving
  • Risk Management
  • Organizing & Task Management
  • Creativity/Innovation
  • Honesty/Integrity

Education:
  • Bachelor's degree in Computer Science, Data Science, or a related field.
  • 8+ years of experience in a data engineering or similar role.
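To make the streaming-ingestion responsibility concrete, here is a minimal Structured Streaming sketch that reads from Azure Event Hubs through its Kafka-compatible endpoint. The namespace, event hub name, and connection-string handling are hypothetical placeholders; in a real deployment the secret would come from Azure Key Vault rather than sit in code.

```python
# Minimal Structured Streaming read from Azure Event Hubs via its
# Kafka-compatible endpoint (SASL_SSL on port 9093), landing raw events
# into a Delta table. Namespace and hub names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhubs-ingest-sketch").getOrCreate()

bootstrap = "mynamespace.servicebus.windows.net:9093"  # hypothetical namespace
conn_str = "<event-hubs-connection-string>"            # fetch from Key Vault in practice

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", bootstrap)
    .option("subscribe", "orders")                     # hypothetical event hub name
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        f'username="$ConnectionString" password="{conn_str}";',
    )
    .load()
)

# Persist the raw payload with a checkpoint so the stream is restartable.
query = (
    stream.selectExpr("CAST(value AS STRING) AS body", "timestamp")
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .start("/mnt/bronze/orders_stream")
)
# query.awaitTermination()  # block here when running as a standalone script
```

The checkpoint location is what makes the stream restartable: on failure or redeploy, Spark resumes from the last committed offsets instead of re-reading the hub from the start.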

Posted 1 week ago

Apply

Exploring Azure Synapse Jobs in India

The Azure Synapse job market in India is currently experiencing a surge in demand as organizations increasingly adopt cloud solutions for their data analytics and business intelligence needs. With the growing reliance on data-driven decision-making, professionals with expertise in Azure Synapse are highly sought after in the job market.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Azure Synapse professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

A typical career path in Azure Synapse may include roles such as Junior Developer, Senior Developer, Tech Lead, and Architect. As professionals gain experience and expertise in the platform, they can progress to higher-level roles with more responsibilities and leadership opportunities.

Related Skills

In addition to expertise in Azure Synapse, professionals in this field are often expected to have knowledge of SQL, data warehousing concepts, ETL processes, data modeling, and cloud computing principles. Strong analytical and problem-solving skills are also essential for success in Azure Synapse roles.

Interview Questions

  • What is Azure Synapse Analytics and how does it differ from Azure Data Factory? (medium)
  • Can you explain the differences between a Data Warehouse and a Data Lake? (basic)
  • How do you optimize data loading and querying performance in Azure Synapse? (advanced)
  • What is PolyBase in Azure Synapse and how is it used for data integration? (medium)
  • How do you handle security and compliance considerations in Azure Synapse? (advanced)
  • Explain the concept of serverless SQL pools in Azure Synapse. (medium)
  • What are the different components of an Azure Synapse workspace? (basic)
  • How do you monitor and troubleshoot performance issues in Azure Synapse? (advanced)
  • Describe your experience with building data pipelines in Azure Synapse. (medium)
  • Can you walk us through a recent project where you used Azure Synapse for data analysis? (advanced)
  • How do you ensure data quality and integrity in Azure Synapse? (medium)
  • What are the key features of Azure Synapse Link for Azure Cosmos DB? (advanced)
  • How do you handle data partitioning and distribution in Azure Synapse? (medium)
  • Discuss a scenario where you had to optimize data storage and processing costs in Azure Synapse. (advanced)
  • What are some best practices for data security in Azure Synapse? (medium)
  • How do you automate data integration workflows in Azure Synapse? (advanced)
  • Can you explain the role of Azure Data Lake Storage Gen2 in Azure Synapse? (medium)
  • Describe a situation where you had to collaborate with cross-functional teams on a data project in Azure Synapse. (advanced)
  • How do you ensure data governance and compliance in Azure Synapse? (medium)
  • What are the advantages of using Azure Synapse over traditional data warehouses? (basic)
  • Discuss your experience with real-time analytics and streaming data processing in Azure Synapse. (advanced)
  • How do you handle schema evolution and versioning in Azure Synapse? (medium)
  • What are some common challenges you have faced while working with Azure Synapse and how did you overcome them? (advanced)
  • Explain the concept of data skew and how it can impact query performance in Azure Synapse; see the salting sketch after this list. (medium)
  • How do you stay updated on the latest developments and best practices in Azure Synapse? (basic)
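
Data skew, raised in the questions above, is easier to discuss with a concrete mitigation in hand. Below is a minimal key-salting sketch in PySpark; the DataFrame names, the hot-key distribution, and the salt factor N are illustrative assumptions rather than anything prescribed by Azure Synapse.

```python
# Key-salting sketch for a skewed join: spread each hot key across N sub-keys
# so a single partition does not absorb the whole join. Names and the salt
# factor are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting-sketch").getOrCreate()

N = 8  # salt factor: more parallelism vs. more replication of the small side

# facts: large table where a few customer_id values dominate.
facts = spark.range(1_000_000).select(
    (F.col("id") % 10).alias("customer_id"), F.col("id").alias("amount")
)
# dims: small lookup table keyed by customer_id.
dims = spark.range(10).select(
    F.col("id").alias("customer_id"),
    F.concat(F.lit("cust_"), F.col("id").cast("string")).alias("name"),
)

# Add a random salt to the large side...
salted_facts = facts.withColumn("salt", (F.rand() * N).cast("int"))
# ...and replicate the small side across all salt values so every pair matches.
salted_dims = dims.crossJoin(
    spark.range(N).select(F.col("id").cast("int").alias("salt"))
)

joined = salted_facts.join(salted_dims, ["customer_id", "salt"]).drop("salt")
joined.groupBy("name").sum("amount").show()
```

Choosing N is a trade-off: a larger salt factor spreads a hot key across more partitions but inflates the replicated small side, so it is worth tuning against the observed skew rather than fixing it blindly.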

Closing Remark

As the demand for Azure Synapse professionals continues to rise in India, now is the perfect time to upskill and prepare for exciting career opportunities in this field. By honing your expertise in Azure Synapse and related skills, you can position yourself as a valuable asset in the job market and embark on a rewarding career journey. Prepare diligently, showcase your skills confidently, and seize the numerous job opportunities waiting for you in the Azure Synapse domain. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

