1620 Azure Databricks Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the employer's job portal.

7.0 - 12.0 years

10 - 15 Lacs

Mumbai, Pune, Chennai

Work from Office

Role Description:
Design, provision, and manage secure, scalable, and high-performance Azure Databricks platforms tailored to support enterprise-wide data transformation for insurance data workloads.
Collaborate with architects, engineers, and security teams to define and implement robust infrastructure standards, ensuring reliable connectivity and integration with legacy systems, cloud data sources, and third-party platforms.
Implement Infrastructure as Code solutions (e.g., Terraform) to streamline provisioning and configuration of Azure and Databricks resources, supporting DevOps best practices.
Automate environment deployment, monitoring, and incident response workflows using GitHub Actions to increase consistency, traceability, and efficiency.
Monitor platform health, resource utilization, and performance; anticipate scaling needs and conduct regular tuning to maintain optimal operation for data pipelines and analytics workloads.
Enforce security and compliance with enterprise and regulatory standards, including RBAC, managed identities, encryption, PDPO, GDPR, and other insurance-specific requirements.
Oversee integration of Informatica tools to support data governance, including cataloguing, data lineage, and compliance checks across the platform.
Document infrastructure architecture, network topology, security configurations, and operational runbooks to support ongoing governance, audit, and handover.
Troubleshoot infrastructure issues, perform root cause analysis, and drive resolution and continuous improvement for platform reliability.
Stay current with new Azure, Databricks, and DevOps features, continuously recommending and implementing enhancements to platform capabilities and cost-effectiveness for Hong Kong Life and General Insurance business priorities.
Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of experience designing, deploying, and administering Azure cloud platforms, with a focus on supporting Azure Databricks for insurance data and analytics workloads.
Deep expertise in provisioning, configuring, and managing Azure Databricks clusters and workspaces, including supporting the processing and storage of structured, semi-structured, and unstructured insurance data.
Skilled in integrating Azure Data Factory and Azure Data Lake Storage Gen2 with Databricks for seamless, automated data flows.
Proficient in using infrastructure-as-code tools (Terraform) for automated deployment and configuration of Azure and Databricks services.
Experience deploying and integrating Informatica solutions for comprehensive metadata management, cataloguing, and governance.
Strong understanding of platform security (RBAC, NSG, managed identities, Key Vault), monitoring, alerting, and cost optimization in regulated insurance environments.
Hands-on with GitHub Actions for CI/CD pipeline automation related to platform and pipeline deployments.
Experience with platform incident response, troubleshooting, and performance optimization for mission-critical insurance data workloads.
Excellent documentation, collaboration, and communication skills to support technical and business users in the insurance domain.
Location: Pune / Bangalore / Mumbai / Chennai / Hyderabad / Gurugram
Band: M3/M4 (7 to 14 years)
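The security expectations above (encryption, network isolation, tagging for cost and classification) are often enforced as guardrails before any Terraform apply or API call. A minimal sketch in Python, where the rule names and the shape of the config dict are invented for illustration and do not correspond to Databricks' actual API:

```python
# Hypothetical pre-provisioning guardrail: validate a cluster config dict
# against a few security rules before any IaC deployment proceeds.
def validate_cluster_config(config: dict) -> list[str]:
    """Return a list of policy violations (an empty list means compliant)."""
    violations = []
    if not config.get("enable_encryption", False):
        violations.append("encryption must be enabled")
    if config.get("public_ip", True):
        violations.append("clusters must not have public IPs")
    tags = config.get("custom_tags", {})
    for required in ("cost_center", "data_classification"):
        if required not in tags:
            violations.append(f"missing required tag: {required}")
    return violations

config = {
    "enable_encryption": True,
    "public_ip": False,
    "custom_tags": {"cost_center": "INS-01"},
}
print(validate_cluster_config(config))  # ['missing required tag: data_classification']
```

In practice the same checks would typically live in a Databricks cluster policy or a Terraform validation step rather than ad-hoc code; this only illustrates the shape of the logic.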

Posted 2 days ago

Apply

0.0 - 2.0 years

1 - 3 Lacs

Vadodara

Remote

Responsibilities:
Day-to-day:
Able to act independently under guided supervision to investigate issues.
Coordinate work and build strong relationships with On-Shore Managers.
Able to form views on how new processes ought to be constructed.
Able to contribute both to BAU (Business as Usual) enhancements and to work under the umbrella of a project.
Responsible for End of Period checks: volume and quantity extremes and trends checks (comparison with external data).
Communication Skills:
Excellent written and verbal communication skills to effectively communicate with diverse audiences.
Understanding the wider business:
Develops a basic understanding of the Operations functions.
Develops an understanding of the commercial usage of our data.
Develops a broader understanding of our direct competitors.
Training & Development:
Take ownership for self-development and, where available, participate in structured training.
Gain proficiency in all relevant databases, data interrogation, and reporting tools (for example Databricks, SQL, Python, Excel).
Communication & Collaboration:
Communicate in an appropriate manner (e.g. verbally, presenting, or creating a PowerPoint, Word document, or email).
Adhere to deadlines and escalate where there is a risk of delays.
Demonstrate and role-model best practice and techniques, including a positive communication style.
Display a proactive attitude when working both within and outside of the team.
Demonstrate clear, direct, and to-the-point communication at Data Methods team meetings.
Issue Management and Best Practice:
Proactive identification and root cause analysis of Data Methods issues and development of best-practice solutions to improve the underlying methodology and processes.
Support regular methodology review meetings with the On-Shore Manager and Leads to establish priorities and future requirements.
Knowledge sharing through the team, either in team meetings or day-to-day with the wider Data Methods team.
Able to think through complex processes and how to construct and improve them, considering in detail the positive and negative implications of different approaches and how best to test and assess them.
Resource management:
Organising your workload efficiently.
Adhering to schedules.
Escalating any risks to deadlines and capacity challenges.
Requirements:
Education & Experience: Bachelor's, Master's, or Doctorate degree; 1+ years' experience.
Knowledge: Awareness of sampling and weighting principles. Strong statistical, numerical, and logical skills. Strong aptitude for data analysis. Strong knowledge of metrics and KPIs.
Tools: SQL (intermediate), Python (intermediate), Excel (advanced), Power BI (intermediate), Databricks (desirable), Azure DevOps (basic), Access (basic).
Passion and Drive: Passionate about data quality, integrity, and best practices. Passionate about delivering high-quality data to clients on schedule.
Communication: Good English communication, presentation, interpersonal, and writing skills. Good listening skills. Good online, virtual, and in-person collaboration skills. Comfortable presenting panel and data methods to internal and external audiences.
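The End of Period checks described above (volume extremes compared against external data) reduce to threshold logic over period totals. A minimal sketch, with invented figures and an assumed 20% tolerance:

```python
# End-of-period volume check: flag periods whose internal volume deviates
# from an external benchmark by more than a tolerance (20% is an assumed
# value, not from the posting).
def check_volumes(internal: dict, external: dict, tolerance: float = 0.20) -> list[str]:
    flagged = []
    for period, volume in internal.items():
        benchmark = external.get(period)
        if benchmark is None:
            flagged.append(f"{period}: no external benchmark")
            continue
        deviation = abs(volume - benchmark) / benchmark
        if deviation > tolerance:
            flagged.append(f"{period}: {deviation:.0%} deviation")
    return flagged

internal = {"2025-06": 1050, "2025-07": 1900}
external = {"2025-06": 1000, "2025-07": 1000}
print(check_volumes(internal, external))  # ['2025-07: 90% deviation']
```

A real implementation would add trend checks across consecutive periods; the flagged output is what gets escalated to the On-Shore Manager.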

Posted 2 days ago

Apply

10.0 - 17.0 years

0 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid

Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 23rd AUG [Saturday] 2025 - Azure Databricks / Data Factory / SQL & PySpark or Spark / Synapse / MS Fabric
Dear Candidate,
I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer/Architect. We are hosting an Open Walk-in Drive in Chennai, Tamil Nadu on 23rd AUG [Saturday] 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark or Spark align perfectly with what we are seeking.
Total years of experience: 4 to 15 years
Relevant experience: 3+ years
Details of the Walk-in Drive:
Date: 23rd AUG [Saturday] 2025
Experience: 4 to 18 years
Time: 9:30 AM to 4:00 PM
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Work Location: Chennai, Bangalore, Coimbatore, Mumbai, Pune & Noida
Key Skills and Experience: As an Azure Data Engineer, we are looking for candidates who possess expertise in the following: Databricks, Data Factory, SQL, PySpark/Spark.
Roles and Responsibilities: As part of our dynamic team, you will be responsible for:
Designing, implementing, and maintaining data pipelines.
Collaborating with cross-functional teams to understand data requirements.
Optimizing and troubleshooting data processes.
Leveraging Azure data services to build scalable solutions.
What to Bring: 1. Updated resume 2. Photo ID, passport-size photo
How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies.
If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Posted 2 days ago

Apply

4.0 - 9.0 years

0 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid

Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 23rd AUG [Saturday] 2025 - Azure Databricks / Data Factory / SQL & PySpark or Spark / Synapse / MS Fabric
Dear Candidate,
I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer/Architect. We are hosting an Open Walk-in Drive in Chennai, Tamil Nadu on 23rd AUG [Saturday] 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark or Spark align perfectly with what we are seeking.
Total years of experience: 4 to 15 years
Relevant experience: 3+ years
Details of the Walk-in Drive:
Date: 23rd AUG [Saturday] 2025
Experience: 4 to 18 years
Time: 9:30 AM to 4:00 PM
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Work Location: Chennai, Bangalore, Coimbatore, Mumbai, Pune & Noida
Key Skills and Experience: As an Azure Data Engineer, we are looking for candidates who possess expertise in the following: Databricks, Data Factory, SQL, PySpark/Spark.
Roles and Responsibilities: As part of our dynamic team, you will be responsible for:
Designing, implementing, and maintaining data pipelines.
Collaborating with cross-functional teams to understand data requirements.
Optimizing and troubleshooting data processes.
Leveraging Azure data services to build scalable solutions.
What to Bring: 1. Updated resume 2. Photo ID, passport-size photo
How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies.
If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Posted 2 days ago

Apply

2.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

About the team: AI/ML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AI/ML is also used in research and development of natural language processing systems.
What you can look forward to as an AI/ML expert:
Lead Development: Own end-to-end design, implementation, deployment, and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines).
Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines.
Technical Leadership & Mentorship: Mentor, review, and coach junior/mid-level engineers on best practices in ML, MLOps, and GenAI.
Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow).
Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints). Proficient in Azure Databricks: Spark jobs, Delta Lake, and MLflow for tracking both ML and GenAI experiments.
Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, and orchestration (Azure Data Factory, Databricks Jobs). Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models.
Your profile as a Specialist:
Bachelor's or Master's in Computer Science/Engineering, Ph.D. in Data Science, or a related field.
Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery.
Track record of production deployments using Python, Azure ML, Databricks, and GenAI frameworks.
Hands-on data engineering experience designing and operating robust pipelines for both structured and unstructured data (text, embeddings).
Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties. Experience working in Agile/Scrum environments and collaborating with cross-functional teams.
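The retrieval step of the RAG pipelines mentioned above is, at its core, a nearest-neighbour search over embeddings. A toy sketch in pure Python, where the document names and the tiny hand-made vectors stand in for real embedding-model output and a real vector database:

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity of pre-computed
# embeddings. The 3-dimensional vectors below are invented stand-ins for
# real embedding-model output (typically hundreds of dimensions).
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, corpus, top_k=1):
    """corpus: list of (doc_id, embedding) pairs; returns best-matching doc ids."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

corpus = [
    ("policy_terms", [0.9, 0.1, 0.0]),
    ("claims_faq",   [0.1, 0.9, 0.2]),
]
print(retrieve([0.2, 0.8, 0.1], corpus))  # ['claims_faq']
```

In production, the retrieved documents would be injected into the LLM prompt, and the brute-force sort would be replaced by an approximate-nearest-neighbour index in a vector database.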

Posted 2 days ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Pune

Work from Office

Roles & Responsibilities:
Provide technical leadership and mentorship to a team of data engineers.
Collaborate with stakeholders to define project requirements and deliverables.
Ensure best practices in data security, governance, and compliance.
Requirements:
10+ years of experience working on Azure Databricks or Apache Spark-based platforms.
Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, or Azure SQL Data Warehouse.
Proficiency in programming languages such as Python, Scala, PySpark, and Spark SQL for data processing and transformation.
Exposure to CI/CD pipelines, version control (Git/Azure DevOps), and data quality is a plus.
Must have experience working with streaming data sources and Kafka (preferred).
Our Offering:
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
Wellbeing programs and work-life balance - integration and passion-sharing events.
Attractive salary and company initiative benefits.
Courses and conferences.
Hybrid work culture.
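The streaming-ingestion work described above usually revolves around windowed aggregation of an event stream. A pure-Python sketch of a tumbling-window count, standing in for what Spark Structured Streaming or a Kafka consumer would do at scale (the 60-second window size is an arbitrary choice):

```python
from collections import defaultdict

# Tumbling-window count over a stream of (timestamp_seconds, payload) pairs.
# Each event falls into exactly one fixed-size window; windows are keyed by
# their start time.
def tumbling_counts(events, window_seconds=60):
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

events = [(5, "a"), (30, "b"), (61, "c"), (119, "d"), (120, "e")]
print(tumbling_counts(events))  # {0: 2, 60: 2, 120: 1}
```

A real streaming job would additionally handle late-arriving events with watermarks and write results incrementally to a sink such as a Delta table.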

Posted 2 days ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Python (Programming Language), Microsoft Azure Data Services, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data architecture strategy, ensuring that the solutions you develop are scalable and efficient.
Must have:
Proficiency in programming languages such as Python and SQL, and experience with big data technology (Spark).
Experience with cloud platforms, mainly Microsoft Azure.
Experience with Microsoft Azure Databricks and Azure Data Factory.
Experience with CI/CD processes and tools, including Azure DevOps, Jenkins, and Git, to ensure smooth and efficient deployment of data solutions.
Familiarity with APIs to push and pull data from data systems and platforms.
Familiarity with understanding software architecture high-level design documents and translating them into development tasks.
Familiarity with the Microsoft data stack, such as Azure Data Factory, Azure Synapse, Databricks, Azure DevOps, and Fabric / Power BI.
Nice to have:
Experience with machine learning and AI technologies.
Data modelling & architecture.
ETL pipeline design.
Azure DevOps.
Logging and monitoring using Azure / Databricks services.
Apache Kafka.
Additional Information:
- A 15-year full-time education is required.
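The ETL (extract, transform, load) process this role centres on can be reduced to three small functions. A minimal sketch in plain Python, where the field names and the in-memory "warehouse" are invented for illustration (a real pipeline would read from an API or lake and write to Databricks/Delta tables):

```python
# Minimal ETL sketch: extract raw records, transform (rename fields, clean
# types), then load into a target. All names here are illustrative.
def extract():
    # Stand-in for pulling raw records from a source system or API.
    return [{"CustID": " 101 ", "amt": "250.5"}, {"CustID": "102", "amt": "99"}]

def transform(records):
    # Normalize field names and coerce types; strip stray whitespace.
    return [
        {"customer_id": int(r["CustID"].strip()), "amount": float(r["amt"])}
        for r in records
    ]

def load(records, target):
    # Stand-in for writing to a warehouse table; returns rows loaded.
    target.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0])  # 2 {'customer_id': 101, 'amount': 250.5}
```

The same extract/transform/load separation is what orchestration tools like Azure Data Factory schedule and monitor at scale.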

Posted 3 days ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Python (Programming Language), Microsoft Azure Data Services, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data-driven decisions to support organizational goals.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze data requirements.
- Design and implement data models that support business needs.
Must have:
Proficiency in programming languages such as Python and SQL, and experience with big data technology (Spark).
Experience with cloud platforms, mainly Microsoft Azure.
Experience with Microsoft Azure Databricks and Azure Data Factory.
Experience with CI/CD processes and tools, including Azure DevOps, Jenkins, and Git, to ensure smooth and efficient deployment of data solutions.
Familiarity with APIs to push and pull data from data systems and platforms.
Familiarity with understanding software architecture high-level design documents and translating them into development tasks.
Familiarity with the Microsoft data stack, such as Azure Data Factory, Azure Synapse, Databricks, Azure DevOps, and Fabric / Power BI.
Nice to have:
Experience with machine learning and AI technologies.
Data modelling & architecture.
ETL pipeline design.
Azure DevOps.
Logging and monitoring using Azure / Databricks services.
Apache Kafka.
Additional Information:
- A 15-year full-time education is required.

Posted 3 days ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

About The Role
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft Azure Architecture
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve collaborating with various stakeholders to gather and analyze requirements, creating application designs that align with business objectives, and ensuring that the applications are user-friendly and efficient. You will also participate in team meetings to discuss project progress and contribute innovative ideas to enhance application functionality and performance.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with industry trends and technologies.
- Collaborate with cross-functional teams to ensure alignment on project goals and deliverables.
Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Databricks.
- Good-to-have: Experience with Microsoft Azure Architecture.
- Strong understanding of application design principles and methodologies.
- Experience in developing and deploying applications on cloud platforms.
- Familiarity with programming languages relevant to application development.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.

Posted 3 days ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

About The Role
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft Azure Architecture
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. A typical day involves collaborating with cross-functional teams to gather insights, analyzing user needs, and translating them into functional specifications. You will engage in discussions to refine application designs and ensure alignment with business objectives, while also addressing any challenges that arise during the development process. Your role will be pivotal in ensuring that the applications developed are user-friendly and effectively meet the needs of the organization.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze requirements for application design.
- Develop and document application specifications and design documents.
Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Databricks.
- Good-to-have: Experience with Microsoft Azure Architecture.
- Strong understanding of cloud computing concepts and services.
- Experience in application design and development methodologies.
- Familiarity with agile development practices and tools.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.

Posted 3 days ago

Apply

8.0 - 10.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Required Skills:
• 8+ years of experience in data architecture and design.
• Strong hands-on experience with Azure Data Services, Databricks, and ADF.
• Proven experience in insurance data domains and product-oriented data design.

Posted 3 days ago

Apply

5.0 - 7.0 years

5 - 5 Lacs

Mumbai, Chennai, Gurugram

Work from Office

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions, while accounting for others' developmental activities.
Outcomes:
Interpret the application/feature/component design to develop it in accordance with specifications.
Code, debug, test, document, and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating your own solutions.
Optimize efficiency, cost, and quality.
Influence and improve customer satisfaction.
Set FAST goals for self/team; provide feedback on FAST goals of team members.
Measures of Outcomes:
Adherence to engineering process and standards (coding standards).
Adherence to project schedule/timelines.
Number of technical issues uncovered during the execution of the project.
Number of defects in the code.
Number of defects post delivery.
Number of non-compliance issues.
On-time completion of mandatory compliance trainings.
Outputs Expected:
Code: Code as per design. Follow coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, design documentation, and requirements test cases/results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review and create unit test cases, scenarios, and execution. Review the test plan created by the testing team. Provide clarifications to the testing team.
Domain relevance: Advise software developers on design and development of features and components with a deep understanding of the business problem being addressed for the client. Learn more about the customer domain, identifying opportunities to provide valuable additions to customers. Complete relevant domain certifications.
Manage Project: Manage delivery of modules and/or manage user stories.
Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort estimation for projects.
Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review the reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos.
Manage Team: Set FAST goals and provide feedback. Understand aspirations of team members and provide guidance and opportunities. Ensure the team is engaged in the project.
Certifications: Take relevant domain/technology certifications.
Skill Examples:
Explain and communicate the design/development to the customer.
Perform and evaluate test results against product specifications.
Break down complex problems into logical components.
Develop user interfaces and business software components.
Use data models.
Estimate time and effort required for developing/debugging features/components.
Perform and evaluate tests in the customer or target environment.
Make quick decisions on technical/project-related challenges.
Manage a team; mentor and handle people-related issues in the team.
Maintain high motivation levels and positive dynamics in the team.
Interface with other teams, designers, and other parallel practices.
Set goals for self and team. Provide feedback to team members.
Create and articulate impactful technical presentations.
Follow a high level of business etiquette in emails and other business communication.
Drive conference calls with customers, addressing customer questions.
Proactively ask for and offer help.
Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
Build confidence with customers by meeting deliverables on time with quality.
Estimate time, effort, and resources required for developing/debugging features/components.
Make appropriate utilization of software/hardware.
Strong analytical and problem-solving abilities.
Knowledge Examples:
Appropriate software programs/modules.
Functional and technical design.
Programming languages - proficient in multiple skill clusters.
DBMS.
Operating systems and software platforms.
Software Development Life Cycle.
Agile - Scrum or Kanban methods.
Integrated development environments (IDE).
Rapid application development (RAD).
Modelling technology and languages.
Interface definition languages (IDL).
Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved.
Additional Comments:
Accountabilities:
- Work in iterative processes to map data into common formats, perform advanced data analysis, validate findings or test hypotheses, and communicate results and methodology.
- Provide recommendations on how to utilize our data to optimize search, increase data accuracy, and help us better understand our existing data.
- Communicate technical information successfully with technical and non-technical audiences such as third-party vendors, external customer technical departments, various levels of management, and other relevant parties.
- Collaborate effectively with all team members and attend regular team meetings.
Required Qualifications:
- Bachelor's or Master's degree in computer science, engineering, mathematics, statistics, or an equivalent technical discipline.
- 6+ years of experience working with data mapping, data analysis, and numerous large data sets/data warehouses.
- Strong application development experience using Java and C++.
- Strong experience with Azure Databricks, ADLS Gen2, Azure Data Explorer, and Event Hubs technologies.
- Experience with application containerization and deployment processes (Docker, Helm charts, GitHub, CI/CD pipelines).
- Experience working with Cosmos DB is preferred.
- Ability to assemble, analyze, and evaluate big data and make appropriate, well-reasoned recommendations to stakeholders.
- Good analytical and problem-solving skills; good understanding of different data structures, algorithms, and their usage in solving business problems.
- Strong communication (verbal and written) and customer service skills. Strong interpersonal, communication, and presentation skills applicable to a wide audience, including senior and executive management and customers.
- Strong skills in setting, communicating, implementing, and achieving business objectives and goals.
- Strong organization/project planning, time management, and change management skills across multiple functional groups and departments, and strong delegation skills involving prioritizing and reprioritizing projects and managing projects of various size and complexity.
Required Skills: Java, Spring Boot, Azure cloud, Docker, Helm charts (for Kubernetes deployment).
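The "map data into common formats" accountability above is typically implemented as a per-source field mapping into one canonical schema. A minimal sketch (shown in Python for brevity, though this role's stack is Java/Spring Boot); the source names, field maps, and canonical schema are all invented examples:

```python
# Map heterogeneous source records into one common format. Each source
# declares how its field names translate to the canonical schema.
FIELD_MAPS = {
    "vendor_a": {"cust": "customer_id", "total": "amount"},
    "vendor_b": {"CustomerID": "customer_id", "order_total": "amount"},
}

def to_common(source: str, record: dict) -> dict:
    """Translate one raw record from `source` into the canonical schema."""
    mapping = FIELD_MAPS[source]
    return {canonical: record[src] for src, canonical in mapping.items()}

print(to_common("vendor_a", {"cust": 7, "total": 42.0}))
# {'customer_id': 7, 'amount': 42.0}
```

Keeping the mappings as data rather than code means onboarding a new source is a configuration change, which is why this pattern scales across "numerous large data sets".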

Posted 3 days ago

Apply

6.0 - 8.0 years

15 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities

We are seeking a hands-on Data Engineer to develop, optimize, and maintain automated data pipelines supporting data governance and analytics initiatives. This role will focus on building production-ready workflows for ingestion, transformation, quality checks, lineage capture, access auditing, cost usage analysis, retention tracking, and metadata integration, primarily using Azure Databricks, Azure Data Lake, and Microsoft Purview.

Experience: 4+ years in data engineering, with strong Azure and Databricks experience

Key Responsibilities
Pipeline Development – Design, build, and deploy robust ETL/ELT pipelines in Databricks (PySpark, SQL, Delta Lake) to ingest, transform, and curate governance and operational metadata from multiple sources landed in Databricks.
Granular Data Quality Capture – Implement profiling logic to capture issue-level metadata (source table, column, timestamp, severity, rule type) to support drill-down from dashboards into specific records and enable targeted remediation.
Governance Metrics Automation – Develop data pipelines to generate metrics for dashboards covering data quality, lineage, job monitoring, access & permissions, query cost, usage & consumption, retention & lifecycle, policy enforcement, sensitive data mapping, and governance KPIs.
Microsoft Purview Integration – Automate asset onboarding, metadata enrichment, classification tagging, and lineage extraction for integration into governance reporting.
Data Retention & Policy Enforcement – Implement logic for retention tracking and policy compliance monitoring (masking, RLS, exceptions).
Job & Query Monitoring – Build pipelines to track job performance, SLA adherence, and query costs for cost and performance optimization.
Metadata Storage & Optimization – Maintain curated Delta tables for governance metrics, structured for efficient dashboard consumption.
Testing & Troubleshooting – Monitor pipeline execution, optimize performance, and resolve issues quickly.
Collaboration – Work closely with the lead engineer, QA, and reporting teams to validate metrics and resolve data quality issues.
Security & Compliance – Ensure all pipelines meet organizational governance, privacy, and security standards.

Required Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field
4+ years of hands-on data engineering experience, with Azure Databricks and Azure Data Lake
Proficiency in PySpark, SQL, and ETL/ELT pipeline design
Demonstrated experience building granular data quality checks and integrating governance logic into pipelines
Working knowledge of Microsoft Purview for metadata management, lineage capture, and classification
Experience with Azure Data Factory or equivalent orchestration tools
Understanding of data modeling, metadata structures, and data cataloging concepts
Strong debugging, performance tuning, and problem-solving skills
Ability to document pipeline logic and collaborate with cross-functional teams

Preferred Qualifications
Microsoft certification in Azure Data Engineering
Experience in governance-heavy or regulated environments (e.g., finance, healthcare, hospitality)
Exposure to Power BI or other BI tools as a data source consumer
Familiarity with DevOps/CI-CD for data pipelines in Azure
Experience integrating both cloud and on-premises data sources into Azure
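The issue-level quality capture this posting describes can be sketched in plain Python as a simplified stand-in for the PySpark profiling job; the table name, rules, and severity labels here are illustrative, not taken from the posting:

```python
from datetime import datetime, timezone

# Illustrative quality rules: each returns True when the value passes.
RULES = {
    "not_null": lambda v: v is not None,
    "positive_amount": lambda v: v is None or v > 0,
}

def profile(table_name, rows, column_rules):
    """Capture issue-level metadata for each failed check: source table,
    column, timestamp, severity, and rule type, as the posting describes."""
    issues = []
    for row in rows:
        for column, (rule_type, severity) in column_rules.items():
            if not RULES[rule_type](row.get(column)):
                issues.append({
                    "source_table": table_name,
                    "column": column,
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "severity": severity,
                    "rule_type": rule_type,
                })
    return issues

rows = [{"policy_id": "P1", "premium": 120.0},
        {"policy_id": None, "premium": -5.0}]
issues = profile("curated.policies", rows,
                 {"policy_id": ("not_null", "high"),
                  "premium": ("positive_amount", "medium")})
print(len(issues))  # → 2 (both checks fail on the second row)
```

In a Databricks pipeline the same records would typically land in a curated Delta table keyed for dashboard drill-down, rather than a Python list.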

Posted 3 days ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

hyderabad, chennai, bengaluru

Work from Office

Job Summary: We are seeking an experienced and motivated Microsoft Fabric Architect with 5-6 years of experience in designing and building data solutions using Microsoft Fabric, Power BI, SQL, PySpark, and Azure Databricks. This role requires strong hands-on skills in cloud-based data architecture and modern analytics tools, with an emphasis on data modeling, performance optimization, and advanced data engineering.

Key Responsibilities:
Architect and implement modern data platforms using Microsoft Fabric, integrating Power BI, SQL, and Azure Databricks.
Build scalable and optimized data pipelines and ETL/ELT workflows using SQL and PySpark.
Develop robust data models to support business intelligence and analytics use cases.
Design and implement data visualization solutions using Power BI.
Collaborate with data engineers, analysts, and business teams to define and deliver end-to-end data solutions.
Ensure data governance, quality, and security standards are met across the platform.
Monitor, troubleshoot, and continuously improve the performance of data solutions.

Required Skills and Experience:
5 to 6 years of experience in data architecture or data engineering roles.
At least 1 year of hands-on experience with Microsoft Fabric and deep knowledge of Power BI.
Strong proficiency in SQL for data modeling, transformation, and performance tuning.
Experience developing data pipelines and transformations using PySpark in Azure Databricks.
Familiarity with the Microsoft Azure ecosystem (e.g., Azure Storage, Azure SQL).
Good understanding of data governance, access control, and compliance in cloud environments.
Strong problem-solving skills and the ability to work independently or as part of a team.
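As a small illustration of the SQL transformation and performance-tuning work this role calls for, a common warehouse pattern is deduplicating to the latest row per key with a window function. The sketch below uses stdlib sqlite3 purely as a stand-in engine; the table and column names are made up for the example:

```python
import sqlite3

# Keep only the most recently loaded row per customer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (customer_id TEXT, amount REAL, loaded_at TEXT);
    INSERT INTO raw_orders VALUES
        ('C1', 100.0, '2024-01-01'),
        ('C1', 150.0, '2024-02-01'),
        ('C2',  80.0, '2024-01-15');
""")
rows = conn.execute("""
    SELECT customer_id, amount FROM (
        SELECT customer_id, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY loaded_at DESC) AS rn
        FROM raw_orders)
    WHERE rn = 1
    ORDER BY customer_id
""").fetchall()
print(rows)  # [('C1', 150.0), ('C2', 80.0)]
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` pattern carries over directly to Fabric warehouses and Databricks SQL.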

Posted 3 days ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

gurugram

Work from Office

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion - As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements: Primary skills: Technology->Machine Learning->Python
Preferred Skills: Technology->Machine Learning->Python

Posted 3 days ago

Apply

7.0 - 12.0 years

15 - 18 Lacs

pune

Work from Office

Greetings from NAM Info!!! Please go through the job description. If you are interested in this opportunity, please reply with the following information to praveen@nam-it.com:
Full Name (as in Aadhaar):
Current Location:
Expected CTC:
Present CTC:
Notice Period (Last Working Day, if any):
PAN Number:

Job Title: Azure Data Engineer
Company: Thermax Limited
Location: Pune
Job Type: Full-Time Employee of Thermax
Experience Level: Senior (7+ years)
Interview Process: 2-3 virtual rounds

Job Summary: Thermax is seeking a Senior Azure Databricks Engineer to lead the design, development, and deployment of scalable data solutions on the Azure cloud platform. The ideal candidate will have deep experience in Azure Databricks, Spark, and modern data engineering practices, and will play a pivotal role in driving advanced analytics, predictive maintenance, and energy efficiency use cases across our industrial operations.

Key Responsibilities:
Design and implement robust data pipelines in Azure Databricks using PySpark, SQL, and Delta Lake.
Build and maintain scalable ETL/ELT workflows for ingesting data from various industrial sources (SCADA, PLCs, SAP, IoT).
Collaborate with data scientists, business analysts, and domain experts to support AI/ML model development and deployment.
Work with Azure Data Lake Storage (ADLS Gen2), Azure Synapse, Azure Data Factory, and Azure Event Hub for data integration and transformation.
Optimize Spark jobs for performance, reliability, and cost-efficiency.
Implement CI/CD pipelines using DevOps tools (e.g., Azure DevOps, Git).
Ensure data governance, lineage, and security compliance in collaboration with the IT and security teams.
Support and mentor junior engineers in cloud data engineering best practices.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
7+ years of experience in Data Engineering, including 3+ years in Azure Databricks/Spark.
Strong proficiency in PySpark, Spark SQL, and Delta Lake architecture.
Experience with Azure services: Data Factory, Data Lake, Synapse, Event Hub, and Key Vault.
Solid understanding of distributed computing, big data processing, and real-time streaming.
Familiarity with industrial data protocols (e.g., OPC UA, MQTT) is a strong plus.
Strong knowledge of data modeling, schema design, and data quality principles.

Preferred Qualifications:
Experience in the energy, manufacturing, or utilities sectors.
Exposure to IoT analytics, predictive maintenance, or digital twin architectures.
Certifications such as Azure Data Engineer Associate (DP-203) or Databricks Certified Data Engineer.
Experience with MLflow, Databricks Feature Store, or integrating with Power BI.
Working knowledge of containerization (Docker, Kubernetes) is a plus.

Soft Skills:
Strong problem-solving and debugging skills.
Excellent communication and stakeholder engagement.
Ability to lead discussions across technical and non-technical teams.
Self-starter, detail-oriented, and collaborative mindset.

Regards,
Praveen
Staffing Executive
NAM Info Pvt Ltd
Email: praveen@nam-it.com
Website: www.nam-it.com
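The Delta Lake pipeline work this posting describes typically centers on upsert (MERGE) semantics when landing industrial readings. Below is a minimal plain-Python sketch of the matched-update / not-matched-insert behavior; the sensor fields are illustrative, and in Databricks this step would be a `MERGE INTO` against a Delta table rather than dicts:

```python
def merge_upsert(target, updates, key="sensor_id"):
    """Mimic Delta Lake MERGE semantics on plain dicts:
    rows matching on the key are updated, others are inserted."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"sensor_id": "S1", "temp_c": 71.2},
          {"sensor_id": "S2", "temp_c": 64.0}]
updates = [{"sensor_id": "S2", "temp_c": 65.5},   # matched -> update
           {"sensor_id": "S3", "temp_c": 80.1}]   # not matched -> insert
print(merge_upsert(target, updates))
```

The same shape scales to streaming ingestion: each micro-batch of SCADA/IoT readings is merged into the curated table keyed on the device identifier.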

Posted 3 days ago

Apply

5.0 - 10.0 years

5 - 12 Lacs

bengaluru

Work from Office

Job Description: AWS/Azure/SAP; ETL; Data Modelling; Data Integration & Ingestion; Data Manipulation and Processing; GitHub Actions; Azure DevOps; Data Factory; Databricks; SQL DB; Synapse; Stream Analytics; Glue; Airflow; Kinesis; Redshift; SonarQube; PyTest

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a part of the data and analytics engineering team at PwC, your focus will be on utilizing advanced technologies and techniques to create robust data solutions for clients. Your role will involve transforming raw data into actionable insights, enabling informed decision-making, and contributing to business growth. Specifically in data engineering at PwC, you will be responsible for designing and constructing data infrastructure and systems that facilitate efficient data processing and analysis. This will include the development and implementation of data pipelines, data integration, and data transformation solutions. At PwC - AC, we are seeking an Azure Manager specializing in Data & AI, with a strong background in managing end-to-end implementations of Azure Databricks within large-scale Data & AI programs. In this role, you will be involved in architecting, designing, and deploying scalable and secure solutions that meet business requirements, encompassing ETL, data integration, and migration. Collaboration with cross-functional, geographically dispersed teams and clients will be key to understanding strategic needs and translating them into effective technology solutions. Your responsibilities will span technical project scoping, delivery planning, team leadership, and ensuring the timely execution of high-quality solutions. Utilizing big data technologies, you will create scalable, fault-tolerant components, engage stakeholders, overcome obstacles, and stay abreast of emerging technologies to enhance client ROI. Candidates applying for this role should possess 8-12 years of hands-on experience and meet the following position requirements: - Proficiency in designing, architecting, and implementing scalable Azure Data Analytics solutions utilizing Azure Databricks. - Expertise in Azure Databricks, including Spark architecture and optimization. - Strong grasp of Azure cloud computing and big data technologies. 
- Experience in traditional and modern data architecture and processing concepts, encompassing relational databases, data warehousing, big data, NoSQL, and business analytics.
- Proficiency in Azure ADLS, Azure Databricks, Data Flows, HDInsight, and Azure Analysis Services.
- Ability to build stream-processing systems using solutions like Storm or Spark Streaming.
- Practical knowledge of designing and building Near-Real-Time and Batch Data Pipelines, with expertise in SQL and data modeling within an Agile development process.
- Experience in the architecture, design, implementation, and support of complex application architectures.
- Hands-on experience implementing Big Data solutions using the Microsoft Data Platform and Azure Data Services.
- Familiarity with working in a DevOps environment using tools like Chef, Puppet, or Terraform.
- Strong analytical and troubleshooting skills, along with proficiency in quality processes and implementation.
- Excellent communication skills and business/domain knowledge in Financial Services, Healthcare, Consumer Markets, Industrial Products, Telecommunications, Media and Technology, or Deal Advisory.
- Familiarity with Application DevOps tools like Git, CI/CD frameworks, Jenkins, or GitLab.
- Good understanding of Data Modeling and Data Architecture.

Certification in Data Engineering on Microsoft Azure (DP-200/201/203) is required.

Additional Information:
- Travel Requirements: Travel to client locations may be necessary based on project needs.
- Line of Service: Advisory
- Horizontal: Technology Consulting
- Designation: Manager
- Location: Bangalore, India

In addition to the above, the following skills are considered advantageous:
- Cloud expertise in AWS, GCP, Informatica Cloud, Oracle Cloud.
- Knowledge of Cloud DW technologies like Snowflake and Databricks.
- Certifications in Azure Databricks.
- Familiarity with Open Source technologies such as Apache Spark, Hadoop, NoSQL, Kafka, and Solr/Elasticsearch.
- Data Engineering skills in Java, Python, PySpark, and R.
- Data Visualization proficiency in Tableau and Qlik.

Education qualifications accepted include BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Key Responsibilities:
Design, develop, and maintain data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
Implement ETL solutions to integrate data from various sources into Azure Data Lake and Data Warehouse.
Hands-on experience with SQL, Python, and PySpark for data processing.
Expertise in building Power BI dashboards and reports, with strong DAX and Power Query skills.
Experience with Power BI Service, Gateways, and embedding reports.
Develop Power BI datasets, semantic models, and row-level security for data access control.
Experience with Azure Data Factory, Azure Databricks, and Azure Synapse.
Strong customer orientation, decision-making, problem-solving, communication, and presentation skills.
Very good judgment skills and the ability to shape compelling solutions and solve unstructured problems with assumptions.
Very good collaboration skills and the ability to interact with multicultural and multifunctional teams spread across geographies.
Strong executive presence and entrepreneurial spirit.
Superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.

You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new parent support via flexible work.
You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The role of a Databricks PySpark IICS professional at Infosys involves working as part of the consulting team to address customer issues, develop innovative solutions, and ensure successful deployment to achieve client satisfaction. Your responsibilities will include contributing to proposal development, solution design, product configuration, and conducting pilots and demonstrations. Additionally, you will lead small projects, provide high-quality solutions, and support organizational initiatives. The technical requirements for this role include expertise in Data On Cloud Platform, specifically Azure Data Lake (ADL). You should also possess the ability to develop strategies that drive innovation, growth, and profitability for clients. Familiarity with software configuration management systems, industry trends, and financial processes is essential. Strong problem-solving skills, collaboration abilities, and knowledge of pricing models are key for success in this role. Preferred skills for this position include experience with Azure Analytics Services, specifically Azure Databricks. By leveraging your domain knowledge, client interfacing skills, and project management capabilities, you can help clients navigate their digital transformation journey effectively. If you are passionate about delivering value-added solutions, driving business transformation, and embracing the latest technologies, this opportunity at Infosys is the right fit for you.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

As a technical leader at Advisor360, you will play a crucial role in the planning and implementation of mid- to large-scale projects. Your expertise will be instrumental in architecting and implementing cloud-based data solutions using your strong technical skills. You will be responsible for validating requirements, performing business and technical analysis, designing cloud-native applications, and writing optimized PySpark-based data pipelines. Ensuring compliance with coding standards and best practices for AI-assisted code generation will be a part of your responsibilities. You will utilize GenAI tools like Cursor, Claude, and other LLMs to decompose complex requirements and auto-generate UI, API, and database scripts for rapid development. Your proficiency in introducing new software design patterns, AI-driven development workflows, and emerging cloud technologies will be crucial for the team's success. As a subject matter expert within the organization, you will help resolve complex technical issues related to cloud data engineering, distributed computing, and microservices architecture. Mentoring team members and fostering collaborative learning environments, particularly in areas related to GenAI, Azure, PySpark, Databricks, and CI/CD automation, will be a key aspect of your role. Your preferred strengths should include proficiency in GenAI-powered development, strong experience with SQL, relational databases, and GIT, as well as expertise in Microsoft Azure cloud services. Familiarity with serverless architectures, containerization, and big data frameworks will be beneficial for this position. To excel in this role, you are expected to have 7+ years of software engineering experience with Python and .NET, proven ability to analyze and automate requirement-based development using GenAI, and hands-on expertise in building and deploying cloud-based data processing applications. 
Strong knowledge of SDLC methodologies, proficiency in coding standards, and experience in integrating AI-based automation for improving code quality are also required. While wealth management domain experience is a plus, it is not mandatory. Candidates should be willing to learn and master the domain through on-the-job experience. Join Advisor360 for a rewarding career experience where your contributions are recognized and rewarded. Enjoy competitive base salaries, annual performance-based bonuses, and comprehensive health benefits. We trust our employees to manage their time effectively and offer an unlimited paid time off program to ensure you can perform at your best every day. We are committed to diversity and inclusion, believing that it drives innovation, and we welcome individuals from all backgrounds to bring their authentic selves to work every day.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Senior Associate in the Data, Analytics & Specialist Managed Service Tower at PwC, with 6 to 10 years of experience, you will be part of a team of problem solvers addressing complex business issues from strategy to execution. Your responsibilities at this management level include utilizing feedback and reflection for personal development, being flexible in stretch opportunities, and demonstrating critical thinking skills to solve unstructured problems. You will also be involved in ticket quality reviews, project status reporting, and ensuring adherence to SLAs and incident, change, and problem management processes. Seeking diverse opportunities, communicating effectively, upholding ethical standards, demonstrating leadership capabilities, and collaborating in a team environment are essential aspects of this role. You will also be expected to contribute to cross competency work, COE activities, and manage escalations and risks. As a Senior Azure Cloud Engineer, you are required to have a minimum of 6 years of hands-on experience in building advanced Data warehousing solutions on leading cloud platforms, along with 3-5 years of Operate/Managed Services/Production Support Experience. Your responsibilities will include designing scalable and secure data structures, developing data pipelines for downstream consumption, and implementing ETL processes using tools like Informatica, Talend, SSIS, AWS, Azure, Spark, SQL, and Python. Experience with data analytics tools, data governance solutions, ITIL processes, and strong communication and problem-solving skills are essential for this role. Knowledge of Azure Data Factory, Azure SQL Database, Azure Data Lake, Azure Blob Storage, Azure Databricks, Azure Synapse Analytics, and Apache Spark is also required. Additionally, experience in data validation, cleansing, security, and privacy measures, as well as SQL querying, data governance, and performance tuning are essential. 
Nice-to-have qualifications for this role include Azure certification.

Managed Services: The Data, Analytics & Insights Managed Service at PwC focuses on providing integrated services and solutions to clients, enabling them to optimize operations and accelerate outcomes through technology and human-enabled experiences. The team emphasizes a consultative approach to operations, leveraging industry insights and talent to drive transformational journeys and sustained client outcomes. As a member of the Data, Analytics & Insights Managed Service team, you will be involved in critical offerings, help desk support, enhancement, optimization work, and strategic advisory engagements. Your role will require a mix of technical expertise and relationship management skills to support customer engagements effectively.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

As an Associate Data Engineer/Analyst at EY, you will leverage your expertise to create, develop, and maintain scalable big data processing pipelines in distributed computing environments. Your role will involve designing and implementing interactive Power BI reports and dashboards tailored to the specific needs of various business units. Collaborating with cross-functional teams, you will gather data requirements and design effective data solutions. Your responsibilities will also include executing data ingestion, processing, and transformation workflows to support analytical and machine learning applications. To excel in this role, you should have over 2 years of experience as a data analyst or data engineer, along with a Bachelor's degree in a relevant field. Proficiency in SQL, a solid understanding of Azure data engineering tools like Azure Data Factory, and familiarity with Python programming are essential. You should also be competent in using Azure Databricks and possess expertise in Power BI and other Power Platform tools. Knowledge of large language models and generative AI solutions will be advantageous. Furthermore, you are expected to stay updated with emerging technologies and best practices in data processing and analytics, integrating them into EY's data engineering methodologies. Your role will require expertise in data modelling, data warehousing principles, data governance best practices, and ETL processes. Strong written and verbal communication skills, including documentation, presentation, and data storytelling, are essential for effective collaboration within the team. Joining EY means contributing to building a better working world, where new value is created for clients, people, society, and the planet while fostering trust in capital markets. With the support of data, AI, and advanced technology, EY teams help clients shape a confident future and address pressing issues of today and tomorrow. 
Working across assurance, consulting, tax, strategy, and transactions services, EY teams operate in a globally connected, multi-disciplinary network, providing services in over 150 countries and territories.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

You should have at least 5 years of experience working as a Data Engineer. Your expertise should include a strong background in Azure Cloud services and proficiency in tools such as Azure Databricks, PySpark, and Delta Lake. It is essential to have solid experience in Python and FastAPI for API development, as well as familiarity with Azure Functions for serverless API deployments. Experience in managing ETL pipelines using Apache Airflow is also required. Hands-on experience with databases like PostgreSQL and MongoDB is necessary. Strong SQL skills and the ability to work with large datasets are key for this role.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

hyderabad, telangana

On-site

As a member of the data engineering team at PepsiCo, you will play a crucial role in developing and overseeing data product build & operations. Your primary responsibility will be to drive a strong vision for how data engineering can proactively create a positive impact on the business. Working alongside a team of data engineers, you will build data pipelines, rest data on the PepsiCo Data Lake, and facilitate exploration and access for analytics, visualization, machine learning, and product development efforts across the company. Your contributions will directly impact the design, architecture, and implementation of PepsiCo's flagship data products in areas such as revenue management, supply chain, manufacturing, and logistics. You will collaborate closely with process owners, product owners, and business users, operating in a hybrid environment that includes in-house, on-premise data sources as well as cloud and remote systems. Your responsibilities will include active contribution to code development, managing and scaling data pipelines, building automation and monitoring frameworks for data pipeline quality and performance, implementing best practices around systems integration, security, performance, and data management, and empowering the business through increased adoption of data, data science, and business intelligence. Additionally, you will collaborate with internal clients, drive solutioning and POC discussions, and evolve the architectural capabilities of the data platform by engaging with enterprise architects and strategic partners. To excel in this role, you should have 6+ years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture.
You should also possess 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools, along with expertise in SQL optimization, performance tuning, and programming languages like Python, PySpark, and Scala. Experience in cloud data engineering, specifically in Azure, is essential, and familiarity with Azure cloud services is a plus. You should have experience in data modeling, data warehousing, building ETL pipelines, and working with data quality tools. Proficiency in MPP database technologies, cloud infrastructure, containerized services, version control systems, deployment & CI tools, and Azure services like Data Factory, Databricks, and Azure Machine Learning tools is desired. Additionally, experience with Statistical/ML techniques, retail or supply chain solutions, metadata management, data lineage, data glossaries, agile development, DevOps, and DataOps concepts, and business intelligence tools will be advantageous. A degree in Computer Science, Math, Physics, or related technical fields is preferred for this role.
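The "automation and monitoring frameworks for data pipeline quality and performance" mentioned in this posting often start with a retry-and-log wrapper around flaky extraction tasks. A minimal sketch, assuming transient source failures (all names here are illustrative, not PepsiCo's internal tooling):

```python
import time

def run_with_retry(task, retries=3, delay_s=0.0, log=print):
    """Run a pipeline task, logging each attempt and retrying on
    failure; re-raise once the retry budget is exhausted."""
    for attempt in range(1, retries + 1):
        try:
            result = task()
            log(f"attempt {attempt}: success")
            return result
        except Exception as exc:
            log(f"attempt {attempt}: failed ({exc})")
            if attempt == retries:
                raise
            time.sleep(delay_s)

calls = {"n": 0}
def flaky_extract():
    """Simulated source that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source timeout")
    return ["row1", "row2"]

print(run_with_retry(flaky_extract))  # ['row1', 'row2'] on the third attempt
```

Orchestrators such as Azure Data Factory and Databricks Workflows provide retry policies natively; a wrapper like this is mainly useful for custom tasks outside those schedulers.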

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies