5.0 - 7.0 years
14 - 16 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

Requirements:
• 5 years of hands-on experience using Python, Spark, and SQL.
• Experience in AWS cloud usage and management.
• Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
• Experience with ML models and frameworks such as XGBoost, LightGBM, and Torch.
• Experience with orchestrators such as Airflow and Kubeflow.
• Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
• Fundamental understanding of Parquet, Delta Lake, and other data file formats.
• Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
• Strong written and verbal English communication skills, including with non-technical stakeholders.
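To make the stack concrete: the role pairs Databricks ML tooling with MLflow and gradient-boosting frameworks. Below is a minimal, self-contained sketch of MLflow experiment tracking around an XGBoost model, assuming mlflow, xgboost, and scikit-learn are installed; the dataset, parameters, and run contents are illustrative only and not part of the posting.

```python
import mlflow
import mlflow.xgboost
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy binary-classification data standing in for a real feature table
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="xgb-baseline"):
    params = {"max_depth": 4, "eta": 0.1, "objective": "binary:logistic"}
    model = xgb.train(params, xgb.DMatrix(X_tr, label=y_tr), num_boost_round=50)

    # Log hyperparameters, a holdout metric, and the model artifact
    mlflow.log_params(params)
    preds = (model.predict(xgb.DMatrix(X_te)) > 0.5).astype(int)
    mlflow.log_metric("accuracy", float((preds == y_te).mean()))
    mlflow.xgboost.log_model(model, artifact_path="model")
```

On Databricks the run would land in a workspace experiment; locally it writes to an ./mlruns directory.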
Posted 3 days ago
8.0 - 13.0 years
30 - 45 Lacs
Hyderabad
Work from Office
Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence.

Key Responsibilities:
• Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
• Design, implement, and optimize scalable, high-performance data architectures using Databricks.
• Build and manage data pipelines and workflows within Databricks.
• Ensure that best practices for security, scalability, and performance are followed.
• Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
• Oversee the technical aspects of the migration process, from planning through execution.
• Create documentation of the architecture, migration processes, and solutions.
• Provide training and support to teams post-migration so they can fully leverage Databricks.

Preferred candidate profile:
Experience:
• 7+ years of experience in data engineering, cloud architecture, or related fields.
• 3+ years of hands-on experience with Databricks, including implementing data engineering solutions, migration projects, and workload optimization.
• Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
• Experience in end-to-end data migration projects involving large-scale data infrastructure.
• Familiarity with ETL tools, data lakes, and data warehousing solutions.
Skills:
• Expertise in Databricks architecture and best practices for data processing.
• Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other current Databricks components.
• Proficiency in Databricks Asset Bundles.
• Expertise in designing and developing migration frameworks using Databricks.
• Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
• Familiarity with data governance, security, and compliance in cloud environments.
• Solid understanding of cloud-native data solutions and services.
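Since the posting centers on Delta Lake and Lakehouse architecture, here is a minimal PySpark sketch of writing and reading a Delta table. It assumes the delta-spark package and its jars are available (on Databricks these session configs come preset); the path and column names are illustrative.

```python
from pyspark.sql import SparkSession

# Session configured for Delta Lake; on Databricks this is preconfigured
spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write a toy DataFrame as a Delta table, then read it back
df = spark.range(100).withColumnRenamed("id", "order_id")
df.write.format("delta").mode("overwrite").save("/tmp/delta/orders")
spark.read.format("delta").load("/tmp/delta/orders").show(5)
```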
Posted 3 days ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Hi, we at HCL are looking for Databricks engineers with experience in AWS.
Location: Noida, Bangalore, Pune, Hyderabad, and Chennai
Please share the details below to srikanth.domala@hcltech.com:
• Total years of experience
• Experience in Databricks
• Experience in AWS
• Experience in Unity Catalog
• Experience in Collibra
• Current CTC
• Expected CTC
• Notice period
• Current location
• Preferred location

Primary skills: Databricks, PySpark, Python, and Collibra
Secondary skills: Unity Catalog, ETL, AWS

JD (detailed):
• Design data solutions on Databricks, including Delta Lake, data warehouses, data marts, and other data solutions to support the analytics needs of the organization.
• Proficiency in using Collibra Data Governance Center, Data Catalog, and Collibra Connect for data management and governance.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services, especially Python and PySpark.
• Design, develop, and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, data modeling), and understanding (documentation, exploration) of the data.
• Interact with stakeholders to understand the data landscape, conduct discovery exercises, develop proofs of concept, and demonstrate them to stakeholders.
• Implement data quality frameworks and standards using Collibra to ensure the integrity and accuracy of data; a small profiling sketch follows this posting.
• Excellent collaboration skills to work effectively with cross-functional teams.
• Strong verbal and written communication skills.
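The data-quality responsibilities above boil down to profiling and validation jobs. Here is a minimal PySpark sketch of that kind of check, with a toy in-memory DataFrame standing in for a real table; the column names are illustrative.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Toy input; in practice this would be a Delta table or an S3 path
df = spark.createDataFrame(
    [(1, "alice", "2024-01-01"), (2, None, "2024-01-02"), (2, "bob", None)],
    ["customer_id", "name", "signup_date"],
)

# Null counts per column
df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).show()

# Duplicate primary keys
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1)
print(f"duplicate customer_ids: {dupes.count()}")
```

In a Collibra-governed setup, results like these would typically be published as quality metrics against the cataloged asset.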
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Job Title: Consultant / Senior Consultant - Azure Data Engineering
Location: India - Gurgaon preferred
Industry: Insurance Analytics & AI Vertical

Role Overview: We are seeking a hands-on Consultant / Senior Consultant with strong expertise in Azure-based data engineering to support end-to-end development and delivery of data pipelines for our insurance clients. The ideal candidate will have a deep understanding of Azure Data Factory, ADLS, Databricks (preferably with DLT and Unity Catalog), SQL, and Python, and be comfortable working in a dynamic, client-facing environment. This is a key offshore role requiring both technical execution and solution-oriented thinking to support modern data platform initiatives.
• Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business requirements.
• Demonstrate decision-making, analytical, and problem-solving abilities.
• Strong verbal and written communication skills to manage client discussions.
• Familiarity with Agile methodologies: daily scrum, sprint planning, backlog refinement.

Key Responsibilities & Skillsets:
• Design and develop scalable and efficient data pipelines using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
• Build and maintain Databricks notebooks for data ingestion, transformation, and quality checks, using Python and SQL.
• Work with Delta Live Tables (DLT) and Unity Catalog (preferred) to improve pipeline automation, governance, and performance (see the DLT sketch after this posting).
• Collaborate with data architects, analysts, and onshore teams to translate business requirements into technical specifications.
• Troubleshoot data issues, ensure data accuracy, and apply best practices in data engineering and DevOps.
• Support the migration of legacy SQL pipelines to modern Python-based frameworks.
• Ensure adherence to data security, compliance, and performance standards, especially within insurance domain constraints.
• Provide documentation, status updates, and technical insights to stakeholders as required.
• Excellent communication skills and stakeholder management.

Required Skills & Experience:
• 3-7 years of strong hands-on experience in data engineering with a focus on Azure cloud technologies.
• Proficient in Azure Data Factory, Databricks, and ADLS Gen2, with working knowledge of Unity Catalog.
• Strong programming skills in SQL and Python, especially within Databricks notebooks; PySpark expertise is good to have.
• Experience with Delta Lake / Delta Live Tables (DLT) is a plus.
• Good understanding of ETL/ELT concepts, data modeling, and performance tuning.
• Exposure to insurance or financial services data projects is highly preferred.
• Strong communication and collaboration skills in an offshore delivery model.

Preferred Skills & Experience:
• Experience working in Agile/Scrum teams.
• Familiarity with Azure DevOps, Git, and CI/CD practices.
• Certifications in Azure Data Engineering (e.g., DP-203) or Databricks.
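As a rough illustration of the DLT work mentioned above, here is a minimal Delta Live Tables sketch with one ingestion table and one quality-gated table. It runs only inside a Databricks DLT pipeline (where the dlt module and the spark session are provided); the ADLS path, column names, and expectation rule are hypothetical.

```python
import dlt  # resolves only inside a Databricks Delta Live Tables pipeline
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 location; real account/container names come from config
RAW_PATH = "abfss://raw@youraccount.dfs.core.windows.net/policies/"

@dlt.table(comment="Raw policy records ingested from ADLS")
def policies_raw():
    return spark.read.format("json").load(RAW_PATH)  # spark is provided by DLT

@dlt.table(comment="Cleaned policies behind a basic quality gate")
@dlt.expect_or_drop("valid_premium", "premium > 0")  # drop rows failing the rule
def policies_clean():
    return dlt.read("policies_raw").withColumn("ingested_at", F.current_timestamp())
```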
Posted 5 days ago
10.0 - 18.0 years
15 - 30 Lacs
Pune, Bengaluru
Work from Office
Role & responsibilities: AWS with Databricks infrastructure lead
• Experienced in setting up Unity Catalog.
• Setting out how the group is to consume the model-serving processes.
• Developing MLflow routines.
• Experienced with ML models.
• Has used Gen AI features with guardrails, experimentation, and monitoring.
Posted 5 days ago
3.0 - 8.0 years
4 - 9 Lacs
Ahmedabad
Hybrid
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Data Governance
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Job Summary: We are seeking a highly skilled and motivated Governance Tool Specialist with 4 years of experience to join our team. The candidate will be responsible for the implementation, configuration, and management of our governance tools. This role requires a deep understanding of data governance principles, excellent technical skills, and the ability to work collaboratively with various stakeholders. Optional: an experienced Data Quality Specialist with extensive expertise in using Alex Solutions tools to ensure data accuracy, consistency, and reliability, with proficiency in data profiling, cleansing, validation, and governance.

Key Responsibilities:
Data Governance:
• Implement and configure Alex Solutions governance tools to meet client requirements.
• Collaborate with clients to understand their data governance needs and provide tailored solutions.
• Provide technical support and troubleshooting for governance tool issues.
• Conduct training sessions and workshops to educate clients on the use of governance tools.
• Develop and maintain documentation for governance tool configurations and processes.
• Monitor and report on the performance and usage of governance tools.
• Stay up to date with the latest developments in data governance and related technologies.
• Work closely with the product development team to provide feedback and suggestions for tool enhancements.
Data Quality:
• Use Alex Solutions' data quality tools to develop and implement processes, standards, and guidelines that ensure data accuracy and reliability.
• Conduct comprehensive data profiling using Alex Solutions, identifying and rectifying data anomalies and inconsistencies.
• Monitor data quality metrics through Alex Solutions, providing regular reports on data quality issues and improvements to stakeholders.
• Collaborate with clients to understand their data quality needs and provide tailored solutions using Alex Solutions.
• Implement data cleansing, validation, and enrichment processes within the Alex Solutions platform to maintain high data quality standards.
• Develop and maintain detailed documentation for data quality processes and best practices using Alex Solutions' tools.

Preferred Skills:
Must have: Alex Solutions
Good to have: Unity Catalog, Microsoft Purview, data quality tools
Secondary skills: Informatica, Collibra
• Experience with data cataloging, data lineage, data quality, and metadata management.
• Knowledge of regulatory requirements related to data governance (e.g., GDPR, CCPA).
• Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud).
• Certification in data governance or related fields.
• Proven experience with data governance and data quality tools and technologies.
• Strong understanding of data governance principles and best practices.
• Proficiency in SQL, data modeling, and database management.
• Excellent problem-solving and analytical skills.
• Strong communication and interpersonal skills.
Posted 1 week ago
3.0 - 7.0 years
22 - 25 Lacs
Bengaluru
Hybrid
Role & responsibilities:
• 3-6 years of experience in data engineering pipeline ownership and quality assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
• Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing large-scale datasets.
• Strong experience in writing SQL queries and performing data validation, data profiling, and schema checks.
• Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
• Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
• Deep understanding of Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
• Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows (see the DAG sketch after this posting).
• Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
• Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
• Solid data governance and data security practices within the Azure ecosystem.

Additional Requirements:
• Ownership of data pipelines, ensuring end-to-end execution, monitoring, and proactive troubleshooting of failures.
• Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
• Time flexibility to align with global teams for efficient communication and collaboration.
• Excellent problem-solving skills, with the ability to simulate and test edge cases in data processing environments.
• Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
• Experience with Unity Catalog, or willingness to learn.

Preferred candidate profile: Immediate joiners.
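Since the posting asks for Airflow-scheduled pipeline checks, here is a minimal DAG sketch that runs one daily validation task, assuming Airflow 2.x; the DAG id, task logic, and threshold are illustrative.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_silver_layer(**context):
    """Fail the task (and surface an alert) when a quality rule breaks."""
    row_count = 42  # in practice, query the Silver table here
    if row_count == 0:
        raise ValueError("Silver table is empty; failing this pipeline run")

with DAG(
    dag_id="medallion_quality_checks",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    PythonOperator(
        task_id="validate_silver_layer",
        python_callable=validate_silver_layer,
    )
```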
Posted 1 week ago
10.0 - 15.0 years
8 - 18 Lacs
Kochi
Remote
10 years of experience working in cloud-native data (Azure preferred); Databricks, SQL, and PySpark; migrating from Hive Metastore to Unity Catalog; implementing Row-Level Security (RLS) in Unity Catalog; metadata-driven ETL design patterns; Databricks certifications.
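For context on the RLS requirement: Unity Catalog implements row-level security by binding a boolean SQL function to a table as a row filter. A minimal sketch, run from a Databricks notebook (where spark is predefined); the catalog, schema, table, and group names are illustrative.

```python
# Define a filter function: EMEA analysts see only EMEA rows, others see all
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.region_filter(region STRING)
    RETURN IF(is_account_group_member('emea_analysts'), region = 'EMEA', TRUE)
""")

# Attach the filter to a table column; it is enforced on every query
spark.sql("""
    ALTER TABLE main.sales.orders
    SET ROW FILTER main.governance.region_filter ON (region)
""")
```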
Posted 1 week ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Remote
Detailed job description - skill set:
• Strong knowledge of Databricks, including creating scalable ETL (extract, transform, load) processes and data lakes.
• Strong knowledge of Python and SQL.
• Strong experience with the AWS cloud platform is a must.
• Good understanding of data modeling principles and data warehousing concepts.
• Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency.
• Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security.
• Hands-on experience with Databricks features such as Unity Catalog.

Mandatory skills: Databricks, AWS
Posted 1 week ago
7.0 - 12.0 years
27 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Band: 4C & 4D
Skill set: Unity Catalog + Python, Spark, Kafka

Inviting applications for the role of Lead Consultant - Databricks Developer with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
• Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
• Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
• Integrate real-time data streams using Apache Kafka and Databricks to enable near-real-time data processing (see the streaming sketch after this posting).
• Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
• Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
• Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
• Write efficient Python scripts for data extraction, transformation, and loading.
• Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
• Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.

Qualifications we seek in you!
Minimum qualifications:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• Experience in data engineering with a focus on Databricks development.
• Proven expertise in Databricks, Unity Catalog, and data lake management.
• Strong programming skills in Python for data processing and automation.
• Experience with Apache Spark for distributed data processing and optimization.
• Hands-on experience with Apache Kafka for data streaming and event processing.
• Proficiency in SQL for data querying and transformation.
• Strong understanding of data governance, data security, and data quality frameworks.
• Excellent communication skills and the ability to work in a cross-functional environment.
• Must have experience in the data engineering domain and have implemented at least two end-to-end projects in Databricks.
• Must have experience with Databricks components such as Delta Lake, dbConnect, DB API 2.0, and Databricks Workflows orchestration.
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures and algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance-optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on a cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit and integration tests.
• Must have strong communication skills and have worked in teams of five or more.
• Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications:
• Unity Catalog and basic governance knowledge.
• Understanding of Databricks SQL endpoints.
• CI/CD experience to build pipelines for Databricks jobs.
• Experience on a migration project to build a unified data platform.
• Knowledge of DBT.
• Knowledge of Docker and Kubernetes.
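As a rough sketch of the Kafka integration described above: Spark Structured Streaming on Databricks reads Kafka as a streaming source and can land the parsed events in a Delta table. This assumes a notebook session with the Kafka connector on the classpath; the broker, topic, schema, and paths are illustrative.

```python
from pyspark.sql import functions as F, types as T

# Expected shape of the Kafka message payload (illustrative)
schema = T.StructType([
    T.StructField("order_id", T.LongType()),
    T.StructField("amount", T.DoubleType()),
])

orders = (
    spark.readStream.format("kafka")  # spark: the notebook session
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
    .select("o.*")
)

# Continuously append parsed events to a Delta table
query = (
    orders.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start("/tmp/delta/orders_stream")
)
```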
Posted 2 weeks ago
7.0 - 12.0 years
15 - 22 Lacs
Bengaluru
Hybrid
Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks and PySpark.
• Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets (see the grants sketch after this posting).
• Develop high-performance SQL queries and optimize Spark jobs.
• Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
• Ensure data quality and compliance across all stages of the data lifecycle.
• Implement best practices for data security and lineage within the Databricks ecosystem.
• Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
• Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
• Strong hands-on skills with PySpark and Spark SQL.
• Solid experience writing and optimizing complex SQL queries.
• Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
• Experience with cloud platforms such as Azure or AWS.
• Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
• Databricks Certified Data Engineer Associate or Professional.
• Experience with tools such as Airflow, Git, Azure Data Factory, or dbt.
• Exposure to streaming data and real-time processing.
• Knowledge of DevOps practices for data engineering.
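The Unity Catalog governance work above is mostly expressed as SQL grants on the three-level namespace. A minimal sketch from a Databricks notebook; the catalog, schema, table, and group names are illustrative, and the audit system table must be enabled on the account.

```python
# Grant a group read access down the catalog > schema > table hierarchy
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `data_analysts`")

# Audit events (who accessed what) surface in system tables once enabled
spark.sql("SELECT * FROM system.access.audit LIMIT 10").show()
```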
Posted 2 weeks ago
5.0 - 10.0 years
14 - 24 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!

Job Description:
Notice period: 0 to 30 days only
Experience: 5 to 12 years
Interview mode: 2 rounds (one round is face-to-face)
Work mode: Hybrid (2-3 days WFO)

Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers); a minimal Bronze-to-Silver sketch follows this posting.
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure as code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems such as GitHub.
• Strong communication and collaboration skills.
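Here is the promised minimal sketch of a Bronze-to-Silver hop in a Medallion pipeline, runnable in a Databricks notebook (where spark is predefined); the table and column names are illustrative.

```python
from pyspark.sql import functions as F

# Bronze: raw ingested data, kept as-is
bronze = spark.read.table("main.bronze.orders_raw")

# Silver: deduplicated, validated, type-standardized
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)                      # basic quality rule
    .withColumn("order_date", F.to_date("order_ts"))  # standardize types
    .withColumn("_processed_at", F.current_timestamp())
)

silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.orders")
```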
Posted 2 weeks ago
6.0 - 11.0 years
18 - 33 Lacs
Bengaluru
Remote
Role & responsibilities (mandatory skills: ADB and Unity Catalog)

Job Summary: We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.

Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.

Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
Posted 2 weeks ago
5.0 - 8.0 years
6 - 24 Lacs
Hyderabad
Work from Office
Notice period: 30 to 45 days.
* Design, develop, and maintain data pipelines using PySpark, Databricks, Unity Catalog, and cloud services.
* Collaborate with cross-functional teams on ETL processes and report development.
Share resume: garima.arora@anetcorp.com
Posted 3 weeks ago
12.0 - 20.0 years
22 - 37 Lacs
Bengaluru
Hybrid
12+ years of experience in Data Architecture
Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog
Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, Data Lakes, NoSQL, OLTP
NP: Immediate
sachin@assertivebs.com
Posted 3 weeks ago
8.0 - 13.0 years
10 - 20 Lacs
Hyderabad, Pune
Work from Office
Job Title: Databricks Administrator
Client: Wipro
Employer: Advent Global Solutions
Location: Hyderabad / Pune
Work Mode: Hybrid
Experience: 8+ years (8 years relevant in Databricks administration)
CTC: 22.8 LPA
Notice Period: Immediate joiners to 15 days
Shift: General shift
Education Preferred: B.Tech / M.Tech / MCA / B.Sc (Computer Science)

Keywords:
• Databricks administration
• Unity Catalog
• Cluster creation, tuning, and administration
• RBAC in Unity Catalog
• Cloud administration, preferably on GCP; otherwise AWS/Azure knowledge is acceptable
• Roughly 80% Databricks and 20% cloud

Mandatory skills: Databricks admin on GCP/AWS

Job Description:
• Responsibilities include designing, implementing, and maintaining the Databricks platform, and providing operational support. Operational support covers platform setup and configuration, workspace administration, resource monitoring, providing technical support to data engineering, data science/ML, and application/integration teams, performing restores/recoveries, troubleshooting service issues, determining root causes, and resolving issues.
• The position also involves the management of security and changes.
• The position works closely with the team lead, other Databricks administrators, system administrators, and data engineers/scientists/architects/modelers/analysts.

Responsibilities:
• Responsible for the administration, configuration, and optimization of the Databricks platform to enable data analytics, machine learning, and data engineering activities within the organization.
• Collaborate with the data engineering team to ingest, transform, and orchestrate data.
• Manage privileges over the entire Databricks account, as well as at the workspace, Unity Catalog, and SQL warehouse levels.
• Create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions.
• Install, configure, and maintain Databricks clusters and workspaces (a cluster-provisioning sketch follows this posting).
• Maintain platform currency with security, compliance, and patching best practices.
• Monitor and manage cluster performance, resource utilization, and platform costs, and troubleshoot issues to ensure optimal performance.
• Implement and manage access controls and security policies to protect sensitive data.
• Manage schema data with Unity Catalog: create, configure, catalog, external storage, and access permissions.
• Administer interfaces with Google Cloud Platform.

Required Skills:
• 3+ years of production support of the Databricks platform.

Preferred:
• 2+ years of experience in AWS/Azure/GCP PaaS administration.
• 2+ years of experience in automation frameworks such as Terraform.
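For a flavor of the cluster-administration work, here is a minimal sketch using the official databricks-sdk Python package, assuming workspace authentication via the DATABRICKS_HOST/DATABRICKS_TOKEN environment variables; the cluster name and sizing are illustrative.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up host/token from the environment

# Provision a small autoterminating cluster and wait until it is running
cluster = w.clusters.create_and_wait(
    cluster_name="shared-etl",
    spark_version=w.clusters.select_spark_version(long_term_support=True),
    node_type_id=w.clusters.select_node_type(local_disk=True),
    autotermination_minutes=30,  # cost control: shut down idle clusters
    num_workers=2,
)
print(f"created cluster {cluster.cluster_id}")
```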
Posted 1 month ago
5 - 10 years
16 - 27 Lacs
Pune, Chennai, Bengaluru
Hybrid
If interested, please share the below details to PriyaM4@hexaware.com:
• Total experience
• Current CTC
• Expected CTC
• Notice period
• Location

Must-have skill: Unity Catalog

We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.

Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.

Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
Posted 1 month ago
8 - 10 years
11 - 21 Lacs
Noida, Mumbai (All Areas)
Work from Office
As the Full Stack Developer within the Data and Analytics team, you will be responsible for the delivery of innovative data and analytics solutions, ensuring Al Futtaim Business stays at the forefront of technical development.
Posted 1 month ago
6 - 9 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Sharing the JD for your reference:
Experience: 6-10+ years
Primary skill set: Azure Databricks, ADF, SQL, Unity Catalog, PySpark/Python

Kindly share the following details:
• Updated CV
• Relevant skills
• Total experience
• Current CTC
• Expected CTC
• Notice period
• Current location
• Preferred location
Posted 1 month ago
8 - 12 years
13 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities

Job Summary: We are seeking a highly skilled and motivated Data Governance Executor to join our team. The ideal candidate will be responsible for implementing data governance frameworks, with a focus on data governance solutions using Unity Catalog and Azure Purview. This role ensures the implementation of data quality standardization, data classification, and the execution of data governance policies.

Key Responsibilities:
Data Governance Solution Implementation:
• Develop and implement data governance policies and procedures using Unity Catalog and Azure Purview (a Unity Catalog governance sketch follows this posting).
• Ensure data governance frameworks align with business objectives and regulatory requirements.
Data Catalog Management:
• Manage and maintain the Unity Catalog, ensuring accurate and up-to-date metadata.
• Oversee the classification and organization of data assets within Azure Purview.
Data Quality Assurance:
• Implement data quality standards with data engineers and perform regular audits to ensure data accuracy and integrity.
• Collaborate with data stewards to resolve data quality issues.
Stakeholder Collaboration:
• Work closely with data owners, stewards, and business stakeholders to understand data needs and requirements.
• Provide training and support to ensure effective use of data governance tools.
Reporting and Documentation:
• Generate reports on data governance metrics and performance.
• Maintain comprehensive documentation of data governance processes and policies.

Qualifications:
Education: Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree preferred.
Experience: Proven experience in data governance, data management, or related roles, including 2+ years of hands-on experience with Unity Catalog and Azure Purview.
Skills:
• Strong understanding of data governance principles and best practices.
• Proficiency in data cataloging, metadata management, and data quality assurance.
• Excellent analytical, problem-solving, and communication skills.
• Ability to work collaboratively with cross-functional teams.

Preferred Qualifications:
• Certification in data governance or related fields.
• Experience with other data governance tools and platforms.
• Knowledge of cloud data platforms and services.
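A minimal sketch of the Unity Catalog side of this role: creating governed containers, documenting ownership, and tagging sensitive assets, run from a Databricks notebook against an existing table; all names and tag values are illustrative.

```python
# Governed containers for a business domain
spark.sql("CREATE CATALOG IF NOT EXISTS governed")
spark.sql("CREATE SCHEMA IF NOT EXISTS governed.finance")

# Document ownership and classify an existing table as PII
spark.sql(
    "COMMENT ON TABLE governed.finance.invoices "
    "IS 'Curated invoice data; owner: finance data steward'"
)
spark.sql(
    "ALTER TABLE governed.finance.invoices SET TAGS ('classification' = 'pii')"
)
```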
Posted 1 month ago
8 - 12 years
20 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities: As a Cloud Technical Lead - Data, you will:
• Build and maintain data pipelines to enable faster, better, data-informed decision-making through customer enterprise business analytics.
• Collaborate with stakeholders to understand their strategic objectives and identify opportunities to leverage data and data quality.
• Design, develop, and maintain large-scale data solutions on the Azure cloud platform.
• Implement ETL pipelines using Azure Data Factory, Azure Databricks, and other related services.
• Develop and deploy data models and data warehousing solutions using Azure Synapse Analytics and Azure SQL Database.
• Build high-performing, robust, and resilient data storage solutions using Azure Blob Storage, Azure Data Lake, Snowflake, and other related services.
• Develop and implement data security policies to ensure compliance with industry standards.
• Provide support for data-related issues and mentor junior data engineers in the team.
• Define and manage data governance policies to ensure data quality and compliance with industry standards.
• Collaborate with data architects, data scientists, developers, and business stakeholders to design data solutions that meet business requirements.
• Coordinate with users to understand data needs and deliver data with a focus on data quality, reuse, consistency, security, and regulatory compliance.
• Conceptualize and visualize data frameworks.

Preferred candidate profile:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 8+ years of experience in data engineering, with 3+ years of hands-on Databricks experience.
• Strong expertise in the Microsoft Azure cloud platform and services, particularly Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.
• Extensive experience working with large data sets, with hands-on technology skills covering robust data architecture, data modeling, and database design.
• Strong programming skills in SQL, Python, and PySpark.
• Experience with Unity Catalog and DBT, and data governance knowledge.
• Good to have: experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures.
• Agile development environment experience, applying DevOps along with data quality and governance principles.
• Good leadership skills to guide and mentor the work of less experienced personnel.
• Ability to contribute to continual improvement by suggesting improvements to architecture or new technologies, mentoring junior employees, and being ready to shoulder ad-hoc responsibilities.
• Experience with cross-team collaboration and strong interpersonal and relationship-building skills.
• Ability to communicate effectively through presentations and strong verbal and written skills.
Posted 1 month ago