
10 Medallion Architecture Jobs

Note: JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

10.0 - 14.0 years

25 - 37 Lacs

Noida

Hybrid

Source: Naukri

Description:
Accountable for the data engineering lifecycle, including research, proofs of concept, architecture, design, development, testing, deployment, and maintenance. Key responsibilities:
- Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, high-quality data solutions that transform data for meaningful analysis and analytics while ensuring operability.
- Build instrumentation into the development process so that data pipelines can be monitored and internal problems detected before they result in user-visible outages or data quality issues.
- Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues.
- Embrace continuous learning of engineering practices to ensure adoption of industry best practices and technology, including DevOps, cloud, and Agile thinking.
- Reduce tech debt and drive tech transformation, including open-source adoption, cloud adoption, and HCP assessment and adoption.
- Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security.

Functions may include database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, data provisioning, and similar roles. Responsibilities may include Platform-as-a-Service and cloud solutions with a focus on data stores and associated ecosystems. Duties may include management of design services, providing sizing and configuration assistance, ensuring strict data quality, and performing needs assessments. Analyzes current business practices, processes, and procedures, and identifies future business opportunities for leveraging data storage and retrieval system capabilities. Manages relationships with software and hardware vendors to understand the potential architectural impact of different vendor strategies and data acquisition. May design schemas, write SQL or other data scripting, and help support development of analytics and applications that build on top of the data. Selects, develops, and evaluates personnel to ensure the efficient operation of the function.
- Work is generally self-directed and not prescribed.
- Works with less structured, more complex issues.
- Serves as a resource to others.

Qualifications:
- Undergraduate degree or equivalent experience.
- Proficient in design and documentation of data exchanges across various channels, including APIs, streams, and batch feeds.
- Proficient in source-to-target mapping and gap analysis; applies data transformation rules based on an understanding of business rules and data structures.
- Develops and implements scripts to maintain and monitor performance tuning.
- Designs scalable job scheduler solutions and advises on appropriate tools and technologies.
- Works across multiple domains to define and build data models.
- Understands all connected technology services and their impacts; assesses designs and proposes options to ensure the solution meets business needs for security, scalability, reliability, and feasibility.
- Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA.
- Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR).
- Experience with data analytics tools such as Tableau, Power BI, or similar.
- Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks.
- Experience optimizing data processing workflows for performance and cost-efficiency.
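As an illustration of the instrumentation-and-monitoring responsibility above, here is a minimal, hypothetical PySpark sketch: it computes simple data-quality metrics and fails fast before publishing, so problems surface in logs before they become user-visible. All table and column names (claims_raw, claims_clean, member_id) and the 1% null-rate threshold are assumptions, not part of the listing.

```python
# Minimal pipeline-instrumentation sketch: log quality metrics and gate
# the publish step, so bad data is caught before downstream consumers see it.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("claims_pipeline")

spark = SparkSession.builder.appName("claims-pipeline").getOrCreate()

df = spark.read.table("claims_raw")  # hypothetical source table

row_count = df.count()
null_ids = df.filter(F.col("member_id").isNull()).count()
null_rate = null_ids / row_count if row_count else 0.0

log.info("claims_raw rows=%d null_member_id_rate=%.4f", row_count, null_rate)

# Fail fast instead of propagating bad data downstream.
if row_count == 0 or null_rate > 0.01:
    raise ValueError(f"Quality gate failed: rows={row_count}, null_rate={null_rate:.4f}")

df.dropna(subset=["member_id"]).write.mode("overwrite").saveAsTable("claims_clean")
```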

Posted 1 day ago

Apply

16.0 - 21.0 years

40 - 60 Lacs

Pune, Gurugram, Delhi / NCR

Hybrid

Source: Naukri

Role & responsibilities:
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake, data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile:
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Solid experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of relational, dimensional, and Data Vault modelling.
- Experience implementing two or more data models in a database with data security and access controls.
- Good experience with OLTP and OLAP systems.
- Excellent data analysis skills, with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of data quality and data governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the big data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience leading large and complex teams.
- Good understanding of agile methodology.

You are important to us; let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Perks and benefits
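The data-profiling duty above lends itself to a short illustration. Below is a minimal PySpark sketch of dataset profiling (null and distinct counts per column). The table name customer_orders and the SparkSession setup are assumptions for illustration only, not part of the listing.

```python
# Minimal data-profiling sketch: per-column distinct and null counts,
# the kind of quick insight pass a data architect asks the team for.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.read.table("customer_orders")  # hypothetical dataset

total = df.count()
profile = df.select(
    *[F.countDistinct(c).alias(f"{c}_distinct") for c in df.columns],
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in df.columns],
)
profile.show(truncate=False)
print(f"rows={total}")
```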

Posted 1 week ago

Apply

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Source: Naukri

Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. PySpark experience alone, without Python, is not sufficient.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay current with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
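The listing's core stack (Kafka, Python, Databricks) can be illustrated with a minimal Structured Streaming sketch that reads JSON events from a Kafka topic into a Bronze Delta table. The broker address, topic name, event schema, checkpoint path, and table name are all hypothetical, and the sketch assumes the Kafka connector (spark-sql-kafka) is available on the cluster, as it is on Databricks.

```python
# Minimal Kafka -> Delta Structured Streaming sketch (illustrative names).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("ehr-event-stream").getOrCreate()

event_schema = StructType([
    StructField("patient_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "ehr-events")                 # hypothetical topic
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers bytes; decode the value column and parse the JSON payload.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/chk/ehr-events")  # state for exactly-once bookkeeping
 .toTable("bronze.ehr_events"))
```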

Posted 2 weeks ago

Apply

3.0 - 7.0 years

22 - 25 Lacs

Bengaluru

Hybrid

Source: Naukri

Role & responsibilities:
- 3-6 years of experience in data engineering pipeline ownership and quality assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
- Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing large-scale datasets.
- Strong experience writing SQL queries and performing data validation, data profiling, and schema checks.
- Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
- Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
- Deep understanding of Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
- Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows.
- Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
- Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
- Solid data governance and data security practices within the Azure ecosystem.

Additional requirements:
- Ownership of data pipelines, ensuring end-to-end execution, monitoring, and proactive troubleshooting of failures.
- Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
- Time flexibility to align with global teams for efficient communication and collaboration.
- Excellent problem-solving skills, with the ability to simulate and test edge cases in data processing environments.
- Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
- Experience with Unity Catalog, or willingness to learn.

Preferred candidate profile: Immediate joiners.
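To make the Medallion validation duties above concrete, here is a minimal sketch of a Bronze-to-Silver promotion step with schema enforcement and integrity checks in PySpark. The table names, required column set, and quality rules are illustrative assumptions, not taken from the listing.

```python
# Minimal Bronze -> Silver promotion sketch with schema and integrity checks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.table("bronze.orders")  # hypothetical Bronze table

# Schema enforcement: fail if required columns are missing.
required = {"order_id", "customer_id", "amount", "order_ts"}
missing = required - set(bronze.columns)
if missing:
    raise ValueError(f"Schema check failed, missing columns: {missing}")

# Integrity checks: no null keys, no duplicate order_ids, no negative amounts.
clean = (bronze
         .dropna(subset=["order_id", "customer_id"])
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") >= 0))

dropped = bronze.count() - clean.count()
print(f"bronze -> silver: dropped {dropped} bad rows")  # simple anomaly signal

clean.write.mode("overwrite").saveAsTable("silver.orders")
```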

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 33 Lacs

Navi Mumbai

Work from Office

Source: Naukri

About Us: Celebal Technologies is a leading solutions and services company in the fields of Data Science, Big Data, Enterprise Cloud, and Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Auto Loader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming: output modes (append, update, complete), checkpointing, and state management.
- Experience with Kafka integration for real-time data pipelines.
- Deep understanding of Medallion Architecture.
- Proficiency with Databricks Auto Loader and schema evolution.
- Deep understanding of Unity Catalog and foreign catalogs.
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames.
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies).
- Proven data management strategies; excellent with governance and access management.
- Strong in data modelling, data warehousing concepts, and Databricks as a platform.
- Solid understanding of window functions.
- Proven experience with merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios.
- Industry expertise in at least one of Retail, Telecom, or Energy.
- Real-time use case execution.
- Data modelling.
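Below is a minimal sketch of the Auto Loader ingestion pattern this listing names, reading files incrementally into a Bronze Delta table with schema-evolution tracking. The paths and table names are hypothetical, and the cloudFiles source assumes a Databricks runtime.

```python
# Minimal Databricks Auto Loader sketch: incremental file ingestion into Bronze.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("autoloader-bronze").getOrCreate()

bronze_stream = (spark.readStream
                 .format("cloudFiles")                                   # Auto Loader source
                 .option("cloudFiles.format", "json")
                 .option("cloudFiles.schemaLocation", "/schemas/telemetry")  # tracks inferred schema
                 .load("/landing/telemetry/"))                           # hypothetical landing path

(bronze_stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/chk/telemetry_bronze")
 .option("mergeSchema", "true")   # allow additive schema evolution on the sink
 .toTable("bronze.telemetry"))
```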

Posted 3 weeks ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Mumbai

Work from Office

Source: Naukri

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Auto Loader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming: output modes (append, update, complete), checkpointing, and state management.
- Experience with Kafka integration for real-time data pipelines.
- Deep understanding of Medallion Architecture.
- Proficiency with Databricks Auto Loader and schema evolution.
- Deep understanding of Unity Catalog and foreign catalogs.
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames.
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies).
- Proven data management strategies; excellent with governance and access management.
- Strong in data modelling, data warehousing concepts, and Databricks as a platform.
- Solid understanding of window functions.
- Proven experience with merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios.
- Industry expertise in at least one of Retail, Telecom, or Energy.
- Real-time use case execution.
- Data modelling.

Location: Mumbai
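One of the proven-experience items above, SCD Type 2 via merge/upsert, is sketched below using Delta Lake SQL from PySpark: a MERGE closes out changed current rows, then an INSERT adds the new current versions. The dimension schema (customer_id, address, start_date, end_date, is_current) and table names are illustrative assumptions, and a Databricks/Delta runtime is assumed.

```python
# Minimal SCD Type 2 sketch with Delta Lake MERGE (hypothetical tables/columns).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2-merge").getOrCreate()

# Step 1: close out the current row for any customer whose address changed.
spark.sql("""
MERGE INTO silver.customer_dim AS tgt
USING staging.customer_updates AS src
ON tgt.customer_id = src.customer_id AND tgt.is_current = true
WHEN MATCHED AND tgt.address <> src.address THEN
  UPDATE SET is_current = false, end_date = current_date()
""")

# Step 2: insert a new current version for changed and brand-new customers.
spark.sql("""
INSERT INTO silver.customer_dim
SELECT src.customer_id, src.address, current_date() AS start_date,
       NULL AS end_date, true AS is_current
FROM staging.customer_updates src
LEFT JOIN silver.customer_dim tgt
  ON tgt.customer_id = src.customer_id AND tgt.is_current = true
WHERE tgt.customer_id IS NULL OR tgt.address <> src.address
""")
```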

Posted 3 weeks ago

Apply

16.0 - 21.0 years

16 - 21 Lacs

Delhi NCR, India

On-site

Source: Foundit

Role & responsibilities:
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake, data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile:
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Solid experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of relational, dimensional, and Data Vault modelling.
- Experience implementing two or more data models in a database with data security and access controls.
- Good experience with OLTP and OLAP systems.
- Excellent data analysis skills, with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of data quality and data governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the big data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience leading large and complex teams.
- Good understanding of agile methodology.

Posted 1 month ago

Apply

16.0 - 21.0 years

16 - 21 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Source: Foundit

Role & responsibilities:
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake, data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile:
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Solid experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of relational, dimensional, and Data Vault modelling.
- Experience implementing two or more data models in a database with data security and access controls.
- Good experience with OLTP and OLAP systems.
- Excellent data analysis skills, with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of data quality and data governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the big data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience leading large and complex teams.
- Good understanding of agile methodology.

Posted 1 month ago

Apply

16.0 - 21.0 years

16 - 21 Lacs

Pune, Maharashtra, India

On-site

Source: Foundit

Role & responsibilities:
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake, data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile:
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Solid experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of relational, dimensional, and Data Vault modelling.
- Experience implementing two or more data models in a database with data security and access controls.
- Good experience with OLTP and OLAP systems.
- Excellent data analysis skills, with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of data quality and data governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the big data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience leading large and complex teams.
- Good understanding of agile methodology.

Posted 1 month ago

Apply

14 - 20 years

20 - 35 Lacs

Pune, Chennai, Mumbai (All Areas)

Work from Office

Source: Naukri

Role: Data & Analytics Architect
Required skill set: Data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics.
Preferred specializations or prior experience: Manufacturing, Hi-Tech, and CPG use cases where analytics and AI have been applied.
Location: PAN India

Desired competencies (managerial/behavioural):

Must-have:
- 14+ years of IT industry experience, including at least 3 years in IoT / Industry 4.0 / Industrial AI.
- In-depth knowledge of data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics.
- Strong written and verbal communication with good presentation skills.
- Excellent knowledge of data governance, Medallion architecture, UNS, data lake architectures, AI/ML, and data science.
- Experience with cloud platforms (e.g., GCP, AWS, Azure) and cloud-based analytics and AI/ML services.
- Proven experience working with clients in the Manufacturing, CPG, High-Tech, Oil & Gas, or Pharma industries.
- Good understanding of technology trends, market forces, and industry imperatives.
- Excellent communication, presentation, and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.

Good-to-have:
- Degree in Data Science or Statistics.
- Led consulting and advisory programs at the CxO level, managing business outcomes.
- Point-of-view articulation for the CxO level.
- Manufacturing (discrete or process) industry background, applying AI technology for business impact.
- Entrepreneurial and comfortable working in a complex and fast-paced environment.

Responsibilities / expected deliverables:
We are seeking a highly skilled and experienced Data and Analytics Architect / Consultant to provide expert guidance and support to our clients in the Manufacturing, Consumer Packaged Goods (CPG), and High-Tech industries. The role requires:
- Architecture, design, and implementation experience on cloud data platforms.
- Experience handling multiple types of data (structured, streaming, semi-structured, etc.).
- Strategic experience in Data & Analytics (cloud data architecture, lake-house architecture, data fabric, and data mesh concepts).
- Experience deploying DevOps/CI-CD techniques; automating and deploying data pipelines and ETLs in a DevOps environment.
- Experience strategizing data governance activities.

The ideal candidate will possess exceptional communication, consulting, and problem-solving skills, along with a strong technical foundation in data architecture. The role involves leading Data Architecture tech advisory engagements and bringing thought leadership to engage CxOs actively. Key roles and responsibilities include:

Business-oriented:
- Engage with customer CxOs to evangelize the adoption of AI and GenAI.
- Author proposals for solving business problems and achieving business objectives, leveraging Data Analytics & AI technologies.

Advisory (experience managing the entire lifecycle of Data Analytics is an added advantage):
- Develop a roadmap for introducing and scaling data architecture in the customer organization.
- Define the best-suited AI operating model for customers.
- Guide teams on solution approaches and roadmaps.
- Build and leverage frameworks for RoI from AI.
- Effectively communicate complex technical information to both technical and non-technical audiences, presenting findings and recommendations in a clear, concise, and compelling manner.
- Demonstrate thought leadership to identify use cases to build and showcase to prospective customers.
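As a small illustration of the IoT data-management theme in this listing, here is a hypothetical PySpark sketch of a watermarked, windowed aggregation over streaming sensor readings in a lake-house. The source table, columns, window sizes, and checkpoint path are all assumptions for illustration.

```python
# Minimal IoT streaming aggregation sketch: 5-minute per-machine stats.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("iot-agg").getOrCreate()

readings = spark.readStream.table("bronze.sensor_readings")  # hypothetical stream

per_machine = (readings
               .withWatermark("reading_ts", "10 minutes")   # bound late-data state
               .groupBy(F.window("reading_ts", "5 minutes"), "machine_id")
               .agg(F.avg("temperature").alias("avg_temp"),
                    F.max("vibration").alias("max_vibration")))

(per_machine.writeStream
 .outputMode("append")   # append is valid once windows are watermark-closed
 .format("delta")
 .option("checkpointLocation", "/chk/iot_agg")
 .toTable("silver.machine_stats_5m"))
```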

Posted 1 month ago

Apply