
64 Star Schema Jobs

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

6.0 - 10.0 years

20 - 32 Lacs

New Delhi, Pune, Bengaluru

Work from Office


Job Description: Overall 7-10 years of IT experience with a Bachelor's degree in computer science, information technology, or a similar field. Must have 4+ years of strong experience in data modelling: able to understand, analyze, and design enterprise data models, with expertise in a data modelling tool such as Erwin or PowerDesigner. Has created flexible and scalable models and completed at least two end-to-end cloud data warehouse implementations. 3+ years of hands-on experience with physical and relational data modeling. Expert knowledge of metadata management and related tools. Supporting and providing consultation to database users and developers. Preparing accurate database design and architecture reports for management and executive teams. Overseeing the migration of data from legacy systems to new solutions. Recommending solutions to improve new and existing database systems. Experience with team management. Excellent communication and presentation skills.

Roles & Responsibilities: Involvement in the overall SDLC including requirement gathering, development, testing, debugging, deployment, documentation, and production support. Familiarity with a work environment consisting of business analysts, production support teams, subject matter experts, database administrators, data engineers, and BI developers. Strong skills in SQL and PL/SQL packages, functions, stored procedures, triggers, and materialized views to implement business logic in the database. Develop mapping spreadsheets for the ETL team with source-to-target data mapping, including physical naming standards, datatypes, volumetrics, domain definitions, and corporate metadata definitions (a sketch of such a mapping follows this listing). Exceptional communication and presentation skills and an established track record of client interactions. Experience with database SQL tuning and query optimization tools such as Explain Plan. Experience designing conceptual, logical, and physical data models.

Skills: Data Modeler, Data Modelling, Erwin, PowerDesigner, SQL
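As a concrete illustration of the source-to-target mapping deliverable described above, here is a minimal sketch of how such a mapping can be captured as structured data and exported for an ETL team. All table, column, and file names are hypothetical, not taken from the posting.

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class ColumnMapping:
    """One row of a source-to-target mapping document."""
    source_table: str
    source_column: str
    target_table: str
    target_column: str
    datatype: str
    transformation: str  # business rule applied in the ETL

# Hypothetical mappings for a customer dimension load.
mappings = [
    ColumnMapping("crm.customers", "cust_name", "dw.dim_customer",
                  "customer_name", "VARCHAR(200)", "TRIM + INITCAP"),
    ColumnMapping("crm.customers", "created_dt", "dw.dim_customer",
                  "created_date", "DATE", "CAST to DATE"),
]

# Export the mapping spreadsheet that ETL developers work from.
with open("source_to_target_mapping.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(mappings[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(m) for m in mappings)
```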

Posted 2 days ago

Apply

9.0 - 14.0 years

0 - 0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Role & responsibilities: Design and develop conceptual, logical, and physical data models for enterprise and application-level databases. Translate business requirements into well-structured data models that support analytics, reporting, and operational systems. Define and maintain data standards, naming conventions, and metadata for consistency across systems. Collaborate with data architects, engineers, and analysts to implement models in databases and data warehouses/lakes. Analyze existing data systems and provide recommendations for optimization, refactoring, and improvement. Create entity-relationship diagrams (ERDs) and data flow diagrams to document data structures and relationships. Support data governance initiatives including data lineage, quality, and cataloging. Review and validate data models with business and technical stakeholders. Provide guidance on normalization, denormalization, and performance tuning of database designs. Ensure models comply with organizational data policies, security, and regulatory requirements.

Looking for a Data Modeler Architect to design conceptual, logical, and physical data models. Must translate business needs into scalable models for analytics and operational systems. Strong in normalization, denormalization, ERDs, and data governance practices. Experience with star/snowflake schemas and medallion architecture preferred. The role requires close collaboration with architects, engineers, and analysts.

Skills: Data modelling, Normalization, Denormalization, Star and snowflake schemas, Medallion architecture, ERD, Logical data model, Physical data model, Conceptual data model
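Since the posting highlights medallion architecture alongside star/snowflake schemas, here is a minimal PySpark sketch of the bronze-to-silver-to-gold flow. This is an illustrative sketch only: the Spark session setup, lake paths, and column names are all hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested data, stored as-is (hypothetical path).
bronze = spark.read.json("/lake/bronze/orders/")

# Silver: cleansed and conformed — types fixed, duplicates and bad rows dropped.
silver = (bronze
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0))
silver.write.mode("overwrite").parquet("/lake/silver/orders/")

# Gold: business-level aggregate ready to feed a star-schema fact table.
gold = (silver.groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
              .agg(F.sum("amount").alias("daily_revenue")))
gold.write.mode("overwrite").parquet("/lake/gold/fact_daily_revenue/")
```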

Posted 2 days ago

Apply

1.0 - 3.0 years

6 - 9 Lacs

Pune, Gurugram, Bengaluru

Hybrid


POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore / Mumbai / Kolkata / Gurugram / Hyderabad / Pune / Chennai
EXPERIENCE: 2+ years

ABOUT HASHEDIN: We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.

WHY SHOULD YOU JOIN US? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. So, what impact will you make? Visit us @ https://hashedin.com

JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience in one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code); a minimal DAG sketch follows this listing.
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with data scientists, analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.

© HashedIn by Deloitte 2025
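The role calls for developing and monitoring Airflow DAGs; below is a minimal sketch of one, assuming Airflow 2.4+ is installed. The DAG id, script paths, and task names are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# A small daily pipeline: ingest, then transform (hypothetical scripts).
with DAG(
    dag_id="orders_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_orders",
        bash_command="python /opt/jobs/ingest_orders.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="transform_orders",
        bash_command="spark-submit /opt/jobs/transform_orders.py --date {{ ds }}",
    )
    ingest >> transform  # transform runs only after ingestion succeeds
```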

Posted 3 days ago

Apply

12.0 - 16.0 years

1 - 1 Lacs

Hyderabad

Remote


We're Hiring: Azure Data Factory (ADF) Developer, Hyderabad. Location: Onsite at Canopy One Office, Hyderabad, or Remote. Type: Full-time / Part-time / Contract | Offshore role | Must be available to work Eastern Time Zone (EST) hours.

We're looking for an experienced ADF Developer to join our offshore team supporting a major client. This role focuses on building robust data pipelines using Azure Data Factory (ADF) and working closely with client stakeholders on transformation logic and data movement.

Key Responsibilities: Design, build, and manage ADF data pipelines. Implement transformations and aggregations based on the mappings provided. Work with data from the bronze (staging) area, pre-loaded via Boomi. Collaborate with client-side data managers (based in EST) to deliver clean, reliable datasets.

Requirements: Proven hands-on experience with Azure Data Factory. Strong understanding of ETL workflows and data transformation. Familiarity with data staging/bronze-layer concepts. Willingness to work Eastern Time Zone (EST) hours.

Preferred Qualifications: Knowledge of Kimball Data Warehousing (a huge advantage!). Experience working in an offshore coordination model. Exposure to Boomi is a plus.

Posted 5 days ago

Apply

5.0 - 10.0 years

9 - 14 Lacs

Vijayawada, Hyderabad

Work from Office


We are actively seeking experienced Power BI Administrators who can take full ownership of Power BI environments from data modeling and report development to security management and system integration. This role is ideal for professionals with a solid technical foundation and hands-on expertise across the Power BI ecosystem, including enterprise BI environments. Key Responsibilities: Data Model Management: Maintain and optimize Power BI data models to meet evolving analytical and reporting needs. Data Import & Transformation: Import and transform data from various sources using Power Query (M) and implement business logic using DAX. Advanced Measure Creation: Design complex DAX measures, KPIs, and calculated columns tailored to dynamic reporting requirements. Access & Permission Management: Administer and manage user access, roles, and workspace security settings in Power BI Service. Interactive Reporting: Develop insightful and interactive dashboards and reports aligned with business goals and user needs. Error Handling & Data Validation: Identify, investigate, and resolve data inconsistencies and refresh issues, ensuring data accuracy and report reliability. BCS & BIRT Integration: Develop and manage data extraction reports using Business Connectivity Services (BCS) and BIRT (Business Intelligence and Reporting Tool). Preferred Skills: Proven experience as a Power BI Administrator or similar BI role. Strong expertise in Power BI Desktop, Power BI Service, Power Query, DAX, and security configuration. Familiar with report lifecycle management, data governance, and large-scale enterprise BI environments. Experience with BCS and BIRT tools is highly preferred. Capable of independently troubleshooting data and configuration issues. Excellent communication and documentation skills.

Posted 6 days ago

Apply

5.0 - 9.0 years

4 - 7 Lacs

Gurugram

Work from Office


Primary Skills: SQL (Advanced Level); SSAS (SQL Server Analysis Services) Multidimensional and/or Tabular Model; MDX / DAX (strong querying capabilities); Data Modeling (Star Schema, Snowflake Schema). Secondary Skills: ETL processes (SSIS or similar tools); Power BI / reporting tools; Azure Data Services (optional but a plus).

Role & Responsibilities: Design, develop, and deploy SSAS models (both tabular and multidimensional). Write and optimize MDX/DAX queries for complex business logic. Work closely with business analysts and stakeholders to translate requirements into robust data models. Design and implement ETL pipelines for data integration. Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred). Optimize existing cubes and data models for performance and scalability. Ensure data quality, consistency, and governance standards.

Top Skill Set: SSAS (Tabular + Multidimensional modeling); strong MDX and/or DAX query writing; advanced SQL for data extraction and transformations; data modeling concepts (Fact/Dimension, Slowly Changing Dimensions, etc.); ETL tools (SSIS preferred); Power BI or similar BI tools; understanding of OLAP & OLTP concepts; performance tuning (SSAS/SQL).

Skills: analytical skills, ETL processes (SSIS or similar tools), collaboration, Multidimensional Expressions (MDX), Power BI / reporting tools, SQL (advanced level), SQL proficiency, DAX, SSAS (multidimensional and tabular model), ETL, data modeling (star schema, snowflake schema), communication, Azure Data Services, MDX, data modeling, SSAS, data visualization

Posted 1 week ago

Apply

12.0 - 14.0 years

12 - 14 Lacs

Hyderabad, Bengaluru

Hybrid


Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 10+ years of overall experience and 8+ years of relevant experience in Databricks, DLT, PySpark, and data modelling concepts: Dimensional Data Modelling (Star Schema, Snowflake Schema). Proficiency in programming languages such as Python, PySpark, Scala, and SQL. Proficiency in DLT. Proficiency in SQL. Proficiency in data modelling concepts: Dimensional Data Modelling (Star Schema, Snowflake Schema). Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark. Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services. Proven track record of delivering scalable and reliable data solutions in a fast-paced environment. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills with the ability to work effectively in cross-functional teams. Good to have: experience with containerization technologies such as Docker and Kubernetes, and knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
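For context on the DLT (Delta Live Tables) requirement, here is a minimal sketch of a DLT pipeline in Python, following the publicly documented @dlt.table pattern. The table names, storage path, and quality rule are hypothetical, and the code runs only inside a Databricks DLT pipeline, not as a standalone script.

```python
# Runs inside a Databricks Delta Live Tables pipeline (spark is provided there).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage (hypothetical path).")
def bronze_orders():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/orders/"))

@dlt.table(comment="Cleansed orders with valid amounts only.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # declarative data-quality rule
def silver_orders():
    return (dlt.read_stream("bronze_orders")
            .withColumn("order_ts", F.to_timestamp("order_ts")))
```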

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Ahmedabad

Hybrid


Key Responsibilities: Lead the end-to-end Snowflake platform implementation, including architecture, design, data modeling, and governance. Oversee the migration of data and pipelines from legacy platforms to Snowflake, ensuring quality, reliability, and business continuity. Design and optimize Snowflake-specific data models, including the use of clustering keys, materialized views, Streams, and Tasks. Build and manage scalable ELT/ETL pipelines using modern tools and best practices. Define and implement standards for Snowflake development, testing, and deployment, including CI/CD automation. Collaborate with cross-functional teams including data engineering, analytics, DevOps, and business stakeholders. Establish and enforce data security, privacy, and governance policies using Snowflake's native capabilities. Monitor and tune system performance and cost efficiency through appropriate warehouse sizing and usage patterns. Lead code reviews, technical mentoring, and documentation for Snowflake-related processes.

Required Snowflake Expertise: Snowflake Architecture – deep understanding of virtual warehouses, data sharing, multi-cluster warehouses, and zero-copy cloning; ability to enhance the architecture and implement solutions accordingly. Performance Optimization – proficient in tuning queries, clustering, caching, and workload management. Data Engineering – experience processing batch and real-time data using Snowflake features such as Snowpipe, Streams & Tasks, stored procedures, and common data ingestion patterns (a Streams & Tasks sketch follows this listing). Data Security & Governance – strong experience with RBAC, dynamic data masking, row-level security, and tagging; experience enabling such capabilities in Snowflake and in at least one enterprise product solution. Advanced SQL – expertise in writing, analyzing, and optimizing complex SQL queries and transformations, including semi-structured data handling (JSON, XML). Cloud Integration – experience with at least one major cloud platform (AWS/GCP/Azure) and services like S3, Lambda, Step Functions, etc.
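As a small illustration of the Streams & Tasks pattern named above, here is a sketch of Snowflake SQL submitted through the Python connector. The connection parameters, object names, warehouse, and schedule are all hypothetical placeholders.

```python
import snowflake.connector

# Hypothetical connection parameters — replace with real credentials.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="DW", schema="STAGING",
)

statements = [
    # Stream captures row-level changes on the staging table (CDC).
    "CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders",
    # Task periodically consumes the captured changes into the target table.
    """CREATE OR REPLACE TASK merge_orders
         WAREHOUSE = ETL_WH
         SCHEDULE = '5 MINUTE'
       WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
       AS INSERT INTO orders SELECT * FROM orders_stream""",
    "ALTER TASK merge_orders RESUME",  # tasks are created in a suspended state
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
```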

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Noida

Work from Office


Primary Skills: SQL (Advanced Level); SSAS (SQL Server Analysis Services) Multidimensional and/or Tabular Model; MDX / DAX (strong querying capabilities); Data Modeling (Star Schema, Snowflake Schema). Secondary Skills: ETL processes (SSIS or similar tools); Power BI / reporting tools; Azure Data Services (optional but a plus).

Role & Responsibilities: Design, develop, and deploy SSAS models (both tabular and multidimensional). Write and optimize MDX/DAX queries for complex business logic. Work closely with business analysts and stakeholders to translate requirements into robust data models. Design and implement ETL pipelines for data integration. Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred). Optimize existing cubes and data models for performance and scalability. Ensure data quality, consistency, and governance standards.

Top Skill Set: SSAS (Tabular + Multidimensional modeling); strong MDX and/or DAX query writing; advanced SQL for data extraction and transformations; data modeling concepts (Fact/Dimension, Slowly Changing Dimensions, etc.); ETL tools (SSIS preferred); Power BI or similar BI tools; understanding of OLAP & OLTP concepts; performance tuning (SSAS/SQL).

Skills: analytical skills, ETL processes (SSIS or similar tools), collaboration, Multidimensional Expressions (MDX), Power BI / reporting tools, SQL (advanced level), SQL proficiency, DAX, SSAS (multidimensional and tabular model), ETL, data modeling (star schema, snowflake schema), communication, Azure Data Services, MDX, data modeling, SSAS, data visualization

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 15 Lacs

Hyderabad, Secunderabad

Work from Office


Strong programming and scripting skills in SQL and Python. Experience with data pipeline tools (e.g., Apache Airflow, Azure Data Factory, AWS Glue). Hands-on experience with cloud-based data platforms such as Azure and AWS. Familiarity with data modeling and warehousing concepts (e.g., star schema, snowflake schema).

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 37 Lacs

Hyderabad

Work from Office


SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake. ETL/ELT Tools: Extensive experience building and maintaining data pipelines with tools such as SnapLogic, StreamSets, or dbt. Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning. Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE). Data Warehousing: Experience managing large datasets and data marts and optimizing databases for performance. Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.
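To make the query-optimization requirement concrete, here is a minimal sketch of inspecting a PostgreSQL query plan from Python. It assumes psycopg2 is installed; the DSN, table, and columns are hypothetical.

```python
import psycopg2

# Hypothetical connection string — replace with a real DSN.
conn = psycopg2.connect("dbname=dw user=analyst password=***")

query = """
    SELECT customer_id, SUM(amount) AS total
    FROM fact_sales
    WHERE sale_date >= '2025-01-01'
    GROUP BY customer_id
"""

with conn.cursor() as cur:
    # EXPLAIN ANALYZE executes the query and reports the actual plan,
    # timings, and row counts — the starting point for tuning work.
    cur.execute("EXPLAIN ANALYZE " + query)
    for (line,) in cur.fetchall():
        print(line)
```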

Posted 1 week ago

Apply

6.0 - 8.0 years

13 - 23 Lacs

Bengaluru

Hybrid


Job description – Primary skillsets: 5+ years of hands-on experience in Informatica PowerCenter ETL development. 7+ years of experience in SQL, analytical STAR-schema data modeling, and Informatica PowerCenter. 5+ years of Redshift, Oracle, or comparable database experience with BI/DW deployments.

Secondary skillsets: Good to know cloud services such as AWS. Must have proven experience with STAR and SNOWFLAKE schema techniques. Proven track record as an ETL developer, with potential to grow into an Architect leading development teams to deliver successful business intelligence solutions with complex data sources. Strong analytical skills and enjoys solving complex technical problems. Knowledge of additional ETL tools such as Qlik Replicate. An end-to-end understanding of data, from ingestion through transformation to consumption in analytics, would be a great benefit.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

18 - 25 Lacs

Bengaluru

Work from Office


Key Responsibilities: Analyzes and solves problems using technical experience, judgment, and precedents. Provides informal guidance to new team members. Explains complex information to others in straightforward situations.

1. Data Engineering and Modelling: Design & Develop Scalable Data Pipelines: Leverage AWS technologies to design, develop, and manage end-to-end data pipelines with services like ETL, Kafka, DMS, Glue, Lambda, and Step Functions. Orchestrate Workflows: Use Apache Airflow to build, deploy, and manage automated workflows, ensuring smooth and efficient data processing and orchestration. Snowflake Data Warehouse: Design, implement, and maintain Snowflake data warehouses, ensuring optimal performance, scalability, and seamless data availability. Infrastructure Automation: Utilize Terraform and CloudFormation to automate cloud infrastructure provisioning, ensuring efficiency, scalability, and adherence to security best practices. Logical & Physical Data Models: Design and implement high-performance logical and physical data models using Star and Snowflake schemas that meet both technical and business requirements. Data Modeling Tools: Utilize Erwin or similar modeling tools to create, maintain, and optimize data models, ensuring they align with evolving business needs. Continuous Optimization: Actively monitor and improve data models to ensure they deliver the best performance, scalability, and security.

2. Collaboration, Communication, and Continuous Improvement: Cross-Functional Collaboration: Work closely with data scientists, analysts, and business stakeholders to gather requirements and deliver tailored data solutions that meet business objectives. Data Security Expertise: Provide guidance on data security best practices and ensure team members follow secure coding and data handling procedures. Innovation & Learning: Stay abreast of emerging trends in data engineering, cloud computing, and data security to recommend and implement innovative solutions. Optimization & Automation: Proactively identify opportunities to optimize system performance, enhance data security, and automate manual workflows.

Key Skills & Expertise: Snowflake Data Warehousing: Hands-on experience with Snowflake, including performance tuning, role-based access controls, dynamic masking, data sharing, encryption, and row/column-level security (a masking-policy sketch follows this listing). Data Modeling: Expertise in physical and logical data modeling, specifically with Star and Snowflake schemas, using tools like Erwin or similar. AWS Services Proficiency: In-depth knowledge of AWS services like ETL, DMS, Glue, Step Functions, Airflow, Lambda, CloudFormation, S3, IAM, EKS, and Terraform. Programming & Scripting: Strong working knowledge of Python, R, Scala, PySpark, and SQL (including stored procedures). DevOps & CI/CD: Solid understanding of CI/CD pipelines, DevOps principles, and infrastructure-as-code practices using tools like Terraform, JFrog, Jenkins, and CloudFormation. Analytical & Troubleshooting Skills: Proven ability to solve complex data engineering issues and optimize data workflows. Excellent Communication: Strong interpersonal and communication skills, with the ability to work across teams and with stakeholders to drive data-centric projects.

Qualifications & Experience: Bachelor's degree in Computer Science, Engineering, or a related field. 7-8 years of experience designing and implementing large-scale Data Lake/Warehouse integrations with diverse data storage solutions. Certifications: AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect (preferred). Snowflake Advanced Architect and/or Snowflake Core Certification (required).
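The posting emphasizes dynamic masking and column-level security in Snowflake; below is a hedged sketch of the SQL involved, wrapped in Python only for consistency with the other examples on this page. The role, policy, table, and column names are hypothetical.

```python
# Hypothetical Snowflake SQL for column-level dynamic data masking.
# In practice these would be submitted via snowflake.connector.

create_policy = """
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_READER') THEN val  -- trusted roles see real data
    ELSE '***MASKED***'                             -- everyone else sees a mask
  END
"""

apply_policy = """
ALTER TABLE dim_customer
  MODIFY COLUMN email SET MASKING POLICY email_mask
"""

for stmt in (create_policy, apply_policy):
    print(stmt)  # replace with cursor.execute(stmt) against a live connection
```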

Posted 2 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Noida

Work from Office


We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse to enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with Application development teams. Roles and Responsibilities Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads. Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models impacting report generation times. Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization. Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions. Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying. Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake’s native features. Monitor system performance and proactively recommend architectural improvements and capacity planning. Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times. Experience with Unistore will be an added advantage
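To ground the tuning levers this listing names (clustering keys, materialized views, result caching), here is a hedged sketch of representative Snowflake statements, with hypothetical table and column names, wrapped in Python for consistency with the other examples here.

```python
# Representative Snowflake tuning statements (hypothetical objects).
tuning_statements = [
    # Clustering key: co-locates micro-partitions by common filter columns,
    # so partition pruning can skip most of the table on date/customer filters.
    "ALTER TABLE fact_sales CLUSTER BY (sale_date, customer_id)",

    # Materialized view: precomputes a hot aggregate used by many reports.
    """CREATE OR REPLACE MATERIALIZED VIEW mv_daily_sales AS
       SELECT sale_date, SUM(amount) AS total_amount
       FROM fact_sales
       GROUP BY sale_date""",

    # Recent query history: a starting point for finding slow report queries
    # and checking whether repeated queries are being served from cache.
    """SELECT query_text, execution_status, total_elapsed_time
       FROM snowflake.account_usage.query_history
       ORDER BY start_time DESC LIMIT 10""",
]

for stmt in tuning_statements:
    print(stmt)  # execute via snowflake.connector in a real session
```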

Posted 2 weeks ago

Apply

6.0 - 10.0 years

3 - 8 Lacs

Noida

Work from Office


Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Preferred Skills Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.
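Since Snowpipe features prominently in this role, here is a hedged sketch of the SQL to set up auto-ingestion from cloud storage; the stage URL, pipe, and table names are hypothetical, and a real setup also needs a storage integration and event notifications.

```python
# Hypothetical Snowpipe setup: continuously load files landing in S3.
snowpipe_setup = [
    # External stage pointing at the raw-file bucket (placeholder URL;
    # production stages typically use a STORAGE INTEGRATION for auth).
    """CREATE OR REPLACE STAGE raw_stage
       URL = 's3://my-bucket/orders/'
       FILE_FORMAT = (TYPE = 'JSON')""",

    # Pipe: AUTO_INGEST fires on S3 event notifications and runs the COPY.
    """CREATE OR REPLACE PIPE orders_pipe AUTO_INGEST = TRUE AS
       COPY INTO raw_orders
       FROM @raw_stage
       FILE_FORMAT = (TYPE = 'JSON')""",
]

for stmt in snowpipe_setup:
    print(stmt)  # execute via snowflake.connector with appropriate privileges
```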

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Role - Data Modeler/Senior Data Modeler Exp - 5 to 12 Yrs Locs - Hyderabad, Pune, Bengaluru Position - Permanent Must have skills: - Strong SQL - Strong Data Warehousing skills - ER/Relational/Dimensional Data Modeling - Data Vault Modeling - OLAP, OLTP - Schemas & Data Marts Good to have skills: - Data Vault - ERwin / ER Studio - Cloud Platforms (AWS or Azure)
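Because Data Vault modeling is a must-have here, the sketch below shows the core hub/satellite shape in illustrative DDL, wrapped in Python for consistency with the other examples. The customer entity and all names are hypothetical.

```python
# Minimal Data Vault 2.0 shapes: a hub for the business key and a
# satellite for its descriptive attributes (hypothetical customer entity).
hub_customer = """
CREATE TABLE hub_customer (
    customer_hk   CHAR(32)     NOT NULL PRIMARY KEY, -- hash of business key
    customer_bk   VARCHAR(50)  NOT NULL,             -- source business key
    load_dts      TIMESTAMP    NOT NULL,
    record_source VARCHAR(100) NOT NULL
)
"""

sat_customer = """
CREATE TABLE sat_customer_details (
    customer_hk   CHAR(32)     NOT NULL REFERENCES hub_customer(customer_hk),
    load_dts      TIMESTAMP    NOT NULL,
    hash_diff     CHAR(32)     NOT NULL,  -- detects attribute changes
    customer_name VARCHAR(200),
    email         VARCHAR(200),
    PRIMARY KEY (customer_hk, load_dts)
)
"""

for ddl in (hub_customer, sat_customer):
    print(ddl)
```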

Posted 2 weeks ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Chennai

Work from Office


Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights through the development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

What you will do: Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data. Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline. Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives. Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis. Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols. Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks and ensuring optimal system performance and data availability. Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery. Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models. Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data (a validation sketch follows this listing). Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management. Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives. Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference. Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance.

Who you are: 4+ years of strong programming skills in object-oriented languages such as Python. Proficiency in SQL. Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts. Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema, and data normalization. Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA). Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc. Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery). Experience integrating heterogeneous data sources, aligning data models, and mapping between different data schemas. Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers. Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability. Understanding of data governance principles, data quality management, and data security best practices. Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.

Education: Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.

Soft Skills: Attention to detail. Proficient in English communication, both written and verbal. Dedicated self-starter with excellent people skills. Quick learner and a go-getter. Effective time and project management. Analytical thinker and a great team player. Strong leadership, interpersonal & problem-solving skills.
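The data-quality responsibilities above lend themselves to a short example. Here is a minimal hand-rolled validation sketch in PySpark (not a specific framework), with hypothetical dataset path, columns, and zero-tolerance thresholds.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical cleansed claims dataset.
claims = spark.read.parquet("/lake/silver/claims/")

total = claims.count()

# Completeness: required identifiers must not be null.
null_ids = claims.filter(F.col("patient_id").isNull()).count()

# Validity: claim amounts must be positive.
bad_amounts = claims.filter(F.col("claim_amount") <= 0).count()

# Uniqueness: exactly one row per claim_id.
dupes = total - claims.dropDuplicates(["claim_id"]).count()

failures = {"null_patient_id": null_ids,
            "non_positive_amount": bad_amounts,
            "duplicate_claim_id": dupes}

# Fail the pipeline run if any check exceeds a zero-tolerance threshold.
if any(v > 0 for v in failures.values()):
    raise ValueError(f"Data quality checks failed: {failures}")
```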

Posted 2 weeks ago

Apply

4.0 - 8.0 years

9 - 12 Lacs

Chennai

Work from Office


Job Title: Data Engineer. Location: Chennai (Hybrid).

Summary: Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights through the development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

Accountability: Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data. Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline. Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives. Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis. Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols. Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks and ensuring optimal system performance and data availability. Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery. Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models. Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data. Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management. Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives. Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference. Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance.

Skills: 4+ years of strong programming skills in object-oriented languages such as Python. Proficiency in SQL. Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts. Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema, and data normalization. Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA). Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc. Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery). Experience integrating heterogeneous data sources, aligning data models, and mapping between different data schemas. Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers. Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability. Understanding of data governance principles, data quality management, and data security best practices. Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges. Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.

Education: Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 17 Lacs

Ahmedabad

Work from Office


Job Summary – Key Responsibilities: Design, develop, and maintain interactive and user-friendly Power BI dashboards and reports. Translate business requirements into functional and technical specifications. Perform data modeling, DAX calculations, and Power Query transformations. Integrate data from multiple sources including SQL Server, Excel, SharePoint, and APIs. Optimize Power BI datasets, reports, and dashboards for performance and usability. Collaborate with business analysts, data engineers, and stakeholders to ensure data accuracy and relevance. Ensure security and governance best practices in Power BI workspaces and datasets. Provide ongoing support and troubleshooting for existing Power BI solutions. Stay updated with Power BI updates, best practices, and industry trends.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Information Technology, Data Analytics, or a related field. 4+ years of professional experience in data analytics or business intelligence. 3+ years of hands-on experience with Power BI (Power BI Desktop, Power BI Service). Strong expertise in DAX, Power Query (M language), and data modeling (star/snowflake schema). Proficiency in writing complex SQL queries and optimizing them for performance. Experience working with large and complex datasets. Experience with BigQuery, MySQL, and Looker Studio is a plus. E-commerce industry experience is an added advantage. Solid understanding of data warehousing concepts and ETL processes. Experience with Power Apps and Power Automate would be a plus.

Preferred Qualifications: Microsoft Power BI Certification (PL-300 or equivalent is a plus). Experience with Azure Data Services (Azure Data Factory, Azure SQL, Synapse). Knowledge of other BI tools (Tableau, Qlik) is a plus. Familiarity with scripting languages (Python, R) for data analysis is a bonus. Experience integrating Power BI into web portals using Power BI Embedded.

Posted 3 weeks ago

Apply

1.0 - 2.0 years

4 - 9 Lacs

Pune, Chennai, Bengaluru

Work from Office


Job Title: Data Engineer. Experience: 12 to 20 months. Work Mode: Work from Office. Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon.

Role Overview: We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure Databricks or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.

Key Responsibilities: Develop robust and scalable data pipelines using PySpark on cloud platforms like Azure Databricks or GCP Dataflow. Write optimized SQL queries for data transformation, analysis, and validation. Implement and support data warehouse models and principles, including fact and dimension modeling, star and snowflake schemas, Slowly Changing Dimensions (SCD), Change Data Capture (CDC), and medallion architecture (see the SCD Type 2 sketch after this listing). Monitor, troubleshoot, and improve pipeline performance and data quality. Work with teams across analytics, business, and IT functions to deliver data-driven solutions. Communicate technical updates and contribute to sprint-level delivery.

Mandatory Skills: Strong hands-on experience with SQL and Python. Working knowledge of PySpark for data transformation. Exposure to at least one cloud platform: Azure Databricks or GCP. Good understanding of data engineering and warehousing fundamentals. Excellent debugging and problem-solving skills. Strong written and verbal communication skills.

Preferred Skills: Experience working with Databricks Community Edition or the enterprise version. Familiarity with data orchestration tools like Airflow or Azure Data Factory. Exposure to CI/CD processes and version control (e.g., Git). Understanding of Agile/Scrum methodology and collaborative development. Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.).
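A hedged sketch of the Slowly Changing Dimension Type 2 pattern mentioned above, using the Delta Lake Python API on Databricks. The table paths, keys, and columns are hypothetical, and expiring old rows and inserting new versions is split into two steps for clarity.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Incoming snapshot of customer records (hypothetical source).
updates = spark.read.parquet("/lake/staging/customers/")

dim = DeltaTable.forPath(spark, "/lake/gold/dim_customer/")

# Step 1: close out current rows whose tracked attributes changed.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.customer_name <> s.customer_name",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append a new current version for changed or brand-new keys.
current = (spark.read.format("delta").load("/lake/gold/dim_customer/")
           .filter("is_current = true").select("customer_id", "customer_name"))
new_rows = (updates.join(current, ["customer_id", "customer_name"], "left_anti")
            .withColumn("is_current", F.lit(True))
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date")))
new_rows.write.format("delta").mode("append").save("/lake/gold/dim_customer/")
```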

Posted 3 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office


Dear Candidate, We are hiring a Data Warehouse Architect to design scalable, high-performance data warehouse solutions for analytics and reporting. Perfect for engineers experienced with large-scale data systems. Key Responsibilities: Design and maintain enterprise data warehouse architecture Optimize ETL/ELT pipelines and data modeling (star/snowflake schemas) Ensure data quality, security, and performance Work with BI teams to support analytics and reporting needs Required Skills & Qualifications: Proficiency with SQL and data warehousing tools (Snowflake, Redshift, BigQuery, etc.) Experience with ETL frameworks (Informatica, Apache NiFi, dbt, etc.) Strong understanding of dimensional modeling and OLAP Bonus: Knowledge of cloud data platforms and orchestration tools (Airflow) Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 11 Lacs

Pune

Work from Office


: We are seeking a highly skilled and experienced MSTR (MicroStrategy) Developer to join our Business Intelligence team. In this role, you will be responsible for the design, development, implementation, and maintenance of robust and scalable BI solutions using the MicroStrategy platform. Your primary focus will be on leveraging your deep understanding of MicroStrategy architecture and strong SQL skills to deliver insightful and actionable data to our stakeholders. This is an excellent opportunity to contribute to critical business decisions by providing high-quality BI solutions. Responsibilities - Design, develop, and deploy MicroStrategy objects including reports, dashboards, cubes (Intelligent Cubes, OLAP Cubes), documents, and visualizations. - Utilize various MicroStrategy features and functionalities such as Freeform SQL, Query Builder, MDX connectivity, and data blending. - Optimize MicroStrategy schema objects (attributes, facts, hierarchies) for performance and usability. - Implement security models within MicroStrategy, including user and group management, object-level security, and data-level security. - Perform performance tuning and optimization of MicroStrategy reports and dashboards. - Participate in the administration and maintenance of the MicroStrategy environment, including metadata management, project configuration, and user support. - Troubleshoot and resolve issues related to MicroStrategy reports, dashboards, and the overall platform. - Write complex and efficient SQL queries to extract, transform, and load data from various data sources. - Understand database schema design and data modeling principles. - Optimize SQL queries for performance within the MicroStrategy environment. - Work with different database platforms (e.g., Oracle, SQL Server, Teradata, Snowflake) and understand their specific SQL dialects. - Develop and maintain database views and stored procedures to support MicroStrategy development. - Collaborate with business analysts and end-users to understand their reporting and analytical requirements. - Translate business requirements into technical specifications for MicroStrategy development. - Participate in the design and prototyping of BI solutions. - Develop and execute unit tests and integration tests for MicroStrategy objects. - Participate in user acceptance testing (UAT) and provide support to end-users during the testing phase. - Ensure the accuracy and reliability of data presented in MicroStrategy reports and dashboards. - Create and maintain technical documentation for MicroStrategy solutions, including design documents, user guides, and deployment instructions. - Provide training and support to end-users on how to effectively use MicroStrategy reports and dashboards. - Adhere to MicroStrategy best practices and development standards. - Stay updated with the latest MicroStrategy features and functionalities. - Proactively identify opportunities to improve existing MicroStrategy solutions and processes. Required Skills and Expertise - Strong proficiency in MicroStrategy development (5+ years of hands-on experience is essential). This includes a deep understanding of the MicroStrategy architecture, object creation, report development, dashboard design, and administration. - Excellent SQL skills (5+ years of experience writing complex queries, optimizing performance, and working with various database systems). - Experience in data modeling and understanding of dimensional modeling concepts (e.g., star schema, snowflake schema). 
- Solid understanding of BI concepts, data warehousing principles, and ETL processes. - Experience in performance tuning and optimization of MicroStrategy reports and SQL queries. - Ability to gather and analyze business requirements and translate them into technical specifications. - Strong analytical and problem-solving skills. - Excellent communication and interpersonal skills, with the ability to work effectively with both technical and business stakeholders. - Experience with version control systems (e.g., Git). - Ability to work independently and as part of a team.

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

Chennai

Work from Office


Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data warehousing solutions using Snowflake SQL. Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time. Develop complex queries to optimize database performance and troubleshoot issues. Implement star schema designs for efficient data modeling and querying. Participate in code reviews to ensure adherence to coding standards.
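Given the star-schema responsibility in this role, here is a hedged sketch of a minimal star schema in Snowflake-style SQL, with hypothetical fact and dimension tables, wrapped in Python for consistency with the other examples on this page.

```python
# A minimal star schema: one fact table surrounded by dimension tables,
# joined on surrogate keys (all names hypothetical).
star_schema_ddl = """
CREATE TABLE dim_date (
    date_key   INT PRIMARY KEY,      -- surrogate key, e.g. 20250101
    full_date  DATE,
    month_name VARCHAR(10),
    year       INT
);

CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name VARCHAR(200),
    category     VARCHAR(100)
);

CREATE TABLE fact_sales (
    date_key    INT REFERENCES dim_date(date_key),
    product_key INT REFERENCES dim_product(product_key),
    quantity    INT,
    amount      NUMBER(18,2)         -- additive measure
);
"""
print(star_schema_ddl)
```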

Posted 1 month ago

Apply

5.0 - 7.0 years

6 - 11 Lacs

Pune

Work from Office


About The Role : We are seeking a highly skilled and experienced MSTR (MicroStrategy) Developer to join our Business Intelligence team. In this role, you will be responsible for the design, development, implementation, and maintenance of robust and scalable BI solutions using the MicroStrategy platform. Your primary focus will be on leveraging your deep understanding of MicroStrategy architecture and strong SQL skills to deliver insightful and actionable data to our stakeholders. This is an excellent opportunity to contribute to critical business decisions by providing high-quality BI solutions. Responsibilities - Design, develop, and deploy MicroStrategy objects including reports, dashboards, cubes (Intelligent Cubes, OLAP Cubes), documents, and visualizations. - Utilize various MicroStrategy features and functionalities such as Freeform SQL, Query Builder, MDX connectivity, and data blending. - Optimize MicroStrategy schema objects (attributes, facts, hierarchies) for performance and usability. - Implement security models within MicroStrategy, including user and group management, object-level security, and data-level security. - Perform performance tuning and optimization of MicroStrategy reports and dashboards. - Participate in the administration and maintenance of the MicroStrategy environment, including metadata management, project configuration, and user support. - Troubleshoot and resolve issues related to MicroStrategy reports, dashboards, and the overall platform. - Write complex and efficient SQL queries to extract, transform, and load data from various data sources. - Understand database schema design and data modeling principles. - Optimize SQL queries for performance within the MicroStrategy environment. - Work with different database platforms (e.g., Oracle, SQL Server, Teradata, Snowflake) and understand their specific SQL dialects. - Develop and maintain database views and stored procedures to support MicroStrategy development. - Collaborate with business analysts and end-users to understand their reporting and analytical requirements. - Translate business requirements into technical specifications for MicroStrategy development. - Participate in the design and prototyping of BI solutions. - Develop and execute unit tests and integration tests for MicroStrategy objects. - Participate in user acceptance testing (UAT) and provide support to end-users during the testing phase. - Ensure the accuracy and reliability of data presented in MicroStrategy reports and dashboards. - Create and maintain technical documentation for MicroStrategy solutions, including design documents, user guides, and deployment instructions. - Provide training and support to end-users on how to effectively use MicroStrategy reports and dashboards. - Adhere to MicroStrategy best practices and development standards. - Stay updated with the latest MicroStrategy features and functionalities. - Proactively identify opportunities to improve existing MicroStrategy solutions and processes. Required Skills and Expertise - Strong proficiency in MicroStrategy development (5+ years of hands-on experience is essential). This includes a deep understanding of the MicroStrategy architecture, object creation, report development, dashboard design, and administration. - Excellent SQL skills (5+ years of experience writing complex queries, optimizing performance, and working with various database systems). - Experience in data modeling and understanding of dimensional modeling concepts (e.g., star schema, snowflake schema). 
- Solid understanding of BI concepts, data warehousing principles, and ETL processes. - Experience in performance tuning and optimization of MicroStrategy reports and SQL queries. - Ability to gather and analyze business requirements and translate them into technical specifications. - Strong analytical and problem-solving skills. - Excellent communication and interpersonal skills, with the ability to work effectively with both technical and business stakeholders. - Experience with version control systems (e.g., Git). - Ability to work independently and as part of a team.

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


Job Title: Senior MS BI Developer

Onsite Location: Dubai (UAE), Doha (Qatar), or Riyadh (Saudi Arabia)

Onsite Monthly Salary: 10k AED - 15k AED, fully tax-free, depending on experience. A Gulf work permit will be sponsored by our client.

Project Duration: 2 years, extendable

Desired Experience Level: 5 - 10 years

Qualification: B.Tech / M.Tech / MCA / M.Sc or equivalent

Experience Needed: Overall, 5 or more years of total IT experience, including a solid 3+ years as an MS BI Developer on the Microsoft stack / MS DWH Engineer.

Job Responsibilities:
- Design and develop DWH data flows
- Build SCD-1 / SCD-2 / SCD-3 dimensions
- Build cubes and maintain SSAS / DWH data
- Design the Microsoft DWH and its ETL packages
- Code T-SQL
- Create orchestrations and design batch-job / orchestration runs
- Familiarity with data models
- Develop MDM (Master Data Management)

Experience:
- Experience as a DWH developer with Microsoft DWH data flows and cubes
- Exposure to and experience with Azure services, including Azure Data Factory
- Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView
- Collecting and gathering data from multiple source systems
- Creating automated data pipelines
- Configuring Azure resources and services

Skills:
- Microsoft SSIS / SSAS / SSRS
- Informatica
- Azure Data Factory
- Spark
- SQL

Nice to have:
- Any onsite experience is an added advantage, but not mandatory
- Microsoft certifications are an added advantage

Business Vertical:
- Banking / Investment Banking
- Capital Markets
- Securities / Stock Market Trading
- Bonds / Forex Trading
- Credit Risk
- Payments Cards Industry (VISA / MasterCard / Amex)

Job Code: MSBI_DEVP_0525
No. of positions: 05
Email: spectrumconsulting1977@gmail.com

If you are interested, please email your CV as an attachment with the job reference code [ MSBI_DEVP_0525 ] as the subject.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
