
2244 Snowflake Jobs - Page 21

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities:
- Manage large machine learning applications; design and implement new frameworks to build scalable and efficient data processing workflows and machine learning pipelines.
- Build the tightly integrated pipeline that optimizes and compiles models and then orchestrates their execution.
- Collaborate with CPU, GPU, and Neural Engine hardware backends to push inference performance and efficiency.
- Work closely with feature teams to facilitate and debug the integration of increasingly sophisticated models, including large language models.
- Automate data processing and extraction.
- Engage with the sales team to find opportunities, understand requirements, and translate those requirements into technical solutions.
- Develop reusable ML models and assets into production.

Technical and Professional Requirements:
- Excellent Python programming and debugging skills (refer to the Python JD given below).
- Proficiency with SQL, relational databases, and non-relational databases.
- Passion for API design and software architecture.
- Strong communication skills and the ability to explain difficult technical topics to everyone from data scientists to engineers to business partners.
- Experience with modern neural-network architectures and deep learning libraries (Keras, TensorFlow, PyTorch).
- Experience with unsupervised ML algorithms.
- Experience with time-series models and anomaly detection problems.
- Experience with modern large language models (ChatGPT/BERT) and their applications.
- Expertise in performance optimization.
- Experience or knowledge of public cloud AWS services - S3, Lambda.
- Familiarity with distributed databases such as Snowflake and Oracle.
- Experience with containerization and orchestration technologies such as Docker and Kubernetes.

Preferred Skills: Technology-Big Data - Data Processing-Spark; Technology-Machine Learning-R; Technology-Machine Learning-Python
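For context on the time-series anomaly detection work this listing mentions, here is a minimal, illustrative Python sketch using pandas and scikit-learn's IsolationForest; the CSV file, column names, and contamination rate are hypothetical assumptions, not part of the role description.

```python
# Minimal sketch: unsupervised anomaly detection on a univariate time series.
# File name, columns, and the assumed anomaly rate are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Load a metric sampled at a regular interval (hypothetical CSV layout).
df = pd.read_csv("metric_timeseries.csv", parse_dates=["timestamp"])

# Simple lag/rolling features so the model sees local temporal context.
df["lag_1"] = df["value"].shift(1)
df["rolling_mean_12"] = df["value"].rolling(12).mean()
df["rolling_std_12"] = df["value"].rolling(12).std()
features = df.dropna()[["value", "lag_1", "rolling_mean_12", "rolling_std_12"]].copy()

# Fit an unsupervised detector; contamination is the assumed anomaly rate.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
features["anomaly"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

print(features[features["anomaly"] == -1].head())
```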

Posted 1 week ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering, BCS, BBA, BCom, MCA, MSc
Service Line: Data & Analytics Unit

Responsibilities:
- Good knowledge of Snowflake architecture: virtual warehouses (multi-cluster warehouses, autoscaling), metadata and system objects (query history, grants to users, grants to roles, users), micro-partitions, table clustering and auto-reclustering, materialized views and their benefits.
- Data protection with Time Travel in Snowflake (extremely important).
- Analyzing queries using Query Profile, including explain plans (extremely important).
- Cache architecture, virtual warehouses (VW), named stages, direct loading, Snowpipe, data sharing, streams, JavaScript procedures, and tasks.
- Strong ability to design and develop workflows in Snowflake on at least one cloud platform (preferably AWS).
- Apply Snowflake programming and ETL experience to write Snowflake SQL and maintain a complex, internally developed reporting system.
- Preferably, knowledge of ETL activities such as data processing from multiple source systems.
- Extensive knowledge of query performance tuning; apply knowledge of BI tools.
- Manage time effectively: accurately estimate effort for tasks, meet agreed-upon deadlines, and effectively juggle ad-hoc requests and longer-term projects.

Snowflake performance specialist:
- Familiar with zero-copy cloning and using Time Travel features to clone a table.
- Familiar with reading the Snowflake query profile, understanding what each step does, and identifying performance bottlenecks from it.
- Understanding of when a table needs to be clustered and choosing the right cluster key as part of table design to help query optimization.
- Working with materialized views and the benefits-versus-cost trade-off.
- How Snowflake micro-partitions are maintained and the performance implications with respect to micro-partitions and pruning.
- Horizontal vs. vertical scaling and when to use each; the concept of multi-cluster warehouses and autoscaling.
- Advanced SQL knowledge, including window functions and recursive queries, and the ability to understand and rewrite complex SQL as part of performance optimization.

Additional Responsibilities: Domain - Data Warehousing, Business Intelligence
Work Location: Bhubaneswar, Bangalore, Hyderabad, Pune

Technical and Professional Requirements:
Mandatory skills: Snowflake
Desired skills: Teradata/Python (not mandatory)
Preferred Skills: Cloud Platform-Snowflake; Technology-OpenSystem-Python - OpenSystem
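As a point of reference for the Time Travel, zero-copy cloning, and query-history features this listing emphasizes, below is a minimal sketch driven from Python with the snowflake-connector-python package; the connection parameters and the ORDERS table are hypothetical.

```python
# Minimal sketch of Snowflake Time Travel, zero-copy cloning, and query history
# from Python. Credentials and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical
    user="my_user",
    password="********",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
cur = conn.cursor()

# Time Travel: query the table as it looked 30 minutes (1800 seconds) ago.
cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -1800)")
print("Row count 30 minutes ago:", cur.fetchone()[0])

# Zero-copy clone: an instant copy that shares micro-partitions with the source.
cur.execute("CREATE OR REPLACE TABLE ORDERS_BACKUP CLONE ORDERS")

# Inspect recent statements via the QUERY_HISTORY table function.
cur.execute("""
    SELECT query_text, total_elapsed_time
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    ORDER BY start_time DESC
    LIMIT 5
""")
for query_text, elapsed_ms in cur.fetchall():
    print(elapsed_ms, query_text[:80])

cur.close()
conn.close()
```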

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Gurugram

Work from Office

Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements:
Primary skills: Technology-Machine Learning-Python
Preferred Skills: Technology-Machine Learning-Python

Posted 1 week ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Hyderabad, Bengaluru, Greater Noida

Work from Office

Role & responsibilities:
- Develop, implement, and optimize end-to-end data pipelines on the Snowflake platform; design and maintain ETL workflows to enable seamless data processing across systems.
- Data transformation with PySpark: leverage PySpark for data transformations within the Snowflake environment; implement complex data cleansing, enrichment, and validation processes using PySpark to ensure the highest data quality.
- Collaboration: work closely with cross-functional teams to design data solutions aligned with business requirements; engage with stakeholders to understand business needs and translate them into technical solutions.
- Optimization: continuously monitor and optimize data storage, processing, and retrieval performance in Snowflake; leverage Snowflake's capabilities for scalable data storage and processing to ensure efficient performance.

Preferred candidate profile:
- 5 to 7 years of experience as a Data Engineer, with a strong emphasis on Snowflake.
- Proven experience in designing, implementing, and optimizing data warehouses on the Snowflake platform.
- Expertise in PySpark for data processing and analytics.
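To illustrate the PySpark-to-Snowflake pattern described above, here is a minimal sketch of a cleansing and enrichment step that writes curated data to Snowflake; the S3 path, connection options, and table name are hypothetical, and the Spark-Snowflake connector is assumed to be available on the cluster.

```python
# Minimal sketch: PySpark cleansing/enrichment landing curated data in Snowflake.
# Paths, credentials, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

raw = spark.read.parquet("s3://raw-zone/orders/")  # hypothetical source path

curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount") > 0)                  # basic validation
       .withColumn("order_date", F.to_date("order_ts"))    # enrichment
       .withColumn("amount_usd", F.round(F.col("order_amount"), 2))
)

sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",  # hypothetical account
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "LOAD_WH",
}

(curated.write.format("snowflake")     # assumes the spark-snowflake connector
        .options(**sf_options)
        .option("dbtable", "CURATED_ORDERS")
        .mode("overwrite")
        .save())
```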

Posted 1 week ago

Apply

10.0 - 13.0 years

30 - 32 Lacs

Bengaluru

Remote

Role & responsibilities

Title: Team Lead - Data Integration & Management
Employer Name: Bridgetree
Experience Required: 10 - 13 years

Job Overview: The Team Lead - Data Integration & Management is responsible for designing and implementing complex database architectures to support enterprise applications, creating and maintaining data models, schemas, and documentation, optimizing SQL queries and database performance, developing database security standards and implementing data protection measures, collaborating with development teams to ensure proper database integration, providing technical leadership and mentorship to junior database professionals, establishing data governance procedures and best practices, troubleshooting and resolving complex data issues, and evaluating and recommending database technologies and tools.

Must Haves:
- 10+ years of experience with SQL and relational database management systems
- Expert knowledge of SQL Server, PostgreSQL, or other major RDBMS
- Strong understanding of database design principles and normalization
- Experience with database performance tuning and query optimization
- Proficiency in writing complex SQL queries, stored procedures, and functions
- Knowledge of data warehousing concepts and ETL processes
- Knowledge of at least one ETL tool (SSIS, Informatica, RPDM)
- Strong analytical and problem-solving abilities
- Excellent communication and documentation skills
- Ability to translate business requirements into technical specifications
- Experience working in Agile development environments
- Experience working on high-demand projects

Overall Skills Needed:
- SQL and relational database management systems
- Database design and normalization
- Database performance tuning and query optimization
- SQL queries, stored procedures, and functions
- Data warehousing concepts and ETL processes
- ETL tools (SSIS, Informatica, RPDM)
- Analytical and problem-solving skills
- Communication and documentation skills
- Translating business requirements into technical specifications
- Agile development experience
- High-demand project experience

Key Responsibilities:
- Design and implement complex database architectures to support enterprise applications
- Create and maintain data models, schemas, and documentation
- Optimize SQL queries and database performance
- Develop database security standards and implement data protection measures
- Collaborate with development teams to ensure proper database integration
- Provide technical leadership and mentorship to junior database professionals
- Establish data governance procedures and best practices
- Troubleshoot and resolve complex data issues
- Evaluate and recommend database technologies and tools

Experience with Volume/Scale of Operations: high-demand project

Additional Position Requirements:
- PostgreSQL (preferred): basic working knowledge is desirable; not mandatory but an added advantage.
- Strong data warehousing concepts: a solid understanding of data warehouse architecture is essential.
- ETL tool experience: proficiency in at least one ETL tool is a must, preferably SSIS, Snowflake, or Redpoint Data Management (RPDM).
- Work flexibility (EST overlap): as this role supports a high-demand project, the candidate should be comfortable working under pressure and extending their availability to match EST hours as needed.

Recruitment Process: two technical rounds
Probation Period: 3 months
Engagement Type: Full-time
Job Type: Remote
Employer Industry: Marketing and Advertising
Employer Website: https://bridgetree.com/
Employer Description: Bridgetree is a marketing analytics company founded in 1995 and headquartered in Fort Mill, South Carolina. It offers services that deliver actionable insights and develop enterprise planning to improve the efficiency, accuracy, and speed of marketing planning and execution processes. With over 28 years of success, Bridgetree builds bridges between data and customer engagement to deliver meaningful marketing outcomes.

Posted 1 week ago

Apply

6.0 - 10.0 years

40 - 45 Lacs

Gurugram

Work from Office

About the Role
We're looking for a skilled Site Reliability Engineer (SRE) with a strong foundation in Java or Python development, infrastructure automation, and application monitoring. You'll be embedded within engineering teams to drive reliability, scalability, and performance across our systems. If you have a product-first mindset, enjoy solving real-world problems at scale, and love diving into code and systems alike, we'd love to talk to you.

What You'll Work On
- Enhancing service reliability and availability by implementing robust SLI/SLO-based monitoring and alerting systems
- Collaborating with developers to optimize service performance and reliability in Java/Spring Boot applications
- Building infrastructure as code with Terraform and automating provisioning pipelines
- Conducting chaos testing, capacity planning, and failure analysis
- Working with cloud-native observability stacks (e.g., CloudWatch, Prometheus, VictoriaMetrics)
- Reporting with Snowflake and Sigma for operational insights
- Supporting scalable and resilient database operations across RDS and NoSQL systems

What We're Looking For
- 6-10 years of experience
- Strong backend coding skills - Java (preferred) or Python (not just scripting)
- Experience with monitoring tools: CloudWatch, Prometheus, VictoriaMetrics
- Familiarity with Snowflake and Sigma reporting (preferred)
- Terraform experience for IaC
- Strong database skills: RDS and any major NoSQL platform
- Deep understanding of SLIs/SLOs, alerting, capacity planning, and chaos testing
- Application/service-oriented mindset, aligned with an embedded SRE approach
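As an illustration of the SLI/SLO-based monitoring mentioned above, here is a minimal Python sketch using the prometheus_client library to expose request metrics and compute a simple error-budget figure; the metric names, simulated traffic, and 99.9% objective are assumptions.

```python
# Minimal sketch: SLI instrumentation with prometheus_client plus a simple
# error-budget calculation. Metric names and the SLO target are assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total requests", ["status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency")

SLO_TARGET = 0.999  # assumed availability objective


def handle_request() -> str:
    """Simulate a request, recording latency and status as SLIs."""
    with LATENCY.time():
        time.sleep(random.uniform(0.001, 0.005))          # simulated work
        status = "500" if random.random() < 0.002 else "200"
        REQUESTS.labels(status=status).inc()
        return status


def error_budget_remaining(total: int, errors: int) -> float:
    """Fraction of the window's error budget still unspent."""
    allowed = total * (1 - SLO_TARGET)
    return 1.0 if allowed == 0 else max(0.0, 1 - errors / allowed)


if __name__ == "__main__":
    start_http_server(8000)   # exposes /metrics for Prometheus to scrape
    total = errors = 0
    for _ in range(1000):
        if handle_request() == "500":
            errors += 1
        total += 1
    # In production the burn rate comes from PromQL; this is just the arithmetic.
    print("Error budget remaining:", error_budget_remaining(total, errors))
```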

Posted 1 week ago

Apply

3.0 - 8.0 years

0 - 1 Lacs

Bengaluru

Hybrid

Role & responsibilities:
- Strong SQL proficiency: expert knowledge of SQL syntax, query optimization techniques, and data manipulation.
- Snowflake platform expertise: in-depth knowledge of Snowflake features including stored procedures, SnowSQL, Snowpipe, stages, data sharing, governance, and security configurations.
- Data warehousing concepts: understanding of data warehousing principles, data pipelines, and ETL; working with the existing CI/CD pipeline to deploy.
- Security and compliance: implement data security measures within Snowflake.
- Experience with BI tools like SAP BOBJ and Looker.

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are seeking a skilled Lead Data Engineer with extensive experience in Snowflake, ADF, SQL, and other relevant data technologies to join our team. As a key member of our data engineering team, you will play an instrumental role in designing, developing, and managing data pipelines, working closely with cross-functional teams to drive the success of our data initiatives.

Key Responsibilities:
- Design, implement, and maintain data solutions using Snowflake, ADF, and SQL Server to ensure data integrity, scalability, and high performance.
- Lead and contribute to the development of data pipelines, ETL processes, and data integration solutions, ensuring the smooth extraction, transformation, and loading of data from diverse sources.
- Work with MSBI, SSIS, and Azure Data Lake Storage to optimize data flows and storage solutions.
- Collaborate with business and technical teams to identify project needs, estimate tasks, and set intermediate milestones to achieve final outcomes.
- Implement industry best practices related to Business Intelligence and Data Management, ensuring adherence to usability, design, and development standards.
- Perform in-depth data analysis to resolve data issues and improve overall data quality.
- Mentor and guide junior data engineers, providing technical expertise and supporting the development of their skills.
- Collaborate effectively with geographically distributed teams to ensure project goals are met in a timely manner.

Required Technical Skills:
- T-SQL, SQL Server, MSBI (SQL Server Integration Services, Reporting Services), Snowflake, Azure Data Factory (ADF), SSIS, Azure Data Lake Storage.
- Proficient in designing and developing data pipelines, data integration, and data management workflows.
- Strong understanding of cloud data solutions, with a focus on Azure-based tools and technologies.

Nice to Have:
- Experience with Power BI for data visualization and reporting.
- Familiarity with Azure Databricks for data processing and advanced analytics.

Posted 1 week ago

Apply

7.0 - 12.0 years

16 - 31 Lacs

Greater Noida

Work from Office

Role: Snowflake Data Engineer
Location: Greater Noida (note: 5 days work from office)

- 8+ years of total experience, with DBT Designer skills
- Good experience in Snowflake, AWS, and Airflow
- Good communication skills
- Good client interaction skills
- Knowledge of Agile methodology

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 32 Lacs

Bengaluru

Work from Office

SUMMARY OF ROLE
As a Data Engineer, you'll focus on analyzing our centralized financial data and implementing machine learning models that drive actionable insights. While maintaining data infrastructure is part of the role, your primary focus will be on extracting value from data through analysis and building ML models that enhance our financial intelligence platform. You'll bridge the gap between data architecture and practical applications, turning complex financial data into predictive models and analytical tools.

JOB RESPONSIBILITIES
- Analyze financial datasets to identify patterns, trends, and relationships that can drive machine learning applications
- Design and implement ML models for financial forecasting, anomaly detection, and predictive analytics
- Transform raw financial data into structured formats optimized for analysis and model training
- Perform exploratory data analysis to uncover insights and opportunities for new analytical products
- Develop and optimize data pipelines in Snowflake to support analytical workloads and ML model training
- Create and maintain data models that enable effective cross-client benchmarking and comparative analytics
- Implement data validation processes to ensure data quality for analysis and model training
- Collaborate with product teams to translate business requirements into technical solutions
- Document methodologies, analyses, and modeling approaches
- Monitor and improve model performance through continuous evaluation and refinement

QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
- 5+ years of experience in data engineering with a strong focus on data analysis and machine learning implementation
- Proven experience analyzing complex datasets and implementing ML models based on findings
- Expert-level proficiency in Python and its data analysis/ML libraries (pandas, NumPy, scikit-learn, TensorFlow/PyTorch)
- Strong SQL skills and experience with Snowflake or similar cloud data warehouse technologies
- Experience with ETL/ELT processes and data pipeline development
- Familiarity with data lake architectures and best practices
- Experience implementing a modern data warehouse with star schema, facts and dimensions, snowflake schema, and Data Vault
- Proven expertise in building scalable, auditable, state-of-the-art solutions using a modern data stack
- Understanding of statistical analysis techniques and their application to financial data
- Experience implementing and optimizing machine learning models for production environments
- Ability to communicate complex technical concepts to non-technical stakeholders
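To illustrate the financial forecasting work described above, here is a minimal Python sketch that builds lag features over a monthly revenue series and fits a scikit-learn gradient-boosting baseline; the CSV layout and column names are hypothetical.

```python
# Minimal sketch: a forecasting baseline on monthly financial data using lag
# features and gradient boosting. The CSV layout and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("monthly_revenue.csv", parse_dates=["month"]).sort_values("month")

# Lag features turn the series into a supervised learning problem.
for lag in (1, 2, 3, 12):
    df[f"lag_{lag}"] = df["revenue"].shift(lag)
df = df.dropna()

feature_cols = [c for c in df.columns if c.startswith("lag_")]
train, test = df.iloc[:-6], df.iloc[-6:]   # hold out the last 6 months

model = GradientBoostingRegressor(random_state=0)
model.fit(train[feature_cols], train["revenue"])

pred = model.predict(test[feature_cols])
print("MAE on hold-out months:", mean_absolute_error(test["revenue"], pred))
```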

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 30 Lacs

Chennai

Hybrid

Role: Power BI Architect
Experience: 10+ years
Location: Chennai (willing to relocate within the Tamil Nadu region is fine)
Looking for immediate joiners only.

Job Description:
- Bachelor's degree in computer science, information systems, or a related field (or equivalent experience).
- At least 7+ years of proven experience in developing Power BI solutions, including data modeling and ETL processes.
- Designed or architected solutions using Power BI connected to Snowflake or a data lake.
- Experience with performance tuning, data modeling, and DAX optimization in that context.
- Exposure to enterprise-level reporting, preferably with large datasets and cloud data platforms.
- Strong proficiency in DAX and Power Query.
- Experience with SQL and relational databases.
- Understanding of data warehousing and dimensional modeling concepts.
- Experience with data integration tools and techniques.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Experience with Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Databricks) is a plus.
- Experience with version control systems (e.g., Git) is a plus.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Pune

Hybrid

Senior Data Engineer

At Acxiom, our vision is to transform data into value for everyone. Our data products and analytical services enable marketers to recognize, better understand, and then deliver highly applicable messages to consumers across any available channel. Our solutions enable true people-based marketing with identity resolution and rich descriptive and predictive audience segmentation.

We are seeking an experienced Data Engineer with a versatile skill set to undertake data engineering efforts to build the next-generation ML infrastructure for Acxiom's business. As part of the Data Science and Analytics Team, the Sr. Data Engineer will partner with Data Scientists and work hands-on with Big Data technologies to build a scalable infrastructure supporting the development of machine-learning-based audience propensity models and solutions for our domestic and global businesses. The Sr. Data Engineer's responsibilities include collaborating with internal and external stakeholders to identify data ingestion, processing, ETL, and data warehousing requirements and developing appropriate solutions using modern data engineering tools in the cloud. We want this person to help us build a scalable data lake and EDW using a modern tech stack from the ground up. Success in this role comes from combining a strong data engineering background with product and business acumen to deliver scalable data pipeline and database solutions that can enable and support a high-performance, large-scale modeling infrastructure at Acxiom. The Sr. Data Engineer will be a champion of the latest cloud database technologies and data engineering tools and will lead by example in influencing adoption of and migration to the new stack.

What you will do:
- Partner with ML architects and data scientists to drive POCs to build a scalable, next-generation model development, model management, and governance infrastructure in the cloud
- Be a thought leader and champion for adoption of new cloud-based database technologies and enable migration to the new cloud-based modeling stack
- Collaborate with other data scientists and team leads to define project requirements and build the next-generation data source ingestion, ETL, data pipelining, and data warehousing solutions in the cloud
- Build data engineering solutions by developing a strong understanding of business and product data needs
- Manage environment security permissions and enforce role-based compliance
- Build expert knowledge of the various data sources brought together for audience propensity solutions: survey/panel data, third-party data (demographics, psychographics, lifestyle segments), media content activity (TV, digital, mobile), and product purchase or transaction data, and develop solutions for seamless ingestion and processing of that data
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
- Contribute to the design and architecture of services across the data landscape
- Participate in development of the integration team, contributing to reviews of methodologies, standards, and processes
- Contribute to comprehensive internal documentation of designs and service components

Required Skills:
- Background in data pipelining, warehousing, and ETL development solutions for data science and other Big Data applications
- Experience with distributed, columnar, and/or analytics-oriented databases or distributed data processing frameworks
- Minimum of 4 years of experience with cloud databases: Snowflake, Azure SQL Database, AWS Redshift, Google Cloud SQL, or similar
- Experience with NoSQL databases such as MongoDB, Cassandra, or similar is nice to have
- Snowflake and/or Databricks certification preferred
- Minimum of 3 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases, data lake, and data warehouse solutions
- Minimum of 3 years of hands-on experience with Big Data technologies such as Hadoop, Spark, PySpark, Spark SQL, Hive, Pig, and Oozie, and streaming technologies such as Kafka, Spark Streaming, ingestion APIs, Unix shell/Perl scripting, etc.
- Strong programming skills using Java, Python, PySpark, Scala, or similar
- Experience with public cloud architectures, their pros/cons, and migration considerations
- Experience with container-based application deployment frameworks (Kubernetes, Docker, ECS/EKS, or similar)
- Experience with data visualization tools such as Tableau, Looker, or similar
- Outstanding troubleshooting, attention to detail, and communication skills (verbal/written) in a fast-paced setting
- Bachelor's degree in Computer Science or a relevant discipline, or 7+ years of relevant work experience
- Solid communication skills: demonstrated ability to explain complex technical issues to technical and non-technical audiences
- Strong understanding of the software design/architecture process
- Experience with unit testing and data quality checks
- Building infrastructure-as-code for public cloud using Terraform
- Experience in a DevOps engineering or equivalent role
- Experience developing, enhancing, and maintaining CI/CD automation and configuration management using tools such as Jenkins, Snyk, and GitHub

What will set you apart (preferred skills):
- Ability to work in white space and develop solutions independently
- Experience building ETL pipelines with health claims data is a plus
- Prior experience with cloud-based ETL tools such as AWS Glue, AWS Data Pipeline, or similar
- Experience building real-time and streaming data pipelines is a plus
- Experience with MLOps tools such as Apache MLflow/Kubeflow is a plus
- Exposure to end-to-end ML platforms such as AWS SageMaker, Azure ML Studio, Google AI/ML, DataRobot, Databricks, or similar is a plus
- Experience with ingestion, processing, and management of third-party data

Posted 1 week ago

Apply

14.0 - 24.0 years

30 - 40 Lacs

Pune, Bengaluru

Hybrid

Job Brief: Data Engineering PM

Overview
We are looking for a strong and dynamic Data Engineering Project Manager with strong experience in production support, people management in a support environment, and mid-to-senior-level experience. This is an exciting opportunity to be part of our transformation program, moving current functionality (including SSAS) from our Microsoft data warehouse to our cloud data platform, Snowflake.

Responsibilities
- Oversee the entire data management support function
- Responsible for strategy, planning, resourcing, and stakeholder management
- Point of contact for escalations and cross-functional coordination
- Client relationship management, shift management, and escalation handling
- Work with architecture teams; hold technical discussions with cross-functional teams
- Provide technical leadership; guide and suggest best practices
- Proactive issue resolution

Requirements and Experience
- Technical expertise in AWS data services, Python, Scala, and SQL
- Manage data pipeline maintenance; resolve production issues/tickets
- Data pipeline monitoring and alerting
- Strong knowledge of, or ability to rapidly adopt, our core languages for data engineering: Python, SQL, and Terraform
- Knowledge of analytics platforms like Snowflake and data transformation tools like dbt, Scala, AWS Lambda, and Fivetran
- A good understanding of CI/CD and experience with one of the CI/CD tools: Azure DevOps, GitHub, GitLab, or Jenkins
- Sufficient familiarity with SQL Server, SSIS, and SSAS to facilitate understanding of the current system
- Strong knowledge of the ITIL/ITSM framework

Location and Duration
The location will be offshore (India), primarily Bangalore or Pune.

Posted 1 week ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Work from Office

Key Responsibilities
- Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
- Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
- Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
- Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability.
- Orchestrate workflows and pipeline dependencies using Apache Airflow.
- Design and support data lake architecture for raw and curated data zones.
- Collaborate with data analysts, scientists, and product teams to ensure availability and quality of data.
- Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
- Follow best practices in CI/CD, testing, data governance, and security standards.

Required Skills
- 5 - 7 years of professional experience in data engineering roles.
- Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
- Proficient in writing modular SQL transformations and data modeling using DBT.
- Deep understanding of Snowflake warehousing: performance tuning, cost optimization, and security.
- Experience with Airflow for pipeline orchestration and DAG management.
- Familiarity with designing and implementing data lake solutions.
- Proficient in Python and/or SQL.

Send profiles to payal.kumari@nam-it.com

Regards,
Payal Kumari
Senior Executive Staffing
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
Email: payal.kumari@nam-it.com
Website: www.nam-it.com
USA | CANADA | INDIA
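For reference, here is a minimal sketch of how Airflow might orchestrate the Fivetran-to-DBT-to-Snowflake flow this listing describes; the DAG id, dbt project path, and placeholder ingestion command are assumptions, not a specific deployment.

```python
# Minimal sketch: an Airflow DAG sequencing ingestion, DBT transformations in
# Snowflake, and DBT tests. Paths and commands are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    trigger_ingestion = BashOperator(
        task_id="trigger_fivetran_sync",
        bash_command="echo 'trigger Fivetran connector sync here'",  # placeholder
    )

    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )

    # Ingest first, then transform, then validate.
    trigger_ingestion >> dbt_run >> dbt_test
```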

Posted 1 week ago

Apply

6.0 - 10.0 years

11 - 19 Lacs

Noida

Hybrid

QA Automation Engineer

As a Senior QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.

Responsibilities
- Develop and implement automation frameworks: design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
- Test strategy and execution: define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
- Data validation: implement automated tests to validate data consistency, accuracy, completeness, and transformation logic.
- Performance testing: ensure that the data warehouse systems meet performance benchmarks through automation tools and load testing strategies.
- Collaborate with teams: work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
- Continuous integration: integrate automated tests into the CI/CD pipelines, ensuring that testing is part of the deployment process.
- Defect tracking and reporting: use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
- Test data management: develop strategies for handling large volumes of test data while maintaining data security and privacy.
- Tool and technology evaluation: stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.

Requirements and Skills
- At least 6+ years of experience and a solid understanding of data warehousing concepts (ETL, OLAP, data marts, Data Vault, star/snowflake schemas, etc.).
- Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing.
- Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations.
- Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes.
- Performance testing experience.
- Experience with version control systems like Git.
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
- Strong communication and collaboration skills.
- Attention to detail and a passion for delivering high-quality solutions.
- Ability to work in a fast-paced environment and manage multiple priorities.
- Enthusiasm for learning new technologies and frameworks.

Experience with the following tools and technologies is desired:
- Qlik Replicate
- Matillion ETL
- Snowflake
- Data Vault warehouse design
- Power BI
- Azure cloud, including Logic Apps, Azure Functions, and ADF
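As an illustration of automated data-warehouse validation of the kind this role covers, here is a minimal pytest sketch that reconciles row counts between a staging table and its target and checks a business key for nulls; the connection details and table names are hypothetical.

```python
# Minimal sketch: automated data validation in pytest against Snowflake.
# Connection parameters and table names are hypothetical placeholders.
import pytest
import snowflake.connector


@pytest.fixture(scope="module")
def cursor():
    conn = snowflake.connector.connect(
        account="my_account", user="qa_user", password="********",
        warehouse="QA_WH", database="DW", schema="PUBLIC",
    )
    yield conn.cursor()
    conn.close()


def scalar(cur, sql):
    """Run a query and return the first column of the first row."""
    cur.execute(sql)
    return cur.fetchone()[0]


def test_row_counts_match(cursor):
    # Reconciliation: staging and target should carry the same number of rows.
    staged = scalar(cursor, "SELECT COUNT(*) FROM STG.ORDERS")
    loaded = scalar(cursor, "SELECT COUNT(*) FROM MART.FCT_ORDERS")
    assert staged == loaded, f"row count mismatch: {staged} vs {loaded}"


def test_business_key_not_null(cursor):
    # Completeness check on the business key after transformation.
    nulls = scalar(
        cursor, "SELECT COUNT(*) FROM MART.FCT_ORDERS WHERE order_id IS NULL"
    )
    assert nulls == 0, f"{nulls} rows with NULL order_id"
```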

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 22 Lacs

Pune

Work from Office

Key Responsibilities

Oversight and optimisation of data lakehouse architecture, data engineering, and pipelines:
- Understand lakehouse architectures that unify structured and semi-structured data at scale
- Strong experience implementing and monitoring job scheduling and orchestration using Airflow, Azure Data Factory, and CI/CD triggers, and with Azure Dataflows, Databricks, and Delta Lake for real-time/batch processing
- Managing schema evolution, data versioning (e.g., Delta Lake), and pipeline adaptability
- Pipeline performance tuning for latency, resource usage, and throughput optimization

Cloud Infrastructure & Automation:
- Infrastructure automation using Terraform, Azure Bicep, and AWS CDK
- Setting up scalable cloud storage (Data Lake Gen2, S3, Blob, RDS, etc.)
- Administering RBAC, secure key vault access, and compliance-driven access controls
- Tuning infrastructure and services for cost efficiency and compute optimization

Full-Stack Cloud Data Platform Design:
- Designing end-to-end Azure/AWS data platforms including ingestion, transformation, storage, and serving layers
- Interfacing with BI/AI teams to ensure data readiness, semantic modeling, and ML enablement
- Familiarity with metadata management, lineage tracking, and data catalog integration

Enterprise Readiness & Delivery:
- Experience working with MNCs and large enterprises with strict processes, approvals, and data governance
- Capable of evaluating alternative tools/services across clouds for architecture flexibility and cost-performance balance
- Hands-on with CI/CD, monitoring, and security best practices in regulated environments (BFSI, Pharma, Manufacturing)
- Lead cost-performance optimization across Azure and hybrid cloud environments
- Design modular, scalable infrastructure using Terraform/CDK/Bicep with a DevSecOps mindset
- Explore alternative cloud tools/services across compute, storage, identity, and monitoring to propose optimal solutions
- Drive RBAC, approval workflows, and governance controls in line with typical enterprise/MNC deployment security protocols
- Support BI/data teams with infrastructure tuning, pipeline stability, and client demo readiness
- Collaborate with client-side architects, procurement, and finance teams for approvals and architectural alignment

Ideal Profile
- 4 - 7 years of experience in cloud infrastructure and platform engineering
- Strong hold on Microsoft Azure, with hands-on exposure to AWS/GCP/Snowflake acceptable
- Skilled in IaC tools (Terraform, CDK), CI/CD, monitoring (Grafana, Prometheus), and cost optimization tools
- Comfortable proposing innovative, multi-vendor architectures that balance cost, performance, and compliance
- Prior experience working with large global clients or regulated environments (e.g., BFSI, Pharma, Manufacturing)

Preferred Certifications
- Microsoft Azure Administrator/Architect (Associate/Expert)
- AWS Solutions Architect / FinOps Certified
- Bonus: Snowflake, DevOps Professional, or Data Platform certifications

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Hybrid

Role: Snowflake Developer
Experience: 5 - 8 years

- Expert in Python, Snowflake SQL, and DBT
- Experience in Dagster or Airflow is a must
- Should be able to grasp the landscape quickly to test and approve merge requests from data engineers
- Data modelling and architecture-level knowledge is needed
- Should be able to establish connectivity from different source systems (e.g., SAP, Beeline) to the existing setup and take ownership of it

Posted 1 week ago

Apply

8.0 - 11.0 years

20 - 35 Lacs

Bengaluru

Work from Office

- 8+ years of experience in designing and developing enterprise data solutions.
- 3+ years of hands-on experience with Snowflake.
- 3+ years of experience in Python development.
- Strong expertise in SQL and Python for data processing and transformation.
- Experience with Spark, Scala, and Python in production environments.
- Hands-on experience with data orchestration tools (e.g., Airflow, Informatica, Automic).
- Knowledge of metadata management and data lineage.
- Strong problem-solving skills with an ability to work in a fast-paced, agile environment.
- Excellent communication and collaboration skills.

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 7+ years

About the Role
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working with Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.

Posted 1 week ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 8+ years

About the Role
We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations
- Design and implement a scalable, modular DBT architecture (models, macros, packages)
- Audit and refactor legacy SQL for clarity, efficiency, and modularity
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines
- Own Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration)
- Define and enforce coding standards, review processes, and documentation practices
- Coach junior data engineers on DBT and SQL best practices
- Provide lineage and impact analysis improvements using DBT's built-in tools and metadata

Must-Have Qualifications
- 8+ years of experience in data engineering
- Proven success in migrating legacy SQL to DBT, with visible results
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages
- Proficient in SQL performance tuning, modular SQL design, and query optimization
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery)
- Familiarity with data testing and CI/CD for analytics workflows
- Strong communication and leadership skills; comfortable working cross-functionally

Nice-to-Have
- Experience with DBT Cloud or DBT Core integrations with Airflow
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation)
- Exposure to Python (for custom Airflow operators/macros or utilities)
- Previous experience mentoring teams through modern data stack transitions

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

Role: Snowflake Developer with DBT
Location: Bangalore/Hyderabad/Pune

About the Role:
We are seeking a Snowflake Developer with a deep understanding of DBT (data build tool) to help us design, build, and maintain scalable data pipelines. The ideal candidate will have hands-on experience working with Snowflake and DBT, and a passion for optimizing data processes for performance and efficiency.

Responsibilities:
- Design, develop, and optimize Snowflake data models and DBT transformations.
- Build and maintain CI/CD pipelines for automated DBT workflows.
- Implement best practices for data pipeline performance, scalability, and efficiency in Snowflake.
- Contribute to the DBT community or develop internal tools/plugins to enhance the workflow.
- Troubleshoot and resolve complex data pipeline issues using DBT and Snowflake.

Qualifications:
- Must have a minimum of 4+ years of experience with Snowflake.
- Must have at least 1 year of experience with DBT.
- Extensive experience with DBT, including setting up CI/CD pipelines, optimizing performance, and contributing to the DBT community or plugins.
- Must be strong in SQL, data modelling, and ELT pipelines.
- Excellent problem-solving skills and the ability to collaborate effectively in a team environment.

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

About the Role:
We are seeking a skilled and detail-oriented Data Migration Specialist with hands-on experience in Alteryx and Snowflake. The ideal candidate will be responsible for analyzing existing Alteryx workflows, documenting the logic and data transformation steps, and converting them into optimized, scalable SQL queries and processes in Snowflake. The candidate should have solid SQL expertise and a strong understanding of data warehousing concepts. This role plays a critical part in our cloud modernization and data platform transformation initiatives.

Key Responsibilities:
- Analyze and interpret complex Alteryx workflows to identify data sources, transformations, joins, filters, aggregations, and output steps.
- Document the logical flow of each Alteryx workflow, including inputs, business logic, and outputs.
- Translate Alteryx logic into equivalent SQL scripts optimized for Snowflake, ensuring accuracy and performance.
- Write advanced SQL queries and stored procedures, and use Snowflake-specific features like Streams, Tasks, Cloning, Time Travel, and zero-copy cloning.
- Implement data ingestion strategies using Snowpipe, stages, and external tables.
- Optimize Snowflake performance through query tuning, partitioning, clustering, and caching strategies.
- Collaborate with data analysts, engineers, and stakeholders to validate transformed logic against expected results.
- Handle data cleansing, enrichment, aggregation, and business logic implementation within Snowflake.
- Suggest improvements and automation opportunities during migration.
- Conduct unit testing and support UAT (User Acceptance Testing) for migrated workflows.
- Maintain version control, documentation, and an audit trail for all converted workflows.

Required Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- At least 4 years of hands-on experience in designing and developing scalable data solutions using the Snowflake Data Cloud platform.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- 1+ years of experience with Alteryx Designer, including advanced workflow development and debugging.
- Strong proficiency in SQL, with 3+ years specifically working with Snowflake or other cloud data warehouses.
- Python programming experience focused on data engineering.
- Experience with data APIs and batch/stream processing.
- Solid understanding of data transformation logic: joins, unions, filters, formulas, aggregations, pivots, and transpositions.
- Experience in performance tuning and optimization of SQL queries in Snowflake.
- Familiarity with Snowflake features like CTEs, window functions, Tasks, Streams, Stages, and External Tables.
- Exposure to migration or modernization projects from ETL tools (like Alteryx/Informatica) to SQL-based cloud platforms.
- Strong documentation skills and attention to detail.
- Experience working in Agile/Scrum development environments.
- Good communication and collaboration skills.
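To illustrate one way of validating a migrated workflow as described above, here is a minimal Python sketch that compares a legacy Alteryx CSV export against the result of the rewritten Snowflake SQL using pandas; the file path, query, and connection parameters are hypothetical.

```python
# Minimal sketch: compare a legacy Alteryx output against the migrated
# Snowflake SQL result. Paths, query, and credentials are hypothetical.
import pandas as pd
import snowflake.connector

legacy = pd.read_csv("alteryx_output/monthly_summary.csv")

conn = snowflake.connector.connect(
    account="my_account", user="migration_user", password="********",
    warehouse="DEV_WH", database="ANALYTICS", schema="STAGING",
)
migrated = pd.read_sql(
    """
    SELECT region, DATE_TRUNC('month', order_date) AS month,
           SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM ORDERS
    GROUP BY region, DATE_TRUNC('month', order_date)
    """,
    conn,
)
conn.close()

# Normalise column case, types, and ordering before comparing.
legacy.columns = [c.lower() for c in legacy.columns]
migrated.columns = [c.lower() for c in migrated.columns]
legacy["month"] = pd.to_datetime(legacy["month"])
migrated["month"] = pd.to_datetime(migrated["month"])
keys = ["region", "month"]
legacy = legacy.sort_values(keys).reset_index(drop=True)
migrated = migrated.sort_values(keys).reset_index(drop=True)

pd.testing.assert_frame_equal(legacy[migrated.columns], migrated, check_dtype=False)
print("Legacy and migrated outputs match.")
```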

Posted 1 week ago

Apply

7.0 - 11.0 years

30 - 35 Lacs

Bengaluru

Hybrid

Lead Data Engineer

We're Hiring: Lead Data Engineer | Bangalore | 7 - 11 years experience
Location: Bangalore (hybrid)
Position Type: Permanent
Mode of Interview: Face to face
Experience: 7 - 11 years
Skills: Snowflake, ETL tools (Informatica/BODS/DataStage), scripting (Python/PowerShell/Shell), SQL, data warehousing
Candidates who are available for a face-to-face discussion can apply.
Interested? Send your updated CV to: radhika@theedgepartnership.com
Do connect with me on LinkedIn: https://www.linkedin.com/in/radhika-gm-00b20a254/

Skills and Qualifications (Functional and Technical)

Functional Skills:
- Team player: support peers, team, and department management.
- Communication: excellent verbal, written, and interpersonal communication skills.
- Problem solving: excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
- Partnership and collaboration: develop and maintain partnerships with business and IT stakeholders.
- Attention to detail: ensure accuracy and thoroughness in all tasks.

Technical/Business Skills:
- Data engineering: experience in designing and building data warehouses and data lakes; good knowledge of data warehouse principles and concepts; technical expertise working in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server; experience with public cloud-based data platforms, especially Snowflake and AWS.
- Data integration: expertise in the design and development of complex data pipelines and solutions using industry-leading ETL tools such as SAP BusinessObjects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), and IBM DataStage; experience with ELT tools such as DBT, Fivetran, and AWS Glue.
- SQL: expert in SQL, with development experience in at least one scripting language (Python, etc.); adept at tracing and resolving data integrity issues.
- Data architecture: strong knowledge of data architecture, data design patterns, modeling, and cloud data solutions (Snowflake, AWS Redshift, Google BigQuery).
- Data modeling: expertise in logical and physical data models using relational or dimensional modeling practices, and in high-volume ETL/ELT processes.
- Performance tuning of data pipelines and database objects to deliver optimal performance.
- Experience with GitLab version control and CI/CD processes.
- Experience working in the financial industry is a plus.

Posted 1 week ago

Apply

15.0 - 20.0 years

20 - 30 Lacs

Noida, Gurugram

Hybrid

Design architectures using Microsoft SQL Server and MongoDB. Develop ETL pipelines and data lakes. Integrate reporting tools such as Power BI, Qlik, and Crystal Reports into the data strategy. Implement AWS cloud services (PaaS, SaaS, IaaS), SQL and NoSQL databases, and data integration.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Skill required: Delivery - Adobe Analytics
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
So how do organizations sustain themselves? The key is a new operating model, one that is anchored around the customer and propelled by intelligence to deliver outstanding experiences across the enterprise at speed and at scale. Adobe Analytics is a solution for applying real-time analytics and detailed segmentation across all marketing channels. It gathers structured and unstructured customer data from online and offline sources, applies real-time analytics, and shares insights across an organization. It provides the capabilities of Web & Mobile Analytics, Marketing Analytics, and Predictive Analytics.

What are we looking for?
- Adobe Analytics & Looker
- Minimum 4 years of experience in digital analytics; must have a good understanding of the digital marketing landscape and relevant tools
- Extensive working experience with Adobe Analytics & Looker products: Reporting & Analytics, ad hoc analysis, Workspace
- Experience in SQL and Snowflake/GCP
- Ability to translate business trends into simple, actionable visuals
- Experience in Beauty/CPG/Retail
- Adaptable and flexible; ability to work well in a team
- Agility for quick learning; commitment to quality
- Written and verbal communication skills

Roles and Responsibilities:
- Identify relevant data sources and determine effective methods for data analysis
- Transform raw data from data sources into aggregate/consumable data for the visualization tool and build appropriate schemas/relations
- Design, develop, and maintain user-friendly data visualizations that align with project objectives
- Collaborate with cross-functional teams to understand project requirements and establish data-related project briefs
- Deploy dashboards to production and track regular refreshes
- Set up alerts and appropriate access controls for dashboards
- Track and report where data is not captured appropriately, especially when there are tagging gaps
- Update, create, and support databases and reports, incorporating key metrics critical for guiding strategic decisions
- Ensure data accuracy, completeness, and reliability in all reporting activities
- Continuously refine and improve existing dashboards for optimal performance and user experience
- Implement user feedback and conduct usability testing to enhance dashboard effectiveness
- Stay abreast of industry developments and incorporate innovative techniques into visualization strategies
- Drive adoption of reports

Qualification: Any Graduation

Posted 1 week ago

Apply