Home
Jobs

235 Snowflake Jobs

Filter Jobs
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set Up a Job Alert
Filter
JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 6.0 years

2 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts in Unix, Python, etc. for Extract, Load, and Transform (ELT) processes. In-depth understanding of Data Warehouse/ODS concepts and ETL modelling principles, with experience in Data Warehousing: OLTP, OLAP, dimensions, facts, and data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML shredding and installation. Familiarity with an ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to collaborate effectively in a cross-functional team environment. Good to have: exposure to the AWS/Azure data ecosystem.
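For context on the Snowpipe utility named above, here is a minimal, illustrative sketch of continuous ingestion. The bucket, stage, table, and file-format names are hypothetical, and a real external stage needs a pre-configured storage integration or credentials:

```sql
-- Hypothetical names throughout; the storage integration is assumed to exist.
CREATE FILE FORMAT csv_fmt TYPE = CSV SKIP_HEADER = 1;

CREATE STAGE raw_stage
  URL = 's3://example-bucket/landing/'   -- placeholder bucket
  STORAGE_INTEGRATION = s3_int           -- assumed pre-configured integration
  FILE_FORMAT = csv_fmt;

-- Snowpipe: auto-load new files as they arrive in the stage.
CREATE PIPE orders_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_orders
  FROM @raw_stage
  FILE_FORMAT = (FORMAT_NAME = 'csv_fmt');
```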

Posted 1 day ago

Apply

5.0 - 6.0 years

3 - 10 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts in Unix, Python, etc. for Extract, Load, and Transform (ELT) processes. In-depth understanding of Data Warehouse/ODS concepts and ETL modelling principles, with experience in Data Warehousing: OLTP, OLAP, dimensions, facts, and data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML shredding and installation. Familiarity with an ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to collaborate effectively in a cross-functional team environment. Good to have: exposure to the AWS/Azure data ecosystem.

Posted 1 day ago

Apply

5.0 - 6.0 years

3 - 13 Lacs

Delhi, India

On-site

Foundit

Key Skills: Proficient in writing SQL queries against Snowflake and working with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures. Hands-on experience with Data Build Tool (DBT) and expertise in developing scripts in Unix, Python, etc. for Extract, Load, and Transform (ELT) processes. In-depth understanding of Data Warehouse/ODS concepts and ETL modelling principles, with experience in Data Warehousing: OLTP, OLAP, dimensions, facts, and data modelling. Proven experience in gathering and analysing system requirements, with good working knowledge of XML shredding and installation. Familiarity with an ETL tool (Informatica or SSIS) and exposure to data visualization tools like Tableau or Power BI. Ability to collaborate effectively in a cross-functional team environment. Good to have: exposure to the AWS/Azure data ecosystem.

Posted 1 day ago

Apply

6.0 - 7.0 years

3 - 20 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Job Title: Data Engineer | Experience: 6 to 8 years | Location: Bangalore. Job Summary: We are seeking a skilled and motivated Data Engineer with 6-8 years of experience to join our data team. The ideal candidate will have hands-on experience in building and maintaining scalable data pipelines, working with cloud platforms (AWS or GCP), and optimizing performance in Snowflake environments. You will play a key role in transforming raw data into actionable insights for business decisions. Key Responsibilities: Design, develop, and manage highly scalable ETL/ELT workflows using modern data tools and cloud services. Build and maintain data architectures using Snowflake, PySpark, Python, AWS/GCP, SQL, and Big Data technologies. Write efficient, complex SQL queries for data transformation, aggregation, and reporting. Collaborate with data scientists, analysts, and other engineers to implement best practices in data modeling and performance tuning. Monitor and optimize data pipelines for efficiency, reliability, and accuracy. Ensure data integrity, security, and availability across systems.
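As a sketch of the kind of scheduled ELT workflow this role describes, the Snowflake stream-plus-task pattern below processes only newly arrived rows; all object names are hypothetical:

```sql
-- Track changes on the raw table (names hypothetical).
CREATE STREAM raw_orders_stream ON TABLE raw_orders;

-- A task that wakes every 15 minutes, but only runs when the stream has new rows.
CREATE TASK load_curated_orders
  WAREHOUSE = etl_wh
  SCHEDULE = '15 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  INSERT INTO curated_orders (order_id, customer_id, amount, loaded_at)
  SELECT order_id, customer_id, amount, CURRENT_TIMESTAMP()
  FROM raw_orders_stream
  WHERE METADATA$ACTION = 'INSERT';

ALTER TASK load_curated_orders RESUME;  -- tasks are created suspended
```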

Posted 1 day ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

We are seeking a highly skilled Senior Data Engineer to join our dynamic team in Bangalore. You will be responsible for designing, developing, and maintaining scalable data ingestion frameworks and ELT pipelines. The ideal candidate will have deep technical expertise in cloud platforms (especially AWS), data architecture, and orchestration tools like DBT, Apache Airflow, and Prefect. Key Responsibilities: Design, develop, and maintain scalable data ingestion frameworks. Build and manage ELT pipelines using tools such as DBT, Apache Airflow, and Prefect. Work with modern cloud data warehouses like Snowflake, Redshift, or Databricks. Integrate data pipelines with AWS services like S3, Lambda, Step Functions, and Glue. Utilize strong SQL and scripting skills for data manipulation and automation. Implement CI/CD practices for data pipelines. Ensure data integrity, quality, and performance. Collaborate with cross-functional teams to understand data requirements. Required Skills: Expertise in ELT pipelines and data ingestion frameworks. Strong knowledge of DBT, Apache Airflow, and/or Prefect. Deep technical expertise in AWS services. Experience with cloud data warehouses (e.g., Snowflake, Redshift, Databricks). Proficiency in SQL and scripting. Experience with CI/CD practices. Knowledge of data systems in the manufacturing industry is a plus. Strong problem-solving and communication skills.
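To illustrate the DBT work these pipelines involve, here is a minimal incremental DBT model (SQL plus Jinja); the file path, model, and column names are hypothetical, not taken from the posting:

```sql
-- models/marts/fct_orders.sql (hypothetical model)
{{ config(materialized='incremental', unique_key='order_id') }}

SELECT
    o.order_id,
    o.customer_id,
    o.order_total,
    o.updated_at
FROM {{ ref('stg_orders') }} AS o
{% if is_incremental() %}
-- On incremental runs, only pick up rows newer than what is already loaded.
WHERE o.updated_at > (SELECT MAX(updated_at) FROM {{ this }})
{% endif %}
```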

Posted 1 day ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Noida, Uttar Pradesh, India

On-site

Foundit

We are looking for a Snowflake Developer with deep expertise in Snowflake and DBT or SQL to help us build and scale our modern data platform. Key Responsibilities: Design and build scalable ELT pipelines in Snowflake using DBT/SQL. Develop efficient, well-tested DBT models (staging, intermediate, and marts layers). Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy. Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency. Collaborate with cross-functional teams to gather data requirements and deliver data solutions. Required Qualifications: 5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake. Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management. Strong understanding of ELT patterns and modern data stack principles. Advanced SQL skills and experience with performance tuning in Snowflake.
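The cost and performance tuning mentioned above usually comes down to warehouse and clustering settings; a hedged sketch with hypothetical warehouse and table names:

```sql
-- Right-size compute and avoid idle burn (names hypothetical).
ALTER WAREHOUSE transform_wh SET
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 60        -- suspend after 60 idle seconds
  AUTO_RESUME = TRUE;

-- Cluster a large table on common filter columns to improve partition pruning.
ALTER TABLE fct_orders CLUSTER BY (order_date, region);

-- Check how well the table is clustered on those columns.
SELECT SYSTEM$CLUSTERING_INFORMATION('fct_orders', '(order_date, region)');
```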

Posted 1 day ago

Apply

4.0 - 5.0 years

4 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Must have 3+ years of IT experience, with at least 1 year of relevant experience in Snowflake. In-depth understanding of Data Warehousing, ETL concepts, and data modeling principles. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file upload features, Time Travel, and Fail-safe. Should know Snowflake architecture. Experience in SQL is a must. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, and data governance analytics. Hands-on experience implementing large-scale data intelligence solutions around Snowflake DW. Experience in a scripting language such as Python or Scala is a must. Good experience with streaming services such as Kafka. Experience working with semi-structured data. Required Skills: Snowflake, Snowflake SQL, Snowpipe, SQL
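The Time Travel, Fail-safe, and stage features called out above look like this in practice; a brief, illustrative sketch (table names hypothetical; Fail-safe itself is Snowflake-managed recovery and is not queryable via SQL):

```sql
-- Query the table as it looked one hour ago (Time Travel).
SELECT * FROM orders AT (OFFSET => -3600);

-- Recover an accidentally dropped table from Time Travel.
UNDROP TABLE orders;

-- Zero-copy clone: an instant, storage-efficient copy for dev/testing.
CREATE TABLE orders_dev CLONE orders;
```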

Posted 1 day ago

Apply

4.0 - 5.0 years

4 - 5 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Must have 3+ years of IT experience, with at least 1 year of relevant experience in Snowflake. In-depth understanding of Data Warehousing, ETL concepts, and data modeling principles. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file upload features, Time Travel, and Fail-safe. Should know Snowflake architecture. Experience in SQL is a must. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, and data governance analytics. Hands-on experience implementing large-scale data intelligence solutions around Snowflake DW. Experience in a scripting language such as Python or Scala is a must. Good experience with streaming services such as Kafka. Experience working with semi-structured data. Required Skills: Snowflake, Snowflake SQL, Snowpipe, SQL

Posted 1 day ago

Apply

5.0 - 8.0 years

3 - 13 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Essential functions: Data analysis/profiling skills for our Enterprise Data Organization, to develop and manage data pipelines (data ingestion, transformation, storage, etc.) for an Azure/Snowflake cloud-based data analytics platform. Qualifications: Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT: load data into the database and perform transformations in the database. Ability to use analytical SQL functions. Snowflake experience. Cloud Data Warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Airflow would be a plus. Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Strong and effective communication skills (verbal and written). Strong analytical and problem-solving skills. Experience working in a matrix organization. Ability to prioritize and deliver. Results-oriented, flexible, adaptable. Works well independently and can lead a team. Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions. Ability to master new skills. Familiar with Agile practices and methodologies. Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL. Data warehouse experience (data modeling, programming). Experience working with Snowflake. Experience working in a cloud environment, preferably Microsoft Azure. Cloud Data Warehouse solutions (Snowflake, Azure DW).
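As an illustration of the ELT-in-database approach with analytical views described above, consider the sketch below (hypothetical names; note that Snowflake materialized views are limited to single-table aggregates and require Enterprise edition):

```sql
-- Transformation logic kept in the database as a view (ELT style).
CREATE VIEW v_daily_orders AS
SELECT order_date, region, SUM(amount) AS total_amount, COUNT(*) AS order_count
FROM raw_orders
GROUP BY order_date, region;

-- A hot aggregate maintained automatically as a materialized view.
CREATE MATERIALIZED VIEW mv_region_totals AS
SELECT region, SUM(amount) AS total_amount
FROM raw_orders
GROUP BY region;
```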

Posted 1 day ago

Apply

12.0 - 18.0 years

3 - 17 Lacs

Hyderabad, Telangana, India

On-site

Foundit

A results-oriented thought leader to drive the development of the data engineering practice; a trusted advisor and business partner to customers across verticals, and a consulting team leader who establishes engineering processes and skill development. Technology skills (any of the below): Data engineering: analytical data platforms, streaming, big data, EDW, data lakes, data governance, data mesh, Spark, Kafka, Snowflake (plus 2 of: AWS, GCP, Azure). Transactional databases: Redis, Cassandra, MongoDB (plus 2 of: AWS, GCP, Azure). ML and MLOps: Vertex AI, SageMaker, Dataiku, Databricks, MLflow. Essential functions/Responsibilities: Trusted Advisor: Provide direction to clients in different verticals on their selection of data and ML platforms, data strategies, expected impact, and relevant tools and methodologies that they can deploy to accelerate Big Data and advanced analytics programs. Pre-Sales and Consulting: Lead and drive technology consulting engagements and pre-sales, architecture assessments, and discovery workshops. Plan data engineering programs for Fortune 1000 enterprises and lead a team of principal consultants and data engineers who work on client accounts and drive consulting engagements to success. Technology Strategy and R&D: Create and bring to market solutions in data engineering, MLOps, and data governance. Responsible for driving innovation through research and development activities on industry trends, defining go-to-market strategies, and developing assets and solutions strategy across multiple industries. Engineering: Work with Grid Dynamics's delivery organization to ensure that the right tools, technologies, and processes are in place across the firm to transform and continuously improve the quality and efficiency of our clients' data platforms and data management processes. Business Development & Partnership: Manage relationships with key technology partners (AWS, GCP, Azure) and industry analysts. Requirements: Extensive practical experience in Big Data engineering, data governance, and cloud data platforms. Strong understanding of cloud-based architectures for data collection, data aggregation, reporting, and BI. Strong understanding of ML platforms and tooling, including open-source, cloud-native, and proprietary options. Deep domain knowledge in at least one of the following industries: Retail, Finance, Manufacturing, Healthcare, Life Sciences. Experience in managing and delivering sophisticated analytical and data engineering programs at enterprise scale. Managed key client relations worldwide and advised global technology and business leaders on innovation and data strategy. Experience with Big 4 consulting is a plus. Qualifications: Experience with enterprise architecture would be a plus.

Posted 1 day ago

Apply

12.0 - 13.0 years

24 - 46 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Urgent Hiring: Data Engineer Lead/Architect | Join in 5-6 Days | FTE. Preferred Location: Pune (1st preference) | Bangalore / Hyderabad (2nd preference). Notice Period: Immediate joiners (within 5-6 days). Type of Hire: Full-Time Employment (FTE). Experience: 12+ years. We're on the lookout for a hands-on Data Engineer Lead/Architect with a solid grip on project architecture, data engineering frameworks, and cloud platforms. If you're passionate about designing scalable pipelines and enjoy solving complex data challenges, this one's for you. Key Responsibilities: Architect and lead the development of robust, scalable data solutions across batch and streaming pipelines. Collaborate with cross-functional teams to define architecture and guide implementation. Ensure best practices in data modeling, orchestration, version control, and CI/CD. Drive technical excellence across data systems and mentor junior engineers. Must-Have Skills: Expert coding ability in Python and hands-on experience with Spark and AWS. Proficiency in Azure, Snowflake, Databricks, Ab Initio, ETL pipelines, and data warehousing. Strong database experience: SQL, NoSQL (MongoDB, Cassandra, DynamoDB), PostgreSQL, Oracle. Familiarity with RAG architecture, data structures, serialization formats (JSON, Avro, Protobuf), and big-data formats (Parquet, Iceberg). Deep understanding of orchestration tools like Airflow or AWS Step Functions. Agile methodology, TDD/BDD, and DevOps practices including Jenkins, Git, and CI/CD pipelines. Preferred Skills: Infrastructure as Code: Terraform or AWS CloudFormation. Strong documentation and communication skills. Proven ability to own and deliver large-scale data initiatives end-to-end.

Posted 1 day ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

SELECT, filtering (WHERE), GROUP BY, joins, aggregates, QUALIFY, roll-up and drill-down; SQL to extract reports by joining fact and dimension data. Advanced SQL: query optimisation, stored procedures, user-defined functions, and window functions for complex data manipulation (ROW_NUMBER, RANK, DENSE_RANK, LEAD, LAG). Snowflake features: Snowflake architecture; cloud computing and infrastructure (cloud platforms like AWS, Azure, and GCP, where Snowflake can be deployed); data modeling; database security and access control; metadata management; Time Travel; zero-copy cloning; data pipeline automation with Snowpipe.
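The window functions and QUALIFY clause listed above combine naturally in one query; an illustrative example with hypothetical table and column names, returning each customer's latest order:

```sql
SELECT
    customer_id,
    order_id,
    order_date,
    RANK()       OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rnk,
    DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS drnk,
    LAG(order_date)  OVER (PARTITION BY customer_id ORDER BY order_date) AS prev_order_date,
    LEAD(order_date) OVER (PARTITION BY customer_id ORDER BY order_date) AS next_order_date
FROM orders
-- QUALIFY filters on window-function results without a wrapping subquery.
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;
```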

Posted 1 day ago

Apply

7.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Foundit

Job Summary: As a Snowflake Senior Developer, you will be responsible for designing, developing, and implementing scalable data warehousing solutions using Snowflake. You will work closely with cross-functional teams to understand business requirements and develop data pipelines, ensuring seamless data integration and high-quality data delivery. Key Responsibilities: Design and develop Snowflake data warehouses, including data modeling, data governance, and data quality Develop and maintain complex data pipelines, including data ingestion, transformation, and loading processes Optimize Snowflake performance, including query optimization, indexing, and caching Collaborate with data scientists and analysts to enable data-driven decision-making Troubleshoot data-related issues and implement data quality checks Develop and maintain documentation for data architectures, pipelines, and processes Stay up-to-date with Snowflake features and best practices, applying knowledge to improve data warehousing solutions Technical Requirements: 7+ years of experience in data warehousing, with a focus on Snowflake Strong proficiency in Snowflake architecture, including data loading, transformation, and querying Experience with SQL, including complex queries, joins, and aggregations Knowledge of data modeling, data governance, and data quality principles Experience with data integration tools, such as Informatica, Talend, or similar Familiarity with cloud-based data platforms, including AWS, Azure, or GCP Preferred Qualifications: Snowflake certification Experience with Snowflake features, such as Snowpipe, Materialized Views, and Time Travel Knowledge of data security and compliance principles, including data masking and access control Familiarity with DevOps tools, such as Git, Jenkins, or similar Soft Skills: Strong communication and collaboration skills Ability to translate technical concepts to non-technical stakeholders Strong problem-solving skills, with attention to detail and ability to troubleshoot complex issues

Posted 1 day ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Job Description: Experience in the IT industry. Working experience building productionized data ingestion and processing pipelines in Snowflake. Strong understanding of Snowflake architecture. Fully well-versed in data warehousing concepts. Expertise in and excellent understanding of Snowflake features and the integration of Snowflake with other data processing tools. Able to create data pipelines for ETL/ELT. Excellent presentation and communication skills, both written and verbal. Ability to problem-solve and architect in an environment with unclear requirements. Able to create high-level and low-level design documents based on requirements. Hands-on experience in configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud. Awareness of data visualisation tools and methodologies. Work independently on business problems and generate meaningful insights. Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory. Should have experience implementing Snowflake best practices. Snowflake SnowPro Core Certification is a must. Roles and Responsibilities: Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc. Writing SQL queries against Snowflake and developing scripts to Extract, Load, and Transform data. Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems. Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF). Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage. Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts. Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python and PySpark. Should have some experience with Snowflake RBAC and data security. Should have good experience implementing CDC or SCD type 2. Should have good experience implementing Snowflake best practices. In-depth understanding of Data Warehouse and ETL concepts and data modelling. Experience in requirement gathering, analysis, design, development, and deployment.
Should have experience building data ingestion pipelines. Optimize and tune data pipelines for performance and scalability. Able to communicate with clients and lead a team. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc. Qualifications we seek in you! Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Snowflake Data Engineer. Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and Data Warehousing concepts. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
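Since the listing asks for SCD type-2 experience, here is one common two-step formulation in Snowflake SQL; a simplified sketch with hypothetical dimension and staging tables, not Genpact's actual implementation:

```sql
-- Step 1: close out current rows whose tracked attributes changed.
UPDATE dim_customer
SET is_current = FALSE,
    valid_to   = CURRENT_TIMESTAMP()
FROM stg_customer s
WHERE dim_customer.customer_id = s.customer_id
  AND dim_customer.is_current = TRUE
  AND (dim_customer.email <> s.email OR dim_customer.segment <> s.segment);

-- Step 2: insert a fresh current row for changed and brand-new customers
-- (changed customers no longer have an is_current row after step 1).
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL;
```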

Posted 1 day ago

Apply

8.0 - 12.0 years

8 - 12 Lacs

Pune, Maharashtra, India

On-site

Foundit

In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization. What you will do: Define and implement best practices for data modelling, schema design, and query optimization in Snowflake. Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources. Integrate data from diverse systems (databases, APIs, flat files, cloud storage, etc.) into Snowflake. Use tools like StreamSets, Informatica, or dbt to streamline data transformation processes. Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management. Manage Snowflake caching, clustering, and partitioning to improve efficiency. Analyze and resolve query performance bottlenecks. Monitor and resolve data quality issues within the warehouse. Collaborate with data analysts, data engineers, and business users to understand reporting and analytics needs. Work closely with the DevOps team on automation, deployment, and monitoring. Plan and execute strategies for scaling Snowflake environments as data volume grows. Monitor system health and proactively identify and resolve issues. Implement automation for regular tasks. Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards. Support ad hoc query requests while maintaining system performance. Create and maintain documentation related to data warehouse architecture, data flow, and processes. Provide technical support, troubleshooting, and guidance to users accessing the data warehouse. Optimize Snowflake queries and manage performance. Keep up to date with emerging trends and technologies in data warehousing and data management. Good working knowledge of the Linux operating system. Working experience with Git and other repository management solutions. Good knowledge of monitoring tools like Dynatrace and Splunk. Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines. Provide mentorship and guidance to team members in Snowflake implementation, performance tuning, and data management. Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps. Act as the point of contact for Snowflake-related queries, issues, and initiatives. What you will need to have: 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica. Experience with monitoring tools like Dynatrace and Splunk. Experience in Kubernetes cluster management. Experience with CloudWatch for monitoring and logging, and with the Linux OS. Ability to track progress against assigned tasks, report status, and proactively identify issues. Ability to present information effectively in communications with peers and the project management team. Highly organized; works well in a fast-paced, fluid, and dynamic environment. What would be great to have: Experience with EKS for managing Kubernetes clusters. Containerization technologies such as Docker and Podman. AWS CLI for command-line interactions. CI/CD pipelines using Harness. S3 for storage solutions and IAM for access management. Banking and financial services experience. Knowledge of software development life cycle best practices.
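Warehouse sizing and cost guardrails of the kind this role owns are typically set with a few statements; an illustrative sketch (monitor name, quota, and warehouse are hypothetical; multi-cluster warehouses require Enterprise edition):

```sql
-- Cap monthly credit spend and suspend compute at the limit.
CREATE RESOURCE MONITOR monthly_guard
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Attach the monitor and size the warehouse for concurrency.
ALTER WAREHOUSE reporting_wh SET
  RESOURCE_MONITOR = monthly_guard
  WAREHOUSE_SIZE = 'LARGE'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3;  -- scale out under concurrent load
```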

Posted 2 days ago

Apply

5.0 - 8.0 years

3 - 18 Lacs

Pune, Maharashtra, India

On-site

Foundit

Your specific responsibilities will include: Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices. Enable data science and analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way. Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for. Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts, and visualization developers on how to use these data models. Develop analytical data products for reusability, governance, and compliance by design. Align with organization strategy and implement a semantic layer for analytics data products. Support data stewards and other engineers in maintaining data catalogs, data quality measures, and governance frameworks. Education: B.Tech/B.S., M.Tech/M.S., or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field. Required experience: 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling, and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets). High proficiency in SQL, Python, and AWS. Good understanding and comprehension of the requirements provided by the Data Product Owner and Lead Analytics Engineer. Experience creating/adopting data models to meet requirements from Marketing, Data Science, and Visualization stakeholders. Experience with feature engineering. Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.). Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot, and low-code tools (e.g., Dataiku). Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders. Experience in analytics use cases of pharmaceutical products and vaccines. Experience in market analytics and related use cases. Preferred experience: Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines. Experience with Agile ways of working, leading or working as part of scrum teams. Certifications in AWS and/or modern data technologies. Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors. Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers, and business stakeholders. Experience with data visualization technologies (e.g., Power BI).

Posted 2 days ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Foundit

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Associate - Sr. Data Engineer (DBT + Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Job Description: Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities. Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy. Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents. Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows. Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance. Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis. Develop and maintain data documentation, best practices, and data governance protocols. Ensure data security, privacy, and compliance with organizational and regulatory guidelines. Responsibilities: Bachelor's degree in Computer Science, Data Engineering, or a related field. Experience in data engineering, with experience working with Snowflake. Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI. Strong proficiency in SQL, Python, and data modeling. Experience with data integration tools (e.g., Matillion, Talend, Informatica). Knowledge of cloud platforms such as AWS, Azure, or GCP. Excellent problem-solving skills, with a focus on data quality and performance optimization. Strong communication skills and the ability to work effectively in a cross-functional team. Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations. Understanding of data lineage and metadata management concepts, and ability to track and document data transformations using DBT's lineage capabilities. Understanding of software engineering best practices and ability to apply these principles to DBT development, including version control, code reviews, and automated testing. Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Should have good experience implementing CDC or SCD type 2. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in repository tools like GitHub/GitLab and Azure Repos. Qualifications/Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets. Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts. Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
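As a hedged sketch of the Snowflake Cortex AI work described above: Cortex exposes LLM functions directly in SQL (for example SNOWFLAKE.CORTEX.SENTIMENT and SNOWFLAKE.CORTEX.COMPLETE). The example assumes Cortex is enabled in the account and the named model is available in its region; the table and columns are hypothetical:

```sql
SELECT
    ticket_id,
    -- Sentiment score between -1 and 1 for each ticket.
    SNOWFLAKE.CORTEX.SENTIMENT(ticket_text) AS sentiment_score,
    -- LLM completion; the model name is a placeholder subject to regional availability.
    SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        'Summarize this support ticket in one sentence: ' || ticket_text
    ) AS summary
FROM support_tickets
LIMIT 10;
```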

Posted 2 days ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Foundit

Ready to build the future with AI? At Genpact, we don't just keep up with technology - we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - SQL/SSIS/SSRS! Responsibilities: Designing and implementing robust applications. Good knowledge of implementing SCD types 1, 2, and 3 in SSIS. Good to have knowledge of SSRS. Debugging applications to ensure low latency and high availability. Writing optimized custom SQL queries. Integrate user-facing elements into applications. Test and debug SSIS- and SQL-related programs. Improve the functionality of existing systems. Implement security and data protection solutions. Must be capable of writing SQL queries for validating dashboard outputs. Must be able to translate visual requirements into detailed technical specifications. Good understanding of and exposure to Git, Bamboo, Confluence, and Jira. Good with DataFrames and ANSI SQL using pandas. Team player with a collaborative approach and excellent communication skills. Good to have knowledge of AWS cloud and Snowflake. Qualifications we seek in you! Minimum qualifications: BE/B.Tech/MCA. Excellent written and verbal communication skills. Candidate should have relevant experience. Understand user requirements and specifications for SSIS packages, SSRS reports, and SQL Server Agent jobs to develop new and modify existing packages/reports/jobs. Should have in-depth knowledge of SSIS, including data pulls from multiple sources and databases, SFTP, SCD, and package optimization. Should be able to read and write complex queries, with the ability to create database objects (tables, views, stored procedures, user-defined functions). Should have experience in creating matrix and tabular reports using SSRS, including reports with multiple tabs. Should have an understanding of creating connection strings, data sources, and datasets. Basics of designing databases; create and implement database systems based on the end user's requirements. Preferred qualifications: Creative ways of solving problems efficiently and logically. Good understanding of data warehouse concepts. Good to have knowledge of Python (pandas). Good to have knowledge of Snowflake DWH and BI tools (visualization).
Knowledge of database performance tuning. Learning attitude; flexible with projects and timings. Good communication and presentation skills. Why join Genpact? Lead AI-first transformation - build and scale AI solutions that redefine industries. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills. Grow with the best - learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI - work in an environment where governance, transparency, and security are at the core of everything we build. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
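The SCD logic this role implements in SSIS is often staged as T-SQL; as a contrast to the type-2 pattern shown earlier, here is a minimal type-1 (overwrite-in-place) MERGE sketch with hypothetical tables:

```sql
-- SCD type 1: overwrite changed attributes, insert new rows, keep no history.
MERGE dbo.DimCustomer AS d
USING dbo.StgCustomer AS s
    ON d.CustomerID = s.CustomerID
WHEN MATCHED AND (d.Email <> s.Email OR d.Segment <> s.Segment) THEN
    UPDATE SET d.Email = s.Email, d.Segment = s.Segment
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Email, Segment)
    VALUES (s.CustomerID, s.Email, s.Segment);
```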

Posted 2 days ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - Sr. Data Engineer (DBT + Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities. Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy. Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through chatbot agents. Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows. Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance. Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis. Develop and maintain data documentation, best practices, and data governance protocols. Ensure data security, privacy, and compliance with organizational and regulatory guidelines. Responsibilities: Bachelor's degree in Computer Science, Data Engineering, or a related field. Experience in data engineering, with experience working with Snowflake. Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI. Strong proficiency in SQL, Python, and data modeling. Experience with data integration tools (e.g., Matillion, Talend, Informatica). Knowledge of cloud platforms such as AWS, Azure, or GCP. Excellent problem-solving skills, with a focus on data quality and performance optimization. Strong communication skills and the ability to work effectively in a cross-functional team. Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations. Understanding of data lineage and metadata management concepts, and ability to track and document data transformations using DBT's lineage capabilities. Understanding of software engineering best practices and ability to apply these principles to DBT development, including version control, code reviews, and automated testing. Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Should have good experience implementing CDC or SCD type 2. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in repository tools like GitHub/GitLab and Azure Repos. Qualifications/Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT + Snowflake skillsets. Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tool, Data Warehousing concepts. Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 2 days ago

Apply

4.0 - 9.0 years

2 - 20 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Qualifications we are looking for: Master's/Bachelor's degree in Computer Science, Electrical Engineering, Information Systems, or another technical discipline; advanced degree preferred. Minimum of 7+ years of software development experience (with a concentration in data-centric initiatives), with demonstrated expertise in leveraging standard development best-practice methodologies. Minimum of 4+ years of experience in Hadoop using core Java programming, Spark, Scala, Hive, and Golang. Expertise in the object-oriented programming language Java. Experience using CI/CD processes, version control, and bug tracking tools. Experience in handling very large data volumes in real-time and batch modes. Experience with automation of job execution and validation. Strong knowledge of database concepts. Strong team player. Strong communication skills with a proven ability to present complex ideas and document them in a clear and concise way. Quick learner; self-starter, detailed and in-depth.

Posted 3 days ago

Apply

7.0 - 11.0 years

3 - 14 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Design, develop, and maintain efficient and scalable Snowflake data warehouse solutions on AWS. Build robust ETL/ELT pipelines using SQL, Python, and AWS services (e.g., Glue, Lambda, S3). Collaborate with data analysts, engineers, and business teams to gather requirements and design data models aligned with business needs. Optimize Snowflake performance through best practices in clustering, partitioning, caching, and query tuning. Ensure data quality, accuracy, and completeness across data pipelines and warehouse processes. Maintain documentation and enforce best practices for data architecture, governance, and security. Continuously evaluate tools, technologies, and processes to improve system reliability, scalability, and performance. Ensure compliance with relevant data privacy and security regulations (e.g., GDPR, CCPA). Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum 5 years of experience in data engineering, with at least 3 years of hands-on experience with Snowflake.
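A typical S3-to-Snowflake load of the kind described above wires together a storage integration, a stage, and COPY INTO; an illustrative sketch in which the role ARN, bucket, and table names are placeholders:

```sql
-- Trust relationship to the S3 bucket (ARN and bucket are placeholders).
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-load'
  STORAGE_ALLOWED_LOCATIONS = ('s3://example-bucket/exports/');

CREATE STAGE s3_exports
  URL = 's3://example-bucket/exports/'
  STORAGE_INTEGRATION = s3_int;

-- Bulk-load Parquet files, mapping columns by name.
COPY INTO raw_events
FROM @s3_exports
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```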

Posted 3 days ago

Apply

6.0 - 8.0 years

2 - 13 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Responsibilities: Design, develop, and maintain efficient and scalable Snowflake data warehouse solutions on AWS. Build robust ETL/ELT pipelines using SQL, Python, and AWS services (e.g., Glue, Lambda, S3). Collaborate with data analysts, engineers, and business teams to gather requirements and design data models aligned with business needs. Optimize Snowflake performance through best practices in clustering, partitioning, caching, and query tuning. Ensure data quality, accuracy, and completeness across data pipelines and warehouse processes. Maintain documentation and enforce best practices for data architecture, governance, and security. Continuously evaluate tools, technologies, and processes to improve system reliability, scalability, and performance. Ensure compliance with relevant data privacy and security regulations (e.g., GDPR, CCPA). Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum 6 years of experience in data engineering, with at least 3 years of hands-on experience with Snowflake. Strong experience working with AWS services such as S3, Glue, Lambda, Redshift, and IAM. Proficient in SQL and Python for data transformation and scripting. Solid understanding of data modeling principles (star/snowflake schema, normalization/denormalization). Experience in performance tuning and Snowflake optimization techniques. Excellent problem-solving skills and ability to work independently or as part of a team. Strong communication skills, both written and verbal.

Posted 3 days ago

Apply

8.0 - 13.0 years

8 - 13 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Lead the design and implementation of enterprise-level scalable, secure, and high-performance cloud data storage solutions to meet business needs and data growth. Architect and oversee the development of complex Azure Data Factory workflows, Data Lake Storage, and secure cloud integration using Azure Private Link. Design and implement scalable, secure, and high-performance data storage solutions in the cloud, ensuring optimal accessibility and reliability. Develop zone-redundant and multi-region data storage and data warehouse (DW) solutions on Azure, along with the design and implementation of storage lifecycle policies. Integrate cloud platforms (Azure or AWS) with third-party SaaS data platforms like Snowflake, Dataiku, Fivetran, DBT Cloud, etc., ensuring seamless data flow, secure networking, and Single Sign-On (SSO) integration. Securely integrate and manage data and applications between ADI's data centers and various cloud environments. Ensure compliance with data governance policies, including data encryption, access control, and other security measures. Monitor, analyze, and improve cloud data systems performance at scale, identifying optimization opportunities and implementing performance tuning practices across the infrastructure. Deploy Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and serverless solutions on Azure and AWS. Utilize Terraform, ARM, and CloudFormation Templates (CFT) to implement Infrastructure as Code (IaC) for deployments in Azure and AWS. Design and develop data pipelines, automation, and visualization tools to enable modern data and analytics solutions. Partner with the data platform and infrastructure teams to architect and connect high-quality, resilient data feeds from across the enterprise. Manage the data platform from a change management, availability, and troubleshooting perspective. Ability to automate deployments and maintain high levels of uptime. Qualifications: 8+ years of experience in software engineering, including 5+ years working with data engineering technologies and cloud technologies. Degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field. Strong expertise in Azure Data Factory, Data Lake Storage, and leveraging Azure Private Link for secure, enterprise-level cloud integrations. Exposure to Azure Data Fabric or Databricks. Hands-on knowledge of Azure, AWS, and SaaS-based data platforms. Proven experience deploying, managing, and supporting cloud-based data platforms. Exceptional problem-solving skills; able to troubleshoot and optimize cloud infrastructure and data systems. In-depth understanding of data governance, security protocols, compliance frameworks, and implementing encryption and access control measures. Experience with automation and CI/CD pipelines in cloud environments. Experience in leading cross-functional teams, collaborating with business stakeholders, and translating technical requirements into scalable solutions. Experience with advanced monitoring and logging tools such as Azure Insights, AWS CloudWatch, and Datadog.

Posted 3 days ago

Apply

5.0 - 10.0 years

2 - 9 Lacs

Hyderabad, Telangana, India

On-site

Foundit

Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT: load data into the database and perform transformations in the database. Ability to use analytical SQL functions. Snowflake experience. Cloud Data Warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Airflow would be a plus. Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Strong and effective communication skills (verbal and written). Strong analytical and problem-solving skills. Experience working in a matrix organization. Ability to prioritize and deliver. Results-oriented, flexible, adaptable. Works well independently and can lead a team. Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions. Ability to master new skills. Familiar with Agile practices and methodologies. Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL. Data warehouse experience (data modeling, programming). Experience working with Snowflake. Experience working in a cloud environment, preferably Microsoft Azure. Cloud Data Warehouse solutions (Snowflake, Azure DW).

Posted 3 days ago

Apply

8.0 - 13.0 years

2 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Foundit

Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT: load data into the database and perform transformations in the database. Ability to use analytical SQL functions. Snowflake experience. Cloud Data Warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Airflow would be a plus. Preferred candidate profile: Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Strong and effective communication skills (verbal and written). Strong analytical and problem-solving skills. Experience working in a matrix organization. Proactive problem solver. Ability to prioritize and deliver. Results-oriented, flexible, adaptable. Works well independently and can lead a team. Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions. Ability to master new skills. Familiar with Agile practices and methodologies. Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL. Data warehouse experience (data modeling, programming). Experience working with Snowflake. Experience working in a cloud environment, preferably Microsoft Azure. Cloud Data Warehouse solutions (Snowflake, Azure DW).

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies