
2470 Snowflake Jobs - Page 35

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Education: Bachelor of Engineering, BSc, BCA, MCA, MTech, MSc
Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role is to ensure effective design, development, validation, and support activities, so that our clients are satisfied with the high levels of service in the technology domain. You will gather requirements and specifications to understand client needs in detail and translate them into system requirements. You will play a key role in the overall estimation of work requirements, providing the right information on project estimations to Technology Leads and Project Managers. You will be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities: Knowledge of design principles and fundamentals of architecture. Understanding of performance engineering. Knowledge of quality processes and estimation techniques. Basic understanding of the project domain. Ability to translate functional and non-functional requirements into system requirements. Ability to design and code complex programs. Ability to write test cases and scenarios based on the specifications. Good understanding of SDLC and agile methodologies. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills along with an ability to collaborate.

Technical and Professional Skills:
Primary skills: Technology - Data on Cloud - DataStore - Snowflake
Preferred skills: Technology - Data on Cloud - DataStore - Snowflake

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities: Responsible for designing and developing ETL jobs, guiding data engineers to implement data pipelines based on architectural requirements, supporting data test engineers in automating data validation jobs and configuring them as part of the pipeline, evaluating tools, and building proofs of concept.

Must have:
- 12+ years of overall experience, with a minimum of 6 years of solid experience in Data Warehouse/Data Lake ETL/ELT jobs
- Experience designing ETL pipelines for complex Data Warehousing/Data Lake projects
- Strong knowledge of SQL Server and Oracle databases, or any other equivalent RDBMS
- Ability to evaluate COTS/open-source products and recommend right-fit solutions for a given customer challenge
- Communicates technical architecture to other technical teams and leadership teams
- Works with Data Stewards and Business Analysts to understand and define data quality thresholds
- Good understanding of Data Quality, Data Modernization, and Data Governance
- Understanding of metadata management, master data management, and reference data management
- Experience using cloud data platforms such as Snowflake, Kafka, Databricks, etc.
- Hands-on experience with big data analytics tools such as Hadoop, Apache Spark, Hive, Presto, etc.
- Understanding of blob-based storage options in the cloud (AWS S3, Azure Blob Storage, etc.)
- Knowledge of cloud-based data services like RDS, Data Factory, AWS Glue, etc.
- Exposure to Power BI, Tableau, Qlik, data.world
- Experienced in automated data validation using RightData, DataGaps, iCEDQ, or equivalent products

Preferred for NextGen:
- AI/ML knowledge for building intelligent data validations and eliminating data anomalies
- Understanding of DataOps, Data Fabric, and Data Mesh concepts
- Experience with data orchestration tools such as Apache Airflow
- Certification in a relevant area of expertise is an added advantage

Three must-haves: Data Engineering, DWH, and Data Lake design; cloud data stack (Azure, AWS), MDM, Data Governance; good communication.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

6 - 16 Lacs

Bengaluru

Remote

5+ years of experience with strong SQL query and development skills. Hands-on experience with ETL tools. Experience working in the healthcare industry with PHI/PII.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

27 - 35 Lacs

Chennai

Work from Office

• Experience in data modeling for OLTP, OLAP, star schema, and snowflake schema
• SAP HANA Studio, SAP XSC/XSA, SAP SLT, SAP Data Services
• SAP BusinessObjects Data Services, SSIS, Power BI, BusinessObjects
• Design and develop data models using SAP XSC/XSA

Posted 3 weeks ago

Apply

3.0 - 4.0 years

0 - 1 Lacs

Bengaluru

Remote

Cloud & Data Engineering Specialist (Remote, Contract)
Duration: 6 months
Experience level: 3-4 years (must be able to work according to the JD)
Work timings: 2:30 pm to 11:30 pm IST

We are seeking two highly skilled Cloud & Data Engineering Specialists to join our dynamic team. These roles focus on designing, building, and optimizing scalable cloud-based solutions, data pipelines, and analytics platforms. The ideal candidates have strong expertise in cloud platforms, data engineering, and modern technologies, with a focus on delivering robust, secure, and efficient data solutions.

Position 1: Cloud & Data Engineering Specialist (Resource 1)
Key Responsibilities: Design, develop, and maintain cloud-based solutions on Azure or AWS. Implement and manage real-time data streaming and messaging systems using Kafka. Develop scalable applications and services using Java and Python. Deploy, manage, and monitor containerized applications using Kubernetes. Build and optimize big data processing pipelines using Databricks. Manage and maintain databases, including SQL Server and Snowflake, and write complex SQL scripts. Work with Unix/Linux commands to manage and monitor system operations. Collaborate with cross-functional teams to ensure seamless integration of cloud-based solutions.
Key Skills: Expertise in Azure or AWS cloud platforms. Proficiency in Kafka, Java, Python, and Kubernetes. Hands-on experience with Databricks for big data processing. Strong database management skills with SQL Server, Snowflake, and advanced SQL scripting. Solid understanding of Unix/Linux commands.

Position 2: Cloud & Data Engineering Specialist (Resource 2)
Key Responsibilities: Design and implement cloud solutions across Azure, AWS, and GCP platforms. Develop and optimize data pipelines using PySpark, Python, and SQL. Build and manage ETL workflows using Azure Data Factory (ADF). Work with big data technologies such as Apache Spark and Databricks to process large datasets. Design and deliver dashboards and reports using Tableau and Power BI. Implement DevOps practices, including version control with Git, CI/CD pipelines, and containerization using Docker. Collaborate with stakeholders to gather requirements and deliver scalable data solutions.
Key Skills: Proficiency in Azure, AWS, and GCP cloud platforms. Strong programming skills in Python, SQL, and PySpark. Experience with Snowflake and SQL Server databases. Expertise in ETL tools like Azure Data Factory (ADF). Hands-on experience with Apache Spark and Databricks for big data processing. Proficiency in reporting tools such as Tableau and Power BI. Knowledge of DevOps practices, including Git, CI/CD pipelines, and Docker.

General Requirements for Both Roles: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). 3+ years of experience in cloud and data engineering roles. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Proven ability to work in a fast-paced, agile environment.

Send your resume to kalaivanan.balasubramaniam@covalensedigital.com
Thanks, Kalai | 8015302990
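
For illustration only: a minimal sketch of the kind of real-time messaging work Position 1 describes, assuming the kafka-python client and a locally reachable broker. The topic name and event payload are hypothetical, not details from the posting.

```python
# Hypothetical Kafka producer sketch (kafka-python); broker address,
# topic, and payload shape are illustrative assumptions.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"user_id": 42, "action": "page_view", "ts": "2024-01-01T00:00:00Z"}
producer.send("clickstream-events", value=event)  # asynchronous send
producer.flush()  # block until the event is actually delivered
```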

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

About the Role: We're hiring two Cloud & Data Engineering Specialists to join our fast-paced, agile team. These roles focus on designing, developing, and scaling modern, cloud-based data engineering solutions using tools like Azure, AWS, GCP, Databricks, Kafka, PySpark, SQL, Snowflake, and ADF.

Key Responsibilities: Develop and manage cloud-native solutions on Azure or AWS. Build real-time streaming apps with Kafka. Engineer services using Java and Python. Deploy and manage Kubernetes-based containerized applications. Process big data using Databricks. Administer SQL Server and Snowflake databases and write advanced SQL. Utilize Unix/Linux for system operations.

Must-Have Skills: Azure or AWS cloud experience; Kafka, Java, Python, Kubernetes; Databricks, SQL Server, Snowflake; Unix/Linux commands.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

About the Role: We are looking for a technical, hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations that improved code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities: Lead the migration of legacy SQL-based ETL logic to DBT-based transformations. Design and implement a scalable, modular DBT architecture (models, macros, packages). Audit and refactor legacy SQL for clarity, efficiency, and modularity. Improve CI/CD pipelines for DBT: automated testing, deployment, and code-quality enforcement. Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines. Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration). Define and enforce coding standards, review processes, and documentation practices. Coach junior data engineers on DBT and SQL best practices. Improve lineage and impact analysis using DBT's built-in tools and metadata.

Must-Have Qualifications: 8+ years of experience in data engineering. Proven success in migrating legacy SQL to DBT, with visible results. Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages. Proficiency in SQL performance tuning, modular SQL design, and query optimization. Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration. Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery). Familiarity with data testing and CI/CD for analytics workflows. Strong communication and leadership skills; comfortable working cross-functionally.

Nice-to-Have: Experience with DBT Cloud or DBT Core integrations with Airflow. Familiarity with data governance and lineage tools (e.g., dbt docs, Alation). Exposure to Python (for custom Airflow operators, macros, or utilities). Previous experience mentoring teams through modern data stack transitions.
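
As a rough illustration of the airflow-dbt style of integration the posting mentions, here is a minimal Airflow DAG that shells out to dbt via BashOperator. The DAG id, schedule, and project path are assumptions for the sketch, not details from the role.

```python
# Minimal Airflow-orchestrated dbt sketch; dag_id, schedule, and the
# dbt project directory are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/my_project",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/my_project",
    )
    dbt_run >> dbt_test  # run tests only after the models build
```

A common variant of this design replaces the shell calls with DBT Cloud API hooks, keeping Airflow responsible only for scheduling and dependency ordering.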

Posted 3 weeks ago

Apply

6.0 - 10.0 years

13 - 23 Lacs

Hyderabad, Bengaluru

Work from Office

Senior Snowflake Developer - Bangalore/Hyderabad, 2nd shift (2 - 11 PM). NOTE: LOOKING FOR SOMEONE WHO CAN START

Required Experience: A minimum of 10 years of hands-on experience in the IT industry. At least 5 years of experience in client invoicing and automation processes. Strong communication skills, both verbal and written. Proficient in using Jira for task tracking and project coordination. Demonstrated project management experience.

Technical Expertise: Minimum of 7 years of hands-on experience in the following areas: Snowflake cloud data platform; SQL development (including Snowflake SQL and SQL Server); data modeling and stored procedures; DBT (Data Build Tool) for data transformation; Apache Airflow for workflow orchestration; Google Cloud Platform (GCP) services; strong understanding of Business Intelligence (BI) tools, especially Power BI; HVR and Fivetran for data replication; Apache Kafka for real-time data streaming; Octopus Deploy and TeamCity for CI/CD and deployment automation.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

New Delhi, Bengaluru

Work from Office

Sales & Marketing Manager
Location: Remote | Designation: Sales & Marketing Manager | Experience: 3-5 years

About XO Tek: XO Tek is a cutting-edge IT software and services company based in Los Angeles, California, specializing in delivering exceptional services in Snowflake, data migration, AWS, RPA, and generative AI solutions. Our mission is to empower businesses by optimizing their data processes and leveraging the latest technology innovations. We're on the hunt for a dynamic offshore sales and marketing professional to join our team and drive our sales to new heights.

As an Offshore Sales and Marketing Professional at XO Tek, you will play a pivotal role in accelerating our growth by leveraging your expertise in IT software professional services sales. You will be responsible for generating leads, nurturing relationships, and closing deals to meet a sales target of $1MM USD. Your deep understanding of the technology landscape and existing network will be crucial in offering our specialized services to a broader audience.

Responsibilities: Develop and implement effective sales strategies to achieve a sales target of $1MM USD. Utilize CRM tools, such as LinkedIn Sales Navigator, to manage and grow sales pipelines efficiently. Create and maintain a sales run book for replicable success in lead generation and conversion. Leverage existing relationships and networks to offer XO Tek's services, identifying new business opportunities. Work closely with the marketing team to develop targeted campaigns that align with sales strategies. Provide detailed sales forecasting and track sales activities to ensure targets are met. Stay abreast of industry trends and competitive landscapes to position XO Tek as a leader in IT software and services.

Requirements: Proven track record of generating and meeting sales targets of at least $1MM USD, with evidence to support this achievement. Established relationships and networks within the IT and technology sectors, with a focus on software and professional services sales. Demonstrated experience in generating leads that convert into sales, with a strategic approach to business development. Proficiency in utilizing CRM tools, specifically LinkedIn Sales Navigator, for sales management and lead generation. Experience in creating and utilizing a sales run book to drive lead generation and sales efforts. Excellent communication and interpersonal skills, with the ability to engage effectively with clients and team members. Self-motivated with a results-driven approach, capable of working independently in a remote setting.

Why Join Us: At XO Tek, you'll have the opportunity to be part of a forward-thinking company at the forefront of technological innovation. We offer a competitive compensation package, a flexible remote work environment, and the chance to make a significant impact on our growth trajectory. If you're passionate about technology and have a knack for sales, we'd love to hear from you.

Posted 3 weeks ago

Apply

10.0 - 16.0 years

30 - 40 Lacs

Chennai

Remote

Greetings from Sutherland! We are hiring for a Senior Manager - Data Scientist. This is a five-day work-from-home/remote opportunity with UK shifts (3 pm - 12 am IST). Please see below the job description and qualifications required.

The Sr. Manager/Manager - Data Science at Sutherland plays a vital role in analysing data from various sources and providing insights that help drive performance in the right direction. The position employs mathematical and statistical methods to analyse various forms of data (such as data from analytics tools, monitoring data, surveys, and KPI data) and recommends solutions or changes to processes and procedures to help improve performance.

Job Description: Collect data from various sources such as analytics tools, CRM, surveys, and monitoring. Thoroughly clean and prune data to discard irrelevant information. Explore and examine data from a variety of angles to determine hidden weaknesses, trends, and/or opportunities. Conduct undirected research and frame open-ended industry questions. Employ sophisticated analytics tools and statistical methods to build efficient descriptive, diagnostic, and predictive models for trends, root-cause analysis of opportunities, prediction of KPIs, and employee behaviour such as attrition. Prescribe solutions, process changes, employee profiling, and training/coaching needs. Devise data-driven solutions for business challenges. Communicate predictions and findings to management through effective data visualizations, reports, and presentations. Measure the effectiveness of actions like training and coaching. Recommend cost-effective changes to existing procedures and strategies. Work collaboratively with stakeholders, managers, and supervisors in various departments to collect data and implement required changes. Keep management updated on transformational initiatives and their effectiveness.

Qualifications: Graduate/postgraduate in Science, Mathematics, Statistics, or Engineering. 10+ years of overall experience in the IT/ITES industry supporting clients from different verticals. Strong analytical ability with good knowledge of data mining, data visualization, and statistical tools and methods. Ability to build effective models for data analysis and analytics. Excellent project management skills with the ability to create and execute action plans while working with multiple stakeholders. Effective communicator with experience interacting with stakeholders internally and externally. Experience with digital tools and transformation technology.
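
Purely as an illustrative sketch of the attrition-prediction modelling the description alludes to: a scikit-learn logistic regression on tabular KPI data. The CSV path, feature columns, and label are invented for the example.

```python
# Hypothetical attrition model sketch; "agent_kpis.csv" and its
# columns are assumptions, not data from the posting.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("agent_kpis.csv")  # assumed dataset
X = df[["tenure_months", "csat", "aht_seconds", "absences"]]
y = df["attrited"]  # assumed 0/1 attrition label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```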

Posted 3 weeks ago

Apply

9.0 - 11.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Snowflake, SQL, stored procedures, Azure Databricks, PySpark, Unity Catalog, Purview, Data Build Tool (DBT), Lakehouse, Delta tables, optimization and troubleshooting skills, metadata-driven frameworks. Good to have: security knowledge, Power BI, Scala.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Primary skills: AWS Glue, Python, PySpark, Snowflake.
Snowflake - understanding of its architecture, data ingestion, and query optimization (existing data warehouse).
Python & SQL - strong programming skills for data transformation and querying.
Data Warehousing - experience in managing existing Snowflake data warehouses and optimizing performance.
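
A hedged sketch of one task this stack implies: querying an existing Snowflake warehouse from PySpark through the Snowflake Spark connector (the connector JARs are assumed to be on the Spark classpath). Account, credential, and table names are placeholders.

```python
# Illustrative PySpark read from Snowflake; all connection values and
# the orders table are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-read").getOrCreate()

sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account
    "sfUser": "etl_user",
    "sfPassword": "****",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

orders = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    # Push the filter down to Snowflake instead of loading the table.
    .option("query", "SELECT order_id, amount FROM orders WHERE amount > 0")
    .load()
)
orders.show(5)
```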

Posted 3 weeks ago

Apply

5.0 - 9.0 years

15 - 19 Lacs

Chennai

Work from Office

Senior Data Engineer - Azure
Years of experience: 5
Job location: Chennai

Job Description: We are looking for a skilled and experienced Senior Azure Developer to join the team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform; being in charge of or involved in architecting, building, and managing data flows/pipelines; and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering, documentation processes, and unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tool is an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
- Create and execute efficient, scalable, and dependable data pipelines using Azure Data Factory
- Utilize Azure Databricks for data transformation and processing
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages

Posted 3 weeks ago

Apply

11.0 - 13.0 years

24 - 30 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer
Location: Hyderabad (onsite), office near ITC Kohenur
Job Description: We are looking for a skilled Data Engineer to join our team in Hyderabad.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

13 - 19 Lacs

Chennai

Work from Office

Senior Data Engineer - DBT and Snowflake
Years of experience: 5
Job location: Chennai

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform; being in charge of or involved in architecting, building, and managing data flows/pipelines; and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases. Candidates should hold a minimum of 5 years of experience in DBT and Snowflake.

Role Responsibility: Translate functional specifications and change requests into technical specifications. Translate business requirement documents, functional specifications, and technical specifications into related coding. Develop efficient code with unit testing and code documentation.

Role Requirement: Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.). Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.). Knowledgeable in Shell/PowerShell scripting. Knowledgeable in relational databases, non-relational databases, data streams, and file stores. Knowledgeable in performance tuning and optimization. Experience in data profiling and data validation. Experience in requirements gathering, documentation processes, and unit testing. Understanding and implementing QA and various testing processes in the project.

Additional Requirement: Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake, ensuring effective transformation and loading of data from diverse sources into the data warehouse or data lake. Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs. Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance. Establish DBT best practices to improve performance, scalability, and reliability. Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures. Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP). Migrate legacy transformation code into modular DBT data models.
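
A small sketch of driving DBT from Python, assuming dbt-core 1.5+ (which exposes dbtRunner for programmatic invocations). The "staging" selector is a hypothetical model grouping, not one named in the posting.

```python
# Programmatic dbt invocation sketch; assumes dbt-core >= 1.5 and a
# dbt project in the working directory. "staging" is a hypothetical
# model selector.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Build the staging models first, then run their tests.
for args in (["run", "--select", "staging"], ["test", "--select", "staging"]):
    res: dbtRunnerResult = dbt.invoke(args)
    if not res.success:
        raise RuntimeError(f"dbt {args[0]} failed")
```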

Posted 3 weeks ago

Apply

4.0 - 9.0 years

10 - 18 Lacs

Bengaluru

Work from Office

SUMMARY
Job Role: Snowflake Data Warehouse
Location: Bangalore
Experience: 4+ years
Must-Have: The candidate should have a minimum of 3 years of relevant experience in Snowflake Data Warehouse.

Responsibilities: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work, contributing to the overall success of the projects. Perform independently and become an SME. Actively participate and contribute in team discussions. Provide solutions to work-related problems. Collaborate with team members to analyze, design, and develop software solutions. Write clean, maintainable, and efficient code following best practices. Participate in code reviews and provide constructive feedback to peers. Troubleshoot, debug, and resolve technical issues. Stay updated on emerging technologies and apply them to projects.

Professional & Technical Skills: Proficiency in Snowflake Data Warehouse. Strong understanding of ETL processes and data modeling. Experience with cloud-based data platforms like AWS or Azure. Hands-on experience with SQL and database management systems. Knowledge of data warehousing concepts and best practices.

Additional Information: This position is based at our Bengaluru office. 15 years of full-time education is required.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role & Responsibilities

Job Overview: We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines, while demonstrating a learning mindset to adapt to evolving technologies.

Key Responsibilities: Collaborate with business and IT stakeholders to define business and functional requirements for data solutions. Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake. Develop detailed technical designs, data flow diagrams, and future-state data architecture. Evangelize modern data modelling practices, including entity-relationship models, star schema, and Kimball methodology. Ensure data governance, quality, and validation by working closely with quality engineering teams. Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning. Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing. Implement DevOps and CI/CD best practices for automated data pipeline deployments. Support real-time streaming data processing with Spark, Kafka, or similar technologies. Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions. Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.
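
An illustrative Spark Structured Streaming job for the real-time streaming responsibility above: reading a Kafka topic and writing to Delta. The broker, topic, and paths are placeholders, and Delta Lake is assumed to be available (e.g., on Databricks).

```python
# Hypothetical streaming sketch: Kafka -> Spark Structured Streaming
# -> Delta table. Broker, topic, and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    # Kafka values arrive as bytes; cast to string for downstream parsing.
    .select(col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("delta")  # assumes Delta Lake is installed
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .start("/tmp/delta/orders")
)
query.awaitTermination()
```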

Posted 3 weeks ago

Apply

5.0 - 20.0 years

10 - 35 Lacs

Hyderabad, Pune, Delhi / NCR

Work from Office

Mandatory Skill - Snowflake, Matillion

Posted 3 weeks ago

Apply

5.0 - 10.0 years

9 - 15 Lacs

Chennai, Bengaluru

Work from Office

Role & responsibilities
Role: Senior Technical Lead
Skills: AI/ML/GenAI, Python (NumPy, Pandas, Scikit-learn, Keras, Flask, SciPy, TensorFlow, NLTK), GPT/PaLM/BERT, BART, Azure Cloud
Billing rate: USD 28
Number of positions: 1
Work location: Chennai (preferred), Bangalore, Hyderabad, Noida, Pune
Notice period: immediate joiners
Those who are interested may send their resume to aswathy.rajan@hcltech.com

Posted 3 weeks ago

Apply

7.0 - 12.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a candidate to oversee enterprise data governance, quality, and integrity initiatives. The candidate will collaborate with business and technology partners to ensure data is accurate, consistent, accessible, and aligned with business needs. The role combines deep technical skills, strategic thinking, cross-functional leadership, and hands-on problem solving.

In this role, you will: Organize and lead complex companywide initiatives to ensure that data quality is maintained so that data can effectively support business processes. Oversee analysis and reporting in support of regulatory requirements. Identify and recommend analysis of data quality or integrity issues. Evaluate data quality metrics and data quality audits to benchmark the state of data quality. Make decisions in complex and multi-faceted situations requiring solid understanding of data governance standards and procedures. Identify new data sources and develop recommendations for assessing the quality of new data. Lead project teams and mentor less experienced staff members. Recommend remediation of process or control gaps that align to management strategy. Serve as relationship manager for a line of business. Consult with and provide recommendations to senior management regarding assessments of the current state of data and metadata quality within the area of assigned responsibility. Represent the client in cross-functional groups to develop companywide data governance strategies. Strategically collaborate and consult with peers, colleagues, and mid-level to senior managers to coordinate and drive collaboration on solution design and remediation execution.

Required Qualifications: 7+ years of Data Management, Business Analysis, Analytics, or Project Management experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: Lead efforts through phases of a large enterprise project: requirements, design, implementation, testing, and end-state product management. Lead complex data product initiatives, including those that are cross-functional with broad impact, and act as a key participant in large-scale planning, driving data enablement and capabilities across platforms and utilities. Review and analyze complex, multi-faceted, larger-scale, or longer-term data product initiatives that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors, to drive data enablement strategies and roadmaps while adhering to set data governance and standards. Make decisions in complex and multi-faceted situations requiring solid understanding of the data, analytics, and integration needs of line-of-business partners to inform prioritization, roadmap, and architecture design, influencing and leading the broader work team to meet deliverables and drive new initiatives. Strategically collaborate and consult with peers, colleagues, and mid-level to senior managers to ensure data product solutions are built for optimal performance, design analytics applications across multiple platforms, resolve data product issues, and achieve goals; may lead projects or teams, or serve as a peer mentor. Provide strategic input on new use case intake, prioritization, product roadmap definition, and other critical business processes. Manage complex datasets with a continuous focus on the consumers of the data and their business needs, while adhering to set data governance. Create and maintain data product roadmaps throughout the data product life cycle, with detailed specifications, requirements, and flows for data capabilities. Design and maintain innovative data products, enabling data availability for data intelligence, analytics, and reporting. Serve as a strategic liaison between data management, product teams, data engineering, and architecture teams throughout the data product life cycle.

Skills: SQL (Teradata, Snowflake), Python, regression and clustering, Alteryx, and LLMs.

Job Expectations: Lead development and implementation of data management strategies across business units. Ensure consistent data definition and usage across systems. Mentor and guide data analysts, enforcing best practices in data handling. Collaborate with data engineers and architects to optimize data flow and storage. Oversee processes for data profiling, validation, cleansing, and enrichment. Identify, investigate, and resolve data quality issues through root cause analysis. Support data integration efforts and data architecture review.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Define, design, and build an optimal data pipeline architecture to collect data from a variety of sources and cleanse and organize data in SQL and NoSQL destinations (ELT and ETL processes). Define and build business use case-specific data models that can be consumed by data scientists and data analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, design, and build executive dashboards and report catalogs to serve decision-making and insight-generation needs. Provide inputs to help keep data separated and secure across data centers on-prem and in private and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load processes and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCAs.

Experience with a mix of the following data engineering technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL: Postgres, MySQL, MS SQL Server
- Azure: ADF, Synapse Analytics, SQL Server, ADLS Gen2
- AWS: Redshift, EMR cluster, S3

Experience with a mix of the following data analytics and visualization toolsets:
- SQL, Power BI, Tableau, Looker, Python, R
- Python libraries: Pandas, Scikit-learn, Seaborn, Matplotlib, TensorFlow, Statsmodels, PySpark, Spark SQL
- R, SAS, Julia, SPSS, Azure Synapse Analytics, Azure ML Studio, Azure AutoML
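
A minimal ELT sketch with the Snowflake Python connector, illustrating the stage-to-raw-to-clean pattern behind "ELT & ETL processes" above. The stage, table, and credential values are placeholders, not details from the posting.

```python
# Hedged ELT sketch: COPY a staged file into a raw table, then
# transform with CREATE TABLE AS SELECT. All object names and
# credentials are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount", user="etl_user", password="****",
    warehouse="COMPUTE_WH", database="RAW", schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Load staged CSV files into the raw landing table.
    cur.execute(
        "COPY INTO raw_orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)"
    )
    # Transform in-warehouse: cast and filter into a clean table.
    cur.execute("""
        CREATE OR REPLACE TABLE clean_orders AS
        SELECT order_id, TRY_TO_NUMBER(amount) AS amount
        FROM raw_orders
        WHERE order_id IS NOT NULL
    """)
finally:
    cur.close()
    conn.close()
```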

Posted 3 weeks ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent: Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

Your Role: Should have developed or worked on at least one Gen AI project. Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP. Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3. Good knowledge of cloud compute services and load balancing. Good knowledge of cloud identity management, authentication, and authorization. Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions. Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile: Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. Able to contribute to architectural choices using various cloud services and solution methodologies. Expertise in programming using Python. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud. Must understand networking, security, design principles, and best practices in cloud.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Proficiency in SQL. Data modeling in both SQL and Power BI (star and snowflake schemas). Experience integrating solutions from one BI tool into other BI tools. Solution design and architecture experience. Knowledge or experience in data engineering (extraction, transformation, load). Familiarity with data connectors (cloud and on-premises databases, gateways, SharePoint). Expertise in Power BI transformations using Power Query Editor and the M language. Strong visualization skills. Advanced DAX. Understanding of row-level security (RLS) and other security measures. Knowledge of Power BI Service architecture (workspaces, apps, scheduled refresh). Capability in data loading and incremental refresh techniques. Experience with web embedding; UI/UX knowledge. Experience with embedding Power BI reports in Power Apps. Experience in building paginated reports. Minimum 7+ years of relevant experience in Power BI.

Good to have: Power Platform (Power Apps, Power Automate) experience; Microsoft Fabric knowledge; MS BI (SSRS).

Posted 3 weeks ago

Apply

8.0 - 13.0 years

3 - 6 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Data Engineer with extensive experience in Snowflake, Data Build Tool (DBT), SnapLogic, SQL Server, PostgreSQL, Azure Data Factory, and other ETL tools. The ideal candidate will have a strong ability to optimize SQL queries and a good working knowledge of Python. A positive attitude and excellent teamwork skills are essential.

Key Responsibilities:
- Data pipeline development: Design, develop, and maintain scalable data pipelines using Snowflake, DBT, SnapLogic, and ETL tools.
- SQL optimization: Write and optimize complex SQL queries to ensure high performance and efficiency.
- Data integration: Integrate data from various sources, ensuring consistency, accuracy, and reliability.
- Database management: Manage and maintain SQL Server and PostgreSQL databases.
- ETL processes: Develop and manage ETL processes to support data warehousing and analytics.
- Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions.
- Documentation: Maintain comprehensive documentation of data models, data flows, and ETL processes.
- Troubleshooting: Identify and resolve data-related issues and discrepancies.
- Python scripting: Utilize Python for data manipulation, automation, and integration tasks.

Technical Skills: Proficiency in Snowflake, DBT, SnapLogic, SQL Server, PostgreSQL, and Azure Data Factory. Strong SQL skills with the ability to write and optimize complex queries. Knowledge of Python for data manipulation and automation. Knowledge of data governance frameworks and best practices.

Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Positive attitude and ability to work well in a team environment.

Certifications: Relevant certifications (e.g., Snowflake, Azure) are a plus.

Posted 3 weeks ago

Apply

1.0 - 3.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., using Python and open-source technologies.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud database technologies.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Test databases and perform bug fixes.
- Develop best practices for database design and development activities.
- Quickly analyze existing SQL code and make improvements to enhance performance, take advantage of new SQL features, close security gaps, and increase robustness and maintainability of the code.
- Take on technical leadership of database projects across various scrum teams.
- Manage exploratory data analysis to support database and dashboard development.

Required Skills:
- Expert knowledge of databases like PostgreSQL (preferably cloud-hosted in one or more cloud offerings like AWS, Azure, GCP) and any cloud-based data warehouse (like Snowflake, Azure Synapse), with strong programming experience in SQL.
- Competence in data preparation and/or ETL tools like SnapLogic, Matillion, Azure Data Factory, AWS Glue, and SSIS (preferably strong working experience in one or more) to build and maintain data pipelines and flows.
- Understanding of data modeling techniques and working knowledge of OLTP and OLAP systems.
- Deep knowledge of databases, stored procedures, and optimization of large data sets.
- In-depth knowledge of ingestion techniques, data cleaning, de-duplication, and partitioning.
- Experience building the infrastructure required for data ingestion and analytics.
- Ability to fine-tune report-generating queries.
- Solid understanding of normalization and denormalization of data, database exception handling, transactions, profiling queries, performance counters, debugging, and database and query optimization techniques.
- Understanding of index design and performance-tuning techniques.
- Familiarity with SQL security techniques such as data encryption at the column level, Transparent Data Encryption (TDE), signed stored procedures, and assignment of user permissions.
- Experience in understanding source data from various platforms and mapping it into entity-relationship (ER) models for data integration and reporting.
- Adherence to standards for all databases, e.g., data models, data architecture, and naming conventions.
- Exposure to source control like Git and Azure DevOps.
- Understanding of Agile methodologies (Scrum, Kanban).
- Preferably, experience with NoSQL databases and migrating data into other types of databases with real-time replication.
- Experience with automated testing and coverage tools.
- Experience with CI/CD automation tools (desirable).
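
A small sketch of the query-profiling skill listed above, assuming psycopg2 and a local PostgreSQL instance: run EXPLAIN ANALYZE to inspect a plan before deciding on an index. The orders table, connection details, and index are hypothetical.

```python
# Hypothetical Postgres profiling sketch with psycopg2; the table,
# credentials, and index are assumptions for illustration.
import psycopg2

conn = psycopg2.connect("dbname=app user=dba password=**** host=localhost")
with conn, conn.cursor() as cur:
    # Print the actual execution plan for a filtered lookup.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,)
    )
    for (line,) in cur.fetchall():
        print(line)

    # If the plan shows a sequential scan, an index like this is often
    # the fix (left commented out in this sketch):
    # cur.execute(
    #     "CREATE INDEX IF NOT EXISTS idx_orders_customer "
    #     "ON orders (customer_id)"
    # )
```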

Posted 3 weeks ago

Apply