
2379 Snowflake Jobs - Page 18

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Marketing Reporting Specialist at Monotype, you will play a pivotal role in visualizing the effectiveness of marketing initiatives through comprehensive reporting. Leveraging your experience with complex marketing data, you'll uncover valuable insights for key stakeholders, driving informed decisions. Your exceptional analytical skills and strategic mindset will empower you to identify and illustrate trends that uncover consumer behavior patterns. In this position, you will be integral to Monotype's marketing operations, collaborating closely with senior leaders to ensure data-driven decision-making.

Responsibilities: Leverage marketing data sources to ensure an accurate, comprehensive view of all metrics, behavior data, and KPIs. Develop, build, and maintain marketing reports and dashboards using visualization tools and platforms to clearly present key metrics and trends. Garner actionable insights from complex datasets by identifying anomalies, patterns, and correlations that present opportunities for optimization and growth. Partner with multiple teams to interpret and translate their needs into compelling reports and presentations that communicate complex data insights in a clear and impactful way. Champion, create, and lead initiatives and methodologies to inform, optimize, and expand marketing reporting and analysis. Continuously learn and explore new marketing technologies and data analysis tools to stay ahead of the curve.

What we're looking for: 5+ years of experience with BI, analytics tools, SQL, and insight delivery. Proficiency in SQL and data warehousing (AWS & Snowflake). Experience with marketing analytics platforms (e.g., Adobe Analytics, CJA, Marketo, Salesforce). Expertise in BI tools like Power BI and Tableau. Experience with data migration and connections. Strong analytical thinking, problem-solving, and attention to detail. Excellent communication and presentation skills to engage diverse audiences. Process-driven mindset with a keen eye for data accuracy and consistency. Additional knowledge of Python and familiarity with general marketing tech stacks is a plus.
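For illustration, a minimal sketch of the kind of Snowflake SQL a role like this might run to feed a campaign KPI dashboard. All account details, table, and column names are hypothetical; in practice the result would feed a Power BI or Tableau extract rather than print.

```python
# Hypothetical example: weekly campaign KPIs pulled from Snowflake.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],   # assumption: credentials via environment
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="REPORTING_WH",
    database="MARKETING",
    schema="ANALYTICS",
)

KPI_QUERY = """
SELECT
    campaign_id,
    DATE_TRUNC('week', event_date)            AS week,
    COUNT_IF(event_type = 'impression')       AS impressions,
    COUNT_IF(event_type = 'click')            AS clicks,
    DIV0(COUNT_IF(event_type = 'click'),
         COUNT_IF(event_type = 'impression')) AS ctr
FROM campaign_events
GROUP BY campaign_id, week
ORDER BY week DESC
"""

with conn.cursor() as cur:
    for row in cur.execute(KPI_QUERY):
        print(row)
conn.close()
```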

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Informatica IDMC Developer at Coforge, your primary responsibility will be to design, develop, and maintain robust ETL pipelines using Informatica Intelligent Data Management Cloud (IDMC/IICS). You will collaborate with data architects, analysts, and business stakeholders to gather and comprehend data requirements. Your role will involve integrating data from various sources including databases, APIs, and flat files, and optimizing data workflows for enhanced performance, scalability, and reliability. Monitoring and troubleshooting ETL jobs to address data quality issues will be a part of your daily tasks. Implementing data governance and security best practices will also be crucial, along with maintaining detailed documentation of data flows, transformations, and architecture. Your contribution to code reviews and continuous improvement initiatives will be valued.

The ideal candidate for this position should possess strong hands-on experience with Informatica IDMC (IICS) and cloud-based ETL tools. Proficiency in SQL and prior experience working with relational databases like Oracle, SQL Server, and PostgreSQL is essential. Additionally, familiarity with cloud platforms such as AWS, Azure, or GCP, and knowledge of data warehousing concepts and tools like Snowflake, Redshift, or BigQuery are required. Excellent problem-solving skills and effective communication abilities are highly desirable qualities for this role.

Preferred qualifications for this position include experience with CI/CD pipelines and version control systems, as well as knowledge of data modeling and metadata management. Holding certifications in Informatica or cloud platforms will be considered a plus.

If you have 5-8 years of relevant experience and possess the mentioned skill set, we encourage you to apply for this position by sending your CV to Gaurav.2.Kumar@coforge.com. This position is based in Greater Noida.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a talented individual in the field of data engineering, your role will involve designing, developing, and maintaining scalable data pipelines utilizing Snowflake. Your expertise will be crucial in optimizing SQL queries and data models to enhance performance and efficiency. Implementing data security and governance best practices within Snowflake will be a key responsibility to ensure data integrity and compliance. Collaboration with data scientists and analysts will be a significant aspect of your job to understand and fulfill their data requirements effectively. Your problem-solving skills will be put to the test as you troubleshoot and resolve data-related issues promptly, ensuring smooth data operations within the organization.

If you are passionate about leveraging your data engineering skills and working in a dynamic environment, this opportunity offers a platform to grow and contribute meaningfully to the organization's data infrastructure. Join us to be a part of a team that values innovation, collaboration, and continuous learning. #CareerOpportunities #JobVacancy #WorkWithUs
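As a hedged sketch of what "performance and governance in Snowflake" can look like in code: a clustering key for query pruning plus a dynamic data masking policy, executed from Python. All object, role, and column names are invented for illustration.

```python
# Illustrative Snowflake performance + governance statements run from Python.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
)

STATEMENTS = [
    # Performance: cluster a large fact table on common filter columns
    "ALTER TABLE sales.facts.orders CLUSTER BY (region, order_date)",
    # Governance: mask PII for every role except a privileged one
    """CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
         CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END""",
    "ALTER TABLE sales.facts.customers MODIFY COLUMN email SET MASKING POLICY email_mask",
]

with conn.cursor() as cur:
    for stmt in STATEMENTS:
        cur.execute(stmt)
conn.close()
```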

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

At Medtronic, you can embark on a rewarding career dedicated to exploration and innovation, all while contributing to the advancement of healthcare access and equity for all. As a Digital Engineer at our new Minimed India Hub, you will play a crucial role in leveraging technology to enhance healthcare solutions on a global scale. Specifically, as a PySpark Data Engineer, you will be tasked with designing, developing, and maintaining data pipelines using PySpark. Your collaboration with data scientists, analysts, and stakeholders will be essential in ensuring the efficient processing and analysis of large datasets, as well as handling complex transformations and aggregations.

This role offers an exciting opportunity to work within Medtronic's Diabetes business. As the Diabetes division prepares for separation to foster future growth and innovation, you will have the chance to operate with increased speed and agility. By working as a separate entity, there will be a focus on driving meaningful innovation and enhancing the impact on patient care.

Your responsibilities will include designing, developing, and maintaining scalable and efficient ETL pipelines using PySpark, working with structured and unstructured data from various sources, optimizing PySpark applications for performance and scalability, collaborating with data scientists and analysts to understand data requirements, implementing data quality checks, monitoring and troubleshooting data pipeline issues, documenting technical specifications, and staying updated on the latest trends and technologies in big data and distributed computing.

To excel in this role, you should possess a Bachelor's degree in computer science, engineering, or a related field, along with 4-5 years of experience in data engineering focusing on PySpark. Proficiency in Python and Spark, strong coding and debugging skills, knowledge of SQL and relational databases, hands-on experience with cloud platforms, familiarity with data warehousing solutions, experience with big data technologies, problem-solving abilities, and effective communication and collaboration skills are essential. Preferred skills include experience with Databricks, orchestration tools like Apache Airflow, knowledge of machine learning workflows, understanding of data security and governance best practices, familiarity with streaming data platforms, and knowledge of CI/CD pipelines and version control systems.

Medtronic offers a competitive salary and flexible benefits package, along with a commitment to recognizing and supporting employees at every stage of their career and life. As part of the Medtronic team, you will contribute to the mission of alleviating pain, restoring health, and extending life by tackling the most challenging health problems facing humanity. Join us in engineering solutions that make a real difference in people's lives.
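A minimal PySpark sketch of the kind of pipeline described: read raw data, apply a quality filter, aggregate, and write curated output. Paths, schema, and column names are hypothetical.

```python
# Minimal PySpark ETL sketch: raw telemetry -> daily per-device aggregates.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("device-telemetry-etl").getOrCreate()

raw = spark.read.json("s3://raw-bucket/telemetry/")  # structured or semi-structured input

curated = (
    raw.filter(F.col("reading").isNotNull())          # basic data quality check
       .withColumn("day", F.to_date("event_ts"))
       .groupBy("device_id", "day")
       .agg(
           F.avg("reading").alias("avg_reading"),
           F.max("reading").alias("max_reading"),
           F.count("*").alias("n_events"),
       )
)

curated.write.mode("overwrite").partitionBy("day").parquet(
    "s3://curated-bucket/telemetry_daily/"
)
spark.stop()
```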

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Maharashtra

On-site

As the Technical Lead for Data Engineering at Assent, you will collaborate with other team members to identify opportunities and assess the feasibility of solutions. Your role will involve providing technical guidance, influencing decision-making, and aligning data engineering initiatives with business goals. You will drive the technical strategy, team execution, and process improvements to build resilient and scalable data systems. Mentoring a growing team and building robust, scalable data infrastructure will be essential aspects of your responsibilities.

Your key requirements and responsibilities will include driving the technical execution of data engineering projects, working closely with Architecture members to design and implement scalable data pipelines, and providing technical guidance to ensure best practices in data engineering. You will collaborate cross-functionally with various teams to define and execute data initiatives, plan and prioritize work with the team manager, and stay updated with emerging technologies to drive their adoption.

To be successful in this role, you should have 10+ years of experience in data engineering or related fields, expertise in cloud data platforms such as AWS, proficiency in modern data technologies like Spark, Airflow, and Snowflake, and a deep understanding of distributed systems and data pipeline design. Strong programming skills in languages like Python, SQL, or Scala, experience with infrastructure as code and DevOps best practices, and the ability to influence technical direction and advocate for best practices are also necessary. Strong communication and leadership skills, a learning mindset, and experience in security, compliance, and governance related to data systems will be advantageous.

At Assent, we value your talent, energy, and passion, and offer various benefits to support your well-being, financial health, and personal growth. Our commitment to diversity, equity, and inclusion ensures that all team members are included, valued, and provided with equal opportunities for success. If you require any assistance or accommodation during the interview process, please feel free to contact us at talent@assent.com.
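Since the role names Airflow among its core technologies, here is a minimal sketch of an Airflow DAG wiring an extract-transform-load sequence. DAG id, schedule, and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch: extract -> transform -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")       # placeholder task body

def transform():
    print("apply business logic")   # placeholder task body

def load():
    print("write to warehouse")     # placeholder task body

with DAG(
    dag_id="daily_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```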

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a data engineer at Infiligence, you will play a crucial role in designing, developing, and maintaining scalable data pipelines for batch and real-time data processing. You will have the opportunity to work with cutting-edge technologies on cloud platforms such as Azure or AWS to ensure efficient data flow across systems. Collaboration with cross-functional teams to translate business requirements into robust data models and solutions will be a key aspect of your role.

Your responsibilities will also include implementing data quality, governance, and security standards throughout the data lifecycle. You will be expected to develop and maintain documentation, and conduct code reviews, unit testing, and peer reviews to ensure code quality and compliance. Troubleshooting, monitoring, and resolving data pipeline and infrastructure issues will be essential to minimize business impact.

To succeed in this role, you should have a minimum of 5-7 years of experience in data engineering, with a focus on building and managing large-scale data pipelines. Hands-on experience with Azure Data Services is preferred, but experience with AWS data services is also acceptable. Proficiency in SQL, Python, or Scala for data processing and transformation is required, along with familiarity with data warehousing and real-time databases. A strong understanding of data architecture, ingestion, curation, and consumption patterns is essential, along with knowledge of data quality management, metadata management, data lineage, and data security best practices. Excellent communication skills and the ability to work collaboratively with global teams are highly valued in this role.

Preferred skills include experience with CI/CD processes, source control for data engineering workflows, data observability, self-testing pipelines, and exposure to business intelligence and reporting platforms.

In return, Infiligence offers comprehensive insurance coverage, a competitive salary, opportunities for professional growth, and an inclusive and collaborative work culture. If you are interested in joining our team, please submit your updated CV and a cover letter via our job URL. Shortlisted candidates will undergo HR and technical assessments and interviews. For any queries regarding the position or application process, please contact our Talent team at Infiligence's US or Chennai offices through careers@infiligence.com.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Goldman Sachs, our Engineers don't just make things - we make things possible. We change the world by connecting people and capital with ideas, solving the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets.

Engineering, which comprises our Technology Division and global strategists groups, is at the critical center of our business. Our dynamic environment requires innovative strategic thinking and immediate, real solutions. If you want to push the limit of digital possibilities, start here. Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile, and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Data plays a critical role in every facet of the Goldman Sachs business. The Data Engineering group is at the core of that offering, focusing on providing the platform, processes, and governance for enabling the availability of clean, organized, and impactful data to scale, streamline, and empower our core businesses.

As a Site Reliability Engineer (SRE) on the Data Engineering team, you will be responsible for observability, cost, and capacity, with operational accountability for some of Goldman Sachs's largest data platforms. We engage in the full lifecycle of platforms, from design to demise, with an SRE strategy adapted to each stage of that lifecycle. We are looking for individuals with a background as a developer who can express themselves in code, with a focus on Reliability, Observability, Capacity Management, DevOps, and SDLC (Software Development Lifecycle). As a self-leader comfortable with problem statements, you should structure them into data-driven deliverables. You will drive strategy with skin in the game, participate in the team's activities, drive Postmortems, and have an attitude that the problem stops with you.
**How You Will Fulfil Your Potential**
- Drive adoption of cloud technology for data processing and warehousing
- Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
- Engage with data consumers and producers to match reliability and cost requirements
- Drive strategy with data

**Relevant Technologies**: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab

**Basic Qualifications**
- A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline)
- 1-4+ years of relevant work experience in a team-focused environment
- 1-2 years of hands-on developer experience at some point in career
- Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk
- Experience with cloud infrastructure (AWS, Azure, or GCP)
- Proven experience in driving strategy with data
- Deep understanding of the multi-dimensionality of data, data curation, and data quality
- In-depth knowledge of relational and columnar SQL databases, including database design
- Expertise in data warehousing concepts
- Excellent communication skills
- Independent thinker, willing to engage, challenge, or learn
- Ability to stay commercially focused and to always push for quantifiable commercial impact
- Strong work ethic, a sense of ownership and urgency
- Strong analytical and problem-solving skills
- Ability to build trusted partnerships with key contacts and users across business and engineering teams

**Preferred Qualifications**
- Understanding of Data Lake / Lakehouse technologies incl. Apache Iceberg
- Experience with cloud databases (e.g., Snowflake, BigQuery)
- Understanding of data modeling concepts
- Working knowledge of open-source tools such as AWS Lambda, Prometheus
- Experience coding in Java or Python
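Given the listing's emphasis on observability and OpenTelemetry, a brief sketch of emitting pipeline health metrics with the OpenTelemetry Python API. Metric names and attributes are illustrative; exporter wiring is environment-specific and omitted (without a configured MeterProvider the calls are no-ops).

```python
# Sketch: pipeline observability signals via OpenTelemetry metrics.
from opentelemetry import metrics

meter = metrics.get_meter("data.platform.sre")

rows_processed = meter.create_counter(
    "pipeline.rows.processed", description="Rows successfully processed"
)
batch_latency = meter.create_histogram(
    "pipeline.batch.latency_ms", description="End-to-end batch latency"
)

def record_batch(table: str, rows: int, latency_ms: float) -> None:
    attrs = {"table": table}
    rows_processed.add(rows, attrs)
    batch_latency.record(latency_ms, attrs)

record_batch("trades", rows=125_000, latency_ms=842.0)
```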

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

The Senior Data Analyst (Supply Chain) will have a crucial role in developing the technology platform to support all supply chain integrations and solutions. You will be responsible for driving insights in collaboration with the analytics team to define system performance and identify opportunities. Additionally, you will collect business specifications and requirements from partner integration solutions and portal product management.

You should have a Bachelor's degree in Supply Chain Management, Business Management, Engineering, or a related field, or equivalent work experience. You must possess intermediate to advanced skills in using data analysis tools such as MySQL and Snowflake, along with familiarity with visualization tools like PowerBI, Tableau, or Sigma. Experience with integration technologies like API, EDI, and other communication forms is essential. While an understanding of coding languages is preferred, it is not mandatory.

With 3-5+ years of relevant experience, you should demonstrate the ability to define problems, collect data, and draw valid conclusions. Effective communication skills are necessary to drive projects and insights for the betterment of the business. You will coordinate with core systems project management for strategic alignment and implementation, document workflows, and ensure governance over system solutions. The ideal candidate is adaptable, resourceful, and possesses creative problem-solving skills. You should work with a sense of urgency, be able to work independently with minimal supervision, and thrive in a fast-paced, evolving environment. A passion for achieving industry-leading performance and breaking established norms is crucial. Organizational skills and high attention to detail are required to manage projects effectively and expeditiously.

At HNR Tech, you will have access to an inspiring work environment, a performance-driven work culture, opportunities to learn new technologies, and guidance for growth within the company and sector. You will be exposed to complex and challenging projects within an international context and work alongside driven and passionate colleagues who strive for top quality. This position is based in Matunga, Mumbai, Maharashtra, India, with a hybrid work model. The benefits include a flexible working style, diversity and inclusion, opportunities for learning and growth, a balanced working life, flexible work hours, health insurance, and fixed off on Saturdays and Sundays.

HNR Tech is committed to creating a workplace and global community where inclusion is prioritized. As an equal opportunity employer, we seek to foster a welcoming and diverse environment. All qualified applicants will receive consideration regardless of non-merit-based or legally protected grounds.

Posted 1 week ago

Apply

3.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are invited to apply for the position of Lead / Senior ETL & Data Migration QA Engineer at our company based in Hyderabad, India (mandatory 5 days working from office). With 4 to 12 years of experience, you will be a key member of our Quality Assurance team, focusing on a high-impact data migration project. Your responsibilities will include ETL testing, data validation, and cloud migration, utilizing your expertise in SQL, ETL tools (preferably Talend), and cloud platforms like Snowflake. This role necessitates leadership in overseeing QA efforts across global teams to ensure the accuracy of large-scale data transformations.

Your main duties will involve designing and implementing robust test strategies and plans for data migration and ETL processes, as well as developing and executing detailed test cases, scripts, and plans to validate data accuracy, completeness, and consistency. You will conduct advanced SQL-based data validation and transformation testing, utilize ETL tools such as Talend, Informatica PowerCenter, or DataStage to validate data pipelines, and test semi-structured data formats like JSON and XML. Additionally, you will lead QA activities for cloud data migration projects, particularly to Snowflake, and coordinate testing activities across on-shore and off-shore teams to ensure timely and quality delivery. Documenting test results and defects, collaborating with development teams for resolution, and contributing to automated testing frameworks for ETL processes will also be part of your responsibilities. You will be expected to promote QA best practices and drive continuous improvement initiatives.

To be eligible for this position, you should have at least 3 years of experience in QA with a focus on ETL testing, data validation, and data migration. Proficiency in SQL for complex queries and data validation is essential, along with hands-on experience in Talend (preferred), Informatica PowerCenter, or DataStage. Experience with cloud data platforms, especially Snowflake, and a strong understanding of semi-structured data formats (JSON, XML) are required. Your excellent analytical and problem-solving skills, along with experience working in distributed teams and leading QA efforts, will be highly valuable in this role.

Additionally, preferred skills include experience with automated testing tools for ETL processes, knowledge of data governance and data quality standards, familiarity with AWS or other cloud ecosystems, and an ISTQB or equivalent certification in software testing. If you are passionate about quality assurance, data migration, and ETL processes, and possess the required qualifications and skills, we encourage you to apply for this challenging and rewarding opportunity.
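A sketch of one common migration QA check: reconciling row counts between a legacy source and the Snowflake target over DB-API connections. The table list and connection setup are assumptions; real suites add column checksums, which must be adapted per database since hash functions differ.

```python
# Migration QA sketch: source-vs-target row count reconciliation.
TABLES = ["customers", "orders", "payments"]   # hypothetical table list

def scalar(cursor, sql: str):
    cursor.execute(sql)
    return cursor.fetchone()[0]

def reconcile(source_cur, target_cur) -> list[str]:
    mismatches = []
    for table in TABLES:
        src = scalar(source_cur, f"SELECT COUNT(*) FROM {table}")
        tgt = scalar(target_cur, f"SELECT COUNT(*) FROM {table}")
        if src != tgt:
            mismatches.append(f"{table}: source={src} target={tgt}")
    return mismatches   # log as defects / attach to the test run

# usage (connections are placeholders):
# issues = reconcile(oracle_conn.cursor(), snowflake_conn.cursor())
```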

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

As a Data Governance and Management Developer at Assent, you will play a crucial role in ensuring the quality and reliability of critical data across systems and domains. Your responsibilities will include defining and implementing data quality standards, developing monitoring pipelines to detect data issues, conducting data profiling assessments, and designing data quality dashboards. You will collaborate with cross-functional teams to resolve data anomalies and drive continuous improvement in data quality.

Key Requirements & Responsibilities:
- Define and implement data quality rules, validation checks, and metrics for critical business domains.
- Develop Data Quality (DQ) monitoring pipelines and alerts to proactively detect data issues.
- Conduct regular data profiling and quality assessments to identify gaps, inconsistencies, duplicates, and anomalies.
- Design and maintain data quality dashboards and reports for visibility into trends and issues.
- Utilize generative AI to automate workflows, enhance data quality, and support responsible prompt usage.
- Collaborate with data owners, stewards, and technical teams to resolve data quality issues.
- Develop and document standard operating procedures (SOPs) for issue management and escalation workflows.
- Support root cause analysis (RCA) for recurring or high-impact data quality problems.
- Define and monitor key data quality KPIs and drive continuous improvement through insights and analysis.
- Evaluate and recommend data quality tools that scale with the enterprise.
- Provide recommendations for enhancing data processes, governance practices, and quality standards.
- Ensure compliance with internal data governance policies, privacy standards, and audit requirements.
- Adhere to corporate security policies and procedures set by Assent.

Qualifications:
- 2-5 years of experience in a data quality, data analyst, or similar role.
- Degree in Computer Science, Information Systems, Data Science, or a related field.
- Strong understanding of data quality principles.
- Proficiency in SQL, GitHub, R, Python, SQL Server, and BI tools like Tableau, Power BI, or Sigma.
- Experience with cloud data platforms (e.g., Snowflake, BigQuery) and data transformation tools (e.g., dbt).
- Exposure to graph databases and GenAI tools.
- Ability to interpret dashboards and communicate data quality findings effectively.
- Understanding of data governance frameworks and regulatory considerations.
- Strong problem-solving skills, attention to detail, and familiarity with agile work environments.
- Excellent verbal and written communication skills.

Join Assent and be part of a dynamic team that values wellness, financial benefits, lifelong learning, and diversity, equity, and inclusion. Make a difference in supply chain sustainability and contribute to meaningful work that impacts the world. Contact talent@assent.com for assistance or accommodation during the interview process.
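To make the "data quality rules and monitoring" responsibility concrete, a small rule-driven check sketch. The rules, tables, and thresholds are hypothetical; the SQL uses Snowflake's IFF and would need adapting for other warehouses. Failures would feed an alerting channel or DQ dashboard.

```python
# Sketch: rule-based data quality checks over a warehouse cursor.
RULES = [
    # (rule name, SQL returning one numeric value, allowed maximum)
    ("email_null_rate", "SELECT AVG(IFF(email IS NULL, 1, 0)) FROM customers", 0.02),
    ("duplicate_keys",  "SELECT COUNT(*) - COUNT(DISTINCT customer_id) FROM customers", 0),
]

def run_checks(cursor) -> list[str]:
    failures = []
    for name, sql, threshold in RULES:
        cursor.execute(sql)
        value = cursor.fetchone()[0] or 0
        if value > threshold:
            failures.append(f"{name}: observed {value}, threshold {threshold}")
    return failures   # route to alerts / a data quality dashboard
```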

Posted 1 week ago

Apply

5.0 - 9.0 years

35 - 37 Lacs

Bengaluru

Remote

Role & responsibilities: Snowflake / SQL Architect
• Architect and manage scalable data solutions using Snowflake and advanced SQL, optimizing performance for analytics and reporting.
• Design and implement data pipelines, data warehouses, and data lakes, ensuring efficient data ingestion and transformation.
• Develop best practices for data security, access control, and compliance within cloud-based data environments.
• Collaborate with cross-functional teams to understand business needs and translate them into robust data architectures.
• Evaluate and integrate third-party tools and technologies to enhance the Snowflake ecosystem and overall data strategy.
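One way the access-control responsibility above is often implemented is by codifying Snowflake grants as repeatable, idempotent statements. A hedged sketch; role and object names are invented.

```python
# Sketch: Snowflake RBAC codified as repeatable grant statements.
GRANTS = [
    "CREATE ROLE IF NOT EXISTS ANALYTICS_READER",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYTICS_READER",
    "GRANT USAGE ON SCHEMA ANALYTICS.MARTS TO ROLE ANALYTICS_READER",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.MARTS TO ROLE ANALYTICS_READER",
    # Future grants keep new tables covered without re-running setup
    "GRANT SELECT ON FUTURE TABLES IN SCHEMA ANALYTICS.MARTS TO ROLE ANALYTICS_READER",
]

def apply_grants(cursor) -> None:
    for stmt in GRANTS:
        cursor.execute(stmt)   # idempotent; safe to run from CI/CD
```

Keeping grants in version control and applying them from CI/CD gives an auditable trail, which lines up with the compliance point above.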

Posted 1 week ago

Apply

9.0 - 14.0 years

30 - 40 Lacs

Pune, Chennai

Work from Office

Designing, implementing, and optimizing data solutions using both Azure and Snowflake. Requires experience working with the Matillion tool, and with Azure and Snowflake generally, including data modeling, ETL processes, and data warehousing, plus proficiency in SQL and data integration tools.

Posted 1 week ago

Apply

3.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Position summary: We are seeking a Senior Software Development Engineer - Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Responsibilities:
- Work with cloud-based data solutions (Azure, AWS, GCP).
- Implement data modeling and warehousing solutions.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Design and optimize data storage solutions, including data warehouses and data lakes.
- Ensure data quality and integrity through data validation, cleansing, and error handling.
- Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence).
- Implement data security measures and access controls to protect sensitive information.
- Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing.
- Develop and maintain Power BI dashboards and reports.
- Work with DAX and Power Query to manipulate and transform data.

Basic Qualifications:
- Bachelor's or master's degree in computer science or data science.
- 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Proficient in SQL, Python, or Scala for data manipulation and processing.
- Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, Microsoft Fabric.
- Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions.

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Contributions to open-source data engineering projects.
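Since the role pairs Spark/Databricks with Snowflake, a hedged sketch of moving data between the two via the Snowflake Spark connector. Connection options are placeholders, and the connector JARs must be available on the cluster (Databricks also accepts the short format name "snowflake").

```python
# Sketch: Spark <-> Snowflake round trip via the Snowflake Spark connector.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sf-spark").getOrCreate()

sf_options = {
    "sfURL": "your_account.snowflakecomputing.com",   # placeholder account
    "sfUser": "your_user",
    "sfPassword": "your_password",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

orders = (
    spark.read.format("net.snowflake.spark.snowflake")
         .options(**sf_options)
         .option("dbtable", "ORDERS")
         .load()
)

daily = orders.groupBy("ORDER_DATE").count()

(daily.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "ORDERS_DAILY")
      .mode("overwrite")
      .save())
```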

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Hi All, please find the JD below.

Skill: DBT Developer (Data Engineer)
Location: Bangalore
Experience: 8 to 15 years
Position: Contract to hire
Education: Engineering or equivalent (BTech/MTech/MCA)

Job Description - Mandatory Skills: Looking for a data tester with DBT (Data Build Tool) experience for a core conversion project.
- 8+ years of experience in data engineering, analytics engineering, or similar roles.
- Proven expertise in dbt (Data Build Tool) and modern data transformation practices.
- Advanced proficiency in SQL and deep understanding of dimensional modeling, medallion architecture, and ELT principles.
- Strong hands-on experience with Snowflake, including query optimization.
- Proficient with Azure cloud services, including Azure Data Factory and Blob Storage.
- Strong communication and collaboration skills.
- Familiarity with data governance, metadata management, and data quality frameworks.
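For context on dbt itself: models are most often SQL files, but dbt (1.3+) also supports Python models, which on Snowflake execute via Snowpark. A minimal hypothetical model; the ref'd staging models and columns are invented for illustration.

```python
# models/orders_enriched.py -- a minimal dbt Python model sketch.
# On Snowflake, `session` is a Snowpark session and dbt.ref() returns
# a Snowpark DataFrame.
def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")        # hypothetical staging model
    customers = dbt.ref("stg_customers")  # hypothetical staging model

    return (
        orders.join(customers, orders["customer_id"] == customers["customer_id"])
              .select(orders["order_id"], orders["amount"], customers["region"])
    )
```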

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 14 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Company Name: LTIMindtree
Experience: 5+ years
Location: Pan India (hybrid model)
Interview Mode: Virtual
Interview Rounds: 2 rounds
Notice Period: Immediate to 30 days

Job description - Roles & Responsibilities:
- Expand and update the production runbook as new functionality is added and processes are fine-tuned.
- Establish and maintain runbooks for the data processes.
- Establish and maintain data quality and data technical controls.
- Identify data process performance improvements.
- Interpret data and analyze results.
- Perform incident management activities.
- Handle stakeholder communication and SLA management.
- Monitor data integrity and data processing, and coordinate corrective actions when necessary.
- Ensure data integrity by verifying and cleaning data.
- Perform root cause analysis on production failures in data processes.

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 27 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Location: Kolkata, Hyderabad, Bangalore
Experience: 4 to 17 years
Band: 4B, 4C, 4D
Skill set: Snowflake, AWS/Azure, Python, ETL

Job Description:
- Experience in the IT industry.
- Working experience with building productionized data ingestion and processing data pipelines in Snowflake.
- Strong understanding of Snowflake architecture.
- Fully well-versed with data warehousing concepts.
- Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools.
- Able to create data pipelines for ETL/ELT.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualisation tools and methodologies.
- Work independently on business problems and generate meaningful insights.
- Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
- Should have experience implementing Snowflake best practices.
- Snowflake SnowPro Core Certification will be an added advantage.

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to the customer, working with the offshore team, etc.
- Writing SQL queries against Snowflake and developing scripts to Extract, Load, and Transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, Bulk copy, Snowpipe, Tasks, Streams, Time travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit.
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python or PySpark.
- Should have some experience with Snowflake RBAC and data security.
- Should have good experience implementing CDC or SCD type-2 (see the sketch after this listing).
- In-depth understanding of data warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, designing, development, and deployment.
- Should have experience building data ingestion pipelines.
- Optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.

Qualifications we seek in you! Minimum qualifications: B.E./Masters in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Snowflake Data Engineer.

Skill Matrix: Snowflake, Python/PySpark/DBT, AWS/Azure, ETL concepts, Data Warehousing concepts, Data Modeling, Design patterns
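A sketch of one common SCD type-2 pattern in Snowflake, run from Python: close out changed current rows, then insert new versions. This is one approach among several; table and column names are illustrative.

```python
# SCD type-2 sketch: expire changed rows, then insert fresh versions.
SCD2_CLOSE = """
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN UPDATE SET
    d.is_current = FALSE,
    d.valid_to   = CURRENT_TIMESTAMP()
"""

SCD2_INSERT = """
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

def run_scd2(cursor) -> None:
    cursor.execute(SCD2_CLOSE)
    cursor.execute(SCD2_INSERT)
```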

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Noida, India

Work from Office

Key Responsibilities:
1. Architect and design end-to-end data pipelines, from source systems to the data warehouse.
2. Lead the development of scalable Python/Spark-based data processing workflows.
3. Define and implement data modeling standards for the DWH, including fact/dimension schemas and historical handling.
4. Oversee performance tuning of Python, Spark, and ETL loads.
5. Ensure robust data integration with Tableau reporting by designing data structures optimized for BI consumption.
6. Mentor junior engineers and drive engineering best practices.
7. Work closely with business stakeholders, developers, and product teams to align data initiatives with business goals.
8. Define SLAs, error handling, logging, monitoring, and alerting mechanisms across pipelines.

Must Have:
1. Strong Oracle SQL expertise and deep Oracle DWH experience.
2. Proficiency in Python and Spark, with experience handling large-scale data transformations.
3. Experience in building batch data pipelines and managing dependencies.
4. Solid understanding of data warehousing principles and dimensional modeling.
5. Experience working with reporting tools like Tableau.
6. Good to have: experience in cloud-based DWHs (like Snowflake) for future-readiness.

Mandatory Competencies: ETL - Data Stage; Behavioral - Communication and collaboration; BI and Reporting Tools - Tableau; QA/QE - QA Analytics - Data Analysis; Database Programming - SQL; Big Data - Spark; Programming Language - Python (Python Shell); ETL - Ab Initio

Posted 1 week ago

Apply

2.0 - 4.0 years

10 - 14 Lacs

Bengaluru

Hybrid

Job Summary: The Business Intelligence (BI) Developer II is responsible for supporting the current production BI platform along with the development of new business intelligence capabilities, leveraging data transformation best practices. The BI Developer is required to have a deep understanding of the BI architecture and processes to provide technical guidance on the optimal solution for business logic. The developer is seen as the subject matter expert (SME) on data warehousing and ELT processes leveraging SQL, Python, and Java, ideally on platforms including Snowflake and Matillion. The developer is required to communicate effectively, both orally and in writing.

Responsibilities:
- Implement new logic and/or transformation workflows to build new data products within our BI platform.
- Manage the existing code base and make required logic updates and/or technical debt cleanup.
- Develop and support QA processes for our BI platform.
- Provide consultation to the internal product team requesting new BI features.
- Contribute to Data Governance policies and standards including data quality, data management, business process management, privacy, and security.
- Troubleshoot integration/build failures to determine root cause and provide guidance on possible solutions, including writing code for resolution of an identified issue.
- Create process models and data flow diagrams.
- Participate in identifying and maintaining team best practices.
- Participate in the Agile SCRUM process, including managing tasks and test cases.

Qualifications, Skills, and Experience:
- 2-4 years of experience in a BI Developer role or related position.
- 2-4 years of experience using SQL to query data.
- 2-4 years of experience using SQL, Python, and Java to develop data warehouse and ELT processes.
- B.S. in Computer Science or equivalent business experience.
- Problem analysis and solving skills: the ability to identify root causes of problems and differentiate between perceived and actual problems.
- Experience leveraging Snowflake and Matillion preferred.
- Demonstrated proficiency with the Software Development Lifecycle (SDLC).
- Demonstrated experience working in a virtual team environment as well as the ability to work independently.
- Strong technical, organizational, and communication (written and verbal) skills that enable the ability to take requirements from a business user and implement them within the SDLC process.
- Must be flexible, with an ability to handle multiple tasks, projects, or work items concurrently while meeting prioritized deadlines.
- Intermediate to advanced knowledge of SQL.
- Basic to intermediate knowledge of Python and Java scripting.
- Must have an eye for data quality and experience enforcing data governance across a vast volume of data.
- Aptitude for learning new technologies and learning on the fly.
- Flexibility and adaptability outside the assigned core responsibilities.
- Staffing industry experience is a plus.

Posted 1 week ago

Apply

7.0 - 12.0 years

30 - 37 Lacs

Hyderabad

Work from Office

Required Skills and Qualifications:
- 8+ years of experience in data engineering or a related field.
- Strong expertise in Snowflake, including schema design, performance tuning, and security.
- Proficiency in Python for data manipulation and automation.
- Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.).
- Experience with DBT for data transformation and documentation.
- Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect).
- Strong SQL skills and experience with large-scale data sets.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Mumbai

Work from Office

Overview: The Data Technology group at MSCI is responsible for building and maintaining a state-of-the-art data management platform that delivers Reference, Market, and other critical datapoints to various products of the firm. The platform, hosted in the firm's data centers and on the Azure and GCP public clouds, processes 100 TB+ of data and is expected to run 24x7. With an increased focus on automation around systems development and operations, Data Science based quality control, and cloud migration, several tech stack modernization initiatives are currently in progress. To accomplish these initiatives, we are seeking a highly motivated and innovative individual to join the Data Engineering team for the purpose of supporting our next generation of developer tools and infrastructure. The team is the hub around which the Engineering and Operations teams revolve for automation and is committed to providing self-serve tools to our internal customers. The position is based in the Mumbai, India office.

Responsibilities:
- Build and maintain ETL pipelines for Snowflake.
- Manage Snowflake objects and data models.
- Integrate data from various sources.
- Optimize performance and query efficiency.
- Automate and schedule data workflows.
- Ensure data quality and reliability.
- Collaborate with cross-functional teams.
- Document processes and data flows.

Qualifications:
- Self-motivated, collaborative individual with a passion for excellence.
- B.E. in Computer Science or equivalent, with 5+ years of total experience and at least 2 years of experience working with databases.
- Good working knowledge of source control applications like git, with prior experience building deployment workflows using this tool.
- Good working knowledge of Snowflake, YAML, and Python.
- Experience managing Snowflake databases, schemas, tables, and other objects.
- Proficient in Snowflake SQL, including CTEs, window functions, and stored procedures (illustrated in the sketch after this listing).
- Familiar with Snowflake performance tuning and cost optimization tools.
- Skilled in building ETL/ELT pipelines using dbt, Airflow, or Python.
- Able to work with various data sources including RDBMS, APIs, and cloud storage.
- Understanding of incremental loads, error handling, and scheduling best practices.
- Strong SQL skills and intermediate Python proficiency for data processing.
- Familiar with Git for version control and collaboration.
- Basic knowledge of Azure or GCP cloud platforms.
- Capable of integrating Snowflake with APIs and cloud-native services.

What we offer you: Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
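As a sketch of the Snowflake SQL proficiency the listing asks for (CTEs plus window functions), a query selecting the latest price per instrument. Table and column names are illustrative.

```python
# Sketch: CTE + window function, latest price per instrument.
LATEST_PRICE_SQL = """
WITH ranked AS (
    SELECT
        instrument_id,
        price,
        as_of_date,
        ROW_NUMBER() OVER (
            PARTITION BY instrument_id
            ORDER BY as_of_date DESC
        ) AS rn
    FROM reference_data.prices
)
SELECT instrument_id, price, as_of_date
FROM ranked
WHERE rn = 1
"""

def latest_prices(cursor):
    cursor.execute(LATEST_PRICE_SQL)
    return cursor.fetchall()
```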

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: In this role, you will work to increase the domain data coverage and adoption of the Data Platform by promoting a connected user experience through data. You will increase data literacy and trust by leading our Data Governance and Master Data Management initiatives. You will contribute to the vision and roadmap of self-serve capabilities through the Data Platform. The senior data engineer develops data pipelines extracting and transforming data as governed assets into the data platform; improves system quality by identifying issues and common patterns and developing standard operating procedures; and enhances applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.

Roles and responsibilities:
- (MUST HAVE) Extensive experience with cloud data warehouses like Snowflake and AWS Athena, and SQL databases like PostgreSQL and MS SQL Server. Experience with NoSQL databases like AWS DynamoDB and Azure Cosmos is a plus.
- (MUST HAVE) Solid experience and clear understanding of DBT.
- (MUST HAVE) Experience working with AWS and/or Azure CI/CD DevOps technologies, and extensive debugging experience.
- Good understanding of data modeling, ETL, data curation, and big data performance tuning.
- Experience with data ingestion tools like Fivetran is a big plus.
- Experience with Data Quality and Observability tools like Monte Carlo is a big plus.
- Experience working and integrating with an event bus like Pulsar is a big plus.
- Experience integrating with a data catalog like Atlan is a big plus.
- Experience with Business Intelligence tools like PowerBI is a plus.
- An understanding of unit testing, test-driven development, functional testing, and performance testing.
- Knowledge of at least one shell scripting language.
- Ability to network with key contacts outside own area of expertise.
- Must possess strong interpersonal, organizational, presentation, and facilitation skills.
- Must be results oriented and customer focused.
- Must possess good organizational skills.

Technical experience & professional attributes:
- Prepare technical design specifications based on functional requirements and analysis documents.
- Implement, test, maintain, and support software, based on technical design specifications.
- Improve system quality by identifying issues and common patterns and developing standard operating procedures.
- Enhance applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.
- Maintain and improve existing codebases and peer review code changes.
- Liaise with colleagues to implement technical designs.
- Investigate and use new technologies where relevant.
- Provide written knowledge transfer material.
- Review functional requirements, analysis, and design documents and provide feedback.
- Assist customer support with technical problems and questions.
- Ability to work independently with wide latitude for independent decision making.
- Experience in leading the work of others and mentoring less experienced developers in the context of a project is a plus.
- Ability to listen to and understand information and communicate the same.
- Participate in architecture and code reviews.
- Lead or participate in other projects or duties as the need arises.

Education qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field, or an equivalent combination of education/experience; a Master's degree is a plus. 5 years or more of extensive experience developing mission critical and low latency solutions. At least 3 years of experience with developing and debugging distributed systems and data pipelines in the cloud.

Additional Information: The Winning Way behaviors are those all employees need in order to meet the expectations of each other, our customers, and our partners.
- Communicate with Clarity: Be clear, concise, and actionable. Be relentlessly constructive. Seek and provide meaningful feedback.
- Act with Urgency: Adopt an agile mentality - frequent iterations, improved speed, resilience. 80/20 rule - better is the enemy of done. Don't spend hours when minutes are enough.
- Work with Purpose: Exhibit a "We Can" mindset. Results outweigh effort. Everyone understands how their role contributes. Set aside personal objectives for team results.
- Drive to Decision: Cut the swirl with defined deadlines and decision points. Be clear on individual accountability and decision authority. Be guided by a commitment to and accountability for customer outcomes.
- Own the Outcome: Defined milestones, commitments, and intended results. Assess your work in context; if you're unsure, ask. Demonstrate unwavering support for decisions.

COMMENTS: The above statements are intended to describe the general nature and level of work being performed by individuals in this position. Other functions may be assigned, and management retains the right to add or change the duties at any time.

Qualification: 15 years full time education

Posted 1 week ago

Apply

4.0 - 8.0 years

20 - 27 Lacs

Chennai

Hybrid

Key Responsibilities:
- Design, develop, and maintain ETL pipelines using IBM DataStage (CP4D) and AWS Glue/Lambda for ingestion from varied sources like flat files, APIs, Oracle, DB2, etc.
- Build and optimize data flows for loading curated datasets into Snowflake, leveraging best practices for schema design, partitioning, and transformation logic.
- Participate in code reviews, performance tuning, and defect triage sessions.
- Work closely with data governance teams to ensure lineage, privacy tagging, and quality controls are embedded within pipelines.
- Contribute to CI/CD integration of ETL components using Git, Jenkins, and parameterized job configurations.
- Troubleshoot and resolve issues in QA/UAT/Production environments as needed.
- Adhere to agile delivery practices, sprint planning, and documentation requirements.

Required Skills and Experience:
- 4+ years of experience in ETL development, with at least 1-2 years in IBM DataStage (preferably the CP4D version).
- Hands-on experience with AWS Glue (PySpark or Spark) and AWS Lambda for event-based processing.
- Experience working with Snowflake: loading strategies, stream-task, zero-copy cloning, and performance tuning.
- Proficiency in SQL, Unix scripting, and basic Python for data handling or automation.
- Familiarity with S3, version control systems (Git), and job orchestration tools.
- Experience with data profiling, cleansing, and quality validation routines.
- Understanding of data lake/data warehouse architectures and DevOps practices.
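A hedged sketch of the event-based processing the listing mentions: an AWS Lambda handler that reacts to an S3 object-created event and runs a COPY INTO against Snowflake. Stage, table, and credential names are placeholders; in practice Snowpipe often covers this pattern natively.

```python
# Sketch: S3-event-driven load into Snowflake from AWS Lambda.
import os
import snowflake.connector

def handler(event, context):
    conn = snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="LOAD_WH",
        database="RAW",
        schema="LANDING",
    )
    try:
        for record in event["Records"]:            # standard S3 event shape
            key = record["s3"]["object"]["key"]
            conn.cursor().execute(
                f"COPY INTO raw_orders FROM @s3_landing_stage/{key} "
                "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
            )
    finally:
        conn.close()
```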

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing.

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Hyderabad

Work from Office

DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The impact you will have in this role: The Enterprise Intelligence Lead will be responsible for building data pipelines using their deep knowledge of Talend, SQL, and data analysis on the bespoke Snowflake data warehouse for Enterprise Intelligence, and will be responsible for managing a growing team of consultants and employees and running development and production support teams for the Enterprise Intelligence team at DTCC.

Your Primary Responsibilities:
- Working on and leading engineering and development focused projects from start to finish with minimal supervision.
- Providing technical and operational support for our customer base as well as other technical areas within the company that utilize Claw.
- Risk management functions such as reconciliation of vulnerabilities and security baselines, as well as other risk and audit related objectives.
- Ensuring incident, problem, and change tickets are addressed in a timely fashion, as well as escalating technical and managerial issues.
- Following DTCC's ITIL process for incident, change, and problem resolution.
- Manage delivery and production support teams.
- Drive delivery independently and autonomously within team and vendor teams.
- Liaise with onshore peers to drive seamless quality of service to stakeholders.
- Conduct working sessions with users and SMEs to align reporting and reduce use of offline spreadsheets and potentially stale data sources.
- Provide technical leadership for projects.
- Work closely with other project managers and scrum masters to create and update project plans.
- Work closely with peers to improve workflow processes and communication.

Qualifications:
- 8+ years of related experience.
- Bachelor's degree (preferred) or equivalent experience.

Talents Needed for Success:
- Minimum of 12 years of related experience.
- Minimum of 8 years of experience in managing data warehousing, SQL, and Snowflake.
- Minimum of 5 years of experience in people management and team leadership.
- Ability to manage distributed teams with an employee/vendor mix.
- Strong understanding of snowflake schemas and data integration methods and tools.
- Strong knowledge of managing data warehouses in a production environment, including all phases of lifecycle management: planning, design, deployment, upkeep, and retirement.
- Ability to meet deadlines, goals, and objectives.
- Ability to facilitate educational and working sessions with stakeholders and users.
- Self-starter, continually striving to improve the team's service offerings and one's own skillset.
- Must have a problem-solving and innovative mindset to meet a wide variety of challenges.
- Willingness and ability to learn all aspects of our operating model as well as new tools.
- Developed competencies around essential project management, communication (oral, written), and personal effectiveness.
- Good SQL skills and good knowledge of relational databases, specifically Snowflake.
- Ability to manage agile development cycles within the DTCC SDLC (SDP) methodology.
- Good knowledge of the technical components of Claw (i.e. Snowflake, Talend, PowerBI, PowerShell, Autosys).

Posted 1 week ago

Apply

5.0 - 9.0 years

12 - 17 Lacs

Hyderabad

Work from Office

The impact you will have in this role: The Enterprise Test Engineering ("ETE") family is responsible for ensuring that all applications and systems meet defined quality standards. The ETE family encompasses three major areas: (a) functional testing, (b) non-functional testing, and (c) test architecture and enablement. Other key focuses include regression testing, browser testing, performance testing, capacity and stress testing, resiliency testing, environment management services, and infrastructure testing. The family develops, conducts, and evaluates testing processes, working closely with developers to remediate identified system defects, and brings in-depth knowledge of automated testing tools and quality control and assurance approaches, including the creation of a reusable foundational test automation framework for the entire organization. The Lead Test Engineer is responsible for independently leading Test Engineering teams. You will be developing test plans and implementing those plans against the corresponding test procedures. You will be accountable for the development, release, and maintenance of test procedures.

Your Primary Responsibilities:
- Responsible for system integration testing, including automation, of newly developed or enhanced applications.
- Play an active role in translating business and functional requirements into concrete results.
- Lead, develop, and advise on test automation strategies, and provide critical feedback in requirements, design, implementation, and execution phases.
- Partner with collaborators: Product Management, Application Development, DevOps, and other technical teams.
- Track test execution milestones and report on issues and risks with the potential to affect project timelines.
- Construct appropriate end-to-end business scenarios through the application of a broad understanding of business objectives and goals.
- Responsible for Delivery Pipeline adoption.
- Identify dependencies for environmental and data requirements.
- Contribute to a standard framework of reusable functions.
- Develop a thorough understanding of the product(s) being delivered.
- Responsible for process compliance and associated documentation.
- Align risk and control processes into day-to-day responsibilities to supervise and mitigate risk; escalate appropriately.
- Work closely with business and AD domain experts to continually improve depth and breadth of knowledge for assigned applications/systems.
- Responsible for project coordination and technical management tasks.

NOTE: The primary responsibilities of this role are not limited to the details above.

Qualifications:
- 5-7 years of related experience in delivering software solutions, with hands-on automated testing.
- Bachelor's degree preferred, or equivalent experience.

Talents Needed for Success:
- Experience in Agile/Waterfall and onsite/offshore work models and coordination.
- In-depth knowledge of the software implementation lifecycle (specifically the testing model, methodology, and processes).
- Experience with test engineering methodologies and test automation frameworks.
- Proficient in automation at all software layers (e.g., UI, services, APIs) as well as CI/CD technologies (e.g., Cloudbees, Jenkins, Cucumber, Git, JUnit, Jira).
- Sophisticated Java/Selenium development skills, with significant experience applying those skills in test environments.
- Track test execution milestones and report on issues and risks with the potential to affect project timelines.
- Extensive experience with testing modern scripting-language-based components.
- Proven expertise in frontend test automation using Selenium WebDriver.
- Expert and hands-on with backend test automation using Rest Assured/Karate for API testing, and JDBC/JPA for database testing (Oracle/DB2/Snowflake).
- Experience in writing sophisticated SQL queries.
- Experience with JIRA, ALM, Bitbucket, Git, and Jenkins.
- Requires the ability to work with both business clients and the technical team, and the ability to work independently, both individually and as part of a team.
- Experience in mentoring junior test engineers, verifying work products, and providing mentorship as needed.
- Unix, Python, and AWS experience is a plus.
- Accountable for process compliance and associated documentation.
- Align risk and control processes into day-to-day responsibilities to supervise and mitigate risk; escalate appropriately.
- Excellent problem-solving, collaboration, and written and verbal communication skills.

Posted 1 week ago

Apply