
2010 Snowflake Jobs - Page 46

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

10 - 19 Lacs

Bengaluru

Hybrid

Job Description

Experience: 4 to 7 years.

Requirements:
- Experience in an ETL tool (e.g., DataStage), with implementation experience in a large data warehouse.
- Proficiency in programming languages such as Python.
- Experience with data warehousing solutions (e.g., Snowflake, Redshift) and big data technologies (e.g., Hadoop, Spark).
- Strong knowledge of SQL and database management systems.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and data pipeline orchestration tools (e.g., Airflow).
- Proven ability to lead and develop high-performing teams, with excellent communication and interpersonal skills.
- Strong analytical and problem-solving abilities, with a focus on delivering actionable insights.

Responsibilities:
- Design, develop, and maintain advanced data pipelines and ETL processes using niche technologies.
- Collaborate with cross-functional teams to understand complex data requirements and deliver tailored solutions.
- Ensure data quality and integrity by implementing robust data validation and monitoring processes.
- Optimize data systems for performance, scalability, and reliability.
- Develop comprehensive documentation for data engineering processes and systems.
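
For a sense of the orchestration work described above, a minimal sketch of a daily extract-transform-load pipeline in Airflow 2.x; the DAG name and the placeholder step bodies are illustrative assumptions, not part of the posting:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull the day's batch from the source system (placeholder logic).
    print("extracting batch for", context["ds"])


def transform(**context):
    # Clean and validate the extracted batch (placeholder logic).
    print("transforming batch for", context["ds"])


def load(**context):
    # Load the validated batch into the warehouse (placeholder logic).
    print("loading batch for", context["ds"])


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task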

Posted 1 month ago

Apply

9.0 - 14.0 years

15 - 30 Lacs

Pune, Chennai, Bengaluru

Hybrid

JD: Senior Snowflake Data Architect

Designs, implements, and optimizes data solutions on the Snowflake cloud data platform, ensuring data security, governance, and performance, while collaborating with cross-functional teams and providing technical leadership. The data architect's duties include determining the data strategy, understanding data management technologies, overseeing the data inventory, and keeping a finger on the pulse of the organization's data management systems.

Posted 1 month ago

Apply

4.0 - 6.0 years

2 - 6 Lacs

Pune

Work from Office

An ETL Tester (4+ years, must) is responsible for testing and validating the accuracy and completeness of data being extracted, transformed, and loaded (ETL) from various sources into target systems, which may be on-cloud or on-premise. They work closely with ETL developers, data engineers, data analysts, and other stakeholders to ensure the quality of data and the reliability of ETL processes, and must understand cloud architecture in order to design test strategies for data moving in and out of cloud systems.

Roles and Responsibilities:
1) Strong in data warehouse testing - ETL and BI.
2) Strong database knowledge: Oracle / SQL Server / Teradata / Snowflake.
3) Strong SQL skills, with experience writing complex data-validation SQL.
4) Experience working in an Agile environment; experience creating test strategies, release-level test plans, and test cases.
5) Develop and maintain test data for ETL testing.
6) Design and execute test cases for ETL processes and data integration.
7) Good knowledge of Rally, Jira, and HP ALM.
8) Experience in automation testing and data validation using Python.
9) Document test results and communicate ETL testing status to stakeholders.
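
As an illustration of item 8, a minimal sketch of Python-based data validation with pytest; the in-memory SQLite tables are hypothetical stand-ins for a real source and target:

import sqlite3

import pytest


def make_db(rows):
    # In-memory database standing in for a real source or target system.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return conn


@pytest.fixture
def source():
    return make_db([(1, 10.0), (2, 20.5)])


@pytest.fixture
def target():
    return make_db([(1, 10.0), (2, 20.5)])


def test_row_counts_match(source, target):
    # Completeness: every source row made it into the target.
    src = source.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    tgt = target.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert src == tgt


def test_amount_totals_match(source, target):
    # Accuracy: aggregate measures agree after the load.
    src = source.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    tgt = target.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
    assert src == pytest.approx(tgt)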

Posted 1 month ago

Apply

0.0 - 1.0 years

2 - 4 Lacs

Mumbai

Work from Office

We are seeking an experienced Data Engineer to join our team for a 6-month contract assignment. The ideal candidate will work on data warehouse development, ETL pipelines, and analytics enablement using Snowflake, Azure Data Factory (ADF), dbt, and other tools. This role requires strong hands-on experience with data integration platforms, documentation, and pipeline optimization, especially in cloud environments such as Azure and AWS.

Key Responsibilities:
- Build and maintain ETL pipelines using Fivetran, dbt, and Azure Data Factory.
- Monitor and support production ETL jobs.
- Develop and maintain data lineage documentation for all systems.
- Design data mapping and documentation to aid QA/UAT testing.
- Evaluate and recommend modern data integration tools.
- Optimize shared data workflows and batch schedules.
- Collaborate with Data Quality Analysts to ensure accuracy and integrity of data flows.
- Participate in performance tuning and improvement recommendations.
- Support BI/MDM initiatives, including Data Vault and Data Lakes.

Required Skills:
- 7+ years of experience in data engineering roles.
- Strong command of SQL, with 5+ years of hands-on development.
- Deep experience with Snowflake, Azure Data Factory, and dbt.
- Strong background with ETL tools (Informatica, Talend, ADF, dbt, etc.).
- Bachelor's in CS, Engineering, Math, or a related field.
- Experience in the healthcare domain (working with PHI/PII data).
- Familiarity with scripting/programming (Python, Perl, Java, Linux-based environments).
- Excellent communication and documentation skills.
- Experience with BI tools like Power BI, Cognos, etc.
- Organized self-starter with strong time-management and critical-thinking abilities.

Nice to Have:
- Experience with Data Lakes and Data Vaults.
- QA & UAT alignment with clear development documentation.
- Multi-cloud experience (especially Azure, AWS).
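
A minimal sketch of how such a pipeline might drive dbt builds and tests from Python via the dbt CLI; the project directory is a hypothetical placeholder:

import subprocess


def run_dbt(command: str, project_dir: str = "./analytics_project") -> None:
    # Shell out to the dbt CLI; check=True raises if any model or test fails.
    subprocess.run(["dbt", command, "--project-dir", project_dir], check=True)


if __name__ == "__main__":
    run_dbt("run")   # build the models
    run_dbt("test")  # run schema and data tests against the built models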

Posted 1 month ago

Apply

7.0 - 10.0 years

17 - 27 Lacs

Gurugram

Hybrid

Primary Responsibilities:
- Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security.
- Build and maintain data pipelines using Azure Databricks and other Azure data integration tools.
- Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets.
- Write extensive queries in SQL and Snowflake.
- Implement security and access-control measures, and regularly audit the Azure platform and infrastructure to ensure compliance.
- Create, understand, and validate designs and estimated effort for a given module/task, and be able to justify them.
- Possess solid troubleshooting skills and perform troubleshooting of issues in different technologies and environments.
- Implement and adhere to best engineering practices: design, unit testing, functional test automation, continuous integration, and delivery.
- Maintain code quality by writing clean, maintainable, and testable code.
- Monitor performance and optimize resources to ensure cost-effectiveness and high availability.
- Define and document best practices and strategies for application deployment and infrastructure maintenance.
- Provide technical support and consultation for infrastructure questions.
- Help develop, manage, and monitor continuous integration and delivery systems.
- Take accountability and ownership of features and teamwork.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives.

Required Qualifications:
- B.Tech/MCA (minimum 16 years of formal education).
- 7+ years of overall experience.
- Minimum of 3 years of experience in Azure (ADF), Databricks, and DevOps.
- 5 years of experience writing advanced-level SQL.
- 2-3 years of experience writing, reading, and debugging Spark, Scala, and Python code.
- 3 or more years of experience architecting, designing, developing, and implementing cloud solutions on Azure.
- Proficiency in programming languages and scripting tools.
- Understanding of cloud data storage and database technologies such as SQL and NoSQL.
- Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts.
- Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform.
- Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks.
- Proven excellent communication, writing, and presentation skills.
- Experience interacting with international customers to gather requirements and convert them into solutions.

Preferred Qualifications:
- Knowledge of AI/ML or LLMs (GenAI).
- Knowledge of the US healthcare domain and experience with healthcare data.
- Experience and skills with Snowflake.
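
A minimal sketch of the kind of PySpark job this role involves on Azure Databricks; the ADLS paths, column names, and Delta output location are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_summary").getOrCreate()

# Read raw JSON events (e.g., landed by Azure Data Factory) from ADLS.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/claims/")

# Aggregate per member for downstream reporting.
summary = raw.groupBy("member_id").agg(
    F.sum("paid_amount").alias("total_paid"),
    F.count("*").alias("claim_count"),
)

# Write curated output as Delta for BI / Snowflake consumption.
summary.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/claims_summary/"
)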

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 11 Lacs

Hyderabad

Work from Office

Greetings from NCG! We have an opening for a Snowflake Developer role in our Hyderabad office. The JD below is for your reference.

Job Description: We are looking for a highly skilled and experienced Snowflake Developer with strong Azure Data Factory (ADF) expertise to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining cloud-based data pipelines and solutions using Snowflake and Azure services.

Key Responsibilities:
- Design and develop scalable ETL/ELT pipelines using Azure Data Factory and Snowflake.
- Create and optimize complex SQL scripts for data extraction, transformation, and loading.
- Integrate data from various sources (on-prem and cloud) into Snowflake.
- Monitor data pipelines and troubleshoot performance issues.
- Collaborate with data architects, analysts, and business stakeholders to define and deliver data solutions.
- Implement data quality checks, transformation logic, and audit processes.
- Ensure data security, compliance, and performance standards.
- Document technical processes and workflows.

Required Skills:
- Minimum 6 years of hands-on experience in data engineering roles.
- Strong experience in Snowflake (data warehousing, schema design, performance tuning).
- Proficiency in Azure Data Factory (ADF): pipelines, triggers, linked services, data flows.
- Experience with SQL, stored procedures, and data modeling.
- Familiarity with Azure services like Azure Blob Storage, Azure Data Lake, and Azure SQL.
- Experience with version control (Git) and CI/CD pipelines is a plus.
- Good understanding of data governance and data security principles.

For further information, please contact HR: Harshini - 9663082098, Swathi - 9972784663.

Thanks and regards,
Chiranjeevi Nanjunda
Talent Acquisition Lead - NCG
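
For illustration, a minimal sketch of executing a Snowflake MERGE from Python with the snowflake-connector-python package; the account, credentials, and table names are hypothetical:

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.central-india.azure",  # hypothetical account locator
    user="ETL_USER",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.core.customers AS t
USING analytics.staging.customers_batch AS s
  ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET t.email = s.email, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (s.customer_id, s.email, s.updated_at)
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)  # incremental upsert from staging into the core table
finally:
    cur.close()
    conn.close()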

Posted 1 month ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid

We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies.

Responsibilities:
- Design, develop, and maintain data pipelines for processing large-scale datasets.
- Build efficient ETL workflows to transform and integrate data from multiple sources.
- Develop and optimize Hadoop and PySpark applications for data processing.
- Ensure data quality, governance, and security standards are met across systems.
- Implement and manage cloud-based data solutions (AWS, Azure, or GCP).
- Collaborate with data scientists and analysts to support business intelligence initiatives.
- Troubleshoot performance issues and optimize query execution in big data environments.
- Stay updated on industry trends and advancements in big data and cloud technologies.

Required Skills:
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
- Expertise in PySpark for distributed data processing.
- Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines).
- Experience with cloud platforms (AWS, Azure, GCP) and their data-related services.
- Knowledge of SQL and NoSQL databases.
- Familiarity with data warehousing concepts and data modeling techniques.
- Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com

Posted 1 month ago

Apply

5.0 - 6.0 years

9 - 16 Lacs

Gurugram

Hybrid

Role Summary: We are seeking an experienced ETL Developer with strong expertise in Informatica PowerCenter, Oracle SQL/PL-SQL, and data warehousing concepts. The ideal candidate will play a key role in developing, optimizing, and maintaining ETL workflows, ensuring seamless data integration and transformation to support business-critical applications. Experience with Snowflake and job-scheduling tools such as Control-M is a plus.

Key Responsibilities:
- Collaborate with technical leads, business analysts, and subject-matter experts to understand data models and business requirements.
- Design, develop, and implement ETL solutions using Informatica PowerCenter.
- Develop, optimize, and maintain complex SQL/PL-SQL scripts to support data processing in Oracle databases.
- Provide accurate development estimates and deliver high-quality solutions within agreed timelines.
- Ensure data integrity, reconciliation, and exception handling by following best practices and development standards.
- Participate in cross-functional team meetings to coordinate dependencies and deliverables.
- Implement procedures for data maintenance, monitoring, and performance optimization.

Essential Skills & Experience:

Technical:
- Minimum 3 years of hands-on experience with Informatica PowerCenter in ETL development.
- Experience with the Snowflake data warehouse platform.
- Familiarity with source-control tools (e.g., Git, SVN).
- Proficiency in job-scheduling tools like Control-M.
- Strong skills in UNIX shell scripting for automation.
- Solid experience (minimum 2 years) in SQL/PL-SQL development, including query tuning and optimization.
- In-depth understanding of data warehousing, data mart, and ODS concepts.
- Knowledge of data normalization, OLAP techniques, and Oracle performance optimization.
- Experience working with Oracle or SQL Server databases (3+ years), along with Windows/UNIX environment expertise.

Functional:
- Minimum 3 years of experience in the financial services sector or related industries.
- Sound understanding of data distribution, modeling, and physical database design.
- Ability to engage and communicate effectively with business stakeholders and data stewards.
- Strong problem-solving, analytical, interpersonal, and communication skills.

Posted 1 month ago

Apply

4.0 - 7.0 years

15 - 30 Lacs

Hyderabad

Hybrid

What are the ongoing responsibilities of this Data Engineer? We are building a growing Data and AI team. You will play a critical role in efforts to centralize structured and unstructured data for the firm. We seek a candidate with skills in data modeling, data management, and data governance who can contribute first-hand to the firm's data strategy. The ideal candidate is a self-starter with a strong technical foundation, a collaborative mindset, and the ability to navigate complex data challenges. #ASSOCIATE

What qualifications, skills, and experience would help someone be successful?
- Bachelor's degree in computer science or computer applications, or equivalent experience in lieu of a degree with 3 years of industry experience.
- Strong expertise in data modeling and data management concepts; experience implementing master data management is preferred.
- Sound knowledge of Snowflake and data warehousing techniques.
- Experience building, optimizing, and maintaining data pipelines and data management frameworks to support business needs.
- Proficiency in at least one programming language, preferably Python.
- Collaborate with cross-functional teams to translate business needs into scalable data- and AI-driven solutions.
- Take ownership of projects from ideation to production, operating in a startup-like culture within an enterprise environment.
- Excellent communication, collaboration, and an ownership mindset.
- Foundational knowledge of API development and integration.
- Knowledge of Tableau and Alteryx is good to have.

Work Shift Timings: 2:00 PM - 11:00 PM IST

Posted 1 month ago

Apply

6.0 - 8.0 years

15 - 22 Lacs

Gurugram

Hybrid

Role Summary: Seasoned Software Test Analyst with more than 5 years of hands-on experience in automation testing for both web and backend systems. The ideal candidate possesses expertise in open-source automation tools, strong programming skills in Python and Java, and working knowledge of cloud environments such as AWS and Azure. Experience in the finance and investment domain is preferable.

Key Responsibilities:
- Design, develop, and maintain automated test scripts using Playwright, Pytest, Selenium, Requests, and REST Assured.
- Perform backend and API automation testing.
- Write and optimize complex SQL queries in databases such as Oracle and Snowflake.
- Utilize NumPy and Pandas for data manipulation and validation during testing.
- Integrate and maintain automated tests in CI/CD pipelines using tools such as Jenkins and GitLab CI.
- Collaborate with cross-functional teams on requirement analysis, test planning, execution, and defect resolution.
- Ensure effective testing of cloud-based applications across AWS and Azure platforms.
- Apply strong logical reasoning and problem-solving skills to troubleshoot and resolve issues efficiently.
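
A minimal sketch of a backend API test combining Pytest, Requests, and Pandas as described above; the endpoint URL and response fields are hypothetical:

import pandas as pd
import requests


def test_positions_endpoint_returns_valid_payload():
    resp = requests.get("https://api.example.com/v1/positions", timeout=10)
    assert resp.status_code == 200

    # Load the JSON payload into a DataFrame for bulk validation with pandas.
    df = pd.DataFrame(resp.json()["positions"])

    # Structural checks: required columns present, key column fully populated.
    assert {"account_id", "symbol", "quantity"} <= set(df.columns)
    assert df["account_id"].notna().all()

    # Business rule: position quantities must be non-negative.
    assert (df["quantity"] >= 0).all()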

Posted 1 month ago

Apply

1.0 - 4.0 years

7 - 17 Lacs

Bengaluru

Hybrid

Job Title: Data & GenAI AWS Specialist
Experience: 1-4 years
Location: Bangalore
Mandatory Qualification: B.E./B.Tech/M.Tech/MS from IIT or IISc ONLY

Job Overview: We are seeking a seasoned Data & GenAI Specialist with deep expertise in AWS managed services (PaaS) to join our innovative team. The ideal candidate will have extensive experience designing sophisticated, scalable architectures for data pipelines and Generative AI (GenAI) solutions leveraging cloud services. Key to this role is the ability to articulate architecture solutions clearly and effectively to customers, helping them conceptualize and implement advanced GenAI-driven applications tailored precisely to their business requirements.

Responsibilities:
- Engage closely with customers to thoroughly understand their business challenges, translate their requirements into comprehensive architecture solutions, and effectively communicate intricate technical details.
- Architect and oversee the design, development, and deployment of scalable, resilient data processing pipelines using AWS, Azure, GCP, Snowflake, or open-source services.
- Lead the architectural design and implementation of robust GenAI systems, leveraging AWS foundation models and frameworks, including Amazon Bedrock, AWS Inferentia, Amazon SageMaker, and Amazon Kendra.
- Collaborate with internal and customer teams to align architectural strategies with business objectives, ensuring adherence to AWS best practices.
- Optimize and refine data architectures to effectively handle large-scale GenAI workloads, prioritizing performance, scalability, and robust security.
- Document and promote architectural best practices in data engineering, pipeline architecture, and GenAI development within AWS environments.
- Stay abreast of emerging architectural trends, innovative technologies, and advancements in the AWS and GenAI ecosystems to ensure solutions remain cutting-edge and efficient.
- Have a keen sense of maximizing and extending clients' existing AWS investments, rather than a rip-and-replace mentality.

Qualifications:
- B.Tech/MS/M.Tech degree in Computer Science, Data Science, AI, or a related technical field.
- Knowledge of architecting and building data pipelines and GenAI solutions on AWS.
- Expert-level proficiency in AWS architectural patterns and services, including AWS Glue, Lambda, EMR, S3, and SageMaker.

Why Join Us:
- Be at the forefront of cutting-edge GenAI and AWS cloud architectural innovation, working in the AI Lab.
- Thrive in a collaborative, dynamic, and supportive team environment.
- Continuous learning and growth opportunities.
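
For illustration, a minimal sketch of invoking a foundation model through Amazon Bedrock's runtime API with boto3; the region, model ID, and prompt are assumptions for the example:

import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body follows Anthropic's messages schema as accepted by Bedrock.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our Q3 sales pipeline."}],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])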

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 15 Lacs

Bengaluru

Work from Office

We are looking for a skilled Data Analyst with excellent communication skills and deep expertise in SQL, Tableau, and modern data warehousing technologies. This role involves designing data models, building insightful dashboards, ensuring data quality, and extracting meaningful insights from large datasets to support strategic business decisions.

Key Responsibilities:
- Write advanced SQL queries to retrieve and manipulate data from cloud data warehouses such as Snowflake, Redshift, or BigQuery.
- Design and develop data models that support analytics and reporting needs.
- Build dynamic, interactive dashboards and reports using tools like Tableau, Looker, or Domo.
- Perform advanced analytics, including cohort analysis, time-series analysis, scenario analysis, and predictive analytics.
- Validate data accuracy and perform thorough data QA to ensure high-quality output.
- Investigate and troubleshoot data issues; perform root-cause analysis in collaboration with BI or data engineering teams.
- Communicate analytical insights clearly and effectively to stakeholders.

Required Skills & Qualifications:
- Excellent communication skills are mandatory for this role.
- 5+ years of experience in data analytics, BI analytics, or BI engineering roles.
- Expert-level SQL skills, with experience writing complex queries and building views.
- Proven experience using data visualization tools like Tableau, Looker, or Domo.
- Strong understanding of data modeling principles and best practices.
- Hands-on experience with cloud data warehouses such as Snowflake, Redshift, BigQuery, SQL Server, or Oracle.
- Intermediate-level proficiency with spreadsheet tools like Excel, Google Sheets, or Power BI, including functions, pivots, and lookups.
- Bachelor's or advanced degree in a relevant field such as Data Science, Computer Science, Statistics, Mathematics, or Information Systems.
- Ability to collaborate with cross-functional teams, including BI engineers, to optimize reporting solutions.
- Experience handling large-scale enterprise data environments.
- Familiarity with data governance, data cataloging, and metadata management tools (a plus, but not required).
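
As an illustration of cohort analysis, a minimal pandas sketch; the tiny orders DataFrame is a hypothetical stand-in for a warehouse extract:

import pandas as pd

orders = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-02-10", "2024-01-20", "2024-03-02", "2024-02-14"]
    ),
})

# Assign each order to a month and each user to their first-order cohort.
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("user_id")["order_month"].transform("min")
orders["age"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)

# Distinct active users per cohort at each month offset, then retention rates.
cohort_counts = (
    orders.groupby(["cohort", "age"])["user_id"].nunique().unstack(fill_value=0)
)
retention = cohort_counts.divide(cohort_counts[0], axis=0)
print(retention)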

Posted 1 month ago

Apply

4.0 - 7.0 years

13 - 17 Lacs

Pune

Work from Office

Overview: The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities: As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications:
- Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
- Exposure to SQL databases like Oracle, MySQL, or Microsoft SQL Server is a must.
- Experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
- Exposure to NoSQL databases like Neo4j or a document database is also good to have.

What we offer you:
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law.

MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.

Posted 1 month ago

Apply

3.0 - 8.0 years

6 - 15 Lacs

Bengaluru

Work from Office

Role & responsibilities:
- Design, build, and maintain scalable data pipelines using DBT and Airflow.
- Develop and optimize SQL queries and data models in Snowflake.
- Implement ETL/ELT workflows, ensuring data quality, performance, and reliability.
- Work with Python for data processing, automation, and integration tasks.
- Handle JSON data structures for data ingestion, transformation, and APIs.
- Leverage AWS services (e.g., S3, Lambda, Glue, Redshift) for cloud-based data solutions.
- Collaborate with data analysts, engineers, and business teams to deliver high-quality data products.

Preferred candidate profile:
- Strong expertise in SQL, Snowflake, and DBT for data modeling and transformation.
- Proficiency in Python and Airflow for workflow automation.
- Experience working with AWS cloud services.
- Ability to handle JSON data formats and integrate APIs.
- Strong problem-solving skills and experience optimizing data pipelines.
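
A minimal sketch of the JSON-ingestion step such a pipeline might start with, landing API records in S3 with boto3 for downstream DBT/Snowflake processing; the endpoint, bucket, and key layout are hypothetical:

import json
from datetime import datetime, timezone

import boto3
import requests

s3 = boto3.client("s3")

# Fetch a batch of records from a (hypothetical) REST endpoint.
records = requests.get("https://api.example.com/v1/events", timeout=30).json()

# Partition raw landings by load date so downstream stages pick up each batch once.
key = f"raw/events/load_date={datetime.now(timezone.utc):%Y-%m-%d}/batch.json"
s3.put_object(
    Bucket="example-data-lake",
    Key=key,
    Body=json.dumps(records).encode("utf-8"),
)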

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Remote

- 5+ years of commercial analytics experience in the pharma/healthcare industry (must have).
- Excellent communication skills.
- Strong stakeholder and project management skills.
- Good proficiency in SQL (must have).
- Working knowledge of Snowflake (good to have).
- Knowledge of at least one BI tool; MicroStrategy preferred.
- Should have worked on commercial and call-activity data.
- Exposure to pharma datasets from IMS, IQVIA, or similar vendors.

Posted 1 month ago

Apply

5.0 - 8.0 years

1 - 2 Lacs

Gurugram

Work from Office

Urgent requirement for a Tech Consultant - Data Engineering with 5+ years of experience. Strong knowledge of Big Data technologies (Hadoop, Spark, Snowflake, Databricks, Airflow, AWS), Python, SQL, and cloud platforms (AWS, Azure).

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 12 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

Hiring for Snowflake Developer with experience range 2 years & above.
Mandatory Skills: Snowflake
Education: BE/B.Tech/MCA/M.Tech/MSc./MS
Location: PAN INDIA

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Position: Senior Data Engineer - Airflow, PL/SQL
Experience: 5+ years
Location: Bangalore/Hyderabad/Pune

Seeking a Senior Data Engineer with strong expertise in Apache Airflow and Oracle PL/SQL, along with working experience in Snowflake and Agile methodologies. The ideal candidate will also take up Scrum Master responsibilities and lead a data engineering scrum team to deliver robust, scalable data solutions.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Apache Airflow.
- Write and optimize complex PL/SQL queries, procedures, and packages on Oracle databases.
- Collaborate with cross-functional teams to design efficient data models and integration workflows.
- Work with Snowflake for data warehousing and analytics use cases.
- Own the delivery of sprint goals, backlog grooming, and facilitation of agile ceremonies as the Scrum Master.
- Monitor pipeline health and proactively troubleshoot production data issues.
- Ensure code quality, documentation, and best practices across the team.
- Mentor junior data engineers and promote a culture of continuous improvement.

Required Skills and Qualifications:
- 5+ years of experience as a Data Engineer in enterprise environments.
- Strong expertise in Apache Airflow for orchestrating workflows.
- Expert in Oracle PL/SQL: stored procedures, performance tuning, debugging.
- Hands-on experience with Snowflake: data modeling, SQL, optimization.
- Working knowledge of version control (Git) and CI/CD practices.
- Prior experience or certification as a Scrum Master is highly desirable.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and leadership skills.
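
A minimal sketch of pairing Airflow with Oracle PL/SQL as described above, using Airflow's TaskFlow API and the python-oracledb driver; the DSN, credentials, and procedure name are hypothetical:

from datetime import datetime

import oracledb
from airflow.decorators import dag, task
from airflow.operators.python import get_current_context


@task
def refresh_daily_positions():
    ds = get_current_context()["ds"]  # logical date of the run
    conn = oracledb.connect(
        user="etl", password="********", dsn="db.example.com/ORCLPDB1"
    )
    try:
        with conn.cursor() as cur:
            # The PL/SQL package does the heavy lifting; Airflow handles
            # scheduling and retries.
            cur.callproc("pkg_positions.refresh_daily", [ds])
        conn.commit()
    finally:
        conn.close()


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def positions_pipeline():
    refresh_daily_positions()


positions_pipeline()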

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 15 Lacs

Kolkata, Pune, Bengaluru

Work from Office

Hiring for Snowflake Developer with experience range 2 years & above.
Mandatory Skills: Snowflake
Education: BE/B.Tech/MCA/M.Tech/MSc./MS
Location: PAN INDIA

Posted 1 month ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Chennai

Work from Office

We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.

Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration.
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.

Required Qualifications & Skills:
- Strong expertise in AWS cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in programming: Python, PySpark, and AWS Lambda.
- Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, EDW principles.
- Experience designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.

Preferred Qualifications:
- AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
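
For illustration, a minimal skeleton of an AWS Glue PySpark job of the kind this role builds; the catalog database, table, and S3 output path are hypothetical:

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (e.g., a table landed by DMS).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Write curated Parquet for Redshift Spectrum / Snowflake external stages.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()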

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Chennai

Work from Office

Technical Project Manager

We are looking for an experienced Technical Project Manager with 5 to 10 years of proven success overseeing complex project portfolios. The ideal candidate should have a strong foundation in both technical execution and project leadership, with the ability to thrive in a fast-paced, evolving environment.

Position: Technical Project Manager (immediate joiners preferred)
Experience: 5-10 years
No. of positions: 1
Location: Chennai, India

Key Skills:
- Strong project and customer management skills.
- Strong foundational knowledge of cloud platforms.
- Prototyping mindset with a focus on design thinking.
- Ability to handle projects spanning Web, API, Business Intelligence, Data & Analytics, AI/ML, ETL & ETL tools, BI tools, MSSQL, Big Data, Cloud SQL, Snowflake, Databricks, and Python.
- PMP certification is a plus.

Responsibilities:
- End-to-end project and delivery management across the full lifecycle.
- Collaborate closely with engineering teams to understand technical issues, contribute to solution design, and ensure effective implementation.
- Expertise in Agile-based custom application development.
- Lead cross-functional teams and plan resources effectively.
- Strong client and stakeholder engagement, including change management.
- Proficient in risk identification, mitigation, and governance.
- Skilled in tracking project schedules, resources, and costs.
- Experienced in coordinating with Cloud/Infrastructure teams for deployments and change requests.
- Develop and maintain comprehensive project documentation, including project plans, timelines, budgets, and risk assessments.
- Oversee technical resource management, including workload validation, expertise allocation, and onboarding.
- Manage all technical activities outlined in the customer contract, ensuring quality, mitigating risks, and adhering to timelines.
- Good experience in effectively managing backlogs using JIRA or other tools.

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Mumbai, Pune, Mumbai (All Areas)

Work from Office

Seeking a Data Architect to design scalable data models, build pipelines, ensure governance and security, and optimize performance across cloud/data platforms. Collaborate with teams, drive innovation, lead data strategy, and mentor others.

Posted 1 month ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Responsibilities:
- Lead and manage an offshore team of data engineers, providing strategic guidance, mentorship, and support to ensure the successful delivery of projects and the development of team members.
- Collaborate closely with onshore stakeholders to understand project requirements, allocate resources efficiently, and ensure alignment with client expectations and project timelines.
- Drive the technical design, implementation, and optimization of data pipelines, ETL processes, and data warehouses, ensuring scalability, performance, and reliability.
- Define and enforce engineering best practices, coding standards, and data quality standards to maintain high-quality deliverables and mitigate project risks.
- Stay abreast of emerging technologies and industry trends in data engineering, and provide recommendations for tooling, process improvements, and skill development.
- Assume a data architect role as needed, leading the design and implementation of data architecture solutions, data modeling, and optimization strategies.
- Demonstrate proficiency in AWS services, including:
  - Cloud data services such as Amazon Redshift, Amazon EMR, and AWS Glue, to design and implement scalable data solutions.
  - Cloud infrastructure services such as AWS EC2 and AWS S3, to optimize data processing and storage.
  - Cloud security best practices, IAM roles, and encryption mechanisms to ensure data privacy and compliance.
  - Managing or implementing cloud data warehouse solutions, including data modeling, schema design, performance tuning, and optimization techniques.
- Demonstrate proficiency in modern data platforms such as Snowflake and Databricks, including:
  - Deep understanding of Snowflake's architecture, capabilities, and best practices for designing and implementing data warehouse solutions.
  - Hands-on experience with Databricks for data engineering, data processing, and machine learning tasks, leveraging Spark clusters for scalable data processing.
  - Ability to optimize Snowflake and Databricks configurations for performance, scalability, and cost-effectiveness.
- Manage the offshore team's performance, including resource allocation, performance evaluations, and professional development, to maximize team productivity and morale.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred.
- 10+ years of experience in data engineering, with a proven track record of leadership and technical expertise in managing complex data projects.
- Proficiency in programming languages such as Python, Java, or Scala, as well as expertise in SQL and relational databases (e.g., PostgreSQL, MySQL).
- Strong understanding of distributed computing, cloud technologies (e.g., AWS), and big data frameworks (e.g., Hadoop, Spark).
- Experience with data architecture design, data modeling, and optimization techniques.
- Excellent communication, collaboration, and leadership skills, with the ability to effectively manage remote teams and engage with onshore stakeholders.
- Proven ability to adapt to evolving project requirements and effectively prioritize tasks in a fast-paced environment.

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Business Data Analyst - Healthcare

Job Summary: We are seeking an experienced and results-driven Business Data Analyst with 5+ years of hands-on experience in data analytics, visualization, and business insight generation. This role is ideal for someone who thrives at the intersection of business and data, translating complex data sets into compelling insights, dashboards, and strategies that support decision-making across the organization. You will collaborate closely with stakeholders across departments to identify business needs, design and build analytical solutions, and tell compelling data stories using advanced visualization tools.

Key Responsibilities:
- Data Analytics & Insights: Analyze large and complex data sets to identify trends, anomalies, and opportunities that help drive business strategy and operational efficiency.
- Dashboard Development & Data Visualization: Design, develop, and maintain interactive dashboards and visual reports using tools like Power BI, Tableau, or Looker to enable data-driven decisions.
- Business Stakeholder Engagement: Collaborate with cross-functional teams to understand business goals, define metrics, and convert ambiguous requirements into concrete analytical deliverables.
- KPI Definition & Performance Monitoring: Define, track, and report key performance indicators (KPIs), ensuring alignment with business objectives and consistent measurement across teams.
- Data Modeling & Reporting Automation: Work with data engineering and BI teams to create scalable, reusable data models and automate recurring reports and analysis processes.
- Storytelling with Data: Communicate findings through clear narratives supported by data visualizations and actionable recommendations to both technical and non-technical audiences.
- Data Quality & Governance: Ensure accuracy, consistency, and integrity of data through validation, testing, and documentation practices.

Required Qualifications:
- Bachelor's or Master's degree in Business, Economics, Statistics, Computer Science, Information Systems, or a related field.
- 5+ years of professional experience in a data analyst or business analyst role with a focus on data visualization and analytics.
- Proficiency in data visualization tools: Power BI, Tableau, or Looker (at least one).
- Strong experience in SQL and working with relational databases to extract, manipulate, and analyze data.
- Deep understanding of business processes, KPIs, and analytical methods.
- Excellent problem-solving skills with attention to detail and accuracy.
- Strong communication and stakeholder management skills, with the ability to explain technical concepts in a clear and business-friendly manner.
- Experience working in Agile or fast-paced environments.

Preferred Qualifications:
- Experience working with cloud data platforms (e.g., Snowflake, BigQuery, Redshift).
- Exposure to Python or R for data manipulation and statistical analysis.
- Knowledge of data warehousing, dimensional modeling, or ELT/ETL processes.
- Domain experience in healthcare is a plus.

Posted 1 month ago

Apply

7.0 - 10.0 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

JOB DESCRIPTION: Senior MLE / Architect MLE (ML Ops)
Chennai / Bangalore / Hyderabad (Hybrid)

Who we are: Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services and solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 4000+ technologists and consultants is based in the US, Canada, the UK, India, Singapore, and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are Great Place to Work-Certified (2022-24), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, and ISG, and have been ranked among the best and fastest-growing analytics firms by Inc., Financial Times, Economic Times, and Analytics India Magazine.

About the role: Curious what a typical day would look like? We are looking for a Senior / Lead ML Engineer to work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Lead a team of data scientists and engineers to embed AI and analytics into business decision processes.
- Perform a role that combines hands-on contribution and customer engagement.
- Switch between the roles of individual contributor, team member, and solution architect as demanded by each project.

What do we expect?
- 6+ years of experience, of which 3+ years is relevant data science experience and at least 3 years is software development experience.
- Experience devising creative analytical approaches to solve business problems.
- Developing and enhancing algorithms and models to solve business problems.
- Maintaining all models, along with development and updating of code and process documentation.
- Designing a solution approach and leading a team to deploy and maintain it in production.
- Proficiency in a structured programming language such as Python, C/C++, or Java is mandatory.
- Cloud concepts and significant hands-on experience with at least one cloud provider; knowledgeable about the pros and cons of the various services and comfortable discussing the tradeoffs with stakeholders.
- Strong SQL skills.
- Good knowledge of data science approaches, machine learning algorithms, and statistical methods.
- ETL and data engineering pipelines using Spark/PySpark.
- Workflow orchestration tools like Airflow.

Posted 1 month ago

Apply