8.0 - 13.0 years
15 - 30 Lacs
Gurugram
Remote
Job description: Data Modeler - AI/ML Enablement
Remote | Contract/Freelancer | Duration: 1 to 2 Months | Start: Immediate | Experience: 7+ Years
We're looking for experienced Data Modelers with a strong background in one or more of these industries only: Telecom, Banking/Finance, Media, or Government.
Key Responsibilities:
- Design conceptual, logical, and physical data models
- Collaborate with AI/ML teams to structure data for model training
- Build ontologies, taxonomies, and data schemas
- Ensure compliance with industry-specific data regulations
Must-Have Skills & Experience:
- 7+ years of hands-on experience in data modeling (conceptual, logical, and physical models)
- Proficiency in data modeling tools such as Erwin, ER/Studio, or PowerDesigner
- Strong understanding of data domains such as customer, transaction, network, media, or case data
- Familiarity with AI/ML pipelines and how structured data supports model training
- Knowledge of data governance, quality, and compliance standards (e.g., GDPR, PCI-DSS)
- Ability to work independently and deliver models quickly in a short-term contract environment
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities:
- Develop, test, and deploy robust dashboards and reports in Power BI using SAP HANA and Snowflake datasets
Basic Qualifications:
- Excellent verbal and written communication skills
- 5+ years of experience working with Power BI on SAP HANA and Snowflake datasets
- 5+ years of hands-on experience developing moderate to complex ETL data pipelines is a plus
- 5+ years of hands-on experience resolving complex SQL query performance issues
- 5+ years of ETL Python development experience; experience parallelizing pipelines is a plus (see the illustrative sketch below)
- Demonstrated ability to troubleshoot complex query, pipeline, and data quality issues
Call: 9584022831 | Email: Mayank@axiomsoftwaresolutions.com
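Purely as an illustration of the "parallelizing pipelines" skill mentioned above (not part of the posting), here is a minimal Python sketch that loads independently staged files into Snowflake concurrently; the account, credentials, stage, file paths, and table name are hypothetical placeholders.

```python
# Hypothetical sketch: load independently staged files into Snowflake in parallel.
# Account, credentials, stage, paths, and table name are placeholders.
from concurrent.futures import ThreadPoolExecutor

import snowflake.connector  # pip install snowflake-connector-python


def load_file(stage_path: str) -> str:
    """Run a COPY INTO for one staged file on its own connection."""
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
    )
    try:
        conn.cursor().execute(
            f"COPY INTO RAW.SALES_FACT FROM @ETL_STAGE/{stage_path} "
            "FILE_FORMAT = (TYPE = PARQUET)"
        )
    finally:
        conn.close()
    return stage_path


if __name__ == "__main__":
    files = ["2024/01/sales.parquet", "2024/02/sales.parquet"]
    # Independent COPY commands can run concurrently on separate connections.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for done in pool.map(load_file, files):
            print(f"loaded {done}")
```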
Posted 1 week ago
4.0 - 9.0 years
10 - 20 Lacs
Pune
Hybrid
Hi, Greetings!
This is regarding a job opportunity for the position of Data Modeller with a US-based MNC in the Healthcare domain. This opportunity is on the direct payroll of the US-based MNC.
Job Location: Pune, Mundhwa
Mode of work: Hybrid (3 days work from office)
Shift timings: 1pm to 10pm
About the Company: The global MNC is a mission-driven startup transforming the healthcare payer industry. Its secure, cloud-enabled platform empowers health insurers to unlock siloed data, improve patient outcomes, and reduce healthcare costs. Since its founding in 2017, the company has raised over $81 million from top-tier VCs and built a thriving SaaS business. With deep expertise in cloud-enabled technologies and knowledge of the healthcare industry, it has built an innovative data integration and management platform that gives healthcare payers access to data that has historically been siloed and inaccessible. As a result, these payers can ingest and manage all the information they need to transform their business by supporting their analytical, operational, and financial needs through the platform. The company is backed by leading VC firms with profound expertise in the healthcare and technology industries, and is solving massive, complex problems in an industry ready for disruption. We're building powerful momentum and would love for you to be a part of it! Join us in shaping the future of healthcare data.
Interview process: 5 rounds of interview (4 rounds of technical interview, 1 round of HR/fitment discussion)
Job Description: Data Modeller
About the Role: We're seeking a Data Modeler to join our global data modeling team. You'll play a key role in translating business requirements into conceptual and logical data models that support both operational and analytical use cases. This is a high-impact opportunity to work with cutting-edge technologies and contribute to the evolution of healthcare data platforms.
What You'll Do:
- Design and build conceptual and logical data models aligned with enterprise architecture and healthcare standards
- Perform data profiling and apply data integrity principles using SQL
- Collaborate with cross-functional teams to ensure models meet client and business needs
- Use tools like Erwin, ER/Studio, DBT, or similar for enterprise data modeling
- Maintain metadata, business glossaries, and data dictionaries
- Support client implementation teams with data model expertise
What We're Looking For:
- 2+ years of experience in data modeling and cloud-based data engineering
- Proficiency in enterprise data modeling tools (Erwin, ER/Studio, DBSchema)
- Experience with Databricks, Snowflake, and data lakehouse architectures
- Strong SQL skills and familiarity with schema evolution and data versioning
- Deep understanding of healthcare data domains (Claims, Enrollment, Provider, FHIR, HL7, etc.)
- Excellent collaboration and communication skills
In case you have any queries, please feel free to contact me at the email address or phone/WhatsApp number below.
Thanks & Regards,
Priyanka Das
Email: priyanka.das@dctinc.com
Contact Number: 74399 37568
Posted 1 week ago
8.0 - 12.0 years
7 - 11 Lacs
Pune
Work from Office
Experience with ETL processes and data warehousing; proficient in SQL and Python/Java/Scala; team lead experience.
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Pune
Work from Office
Experience with ETL processes and data warehousing; proficient in SQL.
Posted 1 week ago
5.0 - 8.0 years
9 - 15 Lacs
Hyderabad
Work from Office
Create and execute unit/system/integration tests for SAS migration to Viya and Snowflake. Focus on data validation, automation, and code compliance.
Preferred candidate profile: 5+ years of experience
Must have: SAS programming, SQL, Snowflake, SAS Viya, SonarQube, test automation tools
Good to have: Talend, IBM Data Replicator, Qlik Replicate, CI/CD, cloud experience
Posted 1 week ago
3.0 - 5.0 years
12 - 20 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Work from Office
Job Description: Data Engineer - TransOrg Analytics
Why would you like to join us? TransOrg Analytics specializes in Data Science, Data Engineering and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, the US, APAC and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to learn more about us.
Responsibilities:
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks workflows (see the illustrative sketch after this listing)
- Develop an integrated data solution in Snowflake to unify data
- Implement and manage big data solutions using Azure Databricks
- Design and maintain relational databases using Azure Delta Lake
- Ensure data quality and integrity through rigorous testing and validation
- Monitor and troubleshoot data pipelines and workflows to ensure seamless operation
- Implement data security and compliance measures in line with industry standards
- Continuously improve data infrastructure (including CI/CD) for scalability and performance
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into Snowflake
- Utilize ETL tools (e.g., ADF, Talend) to automate and manage data workflows
- Develop and maintain CI/CD pipelines using GitHub and Jenkins for automated deployment of data models and ETL processes
- Monitor and troubleshoot pipeline issues to ensure smooth deployment and integration
- Design and implement scalable and efficient data models in Snowflake; optimize data structures for performance and storage efficiency
- Collaborate with stakeholders to understand data requirements and ensure data integrity
- Integrate multiple data sources to create a data lake/data mart
- Perform data ingestion and ETL processes using SQL, Sqoop, Spark or Hive
- Monitor job performance; manage file system/disk space, cluster and database connectivity, log files, and backup/security; and troubleshoot various user issues
- Design, implement, test and document a performance benchmarking strategy for platforms as well as for different use cases
- Set up, administer, monitor, tune, optimize and govern large-scale implementations
- Drive customer communication during critical events and participate in/lead various operational improvement initiatives
Qualifications, Skill Set and Competencies:
- Bachelor's in Computer Science, Engineering, Statistics, Maths or a related quantitative degree
- 2-5 years of relevant experience in data engineering
- Must have worked on any of the cloud engineering platforms: AWS, Azure, GCP, Cloudera
- Proven experience as a Data Engineer with a focus on Azure cloud technologies/Snowflake
- Strong proficiency in Azure Data Factory, Azure Databricks, ADLS, and Azure SQL Database
- Experience with big data processing frameworks like Apache Spark
- Expert-level proficiency in SQL and experience with data modeling and database design
- Knowledge of data warehousing concepts and ETL processes
- Strong focus on PySpark, Scala and Pandas
- Proficiency in Python programming and experience with other data processing frameworks
- Solid understanding of networking concepts and Azure networking solutions
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills
- Azure Data Engineer certifications AZ-900 and DP-203 (good to have)
- Familiarity with DevOps practices and tools for CI/CD in data engineering
- Certification: MS Azure / DBR Data Engineer (good to have)
- Data ingestion: coding and automating ETL pipelines, both batch and streaming. Should have worked on ETL and/or ELT methodologies using any traditional or new-age tech stack: SSIS, Informatica, Databricks, Talend, Glue, DMS, ADF, Spark, Kafka, Storm, Flink, etc.
- Data transformation: experience working with MPPs and big data/distributed computing frameworks on a cloud or cloud-agnostic tech stack: Databricks, EMR, Hadoop, DBT, Spark, etc.
- Data storage: experience working on data lakes and lakehouse architectures: S3, ADLS, Blob, HDFS
- DWH: strong experience modelling and implementing data warehouses on technologies like Redshift, Snowflake, Azure Synapse, BigQuery, Hive
- Orchestration & lineage: Airflow, Oozie, etc.
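For illustration only (not TransOrg's actual pipeline), here is a minimal PySpark sketch of the kind of batch ingestion described above: reading raw files from a data lake path and writing a Delta table, as one might on Databricks. The storage paths, column names, and target location are hypothetical.

```python
# Hypothetical PySpark batch-ingestion sketch for a Databricks-style environment.
# Storage paths, column names, and the target location are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_delta_sketch").getOrCreate()

# Read raw parquet files landed in a data lake zone (e.g., ADLS).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Basic cleansing/transformation: type casting, deduplication, load metadata.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
       .withColumn("_ingested_at", F.current_timestamp())
)

# Write a Delta table (Delta Lake is available out of the box on Databricks).
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders_delta/")
)
```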
Posted 1 week ago
7.0 - 10.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- Development and implementation of DBT models, ensuring efficient data transformation workflows
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into robust DBT solutions
- Optimize DBT pipelines for performance, scalability, and maintainability
- Enforce best practices in version control, testing, and documentation within the DBT environment
- Monitor and troubleshoot DBT workflows to ensure reliability and timely delivery of data products
- Provide guidance and mentorship to the team on DBT practices and advanced modeling techniques
- Stay updated on the latest DBT features and incorporate them into the data transformation ecosystem
Critical Skills to Possess: Snowflake and DBT; 7+ years of experience
Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience
Posted 1 week ago
8.0 - 13.0 years
18 - 20 Lacs
Noida
Remote
Job Title: Cloud Data Architect
Location: 100% Remote
Time: Overlap with US CST hours
Duration: 6+ Months
Job Description: Seeking candidates with at least 5-7 years' experience. Strong data architect who has previously done both data engineering and data architecture work. Strong SQL and Snowflake experience required. Manufacturing industry experience is a must-have.
Snowflake / SQL Architect:
- Architect and manage scalable data solutions using Snowflake and advanced SQL, optimizing performance for analytics and reporting
- Design and implement data pipelines, data warehouses, and data lakes, ensuring efficient data ingestion and transformation
- Develop best practices for data security, access control, and compliance within cloud-based data environments
- Collaborate with cross-functional teams to understand business needs and translate them into robust data architectures
- Evaluate and integrate third-party tools and technologies to enhance the Snowflake ecosystem and overall data strategy
Thanks & Regards,
Abhinav Krishna Srivastava
Technical Resource Specialist
Email: asrivastava@fcsltd.com
Posted 1 week ago
5.0 - 10.0 years
7 - 11 Lacs
Telangana
Work from Office
Key Responsibilities:
- ETL Development: Design and implement ETL processes using Informatica PowerCenter, Cloud Data Integration, or other Informatica tools.
- Data Integration: Integrate data from various sources, ensuring data accuracy, consistency, and high availability.
- Performance Optimization: Optimize ETL processes for performance and efficiency, ensuring minimal downtime and maximum throughput.
Posted 1 week ago
5.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Hybrid
Hiring Snowflake + DBT data engineers!
Experience: 5+ years | Work mode: Hybrid | Job location: Hyderabad
Role & responsibilities:
- Data Pipeline Development: Design, build, and maintain efficient data pipelines using Snowflake and DBT
- Data Modeling: Develop and optimize data models in Snowflake to support analytics and reporting needs
- ETL Processes: Implement ETL processes to transform raw data into structured formats using DBT
- Performance Tuning: Optimize Snowflake queries and DBT models for performance and scalability
- Data Integration: Integrate Snowflake with various data sources and third-party tools
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions
- Data Quality: Implement data quality checks and testing to ensure the accuracy and reliability of data (see the illustrative sketch after this listing)
- Documentation: Document data transformation processes and maintain comprehensive records of data models and pipelines
Preferred candidate profile:
- Proficiency in SQL: Strong SQL skills for writing and optimizing queries
- Experience with Snowflake: Hands-on experience with Snowflake, including data modeling, performance tuning, and integration
- DBT Expertise: Proficient in using DBT for data transformation and modeling
- Data Warehousing: Knowledge of data warehousing concepts and experience with platforms like Snowflake
- Analytical Thinking: Ability to analyze complex data sets and derive actionable insights
- Communication: Strong communication skills to collaborate with cross-functional teams
- Problem-Solving: Excellent problem-solving skills and attention to detail
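As a hedged illustration of the data quality checks mentioned above (not part of the posting itself), here is a small Python sketch run against Snowflake with the official connector; the account, schema, table, and column names are hypothetical.

```python
# Hypothetical data quality check against a Snowflake table.
# Account, credentials, schema, table, and column names are placeholders.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="dq_user", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="MARTS",
)
try:
    cur = conn.cursor()
    # Row-count and null-rate checks for a hypothetical fact table.
    cur.execute(
        """
        SELECT COUNT(*)                            AS row_count,
               SUM(IFF(customer_id IS NULL, 1, 0)) AS null_customer_ids
        FROM MARTS.FCT_SUBSCRIPTIONS
        """
    )
    row_count, null_ids = cur.fetchone()
    assert row_count > 0, "target table is empty"
    assert null_ids == 0, f"{null_ids} rows are missing customer_id"
    print(f"checks passed: {row_count} rows, no null customer_id")
finally:
    conn.close()
```

In practice, checks like these are often expressed as dbt tests in the project itself; the connector-based version above is just one way to sketch the idea in Python.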
Posted 1 week ago
8.0 - 12.0 years
0 - 0 Lacs
Hyderabad
Hybrid
Job Title: Lead Data Engineer
Experience: 8+ years
Job Type: Hybrid (3 days)
Location: Hyderabad
Contract: 6+ months
Mandatory Skills: Python, SQL, Snowflake (3+ years in each skill)
Required Skills:
- Senior developer with approximately 8 years of experience in data engineering
- Background in medium to large-scale client environments, having worked on at least 3 or 4 projects
- Strong expertise in data engineering and ETL/ELT workflows
- Solid understanding of database concepts and data modeling
- Proficient in SQL, PL/SQL, and Python
- Snowflake experience (3+ years) with base or advanced certification
- Excellent communication skills (written and verbal)
- Ability to work independently and proactively
Posted 1 week ago
5.0 - 10.0 years
12 - 18 Lacs
Chennai
Hybrid
Why you'll LOVE Sagent: You could work anywhere. We know you are talented and looking for something inspiring and impactful. A place where you will make a difference and have a great time doing it! By choosing Sagent, you can be part of our mission to make loans and homeownership simpler and safer for all consumers. Sagent powers servicers and consumers. You power Sagent!
About the Opportunity: A Database Administrator SR helps an organization operationalize its data by creating the environment and processes needed to efficiently manage data and derive value from analytics. Managing backend assets, along with ownership of and accountability for configuring and spinning up cloud data assets and pipelines, are the primary responsibilities of a DataOps Engineer.
Your day-to-day responsibilities at Sagent include, but are not limited to:
A Database Administrator SR should be well qualified, with experience managing various flavors of data assets (Postgres, Snowflake, GCP-based databases and pipelines). This covers a range of day-to-day activities, from reducing development time and improving data quality to providing guidance and support to data engineers and other data team members, along with database uptime, performance monitoring, and improvement.
The responsibilities of a Database Administrator SR include:
- Building and optimizing data pipelines to facilitate the extraction of data from multiple sources and load it into data warehouses. A DataOps engineer must be familiar with extract, load, transform (ELT) and extract, transform, load (ETL) tools.
- Using automation to streamline data processing. To reduce development time and increase data reliability, DataOps engineers automate manual processes, such as data extraction and testing.
- Managing the production of data pipelines. A DataOps engineer provides organizations with access to structured datasets and analytics they will further analyze and derive insights from.
- Designing data engineering assets. This involves developing frameworks to support an organization's data demands.
- Facilitating collaboration. DataOps engineers communicate and collaborate with other data and BI team members to enhance the quality of data products.
- Testing. This involves executing automated testing at every stage of a pipeline to increase productivity while reducing errors. This includes unit tests (testing separate components of a data pipeline) as well as performance tests (testing responsiveness) and end-to-end tests (testing the whole pipeline).
- Adopting new solutions. This includes testing and adopting solutions and tools that adhere to DataOps best practices.
- Handling security. DataOps engineers ensure data security standards are applied across the data pipelines.
- Reducing waste and improving data flow. This involves continually striving to reduce wasted effort, identify gaps and correct them, and improve data development and deployment processes.
We'd love to hear from you if you have:
- Bachelor's degree in Computer Science or equivalent work experience
- 5+ years of DataOps experience
- Hands-on experience working with Postgres, Snowflake administration and Google Cloud Platform
- Hands-on experience setting up and managing CI/CD pipelines on Azure DevOps
- Expertise in SQL, including performance tuning
- Excellent team skills and the ability to work in a fast-paced environment on multiple projects/jobs concurrently
Perks!
As a Sagent Associate, you will be eligible to participate in our benefit programs beginning on Day 1! We offer a comprehensive package including hybrid workplace options, Group Medical Coverage, Group Personal Accident and Group Term Life Insurance benefits, Flexible Time Off, Food@Work, career pathing and much, much more!
Posted 1 week ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Job Title: SAS Migration Test Engineers
Location: Hyderabad (5 days WFO)
Employer: Sonata Software
Experience Required: 5+ years in SAS Migration Testing
Employment Type: Full-Time
Key Skills - Must have: SAS programming, SQL, Snowflake, SAS Viya, SonarQube, test automation tools. Good to have: Talend, IBM Data Replicator, Qlik Replicate, CI/CD, cloud experience.
Job Description: Create and execute unit/system/integration tests for SAS migration to Viya and Snowflake. Focus on data validation, automation, and code compliance.
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: SAS Code Modernization Specialist
Location: Hyderabad (5 days WFO)
Employer: Sonata Software
Experience Required: 4+ years in SAS code
Employment Type: Full-Time
Key Skills - Must have: SAS (Base, Macro, SQL), SAS Viya, Snowflake, shell scripting. Good to have: SonarQube, CI/CD, Workflow Orchestrator.
Job Description: Refactor legacy SAS code for Viya/Snowflake, automate code assessments, integrate with pipelines, and enhance scalability/security.
Posted 1 week ago
4.0 - 9.0 years
10 - 20 Lacs
Hyderabad
Work from Office
We are looking for a talented Talend Developer with hands-on experience in Talend Management Console on Cloud and Snowflake to join our growing team. The ideal candidate will play a key role in building and optimizing ETL/ELT data pipelines, integrating complex data systems, and ensuring high performance across cloud environments. While experience with Informatica is a plus, it is not mandatory for this role. As a Talend Developer, you will be responsible for designing, developing, and maintaining data integration solutions to meet the organization's growing data needs. You will collaborate with business stakeholders, data architects, and other data professionals to ensure the seamless and secure movement of data across platforms, ensuring scalability and performance.
Key Responsibilities:
- Develop and maintain ETL/ELT data pipelines using Talend Management Console on Cloud to integrate data from various on-premises and cloud-based sources
- Design, implement, and optimize data flows for data ingestion, processing, and transformation in Snowflake to support analytical and reporting needs
- Utilize Talend Management Console on Cloud to manage, deploy, and monitor data integration jobs, ensuring robust pipeline management and process automation
- Collaborate with data architects to ensure that data integration solutions align with business requirements and follow best practices
- Ensure data quality, performance, and scalability of Talend-based data solutions
- Troubleshoot, debug, and optimize existing ETL processes to ensure smooth and efficient data integration
- Document data integration processes, including design specifications, mappings, workflows, and performance optimizations
- Collaborate with the Snowflake team to implement best practices for data warehousing and data transformation
- Implement error-handling and data validation processes to ensure high levels of accuracy and data integrity
- Provide ongoing support for Talend jobs, including post-deployment monitoring, troubleshooting, and optimization
- Participate in code reviews and collaborate in an agile development environment
Required Qualifications:
- 2+ years of experience in Talend development, with a focus on using the Talend Management Console on Cloud for managing and deploying jobs
- Strong hands-on experience with the Snowflake data warehouse, including data integration and transformation
- Expertise in developing ETL/ELT workflows for data ingestion, processing, and transformation
- Experience with SQL and working with relational databases to extract and manipulate data
- Experience working in cloud environments (e.g., AWS, Azure, or GCP) with integration of cloud-based data platforms
- Strong knowledge of data integration, data quality, and performance optimization in Talend
- Ability to troubleshoot and resolve issues in data integration jobs and processes
- Solid understanding of data modeling concepts and best practices for building scalable data pipelines
Preferred Qualifications:
- Experience with Informatica is a plus but not mandatory
- Experience with scripting languages such as Python or Shell scripting for automation
- Familiarity with CI/CD pipelines and working in DevOps environments for continuous integration of Talend jobs
- Knowledge of data governance and data security practices in cloud environments
Posted 1 week ago
5.0 - 10.0 years
15 Lacs
Hyderabad, Bengaluru
Work from Office
Role: Advanced Data Engineer
Location: Hyderabad
Experience: 4-8 years
Notice Period: Immediate to 30 days
Tech Stack: Proficiency in Python, SQL, Databricks, Snowflake; data modeling; ETL processes; Apache Spark and PySpark; data integration and workflow orchestration; real-time data processing experience/frameworks; cloud experience (Azure preferred)
Job Description:
- Minimum 5 years of experience in a Data Engineering role that involves analyzing and organizing raw data and building data systems/pipelines on a cloud platform (Azure)
- Experienced in migrating data from on-prem to cloud-based solution architectures (Azure)
- Extensive experience with Python, Spark and SQL
- Experience in developing ETL processes
- Proficient with Azure Data Lake, Azure Data Factory, Azure SQL, Azure Databricks, Azure Synapse Analytics or equivalent tools and technologies
- Experience building data lakes and data warehouses to support operational intelligence and business intelligence
- Excellent written and verbal communication skills
Posted 1 week ago
1.0 - 7.0 years
3 - 9 Lacs
Pune
Work from Office
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Hands-on experience in data pipeline testing, preferably in a cloud environment
- Strong experience with Google Cloud Platform services, especially BigQuery
- Proficient in working with Kafka, Hive, Parquet files, and Snowflake
- Expertise in Data Quality Testing and metrics calculations for both batch and streaming data
- Excellent programming skills in Python and experience with test automation
- Strong analytical and problem-solving abilities
- Excellent communication and teamwork skills
Posted 1 week ago
1.0 - 2.0 years
3 - 4 Lacs
Bengaluru
Work from Office
Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow
We are seeking a highly skilled and motivated Machine Learning Engineer to join our dynamic team. The Machine Learning Engineer will be responsible for designing, developing, and deploying machine learning models to solve complex problems and enhance our products or services. The ideal candidate will have a strong background in machine learning algorithms, programming, and data analysis.
Responsibilities:
- Problem Definition: Collaborate with cross-functional teams to define and understand business problems suitable for machine learning solutions. Translate business requirements into machine learning objectives.
- Data Exploration and Preparation: Analyze and preprocess large datasets to extract relevant features for model training. Address data quality issues and ensure data readiness for machine learning tasks.
- Model Development: Develop and implement machine learning models using state-of-the-art algorithms. Experiment with different models and approaches to achieve optimal performance (see the illustrative sketch after this listing).
- Training and Evaluation: Train machine learning models on diverse datasets and fine-tune hyperparameters. Evaluate model performance using appropriate metrics and iterate on improvements.
- Deployment: Deploy machine learning models into production environments. Collaborate with DevOps and IT teams to ensure smooth integration.
- Monitoring and Maintenance: Implement monitoring systems to track model performance in real time. Regularly update and retrain models to adapt to evolving data patterns.
- Documentation: Document the entire machine learning development pipeline, from data preprocessing to model deployment. Create user guides and documentation for end-users and stakeholders.
- Collaboration: Collaborate with data scientists, software engineers, and domain experts to achieve project goals. Participate in cross-functional team meetings and knowledge-sharing sessions.
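To make the model development and evaluation steps concrete, here is a minimal, hypothetical PySpark MLlib training sketch of the kind this role describes; the dataset path, feature columns, and label are placeholders, not details from the posting.

```python
# Hypothetical Spark MLlib training sketch; dataset path, features, and label are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("mllib_sketch").getOrCreate()

# Load a prepared training dataset (e.g., a parquet/Delta table of engineered features).
df = spark.read.parquet("/mnt/features/churn_training/")

train, test = df.randomSplit([0.8, 0.2], seed=42)

# Assemble feature columns and fit a simple classifier as a single pipeline.
assembler = VectorAssembler(
    inputCols=["tenure_days", "monthly_spend", "support_tickets"],
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="churned")
model = Pipeline(stages=[assembler, lr]).fit(train)

# Evaluate on the held-out split.
auc = BinaryClassificationEvaluator(labelCol="churned").evaluate(model.transform(test))
print(f"test AUC = {auc:.3f}")
```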
Posted 1 week ago
3.0 - 9.0 years
5 - 11 Lacs
Bengaluru
Work from Office
Data Analysis, Database Design, Data Warehouse Design, Architecture Roadmap, Data Engineering, SQL Server, Snowflake, Azure Data Factory
- Should be good at communication and interaction with stakeholders, and should have experience handling large data engineering and analytics projects
- Should be able to understand customer requirements, do solutioning, and present the solution to both technical and non-technical audiences
- Should be able to work with stakeholders and define the architecture roadmap for the project
- Should be able to manage and lead a team, and also be able to contribute individually towards design, PoCs, coordination, etc.
Secondary skills: MySQL, SSIS, Pentaho and Power BI
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
- 4+ years of testing experience, with at least 2 years in ETL testing and automation
- Experience automating ETL flows
- Experience developing an automation framework for ETL (see the illustrative sketch after this listing)
- Good coding skills in Python and Pytest
- Expert at test data analysis and test design
- Good at database analytics (ETL or BigQuery)
- Snowflake knowledge is a plus
- Good communication skills with customers and other stakeholders
- Capable of working independently or with little supervision
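As a rough illustration of an ETL test-automation framework (not this employer's actual framework), here is a pytest sketch that reconciles source and target tables; scalar_query() and the table names are hypothetical stand-ins for whatever database client and datasets the real framework would use.

```python
# Hypothetical pytest sketch for ETL reconciliation testing.
# scalar_query() is a stand-in for however the real framework runs SQL
# (BigQuery client, Snowflake connector, etc.); table names are placeholders.
import pytest


def scalar_query(sql: str) -> int:
    """Placeholder: execute SQL and return a single numeric value."""
    raise NotImplementedError("wire this to BigQuery/Snowflake in a real framework")


@pytest.mark.parametrize(
    "source_table, target_table",
    [("staging.orders", "warehouse.fct_orders")],
)
def test_row_counts_match(source_table, target_table):
    # Source-to-target row count reconciliation for one loaded table.
    src = scalar_query(f"SELECT COUNT(*) FROM {source_table}")
    tgt = scalar_query(f"SELECT COUNT(*) FROM {target_table}")
    assert src == tgt, f"row count mismatch: {src} source vs {tgt} target"


def test_no_duplicate_keys():
    # Primary-key uniqueness check on the target table.
    dupes = scalar_query(
        "SELECT COUNT(*) FROM (SELECT order_id FROM warehouse.fct_orders "
        "GROUP BY order_id HAVING COUNT(*) > 1)"
    )
    assert dupes == 0, f"{dupes} duplicate order_id values found"
```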
Posted 1 week ago
8.0 - 12.0 years
30 - 35 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.
Key Responsibilities:
- Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
- Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration (see the illustrative sketch after this listing).
- Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
- Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake. Implement logging, monitoring, and alerting for pipeline reliability.
Required Skills & Qualifications:
- 10+ years of experience in data architecture, engineering, or related roles
- Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices
- Hands-on experience with dbt for data transformations and modeling
- Proficiency in Apache Airflow for workflow orchestration
- Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.)
- Experience with SQL, Python, or Spark for data processing
- Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.)
Preferred Qualifications:
- Certifications: AWS Certified Data Analytics - Specialty, Snowflake SnowPro Certification, or dbt Certification
- Experience with streaming technologies (Kafka, Kinesis) is a plus
- Knowledge of modern data stack tools (Looker, Power BI, etc.)
- Experience in OTT streaming could be an added advantage
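To make the Airflow orchestration piece concrete, here is a minimal, hypothetical DAG sketch (assuming a recent Airflow 2.x install) that runs dbt models and then dbt tests on a schedule; the project path, DAG id, and schedule are illustrative, not details from this posting.

```python
# Hypothetical Airflow DAG sketch orchestrating dbt against Snowflake.
# Paths, the DAG id, and the schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["dbt", "snowflake"],
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --profiles-dir .",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --profiles-dir .",
    )

    dbt_run >> dbt_test  # run transformations first, then data tests
```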
Posted 1 week ago
2.0 - 5.0 years
20 - 25 Lacs
Hyderabad
Work from Office
About the Team: At DAZN, the Analytics Engineering team transforms raw data into insights that drive decision-making across our global business - from content and product to marketing and revenue. We build reliable and scalable data pipelines and models that make data accessible and actionable for everyone.
About the Role: We are looking for an Analytics Engineer with 2+ years of experience to help build and maintain our modern data platform. You'll work with dbt, Snowflake, and Airflow to develop clean, well-documented, and trusted datasets. This is a hands-on role ideal for someone who wants to grow their technical skills while contributing to a high-impact analytics function.
Key Responsibilities:
- Build and maintain scalable data models using dbt and Snowflake
- Develop and orchestrate data pipelines with Airflow or similar tools
- Partner with teams across DAZN to translate business needs into robust datasets
- Ensure data quality through testing, validation, and monitoring practices
- Follow best practices in code versioning, CI/CD, and data documentation
- Contribute to the evolution of our data architecture and team standards
What We're Looking For:
- 2+ years of experience in analytics/data engineering or similar roles
- Strong skills in SQL and working knowledge of cloud data warehouses (Snowflake preferred)
- Experience with dbt for data modeling and transformation
- Familiarity with Airflow or other workflow orchestration tools
- Understanding of ELT processes, data modeling, and data governance principles
- Strong collaboration and communication skills
Nice to Have:
- Experience working in media, OTT, or sports technology domains
- Familiarity with BI tools like Looker, Tableau, or Power BI
- Exposure to testing frameworks like dbt tests or Great Expectations
Posted 1 week ago
5.0 - 9.0 years
1 - 3 Lacs
Kolkata, Chennai, Bengaluru
Hybrid
Location: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai
Experience: 5-7 years
Notice: 0-15 days
Open positions: 6
JD:
- Proven experience with DataStage for ETL development
- Strong understanding of data warehousing concepts and best practices
- Hands-on experience with Apache Airflow for workflow management
- Proficiency in SQL and Python for data manipulation and scripting
- Solid knowledge of Unix/Linux shell scripting
- Experience with Apache Spark and Databricks for big data processing
- Expertise in Snowflake for cloud data warehousing
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Excellent problem-solving and communication skills
Posted 1 week ago
6.0 - 11.0 years
20 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
- DBT: designing and developing technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data; ensure data quality in a big data environment
- Very strong in PL/SQL: queries, procedures, JOINs
- Snowflake SQL: writing SQL queries against Snowflake and developing scripts in Unix, Python, etc., to perform Extract, Load, and Transform operations
- Good to have Talend knowledge and hands-on experience; candidates who have worked in PROD support would be preferred
- Hands-on experience with Snowflake utilities such as SnowSQL, SnowPipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures
- Perform data analysis, troubleshoot data issues, and provide technical support to end-users
- Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity
- Complex problem-solving capability and a continuous improvement approach
- Desirable to have a Talend / Snowflake certification
- Excellent SQL coding skills, and excellent communication and documentation skills
- Familiar with the Agile delivery process
- Must be analytical, creative, and self-motivated
- Work effectively within a global team environment; excellent communication skills
Posted 1 week ago