
267 ELT Jobs - Page 2

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Design, develop, and maintain high-performance SQL and PL/SQL procedures, packages, and functions in Snowflake or other cloud database technologies. Apply advanced performance tuning techniques to optimize database objects, queries, indexing strategies, and resource usage. Develop code based on reading and understanding business and functional requirements, following the Agile process. Produce high-quality code to meet all project deadlines, ensuring the functionality matches the requirements. Analyze and resolve issues found during the testing or pre-production phases of the software delivery lifecycle, coordinating changes with project team leaders and cross-work-team members. Provide technical support to project team members and respond to inquiries regarding errors or questions about programs. Interact with architects, technical leads, team members, and project managers as required to address technical and schedule issues. Suggest and implement process improvements for estimating, development, and testing processes. Support the development of automated and repeatable processes for ETL/ELT, data integration, and data transformation using industry best practices. Support cloud migration and modernization initiatives, including re-platforming or refactoring legacy database objects for cloud-native platforms. Requirements: BS degree in Computer Science, Information Technology, Electrical/Electronic Engineering, or another related field, or equivalent. A minimum of 7 years of prior work experience with an application and database development organization, with deep expertise in Oracle PL/SQL or SQL Server T-SQL; must demonstrate experience delivering systems and projects from inception through implementation. Proven experience writing and optimizing complex stored procedures, functions, and packages in relational databases such as Oracle, MySQL, and SQL Server. Strong knowledge of performance tuning, including query optimization, indexing, statistics, execution plans, and partitioning. Understanding of data integration pipelines, ETL tools, and batch processing techniques. Solid software development and programming skills, with an understanding of design patterns and software development best practices. Experience with Snowflake, Python scripting, and data transformation frameworks like dbt is a plus. Work experience developing web applications with Java, JavaScript, HTML, and JSPs, and experience with MVC frameworks such as Spring and Angular.
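For illustration only: a minimal Python sketch, assuming the Snowflake Python connector and hypothetical account, procedure, and table names, of the stored-procedure and tuned-query work this role describes.

```python
# Illustrative sketch; connection details, procedure and table names are invented.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="SALES",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Call a stored procedure that merges staged rows into the target table.
    cur.execute("CALL STAGING.MERGE_DAILY_ORDERS(%s)", ("2024-01-31",))
    # A tuned aggregate: filtering on the clustering column lets Snowflake prune micro-partitions.
    cur.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM SALES.ANALYTICS.ORDERS WHERE order_date = %s GROUP BY region",
        ("2024-01-31",),
    )
    for region, total in cur:
        print(region, total)
finally:
    conn.close()
```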

Posted 6 days ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Lucknow

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.
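As an aside for applicants: the "deploy DBT models with CI/CD tools" responsibility often amounts to a wrapper like the hedged sketch below, which a Jenkins or Git-based pipeline could invoke. The project directory and target name are assumptions, not details from this posting.

```python
# Hypothetical CI wrapper around the dbt CLI; paths and target names are placeholders.
import subprocess
import sys

def run(cmd):
    """Run a dbt command, echo it, and fail the CI job on a non-zero exit code."""
    print("+", " ".join(cmd))
    if subprocess.run(cmd, check=False).returncode != 0:
        sys.exit(1)

if __name__ == "__main__":
    project = "--project-dir=./analytics"   # assumed dbt project location
    target = "--target=prod"                # assumes a 'prod' target in profiles.yml
    run(["dbt", "deps", project])           # install package dependencies
    run(["dbt", "run", project, target])    # build the models
    run(["dbt", "test", project, target])   # then run the tests against them
```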

Posted 6 days ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Ludhiana

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

We are currently seeking a full-time, office-based Data Engineer to join our Information Technology team supporting the company's rapidly growing corporate activities. In this role, you will work collaboratively with a team on tasks and projects crucial to the company's success. If you are looking for a dynamic career opportunity to utilize your expertise and further develop and enhance your skills, this position is the perfect fit for you. Your responsibilities will include utilizing your skills in data warehousing, business intelligence, and databases such as Snowflake, ANSI SQL, SQL Server, and T-SQL. You will support programming/software development using ETL and ELT tools like dbt, Azure Data Factory, and SSIS. Designing, developing, enhancing, and supporting business intelligence systems, primarily using Microsoft Power BI, will be a key part of your role. Additionally, you will be responsible for collecting, analyzing, and documenting user requirements, participating in software validation processes, creating software applications following the software development lifecycle, and providing end-user support for applications. To qualify for this position, you should have a Bachelor's Degree in Computer Science, Data Science, or a related field, along with at least 3 years of experience in Data Engineering. Knowledge of developing dimensional data models, understanding of Star Schema and Snowflake schema designs, solid ETL development and reporting knowledge, and familiarity with the Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations are preferred. Proficiency in Python, REST APIs, and SQL Server databases is desired, and knowledge of C# and Azure development is a bonus. Excellent analytical, written, and oral communication skills are essential for this role. Medpace is a full-service clinical contract research organization (CRO) dedicated to providing Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. With a mission to accelerate the global development of safe and effective medical therapeutics, Medpace leverages local regulatory and therapeutic expertise across various major areas. Headquartered in Cincinnati, Ohio, Medpace employs over 5,000 individuals across 40+ countries. At Medpace, you will be part of a team that makes a positive impact on the lives of patients and families facing various diseases. The work you do today will contribute to improving the lives of individuals living with illness and disease in the future. Medpace offers a flexible work environment, a competitive compensation and benefits package, structured career paths for professional growth, company-sponsored employee appreciation events, and employee health and wellness initiatives. Medpace has been recognized by Forbes as one of America's Most Successful Midsize Companies and has received CRO Leadership Awards for expertise, quality, capabilities, reliability, and compatibility. If your qualifications align with the requirements of the position, a Medpace team member will review your profile and, if interested, contact you with further details on the next steps. Join us at Medpace and be a part of a team driven by People, Purpose, and Passion to make a difference tomorrow.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You are an experienced Alteryx developer responsible for designing and developing new applications and enhancing existing models using Alteryx Design Studio. You will be involved in the entire Software Development Life Cycle (SDLC), requiring excellent communication skills and direct collaboration with the business. It is crucial that you are self-sufficient and adept at building internal networks within the business and technology teams. Your responsibilities include owning changes from inception to deployment, implementing new functionality, identifying process gaps for improvement, and focusing on scalability and stability. You must be results-oriented, self-motivated, and able to multitask across different teams and applications. Additionally, effective communication with remotely dispersed teams is essential for this role. Your technical expertise should include workflow enhancement, designing macros, integrating Alteryx with various tools, maintaining user roles in the Alteryx Gallery, using version control systems like Git, and working with multiple data sources compatible with Alteryx. You should possess advanced development and troubleshooting skills, provide documentation, training, and support, understand SDLC methodologies, have strong communication skills, be proficient in SQL database query tools, and comprehend data warehouse architecture. In addition to the technical requirements, you will need experience working in an Agile environment, managing ETL/ELT data load processes, knowledge of cloud infrastructure, and integration with data sources and relational databases. Being self-motivated, working independently, and collaborating as a team player are essential. Your analytical and problem-solving skills, ability to handle multiple stakeholders and queries, and ability to prioritize tasks and meet prompt deadlines are crucial. A strong client service focus and willingness to respond promptly to queries and deliverables are expected. Preferred Skills: Data Analytics, Alteryx.

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Job Description: We are seeking an experienced and visionary Senior Data Architect to lead the design and implementation of scalable enterprise data solutions. This is a strategic leadership role for someone who thrives in cloud-first, data-driven environments and is passionate about building future-ready data architectures. Key Responsibilities: Define and implement enterprise-wide data architecture strategy aligned with business goals. Design and lead scalable, secure, and resilient data platforms for both structured and unstructured data. Architect data lake/warehouse ecosystems and cloud-native solutions (Snowflake, Databricks, Redshift, BigQuery). Collaborate with business and tech stakeholders to capture data requirements and translate them into scalable designs. Mentor data engineers, analysts, and other architects in data best practices. Establish standards for data modeling, integration, and management. Drive governance across data quality, security, metadata, and compliance. Lead modernization and cloud migration efforts. Evaluate new technologies and recommend adoption strategies. Support data cataloging, lineage, and MDM initiatives. Ensure compliance with privacy standards (e.g., GDPR, HIPAA, CCPA). Required Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, or a related field. 10+ years of experience in data architecture; 3+ years in a senior/lead capacity. Hands-on experience with modern cloud data platforms: Snowflake, Azure Synapse, AWS Redshift, BigQuery, etc. Strong skills in data modeling tools (e.g., Erwin, ER/Studio). Deep understanding of ETL/ELT, APIs, and data integration. Expertise in SQL, Python, and data-centric languages. Experience with data governance, RBAC, encryption, and compliance frameworks. DevOps/CI-CD experience in data pipelines is a plus. Excellent communication and leadership skills.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Vadodara

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Agra

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 1 week ago

Apply

3.0 - 8.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Postgres DBA: 3-8 years of hands-on experience in developing and administering Azure Data Factory solutions for complex data integration projects. Proven experience in migrating databases from MSSQL to PostgreSQL in an Azure environment. Strong proficiency in SQL and experience working with both MSSQL and PostgreSQL databases. Experience with various ADF connectors, transformations, and control flow activities. Understanding of data warehousing concepts and ETL/ELT methodologies. Familiarity with application migration processes and integration patterns, including experience with platforms like PEGA is highly desirable. Experience with scripting languages such as Python or PowerShell for automation tasks.
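For illustration (not from the posting): scripting around Azure Data Factory, as this role requires, might look like the sketch below, which starts a migration pipeline run and polls its status with the Azure SDK for Python. The subscription, resource group, factory, and pipeline names are hypothetical.

```python
# Hedged sketch: trigger and monitor an ADF pipeline run; all identifiers are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"                     # placeholder
RG, FACTORY, PIPELINE = "rg-data", "adf-migration", "pl_mssql_to_postgres"   # placeholders

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

run = client.pipelines.create_run(RG, FACTORY, PIPELINE,
                                  parameters={"batch_date": "2024-01-31"})
print("Started run:", run.run_id)

while True:
    status = client.pipeline_runs.get(RG, FACTORY, run.run_id).status
    print("Status:", status)
    if status not in ("Queued", "InProgress"):   # terminal: Succeeded/Failed/Cancelled
        break
    time.sleep(30)
```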

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Nagpur

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Jaipur

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Faridabad

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms. Key Responsibilities: Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT. Integrate and manage data from various sources using SAP Data Services. Develop and maintain scalable data models, data marts, and data warehouses. Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs. Implement best practices in data governance, data lineage, and metadata management. Monitor data quality, troubleshoot issues, and ensure data integrity. Optimize Snowflake data warehouse performance (partitioning, caching, query tuning). Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins). Document architecture, data flows, and technical specifications.

Posted 1 week ago

Apply

8.0 - 12.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Job Description: Senior Data Analyst. Location: Hyderabad, IN (Work from Office). Experience: 7+ years. Role Summary: We are seeking an experienced and highly skilled Senior Data Analyst to join our team. The ideal candidate will possess deep proficiency in SQL, a strong understanding of data architecture, and working knowledge of the Google Cloud Platform (GCP)-based ecosystem. They will be responsible for turning complex business questions into actionable insights, driving strategic decisions, and helping shape the future of our Product/Operations team. This role requires a blend of technical expertise, analytical rigor, and excellent communication skills to partner effectively with engineering, product, and business leaders. Key Responsibilities: Advanced Data Analysis: Utilize advanced SQL skills to query, analyze, and manipulate large, complex datasets. Develop and maintain robust, scalable dashboards and reports to monitor key performance indicators (KPIs). Source Code Management: Proven ability to effectively manage, version, and collaborate on code using codebase management systems like GitHub. Responsible for upholding data integrity, producing reproducible analyses, and fostering a collaborative database management environment through best practices in version control and code documentation. Strategic Insights: Partner with product managers and business stakeholders to define and answer critical business questions. Conduct deep-dive analyses to identify trends, opportunities, and root causes of performance changes. Data Architecture & Management: Work closely with data engineers to design, maintain, and optimize data schemas and pipelines. Provide guidance on data modeling best practices and ensure data integrity and quality. Reporting & Communication: Translate complex data findings into clear, concise, and compelling narratives for both technical and non-technical audiences. Present insights and recommendations to senior leadership to influence strategic decision-making. Project Leadership: Lead analytical projects from end to end, including defining project scope, methodology, and deliverables. Mentor junior analysts, fostering a culture of curiosity and data-driven problem-solving. Required Skills & Experience: Bachelor's degree in a quantitative field such as Computer Science, Statistics, Mathematics, Economics, or a related discipline. 5+ years of professional experience in a data analysis or business intelligence role. Expert-level proficiency in SQL, with a proven ability to write complex queries, perform window functions, and optimize queries for performance on massive datasets. Strong understanding of data architecture, including data warehousing, data modeling (e.g., star/snowflake schemas), and ETL/ELT principles. Excellent communication and interpersonal skills, with a track record of successfully influencing stakeholders. Experience with a business intelligence tool such as Tableau, Looker, or Power BI to create dashboards and visualizations. Experience with internal Google/Alphabet data tools and infrastructure, such as BigQuery, Dremel, or Google-internal data portals. Experience with statistical analysis, A/B testing, and experimental design. Familiarity with machine learning concepts and their application in a business context. A strong sense of curiosity and a passion for finding and communicating insights from data.
Proficiency with scripting languages for data analysis (e.g., Apps Script, Python, or R) would be an added advantage. Responsibilities: Lead a team of data scientists and analysts to deliver data-driven insights and solutions. Oversee the development and implementation of data models and algorithms to support new product development. Provide strategic direction for data science projects, ensuring alignment with business goals. Collaborate with cross-functional teams to integrate data science solutions into business processes. Analyze complex datasets to identify trends and patterns that inform business decisions. Utilize generative AI techniques to develop innovative solutions for product development. Ensure adherence to ITIL V4 practices in all data science projects. Develop and maintain documentation for data science processes and methodologies. Mentor and guide team members to enhance their technical and analytical skills. Monitor project progress and adjust strategies to meet deadlines and objectives. Communicate findings and recommendations to stakeholders in a clear and concise manner. Drive continuous improvement in data science practices and methodologies. Foster a culture of innovation and collaboration within the data science team. Qualifications: Possess strong experience in business analysis and data analysis. Demonstrate expertise in generative AI and its applications in product development. Have a solid understanding of ITIL V4 practices and their implementation. Exhibit excellent communication and collaboration skills. Show proficiency in managing and leading a team of data professionals. Display a commitment to working from the office during day shifts.
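To make the "expert-level SQL with window functions on BigQuery" requirement concrete, here is a hedged sketch using the BigQuery Python client; the project, dataset, and table names are invented for illustration.

```python
# Illustrative only; the table reference below is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()   # uses application-default credentials

sql = """
SELECT
  user_id,
  event_date,
  revenue,
  SUM(revenue) OVER (
    PARTITION BY user_id
    ORDER BY event_date
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  ) AS running_revenue
FROM `my_project.analytics.daily_user_revenue`
ORDER BY user_id, event_date
"""

for row in client.query(sql).result():
    print(row.user_id, row.event_date, row.running_revenue)
```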

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Hyderabad

Remote

Job Title: Data Architect
Location: Hyderabad / Bangalore / Remote
Experience Level: 10-15 years preferred
Industry: IT/Software Services/SaaS
Company Profile/Website: Pragma Edge | Powering Your Connected Enterprise
Bangalore Office Location: 1st floor, IndiQube Platina, 15 Commissariat Road, Ashok Nagar, Bengaluru, Karnataka - 560025
Hyderabad Office Location: Pragma Towers, Plot No. 07, Image Gardens Road, Silicon Valley, Madhapur, Hyderabad, TG - 500081
Employment Type: Full-time
Key Responsibilities: Design and implement scalable, secure, and high-performance data architecture solutions tailored for logistics operations. Define data standards, models, and governance policies across heterogeneous data sources (e.g., EDI, ERP, TMS, WMS). Architect and optimize data pipelines to enable real-time analytics and reporting for warehouse management, freight, and inventory systems. Collaborate with business stakeholders to translate operational logistics needs into actionable data strategies. Ensure system reliability, data security, and compliance with relevant regulations. Evaluate and recommend tools and platforms, including cloud-based data services (Azure, AWS, GCP). Lead data integration efforts, including legacy systems migration and EDI transformations.
Required Skills & Qualifications: Proven experience as a Data Architect in logistics, transportation, or supply chain domains. Strong understanding of EDI formats, warehouse operations, fleet data, and logistics KPIs. Hands-on experience with data modeling, ETL, ELT, and data warehousing. Expertise in cloud platforms (Azure preferred), relational and NoSQL databases, and BI tools. Knowledge of data governance, security, and data lifecycle management. Familiarity with tools like Informatica, Talend, SQL Server, Snowflake, or BigQuery is a plus. Excellent analytical thinking and stakeholder communication skills.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities: Design conformed star and snowflake schemas; implement SCD2 dimensions and fact tables. Lead Spark (PySpark/Scala) or AWS Glue ELT pipelines from RDS Zero-ETL/S3 into Redshift. Tune RA3 clusters (sort/dist keys, WLM queues, Spectrum partitions) for sub-second BI queries. Establish data-quality, lineage, and cost-governance dashboards using CloudWatch and Terraform/CDK. Collaborate with Product & Analytics to translate HR KPIs into self-service data marts. Mentor junior engineers; drive documentation and coding standards. Must-Have Skills: Amazon Redshift (sort & dist keys, RA3, Spectrum). Spark on EMR/Glue (PySpark or Scala). Dimensional modelling (Kimball), star schema, SCD2. Advanced SQL plus Python/Scala scripting. AWS IAM, KMS, CloudWatch, Terraform/CDK, CI/CD (GitHub Actions or CodePipeline). Nice-to-Have: dbt, Airflow, Kinesis/Kafka, Lake Formation row-level ACLs. GDPR / SOC 2 compliance exposure. AWS Data Analytics or Solutions Architect certification. Education: B.E./B.Tech in Computer Science, IT, or a related field (Master's preferred but not mandatory). Compensation & Benefits: Competitive CTC of 25-40 LPA. Health insurance for self and dependents. Why Join Us? Own a greenfield HR analytics platform with executive sponsorship. Modern AWS stack (Redshift RA3, Lake Formation, EMR on EKS). Culture of autonomy, fast decision-making, and continuous learning. Application Process: 30-minute technical screen; 4-hour take-home Spark/SQL challenge; 90-minute architecture deep dive; panel interview (leadership & stakeholder communication).
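For candidates sizing up the SCD2 requirement, the sketch below shows one simplified PySpark approach: expire the open version of changed keys and append a new version. It assumes one update per key per batch, aligned business columns, and hypothetical column and path names; it is not taken from the employer's codebase.

```python
# Simplified SCD2 sketch; paths, columns, and the change-detection rule are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

dim = spark.read.parquet("s3://warehouse/dim_employee/")        # existing SCD2 dimension
updates = spark.read.parquet("s3://staging/employee_updates/")  # incoming change feed
load_ts = F.current_timestamp()

# Keys whose tracked attribute changed in this batch.
changed_keys = (dim.filter(F.col("is_current")).alias("d")
    .join(updates.alias("u"), "employee_id")
    .filter(F.col("d.department") != F.col("u.department"))
    .select("employee_id"))

# 1) For changed keys, keep history but close out the open version.
expired = (dim.join(changed_keys, "employee_id", "inner")
    .withColumn("valid_to", F.when(F.col("is_current"), load_ts).otherwise(F.col("valid_to")))
    .withColumn("is_current", F.lit(False)))

# 2) Open a new current version from the incoming record.
new_versions = (updates.join(changed_keys, "employee_id", "inner")
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", load_ts)
    .withColumn("valid_to", F.lit(None).cast("timestamp")))

# 3) Rows for unchanged keys pass through untouched.
untouched = dim.join(changed_keys, "employee_id", "left_anti")

result = untouched.unionByName(expired).unionByName(new_versions, allowMissingColumns=True)
result.write.mode("overwrite").parquet("s3://warehouse/dim_employee_v2/")   # placeholder target
```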

Posted 1 week ago

Apply

10.0 - 12.0 years

25 - 35 Lacs

Chennai, Bengaluru

Work from Office

Role: Data Architect with AWS and Snowflake. What awaits you / Job Profile: Design and develop scalable data pipelines using AWS services. Integrate diverse data sources and ensure data consistency and reliability. Collaborate with data scientists and other stakeholders to understand data requirements. Implement data security measures and maintain data integrity. Monitor and troubleshoot data pipelines to ensure optimal performance. Optimize and maintain data warehouse and data lake architectures. Create and maintain comprehensive documentation for data engineering processes. What should you bring along: Proven experience with Snowflake and SQL. Expert-level proficiency in building and managing data pipelines with Python. Strong experience in AWS cloud services, including Lambda, S3, Glue, and other data-focused services. Exposure to Terraform for provisioning and managing infrastructure as code on AWS. Proficiency in SQL for querying and modeling large-scale datasets. Hands-on experience with Git for version control and managing collaborative workflows. Familiarity with ETL/ELT processes and tools for data transformation. Strong understanding of data architecture, data modeling, and data lifecycle management. Excellent problem-solving and debugging skills. Strong communication and collaboration skills to work effectively in a global, distributed team environment. Must-have technical skills: Good understanding of architecting the data and solution. Data quality, governance, data security, and data modelling concepts. Data modelling, mapping, and compliance to BaDM. Solutioning on cloud (AWS) with cloud tools, with a good understanding of Snowflake. Defining CI/CD configuration for GitHub and AWS Terraform for deployment configuration. Ensuring architecture evolution with the latest technology. Guiding and mentoring the team, reviewing code, and ensuring development to standards. Good-to-have technical skills: Strong understanding of ETL concepts, design patterns, and industry best practices. Experience with ETL testing. Snowflake Certification. AWS Certification.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You should have strong knowledge of SQL and Python. Experience in Snowflake is preferred. Additionally, you should have knowledge of AWS services such as S3, Lambda, IAM, Step Functions, SNS, SQS, ECS, and DynamoDB. It is important to have expertise in data movement technologies like ETL/ELT. Good-to-have skills include knowledge of DevOps and Continuous Integration / Continuous Delivery with tools such as Maven, Jenkins, Stash, Control-M, and Docker. Experience in automation and REST APIs would be beneficial for this role.
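Not from the posting: a small sketch of the AWS glue code between S3, Lambda, and SQS that roles like this typically expect. The queue URL is a placeholder and the event shape follows the standard S3 notification format.

```python
# Hypothetical Lambda handler: fan out S3 "object created" events to an SQS load queue.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/load-requests"  # placeholder

def handler(event, context):
    records = event.get("Records", [])          # standard S3 notification structure
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"queued": len(records)}
```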

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineering Manager at Micron Technology Inc., you will play a crucial role within the Technology Solutions group of the Smart Manufacturing and AI organization. Your responsibilities will involve working closely with Micron's Front End Manufacturing and Planning Ops business area, focusing on data engineering, Machine Learning, and advanced analytics solutions. We are seeking a leader with a strong technical background in Big Data and Cloud Data warehouse technologies, particularly in Cloud data warehouse platforms like Snowflake and GCP, monitoring solutions such as Splunk, and automation and machine learning using Python. Your primary tasks will include leading a team of Data Engineers, providing technical and people leadership, and ensuring the successful delivery of critical projects and production support. You will engage team members in their career development, maintain a positive work culture, and participate in the design, architecture review, and deployment of big data and cloud data warehouse solutions. Additionally, you will collaborate with key project stakeholders, analyze project needs, and translate requirements into technical specifications for the team of data engineers. To excel in this role, you should have a solid background in developing, delivering, and supporting big data engineering and advanced analytics solutions, with at least 10 years of experience in the field. Managing or leading data engineering teams for 6+ years and hands-on experience in building Cloud Data-centric solutions in GCP or other cloud platforms for 4-5 years is essential. Proficiency in Python programming, experience with Spark, ELT or ETL techniques, database management systems like SQL Server and Snowflake, and strong domain knowledge in Manufacturing Planning and Scheduling data are highly desired. Furthermore, you should possess intermediate to advanced programming skills, excellent communication abilities, and a passion for data and information. Being self-motivated, adaptable to a fast-paced environment, and having a Bachelor's degree in Computer Science, Management Information Systems, or related fields are prerequisites for this role. Micron Technology is a pioneering industry leader in memory and storage solutions, dedicated to transforming how information enriches lives globally. Our commitment to innovation, technology leadership, and operational excellence drives us to deliver high-performance memory and storage products through our Micron and Crucial brands, fueling advancements in artificial intelligence and 5G applications. If you are motivated by the power of data and eager to contribute to cutting-edge solutions, we encourage you to explore career opportunities with us at micron.com/careers.

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Gurugram

Work from Office

Job Summary: We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making. Key Responsibilities: Design, develop, and implement scalable Snowflake-based data architectures. Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts. Optimize Snowflake performance through clustering, partitioning, and caching strategies. Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions. Ensure data quality, governance, integrity, and security across all platforms. Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake. Automate data workflows and support CI/CD deployment practices. Implement data modeling techniques including dimensional modeling, star/snowflake schema, and normalization/denormalization. Support and promote metadata management and data governance best practices. Technical Skills (Hard Skills): Expertise in Snowflake: architecture design, performance tuning, cost optimization. Strong proficiency in SQL, Python, and scripting for data engineering tasks. Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar. Proficient in data modeling (dimensional, relational, star/snowflake schema). Good knowledge of cloud platforms: AWS, Azure, or GCP. Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks. Experience with CI/CD tools and version control systems (e.g., Git). Knowledge of BI tools such as Tableau, Power BI, or Looker. Certifications (Preferred/Required): Snowflake SnowPro Core Certification - required or highly preferred. SnowPro Advanced Architect Certification - preferred. Cloud certifications (e.g., AWS Certified Data Analytics - Specialty, Azure Data Engineer Associate) - preferred. ETL tool certifications (e.g., Talend, Matillion) - optional but a plus. Soft Skills: Strong analytical and problem-solving capabilities. Excellent communication and collaboration skills. Ability to translate technical concepts into business-friendly language. Proactive, detail-oriented, and highly organized. Capable of multitasking in a fast-paced, dynamic environment. Passionate about continuous learning and adopting new technologies. Why Join Us? Work on cutting-edge data platforms and cloud technologies. Collaborate with industry leaders in analytics and digital transformation. Be part of a data-first organization focused on innovation and impact. Enjoy a flexible, inclusive, and collaborative work culture.
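As a hedged illustration of the clustering-focused tuning this role mentions: the sketch below sets a clustering key and reads the resulting clustering depth through the Snowflake Python connector. Table, column, and connection names are hypothetical placeholders.

```python
# Illustrative only; all identifiers below are placeholders.
import json
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="arch_user", password="***",
                                   warehouse="TUNING_WH", database="EDW", schema="SALES")
cur = conn.cursor()

# Cluster the fact table on the columns most dashboards filter by.
cur.execute("ALTER TABLE FACT_ORDERS CLUSTER BY (ORDER_DATE, REGION)")

# Inspect clustering health; a high average depth means reclustering hasn't caught up yet.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_ORDERS', '(ORDER_DATE, REGION)')")
info = json.loads(cur.fetchone()[0])
print("average clustering depth:", info.get("average_depth"))

conn.close()
```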

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Informatica Developer with a specialization in Informatica Intelligent Data Management Cloud (IDMC), you will be responsible for designing, developing, and maintaining data pipelines and integrations using Informatica IDMC. Your role will involve working on the Cloud Data Integration (CDI) and Cloud Application Integration (CAI) modules, building and optimizing ETL/ELT mappings, workflows, and data quality rules in a cloud setup, as well as deploying and monitoring data jobs using IDMC's operational dashboards and alerting tools. You will collaborate closely with data architects and business analysts to understand data integration requirements and write and optimize SQL queries for data processing. Strong hands-on experience with Informatica IDMC, proficiency in CDI, CAI, and cloud-based data workflows, as well as a solid understanding of ETL/ELT processes, data quality, and data integration best practices are essential for this role. Expertise in SQL and working with Oracle/SQL Server, strong analytical and problem-solving skills, and excellent communication and interpersonal abilities will be key to your success in this position. Your responsibilities will also include troubleshooting and resolving integration issues efficiently, ensuring performance tuning and high availability of data solutions. This is a full-time position that requires you to work in person at locations in Bangalore, Cochin, or Trivandrum. If you are passionate about Informatica IDMC, Cloud Data Integration, and Cloud Application Integration, and possess the technical skills required for this role, we encourage you to apply and be part of our dynamic team.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Pune

Work from Office

Sr Data Engineer 2: We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology- and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong Data Engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams. As a Senior Data Engineer, you will: Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations. Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes. Cultivate collaboration with corporate engineering, product teams, and other engineering groups. Lead and mentor engineering discussions, advocating for best practices. Actively participate in design and code reviews. Access and explore third-party data APIs to determine the data required to meet business needs. Ensure data quality and integrity across different sources and systems. Manage data pipelines for both analytics and operational purposes. Continuously enhance processes and policies to improve SLA and SOX compliance. You'll be a great addition to the team if you: Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field. Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments. Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment. Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes. Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB). Are proficient in programming with Python or other scripting languages. Have familiarity with columnar OLAP databases and data modeling. Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau. Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements. Added bonus if you also have: A good understanding of Salesforce and NetSuite systems. Experience in SaaS environments. Designed and deployed ML models. Experience with events and streaming data.
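For orientation only: the ELT orchestration this role describes (a third-party API extract feeding dbt models) often looks like the hedged Airflow sketch below. DAG, task, and path names are invented, and Airflow 2.4+ is assumed.

```python
# Hypothetical Airflow DAG; not the team's actual pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator

def extract_from_api(**context):
    # Placeholder: call a third-party REST API and stage the payload (e.g. to S3).
    print("extracting partner data for", context["ds"])

with DAG(
    dag_id="elt_daily_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",       # nightly at 02:00; the 'schedule' argument needs Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_api", python_callable=extract_from_api)
    transform = BashOperator(task_id="dbt_run",
                             bash_command="dbt run --project-dir /opt/analytics")  # assumed path
    extract >> transform
```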

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Must-haves: ETL/ELT testing (5/5); experience with SQL and analytical queries (5/5). Responsibility: Responsible for designing and executing data validation scripts for a data migration project. The scope includes verifying the migration of data from Oracle and Informix (DB2) sources to GCP Cloud Storage, with further processing through Cloud Spanner. Role and Skills: Good communication and analytical skills. 4 to 7 years of relevant experience in ETL/ELT testing. Ability to analyse intermingled data models and data processes, and to maintain referentially intact data flows and quality controls. Experienced in working with one or more RDBMS/columnar databases, preferably with exposure to semi-structured and unstructured datasets. Prior experience in one or more large-scale data migration projects. Sound experience in writing complex SQL and analytical queries and validating the results. Working knowledge of Google Cloud data services (BigQuery, Cloud Storage, etc.). Hands-on experience using the Python programming language to build data validation scripts. Good understanding of QA test management, defect management, and testing methodologies.
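A minimal sketch (not from the project) of the kind of validation script described: compare per-table row counts between the Oracle source and the Cloud Spanner target. Driver choice, instance, database, and table names are assumptions; Informix sources would follow the same pattern with a different driver.

```python
# Hedged validation sketch; every identifier below is a placeholder.
import oracledb
from google.cloud import spanner

TABLES = ["orders", "shipments", "invoices"]   # hypothetical migration scope

ora = oracledb.connect(user="mig_ro", password="***", dsn="legacy-host/ORCL")     # placeholder
spanner_db = spanner.Client().instance("prod-instance").database("logistics")     # placeholder

def oracle_count(table):
    cur = ora.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

def spanner_count(table):
    with spanner_db.snapshot() as snap:
        return list(snap.execute_sql(f"SELECT COUNT(*) FROM {table}"))[0][0]

for t in TABLES:
    src, dst = oracle_count(t), spanner_count(t)
    print(f"{t}: oracle={src} spanner={dst} {'OK' if src == dst else 'MISMATCH'}")

ora.close()
```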

Posted 1 week ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters. Must-have technical skills: 4+ years on Snowflake with advanced SQL expertise. 4+ years of data warehouse experience, with hands-on knowledge of the methods to identify, collect, manipulate, transform, normalize, clean, and validate data: star schema, normalization/denormalization, dimensions, aggregations, etc. 4+ years working in reporting and analytics environments: development, data profiling, metric development, CI/CD, production deployment, troubleshooting, query tuning, etc. 3+ years on Python with advanced Python expertise. 3+ years on any cloud platform, AWS preferred; hands-on experience on AWS with Lambda, S3, SNS/SQS, and EC2 is the bare minimum. 3+ years on any ETL/ELT tool (Informatica, Pentaho, Fivetran, dbt, etc.). 3+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.). Must-have soft skills: Clear written and verbal communication, especially around time off, delays in delivery, etc. Team player: works in the team and works with the team. Enterprise experience: understands and follows enterprise guidelines for CI/CD, security, change management, RCA, on-call rotation, etc. Nice to have: Technical certifications from AWS, Microsoft, Azure, GCP, or any other recognized software vendor. 4+ years on any ETL/ELT tool (Informatica, Pentaho, Fivetran, dbt, etc.). 4+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.). 4+ years of team lead experience. 3+ years in a large-scale support organization supporting thousands of users.

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives. Responsibilities: Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy, working closely with the Data Protection Officer internally. Analyse a vast number of data stores and uncover insights. Skillset Required: Ability to design, build, and unit test applications in PySpark. Experience with Python development and Python data transformations. Experience with SQL scripting on one or more platforms (Hive, Oracle, PostgreSQL, MySQL, etc.). In-depth knowledge of Hadoop, Spark, and similar frameworks. Strong knowledge of data management principles. Experience with normalizing/de-normalizing data structures and developing tabular, dimensional, and other data models. Knowledge of YARN, clusters, executors, and cluster configuration. Hands-on experience with different file formats such as JSON, Parquet, and CSV. Experience with the CLI on Linux-based platforms. Experience analysing current ETL/ELT processes and defining and designing new ones. Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights. Knowledge of data visualization is good to have. Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
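To ground the PySpark expectations above, here is a hedged sketch of a typical pipeline step: read raw JSON events, flatten and de-duplicate them, and write partitioned Parquet. Paths and column names are invented.

```python
# Illustrative PySpark job; input layout and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

raw = spark.read.json("hdfs:///landing/events/*.json")          # placeholder input path

cleaned = (raw
    .withColumn("event_date", F.to_date("event_ts"))            # derive a partition column
    .withColumn("country", F.upper(F.col("geo.country")))       # flatten a nested struct field
    .dropDuplicates(["event_id"])                                # basic de-duplication
    .filter(F.col("event_date").isNotNull()))

(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("hdfs:///warehouse/events/"))                       # placeholder output path
```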

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 12 Lacs

Chennai

Work from Office

Minimum 3+ years as a Data Engineer (GenAI platform). ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory. Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, and Vespa.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
