
6 ETL/ELT Pipeline Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Bhubaneswar

On-site

As an Apache Superset Data Engineer, you will play a crucial role in the design, development, and maintenance of scalable data pipelines and analytics infrastructure. Your primary focus will be on data visualization and dashboarding using Apache Superset, bridging the gap between data engineering and business intelligence. By creating intuitive dashboards and reports, you will empower stakeholders to access accurate and actionable insights efficiently.

Your responsibilities will include creating, customizing, and maintaining interactive dashboards in Apache Superset to support key performance indicators (KPIs), experimentation, and business insights. Collaboration with analysts, BI teams, and business users to gather requirements and deliver effective visualizations will be essential. Additionally, you will conduct data validation, feature engineering, and exploratory data analysis to ensure data accuracy and integrity, and analyze A/B test results to inform business strategies. You will establish and maintain standards for statistical testing, data validation, and analytical workflows. Integrating Superset with database systems such as MySQL or PostgreSQL and managing the associated drivers and connections will be crucial, as will ensuring secure, scalable, and high-performing Superset deployments. Communication is vital in this role: you will need to clearly convey findings and recommendations to both technical and non-technical stakeholders.

Required skills include proven expertise in building dashboards and visualizations with Apache Superset, a strong command of SQL, experience with relational databases, proficiency in Python (or Java) for data manipulation, and a solid understanding of data modeling, ETL/ELT pipelines, and data warehousing principles. Problem-solving skills, attention to data quality and detail, and the ability to simplify complex technical concepts for non-technical audiences are essential. Nice-to-have qualifications include familiarity with cloud platforms such as AWS and ECS.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, along with a minimum of 3 years of relevant experience.
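The Superset integration work described above comes down to registering a SQLAlchemy connection URI for each source database. Below is a minimal, hypothetical sketch that probes a PostgreSQL URI of the form Superset accepts before it is added as a database connection; the host, credentials, and database name are placeholders, not details from the listing.

```python
# Minimal sketch: check that a SQLAlchemy URI of the form Superset accepts
# for PostgreSQL actually answers queries. All connection details below
# are hypothetical placeholders.
from sqlalchemy import create_engine, text

SUPERSET_DB_URI = "postgresql+psycopg2://superset_ro:secret@db-host:5432/analytics"

def check_connection(uri: str) -> bool:
    """Return True if the database answers a trivial query."""
    engine = create_engine(uri, pool_pre_ping=True)
    try:
        with engine.connect() as conn:
            conn.execute(text("SELECT 1"))
        return True
    except Exception as exc:  # broad catch is acceptable for a connectivity probe
        print(f"Connection failed: {exc}")
        return False

if __name__ == "__main__":
    print("OK" if check_connection(SUPERSET_DB_URI) else "FAILED")
```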

Posted 20 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a talented and driven Backend Engineer with a solid understanding of data engineering workflows, you will play a crucial role on our team. Your expertise in Python, preferably with FastAPI, along with knowledge of SQL and NoSQL databases, will be instrumental in designing robust backend services and contributing to the development of high-performance data pipelines. You will work at the intersection of API development and data systems, helping build the infrastructure that supports our data-driven applications.

Your responsibilities will include designing, developing, and maintaining backend services, building and consuming RESTful APIs, and working with both SQL and NoSQL databases for efficient data storage and modeling. Additionally, you will develop and manage ETL/ELT data pipelines, collaborate with cross-functional teams to integrate third-party APIs and data sources, and ensure the scalability, performance, and reliability of backend systems. Your participation in code reviews, architectural discussions, and technical design will be invaluable to the team.

To excel in this role, you should possess proficiency in Python, experience with FastAPI or a similar framework, a strong understanding of REST API design and best practices, and hands-on experience with relational and non-relational databases. Familiarity with data engineering concepts, software engineering principles, and version control is also essential. Preferred qualifications include exposure to cloud platforms such as AWS, GCP, or Azure, familiarity with containerization tools such as Docker and Kubernetes, and experience working in agile teams and CI/CD environments.

If you are passionate about building scalable systems and enabling data-driven applications, we are excited to hear from you.
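Since the role centers on Python with FastAPI and REST API design, here is a minimal, hypothetical sketch of the kind of service described; the resource name, fields, and in-memory store are illustrative assumptions rather than details from the listing, and a real service would back the endpoints with the SQL/NoSQL databases mentioned above.

```python
# Minimal FastAPI sketch: one resource with create/read endpoints.
# The Item model and in-memory store are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="demo-backend")

class Item(BaseModel):
    id: int
    name: str
    price: float

_items: dict[int, Item] = {}  # stand-in for a SQL/NoSQL store

@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    if item.id in _items:
        raise HTTPException(status_code=409, detail="Item already exists")
    _items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]

# Run locally with: uvicorn main:app --reload
```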

Posted 22 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer specializing in Snowflake architecture, you will be responsible for designing and implementing scalable data warehouse architectures, including schema modeling and data partitioning. You will lead or support data migration projects to Snowflake from on-premise or legacy cloud platforms, and develop ETL/ELT pipelines and data integrations using tools such as DBT, Fivetran, Informatica, and Airflow. Defining and implementing best practices for data modeling, query optimization, and storage efficiency within Snowflake will be essential.

Collaboration with cross-functional teams, including data engineers, analysts, BI developers, and stakeholders, will be crucial to align architectural solutions effectively. You will ensure data governance, compliance, and security by implementing RBAC, masking policies, and access control within Snowflake, and will work closely with DevOps teams to enable CI/CD pipelines, monitoring, and infrastructure as code for Snowflake environments. The role also involves optimizing resource utilization, monitoring workloads, and managing the cost-effectiveness of the platform, as well as staying current with Snowflake features, cloud vendor offerings, and best practices to drive continuous improvement in data architecture.

Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data engineering, data warehousing, or analytics architecture.
- 3+ years of hands-on experience in Snowflake architecture, development, and administration.
- Strong knowledge of cloud platforms such as AWS, Azure, or GCP.
- Solid understanding of SQL, data modeling, and data transformation principles.
- Experience with ETL/ELT tools, orchestration frameworks, and data integration.
- Familiarity with data privacy regulations (GDPR, HIPAA, etc.) and compliance.

Additional Qualifications:
- Snowflake certification (SnowPro Core / Advanced).
- Experience in building data lakes, data mesh architectures, or streaming data platforms.
- Familiarity with tools like Power BI, Tableau, or Looker for downstream analytics.
- Experience with Agile delivery models and CI/CD workflows.

This role offers an exciting opportunity to work on cutting-edge data architecture projects and collaborate with diverse teams to drive impactful business outcomes.
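The governance duties described in this listing (RBAC, masking policies, and access control in Snowflake) amount to issuing DDL/DCL statements. Below is a hedged sketch using the snowflake-connector-python package; the role, database, schema, and policy names are hypothetical, credentials are read from environment variables only for illustration, and masking policies additionally require an edition and privileges that the sketch does not verify.

```python
# Hypothetical sketch: create a read-only analyst role and a column masking
# policy in Snowflake via snowflake-connector-python. All object names and
# credentials are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SECURITYADMIN",
    warehouse="ADMIN_WH",
)

statements = [
    "CREATE ROLE IF NOT EXISTS ANALYST_RO",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_RO",
    "GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_RO",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_RO",
    # Column masking: non-privileged roles see a redacted value.
    """CREATE MASKING POLICY IF NOT EXISTS ANALYTICS.PUBLIC.EMAIL_MASK
       AS (val STRING) RETURNS STRING ->
       CASE WHEN CURRENT_ROLE() IN ('SYSADMIN') THEN val
            ELSE '***MASKED***' END""",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```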

Posted 23 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced Python + Databricks Developer who will be a valuable addition to our data engineering team. Your expertise in Python programming and data processing, along with hands-on experience with Databricks, will be instrumental in building and optimizing data pipelines.

Your key responsibilities will include designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark, and writing efficient Python code for data transformation, cleansing, and analytics. Collaboration with data scientists, analysts, and engineers is essential to understand data needs and deliver high-performance solutions. Optimizing and tuning data pipelines for performance and cost efficiency, implementing data validation, quality checks, and monitoring, and working with cloud platforms (preferably Azure or AWS) to manage data workflows are crucial aspects of the role. Ensuring best practices in code quality, version control, and documentation will also be part of your responsibilities.

To be successful in this role, you should have 5+ years of professional experience in Python development and at least 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, especially PySpark, is required. Proficiency in large-scale data processing and ETL/ELT pipelines, a solid understanding of data warehousing concepts and SQL, and experience with Azure Data Factory, AWS Glue, or other data orchestration tools will be beneficial. Familiarity with version control tools such as Git and excellent problem-solving and communication skills are also essential.

If you are looking to leverage your Python and Databricks expertise to build robust data pipelines and optimize data workflows, this role is a great fit for you.
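As an illustration of the Databricks/PySpark pipeline work described above, here is a minimal, hypothetical sketch of a batch transformation that cleanses raw records and writes them to a Delta table; the paths, column names, and target table are assumptions, not details from the listing.

```python
# Hypothetical PySpark sketch: read raw CSV, apply basic cleansing and a
# derived column, and write the result to a Delta table. Paths, columns,
# and the target table name are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders/")  # hypothetical mount point
)

clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("is_large_order", F.col("amount") > 1000)
)

(
    clean.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("analytics.orders_clean")  # hypothetical target table
)
```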

Posted 23 hours ago

Apply

3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are seeking a highly skilled and experienced Snowflake Architect to take charge of designing, developing, and deploying enterprise-grade cloud data solutions. The ideal candidate has a robust background in data architecture, cloud data platforms, and Snowflake implementation, with hands-on experience in end-to-end data pipeline and data warehouse design.

Your responsibilities will include leading the architecture, design, and implementation of scalable Snowflake-based data warehousing solutions and defining data modeling standards, best practices, and governance frameworks. Designing and optimizing ETL/ELT pipelines using tools such as Snowpipe, Azure Data Factory, Informatica, or DBT will be a key aspect of the role, as will collaborating with stakeholders to understand data requirements and translate them into robust architectural solutions. You will also implement data security, privacy, and role-based access controls within Snowflake, guide development teams on performance tuning, query optimization, and cost management, ensure high availability, fault tolerance, and compliance across data platforms, and mentor developers and junior architects on Snowflake capabilities.

Skills & Experience: we are looking for candidates with at least 8 years of overall experience in data engineering, BI, or data architecture, including a minimum of 3 years of hands-on Snowflake experience. Expertise in Snowflake architecture, data sharing, virtual warehouses, clustering, and performance optimization is highly desirable. Strong proficiency in SQL, Python, and cloud data services (e.g., AWS, Azure, or GCP) is required, along with hands-on experience with ETL/ELT tools such as ADF, Informatica, Talend, DBT, or Matillion. A good understanding of data lakes, data mesh, and modern data stack principles is preferred. Experience with CI/CD for data pipelines, DevOps, and data quality frameworks is a plus, and solid knowledge of data governance, metadata management, and cataloging is beneficial.

Preferred qualifications include a Snowflake certification (e.g., SnowPro Core / Advanced Architect), familiarity with Apache Airflow, Kafka, or event-driven data ingestion, knowledge of data visualization tools such as Power BI, Tableau, or Looker, and experience in healthcare, BFSI, or retail domain projects.

If you meet these requirements and are ready to take on a challenging and rewarding role as a Snowflake Architect, we encourage you to apply.
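To make the performance-tuning and cost-management duties above concrete, here is a hedged sketch of the kind of statements an architect might issue: a right-sized virtual warehouse with aggressive auto-suspend and a clustering key on a large fact table. The warehouse, table, and column names are hypothetical, and the connection reuses the same snowflake-connector-python pattern as the earlier Snowflake example.

```python
# Hypothetical sketch: cost and performance tuning statements issued via
# snowflake-connector-python. Warehouse, table, and column names are
# placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SYSADMIN",
)

tuning_sql = """
CREATE WAREHOUSE IF NOT EXISTS ELT_WH
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
ALTER TABLE ANALYTICS.PUBLIC.FACT_SALES CLUSTER BY (SALE_DATE);
"""

try:
    # execute_string runs each semicolon-separated statement in turn.
    for cursor in conn.execute_string(tuning_sql):
        cursor.close()
finally:
    conn.close()
```

A small warehouse with a 60-second auto-suspend keeps idle credit burn low, while the clustering key helps partition pruning for date-bounded queries on the (assumed) large fact table.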

Posted 23 hours ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Microsoft Azure Engineer in Bangalore (Hybrid) with 5+ years of experience, you will be responsible for building and optimizing cloud solutions on Microsoft Azure. Your expertise in Azure Synapse, Azure Data Factory, and related cloud technologies will be crucial in ensuring scalability, security, and automation.

Your key responsibilities will include:

Cloud Data Engineering & Processing:
- Designing and optimizing ETL/ELT pipelines using Azure Synapse and Data Factory.
- Developing and managing data pipelines, data lakes, and workflows within the Azure ecosystem.
- Implementing data security, governance, and compliance best practices.

Backend & Application Development:
- Developing scalable cloud applications using Azure Functions, Service Bus, and Event Grid.
- Building RESTful APIs and microservices for cloud-based data processing (a minimal sketch follows this listing).
- Integrating Azure services to enhance data accessibility and processing.

Cloud & DevOps:
- Deploying and managing solutions using Azure DevOps, CI/CD, and Infrastructure as Code (Terraform, Bicep).
- Optimizing cloud costs and ensuring high availability of data platforms.
- Implementing logging, monitoring, and security best practices.

Required Skills & Experience:
- 5+ years of experience in Azure cloud engineering and development.
- Strong expertise in Azure Synapse, Data Factory, and Microsoft Fabric.
- Proficiency in CI/CD, Azure DevOps, and related tools.
- Experience with Infrastructure as Code (Terraform, Bicep).
- Hands-on knowledge of Azure Functions, Service Bus, Event Grid, and API development.
- Familiarity with SQL, T-SQL, Cosmos DB, and relational databases.
- Strong experience in data security and compliance.

Preferred Skills (Good to Have):
- Knowledge of Databricks, Python, and ML models for data processing.
- Familiarity with event-driven architectures (Kafka, Event Hubs).
- Azure certifications (e.g., DP-203, AZ-204).

Apply now if you are ready to leverage your expertise in Microsoft Azure to contribute to building robust cloud solutions and optimizing data processing workflows.
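As a sketch of the serverless API side of this role (Azure Functions plus REST endpoints), below is a minimal HTTP-triggered function using the Azure Functions Python v2 programming model. The route name and payload fields are illustrative assumptions; in a real pipeline the handler might publish to Service Bus or Event Grid rather than echoing a count.

```python
# Hypothetical sketch: HTTP-triggered Azure Function (Python v2 model)
# that accepts a JSON payload and returns a simple acknowledgement.
# Route name and payload fields are placeholders.
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="ingest", methods=["POST"])
def ingest(req: func.HttpRequest) -> func.HttpResponse:
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Request body must be JSON", status_code=400)

    # Count incoming records and acknowledge; a real function would hand
    # them off to downstream Azure services for processing.
    records = payload.get("records", [])
    body = json.dumps({"received": len(records)})
    return func.HttpResponse(body, status_code=202, mimetype="application/json")
```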

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
