5.0 - 9.0 years
8 - 12 Lacs
Noida
Work from Office
5-9 years of experience in data engineering and software development, including ELT/ETL, data extraction, and data manipulation in Data Lake/Data Warehouse environments. Expert-level, hands-on experience with the following: Python, SQL, PySpark, DBT and Apache Airflow; DevOps, Jenkins, CI/CD; Data Governance and Data Quality frameworks; Data Lakes and Data Warehouses; AWS services including S3, SNS, SQS, Lambda, EMR, Glue, Athena, EC2, VPC, etc.; source code control (GitHub, VSTS, etc.). Mandatory Competencies: Python - Python; Database - SQL; Data on Cloud - AWS S3; DevOps - CI/CD; DevOps - GitHub; ETL - AWS Glue; Beh - Communication.
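For a concrete flavor of the stack listed above, a minimal PySpark sketch of an S3-based ELT step; the bucket, paths, and columns are hypothetical placeholders rather than anything from this posting.

```python
# Minimal sketch: read raw CSV from S3, apply a simple transformation, and
# write partitioned Parquet back. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("s3://example-data-lake/raw/orders/")   # hypothetical source path
)

clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)              # drop invalid rows
       .withColumn("order_date", F.to_date("order_ts"))
)

(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-data-lake/curated/orders/"))  # hypothetical target
```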
Posted 4 weeks ago
15.0 - 20.0 years
13 - 17 Lacs
Noida
Work from Office
We are looking for a skilled Senior Data Architect with 15 to 20 years of experience to lead our data warehousing function, setting the vision and direction for driving actionable insights across revenue, subscriptions, paid marketing channels, and operational functions. This role is remote.
Roles and Responsibilities: Define and execute the long-term strategy for our data warehousing platform using medallion architecture and modern cloud-based solutions. Oversee end-to-end pipeline design, implementation, and maintenance for seamless integration with business intelligence tools. Champion best practices in data modeling, including the effective use of DBT packages to streamline complex transformations. Establish rigorous data quality standards, governance policies, and automated validation frameworks across all data streams. Develop frameworks to reconcile revenue discrepancies and unify validation across Finance, SEM, and Analytics teams. Implement robust monitoring and alerting systems to quickly identify, diagnose, and resolve data pipeline issues. Lead, mentor, and grow a high-performing team of data warehousing specialists, fostering a culture of accountability, innovation, and continuous improvement. Partner with RevOps, Analytics, SEM, Finance, and Product teams to align the data infrastructure with business objectives, serving as the primary data warehouse expert in discussions around revenue attribution and paid marketing channel performance. Translate complex technical concepts into clear business insights for both technical and non-technical stakeholders. Oversee deployment processes, including staging, QA, and rollback strategies, to ensure minimal disruption during updates. Regularly assess and optimize data pipelines for performance, scalability, and reliability while reducing operational overhead. Lead initiatives to transition from legacy on-premise systems to modern cloud-based architectures for improved agility and cost efficiency. Stay abreast of emerging trends and technologies in data warehousing, analytics, and cloud solutions. Propose and lead innovative projects to enhance our data capabilities, with a particular focus on predictive and prescriptive analytics. Represent the data warehousing function in senior leadership discussions and strategic planning sessions.
Job Requirements: Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field. Proven track record in designing and implementing scalable data warehousing solutions in cloud environments. Deep experience with medallion architecture and modern data pipeline tools, including DBT (and DBT packages), Databricks, SQL, and cloud-based data platforms. Strong understanding of ETL/ELT best practices, data modeling (logical and physical), and large-scale data processing. Hands-on experience with BI tools (e.g., Tableau, Looker) and familiarity with Google Analytics and other tracking systems. Solid understanding of attribution models (first-touch, last-touch, multi-touch) and experience working with paid marketing channels. Excellent leadership and team management skills with the ability to mentor and inspire cross-functional teams. Outstanding communication skills, capable of distilling complex technical information into clear business insights. Demonstrated ability to lead strategic initiatives, manage competing priorities, and deliver results in a fast-paced environment.
Posted 4 weeks ago
9.0 - 14.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Data Engineer with 9 to 15 years of experience in the field. The ideal candidate will have expertise in designing and developing data pipelines using Confluent Kafka, ksqlDB, and Apache Flink.
Roles and Responsibilities: Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources, including databases, APIs, and message queues. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimizations.
Job Requirements: Bachelor's degree or higher from a reputed university. 8 to 10 years of total experience, with a majority related to ETL/ELT, big data, and Kafka. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
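As a hedged illustration of the consume-enrich-republish pattern such pipelines typically implement, a small sketch using the confluent-kafka Python client; the broker address, topic names, and payload fields are assumptions, not details from this posting.

```python
# Hypothetical stream-enrichment loop: consume raw events, add a derived
# field, and re-publish to an "enriched" topic.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "enrichment-demo",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})

consumer.subscribe(["orders.raw"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Simple enrichment: tag high-value orders for downstream filtering.
        event["is_high_value"] = event.get("amount", 0) > 1000
        producer.produce("orders.enriched", key=msg.key(), value=json.dumps(event))
        producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```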
Posted 4 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 5 to 10 years. Job Role: Data Engineer, ADF (Ref: 6566357). Job Location: Remote. Job Type: FTE.
JD: We are looking for an experienced Data Engineer / Architect who is hands-on with Microsoft Azure Data Factory (ADF) and SQL development, and who can take complete ownership of data flow design and execution. This role requires a mix of technical skill and business understanding to analyze, define, and build scalable data pipelines that align with organizational goals. The ideal candidate will work closely with business stakeholders to understand processes, define data flow requirements, and implement robust ETL/ELT solutions using Microsoft Azure technologies.
Key Responsibilities: Collaborate with business users and analysts to understand data requirements and workflows. Define and document end-to-end data flow architectures and integration strategies. Build and maintain data pipelines using Azure Data Factory (ADF) and SQL stored procedures. Design, optimize, and troubleshoot complex SQL queries and stored procedures. Translate business processes into technical solutions, ensuring alignment with data governance and enterprise architecture standards. Drive data quality, transformation logic, and load processes with efficiency and consistency. Take ownership of data integration tasks and ensure timely delivery of high-quality data. Monitor, troubleshoot, and optimize ETL workflows and data storage performance. Support cloud data platform modernization and integration projects as needed.
Must-Have Skills: 6+ years of hands-on experience in Data Engineering or Architecture roles. Proven experience with Azure Data Factory (ADF): pipeline design, triggers, datasets, linked services. Advanced SQL skills with expertise in writing and optimizing stored procedures, functions, and data transformation logic. Strong experience in business engagement: the ability to gather requirements, interpret business processes, and translate them into technical workflows. Familiarity with data modeling, data warehousing, and ETL/ELT pipelines. Understanding of data governance, metadata, and data lineage.
Nice-to-Have Skills: Exposure to Azure Synapse Analytics, Databricks, or Power BI. Experience working in Agile/Scrum environments. Familiarity with CI/CD pipelines for data workflows. Knowledge of Python or .NET for scripting or data orchestration is a plus.
Soft Skills: Excellent communication and stakeholder management abilities. Analytical mindset with attention to detail. Strong sense of ownership and ability to work independently and collaboratively.
Posted 4 weeks ago
8.0 - 13.0 years
2 - 30 Lacs
Bengaluru
Work from Office
About the Role: We are looking for a highly skilled and passionate Data Engineer to join our innovative and growing data team. The ideal candidate will have strong expertise in building and managing scalable data pipelines, cloud data solutions, and modern data engineering tools such as Snowflake, DBT, Terraform, and Fivetran. You will work closely with cross-functional teams to design and implement efficient data architecture and apply Data Vault 2.0 methodologies for long-term scalability.
Key Responsibilities: Design, build, and maintain robust and scalable data pipelines using Snowflake and DBT. Develop and maintain ELT processes for data ingestion from various sources using Fivetran or similar tools. Implement Data Vault 2.0 modelling. Create and manage Infrastructure as Code (IaC) using Terraform to deploy and manage cloud infrastructure. Work extensively with Azure Cloud services (preferred) or other platforms like AWS or GCP. Collaborate with data architects, analysts, and stakeholders to align data strategies with business objectives. Ensure data quality, governance, and security best practices are followed across systems. Continuously optimize and enhance cloud data architecture for performance and scalability. Monitor and troubleshoot data pipelines and ensure high availability and reliability of data systems.
Required Skills & Qualifications: 6-8 years of hands-on experience in a Data Engineering role. Expertise in Snowflake data modeling, performance tuning, and pipeline design. Proficiency in DBT (Data Build Tool): building data models, transformations, and dbt project development. Cloud experience: Azure is preferred, but AWS or GCP will also be considered. Strong experience in Infrastructure as Code (IaC) using Terraform. Proficiency in ETL/ELT processes with tools like Fivetran, Stitch, or similar. Solid understanding and working experience with Data Vault 2.0. Familiarity with the AutomateDV dbt package. Strong analytical and problem-solving skills. Excellent verbal and written communication skills. Ability to work independently and collaboratively in a fast-paced environment.
Preferred Qualifications: Certification in Snowflake, Azure Data Engineer, or a related field. Previous experience working in product-based or data-focused companies. Exposure to Agile/Scrum project environments. Understanding of CI/CD pipelines and automation best practices for data workflows.
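A minimal, hedged sketch of a Snowflake ELT step using the snowflake-connector-python client; the account settings, schemas, and tables are hypothetical, and in a real setup dbt models or Fivetran would typically own the transformation shown here.

```python
# Illustrative incremental upsert from a staging layer into a core table.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",      # hypothetical warehouse
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
    MERGE INTO analytics.core.customers AS tgt
    USING analytics.staging.customers_raw AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.loaded_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
      VALUES (src.customer_id, src.email, src.loaded_at)
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)   # incremental upsert from the staging layer
    print("rows merged:", cur.rowcount)
finally:
    conn.close()
```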
Posted 4 weeks ago
5.0 - 10.0 years
7 - 11 Lacs
Pune
Work from Office
About the Role: We're looking for a Data Engineer to help build reliable and scalable data pipelines that power reports, dashboards, and business decisions at Hevo. You'll work closely with engineering, product, and business teams to make sure data is accurate, available, and easy to use.
Key Responsibilities: Independently design and implement scalable ELT workflows using tools like Hevo, dbt, Airflow, and Fivetran. Ensure the availability, accuracy, and timeliness of datasets powering analytics, dashboards, and operations. Collaborate with Platform and Engineering teams to address issues related to ingestion, schema design, and transformation logic. Escalate blockers and upstream issues proactively to minimize delays for stakeholders. Maintain strong documentation and ensure discoverability of all models, tables, and dashboards. Own end-to-end pipeline quality, minimizing escalations or errors in models and dashboards. Implement data observability practices such as freshness checks, lineage tracking, and incident alerts. Regularly audit and improve accuracy across business domains. Identify gaps in instrumentation, schema evolution, and transformation logic. Ensure high availability and data freshness through monitoring, alerting, and incident resolution processes. Set up internal SLAs, runbooks, and knowledge bases (data catalog, transformation logic, FAQs). Improve onboarding material and templates for future engineers and analysts.
Required Skills & Experience: 3-5 years of experience in Data Engineering, Analytics Engineering, or related roles. Proficient in SQL and Python for data manipulation, automation, and pipeline creation. Strong understanding of ELT pipelines, schema management, and data transformation concepts. Experience with the modern data stack: dbt, Airflow, Hevo, Fivetran, Snowflake, Redshift, or BigQuery. Solid grasp of data warehousing concepts: OLAP/OLTP, star/snowflake schemas, relational and columnar databases. Understanding of REST APIs, webhooks, and event-based data ingestion. Strong debugging skills and ability to troubleshoot issues across systems.
Preferred Background: Experience in high-growth industries such as eCommerce, FinTech, or hyper-commerce environments. Experience working with or contributing to a data platform (ELT/ETL tools, observability, lineage, etc.).
Core Competencies: Excellent communication and problem-solving skills. Attention to detail and a self-starter mindset. High ownership and urgency in execution. Collaborative and coachable team player. Strong prioritization and resilience under pressure.
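For illustration, a hedged sketch of an Airflow DAG of the kind such a role would own: run dbt models, then fail loudly if the core table is stale. The DAG id, dbt command, table, and freshness threshold are all assumptions; a real freshness check would query the warehouse.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_freshness(**_):
    # Placeholder freshness check; a real implementation would query the
    # warehouse (Snowflake/Redshift/BigQuery) for MAX(loaded_at).
    last_loaded = datetime.utcnow() - timedelta(minutes=30)  # stubbed value
    if datetime.utcnow() - last_loaded > timedelta(hours=2):
        raise ValueError("orders table is stale: last load > 2h ago")


with DAG(
    dag_id="orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --select orders --profiles-dir /opt/dbt",
    )
    freshness = PythonOperator(
        task_id="freshness_check",
        python_callable=check_freshness,
    )
    run_models >> freshness
```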
Posted 4 weeks ago
11.0 - 16.0 years
40 - 45 Lacs
Pune
Work from Office
Role Description: This role is for a Senior Business Functional Analyst for Group Architecture. The role will be instrumental in establishing and maintaining bank-wide data policies, principles, standards and tool governance. The Senior Business Functional Analyst acts as a link between the business divisions and the data solution providers to align the target data architecture with the enterprise data architecture principles and apply agreed best practices and patterns. Group Architecture partners with each division of the bank to ensure that architecture is defined, delivered, and managed in alignment with the bank's strategy and in accordance with the organization's architectural standards.
Your key responsibilities: Data Architecture: The candidate will work closely with stakeholders to understand their data needs, break out business requirements into implementable building blocks, and design the solution's target architecture. AI/ML: Identify and support the creation of AI use cases focused on delivering the data architecture strategy and data governance tooling. Identify AI/ML use cases and architect pipelines that integrate data flows, data lineage, and data quality. Embed AI-powered data quality, detection and metadata enrichment to accelerate data discoverability. Assist in defining and driving the data architecture standards and requirements for AI that need to be enabled and used. GCP Data Architecture & Migration: Strong working experience with GCP data architecture is a must (BigQuery, Dataplex, Cloud SQL, Dataflow, Apigee, Pub/Sub, etc.), along with an appropriate GCP architecture-level certification. Experience in handling hybrid architectures and patterns addressing non-functional requirements like data residency, compliance (e.g., GDPR), and security and access control. Experience in developing reusable components and reference architectures using IaC (Infrastructure as Code) platforms such as Terraform. Data Mesh: The candidate is expected to have proficiency in Data Mesh design strategies that embrace the decentralized nature of data ownership. The candidate must have good domain knowledge to ensure that the data products developed are aligned with business goals and provide real value. Data Management Tools: Assess various tools and solutions comprising data governance capabilities such as data catalogue, data modelling and design, metadata management, data quality and lineage, and fine-grained data access management. Assist in development of the medium- to long-term target state of the technologies within the data governance domain. Collaboration: Collaborate with stakeholders, including business leaders, project managers, and development teams, to gather requirements and translate them into technical solutions.
Your skills and experience: Demonstrable experience in designing and deploying AI tooling architectures and use cases. Extensive experience in data architecture within Financial Services. Strong technical knowledge of data integration patterns, batch and stream processing, data lake / data lakehouse / data warehouse / data mart, caching patterns, and policy-based fine-grained data access. Proven experience in working on data management principles, data governance, data quality, data lineage and data integration with a focus on Data Mesh. Knowledge of data modelling concepts like dimensional modelling and 3NF. Experience of systematic structured review of data models to enforce conformance to standards. High-level understanding of data management solutions, e.g. Collibra, Informatica Data Governance, etc. Proficiency in data modelling and experience with different data modelling tools. Very good understanding of streaming and non-streaming ETL and ELT approaches for data ingestion. Strong analytical and problem-solving skills, with the ability to identify complex business requirements and translate them into technical solutions.
Posted 4 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions. Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 4 weeks ago
9.0 - 12.0 years
1 - 2 Lacs
Hyderabad
Remote
Job Title: Data Architect. Location: Remote. Employment Type: Full-Time. Reports to: Lead Data Strategist.
About Client / Project: Client is a specialist data strategy and AI consultancy that empowers businesses to unlock tangible value from their data assets. We specialize in developing comprehensive data strategies tailored to address core business and operational challenges. By combining strategic advisory with hands-on implementation, we ensure data becomes a true driver of business growth, operational efficiency, and competitive advantage for our clients. As a solutions-focused and forward-thinking consultancy, we help organizations transform their data capabilities using modern technology, reduce costs, and accelerate business growth by aligning every initiative directly with our clients' core business objectives.
Role Overview: We are seeking a highly experienced Data Architect to lead the design and implementation of scalable data architectures for global clients across industries. You will define enterprise-grade data platforms leveraging cloud-native technologies and modern data frameworks.
Key Responsibilities: Design and implement cloud-based data architectures (GCP, AWS, Azure, Snowflake, Redshift, Databricks, or Hadoop). Develop conceptual, logical, and physical data models. Define data flows, ETL/ELT pipelines, and ingestion strategies. Design and maintain data catalogs, metadata, and domain structures. Establish data architecture standards, reference models, and blueprints. Oversee data lineage, traceability, and audit readiness. Guide integration of AI/ML pipelines and analytics solutions. Ensure data privacy, protection, and compliance (e.g., GDPR, HIPAA). Collaborate closely with Engineers, Analysts, and Strategists.
Required Skills & Qualifications: 8+ years of experience in data architecture or enterprise data platform roles. Deep experience with at least two major cloud platforms (AWS, Azure, GCP). Proven hands-on work with modern data platforms: Snowflake, Databricks, Redshift, Hadoop. Strong understanding of data warehousing, data lakes, and lakehouse architecture. Advanced proficiency in SQL, Python, Spark, and/or Scala. Experience with data cataloging and metadata tools (e.g., Informatica, Collibra, Alation). Knowledge of data governance frameworks and regulatory compliance. Strong documentation, stakeholder communication, and architectural planning skills. Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
Posted 1 month ago
8.0 - 13.0 years
2 - 30 Lacs
Bengaluru
Work from Office
Hello, Truecaller is calling you from Bangalore, India! Ready to pick up? Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, and scam calls or messages, so you can focus on the conversations that matter. Top 20 most downloaded apps globally, and the world's #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million active users per month. Founded in 2009, listed on Nasdaq OMX Stockholm and categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. A team of 400 people from ~35 different nationalities spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon and Tel Aviv, with high ambitions.
We in the Data Infrastructure Team work on a vision to empower our product-oriented teams with a complete self-service analytics platform to make data-driven decisions possible in Truecaller, guaranteeing the stability, velocity, and delivery of the mainstream events pipeline. As a Senior Data Engineer, you will play an important role in the development of data pipelines, frameworks and models to support the understanding of our users and make better product decisions. You will contribute to empowering the product teams with a complete self-serve analytics platform by working on scalable and robust solutions while collaborating with data engineers, data scientists and data analysts across the company.
What you bring in: 6+ years of hands-on experience in data engineering and large-scale distributed systems. Proven expertise in building and maintaining complex ETL/ELT pipelines. Deep knowledge of orchestration frameworks (e.g., Airflow) and workflow optimization. Strong cloud infrastructure experience (GCP preferred; AWS or Azure also relevant). Expert-level programming in Python or Scala; solid understanding of Spark internals. Experience with CI/CD tools (e.g., Jenkins, GitHub Actions) and infrastructure as code. Familiarity with managing self-hosted tools like Spark or Airflow on Kubernetes. Experience managing data warehouses in BigQuery or Redshift. Strong communication skills and a proactive, problem-solving mindset.
The impact you will create: Design, build and optimize the data ingestion pipeline to reliably deliver billions of events daily within defined SLAs. Lead initiatives to improve scalability, performance and reliability. Provide support for all product teams in building and optimizing their complex pipelines. Identify and address pain points in the existing data platform; propose and implement high-leverage improvements. Develop new tools and frameworks to streamline the data platform workflows. Drive adoption of best practices in data and software engineering (testing, CI/CD, version control, monitoring). Work in close collaboration with data scientists and data analysts to help support their work in production. Support production ML workflows and real-time streaming use cases. Mentor other engineers and contribute to a culture of technical excellence and knowledge sharing.
It would be great if you also have: Experience working with messaging systems like Kafka or Redpanda. Experience with real-time data streaming platforms (e.g., Flink, Spark Structured Streaming). Familiarity with ML platforms or MLOps workflows.
Life at Truecaller, behind the code: https://www.instagram.com/lifeattruecaller/ Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore, India. We only accept applications in English.
What we offer: A smart, talented and agile team: an international team where ~35 nationalities are working together in several locations and time zones with a learning, sharing and fun environment. A great compensation package: competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, Udemy membership to keep learning and improving, and a wellness allowance. Great tech tools: pick the computer and phone that you fancy the most within our budget ranges. Office life: we strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast, and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break or join our exciting parties and/or team activities such as Lab days, sports meetups, etc. There's something for everyone! Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age, marital status, etc., all those things make you who you are, and that's why we would love to meet you.
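For the kind of high-volume event ingestion this role describes, a hedged Spark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet; the broker, topic, schema, and output paths are hypothetical, and the spark-sql-kafka connector is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_name", StringType()),
    StructField("event_ts", LongType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "app-events")                 # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date(F.from_unixtime("event_ts")))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "gs://example-events/curated/")          # hypothetical sink
    .option("checkpointLocation", "gs://example-events/_checkpoints/")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```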
Posted 1 month ago
1.0 - 4.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Hello, Truecaller is calling you from Bangalore, India! Ready to pick up? Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, and scam calls or messages, so you can focus on the conversations that matter. Top 20 most downloaded apps globally, and the world's #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million active users per month. Founded in 2009, listed on Nasdaq OMX Stockholm and categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. A team of 400 people from ~35 different nationalities spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon and Tel Aviv, with high ambitions.
We in the Data Infrastructure Team work on a vision to empower our product-oriented teams with a complete self-service analytics platform to make data-driven decisions possible in Truecaller, guaranteeing the stability, velocity, and delivery of the mainstream events pipeline. As a Senior Data Engineer (Platform), you will play an important role in the development of data pipelines, frameworks and models to support the understanding of our users and make better product decisions. You will contribute to empowering the product teams with a complete self-serve analytics platform by working on scalable and robust solutions while collaborating with data engineers, data scientists and data analysts across the company.
What You Bring In: 6+ years of hands-on experience in data engineering and large-scale distributed systems. Proven expertise in building and maintaining complex ETL/ELT pipelines. Deep knowledge of orchestration frameworks (e.g., Airflow) and workflow optimization. Strong cloud infrastructure experience (GCP preferred; AWS or Azure also relevant). Expert-level programming in Python or Scala; solid understanding of Spark internals. Experience with CI/CD tools (e.g., Jenkins, GitHub Actions) and infrastructure as code. Familiarity with managing self-hosted tools like Spark or Airflow on Kubernetes. Experience managing data warehouses in BigQuery or Redshift. Strong communication skills and a proactive, problem-solving mindset.
The Impact You Will Create: Design, build and optimize the data ingestion pipeline to reliably deliver billions of events daily within defined SLAs. Lead initiatives to improve scalability, performance and reliability. Provide support for all product teams in building and optimizing their complex pipelines. Identify and address pain points in the existing data platform; propose and implement high-leverage improvements. Develop new tools and frameworks to streamline the data platform workflows. Drive adoption of best practices in data and software engineering (testing, CI/CD, version control, monitoring). Work in close collaboration with data scientists and data analysts to help support their work in production. Support production ML workflows and real-time streaming use cases. Mentor other engineers and contribute to a culture of technical excellence and knowledge sharing.
It Would Be Great If You Also Have: Experience working with messaging systems like Kafka or Redpanda. Experience with real-time data streaming platforms (e.g., Flink, Spark Structured Streaming). Familiarity with ML platforms or MLOps workflows.
Life at Truecaller, behind the code: https://www.instagram.com/lifeattruecaller/ Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore, India. We only accept applications in English.
What We Offer: A smart, talented and agile team: an international team where ~35 nationalities are working together in several locations and time zones with a learning, sharing and fun environment. A great compensation package: competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, Udemy membership to keep learning and improving, and a wellness allowance. Great tech tools: pick the computer and phone that you fancy the most within our budget ranges. Office life: we strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast, and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break or join our exciting parties and/or team activities such as Lab days, sports meetups, etc. There's something for everyone! Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age, marital status, etc., all those things make you who you are, and that's why we would love to meet you.
Job info: Location: Bengaluru, Karnataka, India. Category: Data Engineering. Team: Platform.
Posted 1 month ago
6.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
Do you love leading data-driven transformations and mentoring teams in building scalable data platforms? We're looking for a Data Tech Lead to drive innovation, architecture, and execution across our data ecosystem.
Your Role: Lead the design and implementation of modern data architecture, ETL/ELT pipelines, and data lakes/warehouses. Set technical direction and mentor a team of talented data engineers. Collaborate with product, analytics, and engineering teams to translate business needs into data solutions. Define and enforce data modeling standards, governance, and naming conventions. Take ownership of the end-to-end data lifecycle: ingestion, transformation, storage, access, and monitoring. Evaluate and implement the right cloud/on-prem tools and frameworks. Troubleshoot and resolve complex data challenges while optimizing for performance and cost. Contribute to documentation, design blueprints, and knowledge sharing.
We're Looking For Someone With: Proven experience in leading data engineering or data platform teams. Expertise in designing scalable data architectures and modern data stacks. Strong hands-on experience with cloud platforms (AWS/Azure/GCP) and big data tools. Proficiency in Python, SQL, Spark, Databricks, or similar tools. A passion for clean code, performance tuning, and high-impact delivery. Strong communication, collaboration, and leadership skills.
Posted 1 month ago
2.0 - 4.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
Minimum qualifications: Bachelor's degree in Engineering, Computer Science, a related field, or equivalent practical experience. Experience coding with one or more programming languages (e.g., Java, C/C++, Python). Experience troubleshooting technical issues for internal/external partners or customers.
Preferred qualifications: Experience in distributed data processing frameworks and modern investigative and transactional data stores. Experience in working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures. Experience in big data, information retrieval, and data mining. Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL and MongoDB. Experience with Infrastructure as Code (IaC) and Continuous Integration/Continuous Delivery (CI/CD) tools like Terraform, Ansible, Jenkins, etc. Understanding of at least one database type with the ability to write complex SQL.
About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.
As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an in-depth understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering teams to build and constantly drive excellence in our products.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities: Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernization to Google Cloud Platform (GCP). Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud-native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 month ago
2.0 - 4.0 years
5 - 9 Lacs
Gurugram
Work from Office
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
Minimum qualifications: Bachelor's degree in Engineering, Computer Science, a related field, or equivalent practical experience. Experience coding with one or more programming languages (e.g., Java, C/C++, Python). Experience troubleshooting technical issues for internal/external partners or customers.
Preferred qualifications: Experience in distributed data processing frameworks and modern investigative and transactional data stores. Experience in working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures. Experience in big data, information retrieval, and data mining. Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL and MongoDB. Experience with Infrastructure as Code (IaC) and Continuous Integration/Continuous Delivery (CI/CD) tools like Terraform, Ansible, Jenkins, etc. Understanding of at least one database type with the ability to write complex SQL.
About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.
As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an in-depth understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering teams to build and constantly drive excellence in our products.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities: Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernization to Google Cloud Platform (GCP). Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud-native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 month ago
2.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
Minimum qualifications: Bachelor's degree in Engineering, Computer Science, a related field, or equivalent practical experience. Experience coding with one or more programming languages (e.g., Java, C/C++, Python). Experience troubleshooting technical issues for internal/external partners or customers.
Preferred qualifications: Experience in distributed data processing frameworks and modern investigative and transactional data stores. Experience in working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures. Experience in big data, information retrieval, and data mining. Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL and MongoDB. Experience with Infrastructure as Code (IaC) and Continuous Integration/Continuous Delivery (CI/CD) tools like Terraform, Ansible, Jenkins, etc. Understanding of at least one database type with the ability to write complex SQL.
About the job: The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.
As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an in-depth understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering teams to build and constantly drive excellence in our products.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities: Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads and partners to lead high-velocity migration and modernization to Google Cloud Platform (GCP). Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud-native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 month ago
8.0 - 10.0 years
20 - 35 Lacs
Greater Noida
Work from Office
We are seeking a seasoned Informatica CDGC expert to work with the Informatica team and lead the implementation and optimization of Informatica Cloud Data Governance and Catalog solutions. The ideal candidate will establish best practices, drive data governance initiatives, and mentor a team of data professionals to ensure a scalable and efficient governance framework aligned with business objectives.
Roles and Responsibilities: Lead the end-to-end implementation of Informatica Cloud Data Governance and Catalog (CDGC) solutions, ensuring timely and high-quality delivery. Design, configure, and deploy data governance frameworks using Informatica CDGC aligned with organizational standards and compliance requirements. Develop and implement best practices for metadata management, data lineage, data quality, and stewardship within the Informatica CDGC environment. Collaborate with cross-functional teams, including data architects, engineers, analysts, and business stakeholders, to drive data governance adoption. Provide expert guidance on data governance policies, workflows, and tool utilization to maximize the value of Informatica CDGC. Mentor and coach team members on technical and governance aspects of Informatica CDGC, fostering skill development and knowledge sharing. Troubleshoot and resolve complex technical issues related to Informatica CDGC deployment and integrations. Stay current with Informatica CDGC product updates, industry trends, and data governance best practices to continuously enhance governance capabilities. Create and maintain documentation, including architecture diagrams, configuration guides, and training materials. Support audit and compliance activities related to data governance and metadata management.
Requirements: Proven experience working with Informatica Data Governance and Catalog tools, preferably Cloud Data Governance and Catalog (CDGC). Strong understanding of data governance concepts, metadata management, data lineage, and data quality principles. Hands-on experience implementing and configuring Informatica CDGC solutions in enterprise environments. Proficiency with ETL/ELT processes, metadata integration, and data cataloging. Solid knowledge of data management frameworks and regulatory compliance (e.g., GDPR, CCPA). Excellent problem-solving and analytical skills with the ability to mentor and lead a team. Strong communication skills with experience working across technical and business stakeholders. Ability to create and deliver training sessions, workshops, and detailed technical documentation.
Posted 1 month ago
7.0 - 12.0 years
14 - 18 Lacs
Noida
Work from Office
Who We Are: Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.
What you need * BS in an Engineering or Science discipline, or equivalent experience * 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role * Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL) * Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments * Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW * Experience of working in larger initiatives building and rationalizing large-scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus * Willingness to experiment and learn new approaches and technology applications * Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users * Knowledge of software engineering and agile development best practices * Excellent written and verbal communication skills
The Brightly culture: We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
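As a hedged sketch of the event-streaming ingestion pattern listed among the requirements, a few lines of boto3 publishing a JSON record to an AWS Kinesis stream; the stream name, region, and event fields are placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {
    "asset_id": "pump-42",            # hypothetical event payload
    "metric": "vibration_mm_s",
    "value": 3.7,
    "ts": "2024-01-01T00:00:00Z",
}

response = kinesis.put_record(
    StreamName="asset-telemetry",          # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["asset_id"],        # keeps an asset's events ordered
)
print("sequence number:", response["SequenceNumber"])
```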
Posted 1 month ago
3.0 - 5.0 years
12 - 16 Lacs
Pune
Work from Office
Siemens Digital Industries Software is seeking a detail-oriented Data Analyst to join our Strategy team, where your analytical prowess will directly shape the future of industrial innovation. This pivotal role is crafted for an analytical thinker who thrives on solving complex problems and is eager to grow into a Data Science professional within our world-renowned technology ecosystem. As a Siemens Data Analyst, you'll have an outstanding opportunity to work at the intersection of AI-driven solutions and global industrial digitalization, with clear pathways for professional development and mentorship from world-class technology leaders. Your work will not just analyze data but will contribute to transformative technologies that solve real-world challenges for the Strategy organization.
Responsibilities: Develop and maintain robust data pipelines using tools like Snowflake and ELT/ETL processes. Effectively communicate complex data findings and recommendations to both technical and non-technical stakeholders. Gather, clean, and analyze large datasets to extract meaningful insights. Develop and maintain dashboards, reports, and visualizations to track key business metrics. Collaborate with partners to define data-driven solutions and support strategic decision-making. Identify trends, patterns, and correlations in data to improve business processes. Apply SQL, Python, or R to perform data manipulation and analysis. Ensure data accuracy, integrity, and consistency across reporting platforms. Work closely with data engineers to optimize data pipelines and database structures. Experiment with innovative AI technologies (Langchain for generative AI) to explore new marketing possibilities and develop data-driven recommendations. Explore opportunities to integrate machine learning and AI techniques into analysis over time.
Qualifications: Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. We are looking for a candidate with 3-5 years of experience in data analysis, business intelligence, data science or a related field. Proficiency in SQL for querying and handling data. Experience with data visualization tools (e.g., Tableau, Power BI, Looker, Matplotlib, Seaborn). Familiarity with Python or R for data analysis and automation, and with Git version control. A solid understanding of regression analysis and other statistical methods. Experience with AI/Machine Learning frameworks (e.g., Langchain) is a strong plus. Experience with Snowflake data warehousing is a plus. Adaptable and eager to learn new tools and technologies. A results-oriented approach with a good work ethic and a passion for innovation. Strong analytical and problem-solving skills with attention to detail. Excellent communication skills to collaborate with technical and non-technical teams. Willingness to expand into machine learning and data science methodologies. Works easily with a global team and builds positive relationships with a broad set of stakeholders that have diverse interests.
A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow!
We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare. Siemens Software. Transform the everyday with us. #li-plm #LI-Hybrid #SWSaaS
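For the kind of trend and regression analysis the role describes, a small hedged pandas sketch; the file path and column names are hypothetical placeholders.

```python
import pandas as pd
import numpy as np

# Hypothetical extract of a business dataset with a date and a revenue column.
df = pd.read_csv("bookings.csv", parse_dates=["booking_date"])

# Aggregate revenue to monthly totals.
monthly = (
    df.set_index("booking_date")
      .resample("MS")["revenue"]
      .sum()
      .reset_index()
)

# Simple least-squares trend of monthly revenue over time.
x = np.arange(len(monthly))
slope, intercept = np.polyfit(x, monthly["revenue"], deg=1)
print(f"average monthly revenue change: {slope:,.0f}")
```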
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.
Essential Responsibilities: Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations. Automate data pipeline deployment, monitoring, and maintenance tasks. Stay up to date with the latest advancements in data streaming technologies and best practices. Contribute to the development of data engineering standards and best practices within the organization. Participate in code reviews and contribute to a collaborative and supportive team environment. Work closely with other architects and tech leads in India and the US, and create POCs and MVPs. Provide regular updates on tasks, status and risks to the project manager.
The experience we are looking to add to our team. Required: Bachelor's degree or higher from a reputed university. 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry. Hands-on experience with ksqlDB for real-time data transformations and stream processing. Experience with Kafka Connect and building custom connectors. Extensive experience in implementing large-scale data ingestion and curation solutions. Good hands-on experience with the big data technology stack on any cloud platform. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team.
Good to have: Experience in Google Cloud. Healthcare industry experience. Experience in Agile.
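A hedged sketch of the ksqlDB side of such a pipeline: submitting a persistent CREATE STREAM ... AS SELECT statement over ksqlDB's REST API with the requests library. The server URL, stream names, and columns are assumptions, and the endpoint details should be checked against the ksqlDB documentation for the deployed version.

```python
import requests

KSQLDB_URL = "http://localhost:8088/ksql"  # assumed ksqlDB server endpoint

# Hypothetical persistent query: derive a filtered stream of high-value orders.
statement = """
  CREATE STREAM high_value_orders AS
    SELECT order_id, customer_id, amount
    FROM orders_stream
    WHERE amount > 1000
    EMIT CHANGES;
"""

resp = requests.post(
    KSQLDB_URL,
    headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
    json={"ksql": statement, "streamsProperties": {}},
)
resp.raise_for_status()
print(resp.json())  # command status returned for the submitted statement
```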
Posted 1 month ago
9.0 - 14.0 years
4 - 8 Lacs
Hyderabad
Work from Office
To be responsible for data modelling, design, and development of the batch and real-time extract, load, transform (ELT) processes, and the setup of the data integration framework, ensuring best practices are followed during integration development.
Bachelor's degree in CS/IT or related field (minimum)
Azure Data Engineer (ADF, ADLS, MS Fabric), Databricks
Azure DevOps, Confluence
Posted 1 month ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad
Work from Office
Mandatory skill: ETL_GCP_Bigquery
Develop, implement, and optimize ETL/ELT pipelines for processing large datasets efficiently.
Work extensively with BigQuery for data processing, querying, and optimization.
Utilize Cloud Storage, Cloud Logging, Dataproc, and Pub/Sub for data ingestion, storage, and event-driven processing.
Perform performance tuning and testing of the ELT platform to ensure high efficiency and scalability.
Debug technical issues, perform root cause analysis, and provide solutions for production incidents.
Ensure data quality, accuracy, and integrity across data pipelines.
Collaborate with cross-functional teams to define technical requirements and deliver solutions.
Work independently on assigned tasks while maintaining high levels of productivity and efficiency.
Skills Required
Proficiency in SQL and PL/SQL for querying and manipulating data.
Experience in Python for data processing and automation.
Hands-on experience with Google Cloud Platform (GCP), particularly:
BigQuery (must-have)
Cloud Storage
Cloud Logging
Dataproc
Pub/Sub
Experience with GitHub and CI/CD pipelines for automation and deployment.
Performance tuning and performance testing of ELT processes.
Strong analytical and debugging skills to resolve data and pipeline issues efficiently.
Self-motivated and able to work independently as an individual contributor.
Good understanding of data modeling, database design, and data warehousing concepts.
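As a rough illustration of the BigQuery-centric work described above, the following sketch runs a parameterized aggregation with the google-cloud-bigquery client; the project, dataset, table, and column names are placeholders rather than details from the role.

```python
# Minimal sketch: run a parameterized aggregation in BigQuery with the Python client.
# Project, dataset, table, and column names are placeholders, not real resources.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.analytics.events`
    WHERE event_date >= @start_date
    GROUP BY event_date
    ORDER BY event_date
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01")]
)
for row in client.query(query, job_config=job_config).result():
    print(row.event_date, row.events)
```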
Posted 1 month ago
6.0 - 10.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Develop, optimize, and maintain scalable data pipelines using Python and PySpark.
Design and implement data processing workflows leveraging GCP services such as:
BigQuery
Dataflow
Cloud Functions
Cloud Storage
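A minimal, hypothetical PySpark sketch of such a workflow follows; it assumes the spark-bigquery connector is available on the cluster, and the bucket, table, and column names are placeholders.

```python
# Minimal PySpark sketch: read from Cloud Storage, aggregate, write to BigQuery.
# Assumes the spark-bigquery connector is on the classpath; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery-sketch").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/raw/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

(daily.write
    .format("bigquery")
    .option("table", "my-project.analytics.daily_revenue")
    .option("temporaryGcsBucket", "example-bucket-tmp")
    .mode("overwrite")
    .save())
```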
Posted 1 month ago
8.0 - 12.0 years
20 - 32 Lacs
Hyderabad, Ahmedabad
Hybrid
We're Hiring: Senior Data Engineer – Azure & Snowflake Expert
Location: Hyderabad / Ahmedabad
Experience: 8–12 Years
Immediate Joiners Preferred
Are you passionate about designing scalable data pipelines and building high-performing data platforms in the cloud? We are looking for a Senior Data Engineer with strong hands-on expertise in Snowflake and Azure Data Factory to join our growing team.
Key Responsibilities:
Design and optimize scalable data pipelines for large datasets.
Develop and orchestrate ETL/ELT workflows using Azure Data Factory (ADF).
Manage data storage with Azure Blob Storage and ADLS Gen2.
Implement event-driven automations using Azure Logic Apps.
Write robust SQL queries and stored procedures, and build data models.
Ensure data quality, security, and governance practices are enforced.
Troubleshoot and optimize existing pipelines and infrastructure.
Must-Have Skills:
Expert-level Snowflake knowledge – design, development, and optimization (see the illustrative sketch below).
Proficiency in the Azure data ecosystem: ADF, Blob Storage, ADLS Gen2, Logic Apps.
Strong SQL expertise for complex data manipulation.
Familiarity with Git and version control.
Excellent problem-solving and communication skills.
Nice to Have:
Experience with dbt (data build tool).
Knowledge of Python and DevOps/CI-CD practices for data engineering.
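To make the Snowflake side of the role concrete, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders, and in practice such steps would typically be orchestrated from ADF or a scheduler rather than run ad hoc.

```python
# Minimal sketch: load a staged file and run a transformation in Snowflake.
# Account, credentials, stage, and table names are placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",
    user="ETL_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load raw files from an external stage into a staging table.
    cur.execute(
        "COPY INTO staging_orders FROM @adls_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Aggregate into a reporting table.
    cur.execute("""
        INSERT INTO ANALYTICS.MARTS.daily_orders
        SELECT order_date, COUNT(*) FROM staging_orders GROUP BY order_date
    """)
finally:
    conn.close()
```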
Posted 1 month ago
1.0 - 4.0 years
3 - 7 Lacs
Ahmedabad
Work from Office
About the Company
e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically effective and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest-growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid 3-day in office, 2-day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us
Position Summary
We are seeking a skilled Data Engineer to join our dynamic team. The Data Engineer will be responsible for designing, developing, and maintaining our data pipelines, integrations, and data warehouse infrastructure. The successful candidate will work closely with data scientists, analysts, and business stakeholders to ensure that our data is accurate, secure, and accessible for all users.
Responsibilities
Design and build scalable data pipeline architecture that can handle large volumes of data
Develop ELT/ETL pipelines to extract, load and transform data from various sources into our data warehouse
Optimize and maintain the data infrastructure to ensure high availability and performance
Collaborate with data scientists and analysts to identify and implement improvements to our data pipeline and models
Develop and maintain data models to support business needs
Ensure data security and compliance with data governance policies
Identify and troubleshoot data quality issues
Automate and streamline processes related to data management
Stay up-to-date with emerging data technologies and trends to ensure the continuous improvement of our data infrastructure and architecture
Analyze the data products and requirements to align with the data strategy
Assist in extracting or researching data for cross-functional business partners on the consumer insights, supply chain, and finance teams
Enhance the efficiency, automation, and accuracy of existing reports
Follow best practices in data querying and manipulation to ensure data integrity
Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Must have 8+ years of experience as a Snowflake Data Engineer or in a related role
Must have experience with Snowflake
Strong Snowflake experience building, maintaining, and documenting data pipelines
Expertise in Snowflake concepts such as RBAC management, virtual warehouses, file formats, streams, zero-copy clone, and time travel, and an understanding of how to use these features
Strong SQL development experience, including SQL queries and stored procedures
Strong knowledge of ELT/ETL no-code/low-code tools like Informatica / SnapLogic
Well versed in data standardization, cleansing, enrichment, and modeling (see the illustrative sketch below)
Proficiency in one or more programming languages such as Python, Java, or C#
Experience with cloud computing platforms such as AWS, Azure, or GCP
Knowledge of ELT/ETL processes, data warehousing, and data modeling
Familiarity with data security and governance best practices
Strong problem-solving and analytical skills, with hands-on experience improving the performance of processes
Strong communication and collaboration skills
Minimum work experience: 8 years. Maximum work experience: 13 years.
This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors' discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
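The data standardization and cleansing called out above could, as a purely illustrative example, look something like this pandas sketch run before data is loaded into the warehouse; the file name, column names, and rules are hypothetical.

```python
# Minimal sketch of standardization/cleansing ahead of a warehouse load.
# File name, column names, and rules are illustrative placeholders.
import pandas as pd

raw = pd.read_csv("vendor_feed.csv")

clean = (
    raw
    .rename(columns=lambda c: c.strip().lower().replace(" ", "_"))  # standardize headers
    .assign(
        email=lambda d: d["email"].str.strip().str.lower(),
        country=lambda d: d["country"].str.upper().replace({"UK": "GB"}),
        order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"),
    )
    .drop_duplicates(subset=["order_id"])
    .dropna(subset=["order_id", "order_date"])
)

clean.to_parquet("vendor_feed_clean.parquet", index=False)
```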
Posted 1 month ago
2.0 - 6.0 years
5 - 8 Lacs
Ahmedabad
Work from Office
About the Company
e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically effective and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest-growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid 3-day in office, 2-day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us
Position Summary
We are seeking a skilled Sr. Data Engineer to join our dynamic team. The Sr. Data Engineer will be responsible for designing, developing, and maintaining our data pipelines, integrations, and data warehouse infrastructure. The successful candidate will work closely with data scientists, analysts, and business stakeholders to ensure that our data is accurate, secure, and accessible for all users.
Responsibilities
Design and build scalable data pipeline architecture that can handle large volumes of data
Develop ELT/ETL pipelines to extract, load and transform data from various sources into our data warehouse
Optimize and maintain the data infrastructure to ensure high availability and performance
Collaborate with data scientists and analysts to identify and implement improvements to our data pipeline and models
Develop and maintain data models to support business needs
Ensure data security and compliance with data governance policies
Identify and troubleshoot data quality issues
Automate and streamline processes related to data management
Stay up-to-date with emerging data technologies and trends to ensure the continuous improvement of our data infrastructure and architecture
Analyze the data products and requirements to align with the data strategy
Assist in extracting or researching data for cross-functional business partners on the consumer insights, supply chain, and finance teams
Enhance the efficiency, automation, and accuracy of existing reports
Follow best practices in data querying and manipulation to ensure data integrity
Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Must have 15+ years of experience as a Data Engineer or in a related role
Must have experience with Snowflake
Strong Snowflake experience building, maintaining, and documenting data pipelines
Expertise in Snowflake concepts such as RBAC management, virtual warehouses, file formats, streams, zero-copy clone, and time travel, and an understanding of how to use these features
Strong SQL development experience, including SQL queries and stored procedures
Strong knowledge of ELT/ETL no-code/low-code tools like Informatica / SnapLogic
Well versed in data standardization, cleansing, enrichment, and modeling
Proficiency in one or more programming languages such as Python, Java, or C#
Experience with cloud computing platforms such as AWS, Azure, or GCP
Knowledge of ELT/ETL processes, data warehousing, and data modeling
Familiarity with data security and governance best practices
Strong problem-solving and analytical skills, with hands-on experience improving the performance of processes
Strong communication and collaboration skills
Minimum work experience: 15 years. Maximum work experience: 20 years.
This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors' discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 1 month ago