Lead - Data Engineering (Palantir)


Posted: 2 weeks ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

We’re Lear For You


Lear, a global automotive technology leader in Seating and E-Systems, is Making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are Making every drive better™.


To learn more about Lear, please visit our career site.


🚀 We're Hiring: Lead - Data Engineering (Palantir Foundry) at Lear Corporation!


Are you a data engineering expert ready to drive impact and support crucial decision-making? Join Lear Corporation as a **Senior Data Engineer** to take the lead on designing, building, and maintaining scalable data pipelines within Palantir Foundry.


About the Role:

In this role, you’ll collaborate with cross-functional teams to ensure data solutions are robust, reliable, and aligned with our organizational needs. Your expertise will shape the future of data-driven insights and support mission-critical projects that drive decision-making. You’ll also mentor junior team members, helping foster a collaborative and innovative environment.


Key Responsibilities:

1. Manage Execution of Data-Focused Projects:

  • As a senior member of the Lear Foundry team, support the design, building, and maintenance of data-focused projects using Lear’s data analytics and application platforms.
  • Participate in projects from conception through root-cause analysis to solution deployment.
  • Understand program and product delivery phases, contributing expert analysis across the lifecycle, and ensure project deliverables are met per the agreed timeline.

2. Tools and Technologies:

  • Utilize key tools within Palantir Foundry, including:
      • Pipeline Builder: author data pipelines using a visual interface.
      • Code Repositories: manage code for data pipeline development (a minimal transform sketch follows this section).
      • Data Lineage: visualize end-to-end data flows.

  • Leverage programmatic health checks to ensure pipeline durability.
  • Work with both new and legacy technologies to integrate separate data feeds and transform them into new scalable datasets.
  • Mentor junior data engineers on best practices.
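
To ground the Code Repositories workflow, here is a minimal sketch of a Foundry transform, assuming the standard `transforms.api` decorators available in Foundry Python repositories; the dataset paths and column names are hypothetical.

```python
# A minimal sketch of a Foundry Code Repositories transform.
# Assumes the standard transforms.api decorators; dataset paths
# and column names are hypothetical.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Lear/clean/orders_enriched"),   # hypothetical output dataset
    orders=Input("/Lear/raw/orders"),        # hypothetical input dataset
)
def compute(orders):
    # Drop rows missing the primary key and stamp each row with the
    # build time -- a typical first transformation step in a pipeline.
    return (
        orders
        .filter(F.col("order_id").isNotNull())
        .withColumn("ingested_at", F.current_timestamp())
    )
```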

3. Data Pipeline Architecture and Development:

  • Lead the design and implementation of complex data pipelines.
  • Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, using Git for version control and collaborative development.
  • Optimize data ingestion, transformation, and enrichment processes.

4. Big Data, Dataset Creation and Maintenance:

  • Use Pipeline Builder or Code Repositories to transform big data into manageable, high-quality datasets that meet the organization’s needs.
  • Optimize build times to ensure effective utilization of resources (see the incremental-build sketch after this item).
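
One common way to keep build times down is incremental computation. The sketch below assumes Foundry’s `@incremental` decorator from `transforms.api`; the paths and the deduplication key are illustrative.

```python
# A sketch of reducing build times with Foundry's incremental computation.
# Assumes transforms.api's @incremental decorator; paths and the
# deduplication key are hypothetical.
from transforms.api import transform_df, incremental, Input, Output


@incremental()
@transform_df(
    Output("/Lear/clean/events_deduped"),    # hypothetical output dataset
    events=Input("/Lear/raw/events"),        # hypothetical input dataset
)
def compute(events):
    # Under @incremental, `events` exposes only the rows appended since
    # the last successful build, so each run processes a small delta
    # instead of recomputing the full history.
    return events.dropDuplicates(["event_id"])
```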

5. High-Quality Dataset Production:

  • Produce and maintain datasets that meet organizational needs.
  • Optimize dataset sizes and build schedules so datasets reflect the latest information.
  • Implement data quality health checks and validation (a validation sketch follows this item).
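
As an illustration of programmatic data-quality validation, here is a plain PySpark sketch of the kind of check that could guard a dataset before publication; the column name and failure conditions are assumptions.

```python
# A minimal sketch of a programmatic data-quality check in PySpark.
# The key column and failure conditions are illustrative assumptions.
from pyspark.sql import DataFrame, functions as F


def validate_dataset(df: DataFrame, key_col: str = "order_id") -> None:
    total = df.count()
    null_keys = df.filter(F.col(key_col).isNull()).count()
    duplicate_keys = total - df.dropDuplicates([key_col]).count()

    # Fail the build early rather than publish a bad dataset downstream.
    if total == 0:
        raise ValueError("dataset is empty")
    if null_keys > 0:
        raise ValueError(f"{null_keys} rows have a null {key_col}")
    if duplicate_keys > 0:
        raise ValueError(f"{duplicate_keys} duplicate {key_col} values found")
```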

6. Collaboration and Leadership:

  • Work closely with data scientists, analysts, and operational teams.
  • Provide technical guidance and foster a collaborative environment.
  • Champion transparency and effective decision-making.

7. Continuous Improvement:

  • Stay abreast of industry trends and emerging technologies.
  • Enhance pipeline performance, reliability, and maintainability.
  • Contribute to the evolution of Foundry’s data engineering capabilities.

8. Compliance and Data Security:

  • Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, and continuously improve them.

9. Team Development and Collaboration:

  • Mentor junior team members and contribute to their growth.
  • Foster collaboration within cross-functional teams.
  • Share best practices and encourage knowledge sharing.

10. Quality Assurance & Optimization:

  • Optimize data pipelines and their impact on the resource utilization of downstream processes.
  • Continuously test and improve data pipeline performance and reliability.
  • Optimize system performance for all deployed resources.


Qualifications:

  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
  • Minimum 5 years of experience in data engineering, ETL, and data integration.
  • Proficiency in Python and libraries such as PySpark, pandas, and NumPy.
  • Strong understanding of Palantir Foundry and its capabilities.
  • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
  • Excellent problem-solving skills and attention to detail.
  • Effective communication and leadership abilities.


**Why Lear?**

Be part of a team that values innovation, excellence, and growth. Join us and make a real impact in a role that blends hands-on technical expertise with leadership in a collaborative, cutting-edge environment.


Ready to advance your data engineering career with Lear? **Apply now!**
