
16 Palantir Foundry Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

8 - 18 Lacs

Hyderabad

Work from Office


Hyderabad or Remote

Responsibilities:
* Design, develop, and maintain Palantir platforms using Foundry/Gotham/Apollo technologies.
* Collaborate with cross-functional teams on project delivery and support.

Send your resume to tanweer@cymbaltech.com

Posted 1 day ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office


Job Summary
* Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
* 6+ years of experience in data engineering or analytics, with 2+ years leading Palantir Foundry/Gotham implementations.
* Strong understanding of data integration, transformation, and modeling techniques.
* Proficiency in Python and SQL; experience with pipeline development using Palantir tools.
* Understanding of the Banking and Financial Services industry.
* Excellent communication and stakeholder management skills.
* Experience with Agile project delivery and cross-functional team collaboration.

Posted 2 days ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the role:
As a Senior Data Risk Manager, you will play a central role in shaping how Swiss Re identifies, assesses, and governs operational risks linked to data. Sitting in the 2nd Line of Defence, you will provide independent oversight, advise on control effectiveness, and challenge risk-taking decisions related to data use, storage, quality, lineage, and security. You'll also have the opportunity to influence our approach to data-related risks in AI and emerging technologies, helping shape governance practices that extend across a global enterprise.

Key Responsibilities:
* Design and enhance Swiss Re's Data Risk Control Framework by identifying and embedding key controls across the data lifecycle.
* Challenge and advise 1st Line teams on risk identification, assessment, and control adequacy related to data management and digital processes.
* Lead risk reviews and thematic assessments across digital services, systems, or strategic technology projects to surface and address data management risks.
* Monitor implementation of data risk controls across business units and functions, gathering feedback to support continuous improvement.
* Establish risk reporting and monitoring standards for data management risks at Group level, providing clear risk insights to senior stakeholders.
* Assess AI-related data risks, ensuring alignment with applicable internal governance and external regulatory frameworks.
* Engage regularly with senior stakeholders, promoting a strong risk culture and influencing data governance behaviour across the organisation.

About the team:
The Digital & Technology Risk Management (DTRM) team acts as the 2nd Line of Defence for all digital and technology-related risks at Swiss Re. We provide independent oversight, challenge, and insight across Swiss Re's global digital landscape. Serving as an independent partner to the business, we help shape the Group's risk posture across technology domains ranging from infrastructure and application security to digital innovation and AI. Our commitment lies in driving high standards of resilience, informed risk-taking, and sound control practices through strong engagement and credible challenge. From reviewing control frameworks to assessing emerging risks, we help shape responsible innovation and build resilience into every layer of our technology environment.

About you:
We are looking for a confident and forward-thinking risk professional with a deep understanding of data governance and its associated risks.

Experience & Capabilities:
* Minimum 7 years of experience in operational risk, digital/technology risk, or data governance roles, preferably within financial services, reinsurance, or consulting.
* Familiarity with data lifecycle and records management frameworks (e.g., DAMA-DMBOK) and their practical application across large organisations.
* Proven experience conducting risk assessments, spot checks, and thematic reviews in a complex, regulated environment.

Technical & Tooling:
* Familiarity with data quality assurance techniques, metadata management, and lineage tracking.
* Proficient in using data governance platforms (e.g., Collibra, Palantir Foundry) and supporting tools to analyse or visualise data flows and risks.
* Strong understanding of AI/ML data governance risks and regulatory developments (e.g., GDPR, AI Act, data ethics frameworks).

Behavioural & Interpersonal:
* Comfortable working independently, including collaboration with managers or stakeholders in different time zones.
* Strong stakeholder engagement and communication skills, with the ability to influence and challenge at all levels.
* Demonstrated ability to balance business enablement with effective risk management.

Certifications (Desirable):
* Certified Data Management Professional (CDMP)
* Certified in Risk and Information Systems Control (CRISC)
* Other data or risk-related qualifications are a plus

Reference Code: 134393

Posted 3 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office


We're Lear, For You
Lear, a global automotive technology leader in Seating and E-Systems, is making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are making every drive better. To know more about Lear, please visit our career site: www.lear.com

Job Title: Lead Data Engineer
Function: Data Engineering
Location: Bhosari, Pune

Position Focus:
As a Lead Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you're passionate about data engineering and have a track record of excellence, this role is for you!

Job Description:
* Manage execution of data-focused projects: As a senior member of the Lear Foundry team, support the design, build, and maintenance of data-focused projects using Lear's data analytics and application platforms. Participate in projects from conception to root cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle. Ensure project deliverables are met per the agreed timeline.
* Tools and technologies: Utilize key tools within Palantir Foundry, including Pipeline Builder (author data pipelines using a visual interface), Code Repositories (manage code for data pipeline development), and Data Lineage (visualize end-to-end data flows). Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new, scalable datasets. Mentor junior data engineers on best practices.
* Data pipeline architecture and development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and use Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes.
* Big data, dataset creation and maintenance: Use Pipeline Builder or Code Repositories to transform big data into manageable datasets, producing high-quality datasets that meet the organization's needs. Implement optimal build times to ensure effective utilization of resources.
* High-quality dataset production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets to reflect the latest information. Implement data quality health checks and validation.
* Collaboration and leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making.
* Continuous improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry's data engineering capabilities.
* Compliance and data security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them.
* Quality assurance & optimization: Optimize data pipelines and their impact on resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience:
* Minimum 5 years of experience in data engineering, ETL, and data integration.
* Proficiency in Python and libraries such as PySpark, Pandas, and NumPy.
* Strong understanding of Palantir Foundry and its capabilities.
* Familiarity with big data technologies (e.g., Spark, Hadoop, Kafka).
* Excellent problem-solving skills and attention to detail.
* Effective communication and leadership abilities.
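The "data quality health checks and validation" this listing calls for can be illustrated with a minimal sketch in plain Python. All function and field names here are invented for illustration; a real Foundry pipeline would use the platform's built-in dataset health checks.

```python
# Illustrative only: a null-rate health check that could gate a dataset build.
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def passes_health_check(rows, column, max_null_rate=0.05):
    """Fail the check if too many values in `column` are missing."""
    return null_rate(rows, column) <= max_null_rate

rows = [{"id": 1}, {"id": None}, {"id": 3}, {"id": 4}]
```

With one null out of four rows, `null_rate(rows, "id")` is 0.25, so the check fails at the default 5% threshold but passes at a looser 50% one.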

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site


Job Description: Data Engineer I (F Band)

About the Role:
As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions.

Key responsibilities include:
* Work closely with Product Owners and Engineering Leads to understand requirements and evaluate the implementation effort.
* Develop and maintain scalable data transformation pipelines.
* Implement analytics models and visualizations to provide actionable data insights.
* Collaborate within a global development team to design and deliver solutions.

About the Team:
Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain.

About You:
Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation.

Key qualifications include:
* Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline.
* 1-3 years of experience working with large-scale software systems.
* Proficient in Python/PySpark.
* Proficient in SQL (Spark SQL preferred).
* Palantir Foundry experience is a strong plus.
* Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
* Experience with JavaScript/HTML/CSS is a plus.
* Experience working in a cloud environment such as AWS or Azure is a plus.
* Strong analytical and problem-solving skills.
* Enthusiasm for working in a global and multicultural environment of internal and external professionals.
* Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments.

Reference Code: 134086

Posted 1 week ago

Apply

5.0 - 10.0 years

1 - 2 Lacs

Hyderabad

Work from Office


3 years of leadership experience. Strong in #Python programming, #PySpark queries, and #Palantir. Experience using tools such as #Git/#Bitbucket, #Jenkins/#CodeBuild, and #CodePipeline.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site


About the Role:
As a Data Engineer, you will be responsible for implementing data pipelines and analytics solutions to support key decision-making processes in our Life & Health Reinsurance business. You will become part of a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re. You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions.

Key responsibilities include:
* Work closely with Product Owners and Engineering Leads to understand requirements and evaluate the implementation effort.
* Develop and maintain scalable data transformation pipelines.
* Implement analytics models and visualizations to provide actionable data insights.
* Collaborate within a global development team to design and deliver solutions.

About the Team:
Life & Health Data & Analytics Engineering is a key tech partner for our Life & Health Reinsurance division, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. A large, globally distributed team working in an agile development landscape, we deliver solutions to make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain.

About You:
Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation.

Key qualifications include:
* Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline.
* 1-3 years of experience working with large-scale software systems.
* Proficient in Python/PySpark.
* Proficient in SQL (Spark SQL preferred).
* Palantir Foundry experience is a strong plus.
* Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
* Experience with JavaScript/HTML/CSS is a plus.
* Experience working in a cloud environment such as AWS or Azure is a plus.
* Strong analytical and problem-solving skills.
* Enthusiasm for working in a global and multicultural environment of internal and external professionals.
* Strong interpersonal and communication skills, demonstrating a clear and articulate standard of written and verbal communication in complex environments.

Reference Code: 134085

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office


Work Responsibilities
The Palantir Developer will be responsible for designing and implementing modern data architecture solutions that facilitate enterprise-level transformation. Key responsibilities include:
* Data Architecture Design: Create and optimize modern data architectures that support advanced analytics and operational requirements.
* Pipelining: Develop and maintain efficient data pipelines using Palantir Foundry to ensure seamless data flow and accessibility for analytics.
* Advanced Analytics: Create and deploy advanced analytics products that provide actionable insights to stakeholders, enhancing decision-making processes.
* Artificial Intelligence Integration: Collaborate with data scientists to incorporate AI and machine learning models into data pipelines and analytics products, enabling predictive capabilities.
* Agentic AI Exposure: Leverage knowledge of agentic AI to develop systems that can autonomously make decisions and take actions based on data insights, enhancing operational capabilities.
* Collaboration: Work closely with cross-functional teams, including data scientists, engineers, and business analysts, to gather requirements and deliver tailored solutions.
* Cloud Technologies: Utilize cloud-based tools and services to enhance the scalability, security, and performance of data solutions.
* Best Practices: Implement best practices for data governance, quality, and security to maintain data integrity and compliance with relevant regulations.
* Continuous Improvement: Identify opportunities for process improvements and automation to enhance operational efficiency within data ecosystems.
* Documentation: Maintain comprehensive documentation of data architecture designs, pipeline configurations, and analytics processes.

The Team: Artificial Intelligence & Data Engineering
In this age of disruption, organizations need to embrace data-driven decision-making to deliver enterprise value. Our team leverages data, analytics, robotics, and cognitive technologies to uncover insights and drive transformation in business. Key initiatives include:
* Data Ecosystem Implementation: Collaborate with clients to implement large-scale data ecosystems that integrate structured and unstructured data for comprehensive insights.
* Predictive Analytics: Utilize machine learning and predictive modeling techniques to derive actionable insights and predict future scenarios.
* AI Solutions Development: Work on developing AI-driven solutions that enhance data analytics capabilities, including natural language processing (NLP), computer vision, and recommendation systems.
* Agentic AI Development: Engage in projects that involve the development and deployment of agentic AI systems capable of autonomous decision-making and action-taking based on real-time data.
* Operational Efficiency: Drive operational efficiency by utilizing automation and cognitive techniques for data management, ensuring timely and accurate reporting.
* Client Engagement: Engage with clients to understand their unique challenges and tailor solutions that align with their strategic objectives.
* Innovative Solutions: Research and implement innovative technologies and methodologies that enhance data analytics capabilities and drive business value.
* Training and Support: Provide training and support to clients on data tools and platforms to ensure they can maximize the value of their data assets.

Qualifications Required
Education: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
Experience:
* 3+ years of hands-on experience in data extraction and manipulation using various tools and programming languages.
* 3+ years of experience engineering and developing Palantir pipelines, with a strong understanding of data integration techniques.
* 3+ years of experience collaborating with Palantir Foundry data scientists and engineers on complex data projects.
* 2+ years of experience working with AI and machine learning technologies, including model development, deployment, and performance tuning.
* Familiarity with agentic AI concepts and applications, including experience developing or working with autonomous systems, is a plus.
Technical Skills: Proficiency in programming languages such as Python, SQL, or R, along with experience in statistical analysis and machine learning techniques.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to think critically and creatively.
Communication: Excellent interpersonal and communication skills to effectively convey technical concepts to non-technical stakeholders.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid


Job Description:
We are looking for a skilled Palantir Foundry Developer with strong hands-on experience in data engineering using PySpark and SQL. The ideal candidate should be proficient in designing, building, and maintaining scalable data pipelines and integrating with Palantir Foundry environments.

Key Skills:
* Palantir Foundry (mandatory)
* PySpark, advanced SQL, and data modelling
* Data pipeline development and optimization
* ETL processes, data transformation
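As a rough sketch of the pipeline work such a role involves, the filter-and-aggregate step of an ETL transform can be written in plain Python. All names and fields below are hypothetical; a real Foundry pipeline would express this logic in PySpark or SQL.

```python
# Hypothetical sketch of an ETL transform's core step: filter out
# incomplete records, then aggregate amounts per region.
from collections import defaultdict

def clean_and_aggregate(rows):
    """Drop rows with a missing key or value, then total amounts per region."""
    totals = defaultdict(float)
    for row in rows:
        if not row.get("region") or row.get("amount") is None:
            continue  # data-quality filter: skip incomplete records
        totals[row["region"]] += float(row["amount"])
    return dict(totals)

sample = [
    {"region": "south", "amount": 10.0},
    {"region": "south", "amount": 5.5},
    {"region": None, "amount": 3.0},   # dropped: missing region
    {"region": "north", "amount": 2.0},
]
```

Here `clean_and_aggregate(sample)` yields `{"south": 15.5, "north": 2.0}`; the incomplete third record is filtered out rather than corrupting the aggregate.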

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad

Remote


Hiring for a top MNC (long-term contract): Data Engineer - Palantir

Technical capability:
* Foundry Certified (Data Engineering)
* Foundry Certified (Foundational)
* Time-series data: equipment & sensors - O&G context and engineering
* Ontology Manager
* Pipeline Builder
* Data Lineage
* Object Explorer
* Python & Spark (PySpark) - specifically PySpark, the extension of the big data platform Spark that Foundry uses
* SQL
* Mesa (Palantir proprietary language)

Experience: 5+ years

Soft skills:
* Strong communication skills (focus on O&G engineering); ability to engage with multiple Product Managers.
* Ability to work independently and be a voice of authority.

Interested candidates can share their resume: tejasri.m@i-q.co

Posted 3 weeks ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Hyderabad

Work from Office


Position: Palantir Foundry & PySpark Data Engineer
Location: Hyderabad (PG&E Office)
Key Skills: Palantir Foundry, Python, Spark, AWS, PySpark
Experience: 6-10 years

Responsibilities:
* Palantir Foundry experience (Code Repository, Contour, Data Connection, and Workshop) is a must-have.
* Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
* Collaborate with product and technology teams to design and validate the capabilities of the data platform.
* Identify, design, and implement process improvements: automating manual processes, optimizing for usability, re-designing for greater scalability.
* Provide technical support and usage guidance to the users of our platform's services.
* Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications:
* Experience building and optimizing data pipelines in a distributed environment.
* Experience supporting and working with cross-functional teams.
* Proficiency working in a Linux environment.
* 4+ years of advanced working knowledge of Palantir Foundry, SQL, Python, and PySpark.
* 2+ years of experience using a broad range of AWS technologies.
* Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline.
* Experience with platform monitoring and alerting tools.

Posted 1 month ago

Apply

8.0 - 13.0 years

5 - 12 Lacs

Mysuru, Pune

Hybrid


Role & responsibilities
- 4+ years of experience as a Data Engineer or in a similar role
- 3+ years of experience building data solutions at scale using one of the enterprise data platforms: Palantir Foundry, Snowflake, Cloudera/Hive, Amazon Redshift
- 3+ years of experience with SQL and NoSQL databases (Snowflake or Hive)
- 3+ years of hands-on experience programming in Python, Spark, or C#
- Experience with DevOps principles and CI/CD
- Strong understanding of ETL principles and data integration patterns
- Experience with Agile and iterative development processes is a plus
- Experience with cloud services such as AWS or Azure, and other big data tools like Spark and Kafka, is a plus (not mandatory)
- Knowledge of TypeScript and full-stack development experience is a plus (not mandatory)

Posted 1 month ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Hyderabad

Hybrid


About
We are hiring a Lead Data Solutions Engineer with expertise in PySpark, Python, and preferably Palantir Foundry. You will focus on transforming complex operational data into clear customer communications for Planned Power Outages (PPO) within the energy sector.

Role & responsibilities
* Build, enhance, and manage scalable data pipelines using PySpark and Python to process dynamic operational data.
* Interpret and consolidate backend system changes into single-source customer notifications.
* Leverage Foundry or equivalent platforms to build dynamic data models and operational views.
* Act as a problem owner for outage communication workflows and edge cases.
* Collaborate with operations and communication stakeholders to ensure consistent message delivery.
* Implement logic and validation layers to filter out inconsistencies in notifications.
* Continuously optimize data accuracy and message clarity.

Ideal profile
* 5+ years of experience in data engineering/data solutions.
* Strong command of PySpark, Python, and large-scale data processing.
* Experience in dynamic, evolving environments with frequent changes.
* Strong communication and collaboration skills.
* Ability to simplify uncertain data pipelines into actionable formats.

Nice to have
* Experience with Palantir Foundry, Databricks, or AWS Glue.
* Exposure to utility, energy, or infrastructure domains.
* Familiarity with customer communication systems, SLA governance, or outage scheduling.
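The consolidation-and-validation flow this role describes (many backend change events, one customer notification) can be sketched in plain Python. Every name and field below is invented for illustration; a production pipeline would implement the same logic in PySpark over the actual outage schema.

```python
# Hypothetical sketch: keep only the latest backend event per outage, then
# validate each candidate notification before it is sent.
def consolidate_outage_events(events):
    """Collapse repeated backend changes into the latest event per outage ID."""
    latest = {}
    for ev in events:
        oid = ev["outage_id"]
        if oid not in latest or ev["version"] > latest[oid]["version"]:
            latest[oid] = ev
    return list(latest.values())

def validate_notification(ev):
    """Validation layer: reject notifications with inconsistent time windows."""
    return ev["start_hour"] < ev["end_hour"]

events = [
    {"outage_id": "A1", "version": 1, "start_hour": 9, "end_hour": 12},
    {"outage_id": "A1", "version": 2, "start_hour": 10, "end_hour": 12},  # supersedes v1
    {"outage_id": "B2", "version": 1, "start_hour": 14, "end_hour": 13},  # invalid window
]
to_send = [e for e in consolidate_outage_events(events) if validate_notification(e)]
```

Only the latest valid event per outage survives: the superseded A1 v1 is collapsed away and the inconsistent B2 window is filtered out by the validation layer, leaving a single notification to send.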

Posted 1 month ago

Apply

10 - 17 years

20 - 25 Lacs

Kolkata

Remote


Role & Responsibilities
* Design and implement a next-generation digital twin platform for healthcare payer workflows.
* Model healthcare processes into DTDL or ontology formats using Azure Digital Twins or Palantir Foundry.
* Lead integration of Celonis and Camunda for process mining and workflow automation.
* Develop and maintain real-time telemetry pipelines to enable live optimization via AI agents.
* Collaborate with cross-functional teams, including product managers, AI/ML engineers, and domain experts.
* Own architectural decisions, ensuring scalability, security, and compliance with healthcare standards.
* Participate in technical reviews, performance tuning, and roadmap planning.

Preferred Candidate Profile
* 10+ years of experience in healthcare technology, digital twin, or enterprise architecture.
* Proven expertise in tools like Azure Digital Twins, Palantir Foundry, Celonis, and Camunda.
* Deep understanding of payer-side operations and medical management workflows.
* Familiarity with IoT/event-streaming architectures and AI/ML pipeline integration.
* Strong grasp of healthcare data standards such as FHIR and HL7 is a plus.
* Excellent communication and stakeholder management skills.
* Prior experience delivering enterprise healthcare solutions for Indian or global clients is desirable.

Posted 1 month ago

Apply

7 - 12 years

8 - 18 Lacs

Hyderabad

Remote


Job Title: Digital Twin Architect - Healthcare (India / Remote)
Location: Remote (India-based candidates preferred)
Experience: 7+ years
Start Date: Immediate / as per notice period
Type: Full-Time / Contract

About the Role
We are hiring experienced Digital Twin Architects to lead the design and build of a next-gen digital twin platform for medical management in the healthcare payer domain. This role involves working with Azure Digital Twins, Palantir Foundry, Celonis, and Camunda, and integrating AI/telemetry pipelines for real-time feedback and optimization. If you have strong healthcare domain knowledge, especially in payer systems, and hands-on experience in digital-twin/ontology solutions, this is a unique opportunity to work on a global project with cutting-edge technology.

Key Responsibilities
* Build and manage a digital-twin platform for payer-side medical workflows.
* Translate processes into DTDL or ontology models using Azure Digital Twins / Palantir Foundry.
* Use Celonis / Camunda for process mining and workflow improvements.
* Integrate AI agents and telemetry pipelines to enable live optimization.
* Collaborate with cross-functional teams (product, AI/ML, and business stakeholders).
* Own architectural decisions and ensure scalability and compliance.

Must-Have Skills
* 7+ years of experience in digital twin, graph/ontology, or healthcare architecture.
* Strong understanding of payer operations / medical management workflows.
* Experience with tools such as Azure Digital Twins or Palantir Foundry; Celonis, Camunda; event-streaming/IoT pipelines.
* Excellent communication and stakeholder management skills.

Good to Have
* Knowledge of FHIR / HL7 standards.
* Experience with AI/ML agent integration in live environments.
* Background in enterprise healthcare product implementation (India or global clients).

Why Join Us?
* Opportunity to work on a global healthcare product.
* Be part of a high-impact project using the latest in AI, IoT, and digital twin tech.
* Remote-first culture with cross-border collaboration.
* Competitive compensation and flexible work hours.

Posted 1 month ago

Apply

5 - 10 years

10 - 18 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Palantir Foundry
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed according to specifications and delivered on time. Your typical day will involve collaborating with the team to understand the requirements, designing and coding the applications, and testing and debugging them to ensure they function properly. You will also be involved in troubleshooting and resolving any issues that arise during the development process. Your creativity and technical expertise will play a crucial role in delivering high-quality applications.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Design and build applications according to business process and application requirements.
- Configure applications to ensure they meet the specified functionality.
- Collaborate with the team to understand the requirements and translate them into technical specifications.
- Code and test applications to ensure they function properly.
- Troubleshoot and resolve any issues that arise during the development process.

Professional & Technical Skills:
- Must-have skills: Proficiency in Palantir Foundry.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Palantir Foundry.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply