
41 SPARQL Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the source job portal.

6.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team. We are looking for a Semantic Web ETL Developer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange: discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore.

YOU'LL MAKE A DIFFERENCE BY:
- Implementing innovative products and solution-development processes and tools by applying your expertise in the field of responsibility.

JOB REQUIREMENTS:
- International experience with global projects and collaboration with intercultural teams is preferred.
- 6-8 years of experience developing software solutions in Python.
- Experience in research and development processes (software-based solutions and products), in commercial topics, and in implementing strategies and POCs.
- Manage end-to-end development of web applications and knowledge graph projects, ensuring best practices and high code quality.
- Provide technical guidance and mentorship to junior developers, fostering their growth and development.
- Design scalable and efficient architectures for web applications, knowledge graphs, and database models.
- Enforce code standards and perform code reviews, ensuring alignment with standard methodologies such as PEP 8, DRY, and SOLID principles.
- Collaborate with frontend developers, DevOps teams, and database administrators to deliver cohesive solutions.
- Expert-level proficiency in Python web frameworks (Django, Flask, FastAPI) and knowledge graph libraries.
- Experience designing and developing complex RESTful APIs and microservices architectures.
- Strong understanding of standard security processes in web applications (e.g., authentication, authorization, and data protection).
- Extensive experience building and querying knowledge graphs using Python libraries such as RDFLib, Py2neo, or similar (see the sketch below).
- Proficiency in SPARQL for advanced graph data querying.
- Experience with graph databases such as Neo4j, GraphDB, Blazegraph, or AWS Neptune.
- Experience in expert functions such as software development/architecture and software testing (unit testing, integration testing).
- Excellent DevOps practices, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Excellent grasp of cloud technologies and architecture; should have exposure to S3, EKS, ECR, and AWS Neptune.
- Exposure to and working experience in the relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities) required.

LEADERSHIP QUALITIES:
- Visionary leadership: ability to lead the team towards long-term technical goals while managing immediate priorities.
- Strong communication: good interpersonal skills to work effectively with both technical and non-technical stakeholders.
- Mentorship & coaching: foster a culture of continuous learning, skill development, and collaboration within the team.
- Conflict resolution: ability to manage team conflicts and provide constructive feedback to improve team dynamics.

Create a better #TomorrowWithUs!
This role is based in Bangalore, where you'll get the chance to work with teams impacting entire cities, countries, and the craft of things to come. We're Siemens: a collection of over 312,000 minds building the future, one day at a time, in over 200 countries. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us craft tomorrow. At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow's reality. Find out more about the Digital world of Siemens here: /digitalminds (http:///digitalminds)
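For context on the RDFLib/SPARQL skills the listing names, here is a minimal sketch of building a small RDF graph in Python and querying it with SPARQL. The ex: namespace and triples are invented for illustration and are not from the posting.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace for illustration only
EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Add a few triples describing a device and its label
g.add((EX.device1, RDF.type, EX.Device))
g.add((EX.device1, EX.manufacturedBy, EX.acme))
g.add((EX.device1, EX.label, Literal("Controller X")))

# SPARQL query over the in-memory graph
query = """
PREFIX ex: <http://example.org/>
SELECT ?device ?label WHERE {
    ?device a ex:Device ;
            ex:label ?label .
}
"""
for row in g.query(query):
    print(row.device, row.label)
```

The same query text works unchanged against an external triple store; only the execution mechanism (HTTP endpoint instead of an in-memory graph) differs.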

Posted 1 day ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: Data Science / AI Engineer, AVP
Location: Bangalore, India

Role Description: We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities:
- Design & develop agentic AI applications: utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG pipelines: integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems (see the retrieval sketch below).
- Fine-tune language models: customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER models: train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop knowledge graphs: construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate cross-functionally: work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI workflows: employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience:
- 10+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
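As a rough illustration of the RAG retrieval step described in the responsibilities above (not Deutsche Bank's actual implementation), here is a minimal dense-retrieval sketch with FAISS; the embeddings are random stand-ins for real embedding-model output.

```python
import faiss
import numpy as np

dim = 384  # embedding dimensionality; model-dependent, illustrative here
np.random.seed(0)

# Stand-in document embeddings; in a real RAG pipeline these come from
# an embedding model run over your document chunks.
doc_vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vectors)

# Embed the user query the same way, then retrieve the top-k chunks
query_vector = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query_vector, k=5)
print(ids[0])  # indices of the chunks to feed the LLM as context
```

The retrieved chunk texts would then be concatenated into the LLM prompt; hybrid setups additionally merge these hits with keyword or knowledge-graph results before ranking.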

Posted 2 days ago

Apply

6.0 years

15 - 17 Lacs

India

Remote

Source: LinkedIn

Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About the Company: Operating at the intersection of Artificial Intelligence, Cloud Infrastructure, and Enterprise SaaS, we create data-driven products that power decision-making for Fortune 500 companies and high-growth tech firms. Our multidisciplinary teams ship production-grade generative-AI and Retrieval-Augmented Generation (RAG) solutions that transform telecom, finance, retail, and healthcare workflows without compromising on scale, security, or speed.

Role & Responsibilities:
- Build & ship LLM/RAG solutions: design, train, and productionize advanced ML and generative-AI models (GPT-family, T5) that unlock new product capabilities.
- Own data architecture: craft schemas, ETL/ELT pipelines, and governance processes to guarantee high-quality, compliant training data on AWS.
- End-to-end MLOps: implement CI/CD, observability, and automated testing (Robot Framework, JMeter, XRAY) for reliable model releases.
- Optimize retrieval systems: engineer vector indices, semantic search, and knowledge-graph integrations that deliver low-latency, high-relevance results (see the sketch below).
- Cross-functional leadership: translate business problems into measurable ML solutions, mentor junior scientists, and drive sprint ceremonies.
- Documentation & knowledge-sharing: publish best practices and lead internal workshops to scale AI literacy across the organization.

Skills & Qualifications:
- Must-have, technical depth: 6+ years building ML pipelines in Python; expert in feature engineering, evaluation, and AWS services (SageMaker, Bedrock, Lambda).
- Must-have, generative AI & RAG: proven track record shipping LLM apps with LangChain or similar, vector databases, and synthetic-data augmentation.
- Must-have, data governance: hands-on experience with metadata, lineage, data cataloging, and knowledge-graph design (RDF/OWL/SPARQL).
- Must-have, MLOps & QA: fluency in containerization, CI/CD, and performance testing; ability to embed automation within GitLab-based workflows.
- Preferred, domain expertise: background in telecom or large-scale B2B platforms where NLP and retrieval quality are mission-critical.
- Preferred, full-stack & scripting: familiarity with Angular or modern JS for rapid prototyping, plus shell scripting for orchestration.

Benefits & Culture Highlights:
- High-impact ownership: green-field autonomy to lead flagship generative-AI initiatives used by millions.
- Flex-first workplace: hybrid schedule, generous learning stipend, and dedicated cloud credits for experimentation.
- Inclusive, data-driven culture: celebrate research publications, OSS contributions, and diverse perspectives while solving hard problems together.

Skills: data, modern JavaScript, cloud, vector databases, Angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
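The vector-index and semantic-search work mentioned above reduces to nearest-neighbour search over embeddings. A library-free sketch using cosine similarity follows; the vectors are random placeholders, and a production system would use a real embedding model plus an approximate-nearest-neighbour index.

```python
import numpy as np

def cosine_top_k(query_vec, doc_matrix, k=3):
    """Return indices and scores of the k rows of doc_matrix most similar to query_vec."""
    # Normalize so that dot products equal cosine similarities
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(scores)[::-1][:k]
    return order, scores[order]

# Placeholder embeddings; a real system would use a sentence-embedding model
docs = np.random.rand(100, 768)
query = np.random.rand(768)
idx, scores = cosine_top_k(query, docs)
print(idx, scores)
```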

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Data Architect – Ontology & Palantir Foundry
Location: Remote
Employment Type: Full-time

About Us: Our client is a leading AWS Premier Consulting Partner with over 400 tech professionals across the US, Canada, Dubai, and Asia. We specialize in delivering cutting-edge cloud, data, and AI/ML solutions to global enterprise clients.

Job Description: We are seeking experienced Data Architects with a strong background in ontology modeling and hands-on expertise in Palantir Foundry. The ideal candidates will play a key role in architecting scalable data solutions, developing semantic data models, and driving enterprise-wide data integration and governance initiatives.

Key Responsibilities:
- Design and implement ontology-driven data models to support complex analytical and operational use cases
- Architect and manage large-scale data pipelines and transformations within Palantir Foundry
- Collaborate with stakeholders to translate business needs into semantic models and data solutions
- Work closely with data engineering and data science teams to ensure data quality, consistency, and accessibility
- Lead efforts in metadata management, data lineage, and knowledge graph modeling

Required Skills & Experience:
- 7+ years of experience in data architecture, data modeling, or semantic technologies
- Strong expertise in ontology modeling (OWL, RDF, SPARQL, SKOS, etc.); see the sketch below
- Proven hands-on experience with Palantir Foundry, including ontology management, pipelines, and code workbooks
- Familiarity with knowledge graphs, semantic web standards, and enterprise data governance
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- Experience working with cloud-native data platforms (AWS preferred)
- Prior work in healthcare, finance, or government domains is a plus
- AWS certifications or equivalent cloud certifications are advantageous

Why Join Us?
- Work with a global leader in AWS data transformation projects
- Flexible remote work setup
- Opportunity to work on high-impact, mission-critical projects with Fortune 500 clients

Skills: cloud-native data platforms, data architecture, Palantir Foundry, ontology modeling, semantic technologies, knowledge graphs, data governance, data, Palantir, ontology, data modeling
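Ontology modeling with OWL/RDF, as required above, can be prototyped in a few lines of Python with rdflib. The classes and properties below are invented for illustration and are unrelated to Palantir Foundry's ontology layer.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)

# Declare two classes and a subclass relationship
g.add((EX.Asset, RDF.type, OWL.Class))
g.add((EX.Pipeline, RDF.type, OWL.Class))
g.add((EX.Pipeline, RDFS.subClassOf, EX.Asset))

# Declare an object property linking pipelines to the assets they produce
g.add((EX.produces, RDF.type, OWL.ObjectProperty))
g.add((EX.produces, RDFS.domain, EX.Pipeline))
g.add((EX.produces, RDFS.range, EX.Asset))

print(g.serialize(format="turtle"))
```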

Posted 3 days ago

Apply

0.0 - 5.0 years

18 - 24 Lacs

Bengaluru, Karnataka

Remote

Source: Indeed

Job Description: Are you an experienced graph database data engineer with a passion for designing and building cutting-edge data solutions? We're looking for a skilled professional to join our remote team. While this is a remote role, candidates must reside in Bangalore, Hyderabad, or Pune; a face-to-face (F2F) meeting is mandatory for the final discussion.

About the Role: We're seeking a talented Graph DB Data Engineer to design, implement, and maintain our graph database solutions. You'll play a crucial role in building robust data pipelines, ensuring data quality, and optimizing query performance, all while leveraging modern cloud and containerization technologies.

Required Skills & Experience:
- 6-9 years of hands-on experience as a Data Engineer, with a significant focus on graph databases.
- Deep understanding of graph database architecture, data structures, and operations.
- Expertise in graph query languages: SPARQL and Gremlin (see the sketch below).
- Proven experience with AWS Neptune.
- Strong proficiency in Python for data engineering tasks.
- Hands-on experience with EKS and Kubernetes (K8s) for deploying and managing applications.
- Familiarity with AWS services including S3 and Lambda.
- Experience with Terraform for Infrastructure as Code.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.

Job Types: Full-time, Permanent
Pay: ₹1,800,000.00 - ₹2,400,000.00 per year
Schedule: Day shift, Monday to Friday, morning shift
Experience:
- Graph database (architecture, data structures, operations): 6 years (required)
- Query languages (SPARQL and Gremlin): 6 years (required)
- AWS Neptune, S3, Lambda, Terraform: 5 years (required)
- Python: 5 years (preferred)
Location: Bangalore, Karnataka (required)
Work Location: Remote
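AWS Neptune serves a standard SPARQL 1.1 HTTP endpoint on port 8182, so the SPARQL skills above can be exercised with plain requests. The endpoint URL below is a placeholder; a real cluster is reachable only from inside its VPC and may additionally require IAM request signing, depending on configuration.

```python
import requests

# Placeholder endpoint; replace with your Neptune cluster address
NEPTUNE_SPARQL = "https://your-neptune-cluster:8182/sparql"

query = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

# SPARQL 1.1 protocol: POST the query as a form parameter
resp = requests.post(
    NEPTUNE_SPARQL,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```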

Posted 3 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

Remote

Source: LinkedIn

Key Responsibilities:
- Knowledge graph development: design and develop scalable knowledge graph architectures using RDF and OWL standards.
- SPARQL query design & optimization: write complex SPARQL queries to extract and manipulate data from triplestores or graph databases.
- Python-based application development: build modular, efficient, and testable Python applications that interact with semantic data and APIs.
- Data integration: integrate diverse data sources (structured, unstructured, linked data) into graph databases.
- Ontology engineering support: collaborate with ontologists and domain experts to extend and refine existing data ontologies.
- System integration & testing: ensure smooth integration of developed modules with other enterprise services and perform comprehensive unit and integration testing.

Technical Skills:
- Programming language: advanced proficiency in Python.
- Semantic web & graph technologies: solid understanding of knowledge graphs and RDF (Resource Description Framework); strong hands-on experience with SPARQL; familiarity with OWL, SHACL, and Linked Data principles.
- Graph databases: experience with graph databases such as Apache Jena, GraphDB, Blazegraph, Virtuoso, Stardog, or Neo4j (with RDF plugins).
- API integration: experience building or consuming RESTful APIs; familiarity with JSON-LD, Turtle, and RDF/XML formats (see the sketch below).
- Version control: proficiency in Git and GitHub/GitLab workflows.
- Testing & debugging: strong debugging skills, unit testing, and experience with testing frameworks.

Preferred Skills:
- Familiarity with Natural Language Processing (NLP) concepts and libraries
- Experience with ETL tools, data pipelines, or data ingestion frameworks
- Exposure to Docker, Kubernetes, or other cloud-native environments
- Experience with CI/CD

Skills & Qualities:
- Excellent analytical and problem-solving skills
- Strong communication and collaboration abilities in a remote team setting
- Attention to detail and a passion for writing clean, maintainable code
- Ability to manage time efficiently and work independently

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. (ref:hirist.tech)
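The serialization formats listed above (Turtle, JSON-LD, RDF/XML) are interconvertible; a small rdflib sketch follows. The sample triples are invented, and the built-in json-ld serializer assumes rdflib 6.0 or later.

```python
from rdflib import Graph

# A tiny Turtle document; contents are illustrative only
turtle_data = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:alice ex:name "Alice" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# Round-trip the same triples into two other RDF serializations
print(g.serialize(format="json-ld"))   # JSON-LD (built into rdflib >= 6.0)
print(g.serialize(format="xml"))       # RDF/XML
```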

Posted 4 days ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

We're building the technological foundation for our company's Semantic Layer, a common data language powered by Anzo / Altair Graph Studio. As a Senior Software Engineer, you'll play a critical role in setting up and managing this platform on AWS EKS, enabling scalable, secure, and governed access to knowledge graphs, parallel processing engines, and ontologies across multiple domains, including highly sensitive ones like clinical trials. You'll help design and implement a multi-tenant, cost-aware, access-controlled infrastructure that supports internal data product teams in securely building and using connected knowledge graphs.

Key Responsibilities:
- Implement a Semantic Layer on Anzo / Altair Graph Studio and Anzo Graph Lakehouse in a Kubernetes or ECS environment (EKS / ECS)
- Develop and manage Infrastructure as Code using Terraform and configuration management via Ansible
- Integrate platform authentication and authorization with Microsoft Entra ID (Azure AD)
- Design and implement multi-tenant infrastructure patterns that ensure domain-level isolation and secure data access
- Build mechanisms for cost attribution and usage visibility per domain and use-case team (see the sketch below)
- Implement fine-grained access control, data governance, and monitoring for domains with varying sensitivity (e.g., clinical trials)
- Automate deployment pipelines and environment provisioning for dev, test, and production environments
- Collaborate with platform architects, domain engineers, and data governance teams to curate and standardize ontologies

Minimum Requirements:
- 4-9 years of experience in software/platform engineering, DevOps, or cloud infrastructure roles
- Proficiency in Python for automation, tooling, or API integration
- Hands-on experience with AWS EKS / ECS and associated services (IAM, S3, CloudWatch, etc.)
- Strong skills in Terraform / Ansible / IaC for infrastructure provisioning and configuration
- Familiarity with RBAC, OIDC, and Microsoft Entra ID integration for enterprise IAM
- Understanding of Kubernetes multi-tenancy and security best practices
- Experience building secure and scalable platforms supporting multiple teams or domains

Preferred Qualifications:
- Experience deploying or managing Anzo, Altair Graph Studio, or other knowledge graph / semantic layer tools
- Familiarity with RDF, SPARQL, or ontologies in an enterprise context
- Knowledge of data governance, metadata management, or compliance frameworks
- Exposure to cost management tools like AWS Cost Explorer / Kubecost or custom chargeback systems

Why Join Us:
- Be part of a cutting-edge initiative shaping enterprise-wide data access and semantics
- Work in a cross-functional, highly collaborative team focused on responsible innovation
- Influence the architecture and strategy of a foundational platform from the ground up
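One common way to deliver the per-domain cost attribution this role describes is consistent resource tagging that AWS Cost Explorer can then slice on. A hedged boto3 sketch follows; the bucket names and tag keys are hypothetical, and a real platform would usually apply such tags through Terraform rather than ad hoc scripts.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical mapping of buckets to tenant domains for chargeback
TENANT_BUCKETS = {
    "kg-clinical-trials-data": "clinical-trials",
    "kg-manufacturing-data": "manufacturing",
}

for bucket, domain in TENANT_BUCKETS.items():
    # Tags like these can be activated as cost-allocation tags so that
    # Cost Explorer reports can be broken down per domain.
    s3.put_bucket_tagging(
        Bucket=bucket,
        Tagging={"TagSet": [
            {"Key": "tenant-domain", "Value": domain},
            {"Key": "cost-center", "Value": f"semantic-layer/{domain}"},
        ]},
    )
```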

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Are you passionate about transforming complex data into powerful knowledge graphs? We're looking for a Graph DB Data Engineer to join our team and shape the future of intelligent data systems.

What You'll Do:
- Design and develop scalable graph database solutions, ideally using AWS Neptune
- Build and manage robust data pipelines to ingest and transform structured and unstructured data
- Write performant queries using SPARQL and Gremlin (see the sketch below)
- Leverage Python for automation, data processing, and integration tasks
- Work across AWS services including EKS, Kubernetes, Lambda, and S3

What We're Looking For:
- Strong expertise in graph database architectures, structures, and query languages
- Solid experience building end-to-end data pipelines
- Proficiency in Python programming
- Familiarity with the AWS cloud environment, especially Neptune, Lambda, S3, and EKS
- A problem-solving mindset and a passion for clean, scalable data engineering

Work mode: Remote

If you thrive in dynamic environments and love working with cutting-edge technologies, we want to hear from you. Interested or know someone great? Let's connect!
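Gremlin queries against Neptune are typically issued from Python with the gremlinpython driver; a minimal sketch with a placeholder endpoint is below.

```python
from gremlin_python.driver import client

# Placeholder endpoint; Neptune serves Gremlin over websockets on port 8182
gremlin_client = client.Client(
    "wss://your-neptune-cluster:8182/gremlin", "g"
)

try:
    # Count vertices by label: a typical first sanity check on a graph
    result = gremlin_client.submit(
        "g.V().groupCount().by(label)"
    ).all().result()
    print(result)
finally:
    gremlin_client.close()
```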

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Required Skills:
- Hands-on experience with the Linux 8.x operating system for 5 years at an advanced level.
- Experience with service-oriented architecture, distributed systems, scripting (e.g., Python and shell), and relational databases (e.g., Sybase, DB2, SQL, Postgres).
- Hands-on experience with web servers (Apache / Nginx) and application servers (Tomcat / JBoss), including application integration, configuration, and troubleshooting.
- Hands-on experience with Docker containers, Kubernetes, and SaaS platform integration.
- Exposure to and experience with messaging technology like Kafka.
- Clear concept of load balancers, web proxies, and storage platforms like NAS / SAN, from an implementation perspective only.
- Familiarity with basic security policies for secure hosting solutions, Kerberos, and standard encryption methodologies including SSL and TLS.
- Prior experience managing large web-based n-tier applications in secure environments on cloud.
- Strong knowledge of SRE principles, with a grasp of the tools and approaches to apply them.
- Strong infrastructure knowledge in Linux / Unix administration, storage, networking, and web technologies.
- Experience troubleshooting application issues and managing incidents.
- Exposure to tools like OpenTelemetry, Prometheus, Grafana, Splunk, and Ansible.
- Excellent verbal and written communication skills.

Desired / Nice-to-have Skills:
- Exposure to big data platforms like Hadoop / Cloudera and the ELK Stack
- Working knowledge of a workflow orchestration tool like Airflow
- Familiarity with caching DBs like Redis, NoSQL databases, and SPARQL
- Capacity planning and performance tuning exercises
- Identity management protocols like OIDC / OAuth, SAML, and LDAP integration
- Cloud application and respective infrastructure knowledge is a plus
- Working knowledge of GenAI and LLM models
- Experience in cloud / distributed computing technology, or certification, is a plus

Experience: 5 to 8 years in a similar role as a hands-on application / middleware specialist. Prior experience working in a global financial organization is an advantage.

Location: The candidate will be based at Morgan Stanley's office in Mumbai. NFR Tech is looking to onboard an application support and SRE specialist for their Application and Data Engineering (ADE) group. ADE provides application engineering, tooling, automation, and elevated production support services conforming to company security blueprints and focused on performance, reliability, and scalability: understanding technical requirements from application owners and the business, participating in technical evaluation of vendors and vendor technologies, conducting proofs of concept, and packaging and deploying middleware products.

Skills: Linux, Python/Shell, Databases (Sybase, DB2), Web Servers

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Company Description: NXP Semiconductors enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, we are driving innovation in the secure connected vehicle, end-to-end security & privacy, and smart connected solutions markets.

Organization Description: Do you feel challenged by being part of the IT department of NXP, the company with a mission of "Secure Connections for a Smarter World"? Do you perform best in a role representing IT in projects in a fast-moving, international environment? Within R&D IT Solutions, the Product Creation Applications (PCA) department is responsible for providing and supporting the R&D design community globally with best-in-class applications and support. The applications are used by over 6,000 designers.

Job Summary: As a Graph Engineer, you will:
- Develop pipelines and code to support the ingress and egress of data to and from the knowledge graphs.
- Perform basic and advanced graph querying and data modeling on the knowledge graphs that lie at the heart of the organization's Product Creation ecosystem.
- Maintain the ETL pipelines, code, and knowledge graph to stay scalable, resilient, and performant in line with customers' requirements.
- Work in an international and Agile DevOps environment.

This position offers an opportunity to work in a globally distributed team, where you will get a unique opportunity for personal development in a multicultural environment. You will also get a challenging environment in which to develop expertise in technologies useful in the industry.

Primary Responsibilities:
- Translate requirements of business functions into "graph thinking".
- Build and maintain graphs and related applications from data and information, using the latest graph technologies to leverage high-value use cases.
- Support and manage graph databases.
- Integrate graph data from various sources, internal and external.
- Extract data from various sources, including databases, APIs, and flat files.
- Load data into target systems, such as data warehouses and data lakes.
- Develop code to move data (ETL) from the enterprise platform applications into the enterprise knowledge graphs (a sketch of this pattern follows below).
- Optimize ETL processes for performance and scalability.
- Collaborate with data engineers, data scientists, and other stakeholders to model the graph environment to best represent the data coming from the multiple enterprise systems.

Skills / Experience:
- Semantic web technologies: RDF, RDFS, OWL, SHACL; SPARQL; JSON-LD, N-Triples/N-Quads, Turtle, RDF/XML, TriX
- API-led architectures: REST, SOAP, microservices, API management
- Graph databases such as Dydra, Amazon Neptune, Neo4j, Oracle Spatial & Graph is a plus
- Experience with other NoSQL databases, such as key-value databases and document-based databases (e.g. XML databases), is a plus
- Experience with relational databases
- Programming experience, preferably Java, JavaScript, Python, PL/SQL
- Experience with web technologies: HTML, CSS, XML, XSLT, XPath
- Experience with modelling languages such as UML
- Understanding of CI/CD automation, version control, build automation, testing frameworks, static code analysis, IT service management, artifact management, container management, and experience with related tools and platforms
- Familiarity with cloud computing concepts (e.g. in AWS and Azure)
Education & Personal Skillsets:
- A master's or bachelor's degree in computer science, mathematics, electronics engineering, or a related discipline, with at least 10 years of experience in a similar role
- Excellent problem-solving and analytical skills
- A growth mindset with a curiosity to learn and improve
- Team player with strong interpersonal, written, and verbal communication skills
- Business consulting and technical consulting skills
- An entrepreneurial spirit and the ability to foster a positive and energized culture
- Fluent communication skills in English (spoken and written)
- Experience working in Agile (Scrum knowledge appreciated) with a DevOps mindset

More information about NXP in India...
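The ingress pattern referenced in the responsibilities above, moving relational data into a knowledge graph, reduces to reading rows and emitting triples. A self-contained sketch follows, with SQLite standing in for the enterprise source and an invented namespace; the part names are sample data only.

```python
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/pca#")  # illustrative namespace

# Stand-in relational source; a real pipeline would read from an
# enterprise database, an API, or flat files.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER, name TEXT, family TEXT)")
conn.executemany(
    "INSERT INTO parts VALUES (?, ?, ?)",
    [(1, "LPC55S69", "MCU"), (2, "S32K344", "MCU")],
)

# Transform: one relational row becomes several triples
g = Graph()
for part_id, name, family in conn.execute("SELECT id, name, family FROM parts"):
    subject = EX[f"part/{part_id}"]
    g.add((subject, RDF.type, EX.Part))
    g.add((subject, EX.name, Literal(name)))
    g.add((subject, EX.family, Literal(family)))

# Load: serialize as N-Triples for bulk ingestion into the graph store
print(g.serialize(format="nt"))
```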

Posted 1 week ago

Apply

4.0 - 6.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do: In this vital role you will join Research's Semantic Graph Team as a dedicated and skilled Semantic Data Engineer building and optimizing knowledge graph-based software and data resources. The role primarily focuses on technologies such as RDF, SPARQL, and Python, and involves semantic data integration and cloud-based data engineering. The ideal candidate has experience in the pharmaceutical or biotech industry, deep technical skills, proficiency with big data technologies, and demonstrated experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential. In this role, you will construct semantic data pipelines, integrate both relational and graph-based data sources, ensure seamless data interoperability, and leverage cloud platforms to scale data solutions effectively.

Roles & Responsibilities:
- Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies (see the query sketch below).
- Develop and maintain semantic data models for biopharma scientific data.
- Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks.
- Ensure interoperability across federated data sources, linking relational and graph-based data.
- Implement and optimize CI/CD pipelines using GitLab and AWS.
- Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions.
- Collaborate with global multi-functional teams, including research scientists, data architects, business SMEs, software engineers, and data scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions.
- Collaborate with data scientists, engineers, and domain experts to improve research data accessibility.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Maintain comprehensive documentation of processes, systems, and solutions.
- Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field

Preferred Qualifications and Experience:
- 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills, Must-Have:
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices.
- Cloud and automation expertise: good experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions.
- Technical problem-solving: excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets.

Good-to-Have Skills:
- Experience in biotech/drug discovery data engineering
- Experience applying knowledge graphs, taxonomy, and ontology concepts in the life sciences and chemistry domains
- Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune)
- Familiarity with Cypher, GraphQL, or other graph query languages
- Experience with big data tools (e.g. Databricks)
- Experience in biomedical or life sciences research data management

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
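Remote and federated SPARQL access, as in the interoperability responsibilities above, is commonly scripted with SPARQLWrapper. The sketch below runs against the public Wikidata endpoint purely for illustration; an internal pipeline would target the organization's own triple store, and a federated query would add a SERVICE clause against a second endpoint.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public endpoint used purely for illustration; an internal pipeline would
# point at the company's own triple store (e.g. AllegroGraph).
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setReturnFormat(JSON)

# A federated variant would wrap part of the pattern in
# SERVICE <other-endpoint> { ... } to join across stores.
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q11173 .   # instances of "chemical compound"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```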

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Source: Naukri

Role & Responsibilities:
- Expertise in graph databases: a deep understanding of graph database architecture, structures, and operations, and of query languages such as SPARQL and Gremlin. Experience with AWS Neptune is preferred.
- Knowledge of data pipelines: proficiency in designing and managing data pipelines is crucial for ensuring the efficient flow and transformation of data into the knowledge graph.
- High level of proficiency in Python programming.
- AWS services including EKS, K8s, S3, and Lambda.

Secondary Skills: CI/CD, Kubernetes, Docker

Compulsory: expertise in graph databases and Python programming.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Company: Indian / Global Engineering & Manufacturing Organization

Key Skills: ETL/ELT, RDF, OWL, SPARQL, Neo4j, AWS Neptune, ArangoDB, Python, SQL, Cypher, semantic modeling, cloud data pipelines, data quality, knowledge graphs, graph query optimization, semantic search.

Roles and Responsibilities:
- Design and build advanced data pipelines for integrating structured and unstructured data into graph models (see the Cypher sketch below).
- Develop and maintain semantic models using RDF, OWL, and SPARQL.
- Implement and optimize data pipelines on cloud platforms such as AWS, Azure, or GCP.
- Model real-world relationships through ontologies and hierarchical graph data structures.
- Work with graph databases such as Neo4j, AWS Neptune, and ArangoDB for knowledge graph development.
- Collaborate with cross-functional teams, including AI/ML and business analysts, to support semantic search and analytics.
- Ensure data quality, security, and compliance throughout the pipeline lifecycle.
- Monitor, debug, and enhance the performance of graph queries and data transformation workflows.
- Create clear documentation and communicate technical concepts to non-technical stakeholders.
- Participate in global team meetings and knowledge-sharing sessions to align on data standards and architectural practices.

Experience Requirement:
- 3-8 years of hands-on experience in ETL/ELT engineering and data integration.
- Experience working with graph databases such as Neo4j, AWS Neptune, or ArangoDB.
- Proven experience implementing knowledge graphs, including semantic modeling using RDF, OWL, and SPARQL.
- Strong Python and SQL programming skills, with proficiency in Cypher or other graph query languages.
- Experience designing and deploying pipelines on cloud platforms (AWS preferred).
- Track record of resolving complex data quality issues and optimizing pipeline performance.
- Previous collaboration with data scientists and product teams to implement graph-based analytics or semantic search features.

Education: Any graduate.
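For the Neo4j/Cypher side of the stack above, here is a minimal load sketch with the official neo4j Python driver; the URI, credentials, labels, and property names are placeholders.

```python
from neo4j import GraphDatabase

# Placeholder connection details
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_relationship(tx, src, rel, dst):
    # MERGE is idempotent: the pipeline can be re-run without duplicating nodes
    tx.run(
        "MERGE (a:Entity {name: $src}) "
        "MERGE (b:Entity {name: $dst}) "
        "MERGE (a)-[:RELATES {type: $rel}]->(b)",
        src=src, dst=dst, rel=rel,
    )

with driver.session() as session:
    session.execute_write(load_relationship, "PumpA", "feeds", "TankB")
    count = session.run("MATCH (n:Entity) RETURN count(n) AS c").single()["c"]
    print(count)

driver.close()
```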

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Employment type: Freelance, project-based

What this is about: At e2f, we offer an array of remote opportunities to work on compelling projects aimed at enhancing AI capabilities. As a key team member, you will help shape the future of AI-driven solutions. We value your skills and domain expertise, offering competitive compensation and flexible working arrangements.

Job Description: We are looking for an experienced Data Analyst with a strong background in SPARQL for a project-based position. The ideal candidate will be responsible for writing, reviewing, and optimizing queries to extract valuable insights from our knowledge base.

Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field
- Proven experience with SPARQL
- Familiarity with the Cypher query language
- Expertise in knowledge graphs
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to prioritize and manage workload efficiently
- Understanding of and adherence to project guidelines and policies

Responsibilities:
- Commit a minimum of 4 hours per day on a flexible schedule (you can split your hours as you prefer)
- Participate in a training meeting
- Adhere to deadlines and guideline standards

What We Offer:
- Engage in exciting generative-AI development from the convenience of your home
- Enjoy flexible work hours and availability

If you're interested: Apply to our job advertisement. We'll review your profile and, if it aligns with our search, we will contact you as soon as possible to share rates and further details.

About Us: e2f is dedicated to facilitating natural communication between people and machines across languages and cultures. With expertise in data science, we provide top-tier linguistic datasets for AI and NLP projects. Know more here: www.e2f.com

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Who We Are: Euromonitor International is the leading independent market research company, investigating and understanding what consumers want and need, and helping businesses create products and services that cater to their preferences and trends. We are an organisation that champions flexibility, with opportunity to grow and be supported with continuous learning and development.

What You Will Be Doing: The successful candidate will use and explore various innovative economic, econometric, and statistical modelling tools that best help us address our clients' strategic objectives. A typical modelling and analytics project starts with the identification and conceptualization of our clients' needs and objectives. This is followed by an evaluation of all possible solutions for their feasibility given the available data and the client's objectives. The final stage often involves the identification of the most optimal economic or econometric solution. As the data analyst gains experience in building client solutions, we will look to involve him/her in project opportunity screening and proposal building, developing modelling updates, and driving client presentations and/or meetings.

Key drivers:
- Research and analyze economic, demographic, and industrial data from around the world.
- Use econometric modelling techniques as well as analytical judgment to come up with custom solutions for our clients.
- Participate in internal peer review meetings and contribute to the best-solution search process, as well as commenting on already-created preliminary solutions.
- Present and visualize model results to clients in an intuitive manner.
- Draw conclusions based on the analysis of our results.
- Monitor the academic press for the latest developments in economics and statistics to make sure we use cutting-edge analytical techniques.
- Properly document each project and share best modelling practices.
- Commission and organize research, standardization, and modelling by freelance associates in Lithuania and around the world.
- Liaise with the sales and marketing department to evaluate client inquiries.

What You'll Need:
- Excellent communication skills and English fluency (both oral and written)
- Understanding of and interest in international economic, demographic, and industry trends
- A good working knowledge of R (mandatory)
- Experience in SQL, SPARQL, JavaScript, HTML, or similar would be an advantage
- Excellent analytics skills
- Excellent organizational skills and creativity
- The confidence and ability to take the post forward with a minimum of supervision
- Strong knowledge of mathematics / statistics / econometrics
- A genuine interest in Artificial Intelligence and related fields like data science and machine learning would be an advantage
- Knowledge of economics would be an advantage
- An M.A./M.Sc. degree in economics, statistics, econometrics, mathematics, physics, operations research, or a similar field; highly skilled candidates with a B.A./B.Sc. will also be considered
Candidates with B.Tech/M.Tech or MBA degrees will not be considered.

What you'll get:
- Professional development: grow your career with opportunities within a consultative and professional environment
- Flexible work schedule: achieve a healthy work-life balance with our flexible work schedule options, including remote work opportunities and flexible hours
- Positive work environment: join a collaborative and inclusive workplace culture where your ideas are valued, diversity is celebrated, and teamwork is encouraged
- Community involvement: make a positive impact in the community through our volunteer programs, charitable initiatives, and corporate social responsibility efforts (and more!)

Our Values:
- We act with integrity
- We are curious about the world
- We are stronger together
- We seek to empower
- We find strength in diversity

Posted 2 weeks ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do, Role Description: We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge of graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows (see the PySpark sketch below).
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you, Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, MarkLogic, etc.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience in the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
- Deep expertise in the biotech & pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3-4+ years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 5-8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly, be organized, and be detail-oriented
- Strong presentation and public speaking skills
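A rough sketch of the Databricks/PySpark shape of work described above: deriving a deduplicated edge list from tabular data ahead of a graph load. The dataset, column names, and output path are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("graph-edge-prep").getOrCreate()

# Hypothetical input: one row per (study, compound) pair
df = spark.createDataFrame(
    [("STUDY-001", "cmpd-42"), ("STUDY-001", "cmpd-43"), ("STUDY-002", "cmpd-42")],
    ["study_id", "compound_id"],
)

# Shape the data as an edge list suitable for bulk load into a graph store
edges = (
    df.select(
        F.col("study_id").alias("src"),
        F.lit("TESTS_COMPOUND").alias("relationship"),
        F.col("compound_id").alias("dst"),
    )
    .dropDuplicates()
)
edges.show()
# In Databricks this would typically be persisted as Delta, e.g.:
# edges.write.format("delta").mode("overwrite").save("/mnt/graph/edges")
```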

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description Summary: The person in this role will be the technical team lead and the point of contact between the PM, the Architect, and the People Leader. They will work closely with the Product Owner to break features down into detailed, technical work chunks to be implemented by the team members, and will oversee the detailed technical designs of the individual features. They will need to fully understand the modeling ecosystem and where it fits in the GridOS context.

Roles and Responsibilities:
- Serve as technical lead for the Modeling Development team: single point of contact on technical development aspects for the Architect, PO, Scrum Master, and Team Manager; owns onboarding and ramp-up processes for the team members; owns efficiency and quality of the development process.
- Responsible for the quality of development in terms of software performance, code quality, test automation, code coverage, CI/CD, and documentation.
- Oversee the detailed technical designs of the individual features; provide high-level estimates of the different features of the products.
- Own technical deliverables during the entire lifecycle of the products.
- Keep the products' development lifecycle on track in terms of budget, time, and quality.
- Keep track of developments happening within the GridOS ecosystem and build bridges with other engineering and services teams.
- Interact with services teams and partner integrator teams to provide processes that ensure the best use of GridOS Modeling products and services.
- Communicate effectively, both verbally and in writing, with peers and team members as an inclusive team member.
- Serve as a technical leader or mentor on complex, integrated implementations within the GridOS Modeling product teams.
- Work in a self-directed fashion to proactively identify system problems, failures, and areas for improvement.
- Track issue resolution, document implemented solutions, and create troubleshooting guides.
- Peer-review pull requests.

Education Qualification:
- For roles outside the USA: Bachelor's degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math) with significant experience.
- For roles in the USA: Bachelor's degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math).
- Years of experience: 8+ years.

Desired Characteristics, Technical Expertise:
- Strong understanding of OOP concepts
- Strong experience with Kubernetes and microservices architectures; container technology
- Strong expertise in Java and Python, Maven, and the Spring Boot framework
- REST API (OpenAPI) and event design
- GraphQL schema & service design
- Graph technologies and frameworks: Apache Jena / Neo4j / GraphDB
- Experience with RDF and SPARQL
- Unit and integration test design
- CI/CD pipeline design
- JSON & YAML schemas
- Event-driven architecture; data streaming technologies such as Apache Kafka
- Microservice observability and metrics
- Integration skills
- Autonomous and able to work asynchronously (due to time zone differences)
- Software & API documentation

Good to have:
- Data engineering and data architecture expertise
- Apache Camel & Apache Arrow
- Experience in grid or energy software business (AEMS / ADMS / Energy Markets / SCADA / GIS)

Business Acumen:
- Adept at navigating the organizational matrix; understands people's roles, can foresee obstacles, identify workarounds, leverage resources, and rally teammates.
- Understands how internal and/or external business models work and facilitates active customer engagement
- Able to articulate the value of what matters most to the business/customer to achieve outcomes
- Able to produce functional-area information in sufficient detail for cross-functional teams to utilize, using presentation and storytelling concepts
- Possesses extensive knowledge of the full solution catalog within a business unit and proficiency in discussing each area at an advanced level
- Six Sigma Green Belt Certification or equivalent quality certification

Leadership:
- Demonstrated working knowledge of the internal organization
- Foresees obstacles, identifies workarounds, leverages resources, rallies teammates
- Demonstrated ability to work with and/or lead blended teams, including third-party partners and customer personnel
- Demonstrated change management / acceleration capabilities
- Strong interpersonal skills, including creativity and curiosity, with the ability to effectively communicate and influence across all organizational levels
- Proven analytical and problem resolution skills
- Ability to influence and build consensus with other Information Technology (IT) teams and leadership

Note: To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role, and a minimum number of years should NOT be used. This job description is intended to provide a high-level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.

Additional Information: Relocation assistance provided: No

Posted 2 weeks ago

Apply

5.0 years

4 - 5 Lacs

Hyderābād

On-site

Source: Glassdoor

Lead Knowledge Engineer | Hyderabad, India | Data Management | 311636

Job Description, About the Role (Grade Level for internal use: 11)

The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.

The Team: The Knowledge Engineering team within data strategy and governance helps to lead fundamental organizational and operational change, driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights, and analytics products that are powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor, creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities:
- Develop, implement, and continue to enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery (see the SKOS sketch below)
- Design and prototype data / software engineering solutions enabling the construction, maintenance, and consumption of semantic artefacts and an interconnected data layer at scale for various application contexts
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met
- Influence the strategic semantic vision, roadmap, and next-generation architecture
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives to effectively persuade and influence
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL), and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environment(s); skilled in all phases from conceptualization to optimization
- Programming skills in a mainstream programming language (Python, Java, JavaScript), with experience utilizing cloud services (AWS, Google Cloud, Azure) a great bonus
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management)

S&P Global Enterprise Data Organization is a unified, cross-divisional team focused on transforming S&P Global's data assets. We streamline processes and enhance collaboration by integrating diverse datasets with advanced technologies, ensuring efficient data governance and management.

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 10 - Officials or Managers (EEO-2 Job Categories-United States of America), DTMGOP103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 311636 Posted On: 2025-05-14 Location: Hyderabad, Telangana, India
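To make the role's core skill set concrete, here is a minimal sketch of building a small taxonomy with RDFLib (a tool the posting names) and querying it with SPARQL. The namespace, classes, and instance data are invented for illustration.

# Minimal sketch: a tiny taxonomy in RDFLib, queried with SPARQL.
# All names (the ex: namespace, Commodity/Energy classes) are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/kg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A three-level taxonomy: Commodity > Energy > CrudeOil, with one instance.
g.add((EX.Energy, RDFS.subClassOf, EX.Commodity))
g.add((EX.CrudeOil, RDFS.subClassOf, EX.Energy))
g.add((EX.brent, RDF.type, EX.CrudeOil))
g.add((EX.brent, RDFS.label, Literal("Brent Crude")))

# SPARQL over the in-memory graph: labelled instances of any Commodity subclass,
# using a property path to walk the subclass hierarchy.
q = """
PREFIX ex:   <http://example.org/kg/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE {
    ?cls rdfs:subClassOf+ ex:Commodity .
    ?item a ?cls ;
          rdfs:label ?label .
}
"""
for row in g.query(q):
    print(row.item, row.label)

The same pattern scales from this toy graph to the linked metadata schemes the responsibilities describe; only the vocabulary and the backing store change.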

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny.
- Connect and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL).
- Design and maintain backend data workflows and APIs.
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions.
- Optimize the performance and usability of RShiny applications.

Functional Skills:
Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting.
- Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics).
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic.
- Strong understanding of data visualization principles and UI/UX best practices.
- Experience with data integration and wrangling.

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus.
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS).
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications.
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.

Professional Certifications (preferred):
- SAFe Methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across projects
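For a sense of what "leveraging MarkLogic's semantic features" looks like on the wire, here is a hedged sketch of posting a SPARQL query to a MarkLogic REST instance. The role itself is R-centric (an httr call from an RShiny server function would have the same shape); the sketch uses Python for consistency with the other examples on this page. The /v1/graphs/sparql path follows MarkLogic's documented REST API, but the host, port, and credentials are placeholders that will differ per deployment.

# Hedged sketch: querying MarkLogic's triple store over REST.
# Assumes a MarkLogic REST app server at localhost:8000 with triples loaded.
import requests
from requests.auth import HTTPDigestAuth  # MarkLogic commonly uses digest auth

SPARQL = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

resp = requests.post(
    "http://localhost:8000/v1/graphs/sparql",   # assumed endpoint path
    data=SPARQL,
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    auth=HTTPDigestAuth("admin", "admin"),      # placeholder credentials
)
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])

In the dashboard described above, the JSON bindings returned here would be reshaped into a data frame and fed to the RShiny visualization layer.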

Posted 3 weeks ago

Apply

3.0 - 5.0 years

37 - 45 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 years and above

Your key responsibilities
- Design & develop agentic AI applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems (see the sketch after this listing).
- Fine-tune language models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop knowledge graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate cross-functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
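As referenced above, here is a minimal, hypothetical sketch of the retrieval step in a RAG pipeline using FAISS, one of the vector databases the posting names. The embed() function is a stand-in for a real embedding model, and the documents and query are invented.

# Hedged sketch of RAG retrieval: index document embeddings in FAISS,
# fetch the nearest chunk for a query, splice it into the LLM prompt.
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 384  # assumed embedding width

def embed(texts):
    # Stand-in for a real embedding model (e.g., a sentence-transformer);
    # random vectors here just let the sketch run end to end.
    rng = np.random.default_rng(42)
    return rng.random((len(texts), DIM), dtype=np.float32)

docs = ["Payment was settled on T+2.", "The custody account was opened in May."]
index = faiss.IndexFlatL2(DIM)   # exact L2 search; swap for IVF/HNSW at scale
index.add(embed(docs))

query = "When did settlement happen?"
distances, ids = index.search(embed([query]), k=1)
context = docs[ids[0][0]]

# The retrieved context is then injected into the generation prompt:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)

A hybrid-search variant would merge these vector hits with keyword or knowledge-graph results before prompting, which is the "context-aware retrieval" the responsibilities describe.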

Posted 3 weeks ago

Apply

3.0 - 7.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.

The Team: The Knowledge Engineering team within data strategy and governance helps lead fundamental organizational and operational change, driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights, and analytics products that are powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor, creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities:
- Develop, implement, and continually enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery
- Design and prototype data/software engineering solutions that scale the construction, maintenance, and consumption of semantic artefacts and the interconnected data layer for various application contexts
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met
- Influence the strategic semantic vision, roadmap, and next-generation architecture
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives, to effectively persuade and influence
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL), and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environments; skilled in all phases from conceptualization to optimization
- Programming skills in a mainstream programming language (Python, Java, JavaScript); experience with cloud services (AWS, Google Cloud, Azure) is a great bonus
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management)

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Essential Duties And Responsibilities
- Design, implement, and deploy solutions that are reliable, scalable, and perform at a high level to meet the needs of our global clients.
- Follow Agile practices and participate in planning games, code reviews, and sprint demos.
- Actively participate in architectural discussions and ensure that designs follow the approved architectural patterns.
- Continually learn about new technologies, generate new ideas, and improve the use of technology in the product.
- Support production issues with related products.

Job Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, MIS, or a related field.

Tasks and responsibilities:
- Development of new software and adaptation of existing software.
- Integrate local systems into the international environment.
- Take a proactive role during backlog-refinement (grooming) sessions on solutions for the requested requirements.
- Solve incidents.
- Analyze and improve (backend) performance.
- Deliver software and documentation.
- Plan and report on progress in accordance with Agile.

Experience:
- Minimum of 3-5 years of experience in the software industry.
- Experience working in agile teams using modern technologies like Java 8, Spring, REST web services, and different kinds of datastores.
- Able to adopt new technologies and concepts quickly, with an ongoing interest in new and upcoming tools and languages.

Preferred experience:
- Frameworks like Spring, CXF, Hibernate
- Test frameworks/tools (JUnit, EasyMock, approval testing, SoapUI)
- Atlassian stack (Bitbucket, Confluence, JIRA, Bamboo)
- Docker, Amazon ECS
- RDF triple stores / graph databases, SPARQL
- Agile: Scrum, Kanban, and DevOps

Competencies:
- Strong collaboration and listening skills
- Excellent communication skills in English, both written and verbal
- Ability to work in a distributed, international, multicultural environment
- Responsive and flexible in handling critical support issues whenever they occur
- Strong analytical skills

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
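The preferred-experience list names RDF triple stores and SPARQL. A minimal sketch of issuing a query to a triple store's SPARQL endpoint follows; the public DBpedia endpoint stands in for an internal store, and in this posting's Java stack the same call would typically go through a library such as Apache Jena. The sketch uses Python for consistency with the other examples on this page.

# Minimal sketch: querying a triple store's SPARQL endpoint over HTTP.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # stand-in endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/SPARQL> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["label"]["value"])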

Posted 3 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Pune

Work from Office

Naukri logo

Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyze data from graph databases.
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines that use Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs in order to represent and store that data in relational databases.

Qualifications:
- Strong proficiency in SPARQL and the RDF query language, Python, and REST APIs.
- Experience with both SQL and SPARQL database technologies.

Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Experience with version control systems like Git/GitHub.
- Understanding of environments, deployment processes, and cloud infrastructure.
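A hedged sketch of the pipeline shape this listing describes: pull triples from a SPARQL endpoint over REST with Python, then land the graph's nodes and edges in a relational table. The endpoint URL and table schema are invented for illustration.

# Hedged sketch: SPARQL-over-REST extraction into a relational store.
import sqlite3
import requests

resp = requests.post(
    "https://example.org/sparql",   # hypothetical endpoint
    data="SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 100",
    headers={"Content-Type": "application/sparql-query",
             "Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
rows = [
    (b["s"]["value"], b["p"]["value"], b["o"]["value"])
    for b in resp.json()["results"]["bindings"]
]

# Each graph edge becomes one relational row (subject, predicate, object).
con = sqlite3.connect("graph_extract.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS edges (subject TEXT, predicate TEXT, object TEXT)"
)
con.executemany("INSERT INTO edges VALUES (?, ?, ?)", rows)
con.commit()
con.close()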

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

We help the world run better

At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

The SAP HANA Database and Analytics Core engine team is looking for an intermediate or senior developer to contribute to our Knowledge Graph Database System engine development. In this role, you will be designing, developing features for, and maintaining our Knowledge Graph engine, which runs inside the SAP HANA in-memory database. At SAP, all members of the engineering team, including management, are hands-on and close to the code. If you think you can thrive in such an environment, and you have the necessary skills and experience, please do not hesitate to apply.

What You'll Do
As a developer, you will have the opportunity to:
- Contribute to hands-on coding, design, and architecture that is best suited for our team size and performance targets.
- Collaborate in a team environment that extends to colleagues in remote locations and from various lines of business within the company.
- Communicate with and guide other teams to construct the best possible queries for their needs.
- Assess new technologies, tools, and infrastructure to keep up with the rapid pace of change.
- Embrace lean and agile software development principles.
- Debug, troubleshoot, and communicate with customers about issues with their data models and queries.
- Continually enhance existing skills and seek new areas for personal development.

What You Bring
- Bachelor's degree or equivalent university education in computer science or engineering, with 3-5 years of experience in developing enterprise-class software.
- Experience in development with modern C++.
- Knowledge of database internals such as the query optimizer/planner, query executor, system management, transaction management, and/or persistence.
- Knowledge of SQL and graph technologies like RDF/SPARQL.
- Knowledge of the full SDLC and development of tests using Python or other tools.
- Experience designing and developing well-encapsulated, object-oriented code.
- Solution-oriented and open-minded.
- Able to manage collaboration with sister teams and partner resources in remote locations.
- High service and customer orientation.
- Skilled in process optimization; drives for permanent change.
- Strong analytical thinking and problem-solving skills.
- Interpersonal skills: team player, proactive networking, results- and execution-oriented, motivated to work in an international and intercultural environment.
- Excellent oral and written communication and presentation skills.

Meet Your Team
The team is responsible for developing HANA Knowledge Graph, a high-performance graph analytics database system made available to SAP customers, partners, and various internal groups as part of the HANA Multi Model Database System. It is specifically designed for processing large-scale graph data and executing complex graph queries with high efficiency. HANA Knowledge Graph enables organizations to gain insights from their graph datasets, discover patterns, perform advanced graph analytics, and unlock the value of interconnected data.

HANA Knowledge Graph uses a massively parallel processing (MPP) architecture to leverage the power of distributed computing. It is built on the W3C web standard specifications for graph data and querying: RDF and SPARQL. The components of the HANA Knowledge Graph system include storage, data load, query parsing, query planning and optimization, query execution, transaction management, memory management, network communications, system management, data persistence, backup & restore, performance tuning, and more. At SAP, HANA Knowledge Graph is set to play a critical role in the development of several AI products.

Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone, regardless of background, feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.

Requisition ID: 396628 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time
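The engine work described here (query parsing, planning, and optimization) happens in C++ inside HANA, but the shape of the problem is easy to see from Python: rdflib will parse a SPARQL query into the algebra tree that a planner like HANA's would then optimize. A minimal sketch with an invented example query:

# Illustrative only: rdflib's parser exposes the SPARQL algebra that a
# database engine's planner would rewrite and optimize. The query and
# example.org IRIs are hypothetical.
from rdflib.plugins.sparql import prepareQuery

q = prepareQuery("""
    SELECT ?name WHERE {
        ?person a <http://example.org/Person> ;
                <http://example.org/name> ?name .
    }
""")
print(q.algebra)  # nested Project / BGP structure an optimizer would rewrite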

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job title: R&D Data Modeling Manager Associate
Location: Hyderabad

Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions, and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients' daily lives, wherever they live, and enabling them to enjoy a healthier life.

As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI), with a strong commitment to developing advanced data standards to increase reusability and interoperability and thus accelerate impact on global health. The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We partner with Business and Digital and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain.

As an integral team member, you will be responsible for defining how R&D's structured, semi-structured, and unstructured data will be stored, consumed, integrated/shared, and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in developing sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).

Position Summary
The primary responsibility of this position is to support semantic integration and data harmonization across pharmaceutical R&D functions. In this role, you will design and implement ontologies and controlled vocabularies that enable interoperability of scientific, clinical, and operational data. Your work will be critical in accelerating discovery, improving data reuse, and enhancing insights across the drug development lifecycle.

Main Responsibilities
- Develop, maintain, and govern ontologies and semantic models for key pharmaceutical domains, including preclinical, clinical, regulatory, and translational research
- Design and implement controlled vocabularies and taxonomies to standardize terminology across experimental data, clinical trials, biomarkers, compounds, and regulatory documentation
- Collaborate with cross-functional teams including chemists, biologists, pharmacologists, data scientists, and IT architects to align semantic models with scientific workflows and data standards
- Map internal data sources to public ontologies and standards to ensure FAIR (Findable, Accessible, Interoperable, Reusable) data principles
- Leverage semantic web technologies and ontology tools to build knowledge representation frameworks
- Participate in ontology alignment, reasoning, and validation processes to ensure quality and logical consistency
- Document semantic assets, relationships, and governance policies to support internal education and external compliance

Deliverables
- Domain-specific ontologies representing concepts such as drug discovery (e.g., compounds, targets, assays), preclinical and clinical studies, biomarkers, adverse events, pharmacokinetics/pharmacodynamics, mechanisms of action, and disease models, built using OWL/RDF and aligned with public standards
- Controlled vocabularies and taxonomies for experimental conditions, cell lines, compound classes, endpoints, clinical trial protocols, etc.
- Semantic data models supporting the integration of heterogeneous data sources (e.g., lab systems, clinical trial data, external databases)
- Knowledge graphs or knowledge maps for semantic integration of structured data from internal R&D systems
- Mappings to public ontologies, standards, and external knowledge bases such as CDISC, MedDRA, LOINC, UMLS, SNOMED CT, RxNorm, UniProt, DrugBank, PubChem, and NCBI
- Ontology documentation and governance artifacts, including ontology scope, design rationale, versioning documentation, and usage guidelines for internal stakeholders
- Validation reports and consistency checks, including outputs from reasoners or SHACL validation to ensure logical coherence, and change impact assessments when modifying existing ontologies (see the sketch after this listing)
- Training and stakeholder support materials: slide decks, workshops, and tutorials on using ontologies in data annotation, integration, and search
- Support for application developers embedding semantic layers

About You
Experience:
- 5+ years of experience in ontology engineering, data management, data analysis, data architecture, or another related field
- Proven experience in ontology engineering and in ontology development within the biomedical or pharmaceutical domain
- Experience working with biomedical ontologies and standards (e.g., GO, BAO, EFO, ChEBI, NCBI Taxonomy, NCI Thesaurus)
- Familiarity with controlled vocabulary curation and knowledge graph construction
- Demonstrated ability to understand end-to-end data use and business needs
- Knowledge and/or experience of Pharma R&D or life sciences data and data domains
- Understanding of FAIR data principles, data governance, and metadata management
- Strong analytical and problem-solving skills; demonstrated strong attention to detail, quality, time management, and customer focus
- Excellent written and oral communication skills; strong networking, influencing, and negotiating skills
- Demonstrated willingness to make decisions and to take responsibility for them
- Excellent interpersonal skills (team player)
- Knowledge and experience in ontology engineering and maintenance are required
- Knowledge and experience with OWL, RDF, SKOS, and SPARQL
- Familiarity with ontology engineering tools (e.g., Protégé, CENtree, TopBraid Composer, PoolParty)
- Familiarity with ontology engineering methodologies (e.g., NeOn, METHONTOLOGY, Uschold and King, Grüninger and Fox)
- Knowledge and experience in data modeling are highly desired
- Experience with pharma R&D platforms, requirements gathering, system design, and validation/quality/compliance requirements
- Experience with hierarchical data models from conceptualization to implementation

Education: Bachelor's in Computer Science, Information Science, Knowledge Engineering, or a related field; Master's or higher preferred

Languages: English
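As referenced in the Deliverables above, here is a hedged sketch of a SHACL consistency check, using the open-source pySHACL library as one possible tool (commercial platforms like TopBraid perform the same role). The shapes, namespace, and data are toy examples.

# Hedged sketch: SHACL validation of ontology instance data with pySHACL.
from rdflib import Graph
from pyshacl import validate  # pip install pyshacl

shapes = Graph().parse(data="""
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:CompoundShape a sh:NodeShape ;
    sh:targetClass ex:Compound ;
    sh:property [
        sh:path ex:casNumber ;
        sh:minCount 1 ;
        sh:datatype xsd:string ;
    ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:aspirin a ex:Compound .   # missing ex:casNumber, so validation should fail
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: the constraint is violated
print(report)     # human-readable violation report

Reports like this one are the raw material for the validation deliverables and change impact assessments the posting lists.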

Posted 4 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies