
7 GraphDB Jobs

JobPe aggregates results for easy application access; you apply directly on the original job portal.

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


- Working knowledge of Azure AI
- Working knowledge of LLMs, RAG (data insights), reasoning agents, and chain-of-thought
- Knowledge of graph databases: Cosmos DB, Neo4j
- Prompt engineering
- API building
- Domain knowledge in finance and data management
- Should have worked on enterprise AI use cases, with at least one go-live solution

Posted 16 hours ago

Apply

4.0 - 6.0 years

7 - 10 Lacs

Hyderabad

Work from Office


What you will do:
In this vital role you will join Research's Semantic Graph Team as a dedicated and skilled Semantic Data Engineer, building and optimizing knowledge graph-based software and data resources. The role focuses on technologies such as RDF, SPARQL, and Python, along with semantic data integration and cloud-based data engineering. The ideal candidate has experience in the pharmaceutical or biotech industry, deep technical skills, proficiency with big data technologies, demonstrated experience in semantic modeling, and a solid understanding of data architecture and ETL processes. You will construct semantic data pipelines, integrate both relational and graph-based data sources, ensure seamless data interoperability, and leverage cloud platforms to scale data solutions effectively.

Roles & Responsibilities:
- Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies.
- Develop and maintain semantic data models for biopharma scientific data.
- Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks.
- Ensure interoperability across federated data sources, linking relational and graph-based data.
- Implement and optimize CI/CD pipelines using GitLab and AWS.
- Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions.
- Collaborate with global cross-functional teams (research scientists, data architects, business SMEs, software engineers, and data scientists) to understand data requirements, design solutions, and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Collaborate with data scientists, engineers, and domain experts to improve research data accessibility.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical implementation.
- Maintain comprehensive documentation of processes, systems, and solutions.
- Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in the fields above, OR
- Diploma with 10-12 years of experience in the fields above

Preferred Qualifications and Experience:
- 6+ years of experience designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills (Must-Have):
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices.
- Cloud and automation expertise: solid experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions.
- Technical problem-solving: excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets.

Good-to-Have Skills:
- Experience in biotech/drug discovery data engineering
- Experience applying knowledge graph, taxonomy, and ontology concepts in life sciences and chemistry domains
- Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune)
- Familiarity with Cypher, GraphQL, or other graph query languages
- Experience with big data tools (e.g. Databricks)
- Experience in biomedical or life sciences research data management

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated ability to function in a team setting
- Demonstrated presentation skills
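The RDF and SPARQL skills this role asks for center on triple pattern matching. A minimal, library-free sketch of the idea, using hypothetical example data (a real pipeline would use a library such as rdflib against an actual triple store):

```python
# Illustration of RDF-style triples and SPARQL-like basic graph
# pattern matching in plain Python. All entities below are
# hypothetical example data, not from any real dataset.

triples = [
    ("ex:aspirin", "rdf:type", "ex:Compound"),
    ("ex:aspirin", "ex:targets", "ex:COX1"),
    ("ex:ibuprofen", "rdf:type", "ex:Compound"),
    ("ex:ibuprofen", "ex:targets", "ex:COX2"),
    ("ex:COX1", "rdf:type", "ex:Protein"),
]

def match(pattern, graph):
    """Match one (s, p, o) pattern against the graph; None acts as a variable."""
    s, p, o = pattern
    return [
        t for t in graph
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Analogue of: SELECT ?c WHERE { ?c rdf:type ex:Compound }
compounds = [s for s, _, _ in match((None, "rdf:type", "ex:Compound"), triples)]
print(compounds)  # ['ex:aspirin', 'ex:ibuprofen']
```

SPARQL engines generalize this by joining the bindings of several such patterns; the single-pattern case above is the building block.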

Posted 1 week ago

Apply

4.0 - 6.0 years

3 - 6 Lacs

Chennai, Tamil Nadu, India

On-site


Essential Skills/Experience:
- Hands-on experience with Neo4j and Cypher query development
- Solid grounding in RDF, OWL, SHACL, SPARQL, and semantic modeling standard methodologies
- Strong proficiency in Python (or an equivalent language) for automation, data transformation, and pipeline integration
- Demonstrated ability to define use cases, structure delivery backlogs, and manage technical execution
- Strong problem-solving and communication skills, with a delivery-focused mindset
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field (Master's preferred)

Desirable Skills/Experience:
- Experience with additional graph platforms such as GraphDB, Stardog, or Amazon Neptune
- Familiarity with Cognite Data Fusion, IoT/industrial data integration, or other large-scale operational data platforms
- Understanding of knowledge representation techniques and reasoning systems
- Exposure to AI/ML approaches using graphs or semantic features
- Knowledge of tools such as Protégé, TopBraid Composer, or VocBench
- Familiarity with metadata standards, data governance, and FAIR principles

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office


The Team:
The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems. We are looking for an experienced Staff Software Engineer for our Onboarding and Lifecycle Management (LCM) Platform team, focused on enhancing and managing services for importing, syncing, and provisioning identities and access policies (users, groups, roles, entitlements, etc.). These features give customers the flexibility to link and enhance their business processes with Okta's identity management product. This role is to build, design solutions, and maintain our platform for scale. The ideal candidate has experience building software systems to manage and deploy reliable, performant infrastructure and product code at scale on a cloud infrastructure.

Job Duties and Responsibilities:
- Work with the senior engineering team on major development projects, design, and implementation
- Be a key contributor to the implementation of the LCM infrastructure
- Troubleshoot customer issues and debug from logs (Splunk, syslogs, etc.)
- Design and implement features with functional and unit tests, along with monitoring and alerts
- Conduct design and code reviews, analysis, and performance tuning
- Prototype quickly to validate scale and performance
- Provide technical leadership and mentorship to more junior engineers
- Interface with architects, QA, product owners, engineering services, and tech ops
- Partner with our Product Development, QA, and Site Reliability Engineering teams to scope development and deployment work

Required Knowledge, Skills, and Abilities:
- 7+ years of software development in Java, preferably with significant Hibernate and Spring Boot experience
- 5+ years of development experience building services, internal tools, and frameworks
- 2+ years of experience automating and deploying large-scale production services in AWS, GCP, or similar
- Deep understanding of infrastructure-level technologies: caching, stream processing, resilient architectures
- Experience working with relational databases, ideally MySQL or PostgreSQL, or with GraphDB
- Ability to work effectively with distributed teams and people of various backgrounds
- Ability to lead and mentor junior engineers

Education:
- B.S. in Computer Science or equivalent

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 6 Lacs

Mumbai, Pune, Chennai

Work from Office


Graph Data Engineer required for a complex supply chain project.

Key Required Skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and a graph language (Cypher); exposure to various graph data modelling techniques
- Experience with Neo4j Aura and optimizing complex queries
- Experience with the GCP stack: BigQuery, GCS, Dataproc
- Experience in PySpark and SparkSQL is desirable
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI

The Expertise You Have:
- Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science)
- Demonstrable experience implementing data solutions in the graph database space
- Hands-on experience with graph databases (Neo4j preferred, or any other)
- Experience tuning graph databases
- Understanding of graph data model paradigms (LPG, RDF) and a graph language; hands-on experience with Cypher is required
- Solid understanding of graph data modelling, graph schema development, and graph data design
- Relational database experience; hands-on SQL experience is required

Desirable (Optional) Skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies
- Understanding of developing highly scalable distributed systems using open-source technologies
- Experience with supply chain data is desirable but not essential

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
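The LPG (labeled property graph) model this role calls for pairs labeled, property-carrying nodes with directed, typed relationships. A plain-Python sketch of the idea with hypothetical supply-chain data (a real project would model this in Neo4j and query it with Cypher):

```python
# Sketch of a labeled property graph (LPG) in plain Python, with a
# traversal comparable to a simple Cypher query. Node and relationship
# data are hypothetical examples, not from any real supply chain.

nodes = {
    "s1": {"label": "Supplier", "name": "Acme Metals"},
    "s2": {"label": "Supplier", "name": "Globex Plastics"},
    "p1": {"label": "Part", "name": "Bracket"},
    "p2": {"label": "Part", "name": "Housing"},
}

# Directed, typed relationships: (source id, relationship type, target id)
rels = [
    ("s1", "SUPPLIES", "p1"),
    ("s2", "SUPPLIES", "p1"),
    ("s2", "SUPPLIES", "p2"),
]

def suppliers_of(part_name):
    """Rough analogue of the Cypher query:
    MATCH (s:Supplier)-[:SUPPLIES]->(p:Part {name: $part_name})
    RETURN s.name
    """
    part_ids = {k for k, v in nodes.items()
                if v["label"] == "Part" and v["name"] == part_name}
    return [nodes[src]["name"] for src, rtype, dst in rels
            if rtype == "SUPPLIES" and dst in part_ids]

print(suppliers_of("Bracket"))  # ['Acme Metals', 'Globex Plastics']
```

Unlike RDF, which reduces everything to triples, the LPG model keeps properties on both nodes and relationships, which is why Cypher patterns read as node-relationship-node chains.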

Posted 2 weeks ago

Apply

4.0 - 6.0 years

4 - 6 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site


Roles & Responsibilities:
- Lead conversations with business collaborators to elucidate semantic models of pharmaceutical business concepts, aligned definitions, and relationships.
- Negotiate and debate across collaborators to drive alignment and create system-independent information models, taking a data-centric approach aligned with business data domains.
- Develop comprehensive business information models and ontologies that capture industry-specific concepts, including CMC, Clinical, and Operations data.
- Facilitate whiteboarding sessions with business subject matter experts to elicit knowledge, drive interoperability across pharmaceutical domains, and interface between data producers and consumers.
- Educate peers on the practical use and differentiating value of Linked Data and FAIR+ data principles.
- Champion standards for master data and reference data.
- Formalize data models in RDF as OWL and SHACL ontologies that interoperate with each other and with relevant industry standards such as FHIR and IDMP for healthcare data exchange.
- Build a broad semantic knowledge graph that threads data together across end-to-end business processes and enables the transformation to data-centricity and new ways of working.
- Apply pragmatic semantic abstraction to simplify diverse pharmaceutical and healthcare data patterns effectively.

Basic Qualifications:
- Doctorate degree, OR
- Master's degree and 4 to 6 years of Data Science experience, OR
- Bachelor's degree and 6 to 8 years of Data Science experience, OR
- Diploma and 10 to 12 years of Data Science experience

About the role:
You will play a key role in a regulatory submission content automation initiative that will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative uses state-of-the-art technologies, including Generative AI, Structured Content Management, and integrated data, to automate the creation, review, and approval of regulatory content.

Role Description:
The Sr Data Scientist is responsible for developing interconnected business information models and ontologies that capture the real-world meaning of data by studying the business, our data, and the industry. With a focus on pharmaceutical industry data, including Clinical, Operations, and Chemistry, Manufacturing, and Controls (CMC), this role involves creating robust semantic models based on data-centric principles to realize a connected data ecosystem that empowers consumers. The Information Modeler drives seamless cross-functional data interoperability, enables efficient decision-making, and supports digital transformation in pharmaceutical operations.

Functional Skills (Must-Have):
- Proven ability to lead and develop successful teams.
- Strong problem-solving, analytical, and critical-thinking skills to address complex data challenges.
- Deep understanding of pharmaceutical industry data, including CMC, Process Development, Manufacturing, Engineering Quality, Supply Chain, and Operations.
- Advanced skills in semantic modeling, RDF, OWL, SHACL, and ontology development in TopBraid and/or Protégé.
- Demonstrated experience creating knowledge graphs with semantic RDF technologies (e.g. Stardog, AllegroGraph, GraphDB, Neptune) and testing models with real data.
- High proficiency with RDF, SPARQL, Linked Data concepts, and interacting with triple stores.
- High proficiency at facilitating, capturing, and organizing collaborative discussions through tools such as Miro, Lucidspark, Lucidchart, and Confluence.
- Expertise in FAIR data principles and their application in healthcare and pharmaceutical data models.

Good-to-Have Skills:
- Experience in regulatory data modeling and compliance requirements in the pharmaceutical domain.
- Familiarity with pharmaceutical product lifecycle (PLM) data, including product development and regulatory submissions.
- Knowledge of supply chain and operations data modeling in the pharmaceutical industry.
- Proficiency in integrating data from various sources, such as LIMS, EDC systems, and MES.
- Hands-on data analysis and wrangling experience, including SQL-based data transformation and solving integration challenges arising from differences in data structure, meaning, or terminology.
- Expertise in FHIR data standards and their application in healthcare and pharmaceutical data models.

Soft Skills:
- Exceptional interpersonal, business analysis, facilitation, and communication skills.
- Ability to translate complex regulatory and operational requirements into data models.
- Analytical thinking for problem-solving in a highly regulated environment.
- Adaptability to manage and prioritize multiple projects in a dynamic setting.
- Strong appreciation for customer- and user-centric product design thinking.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

16 - 30 Lacs

Noida, Lucknow

Work from Office


Hi, we have one urgent requirement for a Python Developer role; candidates must be able to join within 30 days.

About HCL Software:
HCLSoftware, a division of HCLTech, develops, markets, sells, and supports software for AI and Automation; Data, Analytics and Insights; Digital Transformation; and Enterprise Security. HCLSoftware is the cloud-native solution factory for enterprise software and powers millions of apps at more than 20,000 organizations, including more than half of the Fortune 1000 and Global 2000 companies. HCLSoftware's mission is to drive ultimate customer success through relentless product innovation. Website: hcl-software.com

Job Title: Python Developer
Location: Noida
Experience Required: 7 to 10 years
Notice Period: Serving notice and able to join within 2-4 weeks (early joiners only)

Job Description:
We are seeking an experienced Python Developer with a strong background in application and product development to join our team in Noida. The ideal candidate will have extensive experience in Python programming, with a focus on building robust and scalable applications. We are specifically looking for professionals who have been involved in full-cycle product development, rather than those whose experience is limited to writing scripts for testing or automation purposes.

Key Responsibilities:
- Design, develop, and maintain Python-based applications and products.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and correct bottlenecks and fix bugs.
- Help maintain code quality, organization, and automation.
- Participate in code reviews.
- Work with other developers, designers, and stakeholders to build high-quality, innovative, fully performing software.

Requirements:
- 7 to 10 years of proven experience in Python development.
- Strong understanding of Python programming and application development.
- Hands-on experience with full-cycle application/product development and Agile/Scrum methodology.
- Solid understanding of software development principles, algorithms, and data structures.
- Experience with Python frameworks such as Django, Flask, or FastAPI.
- Proficient understanding of code versioning tools such as Git.
- Familiarity with graph databases (e.g. Neo4j) and cloud services is a plus.
- Experience deploying applications in cloud environments (AWS, Azure, or GCP).
- Hands-on experience with containerization technologies such as Docker.
- Familiarity with orchestration tools such as Kubernetes for deploying and managing microservices.
- Excellent problem-solving skills and attention to detail.
- Good communication and collaboration skills.

Thanks & Regards,
Syed Hasan Abbas (He/Him)
Senior Executive HR | HCL Software | TAG
LinkedIn: www.linkedin.com/in/hasan-abbas

Posted 4 weeks ago

Apply