
366 Neo4j Jobs - Page 10

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Tata Consultancy Services is hiring Python Full Stack Developers!

Role: Python Full Stack Developer
Desired Experience Range: 6-8 years
Location of Requirement: Hyderabad

Desired Skills (Technical/Behavioral):

Frontend
- 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), and React hooks (1+ year)
- Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
- Experience integrating with RESTful APIs or other web services

Backend
- Expertise in Python (3+ years, preferably Python 3)
- Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
- Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
- Experience developing microservices, RESTful APIs, or other web services
- Experience with database design and management, including NoSQL/RDBMS tradeoffs

Interested and eligible candidates can apply.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Data Engineering Lead

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank through developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.

We'll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model-monitoring performance best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformation in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You'll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling, and optimisation
- An understanding of machine learning, information retrieval, or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
- Knowledge of messaging, event, or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Data Engineering Lead

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank through developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.

We'll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model-monitoring performance best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformation in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You'll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling, and optimisation
- An understanding of machine learning, information retrieval, or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
- Knowledge of messaging, event, or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Job Description
We are seeking a highly skilled and customer-focused GraphDB/Neo4j Solutions Engineer to join our team. This role is responsible for delivering high-quality solution implementations that help our customers adopt GraphDB-based products, and for collaborating with cross-functional teams to ensure customer success. The Solution Lead is expected to provide in-depth solutions on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The Solution Lead must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (bash/Python).

Roles and Responsibilities
- Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions.
- Prepare product and live software demonstrations.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Design efficient graph schemas and develop prototypes that address customer requirements (e.g., fraud detection, recommendation engines, knowledge graphs).
- Apply knowledge of indexing strategies, partitioning, and query optimization in GraphDB.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements
Education and Experience
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: 5+ years in a Solution Lead role on a data-based software product such as GraphDB or Neo4j.

Must-Have Skills
- SQL expertise: 4+ years of experience in SQL for database querying, performance tuning, and debugging.
- Graph databases and GraphDB platforms: 4+ years of hands-on experience with Neo4j or similar graph database systems.
- Scripting & automation: 4+ years with strong skills in C, C++, and Python for automation, task management, and issue resolution.
- Virtualization and cloud knowledge: 4+ years with Azure, GCP, or AWS.
- Management skills: 3+ years of experience with data requirements gathering, data modeling, whiteboarding, and developing/validating proposed solution architectures, plus the ability to communicate complex information and concepts to prospective users in a clear and effective way.
- Monitoring & performance tools: experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking & load balancing: proficient in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
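
The fraud-detection use case this listing names typically reduces to shared-entity patterns in the graph. A minimal sketch, assuming a local Neo4j instance with placeholder credentials and a hypothetical Account/Device schema (none of these names come from the listing):

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance (hypothetical).
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Shared-entity pattern: flag pairs of accounts that transacted through
# the same device, a staple of fraud-detection graph schemas.
FRAUD_QUERY = """
MATCH (a1:Account)-[:USED]->(d:Device)<-[:USED]-(a2:Account)
WHERE a1.id < a2.id
RETURN a1.id AS account_1, a2.id AS account_2, d.id AS shared_device
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(FRAUD_QUERY):
        print(record["account_1"], record["account_2"], record["shared_device"])

driver.close()
```

The `a1.id < a2.id` filter is a small design choice: it deduplicates symmetric pairs so each suspicious account pairing is reported once.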

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Job Description
We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (i.e., a code change is required, or a behavior is seen for the first time).

Roles and Responsibilities
- Respond to customer inquiries and provide in-depth technical support via multiple communication channels.
- Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Monitor and meet SLAs.
- Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points.
- Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of cloud (AWS, Azure, GCP, etc.), virtual, and bare-metal environments.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements
Must-Have Skills
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: GraphDB experience is a must; 5+ years of experience in a technical support role on a data-based software product, at least at L3 level.
- Linux expertise: 4+ years with an in-depth understanding of Linux, including the filesystem, process management, memory management, networking, and security.
- Graph databases: 3+ years of experience with Neo4j or similar graph database systems.
- SQL expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging.
- Data streaming & processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark.
- Scripting & automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution.
- Containerization & orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies.
- Monitoring & performance tools: experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking & load balancing: proficient in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
- Web & API technologies: understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.

Nice-to-Have Skills
- Familiarity with data science or ML will be an edge.
- Experience with LDAP, SSO, and OAuth authentication.
- Strong understanding of database internals and system architecture.
- Cloud certification (at least DevOps Engineer level).

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company
Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and at high quality.

Responsibilities
- Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
- Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
- Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
- Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
- Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
- Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
- Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
- Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
- Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
- Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications
Minimum experience (relevant): 3 years
Maximum experience (relevant): 5 years

Required Skills
- Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
- Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
- Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
- Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
- Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
- Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.).
- Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL).
- Experience building data science models for use in front-end, user-facing applications, such as recommendation models.
- Experience with REST APIs, JSON, and streaming datasets.
- Understanding of graph data; Neo4j is a plus.
- Strong understanding of RDBMS data structures, Azure Tables, Blob storage, and other data sources.
- Understanding of Jenkins and CI/CD processes using Git for cloud configs and standard code repositories such as ADF configs and Databricks.

Preferred Skills
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university (Master's degree preferred), or equivalent work experience.
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases, preferably graph DBs.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and datasets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
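
The semantic-similarity work this listing describes usually starts with sentence embeddings plus a vector index. A minimal sketch assuming the sentence-transformers and faiss-cpu packages; the model name is illustrative, not something the listing specifies:

```python
import faiss  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

# Model name is illustrative; any sentence-embedding model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "How do I reset my password?",
    "Steps to change account credentials",
    "Quarterly revenue grew by 12 percent",
]
emb = model.encode(docs, normalize_embeddings=True)

# Inner product over unit vectors == cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

query = model.encode(["forgot my login"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

Normalizing the embeddings up front lets a plain inner-product index double as cosine similarity, which keeps the retrieval step simple.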

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 16 Lacs

Pune, Bengaluru, Greater Noida

Work from Office


About the Role
We are seeking a skilled and security-conscious Backend Engineer to join our growing engineering team. In this role, you will be responsible for designing, developing, and maintaining secure backend systems and services. You'll work with modern technologies across cloud platforms, graph databases, and containerized environments to build scalable and resilient infrastructure.

Key Responsibilities
- Design and implement backend services and APIs using Python.
- Manage and query graph data using Neo4j.
- Work across cloud platforms (AWS, Azure, GCP) to build and deploy secure, scalable applications.
- Optimize and maintain relational and analytical databases, including PostgreSQL and ClickHouse.
- Develop and deploy serverless applications and microservices.
- Containerize applications using Docker and manage deployment pipelines.
- Collaborate with security teams to integrate best practices and tools into the development lifecycle.

Mandatory Skills
- Proficiency in Python programming.
- Hands-on experience with Neo4j for graph database management and Cypher querying.
- Working knowledge of AWS, Azure, and Google Cloud Platform (GCP).
- Experience with PostgreSQL and ClickHouse for database optimization and management.
- Understanding of serverless architecture and deployment strategies.
- Proficiency with Docker for containerization and deployment.

Nice to Have
- Experience with AWS ECS and EKS for container orchestration.
- Familiarity with open-source vulnerability/secret scanning tools (e.g., Trivy, Gitleaks).
- Exposure to CI/CD pipelines and DevSecOps practices.

What We Offer
- Competitive compensation and benefits.
- Flexible work environment.
- Opportunities to work on cutting-edge security and cloud technologies.
- A collaborative and inclusive team culture.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Chennai

Work from Office


Job Title: Senior Consultant - Knowledge Graph and Semantic Engineer
Career Level: D3

Introduction to role:
Join AstraZeneca Operations IT, where your work directly impacts patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak, combining powerful science with leading digital technology platforms and data. With a passion for impacting lives through data, analytics, AI, machine learning, and more, we offer a dynamic and challenging environment filled with opportunities to learn and grow. Be part of a team that innovates, disrupts an industry, and changes lives.

Accountabilities:
As a Senior Consultant in Knowledge Graph and Semantic Engineering, you will design and develop ontologies, semantic models, and property graphs representing key business concepts in manufacturing and supply chain. You will define reusable graph patterns, apply standard methodologies in knowledge graph modeling, and develop SPARQL and Cypher queries for integrating, retrieving, and validating semantic data. You will support the integration of structured and unstructured data using semantic modeling and graph-based approaches. You will collaborate with stakeholders to define business-driven semantic use cases and technical requirements, maintain delivery backlogs, oversee implementation plans, and ensure alignment with business priorities. You will partner with product owners, data engineers, and architects to drive adoption of semantic technologies across functions; deliver training, guidance, and documentation to upskill team members and stakeholders on semantic technologies; and stay ahead of developments in the semantic web, property graphs, linked data, and graph-based AI/ML.

Essential Skills/Experience:
- Hands-on experience with Neo4j and Cypher query development.
- Solid grounding in RDF, OWL, SHACL, SPARQL, and semantic modeling standard methodologies.
- Strong proficiency in Python (or an equivalent language) for automation, data transformation, and pipeline integration.
- Demonstrated ability to define use cases, structure delivery backlogs, and manage technical execution.
- Strong problem-solving and communication skills, with a delivery-focused mindset.
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field (Master's preferred).

Desirable Skills/Experience:
- Experience with additional graph platforms such as GraphDB, Stardog, or Amazon Neptune.
- Familiarity with Cognite Data Fusion, IoT/industrial data integration, or other large-scale operational data platforms.
- Understanding of knowledge representation techniques and reasoning systems.
- Exposure to AI/ML approaches using graphs or semantic features.
- Knowledge of tools such as Protégé, TopBraid Composer, or VocBench.
- Familiarity with metadata standards, data governance, and FAIR principles.

AstraZeneca is a place where diverse minds work inclusively to drive change across international boundaries. We couple technology with an inclusive mindset to develop a leading ecosystem. Our cross-functional teams bring together the best minds from across the globe to uncover new solutions. We think holistically about applying technology while building partnerships inside and out. By driving simplicity and efficiencies, we make a real difference. Ready to make an impact? Apply now to join our team!

Date Posted: 02-Jun-2025
Closing Date: 08-Jun-2025
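
The ontology-plus-SPARQL work this role describes can be pictured with a toy example. A minimal sketch using rdflib, where the `ex:` manufacturing vocabulary (production lines, batches, materials) is entirely hypothetical and stands in for a real supply-chain ontology:

```python
from rdflib import Graph  # pip install rdflib

# A toy manufacturing/supply-chain vocabulary; the ex: terms are invented.
TURTLE = """
@prefix ex: <http://example.org/mfg#> .
ex:LineA a ex:ProductionLine ; ex:produces ex:Batch42 .
ex:Batch42 a ex:Batch ; ex:usesMaterial ex:Material7 .
ex:Material7 a ex:RawMaterial ; ex:suppliedBy ex:SupplierX .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Trace each batch back to the supplier of its raw material.
QUERY = """
PREFIX ex: <http://example.org/mfg#>
SELECT ?batch ?supplier WHERE {
    ?line ex:produces ?batch .
    ?batch ex:usesMaterial ?material .
    ?material ex:suppliedBy ?supplier .
}
"""
for batch, supplier in g.query(QUERY):
    print(batch, supplier)
```

The same triple pattern, expressed in Cypher over a property graph, is how the two query languages named in the listing end up answering the same traceability question.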

Posted 3 weeks ago

Apply

1.0 - 2.0 years

2 - 3 Lacs

Gurgaon

On-site


Work Experience: 1-2 years

Roles and Responsibilities
- Candidate should be able to work on assignments with minimal guidance.
- Responsible for robust and efficient implementation.
- Suggest technical solutions for the short and long term.
- Participate in and lead software design meetings, and analyze user needs to determine technical requirements.
- Collaborate with team members and across groups to prepare technical specifications.
- Design and develop high-quality code.
- Test, debug, analyze, and fix application problems/issues.
- Work closely with stakeholders to fix issues in a timely fashion.
- Participate in code reviews in accordance with program specifications and coding standards, practices, and conventions, covering new designs, changes, and enhancements.
- Mentor less experienced employees.
- Interact with stakeholders (internal and external) to support product releases and/or resolve program problems.
- Research and remain informed of new technology and development tools.

Skills Required
1. Strong experience in core Java.
2. Good to have hands-on knowledge of Spring Boot.
3. Good to have a basic understanding of AI/ML.
4. Hands-on development in a J2EE framework like Spring Boot or Spring MVC.
5. Good knowledge of relational databases like MySQL, PostgreSQL, and MS SQL Server, with complex SQL queries, stored procedures, and triggers.
6. Candidates must have knowledge of an ORM like Hibernate.
7. Candidate must have good working knowledge of data structures.
8. Unit testing experience using JUnit or Spock.
9. Knowledge of versioning tools like Git and SVN.
10. Knowledge of build tools like Maven, Gradle, and Ant.
11. Knowledge of MongoDB, Cassandra, Neo4j, Redis, Kafka, ActiveMQ, caches, and microservices will be a major plus.
12. Good to have knowledge of JavaScript, jQuery, or AngularJS.
13. Knowledge of Agile software development methodologies and the software development lifecycle.
14. Knowledge of unit test cases will be an added advantage.
15. Candidate can work in a team as well as individually.

SKILLS: Java, Spring Boot, AI/ML, Database, Hibernate, Git, JavaScript, HTML, SQL, jQuery, Agile
Experience: 1-2 years

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site


Description
As a Blockchain Intelligence Analyst, you will collect, analyze, and report on intel leads (e.g., crypto exchanges, fraud), ensuring best-in-class intelligence, problem solving, and the ability to turn data into insights. You will collaborate with expert minds in academia, industry, and law enforcement to build scalable systems to detect, prevent, and mitigate cryptocurrency fraud and financial crime. This is an exciting opportunity to make a measurable impact within a corporate environment that values precision, innovation, and strategic thinking.

Responsibilities
● Analyze user behavior on different virtual asset service providers (such as marketplaces and exchanges). This task requires one to perform OSINT and extract relevant information about the platform.
● Develop related pipelines using Python and other technologies.
● Support product and analytics teams in building data pipelines and business logic.
● Develop strategies to counter concealment measures used by service providers (captchas, closed group invites, etc.).
● Establish partnerships with third-party data vendors, research bodies, and anti-cybercrime groups.
● Monitor forums for cybercrime intelligence.
● Develop intelligence reports for marketing and SAR reports for law enforcement agencies.

Eligibility
● Proven work experience in intelligence-related positions, ideally in blockchain intelligence (this is not an entry-level position).
● Proficiency and expertise in different methods used for OSINT.
● Strong ability to synthesize, interpret, and report complex information clearly and accurately.
● Knowledge of blockchain technologies and different platforms, with a focus on tracing transactions and identifying malicious activity; working knowledge of cryptocurrencies, darknet marketplaces, and Tor, and experience making payments with Bitcoin and Ethereum.
● Excellent communication skills to collaborate with cross-functional teams and the ability to explain concepts to others with no technical knowledge.
● Ability to research in another language as well (please specify the language(s) in your application).
● Proficiency in Python, MongoDB, Neo4j, Cypher, and Git.

Desired Eligibility
● Relevant certifications and experience in cybersecurity or intelligence gathering.
● Working knowledge of SQL is preferred.
● Bachelor's degree or above in Computer Science or a related field.
● Prior experience working in fast-paced environments or startups.
● Prior experience publishing research-oriented articles in A* journals.

Travel
As and when required, across the country for project execution and monitoring as well as for coordination with geographically distributed teams.

Communication
Submit a cover letter summarising your experience in relevant technologies and software, along with a resume and a recent passport-size photograph.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description
About Sopra Steria
Sopra Steria, a major tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services, and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.
- Excellent communication skills (verbal and written) at all levels
- Scrum Master advanced certifications (PSM, SASM, SSM, etc.)
- Working experience with Agile project management tools (Jira, VersionOne, Rally, etc.)

Good-to-have skills:
○ Knowledge and experience working with the SAFe and Agile frameworks.
○ Experience with continuous delivery, DevOps, and release management.
○ Ability to communicate concisely and accurately to the team and to management.
○ Knowledge of all or several of the following: software development (Python, JavaScript, ASP, C#, HTML5...); data storage technologies (SQL, .NET, NoSQL (Neo4j, Neptune), S3, AWS (Redshift)); web development technologies and frameworks (e.g., Angular, AngularJS, ReactJS...); and DevOps methodologies and practices.

Total Experience Expected: 8-10 years

Qualifications
Engineering graduate with a minimum of 6 to 8 years of total experience, of which a minimum of 3+ years as a Scrum Master.

Additional Information
Comfortable working from the client location and in the EU time zone.
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

12 - 13 Lacs

Gurugram

Work from Office


Over 7 years of hands-on experience in Java, SQL, and Neo4j, with strong skills in backend development, database design, and graph database modeling for scalable, high-performance applications.
Mail: kowsalya.k@srsinfoway.com

Posted 3 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Pune

Work from Office


Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyse data from graph databases.
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines using Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs, to represent and store data in relational databases.

Qualifications:
- Strong proficiency in SPARQL and RDF query languages, Python, and REST APIs.
- Experience with database technologies: SQL and SPARQL.

Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Experience with version control systems like GitHub.
- Understanding of environments, deployment processes, and cloud infrastructure.
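
Retrieving graph data over HTTP, as the pipeline bullet above describes, commonly goes through a SPARQL endpoint. A minimal sketch with the SPARQLWrapper package, pointed at the public DBpedia endpoint purely as a stand-in for whatever store the role actually uses:

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# DBpedia is used here only as a public stand-in endpoint.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
SELECT ?city ?name WHERE {
    ?city a dbo:City ; rdfs:label ?name .
    FILTER (lang(?name) = "en")
} LIMIT 5
""")

# Results come back as standard SPARQL JSON bindings.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], "-", row["name"]["value"])
```

The `dbo:` and `rdfs:` prefixes are predefined on DBpedia's endpoint; against another store they would need explicit PREFIX declarations.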

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We're a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact.

BPI is built on the Blue Planet Cloud Native Platform (CNP), a modern OSS that converges design, delivery, and assurance software applications to eliminate inefficient operational silos and helps streamline the introduction and delivery of innovative services across network domains and vendors. We are looking for a software engineer who will contribute to developing industry-leading dynamic network inventory software.

Job Requirements
- Software development experience in Java.
- Extremely competent in Java, with emphasis on Core Java (OOP concepts, design skills, multi-threading, concurrency, the Collections Framework, exception handling, and debugging skills), Java Swing, JavaFX, JAXB, XML parsing techniques, socket programming, etc.
- Familiarity with relational and non-relational database concepts; experience writing queries on databases like Oracle and Neo4j.
- Familiarity with UI technologies such as Angular.
- Excellent troubleshooting/debugging skills.
- Excellent problem-solving skills.
- Strong knowledge of operating systems: Linux, Mac, and Windows.
- Strong commitment to product excellence and quality.
- Ability to resolve complex issues that may require design trade-offs.
- Bachelor's/Master's of Engineering in computer science or a related discipline.
- Excellent written and verbal communication skills; able to collaborate effectively with multiple teams across geographically diverse areas.

Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.

At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

3 - 6 Lacs

Hyderabad

Work from Office


Career Category: Information Systems

Job Description
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description:
We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny.
- Connect to and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL).
- Design and maintain backend data workflows and APIs.
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions.
- Optimize the performance and usability of RShiny applications.

Functional Skills:
Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting.
- Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics).
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic.
- Strong understanding of data visualization principles and UI/UX best practices.
- Experience with data integration and wrangling.

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus.
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS).
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications.
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.

Professional Certifications (preferred):
- SAFe methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across various projects

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office


Job Summary:
We are seeking a skilled and motivated Backend/Data Engineer with hands-on experience in MongoDB and Neo4j to design and implement data-driven applications. The ideal candidate will be responsible for building robust database systems, integrating complex graph and document-based data models, and collaborating with cross-functional teams.

Experience: 6-12 years

Key Responsibilities:
• Design, implement, and optimize document-based databases using MongoDB.
• Model and manage connected data using Neo4j (Cypher query language).
• Develop RESTful APIs and data services to serve and manipulate data stored in MongoDB and Neo4j.
• Implement data pipelines for data ingestion, transformation, and storage.
• Optimize database performance and ensure data integrity and security.
• Collaborate with frontend developers, data scientists, and product managers.
• Maintain documentation and support for database solutions.

Required Skills:
• Strong proficiency in MongoDB: schema design, indexing, aggregation framework.
• Solid experience with Neo4j: graph modeling, Cypher queries, performance tuning.
• Programming proficiency in Python, Node.js, or Java.
• Familiarity with REST APIs, GraphQL, or gRPC.
• Experience with data modeling (both document and graph models).
• Knowledge of data security, backup, and recovery techniques.

Preferred Skills:
• Experience with Mongoose, Spring Data MongoDB, or Neo4j-OGM.
• Familiarity with data visualization tools (e.g., Neo4j Bloom).
• Experience with Docker, Kubernetes, or other DevOps tools.
• Exposure to other databases (e.g., PostgreSQL, Redis).
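
A document-plus-graph pairing like the one above often syncs aggregates from MongoDB into Neo4j nodes. A minimal sketch, assuming local instances with placeholder credentials and a hypothetical orders/customers dataset (none of these names come from the listing):

```python
from pymongo import MongoClient
from neo4j import GraphDatabase

# Placeholder connection details for local instances (hypothetical).
mongo = MongoClient("mongodb://localhost:27017")
orders = mongo["shop"]["orders"]

# Document side: aggregate order totals per customer in MongoDB.
totals = orders.aggregate([
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}}
])

# Graph side: mirror each customer as a node so relationships
# (referrals, shared addresses, etc.) can then be queried in Cypher.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for doc in totals:
        session.run(
            "MERGE (c:Customer {id: $id}) SET c.lifetime_value = $total",
            id=doc["_id"], total=doc["total"],
        )
driver.close()
```

Using `MERGE` rather than `CREATE` keeps the sync idempotent: rerunning the pipeline updates existing customer nodes instead of duplicating them.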

Posted 3 weeks ago

Apply

7.0 - 10.0 years

6 - 7 Lacs

Bengaluru

On-site


About Us: Data Scientist – 3 – Kotak811
Kotak811 is a neobank incubated by Kotak Mahindra Bank, with a view to providing completely digitized banking services in the convenience of the customer's mobile phone. 811 is an early mover in the Indian fintech space that started off as a downloadable savings bank account in 2017, post demonetization, when India took one step closer to a digital economy.

The Data Scientist-3 in Bangalore (or Mumbai) will be part of the 811 Data Strategy Group, which comprises data engineers, data scientists, and data analytics professionals. He/she will be associated with one of the key functional areas such as Product Strategy, Cross Sell, Asset Risk, Fraud Risk, or Customer Experience, and will help build robust and scalable solutions that are deployed for real-time or near-real-time consumption and integrated into our proprietary Customer Data Platform (CDP). This is an exciting opportunity to work on data-driven analytical solutions and have a profound influence on the growth trajectory of a super-fast-evolving digital product.

Key Requirements of the Role
- Advanced degree in an analytical field (e.g., Data Science, Computer Science, Engineering, Applied Mathematics, Statistics, Data Analysis) or substantial hands-on work experience in the space
- 7-10 years of relevant experience in the space
- Expertise in mining AI/ML opportunities from open-ended business problems and driving solution design/development while closely collaborating with engineering, product, and business teams
- Strong understanding of advanced data mining techniques, curating, processing, and transforming data to produce sound datasets; strong experience in NLP, time series forecasting, and recommendation engines preferred
- Ability to create great data stories, with expertise in robust EDA and statistical inference; should have at least a foundational understanding of experimentation design
- Strong understanding of the machine learning lifecycle: feature engineering, training, validation, scaling, deployment, scoring, monitoring, and the feedback loop; exposure to deep learning applications and tools like TensorFlow, Theano, Torch, and Caffe preferred
- Experience with analytical programming languages, tools, and libraries (Python a must) as well as shell scripting; should be proficient in developing production-ready code as per best practices; experience using Scala/Java/Go-based libraries a big plus
- Very proficient in SQL and other relational databases, along with PySpark or Spark SQL; proficient in using NoSQL databases; experience using graph DBs like Neo4j a plus; candidate should be able to handle unstructured data with ease
- Experience working with MLEs and proficiency (with experience) in using MLOps tools; should be able to consume the capabilities of said tools with a deep understanding of the deployment lifecycle; experience in CI/CD deployment a big plus
- Knowledge of key concepts in distributed systems, like replication, serialization, and concurrency control, a big plus
- Good understanding of programming best practices and building code artifacts for reuse; should be comfortable with version control and collaborate comfortably in tools like git
- Ability to create frameworks that can perform model RCAs using analytical and interpretability tools; should be able to peer-review model documentation/code bases and find opportunities
- Experience in end-to-end delivery of AI-driven solutions (deep learning, traditional data science projects)
- Strong communication, partnership, and teamwork skills; should be able to guide and mentor teams while leading them by example, and should be an integral part of creating a team culture focused on driving collaboration, technical expertise, and partnerships with other teams
- Ability to work in an extremely fast-paced environment, meet deadlines, and perform at high standards with limited supervision
- A self-starter who is looking to build from the ground up and contribute to the making of a potential big name in the space
- Experience in banking and financial services is a plus; however, sound logical reasoning and first-principles problem solving are even more critical

A typical day in the life of the job role:
1. As a key partner at the table, attend key meetings with the business team to bring the data perspective to the discussions
2. Perform comprehensive data explorations to generate inquisitive insights and scope out the problem
3. Develop simplistic to advanced solutions to address the problem at hand; we believe in making swift (albeit sometimes marginal) impact on business KPIs and hence adopt an MVP approach to solution development
4. Build reusable code analytical frameworks to address commonly occurring business questions
5. Perform 360-degree customer profiling and opportunity analyses to guide new product strategy; this is a nascent business and hence opportunities to guide business strategy are plenty
6. Guide team members on data science and analytics best practices to help them overcome bottlenecks and challenges
7. The role will be approximately 60% IC and 40% leading, and the ratios can vary based on need and fit
8. Develop Customer-360 features that will be integrated into the Customer Data Platform (CDP) to enhance the single view of our customer

Website: https://www.kotak811.com/

Posted 3 weeks ago

Apply

6.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Summary
We are seeking a skilled and motivated Backend/Data Engineer with hands-on experience in MongoDB and Neo4j to design and implement data-driven applications. The ideal candidate will be responsible for building robust database systems, integrating complex graph and document-based data models, and collaborating with cross-functional teams.

Experience: 6-12 years

Key Responsibilities
- Design, implement, and optimize document-based databases using MongoDB.
- Model and manage connected data using Neo4j (Cypher query language).
- Develop RESTful APIs and data services to serve and manipulate data stored in MongoDB and Neo4j.
- Implement data pipelines for data ingestion, transformation, and storage.
- Optimize database performance and ensure data integrity and security.
- Collaborate with frontend developers, data scientists, and product managers.
- Maintain documentation and support for database solutions.

Required Skills
- Strong proficiency in MongoDB: schema design, indexing, aggregation framework.
- Solid experience with Neo4j: graph modeling, Cypher queries, performance tuning.
- Programming proficiency in Python, Node.js, or Java.
- Familiarity with REST APIs, GraphQL, or gRPC.
- Experience with data modeling (both document and graph models).
- Knowledge of data security, backup, and recovery techniques.

Preferred Skills
- Experience with Mongoose, Spring Data MongoDB, or Neo4j-OGM.
- Familiarity with data visualization tools (e.g., Neo4j Bloom).
- Experience with Docker, Kubernetes, or other DevOps tools.
- Exposure to other databases (e.g., PostgreSQL, Redis).

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Position Overview
- Design elegant solutions for systems that result in simple, extensible, maintainable, high-quality code based on the 12/15-factor app methodology.
- Provide hands-on technical leadership in architecture, design, development, code reviews, quality assurance, observability, and product support.
- Use technical knowledge of product design, patterns, and code to identify risks and prevent software defects.
- Mentor and nurture other team members on doing all of the above at quality.
- Foster a culture of collaboration, disciplined software engineering practices, and a mindset to leave things better than you found them.
- Optimize team processes to improve productivity and responsiveness to feedback and changing priorities.
- Build strong relationships with key stakeholders, collaborate, and communicate effectively to reach successful outcomes.
- Be passionate about delivering high-impact and breakthrough value to stakeholders.
- Hire, retain, and grow top talent on the team; lead the team with a growth mindset and model a diverse and inclusive environment.
- Show a desire to learn the domain and deliver enterprise solutions with minimal traditional coding at a higher velocity.
- Lead in the planning, execution, and delivery of the team's commitments.

Qualifications & Experience
- Master's or bachelor's degree in computer science or another STEM field
- Experience with software development in the Alternative Asset Management or Investment Banking domain
- 10+ years of software development experience in at least one of the following OO languages: Java, C++, or C#
- 5+ years of Web 2.0 UI/UX development experience in at least one of the following frameworks using JavaScript/TypeScript: ExtJS, ReactJS, AngularJS, or Vue
- Hands-on development expertise in Java, Spring Boot, REST, messaging, JPA, and SQL for the last 5+ years
- Hands-on development expertise in building applications using RESTful and microservices architecture
- Expertise in developing applications using TDD/BDD/ATDD, with hands-on experience with at least one of JUnit, Spring Test, TestNG, or Cucumber
- A strong understanding of SOLID principles, design patterns, Enterprise Integration Patterns, and the 12/15-factor app methodology
- A strong understanding of relational databases, SQL, ER modeling, and ORM technologies
- A strong understanding of BPM and its application
- Hands-on experience with various CI/CD practices and tools such as Jenkins, Azure DevOps, TeamCity, etcetera
- Exceptional problem-solving and debugging skills
- Awareness of emerging application development methodologies, design patterns, and technologies
- Ability to quickly learn new and emerging technologies and adopt solutions from within the company or the open-source community
- 3+ years managing and mentoring engineering teams

Experience with the below will be a plus:
- Buy-side operational and fund accounting processes
- Business processes and workflows using modern BPM/Low Code/No Code platforms (JBPM, Bonitasoft, Appian, Logic Apps, Unqork, etcetera)
- OpenAPI, GraphQL, gRPC, ESB, SOAP, WCF, Kafka, and Node
- Serverless architecture
- Microsoft Azure
- Designing and implementing microservices on AKS
- Azure DevOps
- Sencha platform
- NoSQL databases (MongoDB, Cosmos DB, Neo4j)
- Python software development
- Functional programming paradigm

Apollo provides equal employment opportunities regardless of age, disability, gender reassignment, marital or civil partner status, pregnancy or maternity, race, colour, nationality, ethnic or national origin, religion or belief, veteran status, gender/sex or sexual orientation, or any other criterion or circumstance protected by applicable law, ordinance, or regulation. The above criteria are intended to be used as a guide only; candidates who do not meet all the above criteria may still be considered if they are deemed to have relevant experience or equivalent levels of skill or knowledge to fulfil the requirements of the role. Any job offer will be conditional upon and subject to satisfactory reference and background screening checks, all necessary corporate and regulatory approvals or certifications as required from time to time, and entering into definitive contractual documentation satisfactory to Apollo.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Designation: Sr. Data Scientist
Experience: 4-8 years
Job Type: Full-time

We are seeking a highly skilled and motivated Senior Data Scientist to join our dynamic team. In this role, you will leverage your advanced analytical and technical expertise to solve complex business problems and drive impactful data-driven decisions. You will design, develop, and deploy sophisticated machine learning models, conduct in-depth data analyses, and collaborate with cross-functional teams to deliver actionable insights.

Responsibilities:
- Design and implement RAG pipelines using LangChain and LangGraph.
- Integrate AWS open-source vector databases (e.g., OpenSearch with the KNN plugin).
- Handle complex query chaining and prompt orchestration.
- Work with graph-based knowledge representations (e.g., Neo4j, Stardog).
- Collaborate with teams to deliver scalable GenAI solutions.

Required Skills:
- Proficiency in LLMs, LangChain, and embeddings.
- Strong background in classification, regression, clustering, and NLP.
- Knowledge of AWS and DevOps (Docker, Git).
- Hands-on experience with FastAPI and model deployment workflows.
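
The RAG pipeline responsibilities above can be illustrated without committing to a specific framework. A minimal, framework-agnostic sketch of the retrieve-then-prompt flow; the `embed` function here is a random placeholder with no semantic meaning, standing in for OpenSearch KNN vectors or whichever embedding model the stack uses:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding with no semantic meaning -- stands in for
    OpenSearch KNN vectors, SBERT, or whichever model the stack uses."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

DOCS = [
    "Refund requests are processed within 5 business days.",
    "Premium accounts include priority support.",
    "Passwords must be rotated every 90 days.",
]
INDEX = np.stack([embed(d) for d in DOCS])  # toy in-memory vector store

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = INDEX @ embed(query)  # cosine similarity over unit vectors
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    # In a real pipeline this prompt is sent to an LLM for generation.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```

Swapping the placeholder `embed` for a real model and the in-memory index for OpenSearch is what turns this skeleton into the production pipeline the listing describes.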

Posted 3 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

Remote


Data Scientist Associate

Responsibilities
As a selected intern, your daily tasks will include:
- Engaging in data science projects and analytics.
- Developing and implementing data models and AI, ML, deep learning, NLP, GenAI, LangChain, LLM, LLaMA, OpenAI, and GPT-based solutions.
- Managing data pipelines, ETL/ELT processes, and data warehousing.
- Utilizing Python and its libraries for advanced programming tasks.
- Handling data collection, management, cleaning, and transformation.
- Creating data visualizations using BI tools such as Power BI, Kibana, and Google Data Studio.
- Working with databases like MongoDB, Neo4j, Dgraph, and SQL.
- Leveraging cloud platforms, including GCP, AWS, Azure, Linode, and Heroku.

Required Skills
Python, Flask, Django, MongoDB, API development, Elasticsearch, machine learning, artificial intelligence

Job Details
Work Mode: Remote (Work From Home)
Start Date: Immediate
Duration: 6 months
Stipend: ₹10,000 – ₹12,000 per month
Industry: Information Technology & Services
Employment Type: Probation of 6 months followed by a full-time position based on performance

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company Description Credence provides global business intelligence solutions to the international trade industry, empowering businesses and financial institutions with timely access to risk warning signs and transactional risk assessments. Leveraging advancements in AI engineering, our products deliver deeper insights and predictive intelligence for smarter decision-making Credence is a brand of Interlinkages Limited (www.interlinkages.online), headquartered in Hong Kong, which provides technology-driven, innovative financing solutions to Corporates and Banks worldwide. With offices in India, Hong Kong, and Bangladesh, we are continuously evolving our AI-driven solutions to redefine business intelligence in cross-border trade and financial risk assessment. Role Description We are looking for a skilled backend developer to join our team in building intelligent, scalable, and API-driven web applications. This is a full-time, on-site role based in Gurugram. You will play a critical role in building backend services in Node.js and Python, integrating with AI models, and delivering efficient, event-driven solutions. We're looking for someone who excels at taking on complex challenges and prides themselves on writing clean, scalable and efficient code. Join us to be at the forefront of revolutionizing how we navigate multi-national financial landscapes. Responsibilities Design and implement scalable RESTful APIs and backend services for cross-platform applications Build and maintain existing MEVN and Python stack applications Event driven integrations for automation of business processes (using queues, webhooks) Utilize prompt engineering techniques to optimize AI model interactions. Developing high-performance applications by writing testable, reusable, and efficient code. Implement Retrieval-Augmented Generation (RAG) methods to improve information retrieval Develop backend systems that can process, store, and retrieve structured/unstructured data efficiently. Ensure compliance with regional and global data protection and privacy regulations Participate in code reviews, branching strategies, and CI/CD pipelines for clean, reliable releases. You are expected to hold yourself accountable for writing high-quality, well-designed, maintainable software. Technical Skills required: 3+ years of backend development experience with Node.js (Express) and/or Python (Flask, FastAPI). Strong understanding of RESTful API design, authentication, and performance optimization 2+ years of experience with NoSQL & RDBMS databases (preferably MongoDB, MySQL) Exposure to RAG pipelines or working with vector stores (e.g., Pinecone, FAISS, Weaviate) for AI-powered search and retrieval Experience in automating data collection and processing workflows to streamline operations. Ability to fine-tune pre-trained AI models for specific tasks using frameworks Experience in designing and implementing graph databases (e.g., Neo4j) to model complex relationships and perform efficient queries. Strong debugging, profiling, and optimization skills. Hands-on experience with Git (including branching strategies), Docker, and basic cloud deployments (AWS/GCP/Azure). Knowledge of data validation, error handling, and system observability/logging practices. Understanding of data protection laws (e.g., GDPR, HIPAA) and secure coding practices. Soft Skills Strong problem-solving skills with keen attention to detail. Ability to work both independently and collaboratively within a team Have good verbal and written communication skills. 
Expected Notice Period: Less than 15 days
Location: Gurugram, Haryana
www.credencedata.com
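To make the RAG requirement above concrete, here is a minimal retrieval sketch in Python using FAISS, one of the vector stores the posting names. This is a sketch under stated assumptions, not Credence's actual pipeline: the embed() helper and the sample documents are hypothetical stand-ins for a real embedding model and corpus.

```python
# Minimal RAG retrieval sketch: index document vectors in FAISS,
# then fetch the top-k passages to ground an LLM prompt.
import faiss
import numpy as np

DIM = 384  # assumed embedding width; depends on the real model

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedding helper; replace with a real model."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.standard_normal((len(texts), DIM)).astype("float32")

docs = [
    "Counterparty X shows delayed payments in Q3.",
    "Vessel tracking flags an unusual route deviation.",
    "Credit rating for supplier Y was downgraded.",
]

index = faiss.IndexFlatL2(DIM)  # exact L2 search; fine for small corpora
index.add(embed(docs))          # one vector per document

query_vec = embed(["payment risk signals for counterparty X"])
_, ids = index.search(query_vec, 2)  # top-2 nearest documents

# The retrieved passages become grounding context for the LLM prompt.
context = "\n".join(docs[i] for i in ids[0])
print("Context to prepend to the LLM prompt:\n" + context)
```

With a real embedding model, the same three calls (add, search, join) are the whole retrieval step; the rest of a RAG pipeline is prompt assembly around the retrieved context.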

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and to customers in meaningful ways.

If you are excited about this opportunity but do not meet every single requirement, please apply! You may still be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Role Overview:
We are seeking a highly skilled and experienced Senior Full Stack Engineer to join our dynamic team. You will play a crucial role in designing, developing, deploying, and maintaining highly resilient, low-latency web applications that form the core of our user experience. We're looking for a hands-on expert with deep proficiency in the modern JavaScript ecosystem, particularly Node.js, TypeScript, and React. While your core expertise lies in JavaScript technologies, experience developing backend systems with Python and/or Java is valuable. As a senior member of the team, you will significantly influence our technical direction, mentor other engineers, and champion software development best practices.

Key Responsibilities:
- Take ownership of the design, development, testing, deployment, and maintenance of robust, scalable, highly resilient, low-latency web applications
- Lead the implementation of complex features, focusing on performant front-end solutions (React, TypeScript) and efficient back-end services (primarily Node.js)
- Architect and implement solutions optimized for speed, scalability, and reliability across the entire stack
- Design, build, document, and maintain clean, efficient, and scalable APIs
- Collaborate effectively with product managers, designers, and fellow engineers to translate requirements into well-architected technical solutions
- Write high-quality, maintainable, secure, and thoroughly tested code
- Actively participate in code reviews, providing constructive feedback
- Diagnose, troubleshoot, and resolve complex technical issues across all environments
- Mentor junior and mid-level engineers, fostering their technical growth
- Stay abreast of emerging web technologies, evaluating and proposing their adoption where beneficial
- Contribute significantly to architectural discussions, helping to shape our technical landscape

Required Qualifications & Skills:
- 10+ years of professional experience in full-stack software development, with a proven track record of shipping complex web applications
- Demonstrable experience building and operating web applications with high availability and low latency
- Strong proficiency in JavaScript and TypeScript
- Extensive experience with Node.js for building scalable back-end services
- Strong proficiency in React and its ecosystem (state management, hooks)
- Solid command of modern web technologies (HTML5, CSS3)
- Experience designing and building robust APIs following RESTful principles
- Understanding of fundamental software engineering principles and architectural design patterns
- Experience working with relational databases and at least one NoSQL database
- Proficiency with Git and modern CI/CD practices
- Experience with testing frameworks (unit, integration, end-to-end)
- Strong analytical, problem-solving, and debugging capabilities
- Excellent communication and interpersonal skills

Preferred Qualifications & Skills:
- Experience with Python (Django, Flask, FastAPI) and/or Java (Spring Boot)
- Familiarity with graph databases, particularly Neo4j
- Cloud platform experience (AWS, Azure, or GCP)
- Experience with Docker and Kubernetes
- Knowledge of microservices architecture patterns
- Experience with caching strategies (Redis, Memcached); see the caching sketch at the end of this posting
- Understanding of message queues and event-driven architecture
- Experience with observability tools for monitoring, logging, and tracing

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law.

Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet.

Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
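As a rough illustration of the read-through caching pattern behind the Redis/Memcached bullet, here is a minimal sketch in Python using FastAPI and redis-py, both of which appear in the posting's preferred stack. The endpoint path, TTL, and fetch_profile_from_db helper are hypothetical; the point is only the cache-first lookup with a bounded TTL that keeps hot reads low-latency.

```python
# Read-through cache sketch: serve hot reads from Redis, fall back
# to the database on a miss, and cache the result with a TTL.
import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 60  # assumed freshness window

def fetch_profile_from_db(user_id: int) -> dict:
    """Hypothetical slow database lookup."""
    return {"id": user_id, "name": f"user-{user_id}"}

@app.get("/profiles/{user_id}")
def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:  # cache hit: no database round-trip
        return json.loads(cached)
    profile = fetch_profile_from_db(user_id)
    cache.setex(key, TTL_SECONDS, json.dumps(profile))  # store with expiry
    return profile
```

The TTL bounds staleness without requiring explicit invalidation; systems with strict consistency needs would pair this with cache invalidation on writes.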

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Amgen

Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role

Role Description:
We are seeking a seasoned Senior Engineering Manager (Data Engineering) to drive the development and maintenance of data pipelines built by data engineering teams, with deep domain expertise in HR/Finance data. This role leads a team of data engineers who maintain data pipelines and operational frameworks that support enterprise-wide data solutions. The ideal candidate will drive best practices in data engineering, cloud technologies, and Agile development, ensuring robust governance, data quality, and efficiency. The role requires technical expertise, operational excellence, and a deep understanding of data solutions to optimize data-driven decision-making.

Roles & Responsibilities:
- Lead and mentor a high-performing team of data engineers developing and maintaining complex data pipelines
- Drive the development of data tools and frameworks for managing and accessing data efficiently across the organization
- Oversee the implementation of performance monitoring protocols across data pipelines, ensuring real-time visibility, alerts, and automated recovery mechanisms (see the DAG sketch at the end of this posting)
- Coach engineers in building dashboards and aggregations to monitor pipeline health and detect inefficiencies, ensuring optimal performance and cost-effectiveness
- Lead the implementation of self-healing solutions, reducing failure points and improving pipeline stability and efficiency across multiple product features
- Oversee data governance strategies, ensuring compliance with security policies, regulations, and data accessibility best practices
- Guide engineers in data modeling, metadata management, and access control, ensuring structured data handling across business use cases
- Collaborate with business leaders, product owners, and cross-functional teams to align data architecture with product requirements and business objectives
- Prepare team members for stakeholder discussions by helping assess data costs, access requirements, dependencies, and availability for business scenarios
- Drive Agile and Scaled Agile (SAFe) methodologies, managing sprint backlogs, prioritization, and iterative improvements to enhance team velocity and project delivery
- Stay up to date with emerging data technologies, industry trends, and best practices, ensuring the organization leverages the latest innovations in data engineering and architecture

Functional Skills:

Must-Have Skills:
- Experience managing a team of data engineers at biotech/pharma companies
- Experience designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective, scalable data solutions
- Proficiency in Python, PySpark, and SQL
- Experience with dimensional data modeling
- Experience working with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP, or Azure cloud services
- Understanding of the end-to-end project/product life cycle
- Well versed in full-stack development and DataOps automation, logging frameworks, and pipeline orchestration tools
- Strong analytical and problem-solving skills to address complex data challenges
- Effective communication and interpersonal skills to collaborate with cross-functional teams

Good-to-Have Skills:
- Data engineering management experience in biotech/life sciences/pharma
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph

Education and Professional Certifications:
- 12-15 years of experience in Computer Science, IT, or a related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred
- Project management certifications preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
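To illustrate the "alerts and automated recovery" responsibility, below is a minimal DAG sketch in Python for a recent Airflow 2.x release. The DAG id, task names, retry counts, and page_oncall callback are hypothetical, and this is one common pattern (per-task retries plus a failure callback), not Amgen's actual setup.

```python
# Minimal Airflow DAG sketch: per-task retries give automated recovery,
# and a failure callback raises an alert once retries are exhausted.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def page_oncall(context):
    """Hypothetical alert hook; wire to Slack/PagerDuty in practice."""
    print(f"ALERT: task {context['task_instance'].task_id} failed")

def extract_hr_feed():
    print("pulling HR source files...")  # placeholder extract step

def load_warehouse():
    print("loading curated tables...")   # placeholder load step

default_args = {
    "retries": 3,                        # self-healing: retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": page_oncall,  # alert only after retries run out
}

with DAG(
    dag_id="hr_finance_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract_hr_feed", python_callable=extract_hr_feed)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    extract >> load  # load runs only after a successful extract
```

Dashboards over task duration and retry counts, as the posting describes, would then surface pipelines that recover often enough to warrant a root-cause fix.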

Posted 4 weeks ago

Apply

40.0 years

5 - 8 Lacs

Hyderābād

On-site

GlassDoor logo

India - Hyderabad
JOB ID: R-216718
LOCATION: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: May 30, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE

Role Description:
We are seeking a skilled and creative RShiny developer with hands-on experience in MarkLogic and graph databases. You will design and develop interactive web applications using RShiny, integrate complex datasets stored in MarkLogic, and leverage graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny
- Connect to and query data from MarkLogic, especially its graph and semantic features (e.g., RDF triples, SPARQL); see the sketch at the end of this posting
- Design and maintain backend data workflows and APIs
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions
- Optimize the performance and usability of RShiny applications

Functional Skills:

Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting
- Proficiency with MarkLogic, including its graph database features (triples, SPARQL queries, semantics)
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic
- Strong understanding of data visualization principles and UI/UX best practices
- Experience with data integration and wrangling

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog)
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS)
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies

Professional Certifications (preferred):
- SAFe methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., Johns Hopkins' "Data Science Specialization" on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and meet project timelines, bringing consistency and quality assurance across projects

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
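As a sketch of the SPARQL-over-REST integration described above: MarkLogic's REST API exposes a SPARQL endpoint (commonly /v1/graphs/sparql), and an RShiny app would typically issue the same HTTP call via R's httr package. Python is used here only for consistency with the other examples on this page; the host, port, credentials, and triple vocabulary are all assumptions.

```python
# Sketch: run a SPARQL query against a MarkLogic REST endpoint.
# An RShiny app would make the equivalent HTTP call with httr.
import requests
from requests.auth import HTTPDigestAuth  # MarkLogic commonly uses digest auth

MARKLOGIC_SPARQL = "http://localhost:8000/v1/graphs/sparql"  # assumed host/port

query = """
PREFIX ex: <http://example.org/>
SELECT ?drug ?target
WHERE { ?drug ex:inhibits ?target }
LIMIT 10
"""

resp = requests.post(
    MARKLOGIC_SPARQL,
    data=query,
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    auth=HTTPDigestAuth("admin", "admin"),  # placeholder credentials
)
resp.raise_for_status()

# SPARQL JSON results: each binding maps a variable to {"value": ...}.
for row in resp.json()["results"]["bindings"]:
    print(row["drug"]["value"], "->", row["target"]["value"])
```

In the dashboard itself, the parsed bindings would feed a reactive data frame that RShiny renders as tables or network visualizations.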

Posted 4 weeks ago

Apply