Home
Jobs

366 Neo4J Jobs - Page 9

Filter Jobs
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

10 - 15 Lacs

Bangalore Rural

Work from Office

Source: Naukri

Experience leveraging graph data science concepts to solve business use cases is a must. Experience with any graph tools/packages (Neo4j, Memgraph, NetworkX, GraphFrame, iGraph) as well as SQL, Python, Scala, ETL frameworks, and data warehousing. Required candidate profile: immediate joiners.

Posted 3 weeks ago

Apply

10.0 - 20.0 years

35 - 60 Lacs

Mumbai, India

Work from Office

Source: Naukri

Responsibilities:
- Design full-stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud)
- Support application and infrastructure design and build as a subject matter expert
- Implement proofs of concept to demonstrate the value of the designed solution
- Provide consulting support to ensure delivery teams build scalable, extensible, highly available, low-latency, and highly usable applications
- Ensure solutions are aligned with requirements from all stakeholders, such as consumers, business, IT, security, and compliance
- Ensure all enterprise IT parameters and constraints are considered as part of the design
- Design appropriate technical solutions to meet business requirements, which may involve hybrid cloud environments, including cloud-native architecture, microservices, etc.

Skills and experience:
- Working knowledge of a high-availability, low-latency, end-to-end technology stack is especially important, including physical and virtual load balancing, caching, and scaling technology
- Awareness of full-stack web development frameworks such as Angular, React, or Vue
- Awareness of relational and NoSQL databases such as MongoDB, MS SQL, Cassandra, Neo4j, or DynamoDB
- Awareness of data-streaming platforms such as Apache Kafka, Apache Flink, or AWS Kinesis
- Working experience with AWS Step Functions or Azure Logic Apps alongside serverless Lambda or Azure Functions
- Optimizes and incorporates the inputs of specialists in solution design
- Establishes the validity of a solution and its components, with both short-term and long-term implications
- Identifies the scalability options and IT-strategy implications of a solution and includes these in design activities and planning
- Builds strong professional relationships with key IT and business executives; acts as a trusted advisor for cross-functional and management teams
- Partners effectively with other teams to ensure problem resolution
- Provides solutions and advice; creates architectures, presentations, and documents, and effectively transfers knowledge to internal and external stakeholders
- Demonstrates knowledge of public cloud technology and solutions; applies a broad understanding of technical innovations and trends to solving business problems
- Manages special projects and strategic initiatives as assigned by management
- Implements and assists in developing policies for information security and environmental compliance, ensuring the highest standards are maintained
- Ensures adherence to SLAs with internal and external customers and compliance with information security policies, including risk assessments and procedure reviews

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Source: Glassdoor

JOB DESCRIPTION
You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don’t pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry. As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that develops high-quality data architecture solutions for various software applications, platforms, and data products. Drive significant business impact and help shape the global target-state architecture through your capabilities across multiple data architecture domains.

Job responsibilities:
- Represents the data architecture team at technical governance bodies and provides feedback on proposed improvements to data architecture governance practices
- Evaluates new and current technologies using existing data architecture standards and frameworks
- Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
- Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others
- Drives data architecture decisions that impact data product and platform design, application functionality, and technical operations and processes
- Serves as a function-wide subject matter expert in one or more areas of focus
- Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the software development life cycle
- Influences peers and project decision-makers to consider the use and application of leading-edge technologies
- Advises junior architects and technologists

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Advanced knowledge of architecture, applications, and technical processes, with considerable in-depth knowledge of the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain-driven design)
- Practical cloud-based data architecture and deployment experience, preferably on AWS
- Practical SQL development experience in cloud-native relational databases, e.g., Snowflake, Athena, Postgres
- Ability to deliver various types of data models with multiple deployment targets, e.g., conceptual, logical, and physical data models deployed as operational vs. analytical data stores
- Advanced skills in one or more data engineering disciplines, e.g., streaming, ELT, event processing
- Ability to tackle design and functionality problems independently with little to no oversight
- Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future-state data architecture

Preferred qualifications, capabilities, and skills:
- Financial services experience; card and banking a big plus
- Practical experience with modern data processing technologies, e.g., Kafka streaming, dbt, Spark, Airflow
- Practical experience with data mesh and/or data lakes
- Practical experience in machine learning/AI; Python development a big plus
- Practical experience with graph and semantic technologies, e.g., RDF, LPG, Neo4j, Gremlin
- Knowledge of architecture assessment frameworks, e.g., Architecture Tradeoff Analysis

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses, and many of the world’s most prominent corporate, institutional, and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years, and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing, and asset management.

We recognize that our people are our strength, and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans, and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions, all while ranking first in customer satisfaction.

Posted 3 weeks ago

Apply

5.0 years

5 - 7 Lacs

Hyderābād

On-site

Source: Glassdoor

You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don’t pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry. As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that develops high-quality data architecture solutions for various software applications, platforms, and data products. Drive significant business impact and help shape the global target-state architecture through your capabilities across multiple data architecture domains.

Job responsibilities:
- Represents the data architecture team at technical governance bodies and provides feedback on proposed improvements to data architecture governance practices
- Evaluates new and current technologies using existing data architecture standards and frameworks
- Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
- Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others
- Drives data architecture decisions that impact data product and platform design, application functionality, and technical operations and processes
- Serves as a function-wide subject matter expert in one or more areas of focus
- Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the software development life cycle
- Influences peers and project decision-makers to consider the use and application of leading-edge technologies
- Advises junior architects and technologists

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Advanced knowledge of architecture, applications, and technical processes, with considerable in-depth knowledge of the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain-driven design)
- Practical cloud-based data architecture and deployment experience, preferably on AWS
- Practical SQL development experience in cloud-native relational databases, e.g., Snowflake, Athena, Postgres
- Ability to deliver various types of data models with multiple deployment targets, e.g., conceptual, logical, and physical data models deployed as operational vs. analytical data stores
- Advanced skills in one or more data engineering disciplines, e.g., streaming, ELT, event processing
- Ability to tackle design and functionality problems independently with little to no oversight
- Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future-state data architecture

Preferred qualifications, capabilities, and skills:
- Financial services experience; card and banking a big plus
- Practical experience with modern data processing technologies, e.g., Kafka streaming, dbt, Spark, Airflow
- Practical experience with data mesh and/or data lakes
- Practical experience in machine learning/AI; Python development a big plus
- Practical experience with graph and semantic technologies, e.g., RDF, LPG, Neo4j, Gremlin
- Knowledge of architecture assessment frameworks, e.g., Architecture Tradeoff Analysis

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Responsibilities:
- Design full-stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud)
- Support application and infrastructure design and build as a subject matter expert
- Implement proofs of concept to demonstrate the value of the designed solution
- Provide consulting support to ensure delivery teams build scalable, extensible, highly available, low-latency, and highly usable applications
- Ensure solutions are aligned with requirements from all stakeholders, such as consumers, business, IT, security, and compliance
- Ensure all enterprise IT parameters and constraints are considered as part of the design
- Design appropriate technical solutions to meet business requirements, which may involve hybrid cloud environments, including cloud-native architecture, microservices, etc.

Skills and experience:
- Working knowledge of a high-availability, low-latency, end-to-end technology stack is especially important, including physical and virtual load balancing, caching, and scaling technology
- Awareness of full-stack web development frameworks such as Angular, React, or Vue
- Awareness of relational and NoSQL databases such as MongoDB, MS SQL, Cassandra, Neo4j, or DynamoDB
- Awareness of data-streaming platforms such as Apache Kafka, Apache Flink, or AWS Kinesis
- Working experience with AWS Step Functions or Azure Logic Apps alongside serverless Lambda or Azure Functions
- Optimizes and incorporates the inputs of specialists in solution design
- Establishes the validity of a solution and its components, with both short-term and long-term implications
- Identifies the scalability options and IT-strategy implications of a solution and includes these in design activities and planning
- Builds strong professional relationships with key IT and business executives; acts as a trusted advisor for cross-functional and management teams
- Partners effectively with other teams to ensure problem resolution
- Provides solutions and advice; creates architectures, presentations, and documents, and effectively transfers knowledge to internal and external stakeholders
- Demonstrates knowledge of public cloud technology and solutions; applies a broad understanding of technical innovations and trends to solving business problems
- Manages special projects and strategic initiatives as assigned by management
- Implements and assists in developing policies for information security and environmental compliance, ensuring the highest standards are maintained
- Ensures adherence to SLAs with internal and external customers and compliance with information security policies, including risk assessments and procedure reviews

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Source: LinkedIn

- Develop and maintain RESTful APIs using the Python FastAPI or Flask frameworks
- Design, implement, and manage containerized applications using Kubernetes and Docker
- Optimize and manage in-memory data structures using Redis
- Work with graph databases such as Neo4j or Vector J to manage and query complex data relationships
- Collaborate with cross-functional teams to define, design, and ship new features
- Write clean, maintainable, and efficient code
- Troubleshoot and debug applications
- Participate in code reviews to maintain code quality and share knowledge
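The "manage and query complex data relationships" part of this role would normally be done in Cypher against Neo4j. As an illustrative sketch only (the mini social graph, node names, and helper functions below are hypothetical, not part of any Neo4j API), the same kind of one-hop relationship lookup can be modeled in plain Python:

```python
from collections import defaultdict

def build_graph(edges):
    """Adjacency list keyed by source node; values are (relation, target) pairs."""
    graph = defaultdict(list)
    for src, rel, dst in edges:
        graph[src].append((rel, dst))
    return graph

def neighbors(graph, node, relation):
    """Targets reachable from `node` via `relation`,
    roughly Cypher's MATCH (n)-[:RELATION]->(m) RETURN m."""
    return [dst for rel, dst in graph[node] if rel == relation]

# Hypothetical sample data
edges = [("alice", "FOLLOWS", "bob"),
         ("alice", "FOLLOWS", "carol"),
         ("bob", "LIKES", "post1")]
g = build_graph(edges)
print(neighbors(g, "alice", "FOLLOWS"))  # ['bob', 'carol']
```

A real graph database adds indexing, multi-hop pattern matching, and persistence on top of this basic adjacency idea.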

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site

Source: LinkedIn

Description
We are looking for a data engineer and scientist with a deep understanding of blockchain network protocols, infrastructure, systems engineering, distributed systems, and architectural design. The candidate will lead research and ideation to scale data infrastructure and assess feasibility in production, and will have the opportunity to expand expertise across different areas of engineering and applied research. This is an exciting opportunity to make a measurable impact within a corporate environment that values precision, innovation, and strategic thinking.

Responsibilities:
- Architect and build a highly scalable, resilient, distributed system that integrates a variety of data sources; standardize data schemas
- Maintain servers and build performant, reliable production-grade systems
- Design and optimize protocol implementations
- Work across the tech stack (ETL, cache, code, deploy)
- Mentor fellow team members and participate in code reviews and design reviews
- Collaborate cross-functionally with data scientists, designers, and product managers to define and implement services
- Develop novel clustering approaches to tackle blockchain-specific challenges around scale, security, and privacy
- Research privacy-preserving blockchains
- Develop related pipelines using Python and other technologies
- Support product and intelligence teams

Eligibility:
- Bachelor's degree or above in Computer Science or a related field
- At least 1 year of experience in the related domain and skills
- Experience building real-time, highly dynamic, data-heavy systems
- Ability to work independently, be proactive, prioritize, and communicate novel ideas; experience working in fast-paced environments or startups
- Prior experience publishing research-oriented articles in A* journals
- Proficiency and expertise in different data platforms
- Proficiency in Python, Kafka, WebSockets, Elasticsearch, MongoDB, Kubernetes, Docker, Neo4j, Cypher, Wireshark, Git, AWS, Google Cloud, Swagger, Linux, VS, Solidity, and network protocols
- Strong ability to synthesize, interpret, and report complex information clearly and accurately
- Knowledge of blockchain technologies and platforms, with a focus on tracing transactions and identifying malicious activity; working knowledge of cryptocurrencies, darknet marketplaces, and Tor, including experience making payments with Bitcoin and Ethereum
- Excellent communication skills to collaborate with cross-functional teams and explain concepts to audiences with no technical knowledge
- Understanding of coding standards

Desired Eligibility:
- Relevant certifications and experience in cybersecurity or intelligence gathering
- Working knowledge of SQL is preferred
- Prior experience working in fast-paced environments or startups

Travel: As and when required, across the country, for project execution and monitoring as well as coordination with geographically distributed teams.
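The "novel clustering approaches" this posting mentions often start from the classic multi-input heuristic used in blockchain tracing: all input addresses of a single transaction are assumed to share an owner. A minimal union-find sketch of that idea (the addresses and transactions below are hypothetical) might look like:

```python
def cluster_addresses(transactions):
    """Union-find over each transaction's input addresses -> {address: cluster root}."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving keeps trees shallow
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        find(inputs[0])  # register single-input transactions too
        for addr in inputs[1:]:
            union(inputs[0], addr)
    return {a: find(a) for a in parent}

# Hypothetical transactions, each listed as its set of input addresses
txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
clusters = cluster_addresses(txs)
print(clusters["addr1"] == clusters["addr3"])  # True: linked through addr2
```

In production this would run over transaction streams (e.g., from Kafka) and the resulting clusters would typically be stored as nodes and edges in a graph database such as Neo4j.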

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Title: Senior Data Engineer (Snowflake), Bangalore location only
Start date: June joining, or 20 days maximum
Experience: 5-7 years
Work mode: Hybrid (3 days per week in office), Bangalore

Top skills:
- Expert knowledge and a minimum of 5 years' professional experience in data pipeline modeling/implementation, data management, and data transformation in a corporate environment
- Practical experience in automated data quality management and test automation
- Well-versed in cloud-based data management technologies (Snowflake, Databricks, Neo4j, dbt)
- Experience processing, optimizing, and managing large data sets (multiple-TB scale); looking for resources with strong Snowflake experience
- Proven knowledge (e.g., certification) of the data vault framework
- High proficiency in designing, managing, monitoring, and administrating data pipelines and data structures in a cloud-based technology stack
- Ability to translate complex problems into simple technical solutions
- Ability to abstract data problems into reusable components/modules
- Lifelong willingness to learn and explore new technologies in data, analytics, and AI
- Strong orientation towards quality and results (attention to detail and standardization)
- Practical experience working in agile environments
- Good communication skills; analytical and out-of-the-box thinking
- Natural drive for innovation and optimization
- Teamwork-oriented, collaborative attitude in a multinational hybrid team environment

Company Name: VARITE INDIA PRIVATE LIMITED

About The Client: A global, multinational IT services and consulting company headquartered in Tokyo, Japan. The client offers a wide array of IT services, including application development, infrastructure management, and business process outsourcing. Their consulting services span business and technology, while their digital solutions focus on transformation and user experience design. The company excels in data and intelligence services, emphasizing analytics, AI, and machine learning. Additionally, their cybersecurity, cloud, and application services round out a comprehensive portfolio designed to meet the diverse needs of businesses worldwide.

About The Job: Senior Data Engineer

Role Overview: A Senior Data Engineer is responsible for designing, building, and maintaining the infrastructure and tools required for efficient data processing and analysis. This role involves working closely with the business team, data analysts, and other stakeholders to ensure data quality and accessibility. We need profiles with Snowflake experience.

Essential Job Functions:
- Design and implement ETL processes: develop and manage ETL (Extract, Transform, Load) processes to transform raw data into usable formats
- Data pipeline management: build and maintain scalable data pipelines to handle increasing data volumes and complexity
- Data warehousing: manage data warehousing solutions to store and organize large datasets
- Collaboration: work with data scientists and business teams to improve data models and support data-driven decision-making
- Data integration and optimization: implement and manage the deployment of data integration models into production environments; optimize the workload and data processing

Qualifications:
- Postgraduate degree in computer science, information science, or data analytics
- Expert knowledge and a minimum of 5 years' professional experience in data pipeline modeling/implementation, data management, and data transformation in a corporate environment
- Practical experience in automated data quality management and test automation
- Well-versed in cloud-based data management technologies (Snowflake, Databricks, Neo4j, dbt)
- Experience processing, optimizing, and managing large data sets (multiple-TB scale)
- Profound knowledge and hands-on experience with advanced analytics/machine learning environments such as Python, Apache Spark, and Snowpark on Snowflake
- Proficient English; an additional foreign language is preferred
- Knowledge of data visualization tools such as Qlik or Power BI is preferred
- Working experience with agile methods within Microsoft DevOps
- Experience working with enterprise data in the field of laboratory diagnostics or a large global enterprise preferred
- Experience working with multinational teams and/or different countries and cultures

How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.

About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of networking, cloud infrastructure, hardware and software, digital marketing and media solutions, clinical diagnostics, utilities, gaming and entertainment, and financial services.

Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.

Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE.
Experience Required - Referral Bonus
0 - 2 yrs: INR 5,000
2 - 6 yrs: INR 7,500
6+ yrs: INR 10,000
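The ETL pattern at the heart of this role can be sketched in a few lines. This is a rough illustration only; the inline CSV source, the schema, and the in-memory "warehouse" are hypothetical stand-ins for a real Snowflake/Databricks pipeline, and the dropped-row rule stands in for automated data quality management:

```python
import csv
import io

# Hypothetical raw feed; a real pipeline would extract from files, APIs, or queues.
RAW = "id,amount\n1,10.5\n2,not_a_number\n3,7.25\n"

def extract(text):
    """Extract: parse the raw CSV into row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast fields to typed values, dropping malformed records."""
    clean = []
    for row in rows:
        try:
            clean.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except ValueError:
            continue  # simple data-quality gate: skip rows that fail type checks
    return clean

def load(rows, target):
    """Load: append clean rows to the target store; return rows loaded."""
    target.extend(rows)
    return len(rows)

warehouse = []
print(load(transform(extract(RAW)), warehouse))  # 2 (the malformed row is dropped)
```

The same three stages scale up to the multi-TB pipelines the posting describes; only the extraction sources, the transformation engine (e.g., Spark or Snowpark), and the load target change.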

Posted 3 weeks ago

Apply

2.0 - 5.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Source: Naukri

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will be at the forefront of innovation, using your skills to design and implement pioneering AI/Gen AI solutions. With an emphasis on creativity, collaboration, and technical excellence, this role provides a unique opportunity to work on ground-breaking projects that enhance operational efficiency at the Amgen Technology and Innovation Centre while ensuring the protection of critical systems and data.

Roles & Responsibilities:
- Design, develop, and deploy Gen AI solutions using advanced LLMs such as the OpenAI API and open-source LLMs (Llama 2, Mistral, Mixtral), and frameworks like LangChain and Haystack
- Design and implement AI and Gen AI solutions that drive productivity across all roles in the software development lifecycle
- Rapidly learn the latest technologies and develop a vision for embedding solutions that improve operational efficiency within a product team
- Collaborate with multi-functional teams (product, engineering, design) to set project goals, identify use cases, and ensure seamless integration of Gen AI solutions into current workflows

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 1 to 3 years of experience with programming languages such as Java and Python, OR
- Bachelor’s degree and 3 to 5 years of experience with programming languages such as Java and Python, OR
- Diploma and 7 to 9 years of experience with programming languages such as Java and Python

Preferred Qualifications:
- Proficiency in programming languages such as Python and Java
- Advanced knowledge of the Python open-source software stack, such as Django or Flask, Django REST or FastAPI, etc.
- Experience working with RAG technologies and LLM frameworks, LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases
- Familiarity with cloud security (AWS/Azure/GCP)
- Expertise in integrating and demonstrating Gen AI LLMs to maximize operational efficiency

Good-to-Have Skills:
- Experience with graph databases (Neo4j and Cypher would be a big plus)
- Experience with prompt engineering and familiarity with frameworks such as DSPy would be a big plus

Professional Certifications: AWS / GCP / Databricks

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

We are seeking an experienced AI Solution Architect to lead the design and implementation of AI-driven, cloud-native applications. The ideal candidate will possess deep expertise in Generative AI, Agentic AI, cloud platforms (AWS, Azure, GCP), and modern data engineering practices. This role involves collaborating with cross-functional teams to deliver scalable, secure, and intelligent solutions in a fast-paced, innovation-driven environment.

Key Responsibilities:
- Design and architect AI/ML solutions, including Generative AI, Retrieval-Augmented Generation (RAG), and fine-tuning of Large Language Models (LLMs) using frameworks like LangChain, LangGraph, and Hugging Face
- Implement cloud migration strategies from monolithic systems to microservices/serverless architectures on AWS, Azure, and GCP
- Lead development of document automation systems leveraging models such as BART, LayoutLM, and Agentic AI workflows
- Architect and optimize data lakes, ETL pipelines, and analytics dashboards using Databricks, PySpark, Kibana, and MLOps tools
- Build centralized search engines using ElasticSearch, Solr, and Neo4j for intelligent content discovery and sentiment analysis
- Ensure application and ML pipeline security with tools like OWASP ZAP, SonarQube, WebInspect, and container security tools
- Collaborate with InfoSec and DevOps teams to maintain CI/CD pipelines, perform vulnerability analysis, and ensure compliance
- Guide modernization initiatives across application stacks and coordinate BCDR-compliant infrastructures for mission-critical services
- Provide technical leadership and mentoring to engineering teams during all phases of the SDLC

Required Skills & Qualifications:
- 12+ years of total experience, with extensive tenure as a Solution Architect in AI and cloud-driven transformations
- Hands-on experience with: Generative AI, LLMs, prompt engineering, LangChain, AutoGen, Vertex AI, AWS Bedrock; Python, Java (Spring Boot, Spring AI), PyTorch; vector and graph databases (ElasticSearch, Solr, Neo4j); cloud platforms: AWS, Azure, GCP (CAF, serverless, containerization); DevSecOps: SonarQube, OWASP, OAuth2, container security
- Strong background in application modernization, cloud-native architecture, and MLOps orchestration
- Familiarity with front-end technologies: HTML, JavaScript, Angular, jQuery

Certifications:
- Google Professional Cloud Architect
- AWS Solutions Architect Associate
- Cisco Certified Design Associate (CCDA)
- Cisco Certified Network Associate (CCNA)
- Cisco Security Ninja Green Belt
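At the core of the RAG systems this role architects is a retrieval step: embed documents and the query, then rank documents by similarity. The sketch below substitutes bag-of-words vectors and cosine similarity for a real embedding model and vector database (such as the ElasticSearch deployments the posting names); the corpus is hypothetical:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Hypothetical corpus
docs = ["invoice processing with layout models",
        "sentiment analysis over customer reviews"]
print(retrieve("analyze customer sentiment", docs))
# ['sentiment analysis over customer reviews']
```

In a full RAG pipeline the retrieved passages would then be injected into the LLM prompt; the frameworks the posting lists (LangChain, LangGraph) wrap exactly this retrieve-then-generate loop.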

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office


We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny.
- Connect to and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL).
- Design and maintain backend data workflows and APIs.
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions.
- Optimize performance and usability of RShiny applications.

Functional Skills:

Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting.
- Proficiency with MarkLogic, including its graph database features (triples, SPARQL queries, semantics).
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic.
- Strong understanding of data visualization principles and UI/UX best practices.
- Experience with data integration and wrangling.

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog).
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS).
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications.
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.
Professional Certifications (preferred):
- SAFe methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), with the ability to translate technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate accurately and meet project timelines, bringing consistency and quality assurance across projects

Posted 3 weeks ago

Apply

3.0 - 5.0 years

37 - 45 Lacs

Bengaluru

Work from Office


Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above

Your key responsibilities
- Design & develop agentic AI applications: Utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-tune language models: Customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER models: Train OCR- and NLP-leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop knowledge graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate cross-functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
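One common reading of the "hybrid search" this posting mentions is blending a dense vector-similarity score with a sparse keyword-overlap score. A minimal sketch, assuming the `vector_scores` input comes from a vector database such as FAISS or Milvus, and with `alpha` as an illustrative tuning weight:

```python
def keyword_score(query, doc):
    # Sparse signal: fraction of query terms that appear in the document.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, docs, vector_scores, alpha=0.5):
    # Blend the dense score (from the vector retrieval layer, precomputed
    # here) with the sparse keyword score, then rank documents.
    blended = {
        doc: alpha * vector_scores.get(doc, 0.0)
        + (1 - alpha) * keyword_score(query, doc)
        for doc in docs
    }
    return sorted(docs, key=lambda d: blended[d], reverse=True)
```

With `alpha=0.5` a document that exactly matches the query terms can outrank one with a higher dense score alone, which is the usual motivation for hybrid retrieval.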

Posted 3 weeks ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Mumbai

Work from Office


Job Information
Job Opening ID: ZR_1963_JOB
Date Opened: 17/05/2023
Industry: Technology
Work Experience: 5-8 years
Job Title: Neo4j GraphDB Developer
City: Mumbai
Province: Maharashtra
Country: India
Postal Code: 400001
Number of Positions: 5

Graph Data Engineer required for a complex supply chain project.

Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and graph language (Cypher); exposure to various graph data modelling techniques.
- Experience with Neo4j Aura and optimizing complex queries.
- Experience with GCP stacks like BigQuery, GCS, Dataproc.
- Experience in PySpark and SparkSQL is desirable.
- Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI.

The Expertise You Have:
- Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science).
- Demonstrable experience in implementing data solutions in the graph DB space.
- Hands-on experience with graph databases (Neo4j preferred, or any other).
- Experience tuning graph databases.
- Understanding of graph data model paradigms (LPG, RDF) and graph languages; hands-on experience with Cypher is required.
- Solid understanding of graph data modelling, graph schema development, and graph data design.
- Relational database experience; hands-on SQL experience is required.

Desirable (optional) skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies.
- Understanding of developing highly scalable distributed systems using open-source technologies.
- Experience with supply chain data is desirable but not essential.

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
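As an illustration of the LPG modelling and Cypher skills asked for above, a minimal sketch that builds a parameterized MERGE statement for a hypothetical supplier-chain model. The `Supplier`/`Part` labels and the `SUPPLIES` relationship are invented for the example, not from any actual project schema:

```python
def merge_supplier(name, part, qty):
    # Parameterized Cypher for a simple LPG model:
    # (:Supplier)-[:SUPPLIES {qty}]->(:Part)
    # MERGE makes the load idempotent: re-running it matches
    # existing nodes/relationships instead of duplicating them.
    cypher = (
        "MERGE (s:Supplier {name: $name}) "
        "MERGE (p:Part {sku: $part}) "
        "MERGE (s)-[r:SUPPLIES]->(p) "
        "SET r.qty = $qty"
    )
    params = {"name": name, "part": part, "qty": qty}
    return cypher, params
```

With the official Neo4j Python driver, the pair would typically be executed as something like `session.run(cypher, **params)`; passing parameters rather than string-formatting values is what keeps the query plan cacheable.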

Posted 3 weeks ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

India

Work from Office


- Proficient in Python programming.
- Experience with Neo4j for graph database management and querying.
- Knowledge of cloud platforms including AWS, Azure, and GCP.
- Familiarity with Postgres and ClickHouse for database management and optimization.
- Understanding of serverless architecture for building and deploying applications.
- Experience with Docker for containerization and deployment.

Posted 3 weeks ago

Apply

17.0 - 19.0 years

0 Lacs

Andhra Pradesh

On-site


Software Engineering Associate Director - HIH - Evernorth

About Evernorth
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary:
The Software Development Associate Director provides hands-on leadership, management, and thought leadership for a delivery organization enabling Cigna's Technology teams. This individual will lead a team based in our Hyderabad Innovation Hub to deliver innovative solutions supporting multiple business and technology domains within Cigna, including the Sales & Underwriting, Producer, Service Operations, and Pharmacy business lines, as well as testing and DevOps enablement. The focus of the team is to build innovative go-to-market solutions enabling the business while modernizing our existing asset base to support business growth. The Technology strategy is aligned to our business strategy, and the candidate will not only influence technology direction but also help establish our team through recruiting and mentoring employees and vendor resources. This is a hands-on position with visibility to the highest levels of the Cigna Technology team. This leader will focus on enabling innovation using the latest technologies and development techniques, and on rapidly building out a scalable delivery organization that aligns with all areas within the Technology team. The ideal candidate will be able to attract and develop talent in a highly dynamic environment.

Job Description & Responsibilities:
Provide leadership, vision, and design direction for the quality and development of the US Medical and Health Services Technology teams based at the Hyderabad Innovation Hub (HIH).
Work in close coordination with leaders and teams based in the United States, as well as contractors employed by the US Medical and Health Services Technology team who are based both within and outside of the United States, to deliver products and capabilities in support of Cigna's business lines. Provide leadership to HIH leaders and teams, ensuring the team meets the following objectives:
- Design, configuration, implementation, application design/development, and quality engineering within the supported technologies and products.
- Hands-on people management, with experience leading agile teams of highly talented technology professionals developing large solutions and internal-facing applications, working closely with developers, quality engineers, technical project managers, principal engineers, and business stakeholders to ensure that application solutions meet business/customer requirements.
- A servant-leader mentality and a history of creating an inclusive environment, fostering diverse views and approaches from the team, and coaching and mentoring them to thrive in a dynamic workplace.
- A history of embracing and incubating emerging technology and open-source products.
- A passion for building highly resilient, scalable, and available platforms and rich, reusable foundational capabilities with a seamless developer experience, while focusing on strategic vision and technology roadmap delivery in an MVP/iterative, fast-paced approach.
- Accountability for driving timely decisions while influencing engineering and delivery teams to meet project timelines while balancing the destination state.
- Ensure engineering solutions align with the Technology strategy and support the application's requirements.
- Plan and implement procedures that maximize engineering and operating efficiency for application integration technologies.
- Identify and drive process improvement opportunities.
- Proactive monitoring and management design of supported assets, assuring performance, availability, security, and capacity.
- Maximize the efficiency (operational, performance, and cost) of the application assets.

Experience Required:
- 17 to 19 years of IT and business/industry or equivalent experience preferred, with at least 5 years in a leadership role responsible for the delivery of large-scale projects and programs.
- Leadership, cross-cultural communication, and familiarity with a wide range of technologies and stakeholders.
- Strong emotional intelligence, with the ability to foster collaboration across geographically dispersed teams.

Experience Desired:
- Recognized leader with a proven track record of delivering software engineering initiatives and cross-IT/business initiatives.
- Proven experience leading/managing technical teams, with a passion for developing talent within the team.
- Experience with vendor management in an onshore/offshore model.
- Experience in Healthcare, Pharmacy and/or Underwriting systems.
- Experience with AWS.

Education and Training Required:
- B.S. degree in Computer Science, Information Systems, or other related degrees; industry certifications such as AWS Solutions Architect, PMP, Scrum Master, or Six Sigma Green Belt are also ideal.

Primary Skills:
- Familiarity with most of the following application development technologies: Python, RESTful services, React, Angular, Postgres, and MySQL (relational database management systems).
- Familiarity with most of the following data engineering technologies: Databricks, Spark, PySpark, SQL, Teradata, and multi-cloud environments.
- Familiarity with most of the following cloud and emerging technologies: AWS, LLMs (OpenAI, Anthropic), vector databases (Pinecone, Milvus), graph databases (Neo4j, JanusGraph, Neptune), prompt engineering, and fine-tuning AI models.
- Familiarity with the enterprise software development lifecycle, including production reviews and ticket resolution, navigating freeze/stability periods effectively, total-cost-of-ownership reporting, and updating applications to align with evolving security and cloud standards.
- Familiarity with agile methodology, including SCRUM team leadership or Scaled Agile (SAFe).
- Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
- Deep people and matrix management skills, with a heavy emphasis on coaching and mentoring of less senior staff, and a strong ability to influence VP-level leaders.
- Proven ability to resolve issues and mitigate risks that could undermine the delivery of critical initiatives.
- Strong written and verbal communication skills, with the ability to interact with all levels of the organization.
- Strong influencing/negotiation skills.
- Strong interpersonal/relationship management skills.
- Strong time and project management skills.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Overview:
We are looking for a hands-on, full-cycle AI/ML Engineer who will play a central role in developing a cutting-edge AI agent platform. This platform is designed to automate and optimize complex workflows by leveraging large language models (LLMs), retrieval-augmented generation (RAG), knowledge graphs, and agent orchestration frameworks. As the AI/ML Engineer, you will be responsible for building intelligent agents from the ground up, including prompt design, retrieval pipelines, fine-tuning models, and deploying them in a secure, scalable cloud environment. You'll also implement caching strategies, handle backend integration, and prototype user interfaces for internal and client testing. This role requires deep technical skills, autonomy, and a passion for bringing applied AI solutions into real-world use.

Key Responsibilities:
- Design and implement modular AI agents using large language models (LLMs) to automate and optimize a variety of complex workflows
- Deploy and maintain end-to-end agent/AI workflows and services in cloud environments, ensuring reliability, scalability, and low-latency performance for production use
- Build and orchestrate multi-agent systems using frameworks like LangGraph or CrewAI, supporting context-aware, multi-step reasoning and task execution
- Develop and optimize retrieval-augmented generation (RAG) pipelines using vector databases (e.g., Qdrant, Pinecone, FAISS) to power semantic search and intelligent document workflows
- Fine-tune LLMs using frameworks such as Hugging Face Transformers, LoRA/PEFT, DeepSpeed, or Accelerate to create domain-adapted models
- Integrate knowledge graphs (e.g., Neo4j, AWS Neptune) into agent pipelines for context enhancement, reasoning, and relationship modeling
- Implement cache-augmented generation strategies using semantic caching and tools like Redis or vector similarity to reduce latency and improve consistency
- Build scalable backend services using FastAPI or Flask and develop lightweight user interfaces or prototypes with tools like Streamlit, Gradio, or React
- Monitor and evaluate model and agent performance using prompt testing, feedback loops, observability tools, and safe AI practices
- Collaborate with architects, product managers, and other developers to translate problem statements into scalable, reliable, and explainable AI systems
- Stay updated on the latest in cloud platforms (AWS/GCP/Azure), software frameworks, agentic frameworks, and AI/ML technologies

Prerequisites:
- Strong Python development skills, including API development and service integration
- Experience with LLM APIs (OpenAI, Anthropic, Hugging Face), agent frameworks (LangChain, LangGraph, CrewAI), and prompt engineering
- Experience deploying AI-powered applications using Docker, cloud infrastructure (Azure preferred), and managing inference endpoints, vector DBs, and knowledge graph integrations in a live production setting
- Proven experience with RAG pipelines and vector databases (Qdrant, Pinecone, FAISS)
- Hands-on experience fine-tuning LLMs using PyTorch, Hugging Face Transformers, and optionally TensorFlow, with knowledge of LoRA, PEFT, or distributed training tools like DeepSpeed
- Familiarity with knowledge graphs and graph databases such as Neo4j or AWS Neptune, including schema design and Cypher/Gremlin querying
- Basic frontend prototyping skills using Streamlit or Gradio, and the ability to work with frontend teams if needed
- Working knowledge of MLOps practices (e.g., MLflow, Weights & Biases), containerization (Docker), Git, and CI/CD workflows
- Cloud deployment experience with Azure, AWS, or GCP environments
- Understanding of caching strategies, embedding-based similarity, and response optimization through semantic caching

Preferred Qualifications:
- Bachelor's degree in Technology (B.Tech) or Master of Computer Applications (MCA) required; an MS in a similar field preferred
- 7-10 years of experience in AI/ML, with at least 2 years focused on large language models, applied NLP, or agent-based systems
- Demonstrated ability to build and ship real-world AI-powered applications or platforms, preferably involving agents or LLM-centric workflows
- Strong analytical, problem-solving, and communication skills
- Ability to work independently in a fast-moving, collaborative, and cross-functional environment
- Prior experience in startups, innovation labs, or consulting firms a plus

Compensation: The compensation structure will be discussed during the interview.
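The semantic-caching idea named in this posting can be sketched as: reuse a cached LLM answer when a new query's embedding is close enough to one seen before. The similarity threshold and the plain-list store are illustrative assumptions; a production system would typically back this with Redis or a vector index.

```python
import math

def cosine(a, b):
    # Cosine similarity for raw (unnormalized) vectors.
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

class SemanticCache:
    """Cache keyed by embedding similarity rather than exact match."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, embedding):
        # Linear scan for the most similar stored embedding; a vector
        # index would replace this in a real deployment.
        best_answer, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best_answer, best_sim = answer, sim
        return best_answer if best_sim >= self.threshold else None

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))
```

On a cache miss (`get` returns `None`) the caller would invoke the LLM and `put` the fresh answer, trading a small risk of stale or near-miss answers for lower latency and cost.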

Posted 3 weeks ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Key Responsibilities:
- Graph Database Development: Design, develop, and maintain graph database schemas using Neo4j.
- Query Optimization: Optimize Neo4j queries for performance and efficiency.
- Data Processing & Analysis: Utilize Python, PySpark, or Spark SQL for data transformation and analysis.
- User Acceptance Testing (UAT): Conduct UAT to ensure data accuracy and overall system functionality.
- Data Pipeline Management: Develop and manage scalable data pipelines using Databricks and Azure Data Factory (ADF).
- Cloud Integration: Work with Azure cloud services and be familiar with Azure data engineering components.

Desired Skills:
- Strong experience with Neo4j and the Cypher query language
- Proficient in Python and/or PySpark
- Hands-on experience with Databricks and Azure Data Factory
- Familiarity with data engineering tools and best practices
- Good understanding of database performance tuning
- Ability to work in fast-paced, client-driven environments

Skills: Azure, data engineering tools, Neo4j, PySpark, Azure Data Factory, Spark SQL, Databricks, cloud, database performance tuning, Cypher query language, Python
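For the pipeline-to-Neo4j work described above, a common ingestion pattern is grouping rows into parameter batches for a single Cypher statement such as `UNWIND $rows AS row MERGE (n:Node {id: row.id}) SET n += row`. A minimal sketch of the batching side (the Cypher text and field names are illustrative):

```python
def to_unwind_batches(rows, batch_size=1000):
    # Yield {"rows": [...]} parameter dicts sized for one UNWIND call each.
    # Keeping batches bounded keeps each Neo4j transaction small, which
    # matters when loading large extracts from Databricks/ADF pipelines.
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield {"rows": batch}
            batch = []
    if batch:  # flush the final partial batch
        yield {"rows": batch}
```

Each yielded dict would be passed as the parameter map of one `session.run(...)` call, so a million-row extract becomes a stream of bounded transactions rather than one huge write.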

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site


Company Description NodeStar is a pioneering AI technology company that specializes in developing cutting-edge conversational AI applications. Our diverse team comprises visionary tech founders, seasoned executives, AI PhDs, and product pioneers who have forged a new path in AI innovation. We create integrated solutions that propel our partners to new heights by infusing conversational interfaces with game mechanics for interactive experiences across multiple platforms. Role Description This is a full-time role for a Senior/Staff Python Backend Developer at NodeStar. As a senior technical leader, you will architect and build scalable backend systems that power our AI-driven applications. You will lead complex technical initiatives, mentor engineering teams, and drive architectural decisions that shape our platform's future. This role requires deep expertise in Python, distributed systems, and cloud infrastructure, combined with the ability to translate business requirements into robust technical solutions that scale to millions of users. 
Core Responsibilities
- Architect and design large-scale distributed systems and microservices architecture
- Lead technical initiatives across multiple teams and drive engineering excellence
- Define technical roadmaps and architectural standards for backend systems
- Mentor and guide junior and mid-level developers, fostering their professional growth
- Own end-to-end delivery of complex features, from design to production deployment
- Drive technical decision-making and evaluate new technologies for adoption
- Collaborate with product, AI/ML teams, and stakeholders to align technical solutions with business goals
- Establish best practices for code quality, testing, deployment, and monitoring
- Lead performance optimization initiatives and ensure system reliability at scale
- Participate in on-call rotations and incident response for critical systems

Qualifications
- Bachelor's degree in Computer Science or related field (Master's preferred)
- 5+ years of professional backend development experience, with 2+ years in senior/lead roles
- Expert-level proficiency in Python and a deep understanding of its internals
- Extensive experience with FastAPI, Django, and async Python frameworks
- Proven track record of designing and implementing distributed systems at scale
- Strong expertise in database design, optimization, and management (PostgreSQL, Redis)
- Deep knowledge of AWS services (EKS, RDS, Lambda, SQS, etc.) and cloud architecture patterns
- Experience with microservices, event-driven architecture, and message queuing systems
- Expertise in API design, GraphQL, and RESTful services
- Strong understanding of software security best practices and compliance requirements
- Excellent communication skills and the ability to influence technical decisions

Preferred Qualifications
- Experience building AI/ML-powered applications and working with LLMs
- Expertise with container orchestration (Kubernetes) and infrastructure as code (Terraform)
- Experience with streaming data platforms and real-time processing
- Knowledge of LangChain, LangGraph, and modern AI application frameworks
- Experience with vector and graph databases in production environments
- Track record of leading successful migrations or major architectural changes
- Published articles, conference talks, or open-source contributions
- Experience in high-growth startups or AI-focused companies

Technical Stack
- Languages: Python 3.x (expert level); knowledge of Go or Rust a plus
- Frameworks: FastAPI, LangGraph, Django REST framework, Celery
- AI/ML: LangChain, Pydantic, experience with LLM integration
- Databases: PostgreSQL, Redis, Chroma, Neo4j; experience with sharding and replication
- Infrastructure: AWS (extensive), Docker, Kubernetes, Terraform
- Monitoring: DataDog, Prometheus, ELK stack or similar
- Architecture: Microservices, event-driven systems, CQRS, domain-driven design

What We Offer
- Competitive salary
- Professional development opportunities
- Flexible work arrangements
- Collaborative and innovative work environment
- Paid time off and holidays
- Potential for equity

We value skill and experience over tenure. If you have less than 5 years of experience but are passionate about backend development and have a proven track record of success, we encourage you to apply and be part of our innovative and dynamic team at NodeStar!

Posted 3 weeks ago

Apply

8.0 - 12.0 years

35 - 42 Lacs

Bengaluru

Work from Office


Responsibilities:
* Design and implement AI solutions using GML, Neo4j, ArangoDB, LangChain, LlamaIndex, and RAG.
* Collaborate with cross-functional teams on ML projects using Python, PySpark, PyTorch, R, and SQL.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

India

On-site


Job Profile Summary
The Cloud NoSQL Database Engineer performs database engineering and administration activities, including design, planning, configuration, monitoring, automation, self-serviceability, alerting, and space management. The role involves database backup and recovery, performance tuning, security management, and migration strategies. The ideal candidate will lead and advise on Neo4j and MongoDB database solutions, including migration, modernization, and optimization, while also supporting secondary RDBMS platforms (SQL Server, PostgreSQL, MySQL, Oracle). The candidate should be proficient in workload migrations to the cloud (AWS/Azure/GCP).

Key Responsibilities:
- Database Administration: Install, configure, and maintain Neo4j (graph) and MongoDB (NoSQL) databases in cloud and on-prem environments.
- NoSQL Data Modeling: Design and implement graph-based models in Neo4j and document-based models in MongoDB to optimize data retrieval and relationships.
- Performance Tuning & Optimization: Monitor and tune databases for query performance, indexing strategies, and replication performance.
- Backup, Restore, & Disaster Recovery: Design and implement backup and recovery strategies for Neo4j, MongoDB, and secondary database platforms.
- Migration & Modernization: Lead database migration strategies, including homogeneous and heterogeneous migrations between NoSQL, graph, and RDBMS platforms.
- Capacity Planning: Forecast database growth and plan for scalability, optimal performance, and infrastructure requirements.
- Patch Management & Upgrades: Plan and execute database software upgrades, patches, and service packs.
- Monitoring & Alerting: Set up proactive monitoring and alerting for database health, performance, and potential failures using Datadog, AWS CloudWatch, Azure Monitor, or Prometheus.
- Automation & Scripting: Develop automation scripts using Python, AWS CLI, PowerShell, and shell scripting to streamline database operations.
- Security & Compliance: Implement database security best practices, including access controls, encryption, key management, and compliance with cloud security standards.
- Incident & Problem Management: Work within ITIL frameworks to resolve incidents and service requests, and perform root cause analysis for problem management.
- High Availability & Scalability: Design and manage Neo4j clustering, MongoDB replication/sharding, and HADR configurations across cloud and hybrid environments.
- Vendor & Third-Party Tool Management: Evaluate, implement, and manage third-party tools for Neo4j, MongoDB, and cloud database solutions.
- Cross-Platform Database Support: Provide secondary support for SQL Server (Always On, Replication, Log Shipping), PostgreSQL (Streaming Replication, Partitioning), MySQL (InnoDB Cluster, Master-Slave Replication), and Oracle (RAC, Data Guard, GoldenGate).
- Cloud Platform Expertise: Hands-on with cloud-native database services such as AWS DocumentDB, DynamoDB, Azure Cosmos DB, Google Firestore, and Google Bigtable.
- Cost Optimization: Analyze database workloads, optimize cloud costs, and recommend licensing enhancements.

Knowledge & Skills:
- Strong expertise in Neo4j (Cypher query language, APOC, graph algorithms, GDS library) and MongoDB (Aggregation Framework, sharding, replication, indexing).
- Experience with homogeneous and heterogeneous database migrations (NoSQL-to-NoSQL, Graph-to-RDBMS, RDBMS-to-NoSQL).
- Familiarity with database monitoring tools such as Datadog, Prometheus, CloudWatch, and Azure Monitor.
- Proficiency in automation using Python, AWS CLI, PowerShell, and Bash/shell scripting.
- Experience in cloud-based database deployment using AWS RDS, Aurora, DynamoDB, Azure SQL, Azure Cosmos DB, GCP Cloud SQL, Firebase, and Bigtable.
- Understanding of microservices and event-driven architectures, integrating MongoDB and Neo4j with applications using Kafka, RabbitMQ, or AWS SNS/SQS.
- Experience with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, CloudFormation, Ansible).
- Strong analytical and problem-solving skills for database performance tuning and optimization.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Database specialty certifications in Neo4j and MongoDB (Neo4j Certified Professional, MongoDB Associate/Professional Certification).
- Cloud certifications (AWS Certified Database - Specialty, Azure Database Administrator Associate, Google Cloud Professional Data Engineer).

Preferred Experience:
- 5+ years of experience in database administration, with at least 3 years dedicated to Neo4j and MongoDB.
- Hands-on experience with GraphDB and NoSQL architecture and migrations.
- Experience working in DevOps environments and automated CI/CD pipelines for database deployments.
- Strong expertise in data replication, ETL, and database migration tools such as AWS DMS, Azure DMS, MongoDB Atlas Live Migrate, and the Neo4j ETL Tool.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Gurugram

Work from Office


ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
What you’ll do
We are looking for experienced Knowledge Graph developers who have the following set of technical skillsets and experience.
Undertake complete ownership in accomplishing activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements.
Apply appropriate development methodologies (e.g. agile, waterfall) and best practices (e.g. mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments.
Collaborate with other team members to leverage expertise and ensure seamless transitions; exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management.
Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, and technical architecture (if needed), test cases, and operations management.
Bring transparency in driving assigned tasks to completion and report accurate status.
Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams.
Assist senior team members and delivery leads in project management responsibilities.
Build complex solutions using programming languages, ETL service platforms, etc.
What you’ll bring
Bachelor’s or master’s degree in computer science, engineering, or a related field.
4+ years of professional experience in knowledge graph development in Neo4j, AWS Neptune, or the Anzo knowledge graph database.
3+ years of experience in RDF ontologies, data modelling & ontology development.
Strong expertise in Python, PySpark, and SQL.
Strong ability to identify data anomalies, design data validation rules, and perform data cleanup to ensure high-quality data.
Project management and task planning experience, ensuring smooth execution of deliverables and timelines.
Strong communication and interpersonal skills to collaborate with both technical and non-technical teams.
Experience with automation testing.
Performance Optimization: knowledge of techniques to optimize knowledge graph operations such as data inserts.
Data Modeling: proficiency in designing effective data models within the knowledge graph, including relationships between tables and optimizing data for reporting.
Motivation and willingness to learn new tools and technologies as per the team’s requirements.
Additional Skills:
Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations.
Experience in pharma or life sciences data: familiarity with pharmaceutical datasets, including product, patient, or healthcare provider data, is a plus.
Experience in manufacturing data is a plus.
Capability to simplify complex concepts into easily understandable frameworks and presentations.
Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects.
Travel to other offices as required to collaborate with clients and internal project teams.
Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.
We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying?
At ZS, we’re building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.
To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
NO AGENCY CALLS, PLEASE.
Find Out More At www.zs.com
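The RDF ontology and data-validation skills the listing above calls for can be illustrated with a toy in-memory triple store; the subjects, predicates, and the validation rule below are all hypothetical, chosen only to show the shape of the technique:

```python
# Minimal in-memory triple store illustrating RDF-style (subject,
# predicate, object) modelling. All identifiers are invented examples.
triples = {
    ("drug:123", "rdf:type", "schema:Drug"),
    ("drug:123", "schema:treats", "cond:42"),
    ("cond:42", "rdf:type", "schema:Condition"),
}

def objects_of(store, subject, predicate):
    """All objects for a (subject, predicate) pair -- a toy lookup in the
    spirit of a SPARQL triple pattern."""
    return {o for s, p, o in store if s == subject and p == predicate}

def validate_treats(store):
    """Data-quality rule: every target of 'schema:treats' must itself
    carry an rdf:type. Returns the set of untyped targets (empty = pass)."""
    typed = {s for s, p, o in store if p == "rdf:type"}
    return {o for s, p, o in store if p == "schema:treats" and o not in typed}

print(validate_treats(triples))  # set()  -> the rule passes on this data
```

A real engagement would express the same rule in SHACL or a Cypher/SPARQL constraint query against Neo4j, Neptune, or Anzo; the sketch only conveys what "design data validation rules" means in a triple-based model.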

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chandigarh, India

On-site


We are seeking an experienced MERN Stack Trainer to design, develop, and deliver instructor-led and hands-on training programs covering the full MERN (MongoDB, Express.js, React, Node.js) technology stack. The ideal candidate will possess strong software-architecture knowledge, be well-versed in backend management and design patterns, and be capable of guiding students through both core and advanced topics such as asynchronous programming, database design, scalability, reliability, and maintainability. This role requires designing curriculum, creating lab exercises, evaluating student progress, and continuously refining content to align with industry best practices.
Key Responsibilities:
Curriculum Design & Development: Define learning objectives, course outlines, and module breakdowns for MERN stack topics.
Training Delivery & Facilitation: Conduct live instructor-led sessions (onsite/virtual) adhering to learning principles. Facilitate hands-on labs where participants build real-world projects (e.g., e-commerce site, chat application, CRUD apps). Demonstrate step-by-step development, debugging, and deployment workflows. Assign and review practical exercises; provide detailed feedback and remediation for struggling learners. Mentor participants on best practices, troubleshooting, and performance optimization.
Assessment & Evaluation: Design quizzes, coding challenges, and project assessments that rigorously test conceptual understanding and practical skills. Track participant progress (attendance, lab completion, assessment scores) and prepare weekly status reports. Provide certification guidance and mock interview sessions for MERN-related roles. Continuously collect participant feedback to refine content and delivery style.
Content Maintenance & Continuous Improvement: Stay up-to-date with the latest MERN ecosystem developments: new Node.js features, React releases, database enhancements, DevOps tooling.
Regularly revise training materials to incorporate emerging technologies (e.g., serverless functions, Next.js, GraphQL, TypeScript). Collaborate with instructional designers, subject-matter experts, and other trainers to ensure consistency and quality across programs.
Required Qualifications
Educational Background: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a closely related field.
Professional Experience: Minimum 3 years of hands-on experience designing and building full-stack applications using the MERN stack (Node.js, Express.js, React.js, MongoDB). Preferred: 3 years of formal training or mentoring experience in a classroom (physical/virtual) environment, preferably to engineering students or early-career software engineers.
Technical Proficiency (must demonstrate strong expertise in all of the following):
Node.js & Express.js: building RESTful services, middleware patterns, debugging, error handling, performance tuning.
MongoDB: schema design, indexing, aggregation pipelines, replication, and sharding.
React.js: component architecture, hooks, state management (Redux or equivalent), React Router, testing frameworks (Jest, React Testing Library).
Frontend Technologies: HTML5 semantics, CSS3 (Flexbox, Grid, responsive design, Sass/LESS), Bootstrap, JavaScript (ES6+), jQuery fundamentals.
Database Administration: proficiency in at least one relational database (PostgreSQL or MariaDB) and one NoSQL/document database (MongoDB). Familiarity with Redis (caching/real-time sessions), Neo4j, and InfluxDB (optional).
Software Architecture & Design Patterns: SOLID principles, MVC/MVVM, event-driven patterns, microservices vs. monolith trade-offs.
DevOps & Tooling: Git/GitHub workflows, containerization basics (Docker), basic cloud deployment.
Testing & Quality: unit testing (Mocha/Chai, Jest), integration testing (Supertest), basic performance testing (JMeter), code linting (ESLint), code coverage.
Soft Skills Excellent verbal and written communication skills; ability to explain complex concepts in a simplified and structured manner. Proven classroom management and facilitation skills; adaptable to diverse learner backgrounds. Strong problem-solving aptitude and the ability to perform live troubleshooting during sessions. Demonstrated organizational skills: ability to manage multiple batches, track progress, and ensure timely delivery of content. High degree of professionalism, punctuality, and ownership.
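The middleware patterns this trainer is expected to teach for Express.js are language-independent; here is a minimal sketch of the same request/next chaining in Python, so the control flow is visible without a Node runtime (routes, handler names, and payloads are invented for the example):

```python
# Express.js-style middleware chaining, sketched in Python: each
# middleware receives the request and a `next_` callable and decides
# whether to pass control on or short-circuit with a response.
def logger(req, next_):
    req.setdefault("log", []).append(f"-> {req['path']}")
    return next_(req)

def auth(req, next_):
    if not req.get("user"):
        return {"status": 401}          # short-circuit, like res.sendStatus(401)
    return next_(req)

def handler(req):
    return {"status": 200, "body": f"hello {req['user']}"}

def chain(middlewares, final):
    """Compose middlewares around a final handler, Express-style."""
    def run(req, stack=middlewares):
        if not stack:
            return final(req)
        return stack[0](req, lambda r: run(r, stack[1:]))
    return run

app = chain([logger, auth], handler)
print(app({"path": "/me", "user": "ada"}))   # {'status': 200, 'body': 'hello ada'}
print(app({"path": "/me"})["status"])        # 401
```

In a classroom the same exercise would then be repeated in JavaScript with `app.use(...)`, which is where the pattern actually lives in the MERN stack.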

Posted 3 weeks ago

Apply

40.0 years

0 Lacs

Hyderābād

On-site


India - Hyderabad JOB ID: R-213724 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 03, 2025 CATEGORY: Information Systems ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for the day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs. Roles & Responsibilities: Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems. Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets. Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments. 
Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements. Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows. Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls. Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis. Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives. Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes. Mentor and develop a high-performing team of data operations analysts and leads.
Functional Skills:
Must-Have Skills:
Experience managing a team of data engineers in biotech/pharma domain companies. Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems. Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions. Experience managing data workflows in cloud environments such as AWS, Azure, or GCP. Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions. Working knowledge of SQL, Python, or scripting languages for process monitoring and automation. Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization. Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX). Excellent leadership, communication, and stakeholder engagement skills.
Well-versed in full-stack development & DataOps automation, logging frameworks, and pipeline orchestration tools. Strong analytical and problem-solving skills to address complex data challenges. Effective communication and interpersonal skills to collaborate with cross-functional teams.
Good-to-Have Skills:
Data engineering management experience in Biotech/Life Sciences/Pharma.
Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.
Education and Professional Certifications
Any degree and 9-13 years of experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile SAFe certification preferred.
Soft Skills:
Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
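The SLA and latency tracking described in the role above reduces to a few lines of the Python scripting the listing mentions; the job names, record fields, and thresholds below are invented for illustration:

```python
# Hedged sketch of SLA tracking for data-pipeline runs: given job-run
# records, compute latency and flag runs that breach a per-job SLA.
from datetime import datetime, timedelta

SLA = {"ingest_sales": timedelta(minutes=30)}   # hypothetical per-job SLA

runs = [
    {"job": "ingest_sales", "start": datetime(2025, 6, 1, 2, 0),
     "end": datetime(2025, 6, 1, 2, 20)},       # 20 min: within SLA
    {"job": "ingest_sales", "start": datetime(2025, 6, 2, 2, 0),
     "end": datetime(2025, 6, 2, 2, 45)},       # 45 min: breach
]

def breaches(runs, sla):
    """Return 'job@date' labels for every run whose latency exceeds
    its SLA; jobs without an SLA entry are never flagged."""
    return [r["job"] + "@" + r["start"].date().isoformat()
            for r in runs
            if r["end"] - r["start"] > sla.get(r["job"], timedelta.max)]

print(breaches(runs, SLA))  # ['ingest_sales@2025-06-02']
```

In practice the run records would come from an orchestrator's metadata store (Airflow, Databricks jobs, etc.) and breaches would feed an alerting channel; the sketch only shows the metric computation itself.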

Posted 3 weeks ago

Apply

30.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Position Overview ABOUT APOLLO Apollo is a high-growth, global alternative asset manager. In our asset management business, we seek to provide our clients excess return at every point along the risk-reward spectrum from investment grade to private equity, with a focus on three investing strategies: yield, hybrid, and equity. For more than three decades, our investing expertise across our fully integrated platform has served the financial return needs of our clients and provided businesses with innovative capital solutions for growth. Through Athene, our retirement services business, we specialize in helping clients achieve financial security by providing a suite of retirement savings products and acting as a solutions provider to institutions. Our patient, creative, and knowledgeable approach to investing aligns our clients, businesses we invest in, our employees, and the communities we impact, to expand opportunity and achieve positive outcomes.
OUR PURPOSE AND CORE VALUES
Our clients rely on our investment acumen to help secure their future. We must never lose our focus and determination to be the best investors and most trusted partners on their behalf. We strive to be:
The leading provider of retirement income solutions to institutions, companies, and individuals.
The leading provider of capital solutions to companies. Our breadth and scale enable us to deliver capital for even the largest projects – and our small firm mindset ensures we will be a thoughtful and dedicated partner to these organizations. We are committed to helping them build stronger businesses.
A leading contributor to addressing some of the biggest issues facing the world today – such as energy transition, accelerating the adoption of new technologies, and social impact – where innovative approaches to investing can make a positive difference.
We are building a unique firm of extraordinary colleagues who:
Outperform expectations.
Challenge convention.
Champion opportunity.
Lead responsibly.
Drive collaboration.
As One Apollo team, we believe that doing great work and having fun go hand in hand, and we are proud of what we can achieve together.
Our Benefits
Apollo relies on its people to keep it a leader in alternative investment management, and the firm’s benefit programs are crafted to offer meaningful coverage for both you and your family. Please reach out to your Human Capital Business Partner for more detailed information on specific benefits.
Position Overview
At Apollo, we are a global team of alternative investment managers passionate about delivering uncommon value to our investors and shareholders. With over 30 years of proven expertise across Private Equity, Credit, and Real Assets in various regions and industries, we are known for our integrated businesses, our strong investment performance, our value-oriented philosophy, and our people. We seek a Senior Engineer/Full Stack Developer to innovate, manage, direct, architect, design, and implement solutions focused on our trade operations and controller functions across Private Equity, Credit, and Real Assets. The ideal candidate is a well-rounded, hands-on engineer passionate about delivering quality software on the Java stack. Our Senior Engineer will work closely with key stakeholders in our Middle Office and Controllers teams and in the Credit and Opportunistic Technology teams to successfully deliver business requirements, projects, and programs. The candidate will have proven skills in independently managing the full software development lifecycle, working with end-users, business analysts, and project managers in defining and refining the problem statement, and delivering quality solutions on time. They will have the aptitude to quickly learn and embrace emerging technologies and proven methodologies to innovate and improve the correctness, quality, and timeliness of solutions delivered by the team.
Primary Responsibilities
Contribute to the development of elegant solutions for systems that result in simple, extensible, maintainable, high-quality code.
Participate in design discussions, hands-on technical development, code reviews, quality assurance, observability, and product support.
Use technical knowledge of patterns and code to identify risks and prevent software defects.
Foster a culture of collaboration, disciplined software engineering practices, and a mindset to leave things better than you found them.
Optimize team processes to improve productivity and responsiveness to feedback and changing priorities.
Build strong relationships with key stakeholders, collaborate, and communicate effectively to reach successful outcomes.
Be passionate about delivering high-impact and breakthrough value to stakeholders.
Desire to learn the domain and deliver enterprise solutions at a higher velocity.
Contribute to deliverables from the early stages of requirement gathering through development, testing, UAT, deployment, and post-production.
Lead in the planning, execution, and delivery of the team’s commitments.
Qualifications & Experience
Master’s or bachelor’s degree in Computer Science or another STEM field.
Experience with software development in the Alternative Asset Management or Investment Banking domain.
5+ years of software development experience in at least one of the following OO languages: Java, C++, or C#.
3+ years of Web 2.0 UI/UX development experience in at least one of the following frameworks using JavaScript/TypeScript: ExtJS, ReactJS, AngularJS, or Vue.
Hands-on development expertise in Java, Spring Boot, REST, Messaging, JPA, and SQL for the last 2+ years.
Hands-on development expertise in building applications using RESTful and microservices architecture.
Expertise in developing applications using TDD/BDD/ATDD, with hands-on experience with at least one of JUnit, Spring Test, TestNG, or Cucumber.
A strong understanding of SOLID principles, Design Patterns, and Enterprise Integration Patterns.
A strong understanding of relational databases, SQL, ER modeling, and ORM technologies.
A strong understanding of BPM and its application.
Hands-on experience with various CI/CD practices and tools such as Jenkins, Azure DevOps, TeamCity, etc.
Exceptional problem-solving and debugging skills.
Awareness of emerging application development methodologies, design patterns, and technologies.
Ability to quickly learn new and emerging technologies and adopt solutions from within the company or the open-source community.
Experience with the below will be a plus:
Buy-side operational and fund accounting processes.
Business processes and workflows using modern BPM/Low Code/No Code platforms (JBPM, Bonitasoft, Appian, Logic Apps, Unqork, etc.).
OpenAPI, GraphQL, gRPC, ESB, SOAP, WCF, Kafka, and Node.
Serverless architecture.
Microsoft Azure.
Designing and implementing microservices on AKS.
Azure DevOps.
Sencha platform.
NoSQL databases (MongoDB, Cosmos DB, Neo4j).
Python software development.
Functional programming paradigm.
Apollo provides equal employment opportunities regardless of age, disability, gender reassignment, marital or civil partner status, pregnancy or maternity, race, color, nationality, ethnic or national origin, religion or belief, veteran status, gender/sex or sexual orientation, or any other criterion or circumstance protected by applicable law, ordinance, or regulation.
The above criteria are intended to be used as a guide only – candidates who do not meet all the above criteria may still be considered if they are deemed to have relevant experience/equivalent levels of skill or knowledge to fulfil the requirements of the role. Any job offer will be conditional upon and subject to satisfactory reference and background screening checks, all necessary corporate and regulatory approvals or certifications as required from time to time, and entering into definitive contractual documentation satisfactory to Apollo.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact. We believe in the power of people. We are a network strategy and technology company that is motivated by making a difference in people’s lives – their productivity, creativity, health and comfort. We’re looking for a highly motivated, talented and experienced engineer who is passionate about product verification automation activities and is ready to assume a leadership position within the team in addressing future projects. You will certify solutions that provide our customers opportunities to differentiate their service offerings in a very competitive market. The ideal candidate is a flexible, highly technical problem solver with interdisciplinary knowledge of software, testing, and test automation. You feel at home in a dynamic, multi-disciplined engineering environment, acting as an interface between product design, other Blue Planet test engineering teams, and members of other functional groups (support, documentation, marketing, etc.).
RESPONSIBILITIES
Engage with various engineering teams, product line managers and product owners to transform concepts and high-level requirements into optimized test coverage and enhanced customer experience.
Automate and maintain all manually devised and executed test cases using automation best practices, and maintain the CI/CD pipeline framework.
Coding E2E automated tests for the Angular UI frontend with Cucumber/Webdriver.io.
Coding REST API testing automation.
Coding system tests with Ansible and Bash scripting.
Drive (plan and implement) lab or simulation environment setup activities to fully address proposed testing scenarios, and coordinate equipment acquisition/sharing agreements with the various teams concerned.
Analyse test results and prepare test reports.
Investigate software defects, highlight critical issues that can have potential customer impact, and consult with software development engineers in finding resolutions or to address problems related to specifications and/or test plans/procedures.
Raise Agile Jira bugs for product defects.
Report on automation status.
Research the best tools/ways of test automation for required functionality.
Skills expected from the candidate:
Frontend testing frameworks/libraries: Cucumber/Webdriver.io
Backend programming/markup languages: Python
Backend testing: REST API testing automation tools, Postman/Newman, Jasmine
Load testing: JMeter, Grafana + Prometheus
Container management: Docker, Kubernetes, OpenStack
Testing theory: terminology, testing types, asynchronous automated testing
Continuous integration tools: Jenkins, TeamCity, GitLab
Cloud environments: AWS, Azure, Google Cloud
Version control systems: Git, Bitbucket
System testing automation with: Bash, Shell, Python, Ansible scripting
Hands-on experience of CI/CD pipeline configuration and maintenance
Solid operational and administrator experience with Unix operating systems
Understanding of web application and microservice solution architecture
Strong ability to rapidly learn new complex technological concepts and apply knowledge in daily activities
Excellent written (documentation) and interpersonal communication skills (English)
Strong ability to work as part of a team or independently with little supervision
Experience working as part of an Agile scrum team and with DevOps processes.
Desirable for the candidate:
Ticketing: Jira
Documentation: Confluence, GitLab
Frontend programming/markup languages: TypeScript/JavaScript, HTML, CSS, SVG
Frontend development frameworks/libraries: Angular 2+, Node.js/npm, D3.js, gulp
Programming theory: algorithms and data structures, relational and graph database concepts, etc.
Non-critical extras:
Domain: Telecom, Computer Networking, OSS
Builds: Maven, NPM, JVM, NodeJS
Databases: PostgreSQL, Neo4j, ClickHouse
Test management: TestRail
Other skills: ElasticSearch, Drools, Kafka integration, REST (on Spring MVC), SSO (LDAP, Reverse Proxy, OAuth2)
Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.
At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
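The REST API testing automation this listing calls for can be smoke-tested with nothing but the Python standard library; the endpoint and payload below are invented, and a real suite would target an actual service via Postman/Newman or a test framework:

```python
# A self-contained REST API smoke test: a throwaway HTTP server plays
# the system under test, and the assertions check status and JSON body,
# as a Postman/Newman collection step would.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "up"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    payload = json.load(resp)
    code = resp.status

server.shutdown()
print(code, payload)   # 200 {'status': 'up'}
```

The same shape scales up naturally: swap the throwaway server for the deployed service URL and the bare asserts for a pytest or Jasmine suite wired into the CI/CD pipeline the role maintains.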

Posted 3 weeks ago

Apply