7.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
The Data Scientist-3 in Bangalore (or Mumbai) will be part of the 811 Data Strategy Group that comprises Data Engineers, Data Scientists and Data Analytics professionals. He/she will be associated with one of the key functional areas such as Product Strategy, Cross Sell, Asset Risk, Fraud Risk, Customer Experience etc. and help build robust and scalable solutions that are deployed for real-time or near real-time consumption and integrated into our proprietary Customer Data Platform (CDP). This is an exciting opportunity to work on data-driven analytical solutions and have a profound influence on the growth trajectory of a fast-evolving digital product. Key Requirements of the Role: Advanced degree in an analytical field (e.g., Data Science, Computer Science, Engineering, Applied Mathematics, Statistics, Data Analysis) or substantial hands-on work experience in the space. 7-10 years of relevant experience in the space. Expertise in mining AI/ML opportunities from open-ended business problems and driving solution design/development while closely collaborating with engineering, product and business teams. Strong understanding of advanced data mining techniques, curating, processing and transforming data to produce sound datasets. Strong experience in NLP, time series forecasting and recommendation engines preferred. Ability to create great data stories with expertise in robust EDA and statistical inference; should have at least a foundational understanding of experimentation design. Strong understanding of the machine learning lifecycle: feature engineering, training, validation, scaling, deployment, scoring, monitoring, and the feedback loop. Exposure to deep learning applications and tools like TensorFlow, Theano, Torch, Caffe preferred. Experience with analytical programming languages, tools and libraries (Python a must) as well as shell scripting; should be proficient in developing production-ready code as per best practices. Experience in using Scala/Java/Go based libraries a big plus. Very proficient in SQL and other relational databases along with PySpark or Spark SQL; proficient in using NoSQL databases. Experience in using graph databases like Neo4j a plus. The candidate should be able to handle unstructured data with ease, should have experience working with MLEs, and should be proficient in using MLOps tools, consuming their capabilities with a deep understanding of the deployment lifecycle. Experience in CI/CD deployment is a big plus. Knowledge of key concepts in distributed systems like replication, serialization, concurrency control etc. a big plus. Good understanding of programming best practices and building code artifacts for reuse; should be comfortable with version control and collaborating in tools like Git. Ability to create frameworks that can perform model RCAs using analytical and interpretability tools; should be able to peer-review model documentation/code bases and find opportunities. Experience in end-to-end delivery of AI-driven solutions (deep learning, traditional data science projects). Strong communication, partnership and teamwork skills. Should be able to guide and mentor teams while leading them by example, and should be an integral part of creating a team culture focused on driving collaboration, technical expertise and partnerships with other teams.
Ability to work in an extremely fast-paced environment, meet deadlines, and perform at high standards with limited supervision. A self-starter who is looking to build from the ground up and contribute to the making of a potential big name in the space. Experience in banking and financial services is a plus; however, sound logical reasoning and first-principles problem solving are even more critical. A typical day in the life of the job role: 1. As a key partner at the table, attend key meetings with the business team to bring the data perspective to the discussions. 2. Perform comprehensive data explorations to generate inquisitive insights and scope out the problem. 3. Develop simple to advanced solutions to address the problem at hand; we believe in making swift (albeit sometimes marginal) impact on business KPIs and hence adopt an MVP approach to solution development. 4. Build re-usable code and analytical frameworks to address commonly occurring business questions. 5. Perform 360-degree customer profiling and opportunity analyses to guide new product strategy; this is a nascent business and hence opportunities to guide business strategy are plenty. 6. Guide team members on data science and analytics best practices to help them overcome bottlenecks and challenges. 7. The role will be approximately 60% individual contribution and 40% leading, and the ratio can vary based on need and fit. 8. Develop Customer-360 features that will be integrated into the Customer Data Platform (CDP) to enhance the single view of our customer (a PySpark sketch of this kind of feature build follows below).
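By way of illustration, here is a minimal PySpark sketch of the kind of Customer-360 feature aggregation this posting describes. The table paths, column names, and feature definitions are hypothetical stand-ins, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer360-features").getOrCreate()

# Hypothetical transaction table; in practice this would come from the data lake.
txns = spark.read.parquet("s3://example-bucket/transactions/")

# Aggregate per-customer behavioral features for the CDP.
features = (
    txns.groupBy("customer_id")
        .agg(
            F.count(F.lit(1)).alias("txn_count"),
            F.sum("amount").alias("txn_value"),
            F.countDistinct("merchant_category").alias("category_breadth"),
        )
)

features.write.mode("overwrite").parquet("s3://example-bucket/cdp/customer_features/")
```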
Posted 13 hours ago
6.0 years
2 - 5 Lacs
Pune
On-site
Our team members are at the heart of everything we do. At Cencora, we are united in our responsibility to create healthier futures, and every person here is essential to us being able to deliver on that purpose. If you want to make a difference at the center of health, come join our innovative company and help us improve the lives of people and animals everywhere. Apply today! Job Details Cencora is looking for a Mid-Level SQL Server Database Developer to join our Data Warehouse Team in our 3rd Party Logistics Division. You will work closely with the Data Warehouse, Business Intelligence, EDI and Account Management teams; lessons learned from this activity will provide a foundation for growth toward your career goals, and exciting opportunities to take our operations further with new technologies and methods. If you love a fast-paced and challenging work environment with many future world-wide support opportunities, you may be our ideal candidate. Shift: 02:00 PM to 11:00 PM IST PRIMARY DUTIES AND RESPONSIBILITIES: Build out new code management, release, and control procedures. Troubleshoot SSIS package & SQL job failures. Set up new inbound and outbound file processing requests. Possess strong data analysis skills and an investigative mindset to troubleshoot and resolve issues by analyzing data and examining code in depth. Be highly skilled in debugging and understanding existing T-SQL code bases to connect the dots and resolve complex issues. Develop centralized performance and security monitoring methods. Design and implement High Availability and Disaster Recovery solutions. Hands-on experience in Microsoft SQL Server installation, configuration, performance tuning, maintenance, and database administration on production servers. Maintain backup & recovery plans. Participate in the on-call rotation schedule. Perform multiple Windows Server and SQL upgrades/migrations. Work with supporting vendors, database owners, and infrastructure teams. Work with the Windows environment for better SQL Server compliance. Contribute to new cloud platform choices. Be well-organized and focused with good communication skills. REQUIREMENTS: 6+ years - SQL Server T-SQL; 4+ years - SSIS development and support; 4+ years - SQL Server administration; 4+ years - Windows Server administration; 4+ years - data warehouse environment; one of the following: PowerShell (3+ years) or C# (3+ years). Nice to have: 3rd Party Logistics experience (a major plus); PowerShell; AS400/RPG knowledge; Windows Server administration; Azure. Experience & Educational Requirements: Bachelor's Degree in Computer Science, Information Technology or any other related discipline, or equivalent related experience. 2+ years of directly-related or relevant experience, preferably in software design and development. Preferred Certifications: Android Development Certification; Microsoft ASP.Net Certification; Microsoft Certified Engineer; Application/Infrastructure/Enterprise Architect training and certification, e.g.
TOGAF; Certified Scrum Master; SAFe Agile Certification; DevOps certifications like AWS Certified DevOps Engineer. Skills & Knowledge: Behavioral Skills: Critical Thinking, Detail Oriented, Interpersonal Communication, Learning Agility, Problem Solving, Time Management. Technical Skills: API Design, Cloud Computing Methodologies, Integration Testing & Validation, Programming/Coding, Database Management, Software Development Life Cycle (SDLC), Technical Documentation, Web Application Infrastructure, Web Development Frameworks. Tools Knowledge: Cloud Computing Tools like AWS, Azure, Google Cloud; Container Management and Orchestration Tools; Big Data Frameworks like Hadoop; Java Frameworks like JDBC, Spring, ORM Solutions, JPA, JEE, JMS, Gradle, Object Oriented Design; Microsoft Office Suite; NoSQL Database Platforms like MongoDB, BigTable, Redis, RavenDB, Cassandra, HBase, Neo4j, and CouchDB; Programming Languages like JavaScript, HTML/CSS, Python, SQL; Operating Systems & Servers like Windows, Linux, Citrix, IBM, Oracle, SQL. What Cencora offers: Benefit offerings outside the US may vary by country and will be aligned to local market practice. The eligibility and effective date may differ for some benefits and for team members covered under collective bargaining agreements. Full time. Affiliated Companies: CENCORA BUSINESS SERVICES INDIA PRIVATE LIMITED. Equal Employment Opportunity: Cencora is committed to providing equal employment opportunity without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status or membership in any other class protected by federal, state or local law. The company's continued success depends on the full and effective utilization of qualified individuals. Therefore, harassment is prohibited and all matters related to recruiting, training, compensation, benefits, promotions and transfers comply with equal opportunity principles and are non-discriminatory. Cencora is committed to providing reasonable accommodations to individuals with disabilities during the employment process which are consistent with legal requirements. If you wish to request an accommodation while seeking employment, please call 888.692.2272 or email hrsc@cencora.com. We will make accommodation determinations on a request-by-request basis. Messages and emails regarding anything other than accommodation requests will not be returned.
Posted 15 hours ago
10.0 years
28 - 32 Lacs
India
On-site
Job Title: Data Architect – Graph Database Specialist. Location: Bangalore, India. Experience: 10 Years. Notice Period: Immediate to 30 Days. Salary: Up to ₹32 LPA. Job Overview: We are hiring an experienced Data Architect specializing in graph databases to design scalable, high-performance data solutions. The ideal candidate will have a strong background in graph theory, knowledge graph implementation, and graph data modeling, along with hands-on expertise in tools like Neo4j, Neptune, TigerGraph, and others. Key Responsibilities: Translate complex business requirements into technical graph-based data models and solutions. Design and develop graph database architectures, schemas, and indexes optimized for traversal and performance. Implement knowledge graphs aligned with enterprise data strategies. Optimize data queries and manage indexing, caching, and retrieval logic for large-scale graph data. Ensure performance, scalability, reliability, and data integrity in production graph databases. Perform schema migrations and version upgrades, and enforce database security and access controls. Monitor graph database performance metrics and resolve issues proactively. Collaborate with software engineers, data scientists, and product teams to integrate graph databases into broader application ecosystems. Provide technical guidance to junior developers and contribute to best practices and standards. Stay current on graph technology trends, tools, and industry implementations. Required Skills: Proven expertise in graph database technologies: Neo4j, Neptune, TigerGraph, OrientDB, ArangoDB, GraphBase, etc. Deep knowledge of graph theory, data structures, and traversal algorithms. Strong experience with knowledge graph design and deployment. Proficiency in graph query languages like Cypher, Gremlin, or SPARQL. Experience with data modeling, database optimization, and analytics implementation. Excellent collaboration, documentation, and communication skills. Be part of the data future: build intelligent, connected data architectures that power smarter applications. Job Type: Full-time. Pay: ₹2,800,000.00 - ₹3,200,000.00 per year. Schedule: Day shift. Work Location: In person
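To make the Cypher requirement concrete, here is a minimal sketch using the official Neo4j Python driver (v5 API). The Customer/Account data model, connection URI, and credentials are hypothetical assumptions, not details from the posting.

```python
from neo4j import GraphDatabase

# Hypothetical local instance and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def shared_accounts(tx, customer_id):
    # Traverse a hypothetical (Customer)-[:OWNS]->(Account) model to find
    # other customers sharing an account with the given customer.
    result = tx.run(
        "MATCH (c:Customer {id: $cid})-[:OWNS]->(a:Account)<-[:OWNS]-(other:Customer) "
        "RETURN other.id AS other_customer, a.id AS account",
        cid=customer_id,
    )
    return [record.data() for record in result]

with driver.session() as session:
    rows = session.execute_read(shared_accounts, "C-1001")
print(rows)
driver.close()
```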
Posted 15 hours ago
7.0 - 15.0 years
9 - 17 Lacs
Ahmedabad
Work from Office
Job Description Department: Research & Development Key Responsibilities: Lead the design, development, and deployment of data-driven solutions, leveraging Python and machine learning frameworks. Oversee data strategy, ensuring quality, governance, and security. Build and optimize predictive models to solve business challenges. Collaborate with cross-functional teams to integrate data science workflows with Rust-based microservices and cloud platforms. Mentor and guide the data science team, fostering a culture of continuous learning. Stay updated with the latest advancements in AI/ML and recommend their adoption. Required Skills and Qualifications: Proficiency in Python and data science libraries (TensorFlow, PyTorch, Pandas, Scikit-learn). Expertise in AI/ML algorithms, statistical modeling, and data visualization. Good to have experience with big data tools like Hadoop, Spark, or similar. Knowledge of deploying models in production environments. Strong understanding of data governance and compliance standards. Leadership experience with a focus on team management and innovation. Advanced degree in Data Science, Machine Learning, Statistics, or related field. Experience with real-time analytics and streaming data frameworks (e.g., Kafka, Flink). Familiarity with graph-based databases like Neo4j or JanusGraph. Knowledge of AutoML and MLOps frameworks and tools. Exposure to cloud-based AI/ML services (AWS SageMaker, Google AI Platform, Azure ML). Working experience with LLMs, LLaMA, Mistral, RAG, agentic RAG, LoRA, and model optimization
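As a hedged illustration of the scikit-learn predictive-modeling skills listed above, here is a self-contained sketch that trains and evaluates a small model on synthetic data; a real project would start from a curated business dataset rather than random features.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data; not representative of any real business problem.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling + model in one pipeline keeps preprocessing consistent at inference time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```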
Posted 2 days ago
0 years
5 - 8 Lacs
Bengaluru
On-site
Key Responsibilities: Design and implement the Inventory Framework solution, integrating existing and new capabilities while keeping commercialization in mind. Ensure technical preparedness and oversee delivery of the product, including hands-on involvement when necessary. Act as the primary point of contact between the onsite team and offshore stakeholders, ensuring clear and consistent communication. Quickly address and resolve technical issues or escalate them to the appropriate teams when necessary. Define the end-to-end architecture, data models, and integration strategies with OSS/BSS systems. Ensure compliance with TM Forum standards (SID, Open APIs) and industry best practices. Innovate and introduce efficient processes and intelligent solutions using AI/ML to improve operations. Engage with business teams to understand operational challenges and provide practical, scalable solutions. Proficiency in APIs, TM Forum Open APIs, GraphQL, and integration frameworks to enhance system interoperability. Ability to work effectively in agile environments with cross-functional teams, ensuring seamless product development cycles. Knowledge of 4G/5G Core, RAN, transport, and modeling of network assets, connectivity, data centers, and 3GPP interfaces/protocols. Work with microservices architecture, API-driven inventory models, and cloud-native deployments. Proficiency in Python, FastAPI/Django/Flask, and database management (PostgreSQL, MongoDB, Neo4j, GraphQuery). Skills & Qualifications: Strong understanding of inventory systems and their application in telecom networks. Expertise in cloud-native technologies and microservices architecture. Excellent communication and stakeholder management skills. Ability to innovate and apply AI/ML for better process efficiency. Proficiency in APIs and integration frameworks such as GraphQL and TM Forum Open APIs. Experience in agile environments and working with cross-functional teams.
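To illustrate the FastAPI plus API-driven inventory model combination named above, here is a minimal sketch of a resource-inventory endpoint. The Resource fields are simplified stand-ins and do not follow the actual TM Forum (e.g., TMF639) schema; the in-memory store stands in for a real database.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Inventory API (illustrative)")

class Resource(BaseModel):
    # Simplified fields; a real TM Forum-aligned model has many more.
    id: str
    name: str
    resource_type: str
    state: str

RESOURCES: dict[str, Resource] = {}  # in-memory stand-in for an inventory store

@app.post("/resource", status_code=201)
def create_resource(resource: Resource) -> Resource:
    RESOURCES[resource.id] = resource
    return resource

@app.get("/resource/{resource_id}")
def get_resource(resource_id: str) -> Resource:
    if resource_id not in RESOURCES:
        raise HTTPException(status_code=404, detail="resource not found")
    return RESOURCES[resource_id]
```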
Posted 2 days ago
8.0 years
6 - 7 Lacs
Chennai
On-site
We are looking for a full-stack core software engineer with a deep understanding of Java/Python and their ecosystems, and strong hands-on experience in building high-performing, scalable, enterprise-grade applications. You will be part of a talented software team that works on mission-critical applications. As a full-stack core software engineer, your responsibilities include understanding user requirements and working with a development team on the design, implementation, and delivery of Java/Python applications while providing expertise in the full software development lifecycle, from concept and design to testing. The candidate will work closely with the business architecture group to design and implement current and target-state business processes using various tools and technologies. The candidate should ideally have knowledge of several of these technologies: the Java/Python/Unix technology stack, Angular and JavaScript; SQL/NoSQL and graph databases are used for data storage (we tailor the tools to the needs) and are integrated with other bank systems via RESTful APIs/web services and Kafka Streams. Qualifications: 8 to 12 years of industry experience, with strong hands-on experience in the development of mission-critical applications using Java/Python technologies, aligning each project with the firm's strategic objectives, and overseeing team operations to ensure project success. Experience with complex system integration projects. Java, Spring, Spring Boot, Spring Cloud, J2EE design patterns, REST services. Front-end technologies like JavaScript, recent Angular versions, CSS2/CSS3, and HTML. Strong knowledge of SQL, JDBC, Unix commands. Hands-on database experience in relational (Oracle/DB2) and NoSQL (MongoDB) systems. Hands-on experience working with and deploying applications on the cloud. Hands-on experience with code testing tools like JUnit / Mockito / Cucumber. Deployment acquaintance with Apache Tomcat, OpenShift, or other cloud environments. Expertise in test-driven development (JUnit, JMeter), continuous integration (Jenkins), build tools (Maven), version control (Git), and development tools (Eclipse, IntelliJ). Excellent communication skills (written and verbal), ability to work in a team environment. Excellent analytical and problem-solving skills and the ability to work well independently. Experience working with business analysts, database administrators, project managers and technical architects in multiple geographical areas. Experience in the Financial Services industry is an added advantage. Understanding financial and reporting hierarchies will be beneficial. Required Skills: Minimum 8 to 12 years of application development experience in Java with: Spring Boot & microservices; REST web services; JPA with Hibernate; Core Java. Minimum 3+ years of hands-on experience in designing architecture for enterprise applications. Angular and JavaScript. Experience working on a native cloud platform. Experience with development IDEs such as Eclipse and IntelliJ. Experience with SQL/NoSQL databases such as Oracle, PostgreSQL, Neo4j, and MongoDB. Experience with caching frameworks such as Redis. Experience with CI/CD systems such as Helm and Harness. Experience with messaging services such as Kafka. Experience in Python and Unix shell scripting will be an added plus. Excellent troubleshooting skills. Strong problem-solving skills, business acumen, and demonstrated excellent oral and written communication skills with both technical and non-technical audiences.
Experience with Agile Software Development Lifecycle methodology and related tooling, for example JIRA and Scrum. Education: Bachelor's or equivalent degree in Computer Science - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 days ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Job Description: We are seeking an extremely passionate and experienced Node.js Backend Developer to join our team and work on building and maintaining a high-performance IoT backend system. You will be responsible for developing robust and scalable APIs, implementing data synchronization services, and integrating with 3rd party SaaS solutions at the platform level. This role is critical to ensuring the performance, reliability, and availability of our backend systems. You'll work on cutting-edge technologies with a real-world impact. Responsibilities: Design, develop, and maintain backend services for our IoT platform using Node.js. Develop and optimize high-performance APIs to handle large data volumes & a growing user base. Implement data synchronization services across distributed systems. Integrate 3rd party data sources and APIs into our platform. Work with both SQL and NoSQL databases. Collaborate with the frontend developers and other team members to ensure seamless integration. Troubleshoot and resolve issues related to the backend system. Ensure 99.999% uptime & performance SLAs of the production environment. Manage basic DevOps tasks such as CI/CD pipelines, Kubernetes cluster management, and application deployment processes. Write clean, efficient, and well-documented code with a high test coverage. Apply logical problem-solving skills to address complex challenges. Required Qualifications: B.Tech. degree or higher educational qualification. 3+ years of experience as a Node.js developer in a production environment. Proven experience building and maintaining high-performance APIs. Hands-on experience working with SQL and NoSQL databases (e.g., PostgreSQL, ClickHouse). Strong understanding of microservice architecture concepts & hands-on experience implementing it in production systems. Deep understanding of use cases & experience with Apache Kafka & Redis. Strong understanding of backend development principles and best practices. Familiarity with basic DevOps practices and CI/CD tools. Excellent coding, debugging and logical problem-solving skills. Passion for technology and building innovative solutions. Preferred Qualifications: Experience developing APIs & services, with a deeper understanding of the quality controls. Knowledge of IoT data and related technologies. Experience with Kubernetes cluster setup & management is a plus. Experience with Graph DB (e.g. Neo4j).
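The Kafka-based data-synchronization duty above is language-agnostic; as a hedged sketch (shown in Python via the kafka-python client for consistency with the other sketches in this listing, though the role itself uses Node.js), a minimal telemetry consumer. The topic name and payload fields are hypothetical.

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "iot-telemetry",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="telemetry-sync",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real sync service would upsert into the SQL/NoSQL stores here.
    print(event.get("device_id"), event.get("metric"), event.get("value"))
```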
Posted 3 days ago
8.0 - 13.0 years
12 - 17 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
At Shakudo, we're building the world's first operating system for data and AI: a unified platform that streamlines powerful open-source and proprietary tools into a seamless, production-ready environment. We're looking for a Principal Software Development Engineer to lead the development of full end-to-end applications on our platform. This role is ideal for engineers who love solving real customer problems, moving across the stack, and delivering high-impact solutions that showcase what's possible on Shakudo. What You'll Do Design and build complete applications from backend to frontend using Shakudo and open-source tools like Neo4j, Ollama, Spark, and many more. Solve real-world data and AI challenges with elegant, production-ready solutions. Collaborate with Product and Customer Engineering to translate needs into scalable systems. Drive architecture and design patterns for building on Shakudo with high autonomy and self-direction. Set the standard for building efficient, reusable, and impactful solutions. What You Bring 8+ years building production systems across the stack. Strong backend and frontend experience (e.g. Python, React, TypeScript). Familiarity with cloud infrastructure, Kubernetes, and data/AI tooling. A hands-on, solutions-first mindset and a passion for fast, high-quality delivery. Why This Role You'll lead by example, building flagship applications that demonstrate the power of Shakudo. This role offers high ownership, high impact, and the chance to shape how modern data and AI solutions are built.
Posted 3 days ago
2.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
A Python developer will be responsible for development of a robust backend. You will be involved with the technical aspects of a project, as well as the maintenance and update of an existing project. You will be working alongside other engineers and developers working on different layers of the infrastructure; therefore, a commitment to collaborative problem solving, sophisticated design, and the creation of quality products is essential. Collaborate with a team to define, design, and ship new features. Understand the user stories and derive design guidelines. Be involved end to end in design and in development. Conduct unit testing based on the test case template. Ensure successful rollout into production based on priorities and adhering to timelines. Ensure the performance, quality, and responsiveness of applications. DB: Mongo, MySQL, Neo4j, Postgres. App Servers: Apache, Nginx. Skills: 2+ years of experience in Python. Proficient in Python 3 and MySQL. Experience with Python web frameworks like Flask and Django. Experience with REST APIs and integration. Knack for benchmarking and optimization. Proficient understanding of code versioning tools such as Git, Mercurial or SVN. Experience with Agile and Waterfall SDLC models. Python 3, Team Handling, REST API, SVN, Code Versioning Tools, Waterfall and Agile, Mercurial, Flask, MySQL, Django Templates and BackboneJS, Git, SDLC, Integration
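As a small illustration of the Flask REST API work described above, here is a minimal sketch with an in-memory store standing in for the Mongo/MySQL/Postgres databases mentioned; the route names and payload shape are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {}  # in-memory stand-in for MySQL/Mongo/Postgres

@app.route("/items", methods=["POST"])
def create_item():
    # Store the posted JSON payload keyed by its "id" field.
    payload = request.get_json()
    ITEMS[payload["id"]] = payload
    return jsonify(payload), 201

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run(debug=True)
```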
Posted 3 days ago
9.0 - 14.0 years
20 - 25 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
We are seeking a highly skilled and motivated Knowledge Graph Engineer to design, develop, and maintain graph-based data solutions using Neo4j, Cypher, and AWS. The ideal candidate will have a strong background in graph databases, data modeling, and cloud infrastructure, with a passion for turning complex data into meaningful insights. Key Responsibilities: Design and implement scalable knowledge graph solutions using Neo4j. Write efficient and optimized Cypher queries for data retrieval and manipulation. Develop data pipelines to ingest, transform, and load data into graph databases (a sketch of one such pipeline follows below). Collaborate with data scientists, architects, and domain experts to model complex relationships. Deploy and manage graph database solutions on AWS infrastructure. Ensure data quality, consistency, and security across the knowledge graph. Monitor performance and troubleshoot issues in graph-based applications. Stay updated with the latest trends and advancements in graph technologies and cloud services. Required Skills & Qualifications: Proven experience with Neo4j and the Cypher query language. Strong understanding of graph theory, data modeling, and semantic relationships. Hands-on experience with AWS services such as EC2, S3, Lambda, RDS, and IAM. Proficiency in Python, Java, or Scala for data processing and integration. Experience with ETL pipelines, data integration, and API development. Familiarity with RDF, SPARQL, or other semantic web technologies is a plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities.
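A minimal sketch of one way such an ingestion pipeline could look, combining boto3 (AWS S3) with the Neo4j Python driver. The bucket, key, CSV layout, node label, and relationship type are all hypothetical assumptions.

```python
import csv
import io

import boto3
from neo4j import GraphDatabase

# Hypothetical bucket/key holding edges as "source,target" CSV rows.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="graph/edges.csv")
rows = list(csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8"))))

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_edges(tx, batch):
    # MERGE keeps the load idempotent: rerunning does not duplicate nodes/edges.
    tx.run(
        "UNWIND $rows AS row "
        "MERGE (a:Entity {id: row.source}) "
        "MERGE (b:Entity {id: row.target}) "
        "MERGE (a)-[:RELATED_TO]->(b)",
        rows=batch,
    )

with driver.session() as session:
    session.execute_write(load_edges, rows)
driver.close()
```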
Posted 3 days ago
0.0 years
0 - 0 Lacs
Bengaluru, Karnataka
On-site
Role: Internship Opportunity – Full Stack Developer Duration: 6 Months Type: Internship (6 days a week) Location: Peenya III Phase, Peenya, Bengaluru, Karnataka 560058 Stipend: As Applicable Who can apply - Students currently pursuing B.E. / B.Tech / BCA / MCA - Available for a full-time internship (6 months) - Can work from the office as required (6 days a week) Responsibilities As a Full Stack Intern, you will: Execute small time-bound projects, including Technical Proof of Concept (PoC) and Proof of Value (PoV) initiatives. Rapidly build and prototype using platforms like GitHub Copilot, Firebase Studio, Replit, and similar GenAI tools. Develop end-to-end solutions covering: frontend using modern reactive UI frameworks; backend using Spring Boot, Python, and event-driven architectures; databases – SQL, NoSQL, GraphDB, VectorDB, and object stores. Contribute to microservice-based architectures with cloud-native deployment. Integrate and interface with AI/ML systems, analytics pipelines, and CI/CD automation. Package and deploy applications using Docker and Kubernetes. Work across Development, Staging, QA, and Production environments. Technical Skill Requirements Familiarity with: SQL and NoSQL databases (e.g., PostgreSQL, MongoDB); graph and vector databases (e.g., Neo4j, Milvus); the Spring stack (Spring Boot, Spring Cloud); Python (Flask, FastAPI, NumPy, Pandas, etc.); frontend frameworks (e.g., React, Vue, Svelte); CI/CD pipelines and Git strategies. Awareness of: event-based and reactive programming; CQRS and CDC (Change Data Capture); cloud infrastructure provisioning and monitoring tools. Preferred Tools and Platforms: GitHub Copilot / CodeWhisperer; Firebase Studio; Replit / CodeSandbox; Docker, Kubernetes; Git, GitHub, GitLab; open-source observability and analytics frameworks. Job Type: Internship Contract length: 6 months Pay: ₹15,000.00 - ₹18,000.00 per month Schedule: Day shift Application Question(s): Are you familiar with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) and graph and vector databases (e.g., Neo4j, Milvus)? Are you familiar with the Spring stack (Spring Boot, Spring Cloud)? Are you familiar with Python (Flask, FastAPI, NumPy, Pandas, etc.) and frontend frameworks (e.g., React, Vue, Svelte)? Are you available for a 6-month internship opportunity? If yes, from when? Work Location: In person Expected Start Date: 01/07/2025
Posted 3 days ago
0 years
0 Lacs
India
On-site
🚀 We Are #Hiring! 🚀 Express Analytics is looking for a System & Network Administrator (full-time) based in India to support our IT infrastructure and ensure seamless operations. If you are proficient in managing and troubleshooting complex IT environments, this role is for you. ** CTC Offering > between 5 and 10 LPA (yearly) ** ** Experience should be 5-6 years to be eligible for this role ** ** Note > Linux experience is a must ** ** Other Required Skills (Must-Have) below:- ** ✅ Windows Server & Desktop ✅ Ubuntu Linux & RedHat Linux ✅ VMware ESXi setup & VM configuration (Ubuntu/RHEL/Windows) ✅ Fortinet firewall configuration & networking troubleshooting ✅ External DNS management (GoDaddy, Namecheap) ✅ Apache/Nginx application configuration ✅ GitLab setup & configuration ✅ User application setup & troubleshooting ✅ Neo4j installation & troubleshooting ✅ Hardware knowledge ✅ Informatica & Oracle DBA expertise ✅ AWS S3 (backup management) ✅ Application Password Management System We want to hear from you! APPLY NOW!
Posted 3 days ago
2.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
About ValGenesis ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ About the Role: We are seeking a highly skilled AI/ML Engineer to join our dynamic team to build the next-gen applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you. Responsibilities: Implement and deploy machine learning solutions to solve complex problems and deliver real business value, i.e., revenue, engagement, and customer satisfaction. Collaborate with data product managers, software engineers and SMEs to identify AI/ML opportunities for improving process efficiency. Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis. Monitor and improve model performance via data enhancement, feature engineering, experimentation and online/offline evaluation. Stay up-to-date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry. Requirements: 2 - 4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects. Strong programming skills in Python and Rust. Experience with Pandas, NumPy, SciPy, OpenCV (for image processing). Experience with ML frameworks such as scikit-learn, TensorFlow, PyTorch. Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs. Experience with one or more graph DBs - Neo4j, ArangoDB. Experience with MLOps platforms, such as Kubeflow or MLflow. Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models. Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems. Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools) and best practices. Excellent written and verbal communication skills and interpersonal skills. Advanced degree in Computer Science, Machine Learning or related field. We're on a Mission In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations. The Team You'll Join Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission.
We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done. We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward. We’re in it to win it. We’re on a path to becoming the number one intelligent validation platform in the market, and we won’t settle for anything less than being a market leader. How We Work Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration fosters creativity, and a sense of community, and is critical to our future success as a company. ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
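Tying back to the MLOps platforms named in the requirements above (Kubeflow or MLflow), here is a minimal MLflow experiment-tracking sketch; the experiment name, model, parameter, and metric are hypothetical placeholders.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # Parameters and metrics land in the tracking store for run-to-run comparison.
    mlflow.log_param("C", C)
    mlflow.log_metric("accuracy", acc)
```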
Posted 4 days ago
5.0 - 10.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Overview Core Responsibilities: 1. Orchestration and Automation: Automate service activation and management across different network domains, vendors, and layers. 2. Troubleshooting and Problem Solving: Diagnose and resolve network and service issues, requiring strong analytical and problem-solving skills. 3. Monitoring and Visualization: Utilize tools to monitor network performance and correlate service issues with network events. 4. Data Collection and Analysis: Gather and analyze data from network devices and systems to identify trends and root causes. 5. Tool Development: Potentially develop or enhance tools for monitoring, data collection, and automation. 6. Communication and Collaboration: Communicate technical information clearly to both technical and non-technical audiences, and collaborate with global teams. Required Skills and Experience: Technical Skills: 1. Strong Linux skills and scripting experience (e.g., Python, shell scripting). 2. Experience with network troubleshooting and network management solutions (OSS/BSS). 3. Familiarity with network protocols (e.g., SNMP, Syslog, ICMP, SSH). 4. Experience with databases (e.g., PostgreSQL, Neo4j, MySQL). 5. Knowledge of Blue Planet products (BPI, BPO, ROA). Soft Skills: 1. Strong analytical and problem-solving skills. 2. Excellent communication (written and verbal) skills. 3. Ability to work independently and as part of a team. 4. Ability to work with a globally distributed team. Specific Knowledge: 1. Understanding of network architecture and technologies. 2. Knowledge of service orchestration principles. 3. Experience with Blue Planet MDSO or similar orchestration platforms. 4. Familiarity with network automation tools and techniques. Educational: 1. Bachelor's degree in Computer Science, Information Technology, or a related field. 2. Relevant certifications (e.g., AWS Certified Solutions Architect, CCNA, Python certifications) are a plus.
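To make the Linux-plus-Python scripting requirement concrete, here is a minimal sketch of a remote device check over SSH using paramiko. The hostnames, credentials, and the uptime command are placeholders; production code would use key-based authentication and proper secret handling rather than inline passwords.

```python
import paramiko

def run_remote(host: str, command: str, user: str, password: str) -> str:
    """Run a single command on a remote device and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=10)
    try:
        _, stdout, stderr = client.exec_command(command)
        return stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()

# Placeholder device list and credentials.
for device in ["10.0.0.1", "10.0.0.2"]:
    print(device, run_remote(device, "uptime", "admin", "secret"))
```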
Posted 4 days ago
3.0 - 8.0 years
6 - 10 Lacs
Chennai
Work from Office
Overview and Responsibilities: Java development with hands-on experience in Spring Boot. Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications. Experience with Kubernetes for managing microservices-based applications in a cloud environment. Familiarity with Postgres (relational) and Neo4j (graph database) for managing complex data models. Experience in metadata modeling and designing data structures that support high performance and scalability. Expertise in Camunda BPMN and business process automation. Experience implementing rules with the Drools Rules Engine. Knowledge of Unix/Linux systems for application deployment and management. Experience building data ingestion frameworks to process and handle large datasets.
Posted 4 days ago
3.0 - 6.0 years
0 - 0 Lacs
Chennai
Work from Office
AI Engineer: We are seeking a specialized AI Engineer to build the core intelligence of InzightEd. This role requires specific expertise in integrating large language models via APIs and structuring data within relational and graph databases. Performance bonus. Overtime allowance. Work from home.
Posted 4 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary DT-US Product Engineering - Data Scientist Manager We are seeking an exceptional Data Scientist who combines deep expertise in AI/ML with a strong focus on data quality and advanced analytics. This role requires a proven track record in developing production-grade machine learning solutions, implementing robust data quality frameworks, and leveraging cutting-edge analytical tools to drive business transformation through data-driven insights . Work you will do The Data Scientist will be responsible for developing and implementing end-to-end AI/ML solutions while ensuring data quality excellence across all stages of the data lifecycle. This role requires extensive experience in modern data science platforms, AI frameworks, and analytical tools, with a focus on scalable and production-ready implementations. Project Leadership and Management: Lead complex data science initiatives utilizing Databricks, Dataiku, and modern AI/ML frameworks for end-to-end solution development Establish and maintain data quality frameworks and metrics across all stages of model development Design and implement data validation pipelines and quality control mechanisms for both structured and unstructured data Strategic Development: Develop and deploy advanced machine learning models, including deep learning and generative AI solutions Design and implement automated data quality monitoring systems and anomaly detection frameworks Create and maintain MLOps pipelines for model deployment, monitoring, and maintenance Team Mentoring and Development: Lead and mentor a team of data scientists and analysts, fostering a culture of technical excellence and continuous learning Develop and implement training programs to enhance team capabilities in emerging technologies and methodologies Establish performance metrics and career development pathways for team members Drive knowledge sharing initiatives and best practices across the organization Provide technical guidance and code reviews to ensure high-quality deliverables Data Quality and Governance: Establish data quality standards and best practices for data collection, preprocessing, and feature engineering Implement data validation frameworks and quality checks throughout the ML pipeline Design and maintain data documentation systems and metadata management processes Lead initiatives for data quality improvement and standardization across projects Technical Implementation: Design, develop and deploy end-to-end AI/ML solutions using modern frameworks including TensorFlow, PyTorch, scikit-learn, XGBoost for machine learning, BERT and GPT for NLP, and OpenCV for computer vision applications Architect and implement robust data processing pipelines leveraging enterprise platforms like Databricks, Apache Spark, Pandas for data transformation, Dataiku and Apache Airflow for ETL/ELT processes, and DVC for data version control Establish and maintain production-grade MLOps practices including model deployment, monitoring, A/B testing, and continuous integration/deployment pipelines Technical Expertise Requirements: Must Have: Enterprise AI/ML Platforms: Demonstrate mastery of Databricks for large-scale processing, with proven ability to architect solutions at scale Programming & Analysis: Advanced Python (NumPy, Pandas, scikit-learn), SQL, PySpark with production-level expertise Machine Learning: Deep expertise in TensorFlow or PyTorch, and scikit-learn with proven implementation experience Big Data Technologies: Advanced knowledge of Apache Spark, Databricks, 
and distributed computing architectures Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and their ML services (SageMaker/Azure ML/Vertex AI) Data Processing & Analytics: Extensive experience with enterprise-grade data processing tools and ETL pipelines MLOps & Infrastructure: Proven experience in model deployment, monitoring, and maintaining production ML systems Data Quality: Experience implementing comprehensive data quality frameworks and validation systems Version Control & Collaboration: Strong proficiency with Git, JIRA, and collaborative development practices Database Systems: Expert-level knowledge of both SQL and NoSQL databases for large-scale data management Visualization Tools: Tableau, Power BI, Plotly, Seaborn Large Language Models: Experience with GPT, BERT, LLaMA, and fine-tuning methodologies Good to Have: Additional Programming: R, Julia Additional Big Data: Hadoop, Hive, Apache Kafka Multi-Cloud: Experience across AWS, Azure, and GCP platforms Advanced Analytics: Dataiku, H2O.ai Additional MLOps: MLflow, Kubeflow, DVC (Data Version Control) Data Quality & Validation: Great Expectations, Deequ, Apache Griffin Business Intelligence: SAP HANA, SAP Business Objects, SAP BW Specialized Databases: Cassandra, MongoDB, Neo4j Container Orchestration: Kubernetes, Docker Additional Collaboration Tools: Confluence, BitBucket Education: Advanced degree in a quantitative discipline (Statistics, Math, Computer Science, Engineering) or relevant experience. Qualifications: 10-13 years of experience with data mining, statistical modeling tools and underlying algorithms. 5+ years of experience with data analysis software for large-scale analysis of structured and unstructured data. Proven track record of leading and delivering large-scale machine learning projects, including production model deployment, data quality framework implementation and experience with very large datasets to create data-driven insights through predictive and prescriptive analytic models. Extensive knowledge of supervised and unsupervised analytic modeling techniques such as linear and logistic regression, support vector machines, decision trees / random forests, Naïve-Bayesian, neural networks, association rules, text mining, and k-nearest neighbors among other clustering models. Extensive experience with deep learning frameworks, automated ML platforms, data processing tools (Databricks Delta Lake, Apache Spark), analytics platforms (Tableau, Power BI), and major cloud providers (AWS, Azure, GCP). Experience architecting and implementing enterprise-grade solutions using cloud-native ML services while ensuring cost optimization and performance efficiency. Strong track record of team leadership, stakeholder management, and driving technical excellence across multiple concurrent projects. Expert-level proficiency in Python, R, and SQL, with deep understanding of statistical analysis, hypothesis testing, feature engineering, model evaluation, and validation techniques in production environments. Demonstrated leadership experience in implementing MLOps practices, including model monitoring, A/B testing frameworks, and maintaining production ML systems at scale. Working knowledge of supervised and unsupervised learning techniques, such as Regression/Generalized Linear Models, decision tree analysis, boosting and bagging, Principal Components Analysis, and clustering methods.
Strong oral and written communication skills, including presentation skills. The Team Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: Security, risk & compliance; Technology support; Infrastructure; Applications; Relationship management; Strategy; Deployment; PMO; Financials; Communications. Product Engineering (PxE) The Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development. Work Location: Hyderabad. Our purpose Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303069
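Since this role stresses data-quality frameworks and validation pipelines, here is a minimal pandas sketch of the kind of checks such a framework automates; the column names, thresholds, and sample rows are hypothetical, and a production system would run these rules over Spark/Delta Lake tables.

```python
import pandas as pd

# Hypothetical extract; a production framework would pull from Delta Lake/Spark.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "age": [34, 29, 29, 140],
    "signup_date": pd.to_datetime(["2024-01-05", "2024-02-11", "2024-02-11", None]),
})

def quality_report(frame: pd.DataFrame) -> dict:
    # Each check mirrors a rule a data-quality framework would codify.
    return {
        "duplicate_ids": int(frame["customer_id"].duplicated().sum()),
        "null_signup_dates": int(frame["signup_date"].isna().sum()),
        "age_out_of_range": int(((frame["age"] < 0) | (frame["age"] > 120)).sum()),
    }

report = quality_report(df)
print(report)  # {'duplicate_ids': 1, 'null_signup_dates': 1, 'age_out_of_range': 1}
```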
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description We are seeking an extremely passionate and experienced Node.js Backend Developer to join our team and work on building and maintaining a high-performance IoT backend system. You will be responsible for developing robust and scalable APIs, implementing data synchronization services, and integrating with 3rd party SaaS solutions at the platform level. This role is critical to ensuring the performance, reliability, and availability of our backend systems. You'll work on cutting-edge technologies with a real-world impact. Responsibilities: Design, develop, and maintain backend services for our IoT platform using Node.js. Develop and optimize high-performance APIs to handle large data volumes & a growing user base. Implement data synchronization services across distributed systems. Integrate 3rd party data sources and APIs into our platform. Work with both SQL and NoSQL databases. Collaborate with the frontend developers and other team members to ensure seamless integration. Troubleshoot and resolve issues related to the backend system. Ensure 99.999% uptime & performance SLAs of the production environment. Manage basic DevOps tasks such as CI/CD pipelines, Kubernetes cluster management, and application deployment processes. Write clean, efficient, and well-documented code with a high test coverage. Apply logical problem-solving skills to address complex challenges. Required Qualifications: B.Tech. degree or higher educational qualification. 3+ years of experience as a Node.js developer in a production environment. Proven experience building and maintaining high-performance APIs. Hands-on experience working with SQL and NoSQL databases (e.g., PostgreSQL, ClickHouse). Strong understanding of microservice architecture concepts & hands-on experience implementing it in production systems. Deep understanding of use cases & experience with Apache Kafka & Redis. Strong understanding of backend development principles and best practices. Familiarity with basic DevOps practices and CI/CD tools. Excellent coding, debugging and logical problem-solving skills. Passion for technology and building innovative solutions. Preferred Qualifications: Experience developing APIs & services, with a deeper understanding of the quality controls. Knowledge of IoT data and related technologies. Experience with Kubernetes cluster setup & management is a plus. Experience with Graph DB (e.g. Neo4j). Skills: JavaScript, Node.js, database management, microservices architecture, version control, agile methodologies, problem solving, software testing, APIs, backend development, SQL, NoSQL, PostgreSQL, microservices, Kafka, DevOps, debugging, IoT, production support
Posted 4 days ago
6.0 - 11.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Position Overview: We are seeking an experienced and skilled Senior Database Developer to join our dynamic team. The ideal candidate will have at least 8 years of hands-on experience in database development, with a strong focus on Neo4j (graph) databases. The role involves working on cutting-edge projects, contributing to data modelling, and ensuring the scalability and efficiency of our database systems. Responsibilities: Design, develop, and maintain databases, with a primary focus on Cypher/graph databases. Modify databases according to requests and perform tests. Perform advanced querying, performance tuning, and optimization of database systems. Solve database usage issues and malfunctions. Analyze all databases, monitor them against design specifications, and prepare associated test strategies. Evaluate and engineer efficient backup and recovery processes for various databases. Promote uniformity of database-related programming effort by developing methods and procedures for database programming. Remain current with the industry by researching available products and methodologies to determine the feasibility of alternative database management systems, communication protocols, middleware, and query tools. Liaise with developers to improve applications and establish best practices. Ensure the performance, security, and scalability of database systems. Develop and optimize PL/SQL queries for efficient data storage and retrieval. Implement and maintain data models, ensuring accuracy and alignment with business needs. Train, mentor, and motivate junior team members, and contribute to the team's performance evaluations. Stay updated on emerging database technologies and contribute to continuous improvement initiatives. Skills Required: 6+ years of work experience as a database developer. Bachelor's or master's degree in computer science, engineering, or a related field. Proficiency in Neo4j (graph) databases is mandatory. Strong experience with PL/SQL, data modelling, and database optimization techniques. Why us? Impactful Work: Your contributions will play a pivotal role in ensuring the quality and reliability of our platform. Professional Growth: We believe in investing in our employees' growth and development. You will have access to various learning resources, books, training programs, and opportunities to enhance your technical skills and expand your knowledge. Collaborative Culture: We value teamwork and collaboration. You will work alongside talented professionals from diverse backgrounds, including developers, product managers, and business analysts, to collectively solve challenges and deliver exceptional software. Benefits: Health insurance covered for you and your family. Quarterly team outings, twice-monthly team lunches, and personal and professional learning and development sessions. Top performers win a chance to go on an international trip completely sponsored by the company.
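Since this role centers on Cypher and query performance tuning, here is a minimal sketch of what that work can look like from Python using the official neo4j driver. The connection details, the Customer/Account schema, and the index name are illustrative assumptions, not taken from the posting.

from neo4j import GraphDatabase

# Illustrative connection details; replace with your own instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def accounts_for_customer(tx, customer_id):
    # A parameterized query lets Neo4j reuse the cached execution plan,
    # which together with an index is the first lever for tuning reads.
    result = tx.run(
        "MATCH (c:Customer {id: $cid})-[:OWNS]->(a:Account) "
        "RETURN a.number AS number, a.balance AS balance",
        cid=customer_id,
    )
    return [record.data() for record in result]

with driver.session() as session:
    # Index the lookup property so the MATCH does not scan every Customer node.
    session.run("CREATE INDEX customer_id_idx IF NOT EXISTS FOR (c:Customer) ON (c.id)")
    rows = session.execute_read(accounts_for_customer, "C-1001")
driver.close()

Profiling a slow query with EXPLAIN or PROFILE in the Neo4j Browser would be the natural next step in the tuning loop this posting describes.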
Posted 5 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai, Kolkata, Gurgaon, Bangalore, and Pune Experience: 8-12 Years Work Mode: Hybrid Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipelines, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design. Overview We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you! Primary Roles And Responsibilities Develop modern data warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfil them. Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines via the Airflow scheduler (see the sketch below). Skills And Qualifications Bachelor's and/or master's degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of star and snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Hands-on experience in SQL, Python, and Spark (PySpark). Must have experience with the AWS/Azure stack. ETL experience with both batch and streaming (e.g., Kinesis) is desirable. Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Skills: data warehouse, data engineering, etl, python, sql, data pipeline, azure synapse, azure data factory, azure databricks, architecture design, pyspark, azure, airflow
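As a concrete illustration of the orchestration responsibility above, here is a minimal sketch of an Airflow DAG (Airflow 2.4+ syntax assumed) that schedules a daily PySpark aggregation. The DAG id, file paths, column names, and the transform itself are illustrative assumptions, not part of the posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_transform():
    # Import inside the task so the DAG file stays cheap for the scheduler to parse.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_transform").getOrCreate()
    # Hypothetical paths; in practice these would be ADLS/Delta locations.
    df = spark.read.parquet("/data/raw/transactions")
    daily = (
        df.groupBy("account_id", F.to_date("event_ts").alias("day"))
          .agg(F.sum("amount").alias("daily_amount"))
    )
    daily.write.mode("overwrite").parquet("/data/curated/daily_amounts")
    spark.stop()

with DAG(
    dag_id="daily_transactions_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=run_transform)

In a Databricks setup the PythonOperator would typically be swapped for a Databricks job-run operator; the in-process Spark session here just keeps the sketch self-contained.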
Posted 5 days ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Data Modeller JD We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices. Data Modelling: Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards. Create conceptual, logical, and physical data models that support the bank's strategic objectives. Ensure data models are optimized for performance, security, and scalability to support business operations and analytics. Collaboration With Data Architect: Work closely with the Data Architect to establish the overall data architecture strategy and framework. Contribute to the definition of data model structures within a data mesh environment. Data Quality And Governance: Ensure data quality and integrity in the data models by implementing best practices in data governance. Assist in the establishment of data management policies and standards. Conduct regular data audits and reviews to ensure data accuracy and consistency across systems. Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools. Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph). Data Warehousing Technologies: Snowflake, Teradata, or similar. ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar. Big Data Technologies: Hadoop, Spark (optional but preferred). Cloud Technologies: Experience with data modelling on cloud platforms, e.g., Microsoft Azure (Synapse, Data Factory).
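To make the "physical data model" deliverable concrete, here is a minimal sketch of a CASA-style model expressed in SQLAlchemy. The table names, columns, and types are illustrative assumptions rather than anything prescribed by the role.

from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "dim_customer"
    customer_id = Column(Integer, primary_key=True)
    full_name = Column(String(100), nullable=False)

class CasaAccount(Base):
    __tablename__ = "casa_account"
    account_id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("dim_customer.customer_id"), nullable=False)
    account_type = Column(String(8), nullable=False)  # e.g. 'CURRENT' or 'SAVINGS'
    balance = Column(Numeric(18, 2), nullable=False, default=0)
    opened_on = Column(Date, nullable=False)
    customer = relationship("Customer")

# Materialize the physical model against an in-memory database for illustration.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

The same logical entities could equally be expressed as Neo4j nodes and relationships, which is where the graph-database skills listed above come in.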
Neo4j, a popular graph database management system, is seeing a growing demand in the job market in India. Companies are looking for professionals who are skilled in working with Neo4j to manage and analyze complex relationships in their data. If you are a job seeker interested in Neo4j roles, this article will provide you with valuable insights to help you navigate the job market in India.
The average salary range for Neo4j professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the Neo4j skill area, a typical career progression may look like:
- Junior Developer
- Developer
- Senior Developer
- Tech Lead
Apart from expertise in Neo4j, professionals in this field are often expected to have or develop skills in:
- Cypher Query Language (a short example follows below)
- Data modeling
- Database management
- Java or Python programming
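For readers who have not yet seen Cypher, here is a small, self-contained example of what it looks like in practice, run from Python via the official neo4j driver. The connection details and the tiny person/skill schema are purely illustrative.

from neo4j import GraphDatabase

# Illustrative connection details; replace with your own instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    # MERGE creates the nodes and relationship only if they do not already exist.
    session.run(
        "MERGE (p:Person {name: $name}) "
        "MERGE (s:Skill {name: $skill}) "
        "MERGE (p)-[:HAS_SKILL]->(s)",
        name="Asha", skill="Cypher",
    )
driver.close()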
As you explore Neo4j job opportunities in India, it's essential to not only possess the necessary technical skills but also be prepared to showcase your expertise during interviews. Stay updated with the latest trends in Neo4j and continuously enhance your skills to stand out in the competitive job market. Prepare thoroughly, demonstrate your knowledge confidently, and land your dream Neo4j job in India. Good luck!