Experience: 5 years
Posted: 5 days ago
Work mode: On-site
Employment type: Full Time
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Experience: 5-10 years
Education: BTech / BE / MCA / MSc Computer Science
Seeking an experienced Data Engineer to design, develop, and productionize graph database solutions using Neo4j for economic data analysis and modeling. This role requires expertise in graph database architecture, data pipeline development, and production system deployment.
- Design and implement Neo4j graph database schemas for complex economic datasets
- Develop efficient graph data models representing economic relationships, transactions, and market dynamics
- Create and optimize Cypher queries for complex analytical workloads
- Build graph-based data pipelines for real-time and batch processing
- Architect scalable data ingestion frameworks for structured and unstructured economic data
- Develop ETL/ELT processes to transform relational and time-series data into graph formats
- Implement data validation, quality checks, and monitoring systems
- Build APIs and services for graph data access and manipulation
- Deploy and maintain Neo4j clusters in production environments
- Implement backup, disaster recovery, and high availability solutions
- Monitor database performance, optimize queries, and manage capacity planning
- Establish CI/CD pipelines for graph database deployments
- Model financial market relationships, economic indicators, and trading networks
- Create graph representations of supply chains, market structures, and economic flows
- Develop graph analytics for fraud detection, risk assessment, and market analysis
- Collaborate with economists and analysts to translate business requirements into graph solutions
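As a flavor of the ETL responsibilities above, here is a minimal sketch of one pipeline step: reshaping relational transaction rows into a batched payload for a single parameterized Cypher `UNWIND` statement. All entity names (`Institution`, `PAID`) and field names are illustrative assumptions, not part of any actual CLOUDSUFI system.

```python
# Hypothetical ETL step: transform relational economic records into a
# graph-format batch for Neo4j ingestion. Names are illustrative only.

rows = [
    {"payer": "BankA", "payee": "FundB", "amount": 1_200_000, "date": "2024-01-15"},
    {"payer": "FundB", "payee": "BankC", "amount": 450_000, "date": "2024-01-16"},
]

def to_graph_batch(rows):
    """Apply simple data-quality checks and shape rows as UNWIND parameters."""
    batch = []
    for r in rows:
        if r["amount"] <= 0:  # drop invalid transactions (basic validation)
            continue
        batch.append({
            "payer": r["payer"],
            "payee": r["payee"],
            "props": {"amount": r["amount"], "date": r["date"]},
        })
    return batch

# One parameterized statement ingests the whole batch; MERGE keeps
# institution nodes unique across repeated loads.
CYPHER = """
UNWIND $batch AS tx
MERGE (a:Institution {name: tx.payer})
MERGE (b:Institution {name: tx.payee})
CREATE (a)-[:PAID {amount: tx.props.amount, date: tx.props.date}]->(b)
"""

batch = to_graph_batch(rows)
# With the official neo4j Python driver this would run as, e.g.:
#   session.run(CYPHER, batch=batch)
print(len(batch))  # → 2 valid transactions prepared for loading
```

Batching through `UNWIND` rather than issuing one statement per row is the usual way to keep bulk graph loads efficient.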
- Neo4j Expertise: 3+ years hands-on experience with Neo4j database development
- Graph Modeling: Strong understanding of graph theory and data modeling principles
- Cypher Query Language: Advanced proficiency in writing complex Cypher queries
- Programming: Python, Java, or Scala for data processing and application development
- Data Pipeline Tools: Experience with Apache Kafka, Apache Spark, or similar frameworks
- Cloud Platforms: AWS, GCP, or Azure with containerization (Docker, Kubernetes)
- Experience with graph database administration and performance tuning
- Knowledge of distributed systems and database clustering
- Understanding of data warehousing concepts and dimensional modeling
- Familiarity with other databases (PostgreSQL, MongoDB, Elasticsearch)
- Experience working with financial datasets, market data, or economic indicators
- Understanding of financial data structures and regulatory requirements
- Knowledge of data governance and compliance in financial services
- Neo4j Certification: Neo4j Certified Professional or Graph Data Science certification
- Advanced Degree: Master's in Computer Science, Economics, or related field
- Industry Experience: 5+ years in financial services, fintech, or economic research
- Additional Skills: Machine learning on graphs, network analysis, time-series analysis
- Neo4j Enterprise Edition with APOC procedures
- Apache Kafka for streaming data ingestion
- Apache Spark for large-scale data processing
- Docker and Kubernetes for containerized deployments
- Git, Jenkins/GitLab CI for version control and deployment
- Monitoring tools: Prometheus, Grafana, ELK stack
- Portfolio demonstrating Neo4j graph database projects
- Examples of production graph systems you've built
- Experience with economic or financial data modeling preferred
Salary: Not disclosed
Noida, Uttar Pradesh, India