
2 Apache Druid Jobs


5.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Job Purpose
VI is looking for an experienced Apache Druid Developer to join our data engineering team. The ideal candidate will have a deep understanding of real-time data ingestion, processing, and querying in large-scale analytics environments, and will be responsible for designing, implementing, and optimizing data pipelines in Apache Druid to support real-time analytics and drive business insights.

Key Result Areas/Accountabilities
- Data Ingestion Pipelines: Design and implement data ingestion workflows in Apache Druid, including real-time and batch ingestion.
- Query Optimization: Develop optimized Druid queries and leverage Druid's indexing and storage capabilities to ensure low-latency, high-performance analytics.
- Data Modeling: Create and maintain schemas optimized for time-series data analysis, supporting aggregation, filtering, and complex analytical functions.
- Cluster Management: Deploy, configure, and manage Druid clusters, monitoring performance, reliability, and cost-effectiveness.
- Data Integration: Collaborate with other teams to integrate Druid with data sources (e.g., Kafka, Hadoop, S3) and downstream applications.
- Performance Monitoring & Tuning: Continuously monitor cluster performance, fine-tune data configurations, and troubleshoot issues that impact availability and response times.

Core Competencies, Knowledge, Experience
- In-depth experience with Apache Druid (setup, configuration, tuning, and operations).
- Strong knowledge of SQL and familiarity with Druid SQL.
- Experience with data ingestion and ETL pipelines, especially with Kafka, Hadoop, Spark, and other data sources.
- Proficiency in Java, Python, or other programming languages for custom data processing and integration.
- Familiarity with distributed data systems and big data frameworks (e.g., Hadoop, Apache Kafka, Apache Spark).

Must-have technical/professional qualifications
- Bachelor's degree in Computer Science, with 6+ years of experience in Data Science, Engineering, or a related field.
- Experience with data visualization tools (e.g., Superset, Tableau, Looker) and their integration with Druid.
- Experience with other analytics databases (e.g., ClickHouse, Snowflake, BigQuery).
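To give a feel for the batch-ingestion work this role describes, here is a minimal sketch of a Druid native batch (index_parallel) ingestion spec assembled in Python. The field names follow Druid's native batch ingestion format; the datasource name, directory, and column names are hypothetical placeholders.

```python
import json


def build_batch_ingestion_spec(datasource: str, base_dir: str) -> dict:
    """Assemble a minimal Druid native batch ingestion spec as a dict.

    The datasource, baseDir, and columns ("ts", "user_id", "country")
    are illustrative assumptions, not part of any real deployment.
    """
    return {
        "type": "index_parallel",
        "spec": {
            "dataSchema": {
                "dataSource": datasource,
                # Parse the event timestamp from an ISO-8601 "ts" column.
                "timestampSpec": {"column": "ts", "format": "iso"},
                "dimensionsSpec": {"dimensions": ["user_id", "country"]},
                # Roll up raw rows into hourly counts inside daily segments.
                "metricsSpec": [{"type": "count", "name": "count"}],
                "granularitySpec": {
                    "segmentGranularity": "day",
                    "queryGranularity": "hour",
                    "rollup": True,
                },
            },
            "ioConfig": {
                "type": "index_parallel",
                # Read newline-delimited JSON files from a local directory.
                "inputSource": {"type": "local", "baseDir": base_dir, "filter": "*.json"},
                "inputFormat": {"type": "json"},
            },
            "tuningConfig": {"type": "index_parallel"},
        },
    }


spec = build_batch_ingestion_spec("web_events", "/data/events")
print(json.dumps(spec, indent=2))
```

In practice a spec like this would be submitted as JSON to the Overlord's task endpoint; building it programmatically makes it easy to template datasources and schemas across pipelines.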

Posted 1 week ago


4 - 9 years

16 - 25 Lacs

Chennai

Work from Office

Job Summary:
We are seeking a skilled Java Developer with experience in Drools (a business rules management system) or Apache Druid (a real-time analytics database) to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining robust, scalable Java-based applications, with a focus on rule engines or real-time data processing.

Responsibilities:
- Design, develop, test, and deploy Java applications and services.
- Integrate and configure Drools for rule-based processing, or Apache Druid for real-time analytics (depending on skillset).
- Collaborate with cross-functional teams to define technical solutions and requirements.
- Write clean, maintainable, and efficient code following best practices.
- Monitor and troubleshoot application performance and reliability issues.
- Develop unit and integration tests to ensure software quality.
- Maintain technical documentation for systems and processes.

Required Skills and Qualifications:
- 3+ years of experience in Java development.
- Hands-on experience with Drools or Apache Druid (at least one is mandatory).
- Strong understanding of object-oriented design and design patterns.
- Experience with Spring/Spring Boot frameworks.
- Familiarity with RESTful APIs and microservices architecture.
- Knowledge of relational databases (e.g., MySQL, PostgreSQL) and NoSQL stores.
- Experience with build tools (Maven, Gradle) and version control systems (Git).
- Excellent problem-solving and debugging skills.
- Strong communication and teamwork abilities.

Preferred Qualifications:
- Experience working in agile development environments.
- Familiarity with containerization tools (Docker, Kubernetes).
- Exposure to CI/CD pipelines.
- Knowledge of cloud platforms (AWS, GCP, or Azure).
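The Druid side of this role involves calling Druid's SQL API over REST. As a minimal sketch, the snippet below builds (but does not send) an HTTP request for a Broker's /druid/v2/sql endpoint using only the standard library; the Broker URL, datasource, and query are hypothetical.

```python
import json
import urllib.request


def make_druid_sql_request(broker_url: str, sql: str) -> urllib.request.Request:
    """Build an HTTP request for Druid's SQL endpoint without sending it.

    Druid Brokers accept SQL as JSON posted to /druid/v2/sql; the broker
    address and the "web_events" datasource below are assumptions.
    """
    payload = json.dumps({"query": sql}).encode("utf-8")
    return urllib.request.Request(
        url=f"{broker_url}/druid/v2/sql",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = make_druid_sql_request(
    "http://localhost:8082",
    "SELECT country, COUNT(*) AS events FROM web_events "
    "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY GROUP BY country",
)
# Against a running Broker, the request would be sent with
# urllib.request.urlopen(req) and the response parsed as JSON rows.
```

Separating request construction from sending keeps the code testable without a live cluster, which also mirrors the unit-testing expectations listed above.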

Posted 1 month ago
