
20 ClickHouse Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

9.0 - 11.0 years

12 - 17 Lacs

Thiruvananthapuram

Work from Office

Source: Naukri

Educational Qualification: Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, Bachelor of Computer Applications, Master of Technology, Master of Engineering, Master of Science, Master of Computer Applications
Service Line: Engineering Services
Responsibilities:
- Collect, clean, and organize large datasets from various sources
- Perform data analysis using statistical methods, machine learning techniques, and data visualization tools
- Identify patterns, trends, and anomalies within datasets to uncover insights
- Develop and maintain data models to represent the organization's business operations
- Create interactive dashboards and reports to communicate data findings to stakeholders
- Document data analysis procedures and findings to ensure knowledge transfer
Additional Responsibilities:
- High analytical skills
- A high degree of initiative and flexibility
- High customer orientation
- High quality awareness
- Excellent verbal and written communication skills
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Knowledge of two or three industry domains
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management
Technical and Professional Requirements:
- 5+ years of experience as a Data Analyst or in a similar role
- Proven track record of collecting, cleaning, analyzing, and interpreting large datasets
- Expertise in pipeline design and validation
- Expertise in statistical methods, machine learning techniques, and data mining techniques
- Proficiency in SQL, Python, PySpark, Looker, Prometheus, Carbon, ClickHouse, Kafka, HDFS, and the ELK stack (Elasticsearch, Logstash, and Kibana)
- Experience with data visualization tools such as Grafana and Looker
- Ability to work independently and as part of a team
- Problem-solving and analytical skills to extract meaningful insights from data
- Strong business acumen to understand the implications of data findings
Preferred Skills:
- Technology-Analytics - Packages-Python - Big Data
- Technology-Reporting Analytics & Visualization-Pentaho Reporting
- Technology-Cloud Platform-Google Big Data
- Technology-Cloud Platform-GCP Container services-Google Container Registry (GCR)
Generic Skills:
- Technology-Machine Learning-Python

Posted 1 day ago

Apply

3.0 - 4.0 years

2 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Source: Foundit

Responsibilities:
- Design and develop scalable data pipelines to migrate user knowledge objects from Splunk to ClickHouse and Grafana.
- Implement data ingestion, transformation, and validation processes to ensure data integrity and performance.
- Collaborate with cross-functional teams to automate and optimize data migration workflows.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Work closely with observability engineers and analysts to understand data requirements and deliver solutions.
- Contribute to the continuous improvement of the observability stack and migration automation tools.
Required Skills and Qualifications:
- Proven experience as a Big Data Developer or Engineer working with large-scale data platforms.
- Strong expertise with ClickHouse or other columnar databases, including query optimization and schema design.
- Hands-on experience with Splunk data structures, dashboards, and reports.
- Proficiency in data pipeline development using technologies such as Apache Spark, Kafka, or similar frameworks.
- Strong programming skills in Python, Java, or Scala.
- Experience with data migration automation and scripting.
- Familiarity with Grafana for data visualization and monitoring.
- Understanding of observability concepts and monitoring systems.
Would be a plus:
- Experience with Bosun or other alerting platforms.
- Knowledge of cloud-based big data services and infrastructure as code.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience working in agile POD-based teams.
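For illustration only (not part of the posting): a minimal sketch of the kind of Splunk-to-ClickHouse batch load this role describes, using the clickhouse-driver Python client. The host, table name, schema, and the export's field names are assumptions.

```python
# Illustrative sketch: load a JSON-lines Splunk export into ClickHouse in batches.
# Host, table name, schema, and export field names are assumptions.
import json
from datetime import datetime

from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="localhost")

# Columnar, time-ordered target table (MergeTree is ClickHouse's workhorse engine).
client.execute("""
    CREATE TABLE IF NOT EXISTS splunk_events (
        ts      DateTime,
        source  String,
        level   LowCardinality(String),
        message String
    ) ENGINE = MergeTree
    ORDER BY (source, ts)
""")

def transform(raw: dict) -> tuple:
    """Map one exported event onto the target schema (assumes ISO-8601 _time)."""
    return (
        datetime.fromisoformat(raw["_time"]),
        raw.get("source", "unknown"),
        raw.get("level", "INFO"),
        raw.get("_raw", ""),
    )

def load(path: str, batch_size: int = 10_000) -> None:
    """Stream the export file and insert in batches to keep memory bounded."""
    batch = []
    with open(path) as fh:
        for line in fh:
            batch.append(transform(json.loads(line)))
            if len(batch) >= batch_size:
                client.execute("INSERT INTO splunk_events VALUES", batch)
                batch.clear()
    if batch:
        client.execute("INSERT INTO splunk_events VALUES", batch)
```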

Posted 3 days ago

Apply

2.0 - 4.0 years

3 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Source: Foundit

Requirements:
- Proven experience as a Data Analyst, preferably with exposure to observability or monitoring data.
- Strong proficiency in SQL, especially with ClickHouse or similar columnar databases.
- Experience with data visualization tools such as Grafana or equivalent.
- Familiarity with Splunk data structures, dashboards, and reports is a plus.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to work collaboratively in a POD-based agile team environment.
- Good communication skills to present data insights effectively.
Key Responsibilities:
- Analyze and validate data during the migration of user knowledge objects from Splunk to ClickHouse and Grafana.
- Collaborate with engineering teams to ensure data integrity and consistency post-migration.
- Create and maintain comprehensive reports and dashboards to monitor migration progress and outcomes.
- Identify discrepancies or data quality issues and work with technical teams to resolve them.
- Support automation efforts by providing data insights and requirements.
- Translate complex data findings into clear, actionable recommendations for stakeholders.
Team and Work Environment:
- Current team size: [Insert number]
- Team locations: [Insert locations]
- The team is growing to support this critical migration, offering opportunities for professional growth and learning.
Qualifications: Data Analyst with experience in Splunk, ClickHouse, and Grafana.
Nice to Have:
- Experience with alerting systems like Bosun.
- Knowledge of data migration processes and automation tools.
- Basic scripting skills (Python, Bash) for data manipulation.
- Understanding of observability concepts and monitoring frameworks.
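As a hedged illustration of the post-migration validation work mentioned above (table and column names are assumptions, not from the posting), a simple daily row-count reconciliation against counts taken from a Splunk export might look like this:

```python
# Illustrative sketch: reconcile per-day event counts in ClickHouse against
# counts taken from a Splunk export. Table/column names are assumptions.
from datetime import date

from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="localhost")

def clickhouse_daily_counts() -> dict[date, int]:
    rows = client.execute("""
        SELECT toDate(ts) AS day, count() AS n
        FROM splunk_events
        GROUP BY day
        ORDER BY day
    """)
    return {day: n for day, n in rows}

def report_gaps(splunk_counts: dict[date, int]) -> None:
    """Print any day whose migrated count differs from the Splunk-side count."""
    migrated = clickhouse_daily_counts()
    for day, expected in sorted(splunk_counts.items()):
        got = migrated.get(day, 0)
        if got != expected:
            print(f"{day}: ClickHouse has {got} rows, Splunk export had {expected}")
```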

Posted 3 days ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Source: Naukri

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.
Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.
Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.
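For illustration (not from the posting): a minimal sketch of the FastAPI-plus-ClickHouse pattern this role describes, exposing an aggregate query as a REST endpoint. The host, table, and column names are assumptions; such a service would typically be containerized and deployed on Kubernetes, as the posting notes.

```python
# Illustrative sketch: a FastAPI endpoint serving an aggregate from ClickHouse.
# Host, table, and column names are assumptions; run with `uvicorn main:app`.
from clickhouse_driver import Client  # pip install clickhouse-driver fastapi uvicorn
from fastapi import FastAPI, HTTPException

app = FastAPI(title="data-services")
ch = Client(host="localhost")

@app.get("/metrics/daily-views")
def daily_views(days: int = 7):
    """Return page-view counts per day for the last `days` days."""
    if not 1 <= days <= 365:
        raise HTTPException(status_code=400, detail="days must be between 1 and 365")
    rows = ch.execute(
        """
        SELECT toDate(event_time) AS day, count() AS views
        FROM page_views
        WHERE event_time >= now() - INTERVAL %(days)s DAY
        GROUP BY day
        ORDER BY day
        """,
        {"days": days},
    )
    return [{"day": str(day), "views": views} for day, views in rows]
```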

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Kanpur

Work from Office

Source: Naukri

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.
Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.
Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Source: Naukri

Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required candidate profile: data engineering experience with large-scale systems; expert proficiency in Java for data-intensive applications; hands-on experience with lakehouse architectures, stream processing, and event streaming.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

12 - 20 Lacs

Pune

Remote

Source: Naukri

Role & responsibilities:
- Lead the creation, development, and implementation of critical system design changes, enhancements, and software projects.
- Ensure timely execution of project deliverables.
- Work with other engineers to ensure the system and product are consistent and aligned through all processes.
- Improve product quality, performance, and security through substantial process improvements.
- Follow development standards and promote best practices.
- Work as an individual contributor engineer.
Requirements and Qualifications:
- 3+ years of experience in Python programming.
- Experience with Neo4j for graph database management and querying.
- Familiarity with Postgres and ClickHouse for database management and optimization.
- Experience with cloud platforms including AWS, Azure, and GCP.
- Understanding of serverless architecture for building and deploying applications.
- Experience with SaaS (Software as a Service) / product development.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Exceptional problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
Bonus points if you have:
- Experience with AWS ECS and EKS.
- Familiarity with any open-source vulnerability/secret scanning tool.
Benefits and Culture:
- We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly.
- Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture.
- A strong, fun, and positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse, and authentic environment.
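By way of illustration (the URI, credentials, and data model are assumptions, not from the posting), the Python-plus-Neo4j work mentioned above typically looks like this with the official neo4j driver (v5):

```python
# Illustrative sketch: store and query service dependencies in Neo4j with Cypher.
# Connection details and the Service/DEPENDS_ON model are assumptions.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_dependency(tx, service: str, depends_on: str) -> None:
    # MERGE makes the write idempotent: nodes/edges are created only if missing.
    tx.run(
        "MERGE (a:Service {name: $a}) "
        "MERGE (b:Service {name: $b}) "
        "MERGE (a)-[:DEPENDS_ON]->(b)",
        a=service, b=depends_on,
    )

def downstream(tx, service: str) -> list[str]:
    # Variable-length path finds everything the service transitively depends on.
    result = tx.run(
        "MATCH (s:Service {name: $name})-[:DEPENDS_ON*1..]->(d) "
        "RETURN DISTINCT d.name AS name",
        name=service,
    )
    return [record["name"] for record in result]

with driver.session() as session:
    session.execute_write(add_dependency, "api-gateway", "auth-service")
    session.execute_write(add_dependency, "auth-service", "postgres")
    print(session.execute_read(downstream, "api-gateway"))

driver.close()
```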

Posted 2 weeks ago

Apply

10.0 - 17.0 years

40 - 45 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Source: Naukri

Role & responsibilities: We are looking for a Python backend developer for a permanent, remote position with an MNC.
Preferred candidate profile:
- Deep hands-on experience with Python; SQL and NoSQL required.
- Able to dockerize the microservices they build (setting up pods, services, deployments, etc. is not required).
- Proven expertise in microservices architecture, containerization (Docker), and cloud-native app development (any cloud).
- Build and scale RESTful APIs, async jobs, background schedulers, and data pipelines for high-volume systems.
- Strong understanding of API design, rate limiting, secure auth (OAuth2), and best practices.
- Create and optimize NoSQL and SQL data models (MongoDB, DynamoDB, PostgreSQL, ClickHouse).
Soft skills: Clear communication, an ownership mindset, and self-drive.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Indi

Work from Office

Source: Naukri

Preferred candidate profile:
- Proficient in Python programming.
- Experience with Neo4j for graph database management and querying.
- Knowledge of cloud platforms including AWS, Azure, and GCP.
- Familiarity with Postgres and ClickHouse for database management and optimization.
- Understanding of serverless architecture for building and deploying applications.
- Experience with Docker for containerization and deployment.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 16 Lacs

Pune, Bengaluru, Greater Noida

Work from Office

Source: Naukri

About the Role: We are seeking a skilled and security-conscious Backend Engineer to join our growing engineering team. In this role, you will be responsible for designing, developing, and maintaining secure backend systems and services. You'll work with modern technologies across cloud platforms, graph databases, and containerized environments to build scalable and resilient infrastructure.
Key Responsibilities:
- Design and implement backend services and APIs using Python.
- Manage and query graph data using Neo4j.
- Work across cloud platforms (AWS, Azure, GCP) to build and deploy secure, scalable applications.
- Optimize and maintain relational and analytical databases including PostgreSQL and ClickHouse.
- Develop and deploy serverless applications and microservices.
- Containerize applications using Docker and manage deployment pipelines.
- Collaborate with security teams to integrate best practices and tools into the development lifecycle.
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with Neo4j for graph database management and Cypher querying.
- Working knowledge of AWS, Azure, and Google Cloud Platform (GCP).
- Experience with PostgreSQL and ClickHouse for database optimization and management.
- Understanding of serverless architecture and deployment strategies.
- Proficiency with Docker for containerization and deployment.
Nice to Have:
- Experience with AWS ECS and EKS for container orchestration.
- Familiarity with open-source vulnerability/secret scanning tools (e.g., Trivy, Gitleaks).
- Exposure to CI/CD pipelines and DevSecOps practices.
What We Offer:
- Competitive compensation and benefits.
- Flexible work environment.
- Opportunities to work on cutting-edge security and cloud technologies.
- A collaborative and inclusive team culture.

Posted 3 weeks ago

Apply

10.0 - 17.0 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

1. Project description
Migration of user knowledge objects from Splunk to ClickHouse / Grafana / Bosun: The PRE Observability team is undertaking a large-scale observability transformation initiative aimed at migrating from Splunk to a more cost-effective and scalable open-source observability stack based on ClickHouse and Grafana. As part of this effort, we are seeking POD-based teams to support the migration of user knowledge objects such as dashboards, reports, alerts, macros, and lookups.
2. Client description
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, and advanced analytics services. Fusing technical vision with business acumen, we enable positive business outcomes for enterprise companies undergoing business transformation by solving their most pressing technical challenges. A key differentiator for Grid Dynamics is our 7+ years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India. Follow us on LinkedIn.
3. Key Responsibilities
- Lead the design, development, and deployment of AI-based automation tools for data artifact migration.
- Develop machine learning models to intelligently map, transform, and validate data across different big data platforms.
- Build robust data pipelines to handle high-volume, high-velocity data migration.
- Collaborate with data engineers and architects to integrate AI-driven solutions into existing data workflows.
- Implement NLP and pattern recognition algorithms to automate schema conversion and data validation.
- Design custom algorithms for automated data quality checks and anomaly detection.
- Mentor junior engineers and contribute to technical leadership within the AI engineering team.
- Stay updated with the latest advancements in AI, big data technologies, and automation frameworks.
- Create comprehensive technical documentation and best practice guidelines for AI-based data migration.
4. Tech stack
Splunk, ClickHouse, Grafana, Data Migration Automation

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!
You have:
- Bachelor's or master's degree in computer science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL.
- Experience with data persistence technologies like S3, HDFS, and Iceberg.
- Hands-on experience with Python and scripting languages.
It would be nice if you also had:
- Experience with data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools such as Kafka.
- Background in building data products for the telecom domain and understanding of AI and machine learning pipeline integration.
Responsibilities:
- Data Governance: Manage source data within the Metadata Hub and Data Catalog.
- ETL Development: Develop and execute data processing graphs using Express It and the Co-Operating System.
- ETL Optimization: Debug and optimize data processing graphs using the Graphical Development Environment (GDE).
- API Integration: Leverage Ab Initio APIs for metadata and graph artifact management.
- CI/CD Implementation: Implement and maintain CI/CD pipelines for metadata and graph deployments.
- Team Leadership & Mentorship: Mentor team members and foster best practices in Ab Initio development and deployment.

Posted 4 weeks ago

Apply

3.0 - 5.0 years

50 - 60 Lacs

Bengaluru

Work from Office

Source: Naukri

Staff Data Engineer
Experience: 3 - 5 years
Salary: INR 50-60 Lacs per annum
Preferred Notice Period: Within 30 days
Shift: 4:00 PM to 1:00 AM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: ClickHouse, DuckDB, AWS, Python, SQL
Good-to-have skills: DBT, Iceberg, Kestra, Parquet, SQLGlot
Rill Data (one of Uplers' clients) is looking for a Staff Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.
Role Overview
Rill is the world's fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that's easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting.
Job Summary
Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale.
Key Responsibilities:
- Collaborate with enterprise customers to optimize data models for performance and cost efficiency.
- Work with the platform engineering team to enhance and refine our incremental ingestion architectures.
- Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements.
Qualifications (required):
- Fluency in SQL and competency in Python.
- Bachelor's degree in a STEM discipline or equivalent industry experience.
- 3+ years of experience in a data engineering or related role.
- Familiarity with major cloud environments (AWS, Google Cloud, Azure).
Benefits: Competitive salary, health insurance, flexible vacation policy.
How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Our Client: Rill is an operational BI tool that provides fast dashboards that your team will actually use. Data teams build fewer, more flexible dashboards for business users, while business users make faster decisions and perform root-cause analysis with fewer ad hoc requests. Rill's unique architecture combines a last-mile ETL service, an in-memory database, and operational dashboards in a single solution. Our customers are leading media and advertising platforms, including Comcast's Freewheel, tvScientific, AT&T's DishTV, and more.
About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
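As a rough illustration of the "last-mile ETL" style of work described above (the file and column names are assumptions, not from the posting), DuckDB can aggregate a Parquet extract into a metrics table in a few lines:

```python
# Illustrative sketch: aggregate a Parquet extract into a daily metrics table
# with DuckDB. Paths and column names are assumptions.
import duckdb  # pip install duckdb

con = duckdb.connect()  # in-memory database

con.execute("""
    CREATE OR REPLACE TABLE daily_spend AS
    SELECT
        CAST(event_time AS DATE) AS day,
        advertiser_id,
        SUM(spend_usd)           AS spend_usd,
        COUNT(*)                 AS impressions
    FROM read_parquet('events/*.parquet')
    GROUP BY 1, 2
""")

print(con.execute("SELECT * FROM daily_spend ORDER BY day LIMIT 5").fetchall())
```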

Posted 1 month ago

Apply

1.0 - 4.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Naukri logo

Job Area: Information Technology Group, Information Technology Group > IT Data Engineer
General Summary:
We are looking for a savvy Data Engineer to join our analytics team. The candidate will be responsible for expanding and optimizing our data and data pipelines, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate has Python development experience and is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. We believe a candidate with a solid Software Engineering/Development background is a great fit; however, we also recognize that each candidate has a unique blend of skills. The Data Engineer will work with database architects, data analysts, and data scientists on data initiatives and will ensure optimal, consistent data delivery throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams. The right candidate will be excited by the prospect of optimizing data to support our next generation of products and data initiatives.
Responsibilities for Data Engineer:
- Create and maintain optimal data pipelines; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Perform ad hoc analysis and report QA testing.
- Follow Agile/SCRUM development methodologies within analytics projects.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Construct methods to test user acceptance and usage of data.
Skills and Experience:
- Working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience building and optimizing big data pipelines and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Good communication skills, a great team player, and the hunger to learn newer ways of problem solving.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of Unix or shell scripting.
- Knowledge of predictive analytics tools and problem solving using statistical methods is a plus.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Demonstrated understanding of the Software Development Life Cycle.
- Ability to work independently and with a team in a diverse, fast-paced, and collaborative environment.
- Excellent written and verbal communication skills.
- A quick learner with the ability to handle development tasks with minimum or no supervision; ability to multitask.
We are looking for a candidate with 7+ years of experience in a Data Engineering role.
They should also have experience with the following software/tools:
- Python, Java, etc.
- Google Cloud Platform.
- Big data frameworks and tools: Apache Hadoop, Beam, Spark, Kafka.
- Workflow management and scheduling using Airflow, Prefect, or Dagster.
- Databases such as BigQuery and ClickHouse.
- Container orchestration (Kubernetes).
- Optional: one or more BI tools (Tableau, Splunk, or equivalent).
Preferred: Bachelor's degree and 7+ years of Data Engineer / Software Engineer (Data) experience.
Minimum Qualifications:
- 4+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field; OR 6+ years of IT-related work experience without a Bachelor's degree.
- 2+ years of work experience with programming (e.g., Java, Python).
- 1+ year of work experience with SQL or NoSQL databases.
- 1+ year of work experience with data structures and algorithms.
- Bachelor's or Master's or equivalent degree in computer engineering or an equivalent stream.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.) Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all staffing and recruiting agencies: please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
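For illustration only (Airflow 2.4+ assumed; the DAG and task names are made up, not from the posting), the workflow-scheduling exposure mentioned above usually amounts to something like this minimal Airflow DAG:

```python
# Illustrative sketch: a minimal daily extract-and-load DAG. Task bodies are
# placeholders; a real pipeline would call warehouse/ClickHouse loaders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def load():
    print("load transformed data into the warehouse")

with DAG(
    dag_id="daily_metrics_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run load only after extract succeeds
```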

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Responsibilities:
- Design and implement data pipelines and frameworks to provide a better developer experience for our dev teams.
- Help other PODs in IDfy define their data landscape and onboard them onto our platform.
- Keep abreast of the latest trends and technologies in Data Engineering, GenAI, and Natural Language Query.
- Set up logging, monitoring, and alerting mechanisms for better visibility into data pipelines and platform health.
- Automate repetitive data tasks to improve efficiency and free up engineering bandwidth.
- Maintain technical documentation to ensure knowledge sharing and onboarding efficiency.
- Troubleshoot and resolve bottlenecks in data processing, ingestion, and transformation pipelines.
We are the perfect match if you:
- Have experience creating and managing large-scale data ingestion pipelines using the ELT (Extract, Load, Transform) model.
- Take ownership, in your current role, of defining data models, transformation logic, and data flow.
- Are proficient in Logstash, Apache Beam/Dataflow, Apache Airflow, ClickHouse, Grafana, InfluxDB/VictoriaMetrics, and BigQuery.
- Have a strong understanding of and hands-on experience with data warehouses, with at least 3 years of experience in any data warehousing stack.
- Have a keen eye for data and can derive meaningful insights from it.
- Understand product development methodologies; we follow Agile.
- Have experience with time series databases (we use InfluxDB/VictoriaMetrics) and alerting/anomaly detection frameworks (preferred but not mandatory).
- Are familiar with visualization tools such as Metabase, Power BI, or Tableau.
- Have experience developing software in the cloud (GCP/AWS is preferred, but hands-on experience is not mandatory).
- Are passionate about exploring new technologies and enjoy sharing your knowledge through technical blogs.
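As a hedged sketch of the ELT-style ingestion mentioned above (the bucket paths and field names are invented; a production pipeline would target BigQuery or ClickHouse sinks rather than text files), an Apache Beam pipeline has this general shape:

```python
# Illustrative sketch: read JSON lines, parse them, and write a staged output.
# Paths and field names are assumptions; sinks would differ in production.
import json

import apache_beam as beam  # pip install apache-beam

def parse(line: str) -> dict:
    record = json.loads(line)
    return {"id": record["id"], "event": record.get("event", "unknown")}

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.jsonl")
        | "Parse" >> beam.Map(parse)
        | "Serialize" >> beam.Map(json.dumps)
        | "Write" >> beam.io.WriteToText("gs://my-bucket/staged/events")
    )
```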

Posted 1 month ago

Apply

6 - 10 years

7 - 17 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Source: Naukri

Job Scope: Responsible for creating, monitoring, and maintaining various databases such as MySQL, MongoDB, PostgreSQL, etc.
Job Responsibilities:
- Ensure optimal health, integrity, performance, and security of all databases.
- Develop and maintain data categorization and security standards.
- Develop and maintain data movement, archiving, and purging scripts.
- Evaluate and recommend new database technologies and management tools; optimize existing and future technology investments to maximize returns.
- Provide day-to-day support to internal IT support groups, external partners, and customers as required.
- Manage outsourced database administration services to perform basic monitoring and administrative-level tasks as directed.
- Participate in change and problem management activities, root cause analysis, and development of knowledge articles to support the organization's program.
- Provide subject matter expertise to internal and external project teams, application developers, and others as needed.
- Support application testing and production operations; serve as database administrator.
- Document, monitor, test, and adjust backup and recovery procedures to ensure important data is available after a disaster.
- Serve as on-call database administrator on a rotating basis.
- Develop, implement, and maintain MySQL, PostgreSQL, and Mongo instances, including scripts for monitoring and maintenance of individual databases.
- File system management and monitoring.
- Team diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Qualification and Experience:
- B.E./B.Tech/MCA
- 6-10 years of experience in managing enterprise databases
Knowledge and Skills:
- MySQL, PostgreSQL, and knowledge of NoSQL databases like Mongo, Redis, etc.
- ClickHouse DB admin skills are an added advantage.
- Backup and recovery of MySQL, PostgreSQL, and other databases.
- User-level access: risks and threats.
- Synchronous and asynchronous replication, converged systems, partitioning, and storage-as-a-service (cloud technologies).
- Linux operating systems, including shell scripting.
- Windows Server operating system.
- Industry-leading database monitoring tools and platforms.
- Data integration techniques, platforms, and tools.
- Modern database backup technologies and strategies.
Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. https://www.tanla.com

Posted 1 month ago

Apply

3 - 5 years

10 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking a highly skilled DevOps Engineer with deep expertise in database management (MongoDB, Redis, ClickHouse), containerization (Kubernetes, Docker), and cloud security. You will be responsible for designing, implementing, and maintaining scalable infrastructure while ensuring security, reliability, and performance across our cloud environments.
Role & responsibilities:
Database Management:
- Deploy, optimize, and maintain MongoDB, Redis, SQL, and ClickHouse databases for high availability, scalability, and performance.
- Monitor database health, optimize queries, and ensure data integrity and backup strategies.
- Implement replication, sharding, and clustering strategies for distributed database systems.
- Implement database security best practices, including authentication, encryption, and access controls.
- Automate database provisioning and configuration.
Containerization & Orchestration:
- Deploy and manage Docker containers and Kubernetes clusters across multiple environments.
- Automate container orchestration, scaling, and load balancing.
- Implement Helm charts, operators, and custom Kubernetes configurations to enhance deployment efficiency.
Security & Compliance:
- Enforce RBAC, IAM policies, and security best practices across infrastructure.
- Perform vulnerability assessments and manage patching strategies for databases and containers.
Cloud & Infrastructure:
- Work with cloud providers such as AWS and GCP to optimize cloud-based workloads.
- Implement backup and disaster recovery strategies for critical data and infrastructure.
Performance Optimization & Reliability:
- Enhance system performance by fine-tuning Kubernetes clusters, databases, and caching mechanisms.
- Implement disaster recovery, failover strategies, and high-availability architectures.
- Work on incident response, troubleshooting, and RCA (Root Cause Analysis) for production issues.
- Monitor and fine-tune NGINX performance to handle high-traffic workloads efficiently, working with NGINX Ingress controllers in Kubernetes environments.
Required Skills & Experience:
- 4+ years of experience in a DevOps, SRE, or Cloud Engineering role.
- Strong expertise in MongoDB, Redis, and ClickHouse, including replication, clustering, and optimization.
- Strong experience with Docker and Kubernetes, including Helm and operators.
- Proficiency in Linux administration, networking, and system performance tuning.
- Deep understanding of cloud security principles, including encryption, authentication, and compliance.
- Knowledge of GCP and its managed Kubernetes service (GKE).
- Security-first mindset, with experience in RBAC, IAM, and security hardening.
Preferred candidate profile:
- Familiarity with service mesh architecture (Istio) and API gateways.
- Knowledge of Kafka.

Posted 1 month ago

Apply

6 - 10 years

12 - 22 Lacs

Coimbatore

Work from Office

Source: Naukri

Looking for a Database Developer.

Posted 1 month ago

Apply

1 - 4 years

6 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

What You'll Own:
- Full-Stack Systems: Architect and build end-to-end applications using Flask, FastAPI, Node.js, React (or Next.js), and Tailwind.
- AI Integrations: Build and optimize pipelines involving LLMs (OpenAI, Groq, LLaMA), Whisper, TTS, embeddings, RAG, LangChain, LangGraph, and vector DBs like Pinecone/Milvus.
- Cloud Infrastructure: Deploy, monitor, and scale systems on AWS/GCP using EC2, S3, IAM, Lambda, Kafka, and ClickHouse.
- Real-Time Systems: Design asynchronous workflows (Kafka, Celery, WebSockets) for voice-based agents, event tracking, or search indexing.
- System Orchestration: Set up scalable infrastructure with autoscaling groups, Docker, and Kubernetes (PoC-ready, if not full prod).
- Growth-Ready Features: Implement in-app nudges, tracking with Amplitude, A/B testing, and funnel optimization.
Tech Stack You'll Work With:
Backend & Infrastructure:
- Languages/Frameworks: Python (Flask, FastAPI), Node.js
- Databases: PostgreSQL, Redis, ClickHouse
- Infra: Kafka, Docker, Kubernetes, GitHub Actions, Cloudflare
- Cloud: AWS (EC2, S3, RDS), GCP
Frontend:
- React / Next.js, TailwindCSS, Zustand, Shadcn/UI
- WebGL, Three.js for 3D rendering
AI/ML & Computer Vision:
- LangChain, LangGraph, HuggingFace, OpenAI, Groq
- Whisper (ASR), Eleven Labs (TTS)
- Diffusion Models, StyleGAN, Stable Diffusion
- GANs, MediaPipe, ARKit/ARCore
- Computer Vision: face tracking, real-time try-on, pose estimation
- Virtual Try-On: face/body detection, cloth/hairstyle try-ons
APIs:
- Stripe, VAPI, Algolia, OpenAI, Amplitude
Vector DB & Search:
- Pinecone, Milvus (Zilliz), custom vector search pipelines
Other:
- Vibe Coding culture, prompt engineering, system-level optimization
Must-Haves:
- 1+ years of experience building production-grade full-stack systems
- Fluency in Python and JS/TS (Node.js, React); shipping independently without handholding
- Deep understanding of LLM pipelines, embeddings, vector search, and retrieval-augmented generation (RAG)
- Experience with AR frameworks (ARKit, ARCore), 3D rendering (Three.js), and real-time computer vision (MediaPipe)
- Strong grasp of modern AI model architectures: Diffusion Models, GANs, AI agents
- Hands-on with system debugging, performance profiling, and infra cost optimization
- Comfort with ambiguity: fast iteration, shipping prototypes, breaking things to learn faster
Bonus if you've built agentic apps, AI workflows, or virtual try-ons.
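For illustration (library-agnostic; the embedding vectors are assumed to come from a real model such as an OpenAI or HuggingFace encoder, and would normally live in a vector DB like Pinecone or Milvus), the retrieval step of the RAG pipelines mentioned above reduces to ranking chunks by cosine similarity:

```python
# Illustrative sketch of RAG retrieval: rank document chunks against a query
# by cosine similarity of precomputed embedding vectors.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query_vec: np.ndarray,
          chunk_vecs: list[np.ndarray],
          chunks: list[str],
          k: int = 3) -> list[tuple[str, float]]:
    """Return the k chunks most similar to the query, with their scores."""
    sims = [cosine(query_vec, v) for v in chunk_vecs]
    order = np.argsort(sims)[::-1][:k]
    return [(chunks[i], sims[i]) for i in order]
```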

Posted 1 month ago

Apply

3 - 5 years

8 - 18 Lacs

Gurgaon

Work from Office

Source: Naukri

Title: Sr. Business Analyst
Location: Gurgaon, India
Type: Hybrid (work from office)
Job Description
Who We Are: Fareportal is a travel technology company powering a next-generation travel concierge service. Utilizing its innovative technology and company-owned and operated global contact centers, Fareportal has built strong industry partnerships providing customers access to over 600 airlines, a million lodgings, and hundreds of car rental companies around the globe. With a portfolio of consumer travel brands including CheapOair and OneTravel, Fareportal enables consumers to book online, on mobile apps for iOS and Android, by phone, or via live chat. Fareportal provides its airline partners with access to a broad customer base that books high-yielding international travel and add-on ancillaries. Fareportal is one of the leading sellers of airline tickets in the United States. We are a progressive company that leverages technology and expertise to deliver optimal solutions for our suppliers, customers, and partners.
Fareportal Highlights:
- Fareportal is the number 1 privately held online travel company in flight volume.
- Fareportal partners with over 600 airlines, 1 million lodgings, and hundreds of car rental companies worldwide.
- 2019 annual sales exceeded $5 billion.
- Fareportal sees over 150 million unique visitors annually to our desktop and mobile sites.
- Fareportal, with its global workforce of over 2,600 employees, is strategically positioned with 9 offices in 6 countries and headquartered in New York City.
Role Overview
The BI Engineer will support many areas of the business with analysis, visualization, and recommendations by leveraging our diverse data sources and applying them appropriately, interpreting the business's needs and goals.
Responsibilities:
- Create data-driven, high-impact insights independently.
- Ideate, develop, and deploy dashboards and visualizations for key business metrics.
- Perform advanced data profiling, modeling, and business logic analysis.
- Implement alerting tools and systems to quickly identify issues, notify stakeholders, and coordinate to resolve the issues.
- Collaborate with business units to perform requirements analysis, project scoping, data analysis, and business logic transformation.
- Support data warehousing and automation projects, including logic and validation, for use in BI analysis and insights.
- Provide guidance to reporting users to maximize understanding and use of reporting technologies.
- Efficiently manage the backlog and delivery of analytical projects.
Requirements:
- Bachelor's degree in a technical or analytical field, or another field with related work experience
- 2-3 years of work experience in business intelligence or other data analysis roles
- Strong experience querying relational databases such as Microsoft SQL Server, Oracle Database, MySQL, and ClickHouse
- High proficiency with visualization tools such as Power BI
- Proven track record of data-driven insights
- Advanced Excel skills
- Data modeling, validation, data storytelling, and statistical analysis
- Critical thinking and problem solving
Preferred: Experience in travel or e-commerce industries.
Disclaimer: This job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee. Fareportal reserves the right to change the job duties, responsibilities, expectations, or requirements posted here at any time at the Company's sole discretion, with or without notice.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies