
2 InfluxDB Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

Salary not disclosed

Pune, Maharashtra

On-site

We are seeking a Senior Data Engineer proficient in Azure Databricks, PySpark, and distributed computing to build and enhance scalable ETL pipelines for manufacturing analytics. You will work with industrial data to support both real-time and batch processing needs.

Responsibilities:

- Construct scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
- Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling, to guarantee high-quality input for downstream analytics (see the sketch below).
- Design and manage cloud-based data architectures such as data lakes, lakehouses, and warehouses, following the Medallion Architecture.
- Deploy and optimize data solutions on Azure, AWS, or GCP, with a focus on performance, security, and scalability.
- Develop and optimize ETL/ELT pipelines for structured and unstructured data sourced from IoT, MES, SCADA, LIMS, and ERP systems.
- Automate data workflows using CI/CD and DevOps best practices for security and compliance.
- Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
- Use Docker and Kubernetes for scalable data processing.
- Collaborate with automation teams for effective project delivery.

Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field; IIT graduates specifically required.
- At least 4 years of experience in data engineering with a focus on cloud platforms such as Azure, AWS, or GCP.
- Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, with expertise in relational, time-series, and NoSQL databases.
- Experience with containerization tools such as Docker and Kubernetes.
- Strong analytical and problem-solving skills, familiarity with MLOps and DevOps practices, excellent communication and collaboration abilities, and the flexibility to adapt to a dynamic startup environment.
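To give a flavor of the bronze-to-silver pre-processing described above, here is a minimal PySpark sketch; the Delta table paths, column names, and normalization range are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-silver").getOrCreate()

# Hypothetical bronze layer of raw sensor readings (Medallion Architecture).
bronze = spark.read.format("delta").load("/mnt/bronze/sensor_readings")

silver = (
    bronze
    .dropDuplicates(["device_id", "event_ts"])  # deduplication
    .filter(F.col("reading").isNotNull())       # cleaning: drop null readings
    .withColumn(                                # min-max normalization over an
        "reading_norm",                         # assumed 0-100 sensor range
        (F.col("reading") - F.lit(0.0)) / F.lit(100.0),
    )
)

# Persist the cleaned, normalized silver table for downstream analytics.
silver.write.format("delta").mode("overwrite").save("/mnt/silver/sensor_readings")
```

The same pattern extends to encoding and scaling steps, and the transformations carry over to the real-time side via Structured Streaming (`spark.readStream`).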

Posted 3 days ago

Apply

5.0 - 9.0 years

Salary not disclosed

Punjab

On-site

As Monitoring Platform Integrator, you will design, build, deploy, and configure the new monitoring infrastructure to improve its efficiency and effectiveness.

Responsibilities:

- Collaborate with the tech leads of system migrations to ensure new platforms are properly monitored, establish alerting rules, and define escalation paths.
- Monitor the monitoring system itself and set up redundant escalation paths so that failures are detected.
- Develop and maintain any required code base for solutions and customer-specific configurations.
- Configure the platform as automatically as possible, using technologies such as service discovery, Ansible, and Git to minimize manual configuration.
- Assist tech leads and system owners in setting up Grafana and other dashboarding tools.
- Work closely with NOC teams and system owners to gather monitoring and alerting requirements and ensure smooth system transitions.
- Transition custom monitoring scripts from Nagios to Prometheus or Icinga 2 (see the sketch below) and integrate existing monitoring systems into the new design.

Qualifications:

- A degree or diploma in IT, plus certifications such as Microsoft, Enterprise Linux, Cloud Foundations, AWS Cloud Practitioner, or similar DevOps-centered training.
- Over 5 years of experience in a systems administration role implementing, developing, and maintaining enterprise-level platforms, preferably in the media industry.
- Proficiency in Docker and Kubernetes management; Red Hat, Oracle Linux, and CentOS administration; AWS cloud toolsets; and monitoring technologies such as Prometheus, Grafana, and Nagios.
- Experience with logging technologies such as Kibana, Elasticsearch, and CloudWatch, and with orchestration tools such as Ansible, Terraform, or Puppet.
- Strong skills in Python development, JSON, API integration, and NetBox.
- Knowledge of Go and Alerta.io is advantageous.
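To give a flavor of migrating a Nagios-style check into the Prometheus model, here is a minimal Python exporter sketch using the prometheus_client library; the metric name, port, and check logic are illustrative placeholders:

```python
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric replacing a Nagios check that inspected queue depth.
QUEUE_DEPTH = Gauge("ingest_queue_depth", "Items waiting in the ingest queue")

def read_queue_depth() -> int:
    # Placeholder for the real check, e.g. querying an API or parsing a status file.
    return 0

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for Prometheus to scrape
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)       # roughly one scrape interval
```

Thresholding then moves out of the script and into a Prometheus alerting rule (e.g. `ingest_queue_depth > 100`), which Alertmanager routes along the escalation paths defined above.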

Posted 1 week ago

Apply