
36175 Docker Jobs - Page 18

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

GCP Platform Cloud Engineer (Immediate Joiners Only)
Experience: 4-7 years
Location: Pune/Bangalore
GCP Core Services: IAM, VPC, GCE (Google Compute Engine), GCS (Google Cloud Storage), Cloud SQL, MySQL, CI/CD tooling (Cloud Build / GitHub Actions)
Other Tools: GitHub, Terraform, shell scripting, Ansible

Role Purpose:
Develop, build, implement, and operate 24x7 public cloud infrastructure services, mainly on GCP. Design, plan, and implement a growing set of public cloud platforms and solutions used to provide mission-critical infrastructure services. Continuously analyse, optimise, migrate, and transform the global legacy IT infrastructure environment into cloud-ready and cloud-native solutions, and provide software-related operations support, including level-two and level-three incident and problem management.

Core Competencies, Knowledge, and Experience:
Profound cloud technology, network, security, and platform expertise.
Expertise in GCP services such as VPC, Compute Engine, Cloud Storage, Kubernetes Engine, etc.
Working experience with Cloud Functions.
Expertise in automation and workflow tooling: Terraform, Ansible, and Python scripts.
DevOps tools: Jenkins pipelines, GoCD pipelines, HashiCorp stack (Packer, Terraform, etc.), Docker, Kubernetes.
Work experience with GCP organisation and multi-tenant project setup.
Good documentation and communication skills.
Degree in IT (any discipline); 3 years of experience in cloud computing or 5 years in enterprise IT.
Adept in ITIL, SOX, and security regulations.
Three to five years of work experience in programming and/or systems analysis applying agile frameworks.
Experience with web applications and web hosting.
Experience with DevOps concepts in cloud environments.
Working experience managing highly business-critical environments.
GCP Cloud Engineer / GCP Professional Cloud Architect certification with experience preferred.
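The Terraform work this role describes typically starts with reviewing a plan before applying it. As a rough, hypothetical illustration (the helper and the sample plan below are not from the posting), a short Python script can summarize the machine-readable output of `terraform show -json`:

```python
import json
from collections import Counter

def summarize_plan(plan_json):
    """Count planned Terraform actions (create/update/delete) from the
    JSON form of a plan, skipping no-op entries."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        for action in rc.get("change", {}).get("actions", []):
            if action != "no-op":
                counts[action] += 1
    return counts

# Hypothetical trimmed-down plan: a create, an update, and a replace
# (which Terraform expresses as delete + create).
sample = json.dumps({
    "resource_changes": [
        {"address": "google_compute_instance.web", "change": {"actions": ["create"]}},
        {"address": "google_storage_bucket.logs", "change": {"actions": ["update"]}},
        {"address": "google_sql_database.app", "change": {"actions": ["delete", "create"]}},
    ]
})
print(summarize_plan(sample))  # create counted twice: one is part of a replace
```

A gate like this is sometimes wired into CI to block applies that would delete resources.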

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Backend Developer
Job Type: Full-time
Location: Hybrid - Pune, India

Job Summary:
Join one of our top customers' teams as a Backend Developer and be at the forefront of building next-generation server-side solutions. You will play a pivotal role in developing robust APIs and integrating complex systems, ensuring security, scalability, and world-class performance across our platforms.

Key Responsibilities:
Design, develop, and maintain scalable and secure RESTful APIs using Python.
Manage and optimize MySQL databases for seamless data storage and retrieval.
Integrate external systems and services to enable smooth data exchanges and workflow automation.
Collaborate closely with frontend developers, DevOps engineers, and data teams to ensure unified application architecture and deployment.
Implement API authentication, authorization, and security best practices.
Contribute to CI/CD pipeline enhancements to streamline development and deployment cycles.
Monitor, troubleshoot, and optimize backend performance, ensuring high system reliability and uptime.

Required Skills and Qualifications:
Strong proficiency in Python with hands-on experience in backend development.
Advanced knowledge of MySQL, including database modeling, queries, and performance tuning.
Expertise in designing and building REST APIs following best practices.
Practical experience with Docker for containerization and environment consistency.
Solid grasp of CI/CD tools and workflows for automated testing and deployment.
Excellent written and verbal communication skills, with a focus on clear, collaborative team interaction.
Demonstrated commitment to clean architecture and secure coding practices.

Preferred Qualifications:
Previous experience with multi-environment configuration and system integration projects.
Familiarity with API gateway solutions and advanced security mechanisms.
Exposure to cloud platforms or microservices-based architectures.
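As a loose sketch of the API authentication responsibility above (the token scheme, secret, and helper names are illustrative assumptions, not the employer's design), HMAC-signed expiring tokens are one common stdlib-only pattern:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; in practice load it from an env var or secret manager.
SECRET = b"rotate-me"

def issue_token(user_id, expires_at):
    """Return 'user:expiry:signature' signed with HMAC-SHA256."""
    payload = f"{user_id}:{expires_at}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token, now=None):
    """Check the signature in constant time, then check the expiry."""
    try:
        user_id, expires_at, sig = token.split(":")
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, f"{user_id}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    return (now if now is not None else int(time.time())) < int(expires_at)

token = issue_token("alice", 2_000_000_000)
print(verify_token(token, now=1_000_000_000))  # True
```

`hmac.compare_digest` avoids timing side channels; a rejected signature short-circuits before the expiry field is ever parsed.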

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About AiSensy
AiSensy is a WhatsApp-based marketing and engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.

Enabling 100,000+ businesses with WhatsApp engagement and marketing
400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah and more
High impact, as businesses drive 25-80% of revenues using the AiSensy platform
Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc and 50+ angel investors

Now, we're looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users.

🚀 What You'll Do (Key Responsibilities)
🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime. Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging.

What We're Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker and Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)
➕ AWS certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers
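The "Scripting & Automation" responsibility above, automating repetitive DevOps tasks in Python, often reduces to small building blocks like a retry-with-exponential-backoff health probe. A generic sketch (the function and the flaky probe are hypothetical, not AiSensy tooling):

```python
import time

def check_with_backoff(probe, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a zero-arg health probe with exponential backoff.
    Returns True as soon as the probe succeeds, False if all attempts fail.
    `sleep` is injectable so tests can capture delays instead of waiting."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return False

# Usage with a fake probe that succeeds on its third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return calls["n"] >= 3

delays = []
ok = check_with_backoff(flaky, sleep=delays.append)
print(ok, delays)  # True [0.5, 1.0]
```

Injecting `sleep` keeps the helper testable; production code would pass the real `time.sleep` (the default) and perhaps add jitter.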

Posted 1 day ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Cvent is looking for a Manager, Site Reliability Engineering to help us scale our systems and ensure the stability, reliability, performance, and rapid deployment of our platform. We build teams that are inclusive, collaborative, and have a strong sense of ownership for the things they build. If you have a passion and track record for solving problems, along with strong leadership skills, this is a great fit for you.

As Manager, SRE you will apply both emerging and current technologies, methods, and processes, contributing to the evolution of software deployment processes, enhancing security, reducing risk, and improving the overall end-user experience. As part of the Technology R&D Team, you will play an integral part in advancing DevOps maturity and be part of a new culture of quality and site reliability. You will continually improve our CI/CD tools, processes, and procedures. You will also be responsible for regular reporting to senior technology leaders, providing updates on organizational risk exposure and risk-related issues.

What You Will Be Doing:
Set the direction and strategy for your team, and help shape the overall SRE program for the company
Support growth by ensuring a robust, scalable, cloud-first infrastructure
Own site stability, performance, and capacity planning
Participate early in the SDLC to ensure reliability is built in from the beginning, and create plans for successful implementations/launches
Foster a learning and ownership culture within the team and the larger Cvent organization
Ensure best engineering practices through automation, infrastructure as code, robust system monitoring, alerting, auto scaling, self-healing, etc.
Manage complex technical projects and a team of SREs
Recruit and develop staff; build a culture of excellence in site reliability and automation
Lead by example: roll up your sleeves by debugging and coding; participate in the on-call rotation and occasional travel
Represent the technology perspective and priorities to leadership and other stakeholders by continuously communicating timeline, scope, risks, and the technical road map

What You Need for this Position:
12+ years of hands-on technical leadership and people management experience
3+ years of demonstrable experience leading site reliability and performance in large-scale, high-traffic environments
Strong leadership, communication, and interpersonal skills geared to getting things done
Commitment to developing yourself and the talent within your charge, fostering and creating opportunity for the team
Architect-level understanding of one or more of the major public clouds (AWS, GCP, or Azure), using them to effectively design secure and scalable services
Strong understanding of SRE concepts and the DevOps culture, with a focus on leveraging software engineering tools, methodologies, and concepts
In-depth understanding of automation and CI/CD processes, along with excellent reasoning and problem-solving skills
Experience with Unix/Linux environments and a deep grasp of system internals
Experience with large-scale distributed systems, including multi-tiered architectures
Strong knowledge of modern platforms like Fargate, Docker, Kubernetes, etc.
Experience working with monitoring tools (Datadog, New Relic, ELK stack, etc.) and database technologies (SQL Server, Postgres, and Couchbase preferred)
Demonstrated breadth of understanding and development of solutions based on multiple technologies, including networking, cloud, database, and scripting languages
Experience in prompt engineering, building AI agents, or MCP is a plus
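The SRE concepts this role calls for center on SLOs and error budgets: an availability target implies a fixed allowance of failures per window, and the team tracks how fast that allowance is being burned. A minimal illustration (the numbers and function are hypothetical, not Cvent's):

```python
def error_budget(slo, total_requests, failed_requests):
    """Given an availability SLO (e.g. 0.999) and the traffic seen in a
    window, return (remaining allowed failures, fraction of budget burned)."""
    allowed_failures = total_requests * (1 - slo)
    remaining = allowed_failures - failed_requests
    burn = failed_requests / allowed_failures if allowed_failures else float("inf")
    return remaining, burn

# A 99.9% SLO over 1M requests allows ~1,000 failures; 600 have occurred.
remaining, burn = error_budget(0.999, 1_000_000, 600)
print(round(remaining), round(burn, 3))  # roughly 400 failures left, 60% burned
```

When the burn fraction approaches 1.0, teams typically freeze risky deploys and spend the remaining window on reliability work.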

Posted 1 day ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: DevOps Engineer
Experience: 5–7 Years
Location: Pune

Job Overview:
We are looking for a highly skilled DevOps Engineer with deep expertise in Kubernetes, Helm charts, GitOps, GitHub, and cloud platforms like AWS. The ideal candidate will have a strong background in CI/CD automation, infrastructure as code, and container orchestration, and will be responsible for managing and improving our deployment pipelines and cloud infrastructure.

Key Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions or other automation tools.
• Manage and optimize Kubernetes clusters for high availability and scalability.
• Use Helm charts to define, install, and upgrade complex Kubernetes applications.
• Implement and maintain GitOps workflows (preferably using ArgoCD).
• Ensure infrastructure stability, scalability, and security across AWS.
• Collaborate with development, QA, and infrastructure teams to streamline delivery processes.
• Monitor system performance, troubleshoot issues, and ensure reliable deployments.
• Automate infrastructure provisioning using tools like Terraform, Pulumi, or ARM templates (optional but preferred).
• Maintain clear documentation and enforce best practices in DevOps processes.

Key Skills & Qualifications:
• 7–9 years of hands-on experience in DevOps
• Strong expertise in Kubernetes and managing production-grade clusters.
• Experience with Helm and writing custom Helm charts.
• In-depth knowledge of GitOps-based deployments (preferably using ArgoCD).
• Proficient in using GitHub, including GitHub Actions for CI/CD.
• Solid experience with AWS
• Familiarity with Infrastructure as Code (IaC) tools (preferably Terraform)
• Strong scripting skills (e.g., Bash, Python, or PowerShell)
• Understanding of containerization technologies like Docker.
• Excellent problem-solving and troubleshooting skills.
• Strong communication and collaboration abilities.

Nice to Have:
• Experience with monitoring tools like Prometheus, Grafana, or the ELK stack.
• Knowledge of security practices in DevOps and cloud environments.
• AWS certification is a plus
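GitOps tooling such as ArgoCD works by continuously diffing the desired state stored in Git against the live cluster state and reconciling the difference. A deliberately simplified Python sketch of that diffing idea (dicts stand in for manifests; this is a conceptual toy, not how ArgoCD is implemented):

```python
def diff_state(desired, live):
    """Compare desired (Git) vs live (cluster) key/value state and report
    drift: keys missing from the cluster, unexpected extras, and values
    that differ from what Git declares."""
    return {
        "missing": sorted(set(desired) - set(live)),
        "unexpected": sorted(set(live) - set(desired)),
        "changed": sorted(k for k in desired.keys() & live.keys()
                          if desired[k] != live[k]),
    }

# Hypothetical flattened manifest values for one application.
desired = {"replicas": 3, "image": "api:1.4.2", "tls": True}
live = {"replicas": 2, "image": "api:1.4.2"}
print(diff_state(desired, live))
```

A real reconciler would then apply patches to eliminate each reported drift, which is the "sync" step in GitOps workflows.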

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Purpose of the Role
As part of the Yum Digital Tech Team, you'll work across teams to support the DevOps of the digital products for web and app, as well as the integrations to the many external services. Your focus will be on creating and supporting high-quality, highly scalable infrastructure.

Mandatory Skills
4-6 years of experience in DevOps, with exposure to building CI/CD pipelines.
Hands-on experience with Jenkins, Kubernetes, Docker, and Terraform.
Comprehensive AWS knowledge and experience.
Serverless Framework and AWS serverless knowledge, including Step Functions, Lambda, SNS, DynamoDB, and SQS.
Exposure to Docker and AWS ECS support/deployment and troubleshooting.
Knowledge of data lake configuration and event streaming, including AWS Firehose, Kinesis, etc.
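For the AWS serverless stack mentioned above, a Lambda function triggered by SQS typically consumes a batch of records and reports any failed message IDs so only those are retried. A minimal sketch, assuming a hypothetical `orderId` payload field (the event shape mirrors the SQS batch format, but this is an illustration, not Yum's code):

```python
import json

def handler(event, context=None):
    """Toy Lambda-style handler for an SQS batch: parse each record's JSON
    body; collect IDs of unparseable messages in the partial-batch-failure
    shape so the queue redelivers only those."""
    failures = []
    processed = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            processed.append(body["orderId"])  # hypothetical payload field
        except (KeyError, json.JSONDecodeError):
            failures.append({"itemIdentifier": record.get("messageId")})
    return {"processed": processed, "batchItemFailures": failures}

# One good message and one garbled one.
event = {"Records": [
    {"messageId": "m1", "body": json.dumps({"orderId": "A-100"})},
    {"messageId": "m2", "body": "not json"},
]}
print(handler(event))
```

Returning only the failed IDs (rather than raising) prevents the whole batch from being redelivered when a single message is bad.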

Posted 1 day ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Hiring Alert!
We are looking for a highly skilled Lead Site Reliability Engineer (SRE) for our Product Development team, based out of our Noida location. Only immediate joiners preferred.

Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) to join our team. The ideal candidate will have a deep understanding of both software engineering and systems administration, with a focus on creating scalable and reliable systems. You will work closely with development and operations teams to ensure the reliability, availability, and performance of our services.

Key Responsibilities
Collaborate with engineering teams to design and implement scalable, robust systems.
Ensure the reliability and performance of our services through monitoring, incident response, and capacity planning.
Develop and maintain automation tools for system provisioning, configuration management, and deployment.
Implement and manage monitoring tools to ensure visibility into the health and performance of our systems.
Lead incident response efforts, perform root cause analysis, and implement preventative measures.
Utilize Infrastructure as Code (IaC) practices to manage and provision infrastructure.
Work closely with development and operations teams to ensure smooth deployments and continuous improvement of processes.
Ensure that our systems are secure and comply with industry standards and best practices.
Create and maintain detailed documentation for systems and processes.

Qualifications
Bachelor's degree in computer science, information technology, or a related field, or equivalent experience.
8+ years of experience as a Site Reliability Engineer or in a similar role.
Experience with cloud platforms (e.g., Azure, AWS, and GCP).
Strong background in Linux/Unix administration.
Proficiency in programming languages such as Python, Go, or Ruby.
Experience with configuration management tools (e.g., Ansible, Puppet, Chef).
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack, Loggly).
Understanding of networking concepts and protocols.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications
Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
Familiarity with database management (e.g., MySQL, PostgreSQL, MongoDB).
Experience with distributed systems and microservices architecture.
Certification in relevant technologies (e.g., AWS Certified Solutions Architect).

Experience Required: 8+ years
Competencies: Loggly, PagerDuty, Azure, AWS, Google Cloud, Site Reliability Engineering. We are looking for candidates with strong Azure and AWS cloud experience.

Note: Only candidates who can join immediately, or within a maximum 15 days' notice period, should apply. Interested candidates can share their updated CV with the details below at Abhishekkumar.saini@corrohealth.com
Total Exp:
Current CTC:
Expected CTC:
Notice Period:
Reason for change:
Current Location:

At CorroHealth, we want to assure all job seekers that we do not require any payment or monetary arrangement as a condition for employment. CorroHealth does not authorize any third party, agency, company, or individual to request money or financial contributions in exchange for a job opportunity. If you receive any request for payment or suspect fraudulent activity related to job applications at CorroHealth, please do not respond. Instead, contact us immediately at Compliance@corrohealth.com or report the incident to our Compliance Hotline via www.lighthouse-services.com/CorroHealth.
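The monitoring and alerting work described above usually reduces to tracking an error rate over a recent window and firing when it crosses a threshold. A toy Python version of that idea (the window size and threshold are arbitrary illustrations, not tied to any specific tool):

```python
from collections import deque

class ErrorRateMonitor:
    """Track the last N request outcomes and signal when the error rate
    exceeds a threshold -- the basic shape behind an error-rate alert rule."""

    def __init__(self, window, threshold):
        self.outcomes = deque(maxlen=window)  # True = success, False = error
        self.threshold = threshold

    def record(self, ok):
        """Record one outcome; return True if the alert should fire.
        The window must be full before firing, to avoid noisy early alerts."""
        self.outcomes.append(ok)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        errors = self.outcomes.count(False)
        return errors / len(self.outcomes) > self.threshold

mon = ErrorRateMonitor(window=10, threshold=0.3)
fired = [mon.record(ok) for ok in [True] * 6 + [False] * 4]
print(fired[-1])  # the 10th record brings the error rate to 40% > 30%
```

Real systems (Prometheus alert rules, for instance) express the same logic declaratively over time-series data rather than in application code.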

Posted 1 day ago

Apply

3.0 years

0 Lacs

Surat, Gujarat, India

On-site

Job Title: MERN Stack Developer
Experience: 3+ years (minimum 2+ years in Node.js and 1 year as a Full Stack Developer)
Location: Surat
Employment Type: Full-time

Job Summary:
We are looking for a skilled and passionate MERN Stack Developer to join our development team. The ideal candidate will have a strong background in Node.js (2+ years) and at least 1 year of hands-on experience as a Full Stack Developer using the MERN stack (MongoDB, Express.js, React.js, Node.js). You will be responsible for developing and maintaining scalable web applications, working across the full development lifecycle.

Key Responsibilities:
Develop, test, and maintain high-quality web applications using the MERN stack.
Write clean, scalable, and efficient code in JavaScript/TypeScript.
Develop and integrate RESTful APIs using Node.js and Express.js.
Build responsive and dynamic front-end interfaces using React.js.
Design and manage NoSQL databases using MongoDB.
Collaborate with cross-functional teams including UI/UX designers, product managers, and QA teams.
Optimize applications for performance, scalability, and security.
Participate in code reviews and maintain high coding standards.
Debug and resolve technical issues across the stack.

Required Skills & Qualifications:
Minimum 3+ years of total software development experience.
2+ years of strong experience in back-end development using Node.js.
1+ year of hands-on experience as a Full Stack Developer with MERN.
Strong understanding of RESTful APIs, JSON, and HTTP protocols.
Experience with version control systems like Git.
Familiarity with Agile/Scrum development practices.
Good understanding of web security, performance optimization, and deployment.

Nice to Have:
Experience with cloud platforms (e.g., AWS, Azure).
Familiarity with Docker and CI/CD pipelines.
Knowledge of GraphQL and TypeScript.
Prior experience with testing frameworks like Jest and Mocha.

Why Join Us?
Opportunity to work on innovative projects with a modern tech stack.
Collaborative and growth-focused work environment.
Competitive salary and benefits.
Flexible work culture.

Posted 1 day ago

Apply

1.0 years

1 - 1 Lacs

Pune, Maharashtra, India

On-site

🌟 Backend Engineering Intern
Location: Pune (In-office)
Duration: 12 Months | Type: Full-Time Internship

🚀 Excited to build scalable backend systems for real-world SaaS platforms?
As a Backend Engineering Intern at Bynry Inc., you'll join a focused 20-member team building enterprise-grade platforms that serve thousands of users across industries. You'll contribute to backend systems powering APIs, data infrastructure, and system integrations in a cloud-first, multi-tenant environment. If you're passionate about backend development, solving complex engineering challenges, and working in a fast-paced startup, this internship is your launchpad!

👥 Who Can Apply
We'd love to hear from you if you:
Are available for a full-time, in-office internship
Can start immediately and commit to a 1-year duration
Are based in Pune or willing to relocate
Have a basic understanding of backend development concepts
Are comfortable working with APIs, databases, and backend logic
Are excited to learn about SaaS system design and large-scale architecture
Are curious, motivated, and quality-focused in your approach to engineering

📅 Day-to-Day Responsibilities
Build and maintain RESTful APIs for real-world B2B use cases
Design and model relational and/or NoSQL databases
Work on multi-tenant architectures and data isolation strategies
Optimize backend systems for performance and scalability
Collaborate with frontend, product, and DevOps teams
Integrate third-party APIs and internal microservices
Participate in code reviews, unit testing, and technical discussions
Own end-to-end development of small to medium-sized features

📚 What You'll Learn
Real-world backend engineering in a B2B SaaS environment
Enterprise-scale system design, API development, and data modeling
Development workflows using Git, CI/CD, and deployment practices
How to collaborate with product and DevOps teams for full delivery cycles
Best practices in clean code, documentation, testing, and performance tuning

🎓 Qualifications
Pursuing or completed a degree in Computer Science, IT, or a related field
Familiarity with backend programming concepts in any language
Understanding of databases (SQL or NoSQL) and data structures
Some experience building or consuming REST APIs (projects, internships, etc.)
Exposure to Git, HTTP protocols, and basic debugging
Strong analytical and problem-solving skills
Willingness to learn and thrive in a fast-paced startup environment

🧰 Skills You'll Use or Develop
Technical Skills:
Backend development with frameworks like Express.js, Flask, or Spring
API creation and integration
Database modeling (PostgreSQL, MongoDB, etc.)
Performance optimization
Git and collaboration workflows
Basic cloud and deployment understanding (AWS/GCP, Docker)
Soft Skills:
Problem-solving and debugging
Clear technical communication
Time management and ownership
Agile development and collaboration
Documentation and clean code practices

💰 Compensation & Benefits
Stipend: ₹10,000/month
Learning & development budget
Access to real project codebases and cloud environments
Opportunity for full-time conversion based on performance

💡 Why Bynry?
At Bynry, we're modernizing the utility sector through Smart360, a powerful, cloud-based platform transforming how cities and businesses operate. As a Backend Engineering Intern, you'll work on meaningful challenges, grow alongside experienced engineers, and build systems that deliver impact at scale. Join a team that values ownership, learning, and innovation, and get real experience in solving enterprise-scale engineering problems.
Note: This is a paid internship.
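The multi-tenant "data isolation" responsibility in this listing usually means every query is automatically scoped to one tenant so no tenant can read another's rows. A deliberately simplified Python sketch (in-memory rows stand in for a real database, and all names are hypothetical):

```python
class TenantScopedRepo:
    """Sketch of row-level tenant isolation: reads go through for_tenant(),
    which always filters by tenant_id, so cross-tenant access cannot happen
    by accident in calling code."""

    def __init__(self, rows):
        self._rows = rows  # each row is a dict carrying a "tenant_id" key

    def for_tenant(self, tenant_id):
        """Return only the rows belonging to the given tenant."""
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

# Two tenants sharing one table; queries never see each other's data.
rows = [
    {"tenant_id": "acme", "invoice": 1},
    {"tenant_id": "acme", "invoice": 2},
    {"tenant_id": "globex", "invoice": 7},
]
repo = TenantScopedRepo(rows)
print([r["invoice"] for r in repo.for_tenant("acme")])  # [1, 2]
```

In a real system the same guarantee is enforced closer to the database, e.g. with a mandatory `WHERE tenant_id = ?` in a query builder or PostgreSQL row-level security policies.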

Posted 1 day ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Location: Kolkata
Experience Required: 6 to 8+ years
Employment Type: Full-time
CTC: 8 to 12 LPA

About the Company:
At Gintaa, we're redefining how Indians order food. With our focus on affordability, exclusive restaurant partnerships, and hyperlocal logistics, we aim to scale across India's Tier 1 and Tier 2 cities. We're backed by a mission-driven team and expanding rapidly; now's the time to join the core tech leadership and build something impactful from the ground up.

Job Description:
We are seeking a talented and experienced mid-senior level Software Engineer (Backend) to join our dynamic team. The ideal candidate will have strong expertise in backend technologies, microservices architecture, and cloud environments. You will be responsible for designing, developing, and maintaining high-performance backend systems to support scalable applications.

Responsibilities:
Design, develop, and maintain robust, scalable, and secure backend services and APIs.
Work extensively with Java, Spring Boot, Spring MVC, and Hibernate to build and optimize backend applications.
Develop and manage microservices-based architectures.
Implement and optimize RDBMS (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra, etc.) solutions.
Build and maintain RESTful services for seamless integration with frontend and third-party applications.
A basic understanding of Node.js and Python is a bonus, as is the ability to learn and work with new technologies.
Optimize system performance, security, and scalability.
Deploy and manage applications in cloud environments (AWS, GCP, or Azure).
Collaborate with cross-functional teams including frontend engineers, DevOps, and product teams.
Convert business requirements into technical development items using critical thinking and analysis.
Lead a team and manage activities, including task distribution.
Write clean, maintainable, and efficient code following best practices.
Participate in code reviews and technical discussions, and contribute to architectural decisions.

Required Skills:
6+ years of experience in backend development with Java and the Spring framework (Spring Boot, Spring MVC).
Strong knowledge of Hibernate (ORM) and database design principles.
Hands-on experience with microservices architecture and RESTful API development.
Proficiency in RDBMS (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra, etc.).
Experience with cloud platforms such as AWS, GCP, or Azure.
Experience with Kafka or an equivalent tool for messaging and stream processing.
Basic knowledge of Node.js for backend services and APIs.
Proven track record of working in fast-paced Agile/Scrum environments.
Proficient with Git.
Familiarity with IDEs such as IntelliJ and VS Code.
Strong problem-solving and debugging skills.
Understanding of system security and authentication/authorization best practices.
Excellent communication and collaboration skills.

Preferred Skills (Nice to Have):
Experience with Elasticsearch for search and analytics.
Familiarity with Firebase tools for real-time database, Firestore, authentication, and notifications.
Hands-on experience with Google Cloud Platform (GCP) services.
Hands-on experience working with Node.js and Python.
Exposure to containerization and orchestration tools like Docker and Kubernetes.
Experience with CI/CD pipelines and basic DevOps practices.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Zeotap
Founded in Berlin in 2014, Zeotap started with a mission to provide high-quality data to marketers. As we evolved, we recognized a greater challenge: helping brands create personalized, multi-channel experiences in a world that demands strict data privacy and compliance. This drive led to the launch of Zeotap's Customer Data Platform (CDP) in 2020, a powerful, AI-native SaaS suite built on Google Cloud that empowers brands to unlock and activate customer data securely. Today, Zeotap is trusted by some of the world's most innovative brands, including Virgin Media O2, Amazon, and Audi, to create engaging, data-driven customer experiences that drive better business outcomes across marketing, sales, and service. With a unique background in high-quality data solutions, Zeotap is a leader in the European CDP market, empowering enterprises with a secure, privacy-first solution to harness the full potential of their customer data.

About the Role:
As a Senior Solutions Engineer - Support, you will play a key role in providing technical expertise and driving customer success for our enterprise clients using Zeotap's SaaS platform. In this senior role, you'll not only resolve complex issues and ensure seamless integrations but also take on additional responsibilities like managing escalations, leading client-facing calls, mentoring the support team, and driving process improvements. You will collaborate across teams (Engineering, Product, Sales, and Customer Success) to ensure customers have an exceptional experience and derive maximum value from Zeotap's solutions. This role is ideal for someone with a strong technical background, an ownership mindset, and a passion for delivering excellent customer service, with a proven ability to mentor and drive team efficiency in a high-growth, fast-paced environment.
Responsibilities: Client-Facing Expertise: Act as the primary technical advisor and product expert for enterprise customers, ensuring they receive high-quality support and expert guidance. Lead and manage client-facing calls, providing timely and effective resolutions to complex technical issues. Build and maintain strong relationships with customers to understand their business needs and deliver tailored support, ensuring maximum value from Zeotap’s platform. Actively engage with clients during escalations and ensure customer satisfaction through effective issue resolution. Escalation Management: Own and manage escalated issues from customers, ensuring that issues are resolved promptly within SLAs. Provide detailed context and collaborate closely with internal teams to resolve complex cases, including technical configurations and root cause analysis. Establish best practices for escalation management, ensuring all team members follow a consistent and effective process. Team Mentoring & Leadership: Mentor junior members of the support team, providing guidance on technical problem-solving, customer interactions, and escalation management. Foster a collaborative team environment, encouraging knowledge sharing, continuous improvement, and high performance. Take ownership of process improvements within the support function, developing and implementing new procedures, tools, and training to optimize team effectiveness. Process Building & Continuous Improvement: Lead the development of internal knowledge bases, documentation, and troubleshooting guides to empower customers and internal teams. Drive improvements in internal support processes, ensuring efficient and consistent customer experiences while reducing response and resolution times. Proactively identify trends and root causes of recurring issues and collaborate with engineering and product teams to address them at the source. 
Reporting & Metrics: Take ownership of support metrics, including ticket volume, resolution times, customer satisfaction (CSAT), and technical issue trends. Prepare and deliver regular performance reports to leadership, providing actionable insights and recommendations for improvements. Collaboration with Internal Teams: Work closely with Engineering, Product, Sales, and Customer Success teams to align technical solutions with customer needs. Collaborate on cross-functional projects and initiatives to continuously enhance the customer experience. Adhere to Security and Compliance Standards: Follow Zeotap’s security and privacy policies, ensuring that customer data is handled in compliance with internal guidelines and industry standards. Requirements: 4+ years of experience in a technical support, solutions engineering, or customer success engineering role within a SaaS or enterprise software environment. Proven experience managing escalations and providing high-quality support for large-scale enterprise customers. Demonstrated success in mentoring and leading teams, fostering collaboration, and driving process improvements. SaaS & Cloud Application Support: Expertise with SaaS applications and cloud-based infrastructure (particularly Google Cloud Platform, but any cloud experience is valuable). API & Integrations: Deep experience with RESTful APIs, troubleshooting integrations, and providing solutions for complex customer issues. SQL & Querying: Strong knowledge of SQL and ability to write and optimize queries for troubleshooting data-related issues. Scripting & Automation: Experience with scripting (Python, Bash, Javascript, or Java) to automate workflows and resolve technical issues. Monitoring & Troubleshooting: Familiarity with cloud monitoring tools (e.g., Stackdriver, BigQuery, Datadog, Kibana, Grafana, Splunk). 
- Exceptional verbal and written communication skills; able to effectively communicate complex technical concepts to both technical and non-technical stakeholders.
- Strong relationship-building skills, with the ability to manage and maintain customer relationships while aligning internal teams to solve customer challenges.
- Excellent analytical and troubleshooting skills, with the ability to quickly identify root causes and resolve complex technical issues.
- Ability to manage multiple priorities simultaneously while maintaining a high standard of customer satisfaction.
- A passion for delivering customer satisfaction and high-quality support.
- Proven ability to handle customer issues under pressure, maintaining a positive customer experience in challenging situations.
- Strong sense of ownership, accountability, and initiative; willingness to take responsibility for the success of the support function and customer experience.
- Willingness to work across different time zones to support customers, particularly during EU working hours.

Nice-to-Have:
- Cloud Certifications: Certifications such as Google Cloud Professional Cloud Architect or similar are beneficial.
- Technical Knowledge: Experience with Kubernetes, Docker, or Terraform for managing cloud-based infrastructure.
- Industry Knowledge: Familiarity with Ad-tech, Mar-tech, or similar industries, especially in areas related to privacy, data security, and cloud-based analytics.

Measures of Success:
- Customer Satisfaction (CSAT): High CSAT scores based on customer feedback, demonstrating your ability to solve problems effectively.
- Escalation Management: High success rate in managing and resolving escalated issues within SLAs.
- Team Growth & Development: Successful mentoring of junior team members and leadership in process improvements.
- SLA Adherence: Consistent adherence to SLA targets for response and resolution times.
- Team Performance: High team performance based on metrics such as ticket resolution time, first-call resolution rate, and customer satisfaction.
- Knowledge Base Contribution: Regular contributions to the internal knowledge base, improving team efficiency and customer self-service.
- Proactive Monitoring Initiatives: Active involvement in identifying and addressing potential issues before they escalate; demonstrated success in setting up and managing proactive monitoring systems to prevent disruptions and optimize system performance.

What do we offer:
- Competitive compensation and attractive perks
- Health insurance coverage
- Flexible working support, guidance, and training provided by a highly experienced team
- Fast-paced work environment
- Work with very driven entrepreneurs and a network of global senior investors across telco, data, advertising, and technology

Zeotap welcomes all – we are an equal employment opportunity & affirmative action employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.

Interested in joining us? We look forward to hearing from you!

Posted 1 day ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Peelamedu, Coimbatore, Tamil Nadu

On-site

About the Role
We are seeking a Full Stack Developer who is passionate about building intelligent, scalable web applications. The ideal candidate thrives in a fast-paced environment, has a strong grasp of both frontend and backend frameworks, and brings innovative thinking with AI integration capabilities.

Responsibilities
- Design, develop, and maintain scalable web applications using Next.js, Vue.js/Nuxt, React, and NestJS
- Build RESTful and GraphQL APIs using NestJS
- Design and optimize MySQL databases for performance and scalability
- Integrate and experiment with AI models to enhance product capabilities
- Collaborate with cross-functional teams in an agile environment
- Apply logical reasoning to solve complex problems and create efficient workflows
- Write clean, maintainable, and testable code
- Participate in code reviews, daily stand-ups, and sprint planning

Required Skills
- Proficiency in JavaScript/TypeScript and modern frameworks (Next.js, React, Vue.js, Nuxt)
- Strong backend experience with NestJS, Node.js, and API design
- Experience with MySQL, including schema design and optimization
- Familiarity with AI/ML technologies (e.g., OpenAI, TensorFlow, LangChain, or similar APIs/libraries)
- Solid understanding of logical reasoning, data structures, and algorithms
- Strong debugging, troubleshooting, and performance tuning skills
- Ability to thrive in team-based environments and contribute to collaborative success
- 2+ years of experience required

Nice to Have
- Experience deploying apps on GCP, AWS, or Vercel
- Familiarity with Docker, CI/CD, and version control (Git)
- Basic understanding of UI/UX design principles
- Experience working in agile/scrum settings

Why Join Us?
- Supportive, friendly work culture that values collaboration and creativity
- Negotiable salary based on skills and experience
- Opportunity to work with AI-driven products in an innovative environment
- Ongoing learning and career growth opportunities

How to Apply
Send your resume, portfolio (if available), and a brief note about your recent project to careers@nanonino.com. Let’s build the future together!

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Benefits: Paid sick time, Provident Fund
Schedule: Day shift, Morning shift
Experience: Full-stack development: 2 years (Preferred)
Language: English (Preferred)
Location: Peelamedu, Coimbatore, Tamil Nadu (Preferred)
Shift availability: Night Shift (Preferred)
Work Location: In person

Posted 1 day ago

Apply

6.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

About Zscaler Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. Our Engineering team built the world’s largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We're looking for an experienced Staff Engineer to join our Data Path team. 
Reporting to the Manager, Software Engineering, you'll be responsible for:
- Processing packets at ingress and egress points at high speed to ensure no noticeable latency for clients
- Collaborating with the operations team to deploy, monitor, patch, and scale systems as needed
- Identifying and resolving hotspots to ensure smooth performance as the user base grows

What We're Looking for (Minimum Qualifications)
- 6+ years of experience in C/C++ programming in a distributed and enterprise-scale environment
- Experience with networking protocols and traffic management technologies (e.g. TCP/IP, HTTP/HTTPS, SSL/TLS, QUIC)
- Demonstrated ability to troubleshoot network performance issues (latency/packet loss), including analysis using traffic tools
- Strong knowledge of at least one cloud platform, such as AWS, Azure, or GCP
- Familiarity with Kubernetes components and containerization technologies such as Docker or Podman

What Will Make You Stand Out (Preferred Qualifications)
- Familiarity with HashiCorp Packer for building and customizing VM images for AWS, Azure, GCP, or VMware ESX
- A record of staying current with industry trends and proposing enhancements to improve system reliability, scalability, and network efficiency
- Master's or Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or Software Engineering

#LI-Hybrid #LI-MS6

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. Our Benefits program is one of the most important ways we support our employees.
Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

By applying for this role, you agree to adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.

Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency
Zscaler complies with all applicable federal, state, and local pay transparency rules.

Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Job Title: Solution Architect – Enterprise Applications with AI/ML Exposure
Experience: 8–10 Years
Location: Thiruvananthapuram
Employment Type: Full-time (WFO)
Salary Offered: Max 22 LPA

Job Summary
We’re looking for a talented Solution Architect with a strong foundation in designing and developing large-scale enterprise applications, and a growing interest or experience in modern AI/ML-driven technologies. This role is ideal for someone who is confident in architecture, passionate about emerging trends like AI/ML, and eager to help shape intelligent systems in collaboration with engineering and business teams.

Key Responsibilities
- Design scalable, secure, and maintainable enterprise application architectures.
- Translate business needs into clear technical solutions and design patterns.
- Lead design discussions, code reviews, and solution planning with internal teams.
- Guide development teams by providing architectural direction and mentoring.
- Collaborate with DevOps for smooth deployment and CI/CD implementation.
- Participate in client meetings and technical solutioning discussions.
- Explore and propose the use of AI/ML capabilities where relevant, especially in areas like intelligent search, automation, and data insights.

Must-Have Skills & Qualifications
- 8–10 years of experience in software development and solution architecture.
- Hands-on expertise in either Python or C#/.NET.
- Deep understanding of software architecture patterns: microservices, event-driven, layered designs.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Solid knowledge of databases (SQL & NoSQL), APIs, and integration techniques.
- Exposure to or strong interest in AI/ML technologies, especially those involving intelligent automation or data-driven systems.
- Good interpersonal and communication skills; experience interfacing with clients.
- Capability to lead technical teams and ensure delivery quality.

Preferred Skills
- Awareness of LLMs, vector databases (e.g., Pinecone, FAISS), or RAG-based systems is a plus.
- Familiarity with Docker, Kubernetes, or DevOps workflows.
- Knowledge of MLOps or experience working alongside data science teams.
- Certifications in cloud architecture or AI/ML are a bonus.

Posted 1 day ago

Apply

9.0 years

0 Lacs

India

On-site

Job Summary:
We are looking for a detail-oriented and skilled Automation Test Engineer to join our QA team. The ideal candidate will design, develop, and execute automated tests to ensure the quality and reliability of software products. This role involves working closely with developers, product managers, and other stakeholders to create and maintain robust test frameworks and improve overall software performance and stability.

Key Responsibilities:
- Design, develop, and execute automation scripts using testing tools (e.g., Selenium, Cypress, Playwright).
- Collaborate with cross-functional teams to understand business requirements and translate them into test cases.
- Maintain and improve existing automation test frameworks.
- Conduct performance, regression, and functional testing.
- Identify, record, and track bugs through to resolution.
- Generate test reports and metrics for stakeholders.
- Continuously explore new tools and technologies to enhance test automation.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 9+ years of experience in software QA with a focus on automation testing.
- Proficiency in at least one programming/scripting language (e.g., Java, JavaScript, C#).
- Strong knowledge of test automation frameworks (e.g., Selenium WebDriver, TestNG, JUnit, Cypress).
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI/CD).
- Experience with version control systems (e.g., Git).
- Understanding of SDLC, STLC, and Agile methodologies.

Preferred Qualifications:
- Experience with API testing tools like Postman, REST Assured, or SoapUI.
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to cloud platforms (e.g., AWS, Azure, Google Cloud).
- ISTQB certification or equivalent is a plus.

Soft Skills:
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Ability to work independently and manage multiple priorities.
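As a concrete illustration of the data-driven test automation this role centers on, here is a minimal sketch using Python's standard `unittest` library. The `normalize_email` function is hypothetical, standing in for real application code under test; a production suite would target the actual product via Selenium, Cypress, or an API client.

```python
import unittest

def normalize_email(raw: str) -> str:
    """Toy system-under-test: trim whitespace and lower-case an email address."""
    return raw.strip().lower()

class TestNormalizeEmail(unittest.TestCase):
    """Data-driven regression suite: each case is (raw input, expected output)."""
    CASES = [
        ("  User@Example.COM ", "user@example.com"),
        ("admin@test.org", "admin@test.org"),
        ("\tQA@Site.Net\n", "qa@site.net"),
    ]

    def test_normalization(self):
        for raw, expected in self.CASES:
            # subTest reports each failing case individually instead of
            # aborting the whole test method at the first failure.
            with self.subTest(raw=raw):
                self.assertEqual(normalize_email(raw), expected)

# Run the suite programmatically, as a CI pipeline step would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeEmail)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Keeping test data in a table like `CASES` makes adding a regression case a one-line change, which is the main maintenance property interviewers tend to probe for in automation frameworks.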

Posted 1 day ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a highly capable Data Platform Engineer to build and maintain a secure, scalable, and air-gapped-compatible data pipeline that supports multi-tenant ingestion, transformation, warehousing, and dashboarding. You’ll work across the stack: from ingesting diverse data sources (files, APIs, DBs), transforming them via SQL or Python tools, storing them in an OLAP-optimized warehouse, and surfacing insights through customizable BI dashboards.

Key Responsibilities:

1. Data Ingestion (ETL Engine):
- Design and maintain data pipelines to ingest from file systems (CSV, Excel, PDF, binary formats), databases using JDBC connectors (PostgreSQL, MySQL, etc.), and APIs (REST, XML, GraphQL endpoints)
- Implement and optimize Airflow for scheduling and orchestration, Apache NiFi for drag-and-drop pipeline development, and Kafka / Redis Streams for real-time or event-based ingestion
- Develop custom Python connectors for air-gapped environments
- Handle binary data using PyPDF2, protobuf, OpenCV, Tesseract, etc.
- Ensure secure storage of raw data in MinIO, GlusterFS, or other vaults

2. Transformation Layer:
- Implement SQL/code-based transformation using dbt-core for modular SQL pipelines, Dask or Pandas for mid-size data processing, and Apache Spark for large-scale, distributed ETL
- Integrate Great Expectations or other frameworks for data quality validation (optional in on-prem)
- Optimize data pipelines for latency, memory, and parallelism

3. Data Warehouse (On-Prem):
- Deploy and manage on-prem OLAP/RDBMS options, including ClickHouse for real-time analytics, Apache Druid for event-driven dashboards, and PostgreSQL, Greenplum, and DuckDB for varied OLAP/OLTP use cases
- Architect multi-schema / multi-tenant isolation strategies
- Maintain warehouse performance and data consistency across layers

4. BI Dashboards:
- Develop and configure per-tenant dashboards using Metabase (preferred for RBAC + multi-tenant), Apache Superset or Redash for custom exploration, and Grafana for technical metrics
- Embed dashboards into customer portals
- Configure PDF/email-based scheduled reporting
- Work with stakeholders to define marketing, operations, and executive KPIs

Required Skills & Qualifications:
- 5+ years of hands-on experience with ETL tools, data transformation, and BI platforms
- Advanced Python skills for custom ingestion and transformation logic
- Strong understanding of SQL, data modeling, and query optimization
- Experience with Apache NiFi, Airflow, Kafka, or Redis Streams
- Familiarity with at least two of: ClickHouse, Druid, PostgreSQL, Greenplum, DuckDB
- Experience building multi-tenant data platforms
- Comfort working in air-gapped / on-prem environments
- Strong understanding of security, RBAC, and data governance practices

Nice-to-Have Skills:
- Experience in regulated industries (BFSI, Telecom, government)
- Knowledge of containerization (Docker/Podman) and orchestration (K8s/OpenShift)
- Exposure to data quality and validation frameworks (e.g., Great Expectations)
- Experience with embedding BI tools in web apps (React, Django, etc.)

What We Offer:
- Opportunity to build a cutting-edge, open-source-first data platform for real-time insights
- Collaborative team environment focused on secure and scalable data systems
- Competitive salary and growth opportunities
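The ingest → transform → load flow this role describes can be sketched end to end with only the Python standard library. The CSV feed, the zero-amount data-quality rule, and the in-memory SQLite target below are illustrative stand-ins for the real file drops, validation frameworks, and ClickHouse/Druid warehouse.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in production this would arrive via a file drop,
# a JDBC pull, or an API connector.
RAW_CSV = """tenant,event,amount
acme,signup,0
acme,purchase,129.5
globex,purchase,88.0
"""

def ingest(raw: str) -> list[dict]:
    """Extract: parse CSV rows into dicts keyed by column name."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and drop zero-amount events (a toy quality rule)."""
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount > 0:
            out.append((r["tenant"], r["event"], amount))
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write into a fact table; SQLite stands in for the OLAP warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (tenant TEXT, event TEXT, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(ingest(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
```

The same three-stage shape is what an Airflow DAG or NiFi flow formalizes: each stage is a pure step with explicit inputs and outputs, so it can be scheduled, retried, and monitored independently.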

Posted 1 day ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role/Job Title: Developer & Senior Developer
Function/Department: Information Technology

Job Purpose:
The Senior Developer works with Technology Delivery Managers, Business Units, Enterprise/Solution Architects, and vendor partners to implement API solutions that solve mission-critical business challenges. The Developer builds and maintains integrations for multiple on-premises and/or cloud systems and must be capable of understanding business requirements, working with end users, and developing and deploying the integrations.

Roles and Responsibilities:
- Minimum of 2+ years of experience in microservices architecture, with the ability to collaborate effectively with team members and build positive working relationships.
- Design, build, and deploy APIs to meet business requirements.
- High level of commitment to business satisfaction and agility.
- Strong work ethic and a passion for the role, with a positive attitude and a willingness to learn.
- Communicate effectively with the tech lead to thoroughly understand the requirements and highlight any blockers immediately.
- Handle programming and software development, including requirement gathering, bug fixing, testing, documenting, and implementation.
- Work in an agile environment to deliver high-quality solutions.
- Understand and implement security, logging, auditing, policy management, and performance monitoring.
- Familiarity with relational databases (e.g., Oracle), non-relational databases (e.g., MongoDB), MSK Kafka, Docker, Kubernetes, and CI/CD technologies (Jenkins, GitHub, Maven).

Education Qualification:
- Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
- Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)

Posted 1 day ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary:
We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates.

Experience: 1 year

Responsibilities:

1. AI-Based Scheduling Algorithm Development
- Develop and refine scheduling models using constraint programming, mixed integer programming (MIP), metaheuristic algorithms (e.g., genetic algorithm, ant colony, simulated annealing), and reinforcement learning or deep Q-learning.
- Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models.
- Create simulation environments to test scheduling models under different scenarios.

2. Data Exploration & Feature Engineering
- Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources.
- Build pipelines for data preprocessing, normalization, and handling missing values.
- Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.

3. Model Validation & Deployment
- Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes.
- Deploy solutions using APIs, dashboards (Streamlit, Dash), or via integration with existing production systems.
- Support ongoing maintenance, updates, and performance tuning of deployed models.

4. Collaboration & Stakeholder Engagement
- Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results.
- Document solution approaches and model assumptions, and provide technical training to stakeholders.

Qualifications:
- Bachelor’s or Master’s degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent.
- Minimum 1 year of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, and optimization techniques or industrial scheduling.
- Proficiency in Python, especially with pandas, numpy, and scikit-learn; ortools, pulp, cvxpy, or other optimization libraries; and matplotlib or plotly for visualization.
- Solid understanding of production planning & control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering).
- Familiarity with version control (Git), Jupyter/VSCode environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills:
- Experience with time-series analysis, sensor data, or anomaly detection; manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data; and simulation tools (SimPy, Arena, FlexSim) or digital twin technologies.
- Exposure to containerization (Docker) and model deployment (FastAPI, Flask).
- Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma.

Soft Skills:
- Strong problem-solving mindset with the ability to balance technical depth and business context.
- Excellent communication and storytelling skills to convey insights to both technical and non-technical stakeholders.
- Eagerness to learn new tools, technologies, and domain knowledge.
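One of the metaheuristics this role names, simulated annealing, can be sketched on a toy single-machine sequencing problem. The jobs, parameters, and weighted-tardiness cost function below are invented for illustration; a production model would encode real shop-floor constraints, typically with ortools or a MIP solver rather than a hand-rolled loop.

```python
import math
import random

# Toy instance: (processing_time, due_date, weight) for each job.
JOBS = [(3, 4, 2), (2, 6, 1), (4, 5, 3), (1, 3, 2)]

def weighted_tardiness(order):
    """Cost of a sequence: sum over jobs of weight * max(0, completion - due_date)."""
    t = cost = 0
    for j in order:
        p, due, w = JOBS[j]
        t += p
        cost += w * max(0, t - due)
    return cost

def anneal(steps=5000, temp=10.0, cooling=0.999, seed=7):
    rng = random.Random(seed)
    order = list(range(len(JOBS)))
    cur = best = weighted_tardiness(order)
    best_order = order[:]
    for _ in range(steps):
        i, k = rng.sample(range(len(order)), 2)      # propose swapping two jobs
        order[i], order[k] = order[k], order[i]
        cand = weighted_tardiness(order)
        # Always accept improvements; accept worse moves with Boltzmann probability,
        # which shrinks as the temperature cools.
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[k] = order[k], order[i]  # undo the rejected swap
        temp *= cooling
    return best_order, best

best_order, best_cost = anneal()
```

The accept-worse-moves step is what lets the search escape local optima that a pure greedy swap heuristic would get stuck in; the cooling schedule gradually turns the walk into hill-climbing.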

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Nium, the Leader in Real-Time Global Payments
Nium, the global leader in real-time, cross-border payments, was founded on the mission to deliver the global payments infrastructure of tomorrow, today. As the global economy takes shape, its payments infrastructure is shaping how banks, fintechs, and businesses everywhere collect, convert, and disburse funds instantly across borders. Its payout network supports 100 currencies and spans 220+ markets, 100 of which are real-time. Funds can be disbursed to accounts, wallets, and cards and collected locally in 35 markets. Nium's growing card issuance business is already available in 34 countries. Nium holds regulatory licenses and authorizations in more than 40 countries, enabling seamless onboarding, rapid integration, and compliance, independent of geography. The company is co-headquartered in San Francisco and Singapore.

About the Team:
The Tech Support team's goal is to offer better customer service and manage anything that happens in a live/production environment. Nium is one of the best at using all the latest tools for support functions. Tools like Kibana, Nagios, and CloudWatch give us greater visibility of the services offered to clients and keep our system available round the clock; our uptime is always greater than 99.95%.

About the Role:
As part of the Tech Support team, you will be responsible for resolving technical issues faced by users, whether related to software, hardware, or network systems. You will troubleshoot problems, offer solutions, and escalate complex cases to specialized teams when necessary. Using ticketing systems, you will manage and prioritize support requests to ensure timely and effective resolutions. This role requires strong problem-solving abilities, excellent communication skills, and a solid understanding of technical systems to help users maintain productivity.
Key Responsibilities:
- Based on customer insights and channel performance data, develop and execute a content roadmap that engages key personas at each point in the customer journey, from top-funnel acquisition to nurture and ongoing customer education, both on Nium offerings as well as the industry
- Build, develop, and manage a high-performing team and culture to achieve breakthrough results; set exceptionally high standards and hold yourself and others accountable
- Generate editorial ideas and concepts
- Work with regional Growth Marketing teams to ensure content development aligns with funnel-building objectives for each target segment
- Measure the impact of our content strategy as well as the performance of individual assets, and proactively refine our resource allocation and prioritization accordingly

Requirements:
- 5–7 years of experience supporting production applications on AWS or other cloud platforms
- Good knowledge of RDBMS (PostgreSQL or MSSQL) and NoSQL databases
- Willingness to work in day/night shifts
- Understanding of troubleshooting and monitoring microservice and serverless architectures
- Working knowledge of containerization technology and various orchestration platforms (e.g., Docker, Kubernetes) for troubleshooting and monitoring purposes
- Experience with build and deploy automation tools (Ansible/Jenkins/Chef)
- Experience in release and change management and in incident and problem management, both from a technology and a process perspective
- Familiarity with server log management tools like ELK and Kibana
- Certification in ITIL, COBIT, or Microsoft Operations Framework would be an added plus
- Experience with scripting languages or shell scripting to automate daily tasks would be an added plus
- Ability to diagnose and troubleshoot technical issues
- Ability to work proactively to identify issues with the help of log monitoring
- Experience with monitoring tools, frameworks, and processes
- Excellent interpersonal skills
- Experience with one or more case-handling tools, such as Freshdesk, Zendesk, or JIRA
- Skilled at triaging and root cause analysis
- Ability to provide step-by-step technical help, both written and verbal

What we offer at Nium:
- We Value Performance: Through competitive salaries, performance bonuses, sales commissions, equity for specific roles, and recognition programs, we ensure that all our employees are well rewarded and incentivized for their hard work.
- We Care for Our Employees: The wellness of Nium’ers is our #1 priority. We offer medical coverage along with a 24/7 employee assistance program and generous vacation programs, including our year-end shutdown. We also provide a flexible hybrid working environment (3 days per week in the office).
- We Upskill Ourselves: We are curious and always want to learn more, with a focus on upskilling ourselves. We provide role-specific training, internal workshops, and a learning stipend.
- We Constantly Innovate: Since our inception, Nium has received constant recognition and awards for how we approach both our business and talent opportunities: 2022 Great Place To Work Certification, 2023 CB Insights Fintech 100 List of Most Promising Fintech Companies, CNBC World’s Top Fintech Companies 2024.
We Celebrate Together: We recognize that work is also about creating great relationships with each other. We celebrate together with company-wide social events, team bonding activities, happy hours, team offsites, and much more! We Thrive with Diversity: Nium is truly a global company, with more than 33 nationalities, based in 18+ countries and more than 10 office locations. As an equal opportunity employer, we are committed to providing a safe and welcoming environment for everyone. For more detailed region specific benefits : https://www.nium.com/careers#careers-perks For more information visit www.nium.com Depending on your location, certain laws may regulate the way Nium manages the data of candidates. By submitting your job application, you are agreeing and acknowledging that you have read and understand our Candidate Privacy Notice located at www.nium.com/privacy/candidate-privacy-notice .
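The proactive log-monitoring and triage ability this role emphasizes can be illustrated with a small stand-alone sketch. The log format and service names below are hypothetical; real input would stream from ELK/Kibana or CloudWatch rather than an inline string.

```python
import re
from collections import Counter

# Hypothetical application log excerpt.
LOG = """\
2024-05-01T10:00:01 payments ERROR timeout calling card-issuer
2024-05-01T10:00:02 payouts INFO batch 118 settled
2024-05-01T10:00:05 payments ERROR timeout calling card-issuer
2024-05-01T10:00:09 fx WARN stale rate for SGD/USD
2024-05-01T10:00:12 payments ERROR connection reset
"""

# One line = timestamp, service, level, free-text message.
LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<service>\S+)\s+(?P<level>\w+)\s+(?P<msg>.*)$")

def error_hotspots(log_text: str) -> Counter:
    """Count ERROR lines per service so on-call can triage the noisiest component first."""
    counts = Counter()
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

hotspots = error_hotspots(LOG)
worst, n = hotspots.most_common(1)[0]
```

This is the same aggregation a Kibana visualization performs; scripting it directly is useful in air-gapped or ad-hoc triage situations where the dashboard is unavailable.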

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: DevOps Engineer Location: Ahmedabad (On-site at office) Working Day: 5.5 Working Days Experience: 3 to 7 years of relevant experience Purpose: We are looking for a highly skilled DevOps professional with 3 to 7 years of experience to work with us. The candidate will bring expertise in GCP Platform, containerization & orchestration, SDLC, operating systems, version control, languages, scripting, CI/CD, infrastructure as code, and databases. Experience in the Azure Platform, in addition to the GCP platform, will be highly valued. Experience:  3-7 years of experience in DevOps.  Proven experience in implementing DevOps best practices and driving automation.  Demonstrated ability to manage and optimize cloud-based infrastructure. Roles and Responsibilities: The DevOps professional will be responsible for:  Implementing and managing the GCP Platform, including Google Kubernetes Engine (GKE), CloudBuild and DevOps practices.  Leading efforts in containerization and orchestration using Docker and Kubernetes.  Optimizing and managing the Software Development Lifecycle (SDLC).  Administering Linux and Windows Server environments proficiently.  Managing version control using Git (BitBucket) and GitOps (preferred).  Automating and configuring tasks using YAML and Python.  Developing and maintaining Bash and PowerShell scripts.  Designing and developing CI/CD pipelines using Jenkins and optionally CloudBuild.  Implementing infrastructure as code through Terraform to optimize resource management.  Managing CloudSQL and MySQL databases for reliable performance. Education Qualification  Bachelor’s degree in Computer Science, Engineering, or a related field.  Master’s degree in a relevant field (preferred). Certifications Preferred  Professional certifications in GCP, Kubernetes, Docker, and DevOps methodologies.  Additional certifications in CI/CD tools and infrastructure as code (preferred). 
Behavioural Skills:
• Strong problem-solving abilities and keen attention to detail.
• Excellent communication and collaboration skills.
• Ability to adapt to a fast-paced and dynamic work environment.
• Strong leadership and team management capabilities.

Technical Skills:
• Proficiency in Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
• Expertise in Docker and Kubernetes for containerization and orchestration.
• Deep understanding of the Software Development Lifecycle (SDLC).
• Proficiency in administering Linux and Windows Server environments.
• Experience with Git (Bitbucket) and GitOps (preferred).
• Proficiency in YAML and Python for automation and configuration.
• Skills in Bash and PowerShell scripting.
• Strong ability to design and manage CI/CD pipelines using Jenkins and, optionally, Cloud Build.
• Experience with Terraform for infrastructure as code.
• Management of Cloud SQL and MySQL databases.

Non-Negotiable Skills:
• GCP platform: familiarity with Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
• Experience with Azure.
• Containerization and orchestration: expertise in Docker and Kubernetes.
• SDLC: deep understanding of the Software Development Lifecycle.
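As an illustration of the CI/CD gating logic a role like this involves, here is a minimal, hypothetical Python sketch that checks a service health endpoint before a pipeline promotes a deployment. The endpoint URL and the `{"status": "ok"}` response shape are assumptions made for the example, not part of the posting or any tool's real API:

```python
import json
import sys
import urllib.request


def is_healthy(payload: bytes) -> bool:
    """Return True when a health-check JSON body reports status 'ok'.

    The {"status": "ok"} response shape is a hypothetical convention
    chosen for this sketch.
    """
    try:
        body = json.loads(payload)
    except json.JSONDecodeError:
        return False
    return body.get("status") == "ok"


def gate_deployment(url: str, timeout: float = 5.0) -> bool:
    """Fetch the health endpoint; a pipeline step would fail on False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and is_healthy(resp.read())
    except OSError:
        return False


if __name__ == "__main__":
    # A Jenkins or Cloud Build step could run this script and use its
    # exit code to decide whether to promote the release.
    sys.exit(0 if gate_deployment("http://localhost:8080/healthz") else 1)
```

In a pipeline definition, this would sit between the deploy-to-staging and promote-to-production steps, so a failing health check halts the rollout.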

Posted 1 day ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Node.js / Full Stack Developer
Experience: 2–3 years
Location: Ahmedabad
Job Type: Full-time
CTC: Better than market standards for the right candidate.

Job Summary: We are seeking a talented and experienced Node.js / full stack developer with 2–3 years of hands-on experience to join our development team. You will be responsible for building high-performance web applications and APIs, collaborating with cross-functional teams, and delivering scalable backend and frontend solutions.

Key Responsibilities:
• Design, develop, test, and maintain scalable web applications using Node.js and modern JavaScript frameworks.
• Develop RESTful APIs and microservices.
• Work on both frontend (React.js / Angular / Vue.js) and backend (Node.js, Express.js) components.
• Integrate third-party APIs, services, and databases (MongoDB, MySQL, PostgreSQL).
• Write clean, reusable, and well-documented code.
• Collaborate with UI/UX designers, product managers, and QA teams.
• Optimize applications for speed, scalability, and security.
• Participate in code reviews, debugging, and deployment processes.

Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or a related field.
• 2–3 years of professional experience in Node.js development.
• Strong understanding of JavaScript / TypeScript.
• Experience with frontend frameworks such as React.js, Angular, or Vue.js.
• Proficiency with databases: MongoDB, MySQL, or PostgreSQL.
• Familiarity with Git, CI/CD pipelines, and Docker is a plus.
• Knowledge of RESTful APIs, JSON, and modern backend architecture.
• Experience with Agile/Scrum methodology.

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

Remote

Job Description: SDE-II – Python Developer

Job Title: SDE-II – Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time

Job Summary: We are looking for an experienced Python developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust, high-performance solutions.

Key Responsibilities:
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Use data-handling libraries such as Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.

Required Qualifications:
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command of at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries such as Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems such as Git.
• Comfortable working in Linux environments.

Must-Have Skills:
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills:
• Experience with generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to machine learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools such as Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Company Overview: We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you'll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.

Benefits and Perks:
• Competitive salary: earn up to ₹6–10 LPA based on skills and experience.
• Generous time off: 18 annual holidays to maintain a healthy work-life balance.
• Continuous learning: access extensive learning opportunities while working on cutting-edge projects.
• Client exposure: gain valuable experience in client-facing roles to enhance your professional growth.
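The asynchronous programming and task-queue skills this posting asks for can be illustrated with a small, self-contained asyncio sketch: a hypothetical in-process worker pool, not Celery itself. All names here (`worker`, `run_jobs`) are made up for the example:

```python
import asyncio


async def worker(queue: asyncio.Queue, results: list) -> None:
    """Pull jobs off the queue until a None sentinel arrives."""
    while True:
        job = await queue.get()
        if job is None:
            queue.task_done()
            return
        # Simulate I/O-bound work (e.g., calling a third-party API).
        await asyncio.sleep(0)
        results.append(job * 2)
        queue.task_done()


async def run_jobs(jobs, n_workers: int = 3) -> list:
    """Fan jobs out to a pool of coroutine workers and gather results."""
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(n_workers)]
    for job in jobs:
        queue.put_nowait(job)
    for _ in workers:
        queue.put_nowait(None)  # one shutdown sentinel per worker
    await queue.join()
    await asyncio.gather(*workers)
    return results
```

The same producer/consumer shape scales out to a real broker-backed queue (Celery, RQ) when the work must survive process restarts; `asyncio.Queue` keeps it in one process.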

Posted 1 day ago

Apply

0 years

0 Lacs

Tamil Nadu, India

On-site

We are looking for a seasoned Senior MLOps Engineer to join our Data Science team. The ideal candidate will have a strong background in Python development, machine learning operations, and cloud technologies. You will be responsible for operationalizing ML/DL models and managing the end-to-end machine learning lifecycle, from model development to deployment and monitoring, while ensuring high-quality, scalable solutions.

Mandatory Skills:

Python Programming:
• Expert in OOP concepts and testing frameworks (e.g., PyTest)
• Strong experience with ML/DL libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Prophet, NumPy, Pandas)

MLOps & DevOps:
• Proven experience executing data science projects with MLOps implementation
• CI/CD pipeline design and implementation
• Docker (mandatory)
• Experience with ML lifecycle tracking tools such as MLflow, Weights & Biases (W&B), or cloud-based ML monitoring tools
• Experience with version control (Git) and infrastructure as code (Terraform or CloudFormation)
• Familiarity with code linting, test coverage, and quality tools such as SonarQube

Cloud & Orchestration:
• Hands-on experience with AWS SageMaker or GCP Vertex AI
• Proficiency with orchestration tools such as Apache Airflow or Astronomer
• Strong understanding of cloud technologies (AWS or GCP)

Software Engineering:
• Experience building backend APIs using Flask, FastAPI, or Django
• Familiarity with distributed systems for model training and inference
• Experience working with feature stores
• Deep understanding of the ML/DL lifecycle, from ideation and experimentation through deployment to model sunsetting
• Understanding of software development best practices, including automated testing and CI/CD integration

Agile Practices:
• Proficient in working within a Scrum/Agile environment using tools like JIRA

Cross-Functional Collaboration:
• Ability to collaborate effectively with product managers, domain experts, and business stakeholders to align ML initiatives with business goals

Preferred Skills:
• Experience building ML solutions for any one of: sales forecasting, marketing mix modelling, or demand forecasting
• Certification in machine learning or a cloud platform (e.g., AWS or GCP)
• Strong communication and documentation skills
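The model-monitoring side of this role (detecting drift and triggering retraining) can be sketched in a few lines. This is a deliberately simple stand-in for the drift metrics that tools like MLflow or W&B surface; the metric choice and threshold are assumptions for illustration, not any tool's real API:

```python
import statistics


def drift_score(baseline, live) -> float:
    """Absolute shift of the live feature mean, in baseline standard deviations.

    A hypothetical, minimal drift metric: production systems typically use
    richer statistics (PSI, KS tests) over many features.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        return 0.0 if statistics.fmean(live) == mu else float("inf")
    return abs(statistics.fmean(live) - mu) / sigma


def should_retrain(baseline, live, threshold: float = 3.0) -> bool:
    """Flag retraining when the live mean drifts past the threshold."""
    return drift_score(baseline, live) > threshold
```

A monitoring job would run this per feature on a schedule and, on `True`, kick off the retraining pipeline in the orchestrator (e.g., an Airflow DAG trigger).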

Posted 1 day ago

Apply

3.0 years

0 Lacs

Uttar Pradesh, India

On-site

Job Description

Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems.

About us: Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last.

About the role: We are currently seeking a Machine Learning (Ops) Engineer to join our Digi team based in Noida.

Key Responsibilities:
• ML Pipeline Development and Automation: Design, build, and maintain end-to-end AI/ML CI/CD pipelines using Azure DevOps, leveraging the Azure AI stack (e.g., Azure ML, AI Foundry …) and Dataiku.
• Model Deployment and Monitoring: Deliver tooling to deploy AI/ML products into production, ensuring they meet performance, reliability, and security standards. Implement and maintain transversal monitoring solutions to track model performance, detect drift, and trigger retraining when necessary.
• Collaboration and Support: Work closely with data scientists, AI/ML engineers, and the platform team to ensure seamless integration of products into production. Provide technical support and troubleshooting for AI/ML pipelines and infrastructure, particularly in Azure and Dataiku environments.
• Operational Excellence: Define and implement MLOps best practices with a strong focus on governance, security, and quality, while monitoring performance metrics and cost-efficiency to ensure continuous improvement and deliver optimized, high-quality deployments for Azure AI services and Dataiku.
• Documentation and Reporting: Maintain comprehensive documentation of AI/ML pipelines and processes, with a focus on Azure AI and Dataiku implementations. Provide regular updates to the AI Platform Lead on system status, risks, and resource needs.

About you:
• Proven track record of experience in MLOps, DevOps, or related roles
• Strong knowledge of machine learning workflows, data analytics, and the Azure cloud
• Hands-on experience with tools and technologies such as Dataiku, Azure ML, Azure AI Services, Docker, Kubernetes, and Terraform
• Proficiency in programming languages such as Python, with experience in ML and automation libraries (e.g., TensorFlow, PyTorch, Azure AI SDK …)
• Expertise in CI/CD pipeline management and automation using Azure DevOps
• Familiarity with monitoring tools and logging frameworks

Catch this opportunity and invest in your skills development, should your profile meet these requirements.

Additional attributes:
• A proactive mindset with a focus on operationalizing AI/ML solutions to drive business value
• Experience with budget oversight and cost optimization in cloud environments
• Knowledge of agile methodologies and the software development lifecycle (SDLC)
• Strong problem-solving skills and attention to detail

Work Experience: 3-5 years of experience in MLOps
Minimum Education: Advanced degree (Master's or PhD preferred) in Computer Science, Data Science, Engineering, or a related field.

What's next?
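The pipeline responsibilities above (build, evaluate, deploy, with quality gates) can be sketched generically. This is a hypothetical, stdlib-only illustration of chaining pipeline stages behind a failure gate, not Azure DevOps or Dataiku API code; every name in it is made up:

```python
from typing import Callable, Dict, List

Stage = Callable[[Dict], Dict]


def run_pipeline(stages: List[Stage], context: Dict) -> Dict:
    """Run stages in order, passing a shared context dict.

    Any stage may set context['failed'] = True to stop the pipeline,
    mirroring how a CI/CD quality gate halts later steps.
    """
    for stage in stages:
        context = stage(context)
        if context.get("failed"):
            break
    return context


# Hypothetical stages for an ML delivery pipeline.
def train(ctx: Dict) -> Dict:
    ctx["model"] = "model-v1"
    return ctx


def evaluate(ctx: Dict) -> Dict:
    ctx["accuracy"] = 0.91  # stand-in for a real validation score
    ctx["failed"] = ctx["accuracy"] < ctx.get("min_accuracy", 0.8)
    return ctx


def deploy(ctx: Dict) -> Dict:
    ctx["deployed"] = ctx["model"]
    return ctx
```

Raising `min_accuracy` above the evaluation score skips the deploy stage entirely, which is the same gating behaviour an Azure DevOps release pipeline enforces with approval and quality checks.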
Once we receive your application, our Talent Acquisition professionals will screen your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application's progress at any time via the candidate profile created during your application. We invite you to get to know more about our company, and follow us on LinkedIn, Instagram, Facebook, X, and YouTube for company updates.

Posted 1 day ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Netskope

Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

The Netskope Risk Insights team delivers complex, distributed, hybrid cloud systems that provide customers with a multidimensional view of the applications, devices, and users on their network. We illuminate unsanctioned and unsupported applications and devices and track user behavior so customers can visualize and create policies to minimize risk. Risk analysis, policy enforcement, compliance, and audit mechanisms are some of the customer use cases we satisfy. The team owns a scalable, cloud-managed on-prem and public cloud platform that hosts customer-facing data plane and log parsing services, extending the Netskope cloud to customer data centers and public cloud environments.

What's In It For You

In the Risk Insights team, you will wear the multiple hats of manager, technical leader, influencer, coach, and mentor. You will lead a growing engineering team focused on building cloud and on-prem services in the SaaS and IaaS space.
Be an owner and influencer in a geographically distributed organization; partner with the Netskope business and interact with external customers. Opportunities to bridge the gap between cloud and on-prem solutions.

What You Will Be Doing:
• Lead, coach, mentor, and inspire a team, fostering a culture of ownership, trust, innovation, and continuous improvement
• Drive the strategic direction and technical leadership of the team
• Collaborate across organizations to ensure creation of a shared vision, roadmap, and timely delivery of products/services
• Be accountable for a high-quality software lifecycle
• Be a hands-on participant in technical discussions, designs, and code development
• Build microservices, abstraction layers, and platforms to make the hybrid and multi-cloud footprint low latency, cost efficient, and customer friendly

Required Skills And Experience:
• 15+ years of demonstrable experience building services/products, with 10+ years in a management role
• Effective verbal and written communication
• Demonstrable experience balancing business and engineering priorities
• Proven history of building customer-focused, high-performing teams
• Comfortable with ambiguity and taking initiative to find and solve problems
• Experience contributing in a geographically distributed environment preferred
• Good understanding of distributed systems, data structures, and algorithms
• Strong background in designing scalable services with monitoring and alerting systems
• Programming in Python, Go, C/C++, etc.
• Experience working with CI/CD environments and automation frameworks
• Experience designing and building RESTful services
• Knowledge of network security, databases, authentication & authorization mechanisms, messaging technologies, HTTP, and TCP
• Familiarity with file systems and data storage technologies (e.g., Ceph, NFS)
• Experience building and debugging software on Linux platforms
• Experience working with Docker, Kubernetes (K8s), and public clouds (AWS, GCP, Azure, etc.)
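The distributed-systems and data-structures background this role calls for lends itself to a worked example. Below is a hypothetical, stdlib-only sketch of consistent hashing, a standard technique for spreading keys across a scalable service's nodes; the class and node names are invented for illustration:

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Map keys to nodes so that adding or removing a node only
    remaps a small fraction of keys -- a common building block in
    scalable distributed services (caches, sharded stores)."""

    def __init__(self, nodes=(), replicas: int = 64):
        self.replicas = replicas          # virtual nodes per physical node
        self._ring = []                   # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def node_for(self, key: str) -> str:
        """Walk clockwise to the first virtual node at or after the key."""
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]
```

The payoff over naive `hash(key) % n` is that growing the cluster moves keys only onto the new node, never between existing ones, so cache hit rates and shard placement stay mostly intact during scale-out.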
Education: B.Tech or equivalent required; Master's or equivalent strongly preferred.

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation, and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.

Posted 1 day ago

Apply