
11688 Containerization Jobs - Page 42

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS Hiring for Observability Tools Tech Lead (PAN India)
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
• Designing and Implementing Observability Solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
• Developing and Maintaining Monitoring and Alerting Systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
• Instrumenting Applications and Infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
• Analyzing and Troubleshooting System Performance: investigating performance bottlenecks, identifying root causes, and collaborating with development teams to resolve them.
• Defining and Tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability, and tracking these metrics.
• Improving Incident Response and Post-Mortem Processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
• Collaborating with Development, Operations, and SRE Teams: working closely with other teams to ensure observability practices are integrated throughout the software development lifecycle.
• Educating and Mentoring Teams on Observability Best Practices: promoting a culture of observability within the organization.
• Managing and Optimizing Observability Infrastructure Costs: ensuring the cost-effectiveness of observability tools and platforms.
• Staying Up to Date with Observability Trends and Technologies: continuously learning about new tools, techniques, and best practices.

Key Skills:
• Strong understanding of observability principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
• Proficiency with observability tools and platforms: logging (Elasticsearch, Splunk, Fluentd, Logstash, etc.), metrics (Prometheus, Grafana, InfluxDB, Graphite, etc.), tracing (OpenTelemetry, Datadog APM, etc.), and APM (Datadog, New Relic, AppDynamics, etc.).
• Programming and scripting skills: proficiency in languages such as Python, Go, or Java, and scripting languages such as Bash, for automation and tool integration.
• Experience with cloud platforms: familiarity with AWS, Azure, or GCP and their monitoring and logging services.
• Understanding of distributed systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
• Troubleshooting and problem-solving skills: strong analytical skills to identify and resolve complex issues.
• Communication and collaboration skills: ability to communicate technical concepts to different audiences and work collaboratively with other teams.
• Knowledge of DevOps and SRE practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
• Data analysis and visualization skills: ability to analyze telemetry data and create meaningful dashboards and reports.
• Experience with containerization and orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M
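The posting above asks for hands-on telemetry instrumentation. As a purely illustrative sketch (not part of the listing), here is a minimal structured-logging formatter in Python: emitting one JSON object per log line is the usual prerequisite for shipping logs into tools like Elasticsearch or Splunk, and carrying a trace ID on each record is what lets logs be correlated with traces. The logger name `checkout` and the `trace_id` value are invented for the example.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line, a common shape
    for log shippers feeding Elasticsearch, Splunk, or similar."""
    def format(self, record):
        payload = {
            "ts": round(record.created, 3),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Carry through correlation fields attached via logging's `extra=`.
        for key in ("trace_id", "span_id"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Each call now emits one machine-parseable JSON line.
log.info("order placed", extra={"trace_id": "abc123"})
```

In practice the trace and span IDs would come from a tracing SDK (e.g., OpenTelemetry context propagation) rather than being passed by hand.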

Posted 4 days ago

Apply

4.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview: TekWissen is a global workforce management provider operating in India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From its earliest days as a pioneer of modern transportation, it has sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Onsite

Position Description:
• 4 to 7 years of experience with JMeter performing end-to-end performance testing of software products.
• Strong experience developing JMeter scripts for testing web-based applications (Angular, React, Pega, SAP, etc.).
• Strong skills in the Java and/or Python programming languages.
• Fair understanding of AI, especially prompt engineering using LLMs or APIs, including a good working knowledge of integrating AI features into day-to-day automation tasks.
• DevOps experience, especially pipeline creation (Tekton, Cloud Build, or GitHub Actions) and testing-tool containerization (Docker, Kubernetes, etc.).
• Expertise in production log analysis for workload modelling, including the ability to analyze client- and server-side metrics to validate application performance.
• Deep understanding of Dynatrace, New Relic, AppDynamics, or similar tools to identify performance bottlenecks and provide performance engineering recommendations.
• Ability to contribute to performance engineering of applications, such as software/hardware sizing and network, server, and code optimization.
• Exposure to GCP or a similar cloud platform (Azure or AWS) and the ability to interpret metrics using cloud monitoring tools (OpenShift CaaS, Cloud Run metrics dashboards, etc.).
• Strong problem-solving and analytical skills; able to work independently and self-motivated.
• Excellent written and verbal communication skills in English.

Skills Required: Python, Dynatrace, New Relic, AppDynamics, DevOps, JMeter, web-based applications
Experience Required: 4-7 years
Education Required: Bachelor's degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
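The posting above centers on analyzing client-side response times. As a tool-agnostic sketch (not JMeter itself), this is the percentile summary a load-test report typically produces, computed with only the Python standard library; the sample latencies are invented:

```python
import statistics

def latency_summary(samples_ms):
    """Summarize client-side response times the way a load-test report
    would: mean plus p90/p95/p99 (interpolated percentiles)."""
    qs = statistics.quantiles(samples_ms, n=100)  # 99 cut points
    return {
        "mean": statistics.fmean(samples_ms),
        "p90": qs[89],
        "p95": qs[94],
        "p99": qs[98],
    }

# Hypothetical per-request latencies in milliseconds; the 120 ms outlier
# is what separates the tail percentiles from the mean.
samples = [12, 15, 14, 18, 22, 35, 16, 14, 13, 120]
print(latency_summary(samples))
```

Tail percentiles, not averages, are what service level objectives are usually written against, which is why reports emphasize p95/p99.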

Posted 4 days ago

Apply

8.0 years

0 Lacs

Delhi, India

On-site

Staff Engineer, CASB Inline
Experience: 8 to 12 years
Salary: Competitive
Preferred Notice Period: Within 60 days
Shift: 10:00 AM to 6:00 PM IST
Opportunity Type: Hybrid (Bengaluru)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Java, Python, Golang, Node.js, AWS, Google Cloud, Azure, API design, microservices architecture, REST API, GraphQL

Netskope (one of Uplers' clients) is looking for a Staff Engineer, CASB Inline, who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

About the role
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. The Netskope CASB Inline team is responsible for architecting and designing a modular, scalable front end, along with backend services, to support the CASB Inline management plane. The team also builds and maintains multiple AI/ML pipelines, including RAG and Copilot-style capabilities.

What's in it for you
In this role, you will be part of a high-performing engineering team and contribute to building our cloud-based CASB Inline products. You will be responsible for full-lifecycle software development, including requirements analysis, technical architecture, design, implementation, testing and documentation, the recipe for deployment to production, and post-production ownership. If you are passionate about solving complex problems and developing cloud services that are reliable, performant, and scalable, we would like to speak with you. Netskope was recognized by Gartner as a market leader in the 2023 Security Service Edge (SSE) Magic Quadrant.

What you will be doing
• Design and develop cloud systems and services that handle billions of events.
• Coordinate with other service development teams, product management, and support teams to ensure scalability, supportability, and availability for owned and dependent services.
• Work on customer issues in a timely manner to improve issue resolution time and customer satisfaction.
• Evaluate open-source technologies to find the best fit for our needs, and contribute back to some of them to meet our unique needs and help the community.

Required Skills and Experience
Professional experience:
• Minimum of 8 years of relevant work experience.
• Proven track record of building and supporting high-performance microservices.
Technical expertise:
• Strong expertise in building and scaling microservices architectures.
• Proficiency in backend languages such as Go, Python, Java, or Node.js.
• Experience with API design and system integration.
• Hands-on experience with containerization (Docker) and orchestration (Kubernetes).
• Familiarity with CI/CD pipelines and infrastructure as code.
• Deep knowledge of cloud platforms (AWS, GCP, or Azure).
• Experience with responsive design, accessibility, and performance optimization.
• Expertise in monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK).
Good-to-have technical skills:
• Understanding of modern UI frameworks (e.g., React, Angular, Vue) is a big plus.
• Experience building or integrating AI/ML pipelines.
• Familiarity with concepts such as Retrieval-Augmented Generation (RAG) and LLM-based copilots.
Software development practices:
• Advocate for Test-Driven Development (TDD), with expertise in a wide range of unit testing frameworks.
• Advanced understanding of algorithms and data structures for real-time, in-line data processing.
• Skilled in recommending coding best practices and providing effective, actionable code review feedback.
Soft skills:
• Exceptional verbal and written communication skills, with the ability to engage openly, transparently, and consistently with teams and stakeholders.
• Strong customer focus, with a proactive, hands-on approach to meeting customer needs promptly.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity (easy 3-step process): 1. Click on Apply and register or log in on our portal. 2. Upload your updated resume and complete the screening form. 3. Increase your chances of being shortlisted and meet the client for an interview.

About our client: Netskope, a global SASE leader, helps organisations apply zero-trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimised access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivalled visibility into any cloud, web, and private application activity. Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organisational and network changes, and new regulatory requirements.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering opportunities and progress in their careers. (Note: there are many more opportunities on the portal.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
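The role above emphasizes inline services that must stay reliable under a load of billions of events. One classic primitive in that space is a token-bucket rate limiter; the sketch below is illustrative only (not Netskope's implementation) and takes an injectable clock so it can be tested deterministically:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the kind of primitive an
    inline proxy might place in front of a backend service.
    `rate` is tokens added per second; `capacity` caps the burst size."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)  # start full: allow an initial burst
        self.clock = clock
        self.last = clock()

    def allow(self, cost=1.0):
        now = self.clock()
        # Refill proportionally to elapsed time, never past capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
print(bucket.allow())  # → True
```

Injecting the clock is the design choice worth noting: it keeps the limiter testable without real sleeps, and a distributed deployment would typically move this state into a shared store.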

Posted 4 days ago

Apply

12.0 years

0 Lacs

Delhi, India

Remote

Job Profile: Contractual roles for Java professionals
1) Java Tech Lead (8-12 yrs)
2) Java Developer (4-6 yrs)
Shift: 2 PM to 11 PM
Location: Remote
Duration: 6-month (extendable) contract

JD:
1) Java Tech Lead
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 8 to 12 years of proven experience with Java, Spring Boot, and Azure services, and hands-on experience in cloud-based application development.
• Strong understanding of cloud architecture patterns, microservices, and serverless computing.
• Demonstrated leadership skills with the ability to lead a team effectively.
• Experience designing highly available, scalable, and fault-tolerant systems on Azure.
• Excellent problem-solving and analytical skills.
Good-to-have skills:
• Docker and AKS
• Mocking/unit testing frameworks

2) Java Developer (4-6 yrs)
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4-6 years of hands-on Java development experience with expertise in the Spring Boot framework.
• Proficiency working with databases, SQL, and ORM frameworks (e.g., Hibernate).
• Experience with Azure and containerization (Docker, Kubernetes).
• Excellent problem-solving skills, analytical thinking, and attention to detail.
• Strong communication and collaboration skills, with the ability to work effectively in a team environment.
• Relevant certifications (e.g., Spring Professional, Azure certifications) are a plus.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary: Bank Keeping is seeking a highly skilled and experienced DevOps Engineer to join our dynamic team in Mumbai, Hyderabad, Chennai, or Bengaluru. The ideal candidate will have a minimum of 5 years of hands-on experience in the DevOps field, demonstrating a strong understanding of continuous integration, continuous deployment, and cloud infrastructure management. As a DevOps Engineer at Bank Keeping, you will be responsible for designing, implementing, and maintaining the infrastructure and tools necessary to support our software development and deployment processes. You will work closely with development, QA, and operations teams to ensure seamless integration and delivery of high-quality software products. The role requires proficiency in scripting languages, containerization technologies, and cloud platforms such as AWS, Azure, or Google Cloud. Additionally, you should have experience with configuration management tools like Ansible, Puppet, or Chef, and be well versed in monitoring and logging tools. This is an in-office position, offering you the opportunity to collaborate directly with a talented team of professionals in a fast-paced and innovative environment. If you are passionate about automation, scalability, and reliability, and are looking to take your career to the next level, we encourage you to apply for this exciting opportunity at Bank Keeping.

Responsibilities
• Design, implement, and maintain the infrastructure and tools necessary to support software development and deployment processes.
• Manage and optimize cloud infrastructure on platforms such as AWS, Azure, or Google Cloud.
• Develop and maintain continuous integration and continuous deployment (CI/CD) pipelines.
• Automate infrastructure provisioning, configuration, and deployment using tools like Ansible, Puppet, or Chef.

Requirements
• Minimum of 5 years of hands-on experience in the DevOps field
• Experience in cloud infrastructure management
• Proficiency in scripting languages (e.g., Python, Bash)
• Experience with containerization technologies (e.g., Docker, Kubernetes)
• Familiarity with cloud platforms such as AWS, Azure, or Google Cloud
• Experience with configuration management tools like Ansible, Puppet, or Chef

Qualifications
• Master of Computer Applications
• Bachelor of Engineering
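CI/CD and provisioning work like the above routinely involves retrying flaky operations (a deployment health check, a cloud API call). A minimal, illustrative backoff schedule in Python; production code would usually add random jitter on top of this:

```python
def backoff_delays(base=0.5, factor=2.0, cap=30.0, retries=6):
    """Yield the wait time before each retry: exponential growth,
    capped so repeated failures don't wait unboundedly long."""
    delay = base
    for _ in range(retries):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays()))  # → [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

The cap matters in pipelines: without it, a sixth or seventh retry would sleep for minutes and blow past the job timeout.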

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Konovo is a global healthcare intelligence company on a mission to transform research through technology, enabling faster, better, connected insights. Konovo provides healthcare organizations with access to over 2 million healthcare professionals, the largest network of its kind globally. With a workforce of over 200 employees across five countries (India, Bosnia and Herzegovina, the United Kingdom, Mexico, and the United States), we collaborate to support some of the most prominent names in healthcare. Our customers include over 300 global pharmaceutical companies, medical device manufacturers, research agencies, and consultancy firms.

We are expanding our hybrid Bengaluru team to help our transition from a services-based model toward a scalable product- and platform-driven organization. As a DevOps Engineer you will support the deployment, automation, and maintenance of our software development process and cloud infrastructure on AWS. In this role you will get hands-on experience collaborating with a global, cross-functional team working to improve healthcare outcomes through market research. We are an established but fast-growing business, powered by innovation, data, and technology. Konovo's capabilities are delivered through our cloud-based platform, enabling customers to collect data from healthcare professionals and transform it into actionable insights using cutting-edge AI combined with proven market research tools and techniques. As a DevOps Engineer, you will learn new tools, improve existing systems, and grow your expertise in cloud operations and DevOps practices.

What You'll Do
• Automate infrastructure using Infrastructure as Code tools.
• Support and improve CI/CD pipelines for application deployment.
• Work closely with engineering teams to streamline and automate development workflows.
• Monitor infrastructure performance and help troubleshoot issues.
• Contribute to team documentation, knowledge sharing, and process improvements.

What We're Looking For
• 3+ years of experience in a DevOps or similar technical role.
• Familiarity with AWS or another cloud provider.
• Exposure to CI/CD tools such as GitHub Actions, Jenkins, or GitLab CI.
• Some experience with scripting languages (e.g., Bash, Python) for automation.
• Willingness to learn and adapt in a collaborative team environment.

Nice to Have (Not Required)
• Exposure to Infrastructure as Code (e.g., CDK, CloudFormation).
• Experience with containerization technologies (e.g., Docker, ECS).
• Awareness of cloud security and monitoring concepts.
• Database management and query optimization experience.

Why Konovo?
• Lead high-impact projects that shape the future of healthcare technology.
• Be part of a mission-driven company that is transforming healthcare decision-making.
• Join a fast-growing global team with career advancement opportunities.
• Thrive in a hybrid work environment that values collaboration and flexibility.
• Make a real-world impact by helping healthcare organizations innovate faster.

This is just the beginning of what we can achieve together. Join us at Konovo and help shape the future of healthcare technology! Apply now to be part of our journey.
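To give the CI/CD automation described above a concrete flavor: a release step in a pipeline often bumps a semantic version before tagging. The helper below is a hypothetical sketch, not Konovo tooling:

```python
def bump(version, part):
    """Bump a MAJOR.MINOR.PATCH version string, resetting the
    lower-order parts per Semantic Versioning conventions."""
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump("1.4.2", "minor"))  # → 1.5.0
```

In a real pipeline this would read the current version from a tag or manifest and the `part` argument from the change type (e.g., breaking change vs. bug fix).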

Posted 4 days ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Position: Technical Lead
Location: Ahmedabad | Remote
Experience: 8+ years

Job Summary: We are seeking a seasoned Development Manager with a strong technical background in software development (Python, Java), a solid understanding of DevOps practices, and optional exposure to AI/ML technologies and front-end development. This role demands a hands-on leader who can drive project execution, lead high-performing development teams, and align engineering efforts with business objectives.

Key Responsibilities:
· Lead, mentor, and manage a team of software engineers and developers
· Oversee the full software development lifecycle, including planning, design, development, testing, and deployment
· Participate in architectural decisions and ensure technical solutions meet performance, scalability, and security standards
· Collaborate closely with cross-functional teams including DevOps, QA, Product, and Data Science
· Drive engineering best practices such as code reviews, automated testing, CI/CD, and agile methodologies
· Ensure timely delivery of high-quality features and solutions
· Identify opportunities for process improvement, technical innovation, and team growth
· Report on team performance, risks, and project progress to senior leadership

Required Skills & Experience:
· Strong hands-on experience with Python and/or Java development
· Good understanding of DevOps tools such as Jenkins, Docker, Kubernetes, Git, and CI/CD pipelines
· Experience leading development teams (minimum 3+ years in a managerial role)
· Strong grasp of system design, architecture patterns, and software engineering best practices
· Excellent problem-solving, decision-making, and stakeholder management skills
· Comfortable working in agile/scrum environments

Good to Have:
· Exposure to AI/ML concepts, tools, or platforms
· Experience working with cloud platforms such as AWS, GCP, or Azure
· Familiarity with containerization and infrastructure as code (Terraform, Ansible)

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
- Deploy, scale, and manage large language models (LLMs) in production environments, ensuring optimal resource usage and performance.
- Design, implement, and manage CI/CD pipelines to automate the software delivery process, ensuring fast and reliable deployments.
- Monitor and analyze model performance in real time, addressing issues such as model drift, latency, and accuracy degradation, and initiating model retraining or adjustments when necessary.
- Manage cloud environments (AWS, GCP, Azure, etc.) to provision and scale infrastructure for training, fine-tuning, and inference of large models.
- Collaborate with development teams to integrate CI/CD pipelines into the development workflow, promoting continuous integration and delivery best practices.
- Implement infrastructure as code (IaC) using tools such as Terraform, Ansible, or CloudFormation to automate the provisioning and management of infrastructure.
- Manage and maintain cloud infrastructure on platforms such as AWS, Azure, or Google Cloud, ensuring scalability, security, and reliability.
- Develop and implement monitoring, logging, and alerting solutions to ensure the health and performance of applications and infrastructure.
- Work closely with security teams to integrate security practices into the CI/CD pipelines, ensuring compliance with industry standards and regulations.
- Optimize build and release processes to improve efficiency and reduce deployment times, implementing strategies such as parallel builds and incremental deployments.
- Automate testing within the CI/CD pipelines to ensure high-quality software releases, including unit, integration, and performance tests.
- Manage and monitor version control systems, such as Git, to ensure code integrity and facilitate collaboration among development teams.
- Provide technical support and troubleshooting for CI/CD-related issues, ensuring timely resolution and minimal disruption to development workflows.
- Develop and maintain documentation for CI/CD pipelines, infrastructure configurations, and best practices, ensuring clarity and accessibility for team members.
- Stay updated on the latest trends and advancements in DevOps, CI/CD, and cloud computing, and incorporate new tools and practices into the organization's workflows.
- Lead and participate in code reviews and technical discussions, providing insights and recommendations for continuous improvement.
- Conduct training sessions and workshops for internal teams to promote knowledge sharing and best practices in DevOps and CI/CD.
- Collaborate with IT and development teams to implement and manage containerization solutions using Docker and orchestration platforms such as Kubernetes.
- Implement and manage configuration management solutions to maintain consistency and manage changes across environments.
- Develop and implement disaster recovery and business continuity plans to ensure the resilience and availability of applications and infrastructure.
- Optimize resource utilization and cost management for cloud infrastructure, implementing strategies such as auto-scaling and resource tagging.
- Facilitate communication between development, operations, and business stakeholders to ensure alignment on DevOps goals and practices.
- Participate in the evaluation and selection of DevOps tools and technologies that align with organizational goals and improve software delivery processes.
- Manage and monitor application performance, implementing strategies to optimize performance and resolve bottlenecks.
- Ensure compliance with organizational policies and industry regulations related to software development and deployment.

Required Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Extensive experience in DevOps practices and CI/CD implementation.
- Strong proficiency in CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Proficiency in infrastructure as code (IaC) tools such as Terraform, Ansible, or CloudFormation.
- Strong understanding of containerization and orchestration platforms such as Docker and Kubernetes.
- Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, the ELK Stack, or Datadog.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Strong understanding of version control systems such as Git.
- Excellent problem-solving and analytical skills, with the ability to troubleshoot and resolve technical issues.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
- Certification in DevOps or cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) is preferred.

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Data Migration & Automation Engineer
Total Experience: 5 to 8 years
Location: Bangalore, Pune, Hyderabad, Chennai, Noida, Kolkata, Mumbai, Coimbatore
Interview Mode: Virtual

Job Description:
We are seeking a highly skilled and detail-oriented Data Migration & Automation Engineer with strong experience in MongoDB, MS SQL Server/Azure SQL Database, and hands-on expertise across both SQL and NoSQL environments. The ideal candidate will have a proven track record in data migration, automation of pipelines for data comparison and validation, and working with heterogeneous database systems. Experience with PostgreSQL is desirable as a secondary skill.

Key Responsibilities:
- Design and implement backend solutions to support data-intensive migration workflows
- Lead and execute data migration strategies across diverse systems, including SQL and NoSQL databases
- Develop and maintain automated pipelines for data comparison, validation, and transformation
- Ensure data integrity, consistency, and security during migration and transformation processes
- Collaborate with cross-functional teams to define and refine data management and migration strategies
- Troubleshoot and optimize performance in MongoDB and MS SQL Server/Azure SQL DB environments
- Document standards, processes, and best practices for data operations and migration

Required Skills & Qualifications:
- 5+ years of experience in backend development and database engineering
- Strong hands-on experience with MongoDB (NoSQL) and MS SQL Server / Azure SQL Database
- Solid understanding of data migration, transformation, and automation
- Proficiency in SQL scripting and data comparison techniques
- Experience building automated ETL/ELT pipelines and scripting with Python or Shell
- Exposure to CI/CD pipeline tools for automation workflows
- Understanding of data governance, compliance, and security best practices
- Strong problem-solving abilities and attention to detail
- Excellent communication and collaboration skills

Preferred / Secondary Skills:
- Working knowledge of PostgreSQL (as a secondary RDBMS)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with containerization tools such as Docker and Kubernetes
- Exposure to big data technologies and data lakes is a plus
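The "automated pipelines for data comparison and validation" this role centers on often reduce to fingerprinting records on both sides of a migration and diffing the results. Below is a minimal sketch of that idea; the sample rows, field names, and helper functions are invented for illustration (real inputs would come from SQL Server and MongoDB cursors), not an implementation of any specific employer's pipeline.

```python
import hashlib
import json

def record_fingerprint(record, fields):
    """Stable hash of a record, independent of source-system field ordering."""
    canonical = json.dumps({k: str(record.get(k)) for k in fields}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def compare_datasets(source_rows, target_rows, primary_key, fields):
    """Report ids missing in target, unexpected in target, and value mismatches."""
    src = {r[primary_key]: record_fingerprint(r, fields) for r in source_rows}
    tgt = {r[primary_key]: record_fingerprint(r, fields) for r in target_rows}
    return {
        "missing": sorted(set(src) - set(tgt)),
        "unexpected": sorted(set(tgt) - set(src)),
        "mismatched": sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k]),
    }

# Stand-in rows; in practice these would be fetched from the two databases.
sql_rows = [{"id": 1, "name": "Asha", "city": "Pune"},
            {"id": 2, "name": "Ravi", "city": "Delhi"}]
mongo_rows = [{"id": 1, "name": "Asha", "city": "Pune"},
              {"id": 2, "name": "Ravi", "city": "Mumbai"},
              {"id": 3, "name": "Tara", "city": "Kochi"}]

report = compare_datasets(sql_rows, mongo_rows, "id", ["id", "name", "city"])
print(report)  # {'missing': [], 'unexpected': [3], 'mismatched': [2]}
```

Hashing a canonical serialization rather than comparing raw rows keeps the diff cheap for large tables and tolerant of the field-ordering differences typical between SQL and document stores.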

Posted 4 days ago

Apply

2.0 years

0 Lacs

India

On-site

About Abacus.AI:
Abacus.AI is an AI research and SaaS company that helps enterprises build, deploy, and manage real-time deep learning models in production. We work with some of the most data-driven companies to solve complex AI infrastructure challenges, and we’re growing rapidly.

Role Overview:
Abacus.AI is seeking a Cloud Infra Engineer to join our dynamic team of top engineers and scientists. In this role, you’ll not only help build and operate the cloud infrastructure powering the Abacus.AI SaaS platform, but also work directly with our customers to ensure their success and satisfaction. Our modern technology stack includes Kubernetes, Spark, TensorFlow, Python, Go, MySQL, and Redis. As we continue to grow rapidly, we’re looking for someone who is excited to collaborate with both internal teams and external customers, helping to expand and optimize our platform.

Requirements:
- BS/MS in Computer Science
- 2+ years of hands-on engineering experience, communicating technical concepts to customers/stakeholders
- 2+ years operating and supporting public cloud production environments (AWS, GCP, Azure), troubleshooting customer-facing issues
- Strong Linux/Unix fundamentals, guiding customers through technical challenges
- Experience with modern cloud environments (containerization, IaC, DevOps, CI/CD, automation) for reliable, customer-centric solutions

Preferred:
- Experience operating and supporting Kubernetes clusters, including customer onboarding and troubleshooting
- Familiarity with MLOps tools (Spark, TensorFlow, GPUs) in customer-facing scenarios
- Hands-on experience with Terraform and infrastructure automation, including customer-driven customization and support
- Strong background in network security and database systems, with the ability to advise and support customers on best practices

What You’ll Do:
- Collaborate with customers to understand needs, resolve issues, and ensure a seamless Abacus.AI experience
- Serve as a technical point of contact for customer escalations, offering expert guidance and troubleshooting
- Advocate for customer requirements and drive platform improvements with engineering and product teams
- Help design, build, and maintain scalable, secure, and reliable cloud infrastructure
- Contribute to automation, monitoring, and operational excellence, always prioritizing customer experience

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS Hiring for Observability Tools Tech Lead (PAN India)
Experience: 8 to 12 years only
Job Location: PAN India

Core Responsibilities:
- Designing and Implementing Observability Solutions: selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces).
- Developing and Maintaining Monitoring and Alerting Systems: creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues.
- Instrumenting Applications and Infrastructure: working with development teams to add instrumentation code to applications to generate meaningful telemetry data, often using open standards such as OpenTelemetry.
- Analyzing and Troubleshooting System Performance: investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them.
- Defining and Tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics.
- Improving Incident Response and Post-Mortem Processes: using observability data to understand incidents, identify contributing factors, and implement preventative measures.
- Collaborating with Development, Operations, and SRE Teams: working closely with other teams to ensure observability practices are integrated throughout the software development lifecycle.
- Educating and Mentoring Teams on Observability Best Practices: promoting a culture of observability within the organization.
- Managing and Optimizing Observability Infrastructure Costs: ensuring the cost-effectiveness of observability tools and platforms.
- Staying Up to Date with Observability Trends and Technologies: continuously learning about new tools, techniques, and best practices.

Key Skills:
- Strong Understanding of Observability Principles: deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.
- Proficiency with Observability Tools and Platforms, for example:
  - Logging: Elasticsearch, Splunk, Fluentd, Logstash, etc.
  - Metrics: Prometheus, Grafana, InfluxDB, Graphite, etc.
  - Tracing: OpenTelemetry, Datadog APM, etc.
  - APM (Application Performance Monitoring): Datadog, New Relic, AppDynamics, etc.
- Programming and Scripting Skills: proficiency in languages such as Python, Go, or Java, and scripting languages such as Bash, for automation and tool integration.
- Experience with Cloud Platforms: familiarity with cloud providers such as AWS, Azure, or GCP and their monitoring and logging services.
- Understanding of Distributed Systems: knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.
- Troubleshooting and Problem-Solving Skills: strong analytical skills to identify and resolve complex issues.
- Communication and Collaboration Skills: ability to communicate technical concepts effectively to different audiences and work collaboratively with other teams.
- Knowledge of DevOps and SRE Practices: understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.
- Data Analysis and Visualization Skills: ability to analyze telemetry data and create meaningful dashboards and reports.
- Experience with Containerization and Orchestration: familiarity with Docker, Kubernetes, and related technologies.

Kind regards,
Priyankha M
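The SLO/SLI tracking responsibility above has a simple arithmetic core: an SLI is a ratio of good events to total events, and the error budget is the bad-event allowance implied by the SLO target. A small illustrative sketch follows; all figures and function names are hypothetical, not tied to any particular tool in the listing.

```python
def sli_availability(good, total):
    """Availability SLI: fraction of requests that succeeded in the window."""
    return good / total if total else 1.0

def error_budget_remaining(slo, good, total):
    """Fraction of the window's error budget still unspent under the given SLO."""
    budget = (1 - slo) * total      # bad events the SLO permits in this window
    bad = total - good
    return (budget - bad) / budget if budget else 0.0

# Hypothetical 30-day window: 1M requests, 2,500 failures, 99.5% SLO target.
good, total, slo = 997_500, 1_000_000, 0.995
print(round(sli_availability(good, total), 4))             # 0.9975
print(round(error_budget_remaining(slo, good, total), 2))  # 0.5
```

In practice the counts would come from a metrics backend (e.g. a Prometheus query), and alerting would fire on the burn rate of the remaining budget rather than on raw error counts.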

Posted 4 days ago

Apply

3.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Python Fullstack Developer
Location: Hyderabad (work from office/hybrid, as applicable)
Experience Required: 3 to 11 years
Notice Period: immediate to 15 days preferred
Openings Across Experience Bands: 3-5 years, 6-8 years, 9-11 years

Job Description:
We are seeking skilled and motivated Python Fullstack Developers to join our dynamic team in Hyderabad. The ideal candidate should be proficient in backend development using Python; experience with, or willingness to work on, Ruby is a plus. On the frontend, expertise in Vue.js is preferred, though strong candidates with React.js experience will also be considered. The role involves end-to-end product development responsibilities, integrating multiple technologies across frontend, backend, databases, and cloud.

Key Responsibilities:
- Design, develop, and maintain scalable and robust backend services using Python.
- Contribute to frontend development using Vue.js or React.js, based on project needs.
- Develop RESTful APIs and integrate them with front-end components.
- Collaborate with cross-functional teams, including product managers, designers, and QA engineers, to deliver high-quality features.
- Work on cloud-native solutions, preferably with AWS.
- Implement best practices for security, scalability, performance, and monitoring.
- Write unit and integration tests to ensure code quality.
- Participate in code reviews and maintain documentation of design and implementation.

Technical Skills Required (Mandatory):
- Backend: Python (primary); Ruby (good to have)
- Frontend: Vue.js (primary); React.js (acceptable alternative)
- Database: strong in RDBMS (MySQL, PostgreSQL, etc.); experience with NoSQL (MongoDB, DynamoDB, etc.) is good to have
- Node.js: experience with server-side logic or utility tools
- Cloud: hands-on experience with AWS native services (Lambda, EC2, S3, RDS, etc.) or equivalent services on other cloud platforms (Azure, GCP)

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or related fields
- Strong problem-solving and debugging skills
- Exposure to CI/CD pipelines and version control (Git)
- Understanding of containerization (Docker) and orchestration (Kubernetes) is a plus
- Excellent communication and interpersonal skills

Posted 4 days ago

Apply

0.0 - 2.0 years

2 - 6 Lacs

Mohali, Punjab

On-site

The Role:
As a Software Engineer, you will play a pivotal role in designing, developing, and optimizing BotPenguin’s AI chatbot and agents platform. You’ll collaborate with product managers, senior engineers, and customer success teams to develop robust backend APIs, integrate with frontend applications, and enhance system performance. This role offers exciting opportunities to build impactful AI-driven solutions and shape the future of conversational automation.

What you need for this role:
- Education: Bachelor's degree in Computer Science, IT, or a related field.
- Experience: 1-3 years in software development roles.
- Technical Skills:
  - Strong understanding of MEAN/MERN stack technologies.
  - Experience designing and deploying end-to-end solutions.
  - Hands-on experience in backend API development and UI integration.
  - Familiarity with cloud platforms such as AWS and containerization (Docker, Kubernetes).
  - Understanding of AI/ML concepts in development.
  - Knowledge of version control tools such as GitLab/GitHub and project management tools such as Notion.
- Soft Skills: willingness to build something big; a strong problem-solving mindset, proactive approach, and willingness to learn.

What you will be doing:
- Collaborate with the product team to plan and implement new features.
- Work alongside technical leads and senior developers to define solutions and low-level design.
- Develop backend APIs and integrate them with frontend applications.
- Conduct automated unit and integration testing to ensure high code quality.
- Document technical processes, APIs, and troubleshooting guides.
- Monitor system performance and suggest improvements to optimize efficiency.
- Assist the customer success team in resolving technical challenges and enhancing user experience.

Top reasons to work with us:
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- Flexible work hours and an emphasis on work-life balance.
- A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹200,000.00 - ₹600,000.00 per year
Benefits: health insurance, Provident Fund
Schedule: day shift
Ability to commute/relocate: Mohali, Punjab: reliably commute or plan to relocate before starting work (required)
Experience: relevant: 2 years (required)
Work Location: in person

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences. We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity
At Adobe, we are pioneers in digital experiences, shaping how the world creates, shares, and engages with content. Our Noida team is at the forefront of innovation, developing cutting-edge solutions that empower creators worldwide. We are seeking a passionate Front-End Developer to join our dynamic team. This role involves crafting user-friendly interfaces for next-generation products, ensuring seamless integration across various platforms. You will collaborate with cross-functional teams to deliver high-quality software solutions.

What You'll Do
- Design and Development: design, develop, and maintain high-performance, responsive, accessible web applications using HTML, CSS, and JavaScript frameworks such as React.js.
- Collaboration: work closely with designers, backend developers, and project managers to implement features effectively.
- Testing and Debugging: participate in code reviews and testing to ensure the reliability and quality of our applications and to maintain high code quality standards.
- Innovation: stay updated with industry trends and contribute new ideas to enhance our products and processes.

What You Need To Succeed
- Proficiency in Front-End Technologies: strong expertise in JavaScript, React.js, TypeScript, Redux, and related libraries.
- Full-Stack Knowledge: familiarity with backend technologies including Java, REST APIs, and databases (NoSQL/SQL).
- Cloud and DevOps: experience with cloud platforms such as AWS or Azure, and containerization tools such as Docker and Kubernetes.
- Problem-Solving Skills: a solid understanding of algorithms and data structures to tackle complex challenges, plus a passion for learning new technologies and staying up to date with industry trends.
- Communication: excellent verbal and written communication skills to articulate design and code choices across teams.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 4 days ago

Apply

6.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Role:
We are looking for a passionate and experienced Sr. Python Developer to join our team and play a key role in building the next generation of risk systems. You will be responsible for the design, development, and implementation of data-intensive quantitative solutions that meet the demanding requirements of risk applications.

Responsibilities:
- Design, develop, and test data-intensive, quantitative solutions.
- Prepare technical documentation and mentor team members.
- Collaborate with Risk, Tech, and other teams to understand business requirements and translate them into technical solutions.
- Provide technical guidance to the team and review code.
- Automate test cases.
- Work with the Principal Engineer on ARB reviews, infrastructure reviews, etc.
- Stay up to date with the latest technologies and trends.

Qualifications:
- 6-12 years of experience in the design and development of large-scale systems.
- Hands-on experience developing large-scale systems in Python, Django, and FastAPI.
- Prior design and architecture experience with AWS services.
- Strong understanding of design patterns, Python, OpenAPI specifications, and API integrations.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Bonus Points:
- Experience with scalable, data-intensive financial systems.
- Awareness of the Risk domain.
- Knowledge of cloud-based deployment and containerization technologies.
- Experience with Agile development methodologies.

Posted 4 days ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Description
Job Title: Site Reliability Engineer
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Go/Java
Good-to-have skills: Docker and Kubernetes

Job Summary:
As a Site Reliability Engineer (SRE), you’ll bring together your software engineering expertise and systems knowledge to ensure our systems are scalable, reliable, and efficient. You’ll be instrumental in automating operations, solving complex infrastructure challenges, and driving continuous improvement to deliver seamless and resilient services.

Roles & Responsibilities:
- Design, build, and maintain scalable infrastructure and systems.
- Automate operational tasks to improve efficiency and reliability.
- Implement application monitoring and continuously improve application performance and stability.
- Develop and implement disaster recovery and incident management strategies.
- Collaborate with developers to improve application architecture and deployment.
- Optimize system availability, latency, and performance metrics.
- Manage CI/CD pipelines for seamless software delivery.
- Perform root cause analysis and lead detailed post-mortems.
- Consult with software development teams to implement reliability best practices.
- Write and maintain infrastructure and operational documentation.
- Take operational responsibility for a number of distributed applications, including on-call shifts.

Professional & Technical Skills:
- Strong experience in software engineering and systems architecture.
- Multiple years of experience programming in languages such as Python, Go, or Java.
- Expertise with cloud platforms (AWS, Azure, GCP) and tools.
- Hands-on experience with infrastructure as code (Terraform, Ansible, etc.).
- Familiarity with Linux/Unix systems and networking fundamentals.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes.
- Proven ability to monitor, debug, and optimize distributed systems.
- Experience managing CI/CD pipelines and automation frameworks.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills for cross-functional teamwork.
- Ability to analyze and improve complex systems for reliability and scalability.
- Self-motivated with a passion for continuous learning and improvement.

Additional Information:
About Our Company | Accenture
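As one concrete flavor of the "automate operational tasks" responsibility above, transient failures in distributed systems are commonly wrapped in retries with exponential backoff and jitter. The sketch below illustrates the pattern only; the flaky operation, attempt count, and delay constants are invented for this example.

```python
import random
import time

def with_retries(op, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call op(); on failure, retry with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise                      # budget exhausted: surface the error
            # Double the delay each attempt; jitter avoids thundering herds.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Hypothetical dependency that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, sleep=lambda _: None))  # ok
```

Passing `sleep` as a parameter keeps the helper testable (a no-op sleep in tests, `time.sleep` in production), a small design choice that matters when this logic sits on a hot path.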

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Kochi, Kerala, India

On-site

TCS Hiring for Observability Tools Tech Lead_PAN India Experience: 8 to 12 Years Only Job Location: PAN India (full role description as in the identical listing above). Kind regards, Priyankha M

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for Observability Tools Tech Lead_PAN India Experience: 8 to 12 Years Only Job Location: PAN India (full role description as in the identical listing above). Kind regards, Priyankha M

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

TCS Hiring for Observability Tools Tech Lead_PAN India Experience: 8 to 12 Years Only Job Location: PAN India TCS Hiring for Observability Tools Tech Lead_PAN India Required Technical Skill Set: Core Responsibilities: Designing and Implementing Observability Solutions: This involves selecting, configuring, and deploying tools and platforms for collecting, processing, and analyzing telemetry data (logs, metrics, traces). Developing and Maintaining Monitoring and Alerting Systems: Creating dashboards, setting up alerts based on key performance indicators (KPIs), and ensuring timely notification of issues. Instrumenting Applications and Infrastructure: Working with development teams to add instrumentation code to applications to generate meaningful telemetry data. This often involves using open standards like Open Telemetry. Analyzing and Troubleshooting System Performance: Investigating performance bottlenecks, identifying root causes of issues, and collaborating with development teams to resolve them. Defining and Tracking Service Level Objectives (SLOs) and Service Level Indicators (SLIs): Working with stakeholders to define acceptable levels of performance and reliability and tracking these metrics. Improving Incident Response and Post-Mortem Processes: Using observability data to understand incidents, identify contributing factors, and implement preventative measures. Collaborating with Development, Operations, and SRE Teams: Working closely with other teams to ensure observability practices are integrated throughout the software development lifecycle. Educating and Mentoring Teams on Observability Best Practices: Promoting a culture of observability within the organization. Managing and Optimizing Observability Infrastructure Costs: Ensuring the cost-effectiveness of observability tools and platforms. Staying Up to Date with Observability Trends and Technologies: Continuously learning about new tools, techniques, and best practices. 
Key Skills:

Strong Understanding of Observability Principles: Deep knowledge of logs, metrics, and traces and how they contribute to understanding system behavior.

Proficiency with Observability Tools and Platforms: Experience with tools such as:
Logging: Elasticsearch, Splunk, Fluentd, Logstash
Metrics: Prometheus, Grafana, InfluxDB, Graphite
Tracing: OpenTelemetry, Datadog APM
APM (Application Performance Monitoring): Datadog, New Relic, AppDynamics

Programming and Scripting Skills: Proficiency in languages like Python, Go, or Java, and scripting languages like Bash, for automation and tool integration.

Experience with Cloud Platforms: Familiarity with cloud providers such as AWS, Azure, or GCP and their monitoring and logging services.

Understanding of Distributed Systems: Knowledge of how distributed systems work and the challenges of monitoring and troubleshooting them.

Troubleshooting and Problem-Solving Skills: Strong analytical skills to identify and resolve complex issues.

Communication and Collaboration Skills: Ability to communicate technical concepts effectively to different audiences and work collaboratively with other teams.

Knowledge of DevOps and SRE Practices: Understanding of continuous integration/continuous delivery (CI/CD), infrastructure as code, and site reliability engineering principles.

Data Analysis and Visualization Skills: Ability to analyze telemetry data and create meaningful dashboards and reports.

Experience with Containerization and Orchestration: Familiarity with Docker, Kubernetes, and related technologies.

Kind Regards,
Priyankha M
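Log pipelines like the ones listed above ingest machine-parseable output most easily. As a stdlib-only sketch of that idea (not tied to any of the platforms named, and using a hypothetical "checkout" service name), structured JSON logging looks roughly like:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line, easy for log shippers to ingest."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")   # hypothetical service name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # emits {"level": "INFO", "logger": "checkout", "message": "order placed"}
```

Real deployments would add timestamps, trace IDs, and ship the lines to a collector, but the one-record-per-JSON-line shape is the common denominator.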

Posted 5 days ago

Apply

5.0 years

0 Lacs

Delhi, India

Remote

About Arena Club

Arena Club, co-founded by 5x World Series Champion Derek Jeter and entrepreneur Brian Lee, is revolutionizing the trading card industry. We're home to the first-ever digital card show, where fans buy, sell, and showcase trading cards like never before. With transparent grading, secure vaulting, and personalized online showrooms, we’re on a mission to make collecting more accessible, secure, and fun. Whether you're a lifelong collector or just getting started, Arena Club is where passion meets innovation.

About The Role

Join our fast-growing startup as a Senior Backend Engineer and help build the backbone of our platform. We are a dynamic, high-growth company where your work will have a direct impact on product success and scalability. This position is remote in India. The work shift requirement is 2PM to 11PM IST.

What You'll Be Doing

Design, build, and maintain scalable backend applications and APIs using Node.js, TypeScript, and Postgres
Leverage AI-powered coding tools (e.g., GitHub Copilot, Tabnine) to improve development speed, code quality, and system reliability
Architect and optimize systems for high scalability and performance as the business rapidly expands
Implement third-party integrations and external APIs to enhance the platform’s capabilities
Develop and maintain internal tools that support fulfillment operations, internal workflows, and machine learning initiatives
Collaborate cross-functionally with product, data, and frontend teams to ship high-impact features quickly
Ensure backend systems are secure, well-documented, and thoroughly tested

What We're Looking For

5+ years of experience as a professional backend engineer
Strong proficiency in Node.js, TypeScript, and Postgres
Proven experience building and scaling eCommerce or online marketplace platforms
Passion for using the latest AI coding tools and AI-driven best practices to enhance development workflows
Deep understanding of cloud platforms (AWS or GCP) and containerization/infrastructure-as-code tools (Docker, Terraform)
Ability to move fast and iterate quickly in a startup environment while maintaining high code quality
Excellent problem-solving skills and the ability to communicate technical trade-offs effectively

Tech Stack

Languages & Frameworks: TypeScript, Node.js, Python
Database: Postgres
Cloud & Infrastructure: AWS (S3, SQS, etc.) and/or GCP, Docker, Terraform
AI Development Tools: AI-powered coding assistants (e.g., GitHub Copilot, Tabnine), LLM-based development enhancements

Benefits

Competitive Pay
Health, Dental, and Vision insurance
Disability and Life Insurance
Vacation and Sick Time
Room for growth in a fast-growing startup
Work alongside passionate collectors and industry innovators

Apply Today!

If you’re ready to be part of something big, join the Arena Club team today. We welcome all backgrounds—whether you're into sports, collectibles, gaming, Pokémon, or just love learning new things, we want to hear from you! No warehouse or fulfillment experience required—we’ll train the right person.

Bonus Question

Do you collect trading cards or other memorabilia? Let us know when you apply!

Location: India (remote)
Work Shift: 2PM-11PM IST
Job Type: Full-Time | Senior-Level
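Third-party API integrations of the kind this role describes typically need retry logic. A small illustrative sketch of exponential backoff with a cap and jitter (not Arena Club's actual code, and showing only the deterministic schedule plus a jitter helper):

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0) -> list[float]:
    """Exponential backoff schedule: base * 2^n seconds per attempt, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def jittered(delays: list[float], rng: random.Random) -> list[float]:
    """Full jitter: sleep a uniform random fraction of each scheduled delay."""
    return [rng.uniform(0.0, d) for d in delays]

# backoff_delays(6) -> [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

In practice the caller would sleep for each jittered delay between attempts and give up (or dead-letter the job) once the schedule is exhausted; jitter keeps many failing clients from retrying in lockstep against the same upstream.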

Posted 5 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary

At PwC, our people in cybersecurity focus on protecting organisations from cyber threats through advanced technologies and strategies. They work to identify vulnerabilities, develop secure systems, and provide proactive solutions to safeguard sensitive data. As a cybersecurity generalist at PwC, you will focus on providing comprehensive security solutions and experience across various domains, maintaining the protection of client systems and data. You will apply a broad understanding of cybersecurity principles and practices to address diverse security challenges effectively.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Azure Kubernetes Service (AKS) and Docker: Proficient in containerization and orchestrating applications using Kubernetes, particularly on Azure, to effectively deploy and scale cloud applications. Azure Services Familiarity: Solid understanding of Azure infrastructure and services, complemented by expertise in tools like JFrog, to enhance cloud-native development and deployment. CI/CD Pipelines: Skilled in setting up and managing CI/CD pipelines across various platforms using DevOps tools such as GitLab, Azure DevOps, and GitHub Actions, essential for continuous integration and deployment processes. Experienced in integrating Java and/or .NET applications with Azure SDK components (e.g., Storage, Key Vault, Service Bus). Infrastructure as Code (IaC): Knowledgeable in using Terraform for provisioning Azure infrastructure and Ansible playbooks for VM configuration, aligning with IaC principles to automate resource management. Process Improvement: Capable of analyzing and managing existing processes to identify opportunities for improvement and automation. Communication and Client Relationships: Strong verbal and written communication skills, with experience in building and leveraging client relationships. 
Mandatory Skill Sets: DevOps
Preferred Skill Sets: DevOps
Years of Experience Required: 3 to 6
Education Qualifications: B.Tech, B.E.
Degrees/Field of Study Required: Bachelor of Technology
Required Skills: DevOps
Optional Skills: Accepting Feedback, Active Listening, Agile Methodology, Analytical Thinking, Azure Data Factory, Communication, Creativity, Cybersecurity, Cybersecurity Framework, Cybersecurity Policy, Cybersecurity Requirements, Cybersecurity Strategy, Embracing Change, Emotional Regulation, Empathy, Encryption Technologies, Inclusion, Intellectual Curiosity, Learning Agility, Managed Services, Optimism, Privacy Compliance, Regulatory Response, Security Architecture {+ 8 more}

Posted 5 days ago

Apply


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Lead AI Agent Engineer
Location: Hyderabad
Type: Full-Time
Experience: 5+ years (including 2+ years in AI agent development)

Company Description

Replicant Systems is a technology startup building agentic AI architectures that autonomously execute and optimize procurement, finance, and HR operations at scale. Established in 2024 and promoted by a well-known business family in Hyderabad.

About the Role

We’re looking for a hands-on Lead Engineer to join us in building our core AI agent stack. As the technical owner of our AI systems, you’ll lead a small, focused team of engineers to design, build, and deploy intelligent agents that deliver measurable business outcomes. You’ll play a foundational role in shaping both the product and the engineering culture.

Responsibilities

Architect and lead the development of LLM-powered autonomous agents and tool-using workflows
Manage and mentor a lean team of ML and software engineers
Own the AI infrastructure pipeline, from data ingestion and training to evaluation and deployment
Integrate open-source LLMs, vector stores, orchestration frameworks (e.g., LangChain, CrewAI, AutoGen), and retrieval mechanisms
Collaborate cross-functionally with product and design to align tech with business value
Continuously evaluate new models, frameworks, and tools to improve performance and scalability
Establish coding, documentation, and MLOps best practices within the team

Qualifications

5+ years of work experience, with at least 2 years building LLM-based applications or agents
Proficiency in Python, with strong knowledge of PyTorch, Hugging Face, LangChain, or similar frameworks
Hands-on experience with autonomous agents, retrieval-augmented generation (RAG), and multi-agent orchestration
Deep understanding of LLM internals, prompt engineering, and model fine-tuning
Experience with cloud infrastructure (AWS, GCP, or Azure), containerization (Docker), and deployment pipelines
Strong leadership skills with a bias for execution in fast-paced environments
Startup mindset: proactive, resourceful, and outcome-driven

Nice-to-Have

Publications or open-source contributions in the agentic AI or LLM community
Experience building copilots or automation tools for enterprise workflows
Exposure to enterprise SaaS or domain-specific AI products (e.g., procurement, finance, HR)

Why Join Us

Chance to shape a category-defining AI product
Work with cutting-edge tools and a culture that prioritizes technical excellence and speed
Flat, collaborative team with a bias toward action and learning
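As a toy illustration of the retrieval step in the RAG work this role mentions, here is cosine-similarity ranking over hand-made vectors. Real systems use learned embeddings and a vector store; the 3-dimensional "embeddings" and document names below are invented for the example:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 1) -> list[str]:
    """Return the k document names most similar to the query vector."""
    return sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)[:k]

# Hypothetical tiny corpus of two document "embeddings".
docs = {"invoice_policy": [0.9, 0.1, 0.0], "travel_policy": [0.1, 0.9, 0.2]}
top_k([1.0, 0.0, 0.0], docs)  # -> ["invoice_policy"]
```

The retrieved documents would then be stuffed into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.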

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies