
22 Packer Jobs

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Java, Spring, Spring Boot, Kafka, REST API, Microservices, Azure, CD Developer at NTT DATA in Chennai, Tamil Nadu (IN-TN), India, you will be part of a dynamic team that values exceptional, innovative, and passionate individuals who are eager to grow with the organization. You will draw on your hands-on experience in Java, Spring, Spring Boot, and Kafka to design robust RESTful APIs and microservices. Additionally, you will work with technologies such as HashiCorp Vault, Terraform, Packer, Kubernetes, and cloud platforms like Azure. Your expertise in API management, cloud infrastructure deployment, CD processes, testing frameworks, and modern programming languages will be essential for success in this role.

To excel in this position, you should have a Bachelor's degree in computer science or engineering, along with at least 6-9 years of experience in Java, Spring Boot, Oracle, Kubernetes, and other relevant technologies. Your strong communication skills, leadership experience, and ability to collaborate with cross-functional teams will be valuable assets. Moreover, your strategic thinking, problem-solving skills, and familiarity with ITIL processes will contribute to the continuous improvement of software solutions.

As a key member of the team, you will play a crucial role in driving meaningful discussions, staying updated on technology trends, and implementing CI/CD practices to deploy changes efficiently. Your willingness to work in a hybrid environment, including the client location at Ramanujam IT Park, Taramani, Chennai, and your commitment to a return to the office by 2025, will align with the general expectations of the role.

If you are ready to take on this challenging yet rewarding opportunity with NTT DATA, a trusted global innovator in business and technology services, apply now to be part of a diverse team that is dedicated to helping clients innovate, optimize, and transform for long-term success.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior DevOps Engineer in our Life Sciences & Healthcare DevOps team, you will have the opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you are passionate about coding in Python or any scripting language, experienced with Linux, and have worked in a cloud environment, we are excited to hear from you! Our team specializes in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have expertise in these areas, we would love to connect with you.

You should have 7+ years of professional software development experience and 5+ years as a DevOps Engineer or in a similar role, with proficiency in various CI/CD and configuration management tools such as Jenkins, Maven, Gradle, Spinnaker, Docker, Ansible, CloudFormation, Terraform, etc. Additionally, you should possess 3+ years of AWS experience managing resources in services like S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue, and Lambda. A minimum of 5 years of experience in Bash/Python scripting, plus wide knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols, is required. You will be on-call for critical production issues and should have a good understanding of the SDLC, patching, releases, and basic systems administration activities. It would be beneficial if you also held AWS Solution Architect certifications and had Python programming experience.

In this role, your responsibilities will include designing, developing, and maintaining the product's cloud infrastructure architecture; collaborating with different teams to provide end-to-end infrastructure setup; designing and deploying secure infrastructure as code; staying updated with industry best practices, trends, and standards; owning the performance, availability, security, and reliability of the products running across public cloud and multiple regions worldwide; and documenting solutions and maintaining technical specifications. The products you will be working on rely on container orchestration, Jenkins, various AWS services, Databricks, Datadog, Terraform, and more, and you will support the Development team in building them.

You will be part of the Life Sciences & HealthCare Content DevOps team, focusing on DevOps operations on production infrastructure related to Life Sciences & HealthCare Content products. The team consists of five members and reports to the DevOps Manager, providing support for various application products internal to Clarivate. The team also handles the change process on the production environment, incident management, monitoring, and customer service requests. The shift timing for this role is 12 PM to 9 PM, and you must provide on-call support during non-business hours based on team bandwidth. At Clarivate, we are dedicated to offering equal employment opportunities and comply with applicable laws and regulations governing non-discrimination in all locations.

Posted 2 days ago

Apply

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Service Reliability Engineer at Proofpoint, you will develop a deep understanding of the various services and applications that come together to deliver Proofpoint's next-generation security products. Your primary responsibility will be maintaining and extending the Elasticsearch and Splunk clusters used for critical near-real-time data analysis. This role involves continually evaluating the performance of these clusters, identifying and addressing developing problems, planning changes for high-load events, applying security fixes, and testing and performing upgrades, as well as enhancing the monitoring and alert infrastructure. You will also play a key role in maintaining other components of the data pipeline, which may involve serverless or server-based systems for data ingestion into the Elasticsearch pipeline. Optimizing cost vs. performance will be a focus, including testing new hosts or configurations. Automation is a priority, utilizing tools like Puppet and various scripting mechanisms to achieve a build-once/run-everywhere system. Your work will span various types of infrastructure, including public cloud, Kubernetes clusters, and private data centers, providing exposure to diverse operational environments. Building effective partnerships across different teams within the organization, such as Product, Engineering, and Operations, is crucial. Participation in an on-call rotation and addressing escalated issues promptly are also part of the role.

To excel in this position, you are expected to have a Bachelor's degree in computer science, information technology, engineering, or a related discipline. Your expertise should include proficient administration and management of Elasticsearch clusters, with secondary experience in managing Splunk clusters. Proficiency in provisioning and configuration management tools like Puppet, Ansible, and Rundeck is essential. Experience in building automations and infrastructure as code using tools like Terraform, Packer, or CloudFormation templates is a plus. You should also be familiar with monitoring and logging tools such as Splunk, Prometheus, and PagerDuty, as well as scripting languages like Python, Bash, Go, Ruby, and Perl. Experience with CI/CD tools like Jenkins, Pipelines, and Artifactory will be beneficial. An inquisitive mind, effective troubleshooting skills, and the ability to navigate a complex system to extract meaningful data are essential qualities for success in this role.

In addition to a competitive salary and benefits package, Proofpoint offers a culture focused on talent development, regular promotion cycles, company-sponsored education, and certifications. You will have the opportunity to work with cutting-edge technologies, participate in employee engagement initiatives, and benefit from annual health check-ups and insurance coverage. The company is committed to fostering diversity and inclusion in the workplace, offering hybrid work options, flexible hours, and inclusive facilities to support employees with diverse needs. Persistent Ltd. is an Equal Opportunity Employer that values diversity and prohibits discrimination and harassment. Join us to accelerate your growth professionally and personally, make a positive impact using the latest technologies, and collaborate in an innovative and inclusive environment to unlock global opportunities for learning and development. Let's unleash your full potential at Persistent.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for maintaining the Global Messaging Network of Tata Communications in the Network Operations Center, Global Customer Service Center, Enterprise Services. You will work in a 24x7 shift environment as part of the Service Assurance team, focusing on resolving faults swiftly and proactively mitigating network issues. Your primary duty will be to ensure the availability of CPaaS services and platform elements on the Global Messaging Network. Your responsibilities will include controlling network faults, performing root cause analysis, discussing permanent corrective actions with stakeholders, and implementing service routing configurations. Additionally, you will identify faults in customers' networks globally and collaborate to address and resolve them. You will work closely with cross-functional teams such as Engineering, Product, and Delivery to address customer and supplier issues effectively.

As a qualified candidate, you should be a Graduate Engineer or hold a Diploma in Electronics, Electronics & Telecom, or Computers. Preferred certifications include Unix/Linux, AWS Cloud, CI/CD (e.g., Git, Jenkins, Ansible), and SQL databases. You must possess a good understanding of various telecom standards and protocols (SMPP, SS7, SIP, GSM, RCS, and APIs like REST/HTTP, JSON/XML) and be proficient in Linux, relational databases, and scripting languages like Python or PHP. Moreover, you should excel in incident management tools like ServiceNow or Jira, as well as monitoring and alerting tools such as Zabbix, New Relic, QuickSight, and Nagios. Your troubleshooting skills, ability to diagnose complex technical issues, and passion for learning new technologies will be essential in this role. It is crucial to exhibit good analytical, diagnostic, and problem-solving skills, along with effective communication abilities and a customer-centric approach. Your willingness to embrace challenges, dynamic nature, and teamwork will contribute to your success in this role.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities: A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements.

A clear understanding of HTTP and network protocol concepts, designs, and operations (TCP dump, cookies, sessions, headers, client-server architecture) is expected. Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS Cluster, etc. Expertise with automating infrastructure as code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps. Expertise with patch management and APM tools like AppDynamics and Instana for monitoring and alerting. Knowledge of technologies including Apache Solr, MySQL, Mongo, Zookeeper, RabbitMQ, Pentaho, etc. Knowledge of cloud platforms including AWS and GCP is an added advantage. Ability to identify and automate recurring tasks for better productivity. Ability to understand and implement industry-standard security solutions. Experience in implementing auto scaling, DR, HA, and multi-region setups with best practices is an added advantage. Ability to work under pressure, managing expectations from various key stakeholders.

You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs.

Technical Requirements: Ability to grasp cloud platforms (AWS, Azure, GCP), Kubernetes, and containerization for scalable deployments. Basic knowledge of performance testing tools like JMeter, LoadRunner, or any other related tool. Good expertise in any of the programming languages like Java, Python, C, or C++. Ability to analyze system metrics using profiling and monitoring tools like Instana, Dynatrace, Prometheus, and Grafana.

Additional Responsibilities: Ability to identify bottlenecks and debug hotspots to optimize performance. Continuous learning of the latest trends in performance engineering frameworks and methodologies.

Preferred Skills: Technology->Analytics - Packages->Python - Big Data, Technology->Infra_ToolAdministration-Others->Loadrunner, Technology->Java->Java - ALL, Technology->Performance Testing->Performance Engineering->Apache Jmeter, Technology->Performance Testing->Performance Testing - ALL

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

The Lead Platform Engineer - Gen AI at Elanco, based in Bengaluru, India, will be part of the Software Engineering & Platforms team. Reporting to the IT Engineering Associate Director, you will be instrumental in driving the direction of platform and automation capabilities, specifically focusing on generative AI and its implementation at Elanco. Your role will involve collaborating with a diverse team to work on cutting-edge engineering initiatives that ensure secure, reliable, and efficient solutions using the latest technology.

As a successful candidate, you must possess a highly motivated and innovative mindset, with the ability to articulate complex technical topics, collaborate with external partners, and ensure the quality delivery of solutions. You will have the opportunity to contribute to the growth of a highly skilled engineering organization and play a key role in shaping the future of GenAI capabilities at Elanco. Your responsibilities will include staying updated on the latest AI research and technologies, contributing to continuous improvement and innovation within the team, identifying opportunities for enhancing application team and developer experience, and working collaboratively with various stakeholders to deliver high-quality technical solutions. Additionally, you will be responsible for building and running GenAI capabilities, supporting distributed teams on AI/ML consumption, and maintaining robust support processes.

To qualify for this role, you must have a minimum of 2+ years of hands-on experience in generative AI and LLMs, along with a total of 8+ years of experience as a software engineer. Proficiency in Python and in AI/ML frameworks such as TensorFlow and PyTorch is essential, as is a strong understanding of neural networks, NLP, computer vision, and other AI domains. Experience with cloud platforms and AI tools (e.g., Google Cloud, Azure) and familiarity with AI technologies, data pipelines, and deploying AI solutions in real-world applications are also required. It would be beneficial to have experience with cloud-native design patterns; core technologies like Terraform, Ansible, and Packer; cloud cognitive services; AI/embeddings technologies; modern application architecture methodologies; and API-centric design. Knowledge of authentication and authorization protocols, AI security, model evaluation, and safety will be advantageous.

This role offers an exciting opportunity to work on cutting-edge technology, contribute to the growth of a new engineering organization, and make a significant impact on Elanco's AI capabilities. If you are passionate about innovation, collaboration, and driving tangible outcomes, we encourage you to apply for this role.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Title: Platform & DevOps Engineer
Location: Hyderabad / Pune
Job Type: Contract to Hire
Domain: Banking and Finance

About the Client: Our client is a leading tech firm specializing in digital transformation, cloud solutions, and automation. They focus on innovation, efficiency, and continuous improvement through advanced technology and community-driven initiatives.

About the Role: iO Associates is seeking an experienced Platform & DevOps Engineer to manage DevOps tools, implement CI/CD pipelines, and maintain infrastructure on Google Cloud Platform (GCP). This role requires expertise in automation, infrastructure management, and cloud-based solutions within the Banking and Finance sector.

Responsibilities: Develop and maintain CI/CD pipelines using Jenkins. Manage Terraform-based Infrastructure as Code (IaC). Work with GCP services: GCE, GKE, BigQuery, Pub/Sub, Monitoring. Implement GitOps best practices and automation.

Requirements: 5+ years' experience with Jenkins, Terraform, Docker, and Kubernetes (GKE). Strong understanding of security, monitoring, and compliance. Experience with image management using Packer and Docker. Banking industry experience is a plus.

If you're a skilled DevOps professional with a passion for automation and cloud infrastructure, apply now or send your CV at .
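For candidates new to the "image management using Packer and Docker" requirement that recurs in these listings: a minimal Packer HCL template that bakes and tags a Docker image looks roughly like the sketch below. This is an illustration only, not taken from any listing; the base image, provisioning steps, and repository name are placeholder assumptions.

```hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}

# Placeholder base image; a real pipeline would pin a vetted image.
source "docker" "ubuntu" {
  image  = "ubuntu:22.04"
  commit = true
}

build {
  sources = ["source.docker.ubuntu"]

  # Illustrative provisioning step: install a package into the image.
  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx",
    ]
  }

  # Tag the committed image so it can be pushed to a registry.
  post-processor "docker-tag" {
    repository = "example/baked-nginx"
    tags       = ["0.1"]
  }
}
```

Running `packer init .` followed by `packer build .` against such a template produces a locally tagged image; CI jobs typically add a `docker-push` post-processor after the tag step.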

Posted 1 week ago

Apply

0.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities: A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements.

A clear understanding of HTTP and network protocol concepts, designs, and operations (TCP dump, cookies, sessions, headers, client-server architecture) is expected. Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS Cluster, etc. Expertise with automating infrastructure as code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps. Expertise with patch management and APM tools like AppDynamics and Instana for monitoring and alerting. Knowledge of technologies including Apache Solr, MySQL, Mongo, Zookeeper, RabbitMQ, Pentaho, etc. Knowledge of cloud platforms including AWS and GCP is an added advantage. Ability to identify and automate recurring tasks for better productivity. Ability to understand and implement industry-standard security solutions. Experience in implementing auto scaling, DR, HA, and multi-region setups with best practices is an added advantage. Ability to work under pressure, managing expectations from various key stakeholders.

You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs.

Technical Requirements: AWS, Azure, GCP, Linux, shell scripting, IaC, Docker, Kubernetes, Jenkins, GitHub.

Additional Responsibilities: Knowledge of design principles and fundamentals of architecture. Understanding of performance engineering. Knowledge of quality processes and estimation techniques. Basic understanding of the project domain. Ability to translate functional and non-functional requirements to system requirements. Ability to design and code complex programs. Ability to write test cases and scenarios based on the specifications. Good understanding of SDLC and agile methodologies. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate.

Preferred Skills: Technology->Cloud Platform->AWS Database, Technology->Cloud Platform->Azure Devops, Technology->Cloud Platform->GCP Database, Technology->Container Platform->Docker, Technology->Open System->Linux, Technology->Open System->Shell scripting

Posted 1 week ago

Apply

1.0 - 3.0 years

13 - 23 Lacs

Gurugram

Hybrid

Note: This role requires hands-on experience in DSA coding in any of the programming languages (.NET, C#, or Java). Please apply only if you have relevant experience.

Responsibilities: You will be working across functional (development/testing, deployment, systems/infrastructure, cloud) and project teams to ensure continuous operation of build and test systems. You will be driving and perfecting our vision of Continuous Delivery and making the release experience easy and enjoyable. You will be building high-quality tools and automation for internal use to support continuous delivery and to increase the velocity and productivity of engineering teams. You will develop and maintain tools and scripts to build, deploy, test, automate, and streamline product delivery from engineers to customers. You will design and implement tools and scripts to allow automated configuration management.

Qualifications: You have 1 to 3 years of experience, with a minimum of 1 year of software development experience using .NET, C#, or Java-based technologies. Expert knowledge of at least one of the following programming languages: C#, Java. Solid understanding and experience of integrating security tools and practices within the DevOps process. Expert knowledge of PowerShell, Groovy, and other scripting languages. Solid knowledge of continuous integration and continuous delivery practices and tools such as Jenkins. Good working knowledge of source code repository systems (Git, TFS). Good knowledge of cloud infrastructure such as AWS, Azure, and GCP. Experience working with Windows, Linux, Unix, iOS, and Android operating systems. Demonstrated ability to learn and acquire new technologies in the areas of DevOps. Experience working with Packer, Kitchen, Chef, Ansible, Artifactory, SonarQube, Docker containers, Kubernetes, and other tools used for orchestration.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

17 - 30 Lacs

Chennai

Hybrid

Client: Celestica
Job Location: Chennai - Jawaharlal Nehru Salai, SIDCO Industrial Estate, Guindy, Chennai, Tamil Nadu
Experience Required: 6+ years
Mode of Work: Hybrid (3 days work from the office, 2 days work from home)
Job Title: Azure DevOps Engineer

The Senior Specialist, IT Infrastructure, will install, maintain, upgrade, and continuously improve the site's operating environment, ensuring ongoing reliability, performance, and security of the infrastructure. This includes monitoring and upkeep of operating environments; responding to incidents and problems; planning for growth; deployment of new technologies; and designing, installing, configuring, maintaining, and performing testing of PC/server operating systems, networks, and related utilities and hardware. Other responsibilities include troubleshooting problems as reported by users, supporting web access and telephony services, and managing the acquisition, replacement, and decommissioning of related equipment, software, and services.

Performs tasks such as, but not limited to, the following: Provide technical support and perform maintenance for cloud and on-premises infrastructure. Perform service monitoring and reporting, focusing on risk management and compliance. Explore new solutions, enhancements, and opportunities for continuous improvement. Perform acquisition, provisioning, and decommissioning of related equipment, software, and services. Lead projects and change management initiatives in the areas of Azure cloud, automation, and IT infrastructure. Manage external service providers and vendors. Design, implement, and maintain automation solutions using Python (API level), shell scripting, and PowerShell scripting. Develop and maintain CI/CD pipelines and infrastructure as code using Azure DevOps, Ansible, and Terraform. Deploy, manage, and optimize SQL databases and support database administration tasks. Administer and troubleshoot both Linux and Windows environments. Ensure security, compliance, and performance of the IT infrastructure.

Knowledge/Skills/Competencies: Strong customer service orientation. Strong analytical, troubleshooting, and problem-solving skills. Good communication (oral and written), documentation, and presentation skills. Good leadership and teamwork capabilities. Negotiation and conflict resolution skills. Ability to plan projects and tasks effectively. Good understanding of IT requirements for end-to-end solutions, including security, business continuity, disaster recovery, and risk analysis. In-depth knowledge of Windows OS (server and client) and desktop operating environments; working knowledge of Linux and other mainstream OS is advantageous. Fundamental understanding of networking concepts. Advanced knowledge of infrastructure service management and diagnostic tools/processes. Experience with endpoint security management tools and processes. Understanding of IT infrastructure asset lifecycle management. Proficiency in Azure and Azure DevOps for cloud infrastructure and automation. Programming and automation experience using Python (API level), shell scripting, and PowerShell scripting. Hands-on experience with Ansible and Terraform for configuration management and infrastructure as code. Experience managing and optimizing SQL databases.

Physical Demands: Duties of this position are performed in a normal office environment. Duties may require extended periods of sitting and sustained visual concentration on a computer monitor or numbers and other detailed data. Repetitive manual movements (e.g., data entry, using a computer mouse, using a calculator, etc.) are frequently required.

Please fill in all the important details provided below and attach your updated resume. Send it to Ralish.sharma@compunnel.com

1. Total Experience:
2. Relevant experience as an Azure DevOps Engineer:
3. Experience in Packer:
4. Experience in Ansible:
5. Experience in Terraform:
6. Experience in Python/API:
7. Experience in SQL:
8. Experience in Azure DevOps:
9. Experience in Docker/Kubernetes:
10. Current company:
11. Current designation:
12. Tenure of your time with this company:
13. Highest education:
14. Notice period:
15. Current CTC:
16. Expected CTC:
17. Current location:
18. Preferred location:
19. Hometown:
20. Contact no.:
21. If you have any offer from some other company, please mention the offer amount and offer location:
22. Reason for looking for a change:

If the job description is suitable for you, please get in touch with me at the number below: 9910044363.
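Several listings on this page pair "infrastructure as code using Azure DevOps, Ansible, and Terraform" requirements. For orientation only, a minimal Terraform sketch of the kind of Azure provisioning these roles describe might look like the following; the provider version, resource names, and region are placeholder assumptions, not details from any listing.

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# Placeholder names and location for illustration.
resource "azurerm_resource_group" "example" {
  name     = "rg-devops-example"
  location = "Central India"
}

resource "azurerm_virtual_network" "example" {
  name                = "vnet-devops-example"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}
```

In an Azure DevOps pipeline, `terraform plan` typically runs on pull requests and `terraform apply` on merge, with state stored remotely (e.g., in an Azure Storage backend).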

Posted 3 weeks ago

Apply

0.0 - 1.0 years

0 - 1 Lacs

Hyderabad, Telangana, India

On-site

In-Warehouse Job Description

Job Title: Associate (Full Time)
Department: Retail/Customer Service
Reports to: Assistant Warehouse Manager / Team Leader

Job Summary: As a Picker, you will be responsible for accurately selecting and preparing orders for shipment. Your attention to detail and efficiency will play a key role in ensuring that our customers receive their products on time and in excellent condition.

Education: 10th pass, Diploma holders, and Graduates can apply. Age limit: 18 to 34 years. Graduates and diploma candidates who passed out within the last 5 years are preferred; freshers are welcome. Immediate joiners preferred. Salary is based on your education: 10th: 14K; Diploma: 17K (2020-2025 passed out); Graduates: 18K (2020-2025 passed out).

Key Responsibilities: Accurately pick items from shelves according to order specifications. Verify items against packing lists to ensure accuracy. Pack items securely for shipping, ensuring that they are protected during transit. Maintain organization and cleanliness in the picking area and throughout the warehouse. Assist with inventory management by reporting low stock levels and discrepancies. Operate warehouse equipment such as forklifts and pallet jacks (if certified). Follow safety protocols and guidelines to ensure a safe working environment. Collaborate with team members to meet daily productivity targets. Pick material based on picking lists.

Qualifications and Required Skills: Education: 10th, Diploma, or Graduate. Previous experience in a warehouse or picking role is a plus. Strong attention to detail and accuracy. Ability to work in a fast-paced environment. Basic math skills and the ability to read and interpret order documents. Physical stamina to lift heavy items and stand for extended periods. Ability to work independently and as part of a team.

What We Offer: Competitive salary and benefits package. Opportunities for advancement and professional development. A supportive and inclusive work environment.

Posted 4 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Title: Development and Operations Engineer
Location: Chennai, India
Department: Trimble Cloud Core Platform

We are seeking a self-motivated and enthusiastic Senior Site Reliability Engineer to join the Trimble Cloud Core Platform's Site Reliability Engineering team, which is responsible for provisioning and operating our core services in the public cloud.

Key Responsibilities: Quickly grasp and analyze new or new-to-you systems that are complex and rapidly changing. Perform root cause analysis for production issues. Identify problems and opportunities for improvements that are common across many teams and services. Develop automation and monitoring solutions. Utilize best practices in cloud security and operations. Optimize applications for maximum speed and scalability. Collaborate with other team members and stakeholders. Evaluate new tools, technologies, and processes to improve the speed, efficiency, and scalability of continuous integration environments. Fix compliance issues and requirements raised by SecOps tools. Optimize cost across cloud platforms and logging and monitoring tools. Foster collaboration with software product development, architecture, and engineering teams to ensure releases are delivered with repeatable and auditable processes. Learn and be passionate about cloud computing.

Required Skills and Experience: Two to three years of strong experience with demonstrably deep AWS knowledge, monitoring, troubleshooting, and related DevOps technologies. Strong experience with CI/CD pipelines including Jenkins, GitHub Actions, and Azure DevOps. Infrastructure automation using Terraform, CloudFormation, Ansible, Packer, or similar. Experience with cloud orchestration frameworks, and development and SRE support of these systems. OS image builds for Linux and Windows, and patch automation. Deep understanding of Linux/Unix operating systems. Familiarity and experience with architectural design. Serverless technology experience. Experience with scalability, security, and performance engineering for web services. Mentoring and training others on the team. Being an open team collaborator. Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments. Work independently across multiple platforms and applications to understand dependencies. Experience with scripting and automated process management via scripting, such as Shell and Python.

Desirable Skills and Experience: Experience with monitoring tools like SumoLogic, DataDog, ELK, InfluxDB, Grafana, and Prometheus. Experience with Atlassian tools: Bitbucket, Jira, and Confluence. Experience with containers, serverless application models, microservices, NoSQL databases, and enterprise messaging.

Posted 4 weeks ago

Apply

6.0 - 10.0 years

25 - 37 Lacs

Bengaluru

Remote

Role & Responsibilities:
- Design and develop solutions for deploying highly secure, highly available, performant, and scalable services in elastically provisioned environments.
- Design and maintain persistent storage solutions in our infrastructure.
- Own all operational aspects of running persistent storage, including automation, monitoring and alerting, reliability, and performance.
- Have a direct impact on running a business by thinking about innovative solutions to operational problems.
- Drive solutions and communication for production-impacting incidents.
- Run technical projects and be responsible for project-level deliveries.
- Partner well with engineering and business teams across continents.

Mandatory Skills:
- Bachelor's or advanced degree in Computer Science or a closely related field.
- 7+ years of professional experience in DevOps, with at least 1-2 years in Linux/Unix.
- Very strong in core CS concepts around operating systems, networks, and systems architecture, including web services.
- Strong scripting experience in Python and Bash.
- Deep experience administering, running, and deploying AWS-based services.
- Strong knowledge of PostgreSQL, Redis, ElastiCache (OpenSearch), and Neptune internals, performance tuning, query optimization, and indexing strategies.
- Experience with Aurora, Redis, ElastiCache, and Neptune architecture, including replication, failover, and backup/restore mechanisms.
- Familiarity with AWS services commonly used alongside the above (e.g., RDS, CloudWatch, IAM, VPC, Lambda).
- Experience with high availability and disaster recovery solutions such as Aurora, OpenSearch, Neptune, and Redis.
- Experience with database migration tools like AWS DMS, pg_dump, or logical replication.
- Solid experience with Terraform, Packer, and Docker or their equivalents.
- Knowledge of security protocols and certificate infrastructure.
- Strong debugging, troubleshooting, and problem-solving skills.
- Broad experience with cloud-hosted applications, including virtualization platforms, relational and non-relational data stores, reverse proxies, and orchestration platforms.
- Curiosity, continuous learning, and drive to continually raise the bar.
- Strong partnering and communication skills.

Great-to-Have Experience and Qualifications:
- Past experience as a senior developer or application architect is strongly preferred.
- Experience working with, and preferably designing, a system compliant with a security framework (PCI DSS, ISO 27000, HIPAA, SOC 2, ...).

Preferred candidate profile: candidates from product-based companies, AI-based startups, and product startups are preferred.
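The posting above emphasizes replication, failover, and backup/restore mechanics. As a toy illustration of one piece of that logic (the replica names, lag figures, and threshold are made-up assumptions, not from the posting), here is a minimal sketch of choosing which replica to promote after a primary failure:

```python
from typing import Dict, Optional

def pick_failover_candidate(replica_lag_bytes: Dict[str, int],
                            max_lag_bytes: int = 1_000_000) -> Optional[str]:
    """Return the replica with the smallest replication lag, or None if
    every replica is too far behind the failed primary to promote safely."""
    eligible = {name: lag for name, lag in replica_lag_bytes.items()
                if lag <= max_lag_bytes}
    if not eligible:
        return None  # operator intervention needed; promoting would lose data
    return min(eligible, key=eligible.get)

# Example: replica-b has the least lag, so it is the promotion candidate;
# replica-c exceeds the lag ceiling and is excluded.
candidate = pick_failover_candidate({
    "replica-a": 50_000,
    "replica-b": 1_200,
    "replica-c": 2_500_000,
})
```

Real systems (Aurora, Patroni, etc.) layer quorum and fencing on top of this, but the least-lag selection step looks much the same.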

Posted 4 weeks ago

Apply

0.0 - 1.0 years

1 - 1 Lacs

Hyderabad

Work from Office

Pick orders accurately and efficiently from designated locations within the warehouse. Pack items securely using appropriate packaging materials and methods to prevent damage during transit. Organize and maintain inventory storage areas.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Software Engineer @ Cisco Common Services Engineering

We are Cisco Secure Common Services Engineering, a team of cybersecurity experts and innovative engineers who support the products and developers across Cisco Security. We put our people first. We're adding dedicated members to our growing team who will help us take Platform and Security to the next level, because we believe that there is always room for growth.

Your impact: Common Services Platform Engineering provides the basic building blocks for the Cisco Security Cloud. As a Software Engineer, you will help us in our continuous mission to make our products work together seamlessly, creating a fantastic user experience for our customers. You will tackle challenges with a new lens and iterate with a passion for improvement. Problem-solving and innovating will be the name of the game.

What you'll do:
- Build cloud applications for various platform initiatives.
- Build data pipelines in a scalable cloud environment.
- Partner closely with product management to identify, scope, and estimate work for the team.
- Contribute to POCs and perform quantitative and qualitative technology comparisons.
- Take ownership of large portions of the platform, and help us design, deliver, and maintain that code going forward.
- Mentor junior engineers to produce their best work.
- Review code.
- Handle production issues.

Who you'll work with:
- Product and Engineering Management, to identify short-term milestones and long-term goals.
- The DevOps group, to jointly govern infrastructure.
- Architects and tech leads, to deliver solutions.
- Data engineers in your group, to achieve near- and long-term goals.

Who you are: An agile, pragmatic, and hardworking developer with good programming skills. You pride yourself on your communication and interpersonal skills, your keen eye for both aesthetic design and code, and your ability to autonomously plan and prioritize your work assignments based on the objectives of the team.

- 5+ years of design and development skills.
- Proficient in Golang (must have); Python/Node.js is a plus.
- Experience with DevOps tools like Docker, Jenkins, Terraform, Ansible, and Packer.
- Experience building CI/CD pipelines.
- Experience writing and optimizing SQL queries.
- Experience taking difficult problems and translating them into solutions.
- Familiarity with Git, Confluence, and Jira.
- Ability to communicate clearly and summarize information concisely.
- Experience working on any of the public cloud providers (AWS/GCP/Azure).
- Good to have: knowledge of data streaming technologies like Spark, Kafka, etc.
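The posting above highlights building data pipelines in a scalable cloud environment. As a generic sketch of one common pipeline building block (the batch size and record shape are illustrative assumptions, not Cisco's design), here is batching a record stream before writing it downstream:

```python
from typing import Iterable, Iterator, List

def batched(records: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Group an incoming record stream into fixed-size batches so the
    downstream sink (e.g. a bulk-insert API) is called fewer times."""
    batch: List[dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield batch

# Example: 5 records in batches of 2 yield batch sizes 2, 2, 1.
sizes = [len(b) for b in batched(({"id": i} for i in range(5)), 2)]
```

Because it is a generator over a generator, this processes records lazily and never holds more than one batch in memory, which matters once the stream no longer fits in RAM.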

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities: A day in the life of an Infosys Equinox employee. As part of the Infosys Equinox delivery team, your primary role would be to ensure effective Design, Development, Validation, and Support activities to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand client requirements in a detailed manner and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers.

- Ensure high availability of the infrastructure, administration, and overall support.
- Strong analytical, troubleshooting, and problem-solving skills; root cause identification and proactive service improvement; staying up to date on technologies and best practices.
- Team and task management with tools like Jira, adhering to SLAs.
- A clear understanding of HTTP and network protocol concepts, designs, and operations: TCP dump, cookies, sessions, headers, client-server architecture.
- More than 5 years of working experience on the AWS, Azure, or GCP cloud platforms.
- Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS clusters, etc.
- Expertise in automating infrastructure as code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps.
- Expertise with patch management and APM tools like AppDynamics and Instana for monitoring and alerting.
- Knowledge of technologies including Apache Solr, MySQL, Mongo, ZooKeeper, RabbitMQ, Pentaho, etc.
- Knowledge of cloud platforms including AWS and GCP is an added advantage.
- Ability to identify and automate recurring tasks for better productivity.
- Ability to understand and implement industry-standard security solutions.
- Experience in implementing auto scaling, DR, HA, and multi-region deployments with best practices is an added advantage.
- Ability to work under pressure, managing expectations from various key stakeholders.

Technical Requirements: AWS, Azure, GCP, Linux, shell scripting, IaC, Docker, Kubernetes, Mongo, MySQL, Solr, Jenkins, GitHub, automation, TCP/HTTP network protocols.

Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities; strong technical skills; good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical, and debugging skills.

Preferred Skills: Technology->Cloud Platform->AWS App Development, Technology->Cloud Platform->Azure Devops, Technology->Cloud Platform->GCP Database, Technology->Open System->Linux, Technology->Open System->Shell scripting
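The listing above asks for experience implementing auto scaling with best practices. As a toy illustration of the target-tracking idea behind most autoscalers, Kubernetes HPA included (the target utilization and bounds here are made-up assumptions, not from the posting), here is a minimal scale-decision sketch:

```python
import math

def desired_replicas(current: int, cpu_utilization: float, target: float = 0.6,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    """Target-tracking scaling: size the fleet so per-replica CPU lands
    near `target`, clamped to [min_replicas, max_replicas]."""
    if cpu_utilization <= 0:
        return min_replicas  # idle fleet, shrink to the floor
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_replicas, min(max_replicas, wanted))

# Example: 4 replicas running at 90% CPU with a 60% target -> scale out to 6.
n = desired_replicas(4, 0.9)
```

Production autoscalers add cooldown windows and tolerance bands around this core formula so the fleet does not thrash on metric noise.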

Posted 1 month ago

Apply

0.0 - 1.0 years

1 - 2 Lacs

Talegaon-Dabhade

Work from Office

Urgent Requirement: Loading - Unloading (Male). Location - Naulakh Umbre, Talegaon. Salary - 15 to 20k + Bus + PF, ESIC. Call Rajesh Sir - 8485874099. Bus - Induri / Talegaon / Kamshet

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Position Summary: The F5 NGINX Business Unit is seeking a DevOps Software Engineer III based in India. As a DevOps engineer, you will be an integral part of a development team delivering high-quality features for exciting next-generation NGINX SaaS products. In this position, you will play a key role in building automation, standardization, operations support, and tools to implement and support world-class products; you will design, build, and maintain infrastructure, services, and tools used by our developers, testers, and CI/CD pipelines. You will champion efforts to improve reliability and efficiency in these environments and explore and lead efforts toward new strategies and architectures for pipeline services, infrastructure, and tooling. When necessary, you are comfortable wearing a developer hat to build a solution. You are passionate about automation and tools. You'll be expected to handle most development tasks independently, with minimal direct supervision.

Primary Responsibilities
- Collaborate with a globally distributed team to design, build, and maintain tools, services, and infrastructure that support product development, testing, and CI/CD pipelines for SaaS applications hosted on public cloud platforms.
- Ensure DevOps infrastructure and services maintain the required level of availability, reliability, scalability, and performance.
- Diagnose and resolve complex operational challenges involving network, security, and web technologies, including troubleshooting problems with HTTP load balancers, API gateways (e.g., NGINX proxies), and related systems.
- Take part in product support, bug triaging, and bug-fixing activities on a rotating schedule to ensure the SaaS service meets its SLA commitments.
- Consistently apply forward-thinking concepts relating to automation and CI/CD processes.

Skills
- Experience deploying infrastructure and services in one or more cloud environments such as AWS, Azure, or Google Cloud.
- Experience with configuration management and deployment automation tools such as Terraform, Ansible, and Packer.
- Experience with observability platforms like Grafana, Elastic Stack, etc.
- Experience with source control and CI/CD tools like Git, GitLab CI, GitHub Actions, AWS CodePipeline, etc.
- Proficiency in scripting languages such as Python and Bash.
- Solid understanding of Unix operating systems.
- Familiarity or experience with container orchestration technologies such as Docker and Kubernetes.
- Good understanding of computer networking (e.g., DNS, DHCP, TCP, IPv4/v6).
- Experience with network service technologies (e.g., HTTP, gRPC, TLS, REST APIs, OpenTelemetry).

Qualifications
- Bachelor's or advanced degree, and/or equivalent work experience.
- 5+ years of experience in relevant roles.
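The role above involves troubleshooting HTTP load balancers and API gateways such as NGINX proxies. As a toy illustration of the round-robin policy such proxies commonly apply by default (the upstream addresses are made up for the example), here is a minimal balancer sketch:

```python
import itertools
from typing import Sequence

class RoundRobinBalancer:
    """Cycle through a fixed upstream pool, the simplest load-balancing
    policy an HTTP proxy applies: each request goes to the next server."""

    def __init__(self, upstreams: Sequence[str]):
        if not upstreams:
            raise ValueError("upstream pool must not be empty")
        self._cycle = itertools.cycle(upstreams)

    def next_upstream(self) -> str:
        return next(self._cycle)

# Example: two upstreams receive alternating requests.
lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080"])
picks = [lb.next_upstream() for _ in range(4)]
```

Real proxies extend this with health checks and weights, but debugging uneven traffic often starts with reasoning about exactly this rotation.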

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 18 Lacs

Pune, Mumbai (All Areas)

Hybrid

HashiCorp Terraform, Packer & Kubernetes - setup and configuration, training, operations, maintenance, and incident handling

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities: A day in the life of an Infosys Equinox employee. As part of the Infosys Equinox delivery team, your primary role would be to ensure effective Design, Development, Validation, and Support activities to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand client requirements in a detailed manner and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers.

- Ensure high availability of the infrastructure, administration, and overall support.
- Strong analytical, troubleshooting, and problem-solving skills; root cause identification and proactive service improvement; staying up to date on technologies and best practices.
- Team and task management with tools like Jira, adhering to SLAs.
- A clear understanding of HTTP and network protocol concepts, designs, and operations: TCP dump, cookies, sessions, headers, client-server architecture.
- More than 5 years of working experience on the AWS, Azure, or GCP cloud platforms.
- Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS clusters, etc.
- Expertise in automating infrastructure as code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps.
- Expertise with patch management and APM tools like AppDynamics and Instana for monitoring and alerting.
- Knowledge of technologies including Apache Solr, MySQL, Mongo, ZooKeeper, RabbitMQ, Pentaho, etc.
- Knowledge of cloud platforms including AWS and GCP is an added advantage.
- Ability to identify and automate recurring tasks for better productivity.
- Ability to understand and implement industry-standard security solutions.
- Experience in implementing auto scaling, DR, HA, and multi-region deployments with best practices is an added advantage.
- Ability to work under pressure, managing expectations from various key stakeholders.

Technical Requirements: AWS, Azure, GCP, Linux, shell scripting, IaC, Docker, Kubernetes, Mongo, MySQL, Solr, Jenkins, GitHub, automation, TCP/HTTP network protocols.

Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities; strong technical skills; good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical, and debugging skills.

Preferred Skills: Technology->Open System->Shell scripting, Technology->Cloud Platform->AWS App Development, Technology->Open System->Linux, Technology->Cloud Platform->Azure Devops, Technology->Cloud Platform->GCP Database

Posted 1 month ago

Apply

1.0 - 6.0 years

0 - 2 Lacs

Sanand, Bavla, Ahmedabad

Work from Office

Role & responsibilities:
- Pulling, packing, weighing, and labeling products based on daily orders.
- Assembling, lining, and padding cartons, crates, and containers using hand tools.
- Locating items in warehouses, packing goods into containers, and updating customer invoices.
- Picking items from an order sheet, preparing and packaging items for shipment, and general material handling within the warehouse.

Posted 1 month ago

Apply

1 - 2 years

1 - 1 Lacs

Noida, Sector - 131, Gautam Budh Nagar

Work from Office

We are hiring male candidates for the role of Picker / Packer, responsible for efficient packaging and order picking in a warehouse environment.

Eligibility Criteria:
- Gender: Male
- Age: 18 to 28 years
- Minimum Qualification: High School (10th Pass)
- Must be able to read Hindi and English

Additional Information: Candidates must be physically fit and willing to work in a fast-paced warehouse environment. The job involves standing for long hours, lifting, sorting, and packing.

Key Responsibilities:
- Pick and pack items as per order requirements
- Ensure accurate labeling and sorting of packages
- Maintain cleanliness and organization of the warehouse
- Follow safety protocols and operational procedures
- Assist in inventory stock checks when required

Shift timings (planned on a rotational basis):
- 7 am to 5 pm
- 1 pm to 11 pm
- 10 pm to 8 am

Rotational off: 6 days working, 4 weekly offs.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies