5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Developer P3 C3 TSTS Hybrid US Shift
Primary Skills: DevOps and infrastructure engineering; CI/CD tools; AWS networking services, storage services, certificate management, secrets management, and database setup (RDS); Terraform/CloudFormation/AWS CDK; Python and Bash.
Secondary Skills: Expertise in AWS CDK and CDK Pipelines for IaC. Understanding of logging and monitoring services like AWS CloudTrail, CloudWatch, GuardDuty, and other AWS security services. Communication and collaboration skills to work effectively in a team-oriented environment.
JD: Design, implement, and maintain cloud infrastructure using the AWS Cloud Development Kit (CDK). Develop and evolve Infrastructure as Code (IaC) to ensure efficient provisioning and management of AWS resources. Develop and automate Continuous Integration/Continuous Deployment (CI/CD) pipelines for infrastructure provisioning and application deployment. Configure and manage various AWS services, including but not limited to EC2, VPC, Security Groups, NACLs, S3, CloudFormation, CloudWatch, AWS Cognito, IAM, Transit Gateway, ELB, CloudFront, Route 53, and more. Collaborate with development and operations teams, bridging the gap between infrastructure and application development. Monitor and troubleshoot infrastructure performance issues, ensuring high availability and reliability. Implement proactive measures to optimize resource utilization and identify potential bottlenecks. Implement security best practices, including data encryption and adherence to security protocols. Ensure compliance with industry standards and regulations.
Must Have: 5+ years of hands-on experience in DevOps and infrastructure engineering. Solid understanding of AWS services and technologies, including EC2, VPC, S3, Lambda, Route 53, and CloudWatch. Experience with CI/CD tools, DevOps implementation, and HA/DR setup. In-depth experience with AWS networking services, storage services, certificate management, secrets management, and database setup (RDS). Proven expertise in Terraform/CloudFormation/AWS CDK. Strong scripting and programming skills, with proficiency in languages such as Python and Bash.
Nice to Have: Proven expertise in AWS CDK and CDK Pipelines for IaC. Familiarity with logging and monitoring services like AWS CloudTrail, CloudWatch, GuardDuty, and other AWS security services. Excellent communication and collaboration skills to work effectively in a team-oriented environment.
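For illustration only (not part of the posting): a minimal AWS CDK v2 sketch in Python of the kind of IaC work described above. The stack, VPC, and bucket names are placeholder assumptions.

```python
# Minimal AWS CDK (v2, Python) sketch; all resource names are illustrative.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DevStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # A VPC across two AZs; CDK synthesizes subnets, route tables, and gateways.
        ec2.Vpc(self, "AppVpc", max_azs=2)

        # An encrypted, versioned S3 bucket for build artifacts.
        s3.Bucket(
            self, "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.DESTROY,
        )

app = App()
DevStack(app, "DevStack")
app.synth()
```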
Posted 1 day ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
As an IBM Spectrum LSF Backend Software Developer, you will be responsible for designing and developing components and features for IBM Spectrum LSF, and will be involved in designing, developing, and discussing product delivery and strategy. You should also have the leadership qualities to work as a technical lead/software architect and be able to deliver end-to-end features. As part of a worldwide development team, you will collaborate with team members and clients across time zones to support business success. You will address product issues reported by clients and provide fixes in a timely manner. Be an avid coder who can get their hands dirty and be involved in coding at the deepest level. Work with other developers in the dev team to maintain and improve the code base. Work in an Agile environment of continuous delivery. You'll learn directly from senior members and leaders in this field.
Required education: Bachelor's Degree
Required technical and professional expertise: Proven knowledge of software development principles and agile development experience. 5+ years of experience and strong knowledge in C and C++. Working experience of Java and Python. 3+ years of experience in development of systems or enterprise software on Linux. Good knowledge of the Linux kernel, system administration, networking, and performance. Good knowledge of distributed systems and enterprise software. Self-learner with a proactive approach. Excellent communication skills.
Preferred technical and professional experience: Experience with containers (Docker, Singularity, Podman) and container-based platforms. Experience working with Git, AWS, Azure, Google Cloud. Good understanding of and development experience on Windows. Development experience with GPUs. Client interaction experience.
Posted 1 day ago
8.0 - 13.0 years
10 - 15 Lacs
Mumbai
Work from Office
Skill: Java, AWS. Experience: 6-9 years. Role: T2. Responsibilities: Strong proficiency in Java (8 or higher) and the Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.). Develop and maintain robust backend services and RESTful APIs using Java and Spring Boot. Design and implement microservices that are scalable, maintainable, and deployable in AWS. Integrate backend systems with AWS services including but not limited to Lambda, S3, DynamoDB, RDS, SNS/SQS, and CloudFormation. Collaborate with product managers, architects, and other developers to deliver end-to-end features. Participate in code reviews, design discussions, and agile development processes.
Posted 1 day ago
8.0 - 13.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Experience: 8 years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in PySpark for distributed data processing and transformation. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
Technical Skills: Proficiency in Python and PySpark for data processing and transformation tasks. Deep understanding of ETL concepts and best practices. Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers). Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools. Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro). Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS.
Additional Skills: Experience with AWS Lambda for serverless data processing and orchestration. Understanding of AWS Redshift for data warehousing and analytics. Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing. Knowledge of data governance practices, including data lineage and auditing. Familiarity with CI/CD pipelines and Git for version control. Experience with Docker and containerization for building and deploying applications.
Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
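For illustration only (not part of the posting): a minimal AWS Glue PySpark job skeleton of the kind of ETL work described above. The database, table, and bucket names are placeholder assumptions.

```python
# Minimal AWS Glue (PySpark) ETL job skeleton; source/target names are illustrative.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source table, drop a junk column, and write Parquet to S3.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)
cleaned = source.drop_fields(["_corrupt_record"])

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```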
Posted 1 day ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Strong application development work experience - Agile environment preferred. Solid application design, coding, testing, maintenance, and debugging skills. Experience with JUnit and Cucumber testing. Experience with APM monitoring tools and logging tools like Splunk. Proficiency with JIRA, Confluence (preferred). Hands-on AWS solution implementation experience is mandatory. Expertise in development using Core Java, J2EE, XML, Web Services/SOA, and Java frameworks - Spring, Spring Batch, Spring Boot, JPA, REST, MQ. Knowledgeable in developing RESTful microservices with a technical stack of Amazon ECS, EC2, S3, API Gateway, Amazon Aurora, ALB, and Route 53; extensive knowledge and implementation experience required. Working with Git/Bitbucket, Maven, Gradle, and Jenkins to build and deploy code to production environments. Hands-on CI/CD and Kubernetes experience.
Posted 1 day ago
2.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
A Cloud Software Developer for OpenShift is responsible for designing, developing, deploying, and maintaining cloud-native applications on Red Hat OpenShift. The role primarily involves working with containers, Kubernetes, and DevOps practices to build scalable, resilient, and secure cloud applications. Roles & Responsibilities: Design and develop cloud-native applications using OpenShift. Containerize applications using Docker and deploy them on Kubernetes. Implement microservices architecture and ensure scalability. Develop applications using languages like Java, Python, Go, or Node.js. Configure and manage OpenShift clusters. Develop and manage Operators for automating OpenShift workflows and efficient application deployment. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 2 years of industrial experience working with Unix/Linux-based products developed using the C, C++, or Go programming languages. Minimum 2-3 years of experience leading development or support teams troubleshooting to resolve issues. Good development/support experience working with various network protocols (Layer 2 - Layer 5) and devices (routers, switches, firewalls, load balancers, VPN, QoS). Must have knowledge of virtualization, operating system internals, and hypervisors (KVM, z/VM, Hyper-V). Expertise in translating technical specifications or customer requirements, preparing HLD/LLD, and working closely with team members to translate specifications/designs into product deliverables. Good understanding of enterprise servers, firmware, patches, hotfixes, and security configurations. Proven operational experience in network operations, including incident, change, and problem management. Excellent analytical and problem-solving skills. Excellent written and verbal communication skills. Ability to effectively communicate product architectures and design proposals and negotiate options at senior management levels. Experience working with global teams/partner labs. Preferred technical and professional experience: Solid understanding of systems hardware and architecture. Good understanding of operating system internals/kernel (process management, memory management, virtualization, scheduling, I/O (networking and storage), security, etc.). Understanding of AI/ML model deployments and the AI lifecycle; hands-on experience deploying AI/ML models on cloud.
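For illustration only (not part of the posting): a small sketch using the official Python kubernetes client, which also works against OpenShift clusters, to list pods in a namespace. The namespace and kubeconfig location are assumptions.

```python
# Illustrative sketch: list pods and their phases with the Python kubernetes client.
from kubernetes import client, config

def list_pods(namespace: str = "default") -> None:
    # Load credentials from ~/.kube/config (use load_incluster_config() inside a pod).
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(pod.metadata.name, pod.status.phase)

if __name__ == "__main__":
    list_pods("default")
```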
Posted 1 day ago
3.0 - 8.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Data Analytics
Good to have skills: Microsoft SQL Server, AWS Redshift
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions, providing insights and solutions to enhance application performance and user experience. Additionally, you will mentor team members, fostering a collaborative environment that encourages innovation and continuous improvement.
Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Facilitate knowledge sharing sessions to enhance team capabilities. Analyze application performance metrics and implement improvements.
Professional & Technical Skills: Must-Have Skills: Proficiency in Data Analytics. Good-to-Have Skills: Experience with Microsoft SQL Server, AWS Redshift. Strong analytical skills to interpret complex data sets. Experience with data visualization tools to present findings effectively. Ability to work with large datasets and perform data cleaning and transformation.
Additional Information: The candidate should have a minimum of 3 years of experience in Data Analytics. This position is based at our Gurugram office. A 15 years full time education is required.
Qualification: 15 years full time education
Posted 1 day ago
8.0 - 13.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Experience in SQL and understanding of ETL best practices. Should have good hands-on experience in ETL/Big Data development. Extensive hands-on experience in Scala. Should have experience in Spark/YARN and troubleshooting Spark, Linux, and Python. Setting up a Hadoop cluster; backup, recovery, and maintenance.
Posted 1 day ago
9.0 - 14.0 years
12 - 16 Lacs
Gurugram
Work from Office
1. AWS design experience to architect and implement AWS solutions. 2. Required proficiency in deploying, managing, and optimizing Amazon EKS. 3. Advanced skills in using Terraform for infrastructure as code, ensuring consistent and repeatable deployments. 4. Experience in implementing security best practices, including developer solutions, IAM roles, policies, and encryption. 5. Proficiency in setting up efficient logging, monitoring, and alerting solutions. 6. Experience in cost management, with the ability to optimize AWS costs and manage budgets effectively. 7. Experience in developing and managing scalable solutions using EKS, Istio, API Gateway, Lambda, and serverless computing.
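For illustration only (not part of the posting): a tiny boto3 sketch of an EKS health check of the sort an architect in this role might automate. The cluster name and region are placeholder assumptions.

```python
# Illustrative sketch: print an EKS cluster's status, Kubernetes version, and endpoint.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")
cluster = eks.describe_cluster(name="example-cluster")["cluster"]
print(cluster["status"], cluster["version"], cluster["endpoint"])
```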
Posted 1 day ago
14.0 - 19.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Strong proficiency in Java (8 or higher) and Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.).
Posted 1 day ago
7.0 - 12.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: SAP Information Lifecycle Management (ILM)
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: BE
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead the application development process. Ensure timely project delivery. Provide guidance and support to team members.
Professional & Technical Skills: Must-Have Skills: Proficiency in SAP Information Lifecycle Management (ILM). Strong understanding of data lifecycle management. Experience in data archiving and retention policies. Knowledge of SAP data management solutions. Hands-on experience in SAP data migration. Experience in SAP data governance.
Additional Information: The candidate should have a minimum of 7.5 years of experience in SAP Information Lifecycle Management (ILM). This position is based at our Hyderabad office. A BE degree is required.
Qualification: BE
Posted 1 day ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
8 years of hands-on experience in AWS, Kubernetes, Prometheus, CloudWatch, Splunk, Datadog, Terraform, scripting (Python/Go), and incident management. Architect and manage enterprise-level databases with 24/7 availability. Lead efforts on optimization, backup, and disaster recovery planning. Design and manage scalable CI/CD pipelines for cloud-native apps. Automate infrastructure using Terraform/CloudFormation. Implement container orchestration using Kubernetes and ECS. Ensure cloud security, compliance, and cost optimization. Monitor performance and implement high-availability setups. Collaborate with dev, QA, and security teams; drive architecture decisions. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up to date with industry trends and advancements, incorporating best practices into our development processes. Bachelor's or Master's degree in Computer Science or related field. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Good to have: TM Vault core banking knowledge. Strong communication and collaboration skills.
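For illustration only (not part of the posting): a boto3 sketch of the kind of CloudWatch alerting setup this role describes. The alarm name, instance ID, and SNS topic ARN are placeholder assumptions.

```python
# Illustrative sketch: alarm when EC2 CPU utilization averages above 80% for 15 minutes.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # evaluate 5-minute averages
    EvaluationPeriods=3,     # three consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:example-alerts"],
)
```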
Posted 1 day ago
3.0 - 8.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Project Role: Operations Engineer
Project Role Description: Support the operations and/or manage delivery for production systems and services based on operational requirements and service agreements.
Must have skills: Kubernetes
Good to have skills: Linux, Ansible on Microsoft Azure
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Operations Engineer, you will support the operations and/or manage delivery for production systems and services based on operational requirements and service agreements. Your day will involve ensuring seamless operations and timely service delivery, contributing to system enhancements, and collaborating with cross-functional teams to meet service level agreements.
Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Ensure smooth operations and timely service delivery. Collaborate with cross-functional teams to enhance system performance. Implement best practices for system maintenance and optimization. Troubleshoot and resolve operational issues efficiently. Contribute to the development and implementation of operational strategies.
Professional & Technical Skills: Must-Have Skills: Proficiency in Kubernetes. Strong understanding of containerization technologies. Experience with cloud platforms like Microsoft Azure. Hands-on experience with Linux operating systems. Knowledge of automation tools like Ansible.
Additional Information: The candidate should have a minimum of 3 years of experience in Kubernetes. This position is based at our Bengaluru office. A 15 years full time education is required.
Qualification: 15 years full time education
Posted 1 day ago
8.0 - 13.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up-to-date with industry trends and advancements, incorporating best practices into our development processes. Should Be a Java Full Stack Developer. Bachelor's or Master's degree in Computer Science or related field. 6+ years of hands-on experience in JAVA FULL STACK - ANGULAR + JAVA SPRING BOOT Proficiency in Spring Boot and other Spring Framework components. Extensive experience in designing and developing RESTful APIs. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills.
Posted 1 day ago
7.0 - 12.0 years
9 - 14 Lacs
Hyderabad
Work from Office
As a Generative AI Platform Support Engineer, you will be responsible for providing technical support for our AI platform, focusing on the integration of cloud infrastructure, deployment, and ongoing maintenance. You will work closely with cross-functional teams to troubleshoot technical issues, implement platform enhancements, monitor system performance, and ensure the platform runs efficiently and effectively. Your role will leverage expertise in AWS Cloud Administration and infrastructure management to support platform operations and ensure optimal system performance. Key Responsibilities: Assess and enhance the AI platform's cloud infrastructure and data pipeline resilience using AWS and cloud-based technologies. Ensure scalability and fault tolerance of AI/ML models within cloud environments. Identify and resolve bottlenecks in model inference and training pipelines, focusing on performance and resource optimization. Optimize cloud resource utilization on AWS for real-time use cases, including AI model deployment. Collaborate with the DevOps team on improving cloud deployment processes and managing AWS infrastructure. Implement automated testing to simulate fault tolerance and ensure high availability. Provide ongoing technical support for users of the Generative AI platform, troubleshooting issues and responding to queries to ensure seamless operations. Monitor cloud platform performance on AWS, identifying and implementing optimization strategies to improve cost efficiency and scalability. Work with AWS cloud services (e.g., EC2, S3, Lambda, VPC) to ensure proper configuration, management, and performance. Document key processes, issues, and solutions for knowledge sharing and future reference. Stay updated with industry trends in Generative AI, cloud technologies, and AWS cloud administration.
Posted 1 day ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Red Hat OpenShift
Good to have skills: Laboratory Information and Execution Systems
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with organizational goals, addressing challenges that arise during the development process, and providing guidance to team members to foster a productive work environment. You will also engage in strategic discussions to enhance application performance and user experience, ensuring that the applications meet the needs of stakeholders effectively.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor project progress and ensure timely delivery of application features.
Professional & Technical Skills: Must-Have Skills: Proficiency in Red Hat OpenShift. Good-to-Have Skills: Experience with Laboratory Information and Execution Systems. Strong understanding of container orchestration and management. Experience with application deployment and scaling in cloud environments. Familiarity with CI/CD pipelines and DevOps practices.
Additional Information: The candidate should have a minimum of 7.5 years of experience in Red Hat OpenShift. This position is based at our Bengaluru office. A 15 years full time education is required.
Qualification: 15 years full time education
Posted 1 day ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Strong proficiency in Java (8 or higher) and the Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.). Develop and maintain robust backend services and RESTful APIs using Java and Spring Boot. Design and implement microservices that are scalable, maintainable, and deployable in AWS. Integrate backend systems with AWS services including but not limited to Lambda, S3, DynamoDB, RDS, SNS/SQS, and CloudFormation. Collaborate with product managers, architects, and other developers to deliver end-to-end features. Participate in code reviews, design discussions, and agile development processes.
Posted 1 day ago
5.0 - 10.0 years
7 - 12 Lacs
Ahmedabad
Work from Office
Build, maintain, document, configure, support, and monitor Envizi production, deployment (CI/CD), and pre-production environments. Implement and maintain security best practices to adhere to strict compliance requirements with ISO 27001, SOC 1 & SOC 2, and GDPR. Monitor, administer, and implement preventative maintenance work daily for the Envizi application. Administration of IIS Web Server environments and Windows Server systems. Identify and implement solutions to improve platform reliability. Implement and maintain monitoring/alerting/logging systems. Ensure scalability and efficiency of cloud infrastructure and systems. Develop and maintain platform solutions and automate infrastructure. Required education: Bachelor's Degree. Preferred education: Bachelor's Degree. Required technical and professional expertise: 5+ years of experience in a similar role - experience in AWS, cloud operation/administration, DevOps, and application support. 3+ years of experience in Windows Server system administration (Windows Server 2019/2022). 3+ years of experience working with any configuration management and infrastructure orchestration tools such as Ansible, CloudFormation, or Terraform. 3+ years in scripting and automation using languages like PowerShell or Python. Strong understanding of operating systems, networking, and systems architecture. Experience working on a globally scalable SaaS platform. Experience providing 24/7 support duties, with incident response experience and a security-focused mindset. Strong problem-solving skills. Preferred technical and professional experience: Experience with any other major cloud (such as Azure, GCS, or OCI). Develop and maintain CI/CD pipelines to automate software deployment and testing (using tools such as Jenkins). Experience with AWS ECS, MSK, and RDS. Programming experience with .NET and/or JavaScript. Experience with Linux servers.
Posted 1 day ago
4.0 - 8.0 years
9 - 13 Lacs
Mysuru
Work from Office
As a Brand Technical Specialist, you'll work closely with clients to develop relationships, understand their needs, earn their trust, and show them how IBM's industry-leading solutions will solve their problems whilst delivering value to their business. We're committed to success. In this role, your achievements will drive your career, team, and clients to thrive. A typical day may involve: Strategic Mainframe Solutions - crafting client strategies for mainframe infrastructure and applications. Comprehensive zStack Solutions - defining and detailing IBM zStack solutions for client enhancement. Effective Client Education - delivering simplified proofs of concept and educating clients. Building Trust for Cloud Deals - building trust for closing complex Cloud technology deals. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be expert in any of the programming languages like Java, .NET, Node.js, Python, Ruby, Angular.js. Preferred technical and professional experience: Experience in distributed/scalable systems. Knowledge of standard tools for optimizing and testing code. Knowledge/experience of the Development/Build/Deploy/Test life cycle.
Posted 1 day ago
4.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
As a Brand Technical Specialist, you'll work closely with clients to develop relationships, understand their needs, earn their trust, and show them how IBM's industry-leading solutions will solve their problems whilst delivering value to their business. We're committed to success. In this role, your achievements will drive your career, team, and clients to thrive. A typical day may involve: Strategic Mainframe Solutions - crafting client strategies for mainframe infrastructure and applications. Comprehensive zStack Solutions - defining and detailing IBM zStack solutions for client enhancement. Effective Client Education - delivering simplified proofs of concept and educating clients. Building Trust for Cloud Deals - building trust for closing complex Cloud technology deals. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be expert in any of the programming languages like Java, .NET, Node.js, Python, Ruby, Angular.js. Preferred technical and professional experience: Experience in distributed/scalable systems. Knowledge of standard tools for optimizing and testing code. Knowledge/experience of the Development/Build/Deploy/Test life cycle.
Posted 1 day ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Urgent opening for AWS Developer - Bangalore. Posted On 01st Feb 2020 12:16 PM. Location: Bangalore. Role/Position: AWS Developer. Experience (required): 2-5 yrs. Description: Our client is a leading Big Data analytics company headquartered in Bangalore. Designation: AWS Developer. Location: Bangalore. Experience: 2-5 yrs. AWS certified with solid 1+ years of experience in a production environment. Candidate Profile: An AWS-certified candidate with 1+ year of experience working in a production environment is the specific ask. Within AWS, the key needs are working knowledge of Kinesis (streaming data), Redshift/RDS (querying), and DynamoDB (NoSQL DB). Send resumes to girish.expertiz@gmail.com
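For illustration only (not part of the posting): a boto3 sketch of the Kinesis streaming-ingest work this role mentions. The stream name, region, and event payload are placeholder assumptions.

```python
# Illustrative sketch: write one record to a Kinesis data stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")
event = {"user_id": "u-42", "action": "click", "ts": "2020-02-01T12:16:00Z"}

kinesis.put_record(
    StreamName="example-clickstream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # records with the same key land on the same shard
)
```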
Posted 1 day ago
3.0 - 5.0 years
8 - 12 Lacs
Karur, Coimbatore
Work from Office
At Mallow Technologies, our DevOps team is dedicated to optimizing infrastructure, automating deployments, and ensuring seamless operations. We are seeking a DevOps Engineer with 3+ years of experience who is highly proficient in cloud platforms, CI/CD pipelines, and infrastructure automation, with strong troubleshooting and problem-solving skills. In this role, you will collaborate with cross-functional teams to enhance system reliability, optimize deployment workflows, and drive continuous improvements in scalability, performance, and cost efficiency, while also playing a key role in architecting robust cloud solutions and mentoring junior engineers to strengthen the team's expertise. Responsibilities: Design, implement, and optimize AWS infrastructure for highly available applications. Deploy and manage AWS ECS (Fargate & EC2) and Kubernetes (EKS, k3s, or self-hosted). Architect and implement serverless solutions using AWS Lambda, API Gateway, DynamoDB, and EventBridge. Develop and manage Infrastructure as Code (IaC) using Terraform and Ansible. Design and manage CI/CD pipelines using GitLab CI/CD, ArgoCD, or AWS CodePipeline. Implement centralized logging, monitoring, and alerting with CloudWatch, OpenTelemetry, Prometheus, and the ELK Stack. Ensure AWS security best practices, including IAM, VPC security, and secret management (SSM Parameter Store, HashiCorp Vault). Optimize AWS cost efficiency and performance tuning for cloud environments. Troubleshoot complex infrastructure, networking, and container issues. Architect cloud-native solutions to enhance performance, cost efficiency, and scalability. Mentor junior DevOps engineers and contribute to team knowledge sharing. Requirements: 3+ years of experience in AWS Cloud and DevOps engineering. Strong hands-on experience with AWS ECS (Fargate), Kubernetes (EKS, self-managed), and Docker. Experience in serverless application architecture using AWS Lambda, API Gateway, DynamoDB, EventBridge, SQS, and SNS. Expertise in Terraform and CloudFormation for infrastructure automation. Advanced CI/CD pipeline development skills using GitLab CI/CD, Jenkins, AWS CodePipeline & CodeBuild. Strong scripting/programming skills in Python or Bash. Deep understanding of AWS networking, VPC, and security best practices. Experience in AWS cost optimization and performance tuning. AWS Certifications (AWS Certified DevOps Engineer, Solutions Architect Associate/Professional) are a plus.
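For illustration only (not part of Mallow's posting): a minimal serverless sketch of the Lambda-plus-DynamoDB pattern described above. The table name, environment variable, and field names are placeholder assumptions.

```python
# Illustrative sketch: a Lambda handler that persists an incoming event to DynamoDB.
import os
import uuid
import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "example-events")
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    item = {
        "pk": str(uuid.uuid4()),                          # synthetic partition key
        "source": event.get("source", "unknown"),          # EventBridge-style fields
        "detail_type": event.get("detail-type", "unknown"),
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": item["pk"]}
```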
Posted 1 day ago
5.0 - 7.0 years
12 - 20 Lacs
Karur, Coimbatore
Work from Office
We are looking for a Senior DevOps Engineer with 5+ years of experience who can take ownership of large-scale cloud infrastructure, design highly resilient and cost-optimized architectures, and guide the evolution of DevOps culture and practices within the organization. As a senior member of the DevOps team, you'll be a key technical leader driving cloud strategy, automation, observability, and security. You'll work closely with stakeholders, developers, QA, and product owners to deliver reliable, scalable, and secure solutions while mentoring and upskilling the next generation of DevOps engineers at Mallow. Responsibilities: Architect, implement, and maintain scalable, secure, and highly available AWS cloud infrastructure across multiple environments. Design and manage CI/CD pipelines to support reliable, repeatable deployments with zero-downtime strategies (blue/green, canary, rolling). Deploy and orchestrate containerized applications using AWS ECS (Fargate/EC2), Kubernetes (EKS/self-hosted), and Docker. Develop and maintain Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, and Ansible to automate infrastructure provisioning and configuration. Implement and enhance monitoring, logging, and alerting systems using CloudWatch, Prometheus, Grafana, ELK Stack, and OpenTelemetry. Optimize infrastructure for performance, scalability, and cost efficiency, regularly reviewing usage and recommending improvements. Ensure infrastructure and application security best practices, including IAM, VPC security, secret management (AWS SSM/Vault), and compliance with internal standards. Troubleshoot complex infrastructure and networking issues, leading root cause analysis and resolution of production incidents. Collaborate with cross-functional teams including backend, frontend, QA, and product to streamline development and deployment workflows. Mentor and support junior and mid-level DevOps engineers, conduct code and infrastructure reviews, and contribute to internal documentation and knowledge sharing. Requirements: 5+ years of hands-on experience in DevOps, Site Reliability Engineering, or Cloud Infrastructure roles. Proven expertise in AWS Cloud Services, including ECS (Fargate/EC2), EKS/Kubernetes, Lambda, API Gateway, S3, DynamoDB, and VPC. Strong experience with containerization and orchestration using Docker and Kubernetes (managed or self-hosted). Deep knowledge of Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible. Proficiency in CI/CD pipeline design and implementation using tools like GitLab CI/CD, Jenkins, AWS CodePipeline, or ArgoCD. Solid scripting skills in Python, Bash, or similar languages for automation and tooling. In-depth understanding of cloud networking, security best practices, IAM policies, and secrets management (SSM, KMS, Vault). Experience implementing observability solutions (monitoring, logging, alerting) with tools like CloudWatch, ELK Stack, Prometheus, Grafana, or OpenTelemetry. Demonstrated ability to troubleshoot and resolve complex infrastructure, deployment, and networking issues. Strong communication and collaboration skills; able to work closely with developers, QA, and leadership. AWS Certification (e.g., DevOps Engineer – Professional, Solutions Architect – Professional) is a strong plus.
Posted 1 day ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Work from Office
Urgent requirement for Big Data. Notice Period: Immediate. Location: Hyderabad/Pune. Employment Type: C2H. Primary Skills: 6-8 years of experience working as a big data developer/supporting environments. Strong knowledge in Unix/Big Data scripting. Strong understanding of the Big Data (CDP/Hive) environment. Hands-on with GitHub and CI/CD implementations. Willingness to learn and understand the reasoning behind every task. Ability to work independently on specialized assignments within the context of project deliverables. Take ownership of providing solutions and tools that iteratively increase engineering efficiencies. Excellent communication skills and a team player. Good to have: Hadoop and Control-M tooling knowledge. Good to have: automation experience and knowledge of any monitoring tools. Role: You will work with the team handling an application developed using Hadoop/CDP and Hive. You will work within the Data Engineering team and with the Lead Hadoop Data Engineer and Product Owner. You are expected to support the existing application as well as design and build new data pipelines. You are expected to support evergreening or upgrade activities of CDP/SAS/Hive. You are expected to participate in the service management of the application. Support issue resolution and improve processing performance to avoid issues reoccurring. Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery. Support the application in UK shifts as well as provide on-call support overnight/weekends (this is mandatory).
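For illustration only (not part of the posting): a minimal PySpark sketch of a Hive-backed batch job of the kind maintained in the CDP/Hive environment described above. The database and table names are placeholder assumptions.

```python
# Illustrative sketch: aggregate a Hive table and write the result back to Hive.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-orders-aggregate")
    .enableHiveSupport()  # read/write Hive tables via the metastore
    .getOrCreate()
)

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders
    FROM raw_db.orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").saveAsTable("curated_db.daily_orders")
spark.stop()
```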
Posted 1 day ago
9.0 - 12.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Roles and Responsibilities: Design, implement, and maintain scalable and secure infrastructure architectures. Collaborate with development teams to ensure smooth application deployment. Develop and enforce best practices for coding, testing, and deployment. Troubleshoot and resolve complex technical issues efficiently. Implement continuous integration and delivery pipelines. Ensure compliance with industry standards and security protocols. Job Requirements: Strong understanding of cloud computing platforms and containerization. Experience with agile methodologies and version control systems. Proficiency in scripting languages such as Python or Java. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a fast-paced environment. Strong knowledge of DevOps tools and technologies.
Posted 1 day ago