7 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description

Requirements:
- Experience: 7+ years
- Extensive experience with the Azure cloud platform
- Good experience maintaining cost-efficient, scalable cloud environments for the organization, including best practices for monitoring and cloud governance
- Experience with CI tools such as Jenkins, and with building end-to-end CI/CD pipelines for projects
- Experience with build tools such as Maven, Ant, or Gradle
- Rich experience with container frameworks such as Docker and Kubernetes, or with cloud-native container services
- Good experience with Infrastructure as Code (IaC) using tools such as Terraform
- Good experience with at least one of the following configuration management tools: Ansible, Chef, SaltStack, Puppet
- Good experience with monitoring tools such as Prometheus and Grafana, Nagios, Datadog, or Zabbix, and with logging tools such as Splunk or Logstash
- Good experience in scripting and automation using languages such as Bash/Shell, Python, PowerShell, Groovy, or Perl
- Ability to configure and manage data sources such as MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, PostgreSQL, Neo4j, etc.
- Good experience managing version control tools such as Git, SVN, or Bitbucket
- Good problem-solving ability; strong written and verbal communication skills

Responsibilities:
- Understand the client's business use cases and technical requirements, and convert them into a technical design that elegantly meets the requirements.
- Map decisions to requirements and translate them for developers.
- Identify different solutions and narrow down the option that best meets the client's requirements.
- Define guidelines and benchmarks for NFR considerations during project implementation.
- Write and review design documents explaining the overall architecture, framework, and high-level design of the application for developers.
- Review architecture and design on aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs, and ensure that all relevant best practices are followed.
- Develop and design the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to materialize it.
- Understand technology integration scenarios and apply these learnings in projects.
- Resolve issues raised during code review through exhaustive, systematic root-cause analysis, and be able to justify the decisions taken.
- Carry out POCs to make sure the suggested design and technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
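The end-to-end CI/CD pipeline experience this role asks for boils down to stage gating: each stage runs only if the previous one succeeded. A minimal, illustrative Python sketch of that logic (the stage names and stage bodies below are hypothetical placeholders, not part of the posting):

```python
# Toy sketch of CI/CD stage gating: run stages in order, stop at the
# first failure. Stage names and bodies are illustrative placeholders.

def run_pipeline(stages):
    """Run (name, fn) stages in order; return [(name, status), ...].

    A stage "passes" if its function returns without raising. Later
    stages are skipped after the first failure, mirroring how a CI
    pipeline (e.g. a Jenkins declarative pipeline) aborts the rest
    of its stages when one fails.
    """
    results = []
    for name, fn in stages:
        try:
            fn()
            results.append((name, "ok"))
        except Exception:
            results.append((name, "failed"))
            break  # gate: do not run later stages
    return results

def failing_tests():
    raise RuntimeError("tests failed")

if __name__ == "__main__":
    report = run_pipeline([("build", lambda: None),
                           ("test", failing_tests),
                           ("deploy", lambda: None)])
    print(report)  # "deploy" never runs
```

In a real Jenkins setup the same gating is expressed declaratively in a Jenkinsfile rather than hand-rolled like this; the sketch only shows the control flow being described.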
Posted 1 month ago
0 years
0 Lacs
Patna, Bihar, India
Remote
Experience: 4.00+ years
Salary: INR 2100000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Adfolks LLC, a ZainTECH company)
(*Note: This is a requirement for one of Uplers' clients - Adfolks LLC, a ZainTECH company)

What do you need for this opportunity?
Must-have skills: ELK Stack, Grafana, OpenShift, Prometheus, Rancher, DevOps, Terraform, AWS, Azure, Kubernetes, Linux

Adfolks LLC, a ZainTECH company, is looking for:
Adfolks is seeking a Cloud Engineer who can join immediately to work on a high-visibility, technically interesting project.
Location: Remote

About Our Company
Adfolks LLC is a Dubai-based technology services company, in business for seven years now, with key focus areas in Data Science & Engineering, Cloud Services, Application Modernization, and Cyber Security in the Middle East region. We are an AWS Advanced Consulting Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner, and we are the only KCSP (Kubernetes Certified Service Provider) in the region. Visit our website https://adfolks.com/ to learn more.

Job Description
Experience: 4+ years

Summary
We are looking for a passionate, innovative professional to join our cloud services team. You'll work in a collaborative and inclusive environment that values diverse perspectives and continuous learning, and that provides industry-leading benefits with unmatched opportunities for career growth. Key accountabilities include the development and maintenance of cloud platforms, services, and components to enable safe, enterprise-wide use of common cloud functionality.

Requirements
- Bachelor's degree in Computer Science, a related engineering field, or equivalent experience
- 4+ years of experience in public cloud infrastructure, especially Azure and AWS
- Good understanding of cloud infrastructure and the different deployment models
- Familiarity with cloud networking and security solutions such as load balancers, firewalls, WAF, CSPM, and security groups
- Good understanding of identity and access management solutions such as Active Directory, Azure AD, conditional access, IAM, and other vendor-specific solutions
- Good understanding of Linux- and Windows-based systems
- Understanding of SQL and NoSQL databases, including IaaS and PaaS models
- Experience in policy management, governance, monitoring, and alerting
- Knowledge of microservices, DevOps, and IaC (Terraform and Ansible)
- Azure AZ-104 or an AWS administrator certification would be an advantage
- Excellent communication and interpersonal skills

Job Responsibilities
- Assist application teams in deploying solutions in the cloud environment.
- Maintain infrastructure security and governance per the client's requirements and standards.
- Support other team members (database, network, security, etc.) in configuring and maintaining their respective solutions.
- Actively participate in discussions on new solution implementations, design creation, and all other topics related to cloud infrastructure.
- POC deployment, documentation, and technical presentations.

Linux Hosting and Administration
- Install, configure, and maintain Linux servers, ensuring optimal performance and security.
- Handle Linux-based hosting solutions, including web servers, databases, and other services.
- Apply patches and updates to Linux servers as required, and automate routine tasks.
- Monitor system performance, troubleshoot issues, and conduct root-cause analysis for any server downtime.

Kubernetes Operations
- Deploy, manage, and maintain containerized applications using Kubernetes.
- Create and manage Kubernetes manifests, Helm charts, and operators for complex application architectures.
- Scale applications based on resource utilization and requirements.
- Monitor the health and performance of Kubernetes clusters and take corrective action as needed.

DevOps Integration
- Implement and maintain CI/CD pipelines for automated testing and deployments.
- Assist in incorporating containerization and orchestration into the DevOps process.

Rancher/OpenShift Expertise (Nice to Have)
- Experience in deploying and managing Kubernetes clusters using Rancher or OpenShift.
- Implement monitoring, logging, and auto-scaling solutions in Rancher or OpenShift environments.

Application Support
- Gain a thorough understanding of the applications running within containers in order to provide first-level application support.
- Collaborate with development teams to debug application issues in staging and production environments.

Azure Infrastructure
- Deploy and manage resources on Azure, including but not limited to VMs, databases, and Kubernetes clusters.
- Implement Infrastructure as Code practices using Azure Resource Manager (ARM) templates or Terraform.

Monitoring and Alerting Using Open-Source Tools (any one of the following)
- ELK Stack: Implement and manage the ELK (Elasticsearch, Logstash, Kibana) stack for real-time log aggregation, monitoring, and analysis. Customize Kibana dashboards for different system metrics and logs to aid quick issue resolution.
- Grafana: Develop and maintain Grafana dashboards to visualize key performance indicators and system metrics. Integrate Grafana with other data sources and monitoring tools for comprehensive analytics.
- Loki: Set up and manage Loki for aggregating and storing logs. Integrate Loki with Grafana for unified querying and visualization of metrics and logs.
- Prometheus: Deploy and configure Prometheus for monitoring system and application metrics. Create custom Prometheus queries and alerts to catch anomalies and system performance issues.
- Mimir/Cortex (preferable): Implement Mimir or Cortex for enhanced long-term storage and scalability of Prometheus metrics.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one. Depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
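The custom Prometheus alerts mentioned in the posting's monitoring section typically fire only when a metric stays past its threshold for a sustained window (the `for:` clause in an alerting rule), so that one noisy sample does not page anyone. A small, illustrative Python sketch of that evaluation logic; the threshold and sample values are invented for demonstration:

```python
# Illustrative sketch of Prometheus-style "for" alert semantics: the
# alert fires only when consecutive samples breach the threshold for
# long enough. All numbers below are made up for demonstration.

def should_alert(samples, threshold, for_count):
    """Return True if `samples` exceeds `threshold` for at least
    `for_count` consecutive observations (akin to an alerting rule
    with `expr: metric > threshold` and `for: N` evaluation periods)."""
    consecutive = 0
    for value in samples:
        if value > threshold:
            consecutive += 1
            if consecutive >= for_count:
                return True
        else:
            consecutive = 0  # breach streak broken; reset the counter
    return False

if __name__ == "__main__":
    cpu_percent = [42, 61, 93, 95, 97]               # sustained breach at the end
    print(should_alert(cpu_percent, 90, 3))          # fires
    print(should_alert([93, 40, 95, 40, 97], 90, 3)) # flapping, never sustained
```

In a real deployment this logic lives in Prometheus itself (an alerting rule evaluated by the server and routed through Alertmanager); the sketch only makes the sustained-breach semantics concrete.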
Posted 1 month ago
0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Posted 1 month ago
0 years
0 Lacs
Surat, Gujarat, India
Remote
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience : 4.00 + years Salary : INR 2100000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Adfolks LLC- A ZainTECH Company) (*Note: This is a requirement for one of Uplers' client - Adfolks LLC- A ZainTECH Company) What do you need for this opportunity? Must have skills required: Elk Stack, Grafana, OpenShift, Prometheus, Rancher, DevOps, Terraform, AWS, Azure, Kubernetes, Linux Adfolks LLC- A ZainTECH Company is Looking for: Adfolks is seeking Cloud Engineer who can join immediately to work in a high visibility, technically interesting project. Location: Remote About Our Company Adfolks LLC which is a Dubai based technology services company for 7 years now with key focus areas in DataScience & Engineering, Cloud Services, Application Modernization, and Cyber Security in the Middle East region. We are an Advanced Consulting Partner with AWS, Microsoft Azure Gold Partner and Google Cloud Partner and, we are the only KCSP [Kubernetes Certified Service Provider] in the region. Visit our website https://adfolks.com/ to know more. Job Description Experience: 4+ Years Summary We are looking for a passionate, innovate professional to join our cloud services team. You’ll work in a collaborative and inclusive environment that values diverse perspectives and continuous learning and provides industry-leading benefits with unmatched opportunities for career growth. Key accountabilities include development and maintenance of cloud platforms, services and components to enable safe enterprise-wide use of cloud common functionality. Requirements Bachelor’s degree in Computer Science, related Engineering field, or equivalent experience 4+ years of experience in public cloud infrastructure, especially Azure and AWS. 
- Good understanding of cloud infrastructure and different deployment models
- Familiarity with cloud networking and security solutions such as load balancers, firewalls, WAF, CSPM, and security groups
- Good understanding of identity and access management solutions such as Active Directory, Azure AD, conditional access, IAM, and other vendor-specific solutions
- Good understanding of Linux- and Windows-based systems
- Understanding of SQL and NoSQL databases, including IaaS and PaaS models
- Experience in policy management, governance, monitoring, and alerting
- Knowledge of microservices, DevOps, and IaC (Terraform and Ansible)
- Azure AZ-104 or AWS administrator certification would be an advantage
- Excellent communication and interpersonal skills
Job responsibilities
- Assist application teams in deploying various solutions in the cloud environment.
- Maintain infrastructure security and governance per client requirements and standards.
- Support other team members (database, network, security, etc.) in configuring and maintaining their respective solutions.
- Actively participate in discussions on new solution implementations, design creation, and all other topics related to cloud infrastructure.
- POC deployment, documentation, and technical presentations.
Linux Hosting and Administration
- Install, configure, and maintain Linux servers, ensuring optimal performance and security.
- Handle Linux-based hosting solutions, including web servers, databases, and other services.
- Apply patches and updates to Linux servers as required, and automate routine tasks.
- Monitor system performance, troubleshoot issues, and conduct root cause analysis for any server downtime.
Kubernetes Operations
- Deploy, manage, and maintain containerized applications using Kubernetes.
- Create and manage Kubernetes manifests, Helm charts, and operators for complex application architectures.
- Scale applications based on resource utilization and requirements.
- Monitor the health and performance of Kubernetes clusters and take corrective actions as needed.
DevOps Integration
- Implement and maintain CI/CD pipelines for automated testing and deployments.
- Assist in incorporating containerization and orchestration into the DevOps process.
Rancher/OpenShift Expertise (Nice to Have)
- Experience in deploying and managing Kubernetes clusters using Rancher or OpenShift.
- Implement monitoring, logging, and auto-scaling solutions in Rancher or OpenShift environments.
Application Support
- Gain a thorough understanding of the applications running within containers to provide first-level application support.
- Collaborate with development teams to debug application issues in staging and production environments.
Azure Infrastructure
- Deploy and manage resources on Azure, including but not limited to VMs, databases, and Kubernetes clusters.
- Implement Infrastructure as Code practices using Azure Resource Manager (ARM) templates or Terraform.
Monitoring and Alerting Using Open-Source Tools (any one of the following)
ELK Stack
- Implement and manage the ELK (Elasticsearch, Logstash, Kibana) stack for real-time log aggregation, monitoring, and analysis.
- Customize Kibana dashboards for different system metrics and logs to aid in quick issue resolution.
Grafana
- Develop and maintain Grafana dashboards to visualize key performance indicators and system metrics.
- Integrate Grafana with other data sources and monitoring tools for comprehensive analytics.
Loki
- Set up and manage Loki for aggregating and storing logs.
- Integrate Loki with Grafana for unified querying and visualization of metrics and logs.
Prometheus
- Deploy and configure Prometheus for monitoring system and application metrics.
- Create custom Prometheus queries and alerts to catch anomalies and system performance issues.
Mimir/Cortex (preferable)
- Implement Mimir or Cortex for enhanced long-term storage and scalability of Prometheus metrics.
How to apply for this opportunity?
Posted 1 month ago
0 years
0 Lacs
Thane, Maharashtra, India
Remote
Posted 1 month ago
0 years
0 Lacs
Kanpur, Uttar Pradesh, India
Remote
Posted 1 month ago
21 - 31 years
50 - 70 Lacs
Bengaluru
Work from Office
What we're looking for
As a member of the infrastructure team at SurveyMonkey, you will have a direct impact in designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform. You will apply best practices to deployment processes and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies, and offering mentorship to other engineers.
What you'll be working on
- Architect, build, and operate AWS environments at scale with well-established industry best practices.
- Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery.
Technical Leadership & Mentorship
- Mentor and guide senior engineers to build technical expertise and drive a culture of excellence in software development.
- Foster collaboration within the engineering team, ensuring the adoption of best practices in coding, testing, and deployment.
- Review code and provide constructive feedback to ensure code quality and adherence to architectural principles.
Collaboration & Cross-Functional Leadership
- Collaborate with cross-functional teams (Product, Security, and other Engineering teams) to drive the roadmap and ensure alignment with business objectives.
- Provide technical leadership in meetings and discussions, influencing key decisions on architecture, design, and implementation.
Innovation & Continuous Improvement
- Propose, evaluate, and integrate new tools and technologies to improve the performance, security, and scalability of the cloud platform.
- Drive initiatives for optimizing cloud resource usage and reducing operational costs without compromising performance.
- Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems.
- Participate in on-call rotation.
- Support and partner with other teams on improving our observability systems to monitor site stability and performance.
We'd love to hear from people with:
- 12+ years of relevant professional experience with cloud platforms such as AWS and Heroku.
- Extensive experience leading design sessions and evolving well-architected environments in AWS at scale.
- Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm.
- Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, and ELK (Elasticsearch/Logstash/Kibana).
- Experience with metrics and logging libraries and aggregators, and data analysis and visualization tools, specifically Splunk and OTel.
- Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling.
- Experience with GitOps and tools like Argo CD/Flux CD.
- Interest in instrumentation and optimization of Kubernetes clusters.
- Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms.
- Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment.
- Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium.
- Preferably, experience with secrets management, for example HashiCorp Vault.
- Preferably, experience in an agile environment and JIRA.
SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
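The "libraries and APIs that provide a simple, unified interface" to monitoring systems described in this posting can be sketched in miniature. The sketch below is a hypothetical, in-process example (the `timed` decorator and `METRICS` store are illustrative names, not SurveyMonkey's actual tooling); a real library would ship the recorded samples to Splunk, Datadog, or an OpenTelemetry collector instead of a dict.

```python
import time
from collections import defaultdict
from functools import wraps

# Hypothetical in-process metrics store; a real system would export
# these samples to an external observability backend.
METRICS = defaultdict(list)

def timed(metric_name):
    """Decorator that records each call's duration under metric_name."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[metric_name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@timed("checkout.latency_seconds")
def checkout(order_id):
    # Stand-in for real business logic being instrumented.
    return f"processed {order_id}"

checkout(42)
print(len(METRICS["checkout.latency_seconds"]))  # one sample recorded
```

The point of such a wrapper is that application developers add one decorator line and never touch the metrics backend directly, which is what a "unified interface" buys.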
Posted 1 month ago
21 - 31 years
35 - 42 Lacs
Bengaluru
Work from Office
What we're looking for
As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact in designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform. You will apply best practices to deployment processes and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies, and offering mentorship to other engineers.
What you'll be working on
- Architect, build, and operate AWS environments at scale with well-established industry best practices.
- Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery.
- Support and maintain AWS services, such as EKS, and Heroku.
- Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems.
- Support and partner with other teams on improving our observability systems to monitor site stability and performance.
- Work closely with developers in supporting new features and services.
- Work in a highly collaborative team environment.
- Participate in on-call rotation.
We'd love to hear from people with:
- 8+ years of relevant professional experience with cloud platforms such as AWS and Heroku.
- Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm.
- Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, and ELK (Elasticsearch/Logstash/Kibana).
- Experience with metrics and logging libraries and aggregators, and data analysis and visualization tools, specifically Splunk and OTel.
- Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling.
- Experience with GitOps and tools like Argo CD/Flux CD.
- Interest in instrumentation and optimization of Kubernetes clusters.
- Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms.
- Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment.
- Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium.
- Preferably, experience with secrets management, for example HashiCorp Vault.
- Preferably, experience in an agile environment and JIRA.
SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
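Both SurveyMonkey postings emphasize metrics aggregation and data analysis for site stability. A core operation behind latency dashboards (in Grafana, Datadog, and similar tools) is computing percentiles over recorded samples; the sketch below uses the nearest-rank method and illustrative data, not any vendor's actual implementation.

```python
def percentile(samples, pct):
    """Nearest-rank percentile: smallest value >= pct% of the samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # ceil(pct/100 * n) as an integer rank, clamped to at least 1
    rank = max(1, -(-pct * len(ordered) // 100))
    return ordered[int(rank) - 1]

# Illustrative request latencies in milliseconds, with one slow outlier.
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 17]
print(percentile(latencies_ms, 50))  # 14  - the typical request
print(percentile(latencies_ms, 95))  # 240 - tail latency exposes the outlier
```

This is why observability practice favors p95/p99 over averages: the median above looks healthy while the tail reveals the 240 ms outlier that users actually feel.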
Posted 1 month ago
3 - 8 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Job Description: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you will be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl's clients. This role covers efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.
Key Responsibilities:
- Configure Logstash to receive, filter, and transform logs from diverse sources (e.g., servers, applications, AppDynamics, storage, databases, and so on) before sending them to Elasticsearch.
- Configure ILM policies, index templates, etc.
- Develop Logstash configuration files to parse, enrich, and filter log data from various input sources (e.g., APM tools, databases, storage, and so on).
- Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures.
- Ensure efficient and reliable data ingestion by optimizing Logstash performance, handling high data volumes, and managing throughput.
- Utilize Kibana to create visually appealing dashboards, reports, and custom visualizations.
- Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions.
- Establish correlations within the data and develop visualizations to detect the root cause of issues.
- Integrate with ticketing tools such as ServiceNow.
- Hands-on experience with ML and Watcher functionality.
- Monitor Elasticsearch clusters for health, performance, and resource utilization.
- Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.
Education, Experience, and Certification Requirements:
- BS or MS degree in Computer Science or a related technical field
- 5+ years of overall IT industry experience
- 3+ years of development experience with Elasticsearch, Logstash, and Kibana in designing, building, and maintaining log and data processing systems
- 3+ years of Python or Java development experience
- 4+ years of SQL experience (NoSQL experience is a plus)
- 4+ years of experience with schema design and dimensional data modelling
- Experience working with machine learning models is a plus
- Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus
- An "Elastic Certified Engineer" certification is preferable
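The grok patterns mentioned in the responsibilities above are essentially named regular expressions: a pattern like `%{IP:client} %{WORD:method}` compiles to named capture groups that turn an unstructured log line into fields. The Python sketch below shows the same idea with the standard `re` module; the access-log format and field names are illustrative, not a real Logstash pipeline.

```python
import re

# Equivalent in spirit to the grok pattern
# "%{IP:client} %{WORD:method} %{URIPATH:path} %{NUMBER:status}":
# each named group becomes a field in the structured event.
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<method>[A-Z]+)\s+"
    r"(?P<path>/\S*)\s+"
    r"(?P<status>\d{3})"
)

def parse_log_line(line):
    """Return a dict of named fields, or None if the line does not match."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

event = parse_log_line("10.0.0.7 GET /api/orders 200")
print(event)  # {'client': '10.0.0.7', 'method': 'GET', 'path': '/api/orders', 'status': '200'}
```

In a real pipeline, Logstash would attach these extracted fields to the event before indexing it into Elasticsearch, which is what makes them filterable in Kibana.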
Posted 1 month ago
6 - 11 years
3 - 7 Lacs
Bengaluru
Work from Office
About The Role
We are looking for a skilled Elasticsearch Developer to design, develop, and optimize search solutions using Elasticsearch. The ideal candidate will have strong experience in managing Elasticsearch clusters, implementing search functionalities, and integrating Elasticsearch with various applications.
Key Responsibilities:
- Design, implement, and maintain Elasticsearch clusters to support large-scale search applications.
- Develop, optimize, and maintain custom search queries, aggregations, and indexing strategies.
- Work with data pipelines, including ingestion, transformation, and storage of structured and unstructured data.
- Integrate Elasticsearch with web applications, APIs, and other data storage systems.
- Implement scalability, performance tuning, and security best practices for Elasticsearch clusters.
- Troubleshoot search performance issues and enhance the relevance and efficiency of search results.
- Work with Kibana, Logstash, and Beats for visualization and data analysis.
- Collaborate with developers, data engineers, and DevOps teams to deploy and maintain search infrastructure.
- Stay updated on the latest Elasticsearch features, plugins, and best practices.
Primary Skills
- Strong experience with Elasticsearch (versions 7.x/8.x) and related tools (Kibana, Logstash, Beats).
- Proficiency in writing complex Elasticsearch queries, aggregations, and analyzers.
- Experience with full-text search, relevance tuning, and ranking algorithms.
- Knowledge of indexing, mapping, and schema design for optimal search performance.
- Proficiency in Python, Java, or Node.js for developing search applications.
- Experience with RESTful APIs and integrating Elasticsearch with various platforms.
- Familiarity with distributed systems, clustering, and high-availability configurations.
- Hands-on experience with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP) is a plus.
- Strong problem-solving skills and ability to troubleshoot performance bottlenecks.
Preferred Qualifications:
- Experience with machine learning-based search ranking and recommendation systems.
- Knowledge of vector search and Elasticsearch's kNN capabilities.
- Understanding of security best practices, including authentication and role-based access.
- Familiarity with log analytics and monitoring tools.
Education: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
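The "complex Elasticsearch queries" this role calls for are JSON documents built with the Query DSL. A common shape is a bool query combining a scored full-text clause with non-scoring filters; the sketch below assembles one in Python (the index fields `title`, `status`, and `price` are made-up examples, and the body would be sent to a cluster via a client library rather than printed).

```python
import json

def build_search_query(text, status="published", max_price=None):
    """Assemble an Elasticsearch bool query: a scored full-text match
    plus exact-value filters that narrow results without affecting score."""
    query = {
        "query": {
            "bool": {
                "must": [{"match": {"title": text}}],      # scored, analyzed
                "filter": [{"term": {"status": status}}],  # exact, cacheable
            }
        }
    }
    if max_price is not None:
        query["query"]["bool"]["filter"].append(
            {"range": {"price": {"lte": max_price}}}
        )
    return query

body = build_search_query("wireless headphones", max_price=100)
print(json.dumps(body, indent=2))
```

Keeping exact constraints in `filter` rather than `must` matters for performance: filter clauses skip relevance scoring and can be cached by the cluster.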
Posted 1 month ago
4 - 9 years
7 - 11 Lacs
Hyderabad
Work from Office
Primary Skills
1. Java (8/11/17+): Strong expertise in Core Java, multithreading, collections, and functional programming.
2. Spring Boot: Hands-on experience with Spring Boot for developing RESTful microservices.
3. Microservices Architecture: Understanding of microservices design patterns, inter-service communication, and distributed systems.
4. Google Cloud Platform (GCP): Experience with Google Kubernetes Engine (GKE) for deploying and managing containerized applications, Cloud Run for running containerized applications in a serverless environment, Cloud Functions for serverless function execution, Cloud Pub/Sub for event-driven communication, and Firestore / Cloud SQL for working with NoSQL and relational databases on GCP.
5. Containers & Docker: Experience in containerizing applications using Docker and managing images.
6. Kubernetes (GKE preferred): Strong knowledge of Pods, Deployments, Services, ConfigMaps, Secrets, and Helm charts for Kubernetes resource management.
7. RESTful APIs: Experience in designing, building, and consuming REST APIs with security best practices.
8. CI/CD Pipelines: Hands-on experience with Jenkins, GitHub Actions, GitLab CI/CD, or Google Cloud Build for automated testing and deployment of microservices.
9. Cloud Networking: Understanding of VPCs, load balancers, and service mesh (Istio).
10. SQL & NoSQL Databases: Experience with PostgreSQL, MySQL, Firestore, or MongoDB.
11. Logging & Monitoring: Familiarity with Google Cloud Logging (Stackdriver), Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
Secondary Skills
- Infrastructure as Code (IaC): Terraform for GCP infrastructure automation.
- Event-Driven Architecture: Working knowledge of Kafka, Pub/Sub, or RabbitMQ.
- Security Best Practices: Authentication/authorization using OAuth2, JWT, and IAM roles.
- Testing Frameworks: JUnit, Mockito, and integration testing for microservices.
- GraphQL: Exposure to GraphQL API development.
- Agile Methodologies: Experience working in Agile/Scrum teams.
- Performance Tuning: Experience optimizing application performance and memory management.
- Multi-Cloud Exposure: Knowledge of AWS or Azure is a plus.
- DevSecOps: Exposure to security scanning tools like Snyk and SonarQube, and OWASP best practices.
- API Management: Experience with API gateways like Apigee or Kong is beneficial.
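The event-driven communication this posting names (Cloud Pub/Sub, Kafka, RabbitMQ) all rests on the same publish/subscribe pattern: producers and consumers share only a topic name, never a direct call. Below is a deliberately toy, in-memory Python sketch of that pattern; the class and topic names are illustrative, and a real broker additionally provides durability, ordering, and retry/ack semantics.

```python
from collections import defaultdict

class InMemoryBroker:
    """Toy publish/subscribe broker illustrating the decoupling that
    Kafka, Cloud Pub/Sub, or RabbitMQ provide at scale."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic;
        # the publisher never knows who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(message)

broker = InMemoryBroker()
received = []
broker.subscribe("orders.created", received.append)
broker.publish("orders.created", {"order_id": 42})
print(received)  # [{'order_id': 42}]
```

The design payoff is that new consumers (billing, notifications, analytics) can subscribe to `orders.created` later without any change to the publishing service.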
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
About this opportunity: The A&AI (SL IT & ADM) team is currently seeking a versatile and motivated DevOps Engineer (with expertise in Kubernetes and cloud infrastructure) to join the AI/ML team. This role will be pivotal in managing multiple platforms and systems, focusing on Kubernetes, ELK/OpenSearch, and various DevOps tools to ensure seamless data flow for our machine learning and data science initiatives. The ideal candidate should have a strong foundation in Python programming; experience with Elasticsearch, Logstash, and Kibana (ELK); proficiency in MLOps; and expertise in machine learning model development and deployment. Additionally, familiarity with basic Spark concepts and visualization tools like Grafana and Kibana is desirable.

What you will do:
- Design and implement robust AI/ML infrastructure using cloud services and Kubernetes to support machine learning operations (MLOps) and data processing workflows.
- Deploy, manage, and optimize Kubernetes clusters specifically tailored for AI/ML workloads, ensuring optimal resource allocation and scalability across different network configurations.
- Develop and maintain CI/CD pipelines tailored for continuous training and deployment of machine learning models, integrating tools like Kubeflow, MLflow, ArgoFlow, or TensorFlow Extended (TFX).
- Collaborate with data scientists to oversee the deployment of machine learning models and set up monitoring systems to track their performance and health in production.
- Design and implement data pipelines for large-scale data ingestion, processing, and analytics essential for machine learning models, utilizing distributed storage and processing technologies such as Hadoop, Spark, and Kafka.

The skills you bring:
- Extensive experience with Kubernetes and cloud services (AWS, Azure, GCP, private cloud) with a focus on deploying and managing AI/ML environments.
- Strong proficiency in scripting and automation using languages like Python, Bash, or Perl.
- Experience with AI/ML tools and frameworks (TensorFlow, PyTorch, Scikit-learn) and MLOps tools (Kubeflow, MLflow, TFX).
- In-depth knowledge of data pipeline and workflow management tools, distributed data processing (Hadoop, Spark), and messaging systems (Kafka, RabbitMQ).
- Expertise in implementing CI/CD pipelines, infrastructure as code (IaC), and configuration management tools.
- Familiarity with security standards and data protection regulations relevant to AI/ML projects.
- Proven ability to design and maintain reliable and scalable infrastructure tailored for AI/ML workloads.
- Excellent analytical, problem-solving, and communication skills.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766746
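The posting above asks for monitoring systems that track model performance and health in production. Prometheus is a common choice for this; as a rough illustration of what a Prometheus scrape actually collects, the hand-rolled Python sketch below renders a gauge in the text exposition format. Real services should use the official prometheus_client library; the metric name and labels here are invented.

```python
def prometheus_exposition(name, help_text, samples):
    """Render gauge samples in the Prometheus text exposition format.

    `samples` maps label sets (tuples of (key, value) pairs) to floats.
    This only illustrates the wire format a Prometheus server scrapes;
    it is not a substitute for the official client library.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    text = prometheus_exposition(
        "model_inference_latency_seconds",          # invented metric name
        "p95 inference latency per model",
        {(("model", "churn"), ("version", "v3")): 0.042},
    )
    print(text)
```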
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming; hands-on experience with Elasticsearch, Logstash, and Kibana (ELK); a solid grasp of fundamental Spark concepts; and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Generative AI & LLM development; 12-15 years of experience as an Enterprise Software Architect with strong hands-on experience
- Strong hands-on experience in Python and in microservice architecture concepts and development
- Expertise in crafting technical guides and architecture designs for an AI platform
- Experience in the Elastic Stack, Cassandra, or any big data tool
- Experience with advanced distributed systems and tooling, for example Prometheus, Terraform, Kubernetes, Helm, Vault, and CI/CD systems
- Prior experience building multiple AI/ML-based models, deploying them into production environments, and creating the data pipelines
- Experience in guiding teams working on AI, ML, big data, and analytics
- Strong understanding of development practices like architecture design, coding, testing, and verification
- Experience with delivering software products, for example release management and documentation

What you will bring:
- Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
- Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
- Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
- ELK Integration: Utilize Elasticsearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
- Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
- Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: India (IN) || Bangalore
Req ID: 766747
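The ELK integration duties in the posting above usually go through Elasticsearch's _bulk API. The sketch below (pure Python, with an invented index name) only assembles the NDJSON request body that clients such as Logstash or elasticsearch-py send; it does not perform the HTTP call.

```python
import json

def bulk_index_payload(index: str, docs: list) -> str:
    """Build an Elasticsearch _bulk request body (NDJSON).

    Each document becomes an action line followed by a source line, and
    the body must end with a newline. A real pipeline would POST this to
    /_bulk or use the elasticsearch-py bulk helpers; here we only
    assemble the payload to show the format.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # source line
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    payload = bulk_index_payload("app-logs", [{"level": "ERROR", "msg": "timeout"}])
    print(payload)
```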
Posted 1 month ago
0 - 6 years
0 Lacs
Mumbai, Maharashtra
Work from Office
Work location: Mumbai
Interview location: Pune
Interview date: 15th Feb 25

L2: 4 to 6 years of experience

Job Description:
- Must have hands-on experience working on Elasticsearch, Logstash, Kibana, Prometheus, and the Grafana monitoring system.
- Experience with installation, upgrade, and management of ELK, Prometheus, and Grafana systems.
- Hands-on experience with ELK, Prometheus, and Grafana administration, configuration, performance tuning, and troubleshooting.
- Knowledge of various clustering topologies like redundant assignments, active-passive setups, etc., and two or more cloud platforms (e.g. AWS EC2 & Azure) for deploying the clusters.
- Experience with Logstash pipeline design and management, and with search index optimization and tuning.
- Implement security measures and ensure compliance with security policies and procedures such as the CIS benchmarks.
- Collaborate with other teams to ensure seamless integration of the environment with other systems.
- Create and maintain documentation related to the environment.

Key Skills:
- Certified in a monitoring system like ELK.
- Certified in RHCSA/RHCE.
- Experience on the Linux platform.
- Must have knowledge of monitoring tools such as Prometheus, Grafana, the ELK stack, ManageEngine, or any APM tool.

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Job Location: Mumbai
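Logstash pipeline design, mentioned in the posting above, is largely about turning raw log lines into structured events. Below is a hedged stand-in for a grok filter, written as plain Python with an invented, simplified log format; real pipelines would express this as a grok pattern in a Logstash filter block.

```python
import re

# A grok-style pattern for a simplified access-log line; the named
# groups become fields on the indexed event, as in a Logstash filter.
ACCESS_LOG = re.compile(
    r'(?P<client>\S+) \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_access_line(line: str):
    """Turn one raw log line into a structured event dict, roughly what
    a Logstash grok filter does before shipping the event to Elasticsearch."""
    m = ACCESS_LOG.match(line)
    if m is None:
        return None  # Logstash would tag such an event _grokparsefailure
    event = m.groupdict()
    event["status"] = int(event["status"])
    event["bytes"] = int(event["bytes"])
    return event

if __name__ == "__main__":
    line = '10.0.0.7 [12/Feb/2025:10:01:44 +0000] "GET /health" 200 512'
    print(parse_access_line(line))
```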
Posted 1 month ago
7 - 12 years
45 - 65 Lacs
Pune
Work from Office
We're hiring a Senior Backend Python Developer (7+ yrs) to build scalable, AI-powered systems using Django/Flask/FastAPI, GCP, Kubernetes & GraphQL. Design APIs, drive architecture, mentor teams & integrate ML for high-performance platforms.
Posted 1 month ago
4 - 8 years
900 - 1000 Lacs
Chennai
Remote
Join us and be a part of this journey as we write customer success stories about these products.

WHAT YOU DO
- Interface with business customers, gathering and understanding requirements.
- Interface with customer and Genesys data science teams in discovery, extraction, loading, data transformation, and analysis of results.
- Define and utilize a data intuition process to cleanse and verify the integrity of customer & Genesys data to be used for analysis.
- Implement, own, and improve data pipelines using best practices in data modeling and ETL/ELT processes.
- Build, improve, and provide ongoing optimization of high-quality models.
- Work with PS & Engineering to deliver specific customer requirements and report back customer feedback, issues, and feature requests.
- Continuously improve reporting, analysis, and the overall process.
- Visualize, present, and demonstrate findings as required.
- Perform knowledge transfer to customer and internal teams.
- Communicate within the global community, respecting cultural, language, and time-zone variations.
- Demonstrate flexibility to adjust working hours to match customer and team interactions.

ABOUT YOU
- Bachelor's / Master's degree in a quantitative field (e.g., Computer Science, Statistics, Engineering)
- 5+ years of relevant experience in Data Science or Data Engineering
- 5+ years of hands-on experience in Elasticsearch, Kibana, and real-time analytics solution development
- Hands-on application development experience in AWS/Azure and experience in Snowflake, Tableau, or Power BI
- Expertise with major statistical & analytical software like Python, R, SAS
- Good working knowledge of a programming language like Java or NodeJS
- An application development background using any contact center product suite such as Genesys, Avaya, Cisco, etc. is an added advantage
- Expertise with data modeling, data warehousing, and ETL/ELT development
- Expertise with database solutions such as SQL, MongoDB, Redshift, Hadoop, Hive
- Proficiency with REST APIs, JSON, AWS
- Experience working on and delivering projects independently
- Ability to multi-task and context-switch between projects and tasks
- Curiosity, passion, and drive for data queries, analysis, quality, and models
- Excellent communication, initiative, and coordination skills with great attention to detail
- Ability to explain and discuss complex topics with both experts and business leaders
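The Elasticsearch/Kibana real-time analytics experience above often boils down to aggregations. The small Python sketch below mimics an Elasticsearch terms aggregation over in-memory events, which can be handy for sanity-checking a Kibana panel's counts against raw data; the field and values are invented.

```python
from collections import Counter

def terms_aggregation(events, field, size=3):
    """Mimic an Elasticsearch `terms` aggregation over in-memory events.

    Returns buckets shaped like Elasticsearch's response: the top `size`
    values of `field` with their doc counts, ordered by count descending.
    """
    counts = Counter(e[field] for e in events if field in e)
    return [{"key": k, "doc_count": c} for k, c in counts.most_common(size)]

if __name__ == "__main__":
    events = [{"queue": "sales"}, {"queue": "support"}, {"queue": "sales"}]
    print(terms_aggregation(events, "queue"))
```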
Posted 1 month ago
2 - 7 years
4 - 9 Lacs
Pune
Work from Office
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: Splunk
Good-to-have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Position Name: SPLUNK/ELK Developer

Professional & Technical Skills:
- Must-have: Proficiency in Splunk & ELK administration and development.
- Must-have: Hands-on experience with the ELK Stack components (Elasticsearch, Logstash, Kibana) and their seamless integration.
- Log management: Utilize the ELK stack to collect, process, and analyze log data, ensuring efficient log management and searchability.
- Familiarity with Kibana dashboard creation, health checks, Linux system administration, shell scripting, and any one cloud platform.
- Develop field extractions, lookups, and data transformations to ensure accurate and meaningful data analysis.
- Create dashboards, alerts, saved searches, lookups, macros, field extractions, field transformations, tags, and event types.
- Experience in architecting and administering Splunk distributed environments with components like Universal/Heavy Forwarders, Indexers, Cluster Masters, Deployment Servers, Search Heads, License Masters, and Search Head Clusters.
- Manage and edit various .conf files such as indexes.conf, inputs.conf, outputs.conf, props.conf, transforms.conf, and server.conf.
- Experience with log parsing, complex Splunk searches, and external table lookups.
- Create and manage KPIs, Glass Tables, and Service Health Scores to provide real-time visibility into IT operations.
- Install, configure, and maintain Splunk, Splunk add-ons, and apps.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information: The candidate should have a minimum of 2 years of experience in Splunk. This position is based at our Pune office. 15 years of full-time education is required.

Qualifications: 15 years of full-time education
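One of the Splunk skills above, lookups, can be sketched outside Splunk. The Python below imitates what a `| lookup` command does at search time — enriching events by matching a key against a lookup table. The table and field names are hypothetical.

```python
def apply_lookup(events, lookup, key, fields):
    """Enrich events the way a Splunk lookup does: match `key` against a
    lookup table and copy the requested output fields onto each event.
    Roughly mirrors `| lookup my_table code OUTPUT description`."""
    enriched = []
    for event in events:
        row = lookup.get(event.get(key), {})
        enriched.append({**event, **{f: row[f] for f in fields if f in row}})
    return enriched

if __name__ == "__main__":
    # Hypothetical HTTP-status lookup table and a single raw event.
    lookup = {"503": {"description": "Service Unavailable"}}
    events = [{"code": "503", "host": "web-01"}]
    print(apply_lookup(events, lookup, "code", ["description"]))
```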
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a talented Solutions Architect with a keen interest in Amazon Connect-based contact center solutions and telephony. The ideal candidate will have foundational knowledge in contact center technologies and a solid desire to learn and grow alongside industry experts. This role offers the opportunity to work on cutting-edge projects involving AI, GenAI, and cloud-based platforms.

Primary Responsibilities:

Solution Design Support
- Assist in designing and developing Amazon Connect-based contact center solutions
- Contribute to focus areas such as Product Development, Data & Analytics, Routing, Desktop/CTI, WFM/WFO, and SBC/Telephony
- Participate in integrating AI and GenAI technologies into contact center solutions

Technical Contribution
- Work hands-on to create accelerators and tools that enhance the productivity of engineering and delivery teams
- Support the collection of requirements and assist in converting them into technical specifications in collaboration with engineering and delivery teams

Problem Solving & Support
- Help address production and non-production issues by providing timely support and solutions
- Participate in the end-to-end process from feature grooming to Day 2 support

Collaboration & Communication
- Collaborate with cross-functional teams to understand project requirements and deliverables
- Clearly articulate ideas and technical concepts in both written and verbal
formats. Utilize design tools like Draw.io, PlantUML, Mermaid, PowerPoint, Miro, and Figma to create documentation and diagrams.

Learning & Development
- Develop a deep understanding of application, technology, and data architecture principles
- Acquire knowledge of protocol stacks and data entities relevant to contact center technologies
- Stay updated with industry standards and technologies from vendors like Amazon, Google, Microsoft, Oracle, etc.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Completion of a graduate degree
- Hands-on experience with a cloud-native tech stack and diverse technologies: Java; public cloud (Azure); Docker and microservices (Spring Boot); RDBMS (MySQL) plus NoSQL (Cassandra, MongoDB, Elastic); APIs (REST, GraphQL); API gateways (Kong, etc.); data streaming (Kafka); visualization (Grafana, Kibana); the ELK stack (Elasticsearch, Logstash, and Kibana); GenAI and AI/ML
- Experience in solution architecture with a focus on contact center technologies and telephony systems.
This could include experience as full stack engineer in the mentioned platforms Experience with design and documentation tools such as Draw.io, PlantUML, Mermaid, PowerPoint, Word, Excel, Miro, and Figma Basic understanding of application development and architecture principles Proven solid communication and interpersonal skills Proven ability to articulate thoughts clearly and effectively in written and verbal communication Proven eagerness to learn and adapt to new technologies and methodologies Proven analytical mindset with problem-solving abilities Preferred Qualifications: Certifications such as AWS Certified Cloud Practitioner or AWS Certified Developer - Associate Experience with agile development processes and collaboration tools Exposure to AI and GenAI technologies Knowledge of cloud platforms (AWS, Azure, Google Cloud) and basic AI concepts Familiarity with agile methodologies and DevOps practices At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago
3 - 8 years
4 - 9 Lacs
Bangalore Rural, Bengaluru
Work from Office
Skills: Elasticsearch, Talend, Grafana
Responsibilities: Build dashboards, manage clusters, optimize performance
Tech: API, Python, cloud platforms (AWS, Azure, GCP)
Preference: Immediate joiners
Contact: 6383826448 || jensyofficial23@gmail.com
Posted 1 month ago
6 - 9 years
12 - 22 Lacs
Bengaluru
Hybrid
Skills: Go, Kafka, RestAssured, REST Web Services, NoSQL, ELK Stack (Elasticsearch, Logstash, Kibana), Git (GitHub, GitLab, BitBucket, SVN), Postgres/PostgreSQL, Couchbase, Jenkins, Docker, Kubernetes

Details
Key Responsibilities:
- Responsible for designing system solutions, developing custom applications, and modifying existing applications to meet distinct and changing business requirements.
- Handle coding, debugging, and documentation, working closely with the SRE team. Provide post-implementation and ongoing production support.
- Develop and design software applications, translating user needs into system architecture. Assess and validate application performance and the integration of component systems, and provide process flow diagrams.
- Test the engineering resilience of software and automation tools.
- You will be challenged with identifying innovative ideas and proofs of concept to deliver against the existing and future needs of our customers.
- Software Engineers who join our Loyalty Technology team will be assigned to one of several exciting teams that are developing a new, nimble, and modern loyalty platform which will support the key element of connecting with our customers where they are and how they choose to interact with American Express.
- Be part of an enthusiastic, high-performing technology team developing solutions to drive engagement and loyalty within our existing cardmember base and attract new customers to the Amex brand.
- The position will also play a critical role partnering with other development teams, testing and quality, and production support to meet implementation dates and allow a smooth transition throughout the development lifecycle.
- The successful candidate will be focused on building and executing against a strategy and roadmap focused on moving from monolithic, tightly coupled, batch-based legacy platforms to a loosely coupled, event-driven, microservices-based architecture to meet our long-term business goals.
Minimum Qualifications:
- Position requires a Bachelor's degree in Computer Science, Engineering, or a related field followed by 6+ years of experience in a modern development stack (Golang, Kafka, REST APIs).
- Experience in application design, software development, and testing in an Agile environment.
- Experience with relational and NoSQL databases, including Elasticsearch, PostgreSQL, Couchbase, or Cassandra.
- Experience designing and developing REST APIs for high-volume clients.
- Experience with continuous integration tools (Jenkins, GitHub).
- Experience with automated build and test frameworks is a plus.
- Experience in American Express technologies is highly desired.
- A proven hunger to learn new technologies and translate them into working software.
- Experience with container and container orchestration technologies, such as Docker and Kubernetes.
- Experience with Atlassian software development and collaboration tools (JIRA, Confluence, etc.).
- Strong ability to develop unique, outside-the-box ideas.
- Strong analytical, problem-solving/quantitative skills.
- Willingness to take risks, experiment, and share fresh perspectives.
- Aptitude for learning and applying programming concepts.
- Ability to effectively communicate with internal and external business partners.

Preferred Additional:
- Knowledge of the Loyalty/Rewards and credit card industry.
- Coding skills across a variety of distributed technologies.
- Experience with open-source frameworks is a plus, especially maintaining or contributing to open-source projects!
- Experience with a broad range of software languages and payments technologies.
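The monolith-to-event-driven migration described above hinges on producers never knowing their consumers. Below is a toy, in-process stand-in for a Kafka-style topic that illustrates this loose coupling; the topic and payload names are invented, and a real system would of course use a broker rather than in-memory dispatch.

```python
from collections import defaultdict

class EventBus:
    """A minimal in-process stand-in for a Kafka-style topic bus.

    Producers publish to a topic name and never reference their
    consumers, which is the loose coupling the posting describes.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every event on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver `event` to all handlers subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(event)

if __name__ == "__main__":
    bus = EventBus()
    seen = []
    bus.subscribe("points.earned", seen.append)  # invented topic name
    bus.publish("points.earned", {"member": "m-42", "points": 150})
    print(seen)
```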
Posted 1 month ago
8 years
0 Lacs
Hyderabad, Telangana
On-site
General information
Country: India
State: Telangana
City: Hyderabad
Job ID: 42999
Department: Development
Experience Level: MID_SENIOR_LEVEL
Employment Status: FULL_TIME
Workplace Type: Hybrid

Description & Requirements
As a software lead, you will play a critical role in defining and driving the architectural vision of our RPA product. You will ensure technical excellence, mentor engineering teams, and collaborate across departments to deliver innovative automation solutions. This is a unique opportunity to influence the future of RPA technology and make a significant impact on the industry.

RESPONSIBILITIES:
- Define and lead the architectural design and development of the RPA product, ensuring solutions are scalable, maintainable, and aligned with organizational strategic goals. Provide technical leadership and mentor team members on architectural best practices.
- Analyze and resolve complex technical challenges, including performance bottlenecks, scalability issues, and integration challenges, to ensure high system reliability and performance.
- Collaborate with cross-functional stakeholders, including product managers, QA, and engineering teams, to define system requirements, prioritize technical objectives, and design cohesive solutions. Provide architectural insights during sprint planning and agile processes.
- Establish and enforce coding standards, best practices, and guidelines across the engineering team, conducting code reviews with a focus on architecture, maintainability, and future scalability.
- Develop and maintain comprehensive documentation for system architecture, design decisions, and implementation details, ensuring knowledge transfer and facilitating team collaboration.
- Architect and oversee robust testing strategies, including automated unit, integration, and regression tests, to ensure adherence to quality standards and efficient system validation.
Research and integrate emerging technologies, particularly advancements in RPA and automation, to continually enhance the product’s capabilities and technical stack. Drive innovation and implement best practices within the team. Serve as a technical mentor and advisor to engineering teams, fostering professional growth and ensuring alignment with the overall architectural vision. Ensure that the RPA product adheres to security and compliance standards by incorporating secure design principles, conducting regular security reviews, and implementing necessary safeguards to protect data integrity, confidentiality, and availability. EDUCATION & EXPERIENCE: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. 8+ years of professional experience in software development. REQUIRED SKILLS: Expertise in object-oriented programming languages such as Java, C#, or similar, with a strong understanding of design patterns and principles. Deep familiarity with software development best practices, version control systems (e.g., Git), and continuous integration/continuous delivery (CI/CD) workflows. Proven experience deploying and managing infrastructure on cloud platforms such as AWS, Azure, or Google Cloud, including knowledge of containerization technologies like Docker and orchestration tools like Kubernetes. Strong proficiency in architecting, building, and optimizing RESTful APIs and microservices, with familiarity in tools like Swagger/OpenAPI and Postman for design and testing Comprehensive knowledge of SQL databases (e.g., PostgreSQL, SQLServer) with expertise in designing scalable and reliable data models, including creating detailed Entity-Relationship Diagrams (ERDs) and optimizing database schemas for performance and maintainability. Demonstrated experience in building and maintaining robust CI/CD pipelines using tools such as Jenkins or GitLab CI. 
Demonstrated ability to lead teams in identifying and resolving complex software and infrastructure issues using advanced troubleshooting techniques and tools. Exceptional communication and leadership skills, with the ability to guide and collaborate with cross-functional teams, bridging technical and non-technical stakeholders. Excellent written and verbal communication skills, with a focus on documenting technical designs, code, and system processes clearly and concisely. Comfortable and experienced in agile development environments, demonstrating adaptability to evolving requirements and timelines while maintaining high productivity and focus on deliverables. Familiarity with security best practices in software development, such as OWASP guidelines, secure coding principles, and implementing authentication/authorization frameworks (e.g., OAuth, SAML, JWT). Experience with microservices architecture, message brokers (e.g., RabbitMQ, Kafka), and event-driven design. Extensive experience in performance optimization and scalability, with a focus on designing high-performance systems and utilizing profiling tools and techniques to optimize both code and infrastructure for maximum efficiency. PREFERRED SKILLS: Experience with serverless architecture, including deploying and managing serverless applications using platforms such as AWS Lambda, Azure Functions, or Google Cloud Functions, to build scalable, cost-effective solutions. Experience with RPA tools or frameworks (e.g., UiPath, Automation Anywhere, Blue Prism) is a plus. Experience with Generative AI technologies, including working with frameworks like TensorFlow, PyTorch, or Hugging Face, and integrating AI/ML models into software applications. 
Hands-on experience with data analytics or logging tools like the ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk for monitoring and troubleshooting application performance.

About Infor
Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com

Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.

Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
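The microservices and performance-optimization requirements in the role above imply resilience when calling unreliable downstream services. A common building block is exponential backoff for retries; the sketch below only computes the delay schedule (the parameters are illustrative, and real clients usually add jitter to avoid thundering herds).

```python
import random

def backoff_schedule(attempts, base=0.5, cap=30.0, jitter=False):
    """Compute exponential-backoff delays (in seconds) for retrying a
    downstream call: base * 2^attempt, capped at `cap`. With jitter
    disabled the schedule is deterministic; with jitter enabled each
    delay is drawn uniformly from [0, delay] ("full jitter")."""
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays

if __name__ == "__main__":
    print(backoff_schedule(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```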
Posted 1 month ago
10 - 15 years
22 - 37 Lacs
Mumbai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As an ELK (Elasticsearch, Logstash, and Kibana) architect, you will be responsible for designing and implementing the data architecture and infrastructure for data analytics, log management, and visualization solutions using the ELK stack. You will collaborate with cross-functional teams, including data engineers, developers, system administrators, and stakeholders, to define data requirements, design data models, and ensure efficient data processing, storage, and retrieval. Your expertise in ELK and data architecture will be instrumental in building scalable and performant data solutions.

Responsibilities:
1. Data Architecture Design: Collaborate with stakeholders to understand business requirements and define the data architecture strategy for ELK-based solutions. Design scalable and robust data models, data flows, and data integration patterns.
2. ELK Stack Implementation: Lead the implementation and configuration of ELK stack infrastructure to support data ingestion, processing, indexing, and visualization. Ensure high availability, fault tolerance, and optimal performance of the ELK environment.
3. Data Ingestion and Integration: Design and implement efficient data ingestion pipelines using Logstash or other relevant technologies. Integrate data from various sources, such as databases, APIs, logs, AppDynamics, storage, and streaming platforms, into ELK for real-time and batch processing.
4. Data Modeling and Indexing: Design and optimize Elasticsearch indices and mappings to enable fast and accurate search and analysis. Define index templates, shard configurations, and document structures to ensure efficient storage and retrieval of data.
5. Data Visualization and Reporting: Collaborate with stakeholders to understand data visualization and reporting requirements. Utilize Kibana to design and develop visually appealing and interactive dashboards, reports, and visualizations that enable data-driven decision-making.
6. Performance Optimization: Analyze and optimize the performance of data processing and retrieval in ELK. Tune Elasticsearch settings, queries, and aggregations to improve search speed and response time. Optimize data storage, caching, and memory management.
7. Data Security and Compliance: Implement security measures and access controls to protect sensitive data stored in ELK. Ensure compliance with data privacy regulations and industry standards by implementing appropriate encryption, access controls, and auditing mechanisms.
8. Documentation and Collaboration: Create and maintain documentation of data models, data flows, system configurations, and best practices. Collaborate with cross-functional teams, providing guidance and support on data architecture and ELK-related topics.

Who You Are
The candidate should have a minimum of 8+ years of experience and be able to apply architectural methods, design information system architecture, lead systems engineering management, and provide AD & AI leadership.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture.
That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to earn certifications from Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
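The index-design duties in the ELK architect role above (index templates, shard configuration, mappings) can be made concrete with a small sketch. This is a hedged illustration, not the employer's actual configuration: the shard and replica counts, the `logs-*` pattern, the refresh interval, and the field names are all assumptions.

```python
# Minimal sketch of an Elasticsearch index template for log data.
# All concrete values here (3 shards, 1 replica, "logs-*") are
# illustrative defaults, not settings taken from the job posting.

def build_log_index_template(shards: int = 3, replicas: int = 1) -> dict:
    """Return an index template body covering settings and mappings."""
    return {
        "index_patterns": ["logs-*"],
        "template": {
            "settings": {
                "number_of_shards": shards,
                "number_of_replicas": replicas,
                "refresh_interval": "30s",  # trade search freshness for indexing throughput
            },
            "mappings": {
                "properties": {
                    "@timestamp": {"type": "date"},
                    "level":      {"type": "keyword"},  # exact-match filtering/aggregation
                    "service":    {"type": "keyword"},
                    "message":    {"type": "text"},     # analyzed for full-text search
                },
            },
        },
    }

template = build_log_index_template()
# With the official Python client (assumption), this body could be applied via
#   es.indices.put_index_template(name="logs", **template)
```

Keyword fields for `level` and `service` keep filters and aggregations cheap, while `text` on `message` enables full-text search; shard count is fixed at index creation, so it is the setting most worth deciding in a template.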
Posted 1 month ago
0.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Job Information
Job Opening ID: ZR_672_JOB
Date Opened: 05/06/2025
Industry: IT Services
Work Experience: 3-5 years
Job Type: Full time
Salary: Confidential
City: Indore
State/Province: Madhya Pradesh
Country: India
Zip/Postal Code: 452001

Job Description
Company- is a rapidly growing, private-equity-backed SaaS product company that is powering some of the world's most important missions. We were founded by engineers and built to give our outstanding teams the best possible environment to write great code and build world-class products that our customers love. We are looking for a DevOps engineer who can help refine our development processes and infrastructure to bring the latest technology and methodologies to the team.

RESPONSIBILITIES:
Manage, maintain, and deliver tasks on the DevOps roadmap.
Help shape, maintain, and support large-scale distributed cloud infrastructure.
Manage our ECS and EKS infrastructure.
Use infrastructure-as-code tools like Terraform and Ansible.
Implement and manage containerization solutions using Docker and orchestration tools such as Kubernetes.
Maintain continuous integration and deployment pipelines.
Maintain documentation of infrastructure.

REQUIREMENTS:
2+ years as an engineer on a software development team.
Prior experience working in cross-functional teams.
Systems architecture and design skills.
Proficiency in scripting languages such as Bash, Python, or PowerShell.
Experience with CI/CD tools such as Jenkins, GitLab CI/CD, or CircleCI.
Build and deployment automation experience, especially in a containerized world.
Proficiency with common ops tools (ECS, EKS, Logstash, Datadog, Kibana, etc.).
Experience with AWS or Azure.
Comfort maintaining live production systems.
Strong communication and collaboration skills, with the ability to work effectively in a fast-paced team environment.
Experience with microservices architectures and serverless computing.
Knowledge of security best practices for cloud environments, including identity and access management, network security, and encryption.

Benefits
As per industry standards.
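The build-and-deployment automation this DevOps role asks for often includes rolling updates of live systems. Here is a small, hedged Python sketch of one building block: splitting hosts into update batches so that no more than a bounded number of instances is down at once. The host names are hypothetical.

```python
# Sketch: batch hosts for a rolling deployment so that at most
# `max_unavailable` instances are taken out of service at a time.
# Host names below are made up for illustration.

def rolling_batches(hosts: list[str], max_unavailable: int) -> list[list[str]]:
    """Split hosts into consecutive batches of size <= max_unavailable."""
    if max_unavailable < 1:
        raise ValueError("max_unavailable must be >= 1")
    return [
        hosts[i:i + max_unavailable]
        for i in range(0, len(hosts), max_unavailable)
    ]

hosts = ["web-1", "web-2", "web-3", "web-4", "web-5"]
print(rolling_batches(hosts, 2))
# [['web-1', 'web-2'], ['web-3', 'web-4'], ['web-5']]
```

A real pipeline would drain each batch from the load balancer, deploy, health-check, and only then move on; this helper only decides the batching, which is the part that bounds blast radius.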
Posted 1 month ago