5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You will be part of a dynamic team at Equifax, where we are seeking creative, high-energy, and driven software engineers with hands-on development skills to contribute to various significant projects. As a software engineer at Equifax, you will have the opportunity to work with cutting-edge technology alongside a talented group of engineers. This role is perfect for you if you are a forward-thinking, committed, and enthusiastic individual who is passionate about technology. Your responsibilities will include designing, developing, and operating high-scale applications across the entire engineering stack. You will be involved in all aspects of software development, from design and testing to deployment, maintenance, and continuous improvement. By utilizing modern software development practices such as serverless computing, microservices architecture, CI/CD, and infrastructure-as-code, you will contribute to the integration of our systems with existing internal systems and tools. Additionally, you will participate in technology roadmap discussions and architecture planning to translate business requirements and vision into actionable solutions. Working within a closely-knit, globally distributed engineering team, you will be responsible for triaging product or system issues and resolving them efficiently to ensure the smooth operation and quality of our services. Managing project priorities, deadlines, and deliverables will be a key part of your role, along with researching, creating, and enhancing software applications to advance Equifax Solutions. To excel in this position, you should have a Bachelor's degree or equivalent experience, along with at least 7 years of software engineering experience. Proficiency in mainstream Java, SpringBoot, TypeScript/JavaScript, as well as hands-on experience with Cloud technologies such as GCP, AWS, or Azure, is essential. You should also have a solid background in designing and developing cloud-native solutions and microservices using Java, SpringBoot, GCP SDKs, and GKE/Kubernetes. Experience in deploying and releasing software using Jenkins CI/CD pipelines, infrastructure-as-code concepts, Helm Charts, and Terraform constructs is highly valued. Moreover, being a self-starter who can adapt to changing priorities with minimal supervision could set you apart in this role. Additional advantageous skills include designing big data processing solutions, UI development, backend technologies like JAVA/J2EE and SpringBoot, source code control management systems, build tools, working in Agile environments, relational databases, and automated testing. If you are ready to take on this exciting opportunity and contribute to Equifax's innovative projects, apply now and be part of our team of forward-thinking software engineers.,
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
About Us: LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a dedication to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge and worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk and create jobs. It's how we've contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years. Analytics group is part of London Stock Exchange Group's Data & Analytics Technology division. Analytics has established a very strong reputation for providing prudent and reliable analytic solutions to financial industries. With a strong presence in the North American financial markets and rapidly growing in other markets, the group is now looking to increase its market share globally by building new capabilities as Analytics as a Service - A one-stop-shop solution for all analytics needs through API and Cloud-first approach. Position Summary: Analytics DevOps group is looking for a highly motivated and skilled DevOps Engineer to join our dynamic team to help build, deploy, and maintain our cloud and on-prem infrastructure and applications. You will play a key role in driving automation, monitoring, and continuous improvement in our development, modernizations, and operational processes. Key Responsibilities & Accountabilities: Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Helm Charts, CloudFormation, or Ansible to ensure consistent and scalable environments. CI/CD Pipeline Development: Build, optimize, and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitLab, GitHub, or similar tools. Cloud and on-prem infrastructure Management: Work with Cloud providers (Azure, AWS, GCP) and on-prem infrastructure (VMware, Linux servers) to deploy, manage, and monitor infrastructure and services. Automation: Automate repetitive tasks, improve operational efficiency, and reduce human intervention for building and deploying applications and services. Monitoring & Logging: Work with SRE team to set up monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, or others to ensure high availability and performance of applications and infrastructure. Collaboration: Collaborate with architects, operations, and developers to ensure seamless integration between development, testing, and production environments. Security Best Practices: Implement and enforce security protocols/procedures, including access controls, encryption, and vulnerability scanning and remediation. Provide support for issue resolution related to application deployment and/or DevOps-related activities. Essential Skills, Qualifications & Experience: - Bachelor's or Master's degree in computer science, engineering, or a related field with experience (or equivalent 3-5 years of practical experience). - 5+ years of experience in practicing DevOps. - Proven experience as a DevOps Engineer or Software Engineer in an agile, cloud-based environment. - Strong understanding of Linux/Unix system management. - Hands-on experience with cloud platforms (AWS, Azure, GCP), Azure preferred. - Proficient in Infrastructure automation tools such as Terraform, Helm Charts, Ansible, etc. - Strong experience with CI/CD tools - GitLab, Jenkins. 
- Experience/knowledge of version control systems - Git, GitLab, GitHub. - Experience with containerization (Kubernetes, Docker) and orchestration. - Experience in modern monitoring & logging tools such as Grafana, Prometheus, Datadog. - Working experience in scripting languages such as Bash, Python, or Groovy. - Strong problem-solving and troubleshooting skills. - Excellent communication skills and ability to work in team environments. - Experience with serverless architecture and microservices is a plus. - Strong knowledge of networking concepts (DNS, Load Balancers, etc.) and security practices (Firewalls, encryptions). - Working in an Agile/Scrum environment is a plus. - Certifications in DevOps or Cloud Technologies (e.g., Azure DevOps Solutions, AWS Certified DevOps) are a plus. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision-making and everyday actions. Working with us means that you will be part of a dynamic organization of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, and how it's obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.,
Posted 1 day ago
2.0 - 5.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Profile Description: To build and run the platform for Digital Applications, our esteemed customer is looking for an experienced DevOps Engineer. We need engineers with a solid background in software engineering who are familiar with AWS EKS, ISTIO/Service Mesh/Tetrate, Terraform, Helm Charts, KONG API Gateway, Azure DevOps, SpringBoot, Ansible, Kafka, and on-call incident handling. Objectives of this Role: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Testing and examining code written by others and analyzing results. Identifying technical problems and developing software updates and fixes. Working with software developers and software engineers to ensure that development follows established processes and works as intended. Monitoring the systems and setting up the required tools. Attending on-call incidents. Daily and Monthly Responsibilities: Deploy updates and fixes. Provide Level 3 technical support. Build tools to reduce occurrences of errors and improve customer experience. Develop software to integrate with internal back-end systems. Perform root cause analysis for production errors. Investigate and resolve technical issues. Develop scripts to automate visualization. Design procedures for system troubleshooting and maintenance. Skills and Qualifications: BSc in Computer Science, IT/Engineering, or a relevant field. Experience as a DevOps Engineer or in a similar software engineering role, minimum 2.5 to 4.5 years. Proficient with Git and Git workflows. Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS. Problem-solving attitude. Collaborative team spirit.
Posted 2 days ago
4.0 - 8.0 years
7 - 12 Lacs
Noida
Work from Office
Technical Expertise: Solid Python programming (OOP, REST, APIs), SQL and Linux experience, and handling ODBC/JDBC/Arrow Flight connections. Has delivered projects handling large volumes of data, implemented MPP tools, and has strong knowledge of storage solutions (NAS, S3, HDFS) and data formats (Parquet, Avro, Iceberg). Strong knowledge of Kubernetes, containerization, Helm charts, templates and overlays, Vault, SSL/TLS, and KB stores. Prior knowledge of key libraries and tools (e.g., S3, MongoDB, Elastic, Trident NAS, Grafana) will be a big plus. Has used Git and built CI/CD pipelines in recent projects. Additional Criteria: Proactive in asking questions, engages with the team in group conversations, actively participates in issue discussions, and shares inputs, ideas, and suggestions. The candidate needs to be responsive and actively pick up new tasks and issues without much mentoring or monitoring. No remotely located candidates. Mandatory Competencies: DevOps/Configuration Mgmt - Docker; Behavioral - Communication; Programming Language - Python - Django; Big Data - Flask; DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes); Data Science and Machine Learning - Python; Middleware - API (SOAP, REST); Cloud - AWS - AWS S3, S3 Glacier, AWS EBS; Cloud - AWS - Amazon Elastic Container Registry (ECR), AWS Elastic Kubernetes Service (EKS).
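Purely as an illustration of the data-handling work described above (Python with S3 and Parquet), here is a minimal sketch assuming boto3 and pyarrow are installed; the bucket and prefix names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: list Parquet objects under an S3 prefix and read one into an
# Arrow table. Assumes boto3 and pyarrow are installed; names are placeholders.
import io

import boto3
import pyarrow.parquet as pq

s3 = boto3.client("s3")
bucket, prefix = "example-data-bucket", "raw/events/"  # hypothetical names

# List candidate objects under the prefix.
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
keys = [obj["Key"] for obj in resp.get("Contents", []) if obj["Key"].endswith(".parquet")]

if keys:
    # Download the first object into memory and load it as an Arrow table.
    body = s3.get_object(Bucket=bucket, Key=keys[0])["Body"].read()
    table = pq.read_table(io.BytesIO(body))
    print(table.schema)
    print(f"rows: {table.num_rows}")
```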
Posted 2 days ago
4.0 - 8.0 years
6 - 11 Lacs
Mumbai
Work from Office
Your Role: We are hiring a GCP Kubernetes Engineer with 9-12 years of experience. Ideal candidates should have strong expertise in cloud-native technologies, container orchestration, and infrastructure automation. This is a Pan India opportunity offering flexibility and growth. Join us to build scalable, secure, and innovative cloud solutions across diverse industries. Design, implement, and manage scalable, highly available systems on Google Cloud Platform (GCP). Work with GCP IaaS components: Compute Engine, VPC, VPN, Cloud Interconnect, Load Balancing, Cloud CDN, Cloud Storage, and Backup/DR solutions. Utilize GCP PaaS services: Cloud SQL, App Engine, Cloud Functions, Pub/Sub, Firestore/Cloud Spanner, and Dataflow. Deploy and manage containerized applications using Google Kubernetes Engine (GKE), Helm charts, and Kubernetes tooling. Automate infrastructure provisioning using the gcloud CLI, Deployment Manager, or Terraform. Implement CI/CD pipelines using Cloud Build for automated deployments. Monitor infrastructure and applications using Cloud Monitoring, Logging, and related tools. Manage IAM, VPC Service Controls, Cloud Armor, and Security Command Center. Troubleshoot and resolve complex infrastructure and application issues. Your Profile: 6+ years of cloud engineering experience with a strong focus on GCP. Proven hands-on expertise in GCP IaaS, PaaS, and GKE. Experience with monitoring, logging, and automation tools in GCP. Strong problem-solving, analytical, and communication skills. What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.
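As a rough sketch of the GKE/Helm provisioning workflow this listing describes, the Python snippet below simply wraps the gcloud, helm, and kubectl CLIs named in the posting; all project, cluster, and chart names are illustrative assumptions.

```python
# Minimal sketch: fetch GKE credentials and roll out a Helm release by shelling
# out to the gcloud, helm, and kubectl CLIs. All names are placeholders.
import subprocess

PROJECT = "example-project"       # hypothetical project id
CLUSTER = "example-gke-cluster"   # hypothetical cluster name
REGION = "asia-south1"
RELEASE, CHART, NAMESPACE = "web-app", "./charts/web-app", "staging"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Point kubectl/helm at the target GKE cluster.
run(["gcloud", "container", "clusters", "get-credentials", CLUSTER,
     "--region", REGION, "--project", PROJECT])

# Install or upgrade the release idempotently.
run(["helm", "upgrade", "--install", RELEASE, CHART,
     "--namespace", NAMESPACE, "--create-namespace"])

# Wait for the rollout to finish before declaring success.
run(["kubectl", "rollout", "status", f"deployment/{RELEASE}", "-n", NAMESPACE])
```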
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
ahmedabad, gujarat
On-site
As the DevOps Lead, you will be responsible for leading the design, implementation, and management of enterprise Container orchestration platforms using Rafey and Kubernetes. Your role will involve overseeing the onboarding and deployment of applications on Rafey platforms, utilizing AWS EKS and Azure AKS. You will play a key role in developing and maintaining CI/CD pipelines to ensure efficient and reliable application deployment using Azure DevOps. Collaboration with cross-functional teams is essential to ensure seamless integration and operation of containerized applications. Your expertise will also be required to implement and manage infrastructure as code using tools such as Terraform, ensuring the security, reliability, and scalability of containerized applications and infrastructure. In addition to your technical responsibilities, you will be expected to mentor and guide junior DevOps engineers, fostering a culture of continuous improvement and innovation within the team. Monitoring and optimizing system performance, troubleshooting issues, and staying up-to-date with industry trends and best practices are also crucial aspects of this role. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - 10+ years of experience in DevOps, with a focus on Container orchestration platforms. - Extensive hands-on experience with Kubernetes, EKS, AKS. Knowledge of Rafey platform is a plus. - Proven track record of onboarding and deploying applications on Kubernetes platforms, including AWS EKS and Azure AKS. - Strong knowledge of Kubernetes manifest files, Ingress, Ingress Controllers, and Azure DevOps CI/CD pipelines. - Proficiency in infrastructure as code tools like Terraform. - Excellent problem-solving skills, knowledge of Secret Management, RBAC configuration, and hands-on experience with Helm Charts. - Strong communication and collaboration skills, experience with cloud platforms (AWS, Azure), and security best practices in a DevOps environment. Preferred Skills: - Strong Cloud knowledge (AWS & Azure) and Kubernetes expertise. - Experience with other enterprise Container orchestration platforms and tools. - Familiarity with monitoring and logging tools like Datadog, understanding of network topology, and system architecture. - Ability to work in a fast-paced, dynamic environment. Good to Have: - Knowledge of Rafey platform (A Kubernetes Management Platform) and hands-on experience with GitOps technology.,
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
Greetings from VOIS! We are currently seeking a dynamic Senior TIBCO Integration Developer & Analyst for our Pune location. As the Senior Manager in this role, you will be responsible for TIBCO integration development and analysis with 8 to 12 years of experience. This position offers a hybrid working mode, allowing for flexibility and collaboration. In this role, you will primarily focus on TIBCO Business Works, designing and implementing solutions based on project requirements using a waterfall methodology. Your responsibilities will include participating in the full development lifecycle, from design to testing and deployment of integration solutions on our TIBCO platform. Key Responsibilities: - Collaborate with business analysts, architects, and other developers to provide TIBCO integration development. - Deliver technical designs to meet business and IT needs. - Develop code based on the design and ensure adherence to coding standards and best practices. - Conduct unit testing and provide 3rd level production support for bug fixes and enhancements. - Maintain documentation for delivered solutions. Required Skills: - Proficiency in TIBCO Business Works 5.x and TIBCO EMS. - Strong analytical skills and experience in enterprise integration and design patterns. - Knowledge of web services, REST APIs, XML, XSLT, XPath, JSON, and TIBCO BPM products. - Advanced skills in SQL, Oracle PL/SQL, GitHub, and familiarity with UML. - Experience with DevOps principles and CI/CD tools is advantageous. - Exposure to telecommunication sector and Java programming language knowledge is a plus. Preferred Skills: - Familiarity with UML, DevOps principles, CI/CD tools, and experience in the telecommunication sector. - Proficiency in Java programming language. At VOIS, we are committed to diversity and inclusivity in our hiring practices. Join us in this exciting opportunity to contribute to our TIBCO integration projects in a collaborative and innovative environment.,
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You should have a minimum of 5 years of experience in DevOps, SRE, or Infrastructure Engineering. Your expertise should include a strong command of Azure Cloud and Infrastructure-as-Code using tools such as Terraform and CloudFormation. Proficiency in Docker and Kubernetes is essential. You should be hands-on with CI/CD tools and scripting languages like Bash, Python, or Go. A solid knowledge of Linux, networking, and security best practices is required. Experience with monitoring and logging tools such as ELK, Prometheus, and Grafana is expected. Familiarity with GitOps, Helm charts, and automation will be an advantage. Your key responsibilities will involve designing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions. You will be responsible for automating infrastructure provisioning through tools like Terraform, Ansible, and Pulumi. Monitoring and optimizing cloud environments, implementing containerization and orchestration with Docker and Kubernetes (EKS/GKE/AKS), and maintaining logging, monitoring, and alerting systems (ELK, Prometheus, Grafana, Datadog) are crucial aspects of the role. Ensuring system security, availability, and performance tuning, managing secrets and credentials using tools like Vault and Secrets Manager, troubleshooting infrastructure and deployment issues, and implementing blue-green and canary deployments will be part of your responsibilities. Collaboration with developers to enhance system reliability and productivity is key. Preferred skills include certification as an Azure DevOps Engineer, experience with multi-cloud environments, microservices, and event-driven systems, as well as exposure to AI/ML pipelines and data engineering workflows.,
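To illustrate the deployment-gating idea behind the blue-green and canary deployments mentioned above, here is a minimal Python sketch that probes a health endpoint and rolls back a Kubernetes deployment on repeated failure; the endpoint URL, deployment name, and thresholds are assumptions made for the example.

```python
# Minimal sketch of a canary gate: probe a health endpoint a few times and
# roll back the Kubernetes deployment if too many probes fail.
# The URL, deployment name, and thresholds are illustrative assumptions.
import subprocess
import time

import requests

HEALTH_URL = "https://canary.example.internal/healthz"  # hypothetical endpoint
DEPLOYMENT, NAMESPACE = "payments-api", "prod"           # hypothetical names
PROBES, MAX_FAILURES = 10, 2

failures = 0
for _ in range(PROBES):
    try:
        ok = requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    failures += 0 if ok else 1
    time.sleep(3)

if failures > MAX_FAILURES:
    # Too many failed probes: undo the rollout and surface the failure.
    subprocess.run(["kubectl", "rollout", "undo",
                    f"deployment/{DEPLOYMENT}", "-n", NAMESPACE], check=True)
    raise SystemExit(f"canary failed ({failures}/{PROBES} probes); rolled back")
print("canary healthy; safe to promote")
```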
Posted 5 days ago
6.0 - 9.0 years
3 - 8 Lacs
Delhi, India
On-site
The Expertise You Bring 6-8 years of hands-on experience with AWS in a production environment Experience building and deploying Docker images including Docker Compose Production experience running Kubernetes workloads ideally on AWS EKS Experience managing and maintaining Kubernetes Clusters on AWS EKS Experience creating and deploying Helm charts & libraries Production experience with infrastructure-as-code (IaC), Terraform preferred Hands-on experience with Jenkins Core, including authoring and maintaining declarative CI/CD pipelines and libraries Experience with monitoring tools e.g., CloudWatch, Datadog & Splunk Cloud Proficiency with UNIX operating systems and shell scripting Programming experience, e.g., Python preferred Experience with distributed version control systems, Git preferred Experience with the agile software development lifecycle and Kanban preferred Experience with CDN Providers e.g., Akamai preferred The Skills that are good to have for this Role Experience with Amazon Web Services (AWS), having managed services and applications in a large AWS cross-account environment using IAM and federated SSO Experience crafting and maintaining logging, monitoring, and alerting capabilities using tools like Datadog and Splunk Ability to communicate at all levels with track record of strong written and verbal communications See problems as opportunities to automate Ability to work independently with minimal direction Drive and champion the overall design of highly available, secure, scalable microservices-based applications in AWS
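As a small, hedged example of the CloudWatch monitoring experience this role asks for, the Python sketch below pulls recent CPU statistics with boto3; the region, metric dimensions, and Auto Scaling group name are placeholders rather than details from the listing.

```python
# Minimal sketch: pull recent CPU utilization for a node group's instances
# from CloudWatch with boto3. Region and dimension values are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "eks-nodegroup-example"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

# Print the datapoints in time order.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```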
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
The job is located in Chennai, Tamil Nadu, India with the company Hitachi Energy India Development Centre (IDC). As part of the Engineering & Science profession, the job is full-time and not remote. The primary focus of the India Development Centre is on research and development, with around 500 R&D engineers, specialists, and experts dedicated to creating and sustaining digital solutions, new products, and technology. The centre collaborates with Hitachi Energy's R&D and Research centres across more than 15 locations in 12 countries. The mission of Hitachi Energy is to advance the world's energy system to be more sustainable, flexible, and secure while considering social, environmental, and economic aspects. The company has a strong global presence with installations in over 140 countries. As a potential candidate for this role, your responsibilities include: - Meeting milestones and deadlines while staying on scope - Providing suggestions for improvements and being open to new ideas - Collaborating with a diverse team across different time zones - Enhancing processes for continuous integration, deployment, testing, and release management - Ensuring the highest standards of security - Developing, maintaining, and supporting Azure infrastructure and system software components - Providing guidance to developers on building solutions using Azure technologies - Owning the overall architecture in Azure - Ensuring application performance, uptime, and scalability - Leading CI/CD processes design and implementation - Defining best practices for application deployment and infrastructure maintenance - Monitoring and reporting on compute/storage costs - Managing deployment of a .NET microservices based solution - Upholding Hitachi Energy's core values of safety and integrity Your background should ideally include: - 3+ years of experience in Azure DevOps, CI/CD, configuration management, and test automation - 2+ years of experience in various Azure technologies such as IAC, ARM, YAML, Azure PaaS, Azure Active Directory, Kubernetes, and Application Insight - Proficiency in Bash scripting - Hands-on experience with Azure components and services - Building and maintaining large-scale SaaS solutions - Familiarity with SQL, PostgreSQL, NoSQL, Redis databases - Expertise in infrastructure as code automation and monitoring - Understanding of security concepts and best practices - Experience with deployment tools like Helm charts and docker-compose - Proficiency in at least one programming language (e.g., Python, C#) - Experience with system management in Linux environment - Knowledge of logging & visualization tools like ELK stack, Prometheus, Grafana - Experience in Azure Data Factory, WAF, streaming data, big data/analytics Proficiency in spoken and written English is essential for this role. If you have a disability and require accommodations during the job application process, you can request reasonable accommodations through Hitachi Energy's website by completing a general inquiry form. This assistance is specifically for individuals with disabilities needing accessibility support during the application process.,
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
kochi, kerala
On-site
The Software DevOps Engineer (1-3 Years Experience) position requires a Bachelor's degree in Computer Science, Information Technology, or a related field along with 1-3 years of experience in a DevOps or related role. As a Software DevOps Engineer, your responsibilities will include designing, implementing, and maintaining CI/CD pipelines to ensure efficient and reliable software delivery. You will collaborate with Development, QA, and Operations teams to streamline the deployment and operation of applications. Monitoring system performance, identifying bottlenecks, and troubleshooting issues to ensure high availability and reliability are also part of your role. Furthermore, you will automate repetitive tasks and processes to improve efficiency and reduce manual intervention. Participating in code reviews, contributing to the improvement of best practices and standards, and implementing and managing infrastructure as code (IaC) using Terraform are essential duties. Documentation of processes, configurations, and procedures for future reference is required. Staying updated with the latest industry trends and technologies to continuously improve DevOps processes, as well as creating POC for the latest tools and technologies are part of the job. The mandatory skills for this position include proficiency in Azure Cloud, Azure DevOps, CI/CD Pipeline, Version control (git), Linux Commands, Bash Script, Docker, Kubernetes, Helm Charts, Monitoring tools like Grafana, Prometheus, ELK Stack, Azure Monitoring, Azure, AKS, Azure Storage, Virtual Machine, an understanding of micro-services architecture, orchestration, and Sql Server. Optional skills that are beneficial for this role include Ansible Script, Kafka, MongoDB, Key Vault, and Azure CLI. Overall, the ideal candidate for this role should possess a strong understanding of CI/CD concepts and tools, experience with cloud platforms and containerization technologies, a basic understanding of networking and security principles, strong problem-solving skills, attention to detail, excellent communication and teamwork skills, and the ability to learn and adapt to new technologies and methodologies. Additionally, being ready to work with clients directly is a key requirement for this position.,
Posted 6 days ago
5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on our Digital Platform. To help us build and run the platform for Digital Applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, ISTIO/Service Mesh/Tetrate, Terraform, Helm Charts, KONG API Gateway, Azure DevOps, SpringBoot, Ansible, and Kafka/MongoDB, we'd love to speak with you. Objectives of this Role: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Testing and examining code written by others and analyzing results. Identifying technical problems and developing software updates and fixes. Working with software developers and software engineers to ensure that development follows established processes and works as intended. Monitoring the systems and setting up the required tools. Investigating and resolving technical issues. Developing scripts to automate visualization. Designing procedures for system troubleshooting and maintenance. Daily and Monthly Responsibilities: Deploy updates and fixes. Provide Level 3 technical support. Build tools to reduce occurrences of errors and improve customer experience. Develop software to integrate with internal back-end systems. Perform root cause analysis for production errors. Skills and Qualifications: BSc in Computer Science, Engineering, or a relevant field. Experience as a DevOps Engineer or in a similar software engineering role, minimum 5 years. Proficient with Git and Git workflows. Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS. Problem-solving attitude. Collaborative team spirit.
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Python Developer, you should have strong experience in Python development and scripting. It is essential to have a solid background in network automation and orchestration. Awareness or experience with any Network Services Orchestrator (e.g., Cisco NSO) would be a plus for you. Proficiency in YANG data modeling and XML for network configuration and management is required. Experience with cloud-native development practices (Microservices/Containers) on any public cloud platform is preferred. You should also have experience with CI/CD tools and practices. Your main focus will be on Connectivity Network Engineering. You will be developing competency in your area of expertise and sharing your knowledge with others. Understanding clients" needs and completing your role independently or with minimum supervision are crucial aspects of this role. Identifying problems, generating solutions, teamwork, and customer interaction are key responsibilities. Experience with container orchestration tools like Kubernetes/Docker, working on Helm Charts, Ansible, and Terraform is necessary. Familiarity with Continuous Integration tools like Jenkins and best practices for DevOps is also required. A background in Wireline Network Automation (e.g., IP/MPLS, L3VPN, SD-WAN) and routers (e.g., Cisco/Juniper Routers) would be desirable. Experience with BPMN/Workflow managers such as Camunda, N8N, Temporal is beneficial. For lead profiles, a background in Web/UI development with frameworks such as React JS and awareness of AI/Agentic AI-based development patterns would be useful but not mandatory. Knowledge of security best practices and implementation is also expected from you.,
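For flavor, here is a minimal Python sketch of the XML-based configuration work referenced above; the namespace and leaf names loosely follow common YANG-style interface models but are illustrative assumptions, not taken from Cisco NSO or any specific service model.

```python
# Minimal sketch: build an XML payload describing an interface, in the style of
# configs pushed through orchestration/NETCONF tooling. Namespace and leaf
# names are illustrative only, not from a specific YANG model.
import xml.etree.ElementTree as ET

NS = "urn:example:interfaces"  # hypothetical namespace

def build_interface_config(name: str, description: str, enabled: bool) -> str:
    root = ET.Element("interfaces", xmlns=NS)
    iface = ET.SubElement(root, "interface")
    ET.SubElement(iface, "name").text = name
    ET.SubElement(iface, "description").text = description
    ET.SubElement(iface, "enabled").text = "true" if enabled else "false"
    return ET.tostring(root, encoding="unicode")

print(build_interface_config("GigabitEthernet0/0/1", "uplink to core", True))
```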
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
The role at Hitachi Energy India Development Centre (IDC) in Chennai offers you the opportunity to be part of a dedicated team of over 500 R&D engineers, specialists, and experts focused on creating innovative digital solutions, new products, and cutting-edge technology. As a part of the IDC team, you will collaborate with R&D and Research centers across more than 15 locations globally, contributing to the advancement of the world's energy system towards sustainability, flexibility, and security. Your primary responsibilities in this role include staying on track to meet project milestones and deadlines, actively suggesting and implementing process improvements, collaborating with a diverse team across different time zones, and enhancing processes related to continuous integration, deployment, testing, and release management. You will play a crucial role in developing, maintaining, and supporting azure infrastructure and system software components, providing guidance on azure tech components, ensuring application performance, uptime, and scalability, and leading CI/CD processes design and implementation. To excel in this position, you should possess at least 3 years of experience in azure DevOps, CI/CD, configuration management, and test automation, along with expertise in Azure PaaS, Azure Active Directory, Kubernetes, and application insight. Additionally, you should have hands-on experience with infrastructure as code automation, database management, system monitoring, security practices, containerization, and Linux system administration. Proficiency in at least one programming language, strong communication skills in English, and a commitment to Hitachi Energy's core values of safety and integrity are essential for success in this role. If you are a qualified individual with a disability and require accommodations during the job application process, you can request reasonable accommodations through our website. Please provide specific details about your needs to receive the necessary support. This opportunity is tailored for individuals seeking accessibility assistance, and inquiries for other purposes may not receive a response.,
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
You will be responsible for Python Development with strong experience in Python scripting. Your background should include expertise in network automation and orchestration. Having awareness or experience with any Network Services Orchestrator (e.g., Cisco NSO) will be considered a plus. Proficiency in YANG data modeling and XML for network configuration and management is essential. Experience with cloud-native development practices, such as Microservices and Containers, on any public cloud platform is preferred. Additionally, you should have experience with CI/CD tools and practices. Your main focus will be on Connectivity Network Engineering, where you will develop competency in your own area of expertise. You will be expected to share your expertise, provide guidance and support to others, and interpret clients" needs. You should be able to work independently or with minimum supervision, identifying problems and generating solutions in straightforward situations. Collaboration in teamwork and customer interaction will be key aspects of your role. It is required that you have experience with container orchestration tools like Kubernetes and Docker, as well as working knowledge of Helm Charts, Ansible, and Terraform. Familiarity with Continuous Integration tools like Jenkins and best practices for DevOps is essential. A background in Wireline Network Automation (e.g., IP/MPLS, L3VPN, SD-WAN) and routers (e.g., Cisco/Juniper Routers) would be desirable. Experience with BPMN/Workflow managers such as Camunda, N8N, and Temporal is also valuable. For lead profiles, having a background in Web/UI development with frameworks like React JS and an awareness of AI/Agentic AI-based development patterns would be useful, though not mandatory. Knowledge of security best practices and their implementation is expected to be part of your skill set.,
Posted 1 week ago
8.0 - 13.0 years
20 - 35 Lacs
Pune, Bengaluru
Hybrid
Qualifications & Experience: Completed studies in a technical, engineering, or scientific discipline, or equivalent professional training. 5-7 years of professional experience in IT, with a strong focus on modern cloud technologies and Go (Golang) with AWS. Proven experience in cloud software and container architectures. Hands-on expertise in infrastructure automation. In-depth knowledge of Linux system tools and network-related services. Experience with OpenStack and participation in Open Source programming projects is a strong plus. Skills & Competencies: Strong customer orientation and problem-solving mindset. Familiarity with agile development processes (e.g., Scrum, Kanban). Excellent mentoring and presentation skills. Ability to critically assess technical solutions and develop creative, scalable approaches. Fluent in written and spoken English. Additional Information: This role requires working within the European Union to comply with our client's data security and privacy requirements.
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Role Purpose: The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution. 7+ years of experience in Kubernetes, Helm charts, and API tooling. Cloud and Kubernetes: 3+ years of hands-on experience with DevOps practices, especially in AWS EKS and the Kubernetes ecosystem. Familiarity with container orchestration, cluster scaling, and networking tools like Calico and Karpenter. Proficiency in creating and managing Helm charts for Kubernetes-based applications. API Management: Experience with API gateways and platforms like Tyk, Kong, or similar API management tools. Strong understanding of API security, authentication, and performance optimization. CI/CD Tools and Automation: Expertise in building CI/CD pipelines using AWS CodeCommit, GitHub, GitLab, or similar platforms. Strong knowledge of scripting and automation. Monitoring and Observability: Experience with monitoring and observability tools like Prometheus, Grafana, and OpenTelemetry for Kubernetes clusters. Hands-on experience with logging tools, specifically Elastic (ELK stack), for log management and analysis. Soft Skills: Strong problem-solving skills with attention to detail. Ability to work in a collaborative, fast-paced environment. Excellent communication skills and the ability to work cross-functionally with development and operations teams. Skill upgradation and competency building: Clear Wipro exams and internal certifications from time to time to upgrade skills. Attend trainings and seminars to sharpen knowledge in the functional/technical domain. Write papers, articles, and case studies and publish them on the intranet. Mandatory Skills: Kubernetes. Experience: 5-8 Years.
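As one small illustration of the Prometheus-based observability work this role covers, the Python sketch below queries the Prometheus HTTP API for container restart counts; the Prometheus URL and the PromQL expression are assumptions for the example.

```python
# Minimal sketch: query the Prometheus HTTP API for container restart counts.
# The Prometheus URL and PromQL expression are illustrative assumptions.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical in-cluster URL
QUERY = "sum by (namespace, pod) (kube_pod_container_status_restarts_total)"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
payload = resp.json()

# Each result carries the label set and an instant (timestamp, value) pair.
for result in payload.get("data", {}).get("result", []):
    labels = result["metric"]
    _, value = result["value"]
    print(f'{labels.get("namespace")}/{labels.get("pod")}: {value} restarts')
```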
Posted 1 week ago
1.0 - 4.0 years
8 - 12 Lacs
Thane
Work from Office
About The Role: At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on our Digital Platform. To help us build and run the platform for Digital Applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, ISTIO/Service Mesh/Tetrate, Terraform, Helm Charts, KONG API Gateway, Azure DevOps, SpringBoot, Ansible, and Kafka/MongoDB, we'd love to speak with you. Objectives of this Role: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Testing and examining code written by others and analyzing results. Identifying technical problems and developing software updates and fixes. Working with software developers and software engineers to ensure that development follows established processes and works as intended. Monitoring the systems and setting up the required tools. Daily and Monthly Responsibilities: Deploy updates and fixes. Provide Level 3 technical support. Build tools to reduce occurrences of errors and improve customer experience. Develop software to integrate with internal back-end systems. Perform root cause analysis for production errors. Investigate and resolve technical issues. Develop scripts to automate visualization. Design procedures for system troubleshooting and maintenance. Skills and Qualifications: BSc in Computer Science, Engineering, or a relevant field. Experience as a DevOps Engineer or in a similar software engineering role, minimum 5 years. Proficient with Git and Git workflows. Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS. Problem-solving attitude. Collaborative team spirit.
Posted 1 week ago
7.0 - 12.0 years
15 - 30 Lacs
Pune, Chennai, Bengaluru
Hybrid
Are you curious, motivated, and forward-thinking? At FIS you'll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and, above all, fun. About the team: This role is a part of our OPF team. FIS Open Payment Framework (OPF) is a set of reusable and extensible components, frameworks, and technical services which can be assembled in different configurations to build a personalized Payment Processing System. From the Open Payment Framework, FIS has created predefined solutions around the bank payment hub, including Domestic & International payments (XCT), SEPA Direct Debits & Credit Transfers (SEPA), SCT INST, UK Faster Payments, Immediate Payments, eBanking (EBK), Business Payments (BP), NPP, BACS, and US ACH. What you will be doing: Develop application code for Java programs. Design, implement, and maintain Java application phases. Design, code, debug, and maintain Java/J2EE application systems. Object-oriented design and analysis (OOA and OOD). Evaluate and identify new technologies for implementation. Ability to convert business requirements into executable code solutions. Provide leadership to the technical team. What you bring: Must have 7 to 14 years of experience in Java technologies. Must have experience in the banking domain. Proficiency in Core Java, J2EE, ANSI SQL, XML, Struts, Hibernate, Spring, and Spring Boot. Good experience in database concepts (Oracle/DB2), Docker (Helm Charts), Kubernetes, core Java language features (Collections, Concurrency/Multi-Threading, Localization, JDBC), and microservices. Hands-on experience in web technologies (either Spring or Struts, Hibernate, JSP, HTML/DHTML, REST web services, JavaScript). Must have knowledge of one J2EE application server, e.g., WebSphere Process Server, WebLogic, or JBoss. Working knowledge of JIRA or equivalent. What we offer you: An exciting opportunity to be a part of a world-leading FinTech product MNC. To be a part of a vibrant team and build up a career in the core banking/payments domain. Competitive salary and attractive benefits including GHMI/hospitalization coverage for the employee and direct dependents. A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As a Senior Consultant at Dell Technologies in Bangalore, you will play a crucial role in helping organizations navigate their digital transformation journey. Your responsibilities will include providing technical guidance and solutions for complex engagements, collaborating with global teams to deliver innovative strategies, and deploying digital transformation software stacks. You will be a key member of the Services Delivery team, working alongside other experienced consultants to earn customer trust through your technical expertise and consulting skills. Your role will involve supporting customers in architecting Cloud-Native Apps, driving digital transformation within organizations, and implementing DevSecOps practices to enhance development processes. Additionally, you will evaluate new technology options, onboard automation solutions, and support Sales and Pre-Sales teams in solutioning for customers. Your extensive experience in various technical areas such as DevOps, Cloud Computing, AI & ML, and Data Science will be leveraged to lead large-scale digital transformation projects and guide technical teams effectively. To excel in this role, you should have at least 10 years of experience in the IT industry, with a background in computer science or engineering. Business consulting certifications and proven expertise in areas like Cloud Technologies, DevOps Engineering, and Application Modernization are essential. Experience with automation tools like Ansible, Terraform, and Kubernetes will be highly beneficial. If you are passionate about driving innovation, collaborating with diverse teams, and making a meaningful impact in the technology industry, this role at Dell Technologies could be the perfect opportunity for you. Join us in shaping a future where progress takes all of us. Application closing date: 18-Dec-2024 Dell Technologies is committed to promoting equal employment opportunities and creating a work environment free of discrimination and harassment. For more details, please refer to the full Equal Employment Opportunity Policy.,
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
noida, uttar pradesh
On-site
As an experienced Azure DevOps Architect at our leading pharmacy client, your primary responsibility will be to lead the implementation of DevOps solutions using the Azure DevOps platform. You will leverage your deep understanding of DevOps practices, infrastructure automation, continuous integration (CI), continuous deployment (CD), and cloud-native application development to ensure scalability, security, and efficiency in all aspects of the projects. With a minimum of 10-12 years of overall experience in DevOps and cloud automation, you must possess hands-on experience in DevOps, cloud architecture, and/or software development. Specifically, you should have a strong background in Azure DevOps, including building and managing CI/CD pipelines, ARM Templates or Terraform, cloud deployment, and CI/CD processes. Proficiency in creating build and release pipelines, as well as managing Azure DevOps, is essential for this role. Your expertise should extend to working with Docker and Azure Kubernetes for deployment, along with helm charts for containerization and orchestration. Experience in deploying and managing Azure Cloud, along with in-depth knowledge of Azure services such as Azure Storage Account, Virtual Network, Managed Identity, Service Principle, Key Vault, Event Hub, and different deployment strategies, will be highly valued. Furthermore, familiarity with monitoring and logging tools, Agile methodologies, and integrating DevOps with Agile workflows will be advantageous in fulfilling the responsibilities of this role. As an Azure DevOps Architect, you will play a pivotal role in defining and implementing scalable, secure, and high-performance DevOps pipelines using Azure DevOps. Your responsibilities will include devising DevOps strategies encompassing CI/CD, infrastructure as code (IaC), release management, and environment management. You will collaborate with stakeholders to understand technical requirements and translate them into effective DevOps solutions. Additionally, you will be responsible for building, maintaining, and scaling automated CI/CD pipelines, defining branching strategies, implementing release management, and continuous deployment pipelines for multiple environments. Your role will also involve leading the automation of infrastructure provisioning, designing secure and scalable cloud architectures on Microsoft Azure, and implementing robust monitoring and logging solutions across cloud environments. Moreover, you will work closely with development, QA, operations, and IT teams to promote a DevOps culture of collaboration and shared responsibility. Mentoring junior DevOps engineers and providing ongoing support to team members will also be part of your responsibilities. Your expertise in Azure DevOps, CI/CD pipelines, cloud architecture, automation, security, and compliance, as well as monitoring, will be crucial in driving the success of projects and ensuring the highest standards of quality and efficiency. At our organization, you will have the opportunity to work on exciting projects for leading global brands across various industries. You will collaborate with a diverse team of highly talented individuals in a supportive environment that values work-life balance, professional development, and employee well-being. Join us at GlobalLogic, where we redefine digital engineering and help our clients innovate and thrive in the modern world.,
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As a talented and experienced professional, you are invited to explore exciting opportunities in various engineering roles with a focus on cutting-edge technologies and cloud-native solutions. Join our team in Pune/Nagpur, Maharashtra, for full-time positions in UK shift timings. We are looking for individuals with a minimum of 6 to 8 years of relevant experience who are passionate about innovation and possess deep technical expertise. If you thrive in an agile environment and are eager to contribute to projects involving infrastructure automation, software development life cycle (SDLC), and cloud services, then we encourage you to consider the roles below: Automation Engineer: - Develop and maintain orchestration designs, RESTful APIs, and microservices. - Implement BDD frameworks and work on CI/CD pipelines using Jenkins. - Contribute to infrastructure management and cloud services deployment (AWS/Azure). - Collaborate within Scrum and Agile frameworks. Infrastructure Automation Delivery Manager: - Manage end-to-end infrastructure automation projects. - Provide leadership in project management, requirement gathering, and cloud architecture. - Ensure compliance with SDLC and GxP processes. DevOps/Release Engineer: - Streamline deployment pipelines and maintain release schedules. - Work on configuration management and cloud deployment processes. Chef/SCM Engineer & Developer: - Develop and manage Chef-based configuration management solutions. - Enhance SCM tools and processes for improved efficiency. Kubernetes Engineer: - Design, deploy, and manage Kubernetes clusters and orchestration tools. - Optimize deployments with Helm charts, ArgoCD, and Docker. AWS Engineer: - Architect and maintain cloud-based applications using AWS services. - Utilize AWS tools like Lambda, CloudFront, and CloudWatch for deployment and monitoring. Lead Frontend Software DevOps Engineer: - Lead frontend DevOps projects, ensuring best practices in serverless architecture. - Collaborate on cloud-native designs with a focus on performance and scalability. Frontend Software DevOps Engineer: - Implement frontend development with a focus on automation and scalability. - Leverage modern frameworks for cloud-native solutions. Azure Engineer: - Deploy and manage Azure-based infrastructure and applications. - Collaborate with cross-functional teams for cloud-native solutions. Scrum Master - Cloud Native: - Facilitate Agile/Scrum ceremonies for cloud-native projects. - Drive collaboration across teams for efficient delivery. Cloud Agnostic Engineer: - Design and implement cloud-agnostic solutions across AWS and Azure platforms. - Enhance cloud infrastructure with Kubernetes and Terraform. Additionally, we are seeking individuals with strong communication, writing, and presentation skills, experience in Agile/Scrum methodologies, and familiarity with modern tools such as Artifactory, Docker, CI/CD pipelines, and serverless architectures. Join us to work on innovative projects, collaborate with a highly skilled team, and explore opportunities for professional growth and certifications. If this opportunity excites you, please share your updated resume with the job title mentioned in the subject line to jagannath.gaddam@quantumintegrators.com. Let's build the future together!,
Posted 1 week ago
5.0 - 10.0 years
25 - 40 Lacs
Pune
Work from Office
Set up and manage GCP networking components like VPCs, subnets, and load balancers. Use tools like Terraform to write code for building and managing infrastructure automatically. Manage systems and automate setups using tools like Ansible or Chef. Required Candidate Profile: Experience in a scripting language such as Ruby, Go, or Bash. Experience in Kubernetes and Helm to manage containerized applications. Experience in GitLab CI or similar tools. Understanding of how networks work: TCP/IP, HTTP/HTTPS, etc.
Posted 2 weeks ago
8.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Work from Office
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity: "We are seeking an experienced DevOps Engineer to join our development team to assist in the continuing evolution of our Platform Orchestration product. You will be able to demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Staff training, investment and career growth form an important part of our team ethos. Consequently, you will gain exposure to different software validation techniques supported by industry-standard engineering processes that will help to grow your skills and experience." - VP, Software Engineering. What You'll Contribute: Build and maintain CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices. Manage Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (Istio). Use kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting. Evaluate security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software. Support development and QA teams with code merge, build, install, and deployment environments. Ensure continuous improvement of the software automation pipeline to increase build and integration efficiency. Oversee and maintain the health of software repositories and build tools, ensuring successful and continuous software builds. Verify final software release configurations, ensuring integrity against specifications, architecture, and documentation. Perform fulfillment and release activities, ensuring timely and reliable deployments. What We're Seeking: A Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms. Deep knowledge of AWS cloud services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components. Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize. Expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools. Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation. Familiarity with CI/CD pipeline tools and build systems including Jenkins and Maven. Experience administering software repositories such as Git or Bitbucket. Proficient in scripting/programming languages such as Ruby, Groovy, and Java. Proven ability to analyze and resolve issues related to performance, scalability, and reliability. Solid understanding of DNS, load balancing, SSL, TCP/IP, and general networking and security best practices. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Why Make a Move to FICO? At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring - FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security - 4 billion payment cards globally are protected by FICO fraud systems. Lending - 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at www.fico.com/Careers FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
Posted 2 weeks ago
5.0 - 10.0 years
15 - 20 Lacs
Pune
Work from Office
Role: Sr. DevOps Engineer
Experience: 5-10 years of relevant experience
Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
Location: Pune

Roles & Responsibilities:
- Collaborate with software development and IT teams to understand infrastructure requirements and design scalable solutions for applications and services.
- Develop and maintain CI/CD pipelines for automated build, testing, and deployment processes.
- Implement and manage containerization and orchestration using tools such as Docker and Kubernetes.
- Configure and maintain infrastructure as code using tools like Terraform and Ansible to ensure consistency and scalability.
- Monitor system performance, troubleshoot issues, and ensure high availability and disaster recovery mechanisms are in place (a minimal monitoring sketch appears after this listing).
- Oversee security measures and implement best practices to safeguard data and systems, including the configuration and maintenance of security, operating system administration, software installation, upgrades, and compatibility of system components.
- Schedule, implement, and automate security compliance patching and updates to maintain system security and integrity.
- Optimize infrastructure to achieve cost efficiency and performance improvements.
- Collaborate with development teams to streamline development workflows and ensure seamless integration between development and operations processes.
- Design, create, and maintain comprehensive technical documentation of best practices for all implemented system configurations to ensure efficient planning and execution.
- Stay up to date with the latest DevOps and infrastructure trends and technologies, proposing and implementing improvements as needed.

Requirements & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Proven experience as a DevOps Engineer or Infrastructure Engineer in a production environment, with 3-5 years of relevant experience.
- Strong knowledge of cloud platforms such as AWS (Amazon Web Services) and OCI (Oracle Cloud Infrastructure), and experience with cloud services like EC2, S3, RDS, Oracle Autonomous Database, etc.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with CI/CD tools like Jenkins, GitLab CI, or CircleCI.
- Hands-on experience with containerization technologies like Docker and container orchestration with Kubernetes.
- Solid understanding of networking principles and protocols.
- Knowledge of version control systems, including experience with Git and Bitbucket for version control and collaboration.
- Working knowledge of web application servers such as Nginx and Apache.
- Ability to troubleshoot complex infrastructure and application issues effectively.
- Strong problem-solving skills and a proactive attitude toward resolving challenges.
- Excellent communication and collaboration skills to work effectively within cross-functional teams.

About RIA Advisory:
RIA Advisory LLC (RIA) is a business advisory and technology company that specializes in the field of Revenue Management and Billing for the Banking, Payments, Capital Markets, Exchanges, Utilities, Healthcare and Insurance industry verticals. With a highly experienced team in the field of Pricing, Billing & Revenue Management, RIA prioritizes understanding client needs and industry best practices to approach any problem with insight and careful strategic planning.
Each of RIA Advisory's Managing Partners has over 20 years of industry expertise, and our leadership and consulting teams continue to serve our clients efficiently as a strategic partner, especially in transforming the ORMB and CC&B space. Our operations are spread across the US, UK, India, the Philippines, and Australia.

Services Offered:
• Business Process Advisory for revenue management processes
• Technology Consulting & Implementation
• Helping clients transition to the latest technology suite and overcome business problems
• Managed Services
• Quality Assurance
• Cloud Services

Products Offered:
• Data Migration and Integration Hub
• Data Analytics Platform
• Test Automation
• Document Fulfilment
• Customer Self Service

Top Industries/Verticals:
• Financial Services
• Healthcare
• Energy and Utilities
• Public Sector Revenue Management

We recognize the impact of technologies on the processes and people that drive them, innovate scalable processes, and accelerate the path to revenue realization. We value our people and are Great Place to Work Certified.
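As one concrete illustration of the monitoring responsibility described in this listing, the sketch below uses boto3 to sweep running EC2 instances and flag any whose average CPU utilization over the past hour exceeds a threshold. It is a minimal, hedged example rather than RIA Advisory's actual tooling; the region and the 80% threshold are assumptions chosen for illustration.

import datetime
import boto3

REGION = "ap-south-1"          # assumed region; adjust to your account
CPU_ALERT_THRESHOLD = 80.0     # illustrative threshold, in percent

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

now = datetime.datetime.now(datetime.timezone.utc)

# Walk all running instances and pull their average CPU over the last hour.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - datetime.timedelta(hours=1),
            EndTime=now,
            Period=300,                 # 5-minute datapoints
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue  # no metrics yet, e.g. a freshly launched instance
        avg_cpu = sum(point["Average"] for point in datapoints) / len(datapoints)
        if avg_cpu > CPU_ALERT_THRESHOLD:
            print(f"{instance_id}: average CPU {avg_cpu:.1f}% over the last hour")

In practice a check like this would run on a schedule (cron, a Lambda, or a CI job) and push to an alerting channel rather than printing; a managed CloudWatch alarm is usually the simpler choice, so a script of this kind mostly earns its keep for cross-account or cross-cloud (AWS plus OCI) sweeps.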
Posted 2 weeks ago