0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Category: Administration
Main location: India, Tamil Nadu, Chennai
Position ID: J0625-1027
Employment Type: Full Time

Position Description:

Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: Python Ansible Automation
Position: Senior Systems Engineer/Lead Analyst
Experience: 7+ yrs
Category: IT Infrastructure
Main location: Bangalore
Position ID: J0625-1027
Employment Type: Full Time
Qualification: Bachelor's degree or higher in Computer Science or a related field, with a minimum of 3 years of relevant experience.

Job Description:
1) Scripting: ability to script proficiently in your platform's built-in language (PowerShell for Windows or Bash for Linux); basic programming concepts; tech-savvy with Python data structures to handle complex automation
2) Intermediate-level knowledge of Linux operating systems (mainly Red Hat Enterprise Linux)
3) Advanced knowledge of Git (source code management), GitLab (Git repository), and container runtimes (Podman/Docker)
4) Familiarity with the Red Hat container registry "registry.redhat.io" and the container image registry "quay.io"
5) Comprehensive experience/knowledge of Ansible automation concepts: extensive knowledge of YAML and JSON; design and implementation of advanced Ansible playbooks; simplifying Ansible playbooks using Ansible roles; thorough knowledge of Ansible Content Collections and Ansible Galaxy; excellent troubleshooting and analytical skills for Ansible playbooks; configuration of Ansible Automation Controller UI components; integration of Ansible Automation Platform with GitLab/GitHub and, optionally, DevOps CI/CD workflows; automation of configuration management, application deployment, server provisioning and testing, patch management, and server hardening
6) Intermediate-level knowledge of Kubernetes, VMware, and KVM platforms
7) Intermediate-level knowledge of networking (Cisco ACI, F5 LB, FortiGate ecosystem) and SAN storage (NetApp, Pure Storage, Hitachi, etc.)

Must-have skills: Python/automation and Ansible, with Linux/Red Hat knowledge.
Good-to-have skills: excellent customer-interfacing skills; excellent written and verbal communication skills; participation in daily standups and weekly reviews; strong attention to detail and outstanding analytical and problem-solving skills; understanding of the business and of emerging technologies in the relevant industry (Banking/CIAM), with a strong understanding of market and technology trends in areas of specialization.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.
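To make the Ansible playbook and role requirements above concrete, here is a minimal sketch of the pattern the posting describes: a short playbook that delegates its work to a role, with patching of Red Hat Enterprise Linux hosts as the example task. The inventory group, role name, and variable are hypothetical illustrations, not CGI specifics.

```yaml
---
# site.yml - a minimal playbook that delegates the real work to a role
- name: Patch Red Hat Enterprise Linux servers
  hosts: rhel_servers            # hypothetical inventory group
  become: true
  vars:
    reboot_after_patching: true
  roles:
    - role: patching             # hypothetical role name

---
# roles/patching/tasks/main.yml - the tasks carried by the role
- name: Apply all available package updates
  ansible.builtin.dnf:
    name: "*"
    state: latest

- name: Reboot the host if requested
  ansible.builtin.reboot:
  when: reboot_after_patching | bool
```

Running such a playbook is a single ansible-playbook command against an inventory; the same playbook-plus-role layout extends naturally to the server hardening and application deployment automation listed in the requirements.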
Skills: Ansible, English, Unix, Linux

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID R0158759 | Date posted 07/24/2025 | Location Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

The Future Begins Here. At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city that is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC we Unite in Diversity. Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: The Data Engineer will work directly with architects and product owners on the delivery of data pipelines and platforms for structured and unstructured data as part of a transformational data program. This data program will include an integrated data flow with end-to-end control of data, internalization of numerous systems and processes, broad enablement of automation and near-real-time data access, efficient data review and query, and enablement of disruptive technologies for next-generation trial designs and insight derivation. We are primarily looking for people who love taking complex data and making it easy to use.

As a Data Engineer, you will provide leadership to develop and execute highly complex and large-scale data structures and pipelines to organize, collect and standardize data that generates insights and addresses reporting needs, and interpret and integrate advanced techniques to ingest structured and unstructured data across a complex ecosystem.

Delivery & Business Accountabilities: Build and maintain technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources and large, complex data sets, with a focus on clinical and operational data. Develop data profiling and data quality methodologies and embed them into the processes involved in transforming data across the systems.
Manages and influences the data pipeline and analysis approaches, uses different technologies, big data preparations, programming and loading as well as initial exploration in the process of searching and finding data patterns. Uses data science input and requests, translates these from data exploration - large record (billions) and unstructured data sets - to mathematical algorithms, and uses various tooling, from programming languages to newer tools (artificial intelligence and machine learning), to find data patterns and to build and optimize models. Leads and implements ongoing tests in the search for solutions in data modelling, collects and prepares the training data, tunes the data, and optimizes algorithm implementations to test, scale, and deploy future models. Conducts and facilitates analytical assessments, conceptualizing business needs and translating them into analytical opportunities. Leads the development of technical roadmaps and approaches for data analyses to find patterns, to design data models, and to scale the model to a managed production environment within the current technical landscape or one yet to be developed. Influences and manages data exploration from analysis to scalable models, works independently and decides quickly on transfers in complex data analysis and modelling.

Skills and Qualifications: Bachelor’s degree or higher in a quantitative discipline such as Statistics, Mathematics, Engineering, Computer Science, Econometrics or information sciences such as business analytics or informatics. 5+ years of experience working in a data engineering role in an enterprise environment. Strong experience with ETL/ELT design and implementations in the context of large, disparate and complex datasets. Demonstrated experience with a variety of relational database and data warehousing technologies such as AWS Redshift, Athena, RDS, BigQuery. Demonstrated experience with big data processing systems and distributed computing technologies such as Databricks, Spark, SageMaker, Kafka, Tidal/Airflow, etc. Demonstrated experience with DevOps tools such as GitLab, Terraform, Ansible, Chef, etc. Experience with developing solutions on cloud computing services and infrastructure in the data and analytics space. Solution-oriented enabler mindset. Prior experience with Data Engineering projects and teams at an enterprise level.

Preferred: Understanding or application of Machine Learning and/or Deep Learning. Significant experience in an analytical role in the healthcare industry preferred.

WHAT TAKEDA ICC INDIA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS: It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are: Competitive Salary + Performance Annual Bonus; Flexible work environment, including hybrid working; Comprehensive Healthcare Insurance Plans for self, spouse, and children; Group Term Life Insurance and Group Accident Insurance programs.
Employee Assistance Program Broad Variety of learning platforms Diversity, Equity, and Inclusion Programs Reimbursements – Home Internet & Mobile Phone Employee Referral Program Leaves – Paternity Leave (4 Weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 days) ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization. #Li-Hybrid Locations IND - Bengaluru Worker Type Employee Worker Sub-Type Regular Time Type Full time
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You should have a Bachelor's degree in Computer Science or a related field, or equivalent experience. With at least 3 years of experience in a similar role, you must be proficient in at least one backend programming language such as Java, Python, or Go. Additionally, you should have hands-on experience with cloud platforms like AWS, Azure, or GCP. A strong understanding of DevOps principles and practices is essential for this role, along with experience in containerization technologies like Docker and Kubernetes. You should also be familiar with configuration management tools such as Ansible, Puppet, or Chef, and have worked with CI/CD tools like Jenkins or GitLab CI. Excellent problem-solving and troubleshooting skills are a must, along with strong communication and collaboration abilities. Previous experience with databases like PostgreSQL or MySQL, as well as monitoring and logging tools such as Prometheus, Grafana, and ELK stack, is required. Knowledge of security best practices and serverless technologies will be beneficial for this position. This job opportunity was posted by Ashok Kumar Samal from HDIP.,
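As an illustration of the CI/CD side of this role, a minimal GitLab CI pipeline covering test, image build, and deployment might look like the sketch below. The job names, images, and the web-api deployment are hypothetical placeholders, and the deploy stage assumes the runner is already authenticated against the target Kubernetes cluster.

```yaml
# .gitlab-ci.yml - illustrative three-stage pipeline sketch
stages:
  - test
  - build
  - deploy

run-tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # assumes kubectl is already authenticated against the target cluster
    - kubectl set image deployment/web-api web-api="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: staging
```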
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a seasoned professional, you will lead design, development, and optimization efforts within the Palo Alto Prisma suite, focusing on Prisma Access and Prisma Cloud. Your role will involve working on cloud-native architectures, data-plane applications, and scalable infrastructure to facilitate secure access and cloud operations. Your key responsibilities will include designing and implementing scalable software features for Prisma Access or Prisma Cloud, leading the development of data-plane applications and cloud-native services, and collaborating with cross-functional teams to integrate PanOS features into Prisma platforms. Additionally, you will profile and tune systems software for efficient cloud operation, optimize microservices and containerized workloads for performance and reliability, mentor junior engineers, and contribute to team growth. You will actively participate in design reviews and technical strategy discussions, work closely with DevOps and support teams to troubleshoot and resolve complex issues, build and automate performance testing scenarios, and ensure high reliability and quality through rigorous testing and validation. To be successful in this role, you should have 9-13 years of experience in software engineering or cloud infrastructure, strong programming skills in C/C++, Python, or Go, a deep understanding of operating systems (Linux/Unix), networking (TCP/IP, TLS), and cloud platforms, experience with microservices, container orchestration (Kubernetes), and CI/CD pipelines, as well as a proven track record of delivering enterprise-grade software solutions. Preferred experience includes hands-on experience with Palo Alto Prisma Access or Prisma Cloud, exposure to cloud providers such as AWS, Azure, GCP, familiarity with infrastructure-as-code tools like Terraform and Ansible, and strong debugging, profiling, and performance tuning skills.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a GCP Architect with AI Expertise, you will play a crucial role in leading the design, implementation, and optimization of cloud and AI-driven solutions on Google Cloud Platform (GCP). Your deep expertise in cloud infrastructure, AI/ML, networking, and security will be essential for architecting scalable, secure, and highly available cloud solutions while leveraging AI to drive automation and intelligence across enterprise environments. Your responsibilities will include designing and implementing scalable, secure, and cost-effective architectures on GCP, developing AI/ML-driven solutions using tools such as Vertex AI, BigQuery ML, TensorFlow, and AutoML, and architecting end-to-end data pipelines to support AI model training, deployment, and monitoring. You will also be responsible for building high-availability, backup, and disaster recovery solutions in GCP. In terms of infrastructure and security, you will implement identity management, IAM policies, security protocols, and network segmentation, design and automate provisioning of networks, firewalls, ACLs, and access control policies, and ensure compliance with security and governance frameworks for cloud-based AI workloads. Additionally, you will manage multi-VPC networking, hybrid cloud connectivity, and firewall configurations. Your role will also involve leading large-scale application and network migrations to GCP, developing Terraform-based infrastructure automation, and managing Infrastructure as Code (IaC). You will implement CI/CD pipelines for AI/ML workloads using tools like GitHub, Jenkins, and Ansible, and design DevOps solutions with cloud-native tools and containerized workloads on GKE. As a technical leader and advisor, you will provide strategic guidance on cloud adoption, AI/ML use cases, and infrastructure modernization, lead requirements gathering, solution design, and implementation for GCP cloud-based AI solutions, and support pre-sales activities, including POCs, RFP responses, and SOW creation. You will mentor and provide technical oversight to engineering teams and serve as a trusted cloud advisor for customers, helping them solve complex cloud infrastructure and AI challenges. Furthermore, you will be responsible for optimizing cloud resource usage, cost management, and AI model performance, conducting cloud usage analytics, security audits, and AI model monitoring, and developing strategies to improve network reliability, redundancy, and high availability. If you are passionate about cloud transformation, AI, and solving complex infrastructure challenges, this role presents a perfect opportunity for you to make a meaningful impact in the field of cloud and AI technologies.,
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
At BMC, trust is not just a word - it's a way of life! We are an award-winning, equal opportunity, culturally diverse, and fun place to be. Our commitment to giving back to the community drives us to be better every single day. In our work environment, we prioritize balancing your priorities because we know that you will bring your best every day. We celebrate your wins and support you every step of the way. Your peers will inspire, drive, and make you laugh out loud! We help our customers achieve autonomy in the digital world by freeing up time and space to become an Autonomous Digital Enterprise. We are relentless in our pursuit of innovation, with a focus on modernizing mainframe systems through our Intelligent Z Optimization & Transformation products. Our goal is to enhance the developer experience, mainframe integration, application development speed, code quality, and application security while reducing operational costs and risks. As we continue to grow, innovate, and perfect our solutions, we are looking for a talented SQA Engineer to join our family. In this exciting role at BMC, you will work with developers, architects, and other Quality Assurance team members to validate product functionality through manual or automated test execution. You will identify end user scenarios, document test cases, track issues and defects, and validate fixes provided by developers. Additionally, you will contribute to product and process improvements, refine QA practices, and understand product capabilities from a customer perspective. You will have the opportunity to showcase/demo products to customers and stakeholders when required. Your responsibilities will also include writing test automation scripts, problem-solving, debugging issues, and automating manual assignments in Robot/SuperTest. To succeed in this role, you should have a Bachelor's degree in computer science, engineering, or a related field with at least 6 years of experience as a Quality Test Engineer in a distributed/mainframe environment. You should have significant experience in functional, regression, and/or load testing, proficiency in at least one server-side language such as Java or Python, and experience with AI/ML implementation and tools. Hands-on experience with automation/scripting tools like Selenium, Cucumber, Robot Framework, or Ansible is required, along with strong test automation experience, REST APIs testing experience, and familiarity with SaaS products, cloud technologies, and agile software development methodologies. While these skills are essential, we are also looking for someone who can contribute to testing innovative solutions in the AI/ML domain, leverage data to derive insights, build predictive models, and enhance our mainframe services. If you are seeking to re-enter the workforce after a career break, we encourage you to apply as we value talents from diverse backgrounds and experiences. At BMC, our culture is built around our people, and we value authenticity and diversity. We offer a competitive compensation package that includes various rewards and benefits tailored to different countries. If you are passionate about BMC and excited about joining our team, we welcome you to apply and be part of our global community of brilliant minds.,
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
The role of a DevOps Engineer for Sensitive Data Detection involves being responsible for a variety of key tasks including deploying cloud infrastructure/services, managing day-to-day operations, troubleshooting issues related to cloud infrastructure/services, deploying and managing AKS clusters, collaborating with development teams to integrate infrastructure and deployment pipelines in the SDLC, building and setting up new CI/CD tooling and pipelines for application build and release, and implementing migrations, upgrades, and patches in all environments. In this position, you will be a part of the Sensitive Data Detection Services team based in the Pune EON-2 Office. The team focuses on analyzing, developing, and delivering global solutions to maintain or change IT systems in collaboration with business counterparts. The team culture emphasizes partnership with businesses, transparency, accountability, empowerment, and a passion for the future. As an experienced DevOps Engineer, you will have a significant role in constructing and maintaining Git and ADO CI/CD pipelines, establishing scalable AKS clusters, deploying applications in various environments, and more. You will work alongside a group of highly skilled engineers who excel in delivering scalable enterprise engineering solutions. To excel in this role, ideally, you should possess 8 to 12 years of experience in the DevOps field. You should have a strong grasp of DevOps principles and best practices, with practical experience in implementing CI/CD pipelines, infrastructure as code, and automation solutions. Proficiency in Azure Kubernetes Service (AKS) and Linux is crucial, including monitoring, analyzing, configuring, deploying, enhancing, and managing containerized applications on AKS. Experience in managing Helm charts and ADO and GitLab pipelines is essential. Additionally, familiarity with cloud platforms like Azure, along with experience in cloud services, infrastructure provisioning, and management, is required. You will also be expected to help implement infrastructure as code (IaC) solutions using tools such as Terraform or Ansible to automate the provisioning and configuration of cloud and on-premises environments, maintain stability in non-prod environments, support GitLab pipelines across various MS Azure resources/services, research opportunities for automation, and efficiently interact with business and development teams at all levels. Flexibility in working hours to accommodate international project setups and collaboration with cross-functional teams (Database, UNIX, Cloud, etc.) are also key aspects of this role.
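A compressed sketch of the ADO pipeline work described above, building a container image and upgrading a Helm release on AKS, is shown below. The service connection, registry repository, resource group, cluster, and chart names are hypothetical placeholders rather than details of this role.

```yaml
# azure-pipelines.yml - illustrative build-and-deploy sketch for AKS
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      containerRegistry: acr-service-connection     # placeholder service connection
      repository: sdd/detector                      # placeholder image repository
      command: buildAndPush
      tags: $(Build.BuildId)

  - task: AzureCLI@2
    inputs:
      azureSubscription: azure-service-connection   # placeholder service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # fetch cluster credentials, then upgrade the Helm release to the new tag
        az aks get-credentials --resource-group rg-sdd-nonprod --name aks-sdd-nonprod
        helm upgrade --install detector charts/detector \
          --set image.tag=$(Build.BuildId)
```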
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Security leader with a background in AWS and cloud Security, you play a crucial role in defining and enforcing the security policies and procedures of the organization. With excellent written and verbal communication skills, exceptional organizational abilities, and expert-level proficiency in IT and Cloud Security, you will be responsible for architecting and implementing IT Security policies while reporting to the Director of Information Technology. In this full-time role, your essential duties and responsibilities include providing leadership and technology vision to the IT Security team, performing internal and external security audits, documenting, implementing, and monitoring adherence to IT security standards, as well as assessing and improving security metrics. You will work on enhancing security tools and operations, monitor and manage IDS, vulnerability scanning, and assessments, and serve as the Data Privacy Officer (DPO) for the company. Creating awareness within the company regarding Security, Privacy, and compliance requirements, ensuring security and privacy training for staff involved in data processing, conducting security and privacy audits, and serving as the point of contact between the company and clients for privacy controls are key aspects of your role. Additionally, you will be responsible for log aggregation and analysis, managing Anti-Virus software, addressing security and data breach-related incidents, and ensuring customer satisfaction while being accountable for individual product/project success and quality. To qualify for this position, you must hold certifications such as CISSP, Security+, or equivalent, along with having 10+ years of Cyber Security experience, 5+ years of IT management experience, 5+ years of AWS experience, and 3+ years of experience with Identity & Access Management tools. Your extensive experience with Linux & Windows Security administration, managing Cloud and Container Security, Network and Application penetration testing, vulnerability scanners, IDS, IPS deployment and monitoring, SIEM tools, security automation, incident response & management, vulnerability management, and patch management will be essential. Moreover, your role will involve ensuring organization efficiencies through continual improvement programs, representing the organization in inspections and audits, driving action plans to closure, conducting deep dive RCAs and ensuring CAPAs are closed, and maintaining a metrics-driven approach. Additional qualifications such as experience with monitoring tools like Datadog, Change Management, Configuration Management, Infrastructure as Code tools, hardening Operating Systems and Applications, endpoint security management, working in GxP environments, and familiarity with various practices will be beneficial. With no travel expectations, this role requires a dedicated and experienced professional who can effectively lead security operations and teams, prioritize security and privacy, and drive continuous improvement initiatives to enhance organizational security posture.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Join us as a DevOps & Cloud Engineer at Dedalus, one of the global leaders in healthcare technology, and be part of our CIS4U team working from our Chennai Office in India to shape the future of digital health infrastructure. As a DevOps & Cloud Engineer, you will play a key role in building and maintaining a scalable, secure, and resilient platform to support continuous integration, delivery, and operations of modern healthcare applications. Your work will directly contribute to enabling development teams to deliver better, faster, and safer solutions for patients and providers around the world. You will design and maintain tooling for deployment, monitoring, and operations of containerized applications across hybrid cloud and on-premises infrastructure. Implement and manage Kubernetes-based workloads ensuring high availability, scalability, and security. Develop new platform features using Go or Java, and maintain existing toolchains. Automate infrastructure provisioning using IaC tools such as Terraform, Helm, or Ansible. Collaborate with cross-functional teams to enhance platform usability and troubleshoot issues. Participate in incident response and on-call rotation to ensure uptime and system resilience. Create and maintain architecture and process documentation for shared team knowledge. At DH Healthcare, your work will empower clinicians and health professionals to deliver better care through reliable and modern technology. Join us and help shape the healthcare landscape by enabling the infrastructure that powers mission-critical healthcare systems. Essential Requirements: - 5+ years of experience in DevOps, Cloud Engineering, or Platform Development roles - Strong background in software engineering and/or system integrations - Proficiency in Go, Java, or similar languages - Hands-on experience with containerization and orchestration (Docker, Kubernetes) - Experience with CI/CD pipelines and DevOps methodologies - Practical knowledge of IaC tools like Terraform, Helm, Ansible - Exposure to Linux, Windows, and cloud-native environments - Strong written and verbal communication skills in English - Bachelor's degree in Computer Science, Information Systems, or equivalent Desirable Requirements: - Experience supporting large-scale or enterprise healthcare applications - Familiarity with Agile/Scrum practices and DevSecOps tools - Exposure to hybrid infrastructure and cloud operations - Enthusiastic about automation, security, and performance optimization - Passion for continuous improvement and collaboration At DH Healthcare, we are committed to transforming care delivery through smart, scalable, and resilient platforms. We value innovation, collaboration, and a deep sense of purpose in everything we do. You will join a global team dedicated to improving patient outcomes and supporting health professionals with technology that truly matters. If you're ready to be part of something meaningful, apply now. Application closing date: 15th of August 2025.,
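For the Kubernetes-based workloads mentioned above, a minimal high-availability manifest might look like the sketch below; the application name, namespace, image, and probe path are hypothetical examples, not Dedalus specifics.

```yaml
# deployment.yaml - illustrative HA workload: three replicas behind a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-api
  namespace: cis4u
spec:
  replicas: 3
  selector:
    matchLabels:
      app: patient-api
  template:
    metadata:
      labels:
        app: patient-api
    spec:
      containers:
        - name: patient-api
          image: registry.example.com/patient-api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: patient-api
  namespace: cis4u
spec:
  selector:
    app: patient-api
  ports:
    - port: 80
      targetPort: 8080
```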
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : AWS CloudFormation Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, manage project timelines, and contribute to the overall success of application development initiatives. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor project progress and ensure alignment with business goals. CORE COMPETENCIES - Cloud Platforms: AWS (Landing Zone, Control Tower, Organizations, Backup, EKS) - DevOps Tooling: GitLab/Gitlab Runners, Jenkins, Docker, Kubernetes, Terraform Enterprise TFE, Rancher, Gerrit, Bamboo, Rally, UrbanCodeDeploy - IaC & Automation: Terraform, AWS Account Factory for Terraform (AFT), Ansible, Puppet, Ant, Maven, Groovy, Python, Bash - Governance & Security: IAM, SCPs, AWS Backup Policies, Sentinel Policies, OPA Policies - CI/CD & GitOps: GitLab Pipelines, Jenkins Pipelines, AWS CodePipeline, Helm, FluxCD, Kustomization, Nexus, Artifactory, SonarQube - Monitoring & Logging: CloudWatch - Leadership: Team Management, Stakeholder Engagement, Agile Delivery, Mentoring, Platform Strategy Professional & Technical Skills: - Must To Have Skills: Proficiency in AWS CloudFormation. - Strong understanding of infrastructure as code principles. - Experience with cloud architecture and deployment strategies. - Familiarity with DevOps practices and tools. - Ability to troubleshoot and optimize cloud-based applications. Additional Information: - The candidate should have minimum 5 years of experience in AWS CloudFormation. - This position is based in Mumbai. - A 15 years full time education is required.
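Since AWS CloudFormation is the must-have skill here, a minimal illustrative template is sketched below: a versioned, encrypted S3 artifact bucket parameterized by environment. The resource, naming, and parameter choices are hypothetical examples only, not part of the project description.

```yaml
# template.yaml - minimal CloudFormation sketch of one parameterized resource
AWSTemplateFormatVersion: "2010-09-09"
Description: Versioned, encrypted S3 bucket for application artifacts

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, test, prod]
    Default: dev

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # bucket name combines the environment with the account ID for uniqueness
      BucketName: !Sub "app-artifacts-${Environment}-${AWS::AccountId}"
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

Outputs:
  BucketName:
    Value: !Ref ArtifactBucket
```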
Posted 1 week ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Role: AWS Cloud Infrastructure
Location: Pan India
Exp: 3 to 10 Yrs

Job Summary: We are seeking a highly skilled Cloud Infrastructure Engineer to design, deploy, and manage our scalable, secure, and high-availability AWS cloud infrastructure. The ideal candidate will have extensive experience in network engineering, security solutions implementation, automation, scripting, system administration, and monitoring.

Responsibilities:
Cloud Infrastructure Management: Design, deploy, and manage scalable, secure, and high-availability AWS cloud infrastructure. Optimize AWS services (EC2, VPC, S3, RDS, Lambda, etc.) to ensure efficient operation and cost management.
Network Engineering: Configure, manage, and troubleshoot network routing and switching across cloud and on-premises environments. Implement and maintain advanced network security solutions, including firewalls, VPNs, and intrusion detection/prevention systems.
Security Solutions Implementation: Develop and implement end-to-end network security solutions to protect against internal and external threats. Monitor network traffic and security logs to identify and mitigate potential security breaches.
Automation and Scripting: Automate infrastructure provisioning, configuration management, and deployment processes using tools such as Terraform and Ansible. Develop custom scripts and tools in Python to improve operational efficiency and reduce manual intervention. Implement automation strategies to streamline repetitive tasks and enhance productivity.
System Administration: Perform system administration tasks for Linux servers, including installation, configuration, maintenance, and troubleshooting. Manage and integrate Active Directory services for authentication and authorization.
Firewall and Security Management: Administer and troubleshoot Palo Alto firewalls and Panorama for centralized management and policy enforcement. Manage Cisco Meraki wireless and security stacks, ensuring robust network performance and security compliance.
Monitoring and Optimization: Implement monitoring solutions to track performance metrics, identify issues, and optimize network and cloud resources. Conduct regular performance tuning, capacity planning, and system audits to ensure optimal operation.
Collaboration and Support: Work closely with cross-functional teams, including DevOps, Security, and Development, to support infrastructure and application needs. Provide technical support and guidance to internal teams, ensuring timely resolution of network and system issues.
Documentation and Compliance: Maintain comprehensive documentation of network configurations, infrastructure designs, and operational procedures. Ensure compliance with industry standards and regulatory requirements through regular audits and updates.
Continuous Improvement: Stay updated with the latest trends and technologies in cloud computing, networking, and cybersecurity. Propose and implement improvements to enhance system reliability and security.

Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a Cloud Engineer, Network Engineer, or similar role. Strong knowledge of AWS services and cloud infrastructure management. Proficiency in network engineering, including routing, switching, and security solutions. Experience with automation tools such as Terraform, Ansible, and scripting languages like Python. Solid system administration skills, particularly with Linux servers.
Experience managing firewalls and security solutions (e.g., Palo Alto, Cisco Meraki). Strong problem-solving skills and the ability to work in a collaborative environment. Excellent documentation and communication skills.

Preferred Qualifications: AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified SysOps Administrator). Familiarity with DevOps practices and tools. Knowledge of regulatory requirements and compliance standards (e.g., PCI, CIS). (ref:hirist.tech)
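One way the Ansible-plus-AWS side of this role comes together is a playbook that provisions infrastructure directly against the AWS APIs. The sketch below assumes the amazon.aws collection and AWS credentials are available on the control node; the AMI, subnet, region, instance name, and tags are placeholders, not details from the listing.

```yaml
# provision-ec2.yml - illustrative sketch; run with: ansible-playbook provision-ec2.yml
- name: Provision an application server in AWS
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch (or find) the EC2 instance
      amazon.aws.ec2_instance:
        name: app-server-01                      # placeholder instance name
        region: ap-south-1
        image_id: ami-0123456789abcdef0          # placeholder AMI
        instance_type: t3.medium
        vpc_subnet_id: subnet-0abc12345def67890  # placeholder subnet
        tags:
          Environment: dev
          ManagedBy: ansible
        state: running
      register: ec2

    - name: Show what was created
      ansible.builtin.debug:
        var: ec2.instances
```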
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Python Developer, you will be responsible for leveraging your expertise in Python development to work on Python SDKs for AWS, GCP, and OCI. Your strong knowledge and hands-on experience with API and ORM frameworks such as Flask and SQLAlchemy will be crucial in this role. Additionally, you will utilize your experience in Async and Event-based task execution programming to contribute effectively. Your proficiency in both Windows and Linux environments will allow you to thrive in this position, along with your familiarity with automation tools like Ansible or Chef. Hands-on experience with at least one cloud provider and the ability to write Terraform or cloud-native templates will be valuable assets. Moreover, your knowledge of container technology and experience with CI/CD processes will be essential in achieving success in this role. This is a full-time, permanent position with benefits including Provident Fund. The work location is in-person during the evening shift from Monday to Friday. As part of the application process, you will be asked about your experience with Python SDKs for AWS, GCP, and OCI, and your availability for the timing of 2 PM to 11 PM IST. Join our team and contribute your expertise to our dynamic projects that require a strong foundation in Python development and cloud technologies.,
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a DevOps Engineer at Capgemini, you will have the opportunity to shape your career according to your aspirations in a supportive and inspiring environment. You will work with a collaborative global community of colleagues to push the boundaries of what is achievable. By joining us, you will play a key role in assisting the world's top organizations in harnessing the full potential of technology to create a more sustainable and inclusive world. Your responsibilities will include building and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, and Azure DevOps. You will automate infrastructure deployment using Terraform, Ansible, or CloudFormation, and set up monitoring systems with Prometheus, Grafana, and ELK. Managing containers with Docker and orchestrating them through Kubernetes will be a crucial part of your role. Additionally, you will collaborate closely with developers to integrate DevOps practices into the Software Development Life Cycle (SDLC). To excel in this position, you should ideally possess 6 to 12 years of experience in DevOps, CI/CD, and Infrastructure as Code (IaC). Your expertise should extend to Docker, Kubernetes, and cloud platforms such as AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, and ELK is essential, along with knowledge of security, compliance, and performance aspects. Being ready for on-call duties and adept at handling production issues are also required skills for this role. At Capgemini, you will enjoy a flexible work environment with hybrid options, along with a competitive salary and benefits package. Your career growth will be supported through opportunities for SAP and cloud certifications. You will thrive in an inclusive and collaborative workplace culture that values teamwork and diversity. Capgemini is a global leader in business and technology transformation, facilitating organizations in their digital and sustainable evolution. With a diverse team of over 340,000 members across 50 countries, Capgemini leverages its 55-year legacy to deliver comprehensive services and solutions, ranging from strategy and design to engineering. The company's expertise in AI, generative AI, cloud, and data, combined with industry knowledge and partnerships, enables clients to unlock the true potential of technology to meet their business requirements effectively.,
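The monitoring portion of this role can be illustrated with a small Prometheus alerting-rule file such as the sketch below; the alert names, thresholds, and the http_requests_total metric are assumptions for illustration, not Capgemini specifics.

```yaml
# alert-rules.yml - minimal Prometheus alerting-rule sketch
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"

      - alert: HighErrorRate
        # ratio of 5xx responses to all responses over the last 5 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "5xx error ratio above 5% for 10 minutes"
```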
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
At Capgemini Engineering, the global leader in engineering services, you will be part of a team of engineers, scientists, and architects dedicated to helping the world's most innovative companies reach their full potential. From cutting-edge technologies like autonomous cars to life-saving robots, our digital and software technology experts are known for their out-of-the-box thinking, providing unique R&D and engineering services across various industries. Join us for a career filled with diverse opportunities where each day brings new challenges and possibilities. Your Profile: - Strong understanding of DevOps tools and methodologies - Hands-on experience with Ansible for automation and configuration management, Jenkins for CI/CD pipelines, and Terraform for IaC - Proficiency in Linux system administration and experience with Oracle databases - Familiarity with cloud platforms such as Azure and AWS - Strong focus on quality assurance, code integrity, and workflow optimization - Knowledge of the 3DEXPERIENCE platform, including CATIA 3DX and CATIA V5 POWER'BY Your Role: You will be working on the 3DEXPERIENCE platform utilizing your DevOps skills in Ansible and Jenkins. What You'll Love About Working Here: - Shape your career with a range of career paths and internal opportunities within the Capgemini group - Receive personalized career guidance from our leaders - Enjoy comprehensive wellness benefits including health checks, telemedicine, insurance coverage, elder care, partner support, and flexible work options - Access one of the industry's largest digital learning platforms with 250,000+ courses and numerous certifications Capgemini is a global business and technology transformation partner, committed to helping organizations accelerate their digital and sustainable journey while making a positive impact on enterprises and society. With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its 55+ years of heritage and expertise to unlock technology's value for clients across various industries. From strategy and design to engineering, Capgemini provides end-to-end services and solutions powered by market-leading capabilities in AI, generative AI, cloud, and data, supported by deep industry knowledge and a strong partner ecosystem.,
Posted 1 week ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: AWS Cloud Monitoring and Ansible Specialist

Job Description

Key Responsibilities

AWS Cloud Monitoring & Performance Management: Design, implement, and manage monitoring solutions for AWS cloud infrastructure using tools like Amazon CloudWatch, AWS X-Ray, or third-party monitoring tools (e.g., Datadog, New Relic, Nagios). Define and set up metrics, alerts, and dashboards for system health, application performance, and infrastructure reliability. Troubleshoot and resolve AWS infrastructure issues to minimize downtime and optimize system performance.

Automation Using Ansible: Write, manage, and maintain Ansible playbooks for automating configuration management, deployments, patching, and other operational processes. Develop and test automation workflows to ensure reliable execution across different environments. Collaborate with DevOps and development teams to streamline CI/CD pipelines using Ansible.

Cloud Infrastructure Management: Migration from Chef to Ansible will be an added advantage. Deploy and manage AWS services, including EC2, S3, RDS, Lambda, VPC, CloudFormation, etc. Optimize AWS resources for cost efficiency and performance. Stay updated on the latest AWS offerings and recommend relevant services to enhance infrastructure.

Incident Management and Problem Resolution: Monitor system incidents and resolve them efficiently, ensuring adherence to SLAs. Perform root cause analysis and implement preventive measures to mitigate recurring issues. Maintain and improve incident response processes and documentation.

Documentation and Reporting: Maintain accurate documentation of infrastructure configurations, monitoring systems, and automation scripts. Create reports to demonstrate cloud environment health, resource utilization, and compliance. Share knowledge and best practices with team members through documentation and training sessions.

Security and Compliance: Implement security best practices for monitoring and automation scripts. Ensure systems are compliant with organizational and regulatory requirements. Collaborate with security teams to perform vulnerability assessments and patch management.

Required Skills and Qualifications (Technical): Extensive experience in AWS services, architecture, and tools (e.g., CloudWatch, CloudFormation, IAM, EC2, S3, Lambda, etc.). Proficient in writing and managing Ansible playbooks for automation and orchestration. Experience with monitoring tools and setting up dashboards (e.g., Datadog, Prometheus, Grafana, etc.). Strong understanding of networking concepts within AWS, including VPCs, subnets, routing, and security groups. Experience with Linux/Unix environments and scripting languages like Python, Bash, or PowerShell. Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline. Knowledge of cloud cost optimization strategies and resource tagging.

Soft Skills: Strong problem-solving and troubleshooting abilities. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to multitask and prioritize tasks in a fast-paced environment.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
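For the CloudWatch alarm work described above, and since CloudFormation is listed among the managed services, a single-alarm template is sketched below; the instance ID, SNS topic, and threshold are hypothetical placeholders, not part of the role description.

```yaml
# cloudwatch-alarm.yaml - illustrative CloudFormation sketch of one CPU alarm
AWSTemplateFormatVersion: "2010-09-09"
Description: CPU utilisation alarm for a single EC2 instance

Resources:
  HighCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: CPU above 80% for 10 minutes
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Dimensions:
        - Name: InstanceId
          Value: i-0123456789abcdef0            # placeholder instance
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - arn:aws:sns:ap-south-1:111122223333:ops-alerts   # placeholder SNS topic
```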
Posted 1 week ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: DevOps with GitHub Actions

Job Description

Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.

Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm Charts, and Python.
AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.

Required Skills & Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
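Although the day-to-day work above centres on GitLab, the GitHub Actions side named in the position title can be sketched as a minimal workflow like the one below; the registry, image name, deployment, and the kubeconfig secret are hypothetical placeholders.

```yaml
# .github/workflows/build-and-deploy.yml - illustrative sketch only
name: build-and-deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GitHub Container Registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
      - name: Build and push the image
        run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.sha }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Write kubeconfig from a repository secret   # placeholder secret name
        run: echo "${{ secrets.KUBE_CONFIG }}" > kubeconfig
      - name: Roll the deployment to the new image
        run: kubectl --kubeconfig kubeconfig set image deployment/web web=ghcr.io/${{ github.repository }}:${{ github.sha }}
```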
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
The Red Hat Customer Experience and Engagement (CEE) team is seeking an experienced engineer to join the Solutions Support team in LOCATION. In this role, you will specialize in Red Hat's technologies, including Red Hat OpenShift, Red Hat Enterprise Linux (RHEL), and Red Hat Ansible Automation Platform. Your main responsibility will be to offer expert technical support to a limited number of customers. You will collaborate closely with customers, Red Hat's Global Support team, Critical Accounts team, and Engineering teams, engaging with some of Red Hat's most strategic customers. As a part of Red Hat's unique culture, you will have the opportunity to contribute to open practices in management, decision making, DEI, and associate growth. Red Hat is recognized as one of the top workplaces in technology due to its emphasis on associate growth, work/life balance, and innovation. You will have the chance to propose innovative solutions to intricate problems and participate in various Red Hat Recognition programs. Your responsibilities will include: - Providing advanced technical support to customers via web-based and phone support - Collaborating with Red Hat enterprise customers globally on a 24x7 basis, involving periodic shifts - Regularly meeting with customers to ensure alignment of Red Hat with their support priorities - Working with other Red Hat teams engaged with customers - Performing technical diagnostics and troubleshooting customer technical issues to develop solutions - Exceeding customer expectations through exceptional communication and service - Consulting and building relationships with Red Hat engineering teams to enhance solutions and customer satisfaction - Contributing to the global Red Hat Knowledge Management System by sharing knowledge and presenting troubleshooting instructions and solutions to other engineers within Red Hat Qualifications sought: - 5+ years of relevant experience - Strong communication skills for technical and non-technical interactions with customers - Excellent troubleshooting and debugging abilities - A passion for technical investigation and issue resolution - Experience in Linux system administration, including installation, configuration, and maintenance - Basic knowledge of Linux containers Desirable qualifications include: - Experience with container orchestration (Kubernetes) - Familiarity with cloud services like AWS, Azure, GCP - Knowledge of Ansible and YAML - Linux scripting experience - Understanding of change window/change controls - Prior Red Hat Certified Engineer (RHCE) or other Linux certifications; candidates in this role are expected to pass the RHCE certification within 90 days Join Red Hat, the world's foremost provider of enterprise open-source software solutions, and be part of a community-powered organization delivering high-performance Linux, cloud, container, and Kubernetes technologies. With associates working flexibly across 40+ countries, Red Hat fosters an open and inclusive environment where all ideas are valued, regardless of title or tenure. Red Hat's culture is based on open source principles of transparency, collaboration, and inclusion, empowering diverse voices to drive innovation. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. Join us in celebrating diversity and innovation at Red Hat.,
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position: AWS Cloud Monitoring and Ansible Specialist

Job Description

Key Responsibilities

AWS Cloud Monitoring & Performance Management: Design, implement, and manage monitoring solutions for AWS cloud infrastructure using tools like Amazon CloudWatch, AWS X-Ray, or third-party monitoring tools (e.g., Datadog, New Relic, Nagios). Define and set up metrics, alerts, and dashboards for system health, application performance, and infrastructure reliability. Troubleshoot and resolve AWS infrastructure issues to minimize downtime and optimize system performance.

Automation Using Ansible: Write, manage, and maintain Ansible playbooks for automating configuration management, deployments, patching, and other operational processes. Develop and test automation workflows to ensure reliable execution across different environments. Collaborate with DevOps and development teams to streamline CI/CD pipelines using Ansible.

Cloud Infrastructure Management: Migration from Chef to Ansible will be an added advantage. Deploy and manage AWS services, including EC2, S3, RDS, Lambda, VPC, CloudFormation, etc. Optimize AWS resources for cost efficiency and performance. Stay updated on the latest AWS offerings and recommend relevant services to enhance infrastructure.

Incident Management and Problem Resolution: Monitor system incidents and resolve them efficiently, ensuring adherence to SLAs. Perform root cause analysis and implement preventive measures to mitigate recurring issues. Maintain and improve incident response processes and documentation.

Documentation and Reporting: Maintain accurate documentation of infrastructure configurations, monitoring systems, and automation scripts. Create reports to demonstrate cloud environment health, resource utilization, and compliance. Share knowledge and best practices with team members through documentation and training sessions.

Security and Compliance: Implement security best practices for monitoring and automation scripts. Ensure systems are compliant with organizational and regulatory requirements. Collaborate with security teams to perform vulnerability assessments and patch management.

Required Skills and Qualifications (Technical): Extensive experience in AWS services, architecture, and tools (e.g., CloudWatch, CloudFormation, IAM, EC2, S3, Lambda, etc.). Proficient in writing and managing Ansible playbooks for automation and orchestration. Experience with monitoring tools and setting up dashboards (e.g., Datadog, Prometheus, Grafana, etc.). Strong understanding of networking concepts within AWS, including VPCs, subnets, routing, and security groups. Experience with Linux/Unix environments and scripting languages like Python, Bash, or PowerShell. Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline. Knowledge of cloud cost optimization strategies and resource tagging.

Soft Skills: Strong problem-solving and troubleshooting abilities. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to multitask and prioritize tasks in a fast-paced environment.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position: Senior Engineer/Technical Lead (DevOps Engineer - Azure)

Job Description

Key Responsibilities:
Azure Cloud Management: Design, deploy, and manage Azure cloud environments. Ensure optimal performance, scalability, and security of cloud resources using services like Azure Virtual Machines, Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure SQL Database.
Automation & Configuration Management: Use Ansible for configuration management and automation of infrastructure tasks. Implement Infrastructure as Code (IaC) using Azure Resource Manager (ARM) templates or Terraform.
Containerization: Implement and manage Docker containers. Develop and maintain Dockerfiles and container orchestration strategies with Azure Kubernetes Service (AKS) or Azure Container Instances.
Server Administration: Administer and manage Linux servers. Perform routine maintenance, updates, and troubleshooting.
Scripting: Develop and maintain Shell scripts to automate routine tasks and processes.
Helm Charts: Create and manage Helm charts for deploying and managing applications on Kubernetes clusters.
Monitoring & Alerting: Implement and configure Prometheus and Grafana for monitoring and visualization of metrics. Use Azure Monitor and Azure Application Insights for comprehensive monitoring, logging, and diagnostics.
Networking: Configure and manage Azure networking components such as Virtual Networks, Network Security Groups (NSGs), Azure Load Balancer, and Azure Application Gateway.
Security & Compliance: Implement and manage Azure Security Center and Azure Policy to ensure compliance and security best practices.

Required Skills and Qualifications:
Experience: 5+ years of experience in cloud operations, with a focus on Azure.
Azure Expertise: In-depth knowledge of Azure services, including Azure Virtual Machines, Azure Kubernetes Service, Azure App Services, Azure Functions, Azure Storage, Azure SQL Database, Azure Monitor, Azure Application Insights, and Azure Security Center.
Automation Tools: Proficiency in Ansible for configuration management and automation. Experience with Infrastructure as Code (IaC) tools like ARM templates or Terraform.
Containerization: Hands-on experience with Docker for containerization and container management.
Linux Administration: Solid experience in Linux server administration, including installation, configuration, and troubleshooting.
Scripting: Strong Shell scripting skills for automation and task management.
Helm Charts: Experience with Helm charts for Kubernetes deployments.
Monitoring Tools: Familiarity with Prometheus and Grafana for metrics collection and visualization.
Networking: Experience with Azure networking components and configurations.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues.
Communication: Excellent communication skills, both written and verbal, with the ability to work effectively in a team environment.

Preferred Qualifications:
Certifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect) are a plus.
Additional Tools: Experience with other cloud platforms (AWS, GCP) or tools (Kubernetes, Terraform) is beneficial.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Architect at FICO, you will play a crucial role in architecting, designing, implementing, and managing cloud infrastructure solutions using tools like ArgoCD, Crossplane, GitHub, Terraform, and Kubernetes. You will lead initiatives to enhance our Cloud and GitOps best practices, mentor junior team members, collaborate with cross-functional teams, and ensure that our cloud environments are scalable, secure, and cost-effective.
Your responsibilities will include:
- Designing, deploying, and managing scalable cloud solutions on public cloud platforms such as AWS, Azure, or Google Cloud.
- Developing deployment strategies.
- Utilizing Infrastructure as Code tools like Terraform and Crossplane.
- Collaborating with various teams and providing mentorship.
- Evaluating and recommending new tools and technologies.
- Implementing security best practices, ensuring compliance with industry standards, and much more.
To be successful in this role, you should have:
- Proven experience as a senior-level engineer/architect in a cloud-native environment.
- Extensive experience with ArgoCD and Crossplane.
- Proficiency in GitHub workflows.
- Experience with Infrastructure as Code tools.
- Leadership experience.
- Proficiency in scripting languages and automation tools.
- Expert knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of network concepts and their implementation on AWS.
- Experience with observability, monitoring, and logging tools.
- Knowledge of security principles and frameworks, and familiarity with security-related certifications.
Your unique strengths, leadership skills, and ability to drive and motivate a team will be essential in fulfilling the responsibilities of this role.
At FICO, you will be part of an inclusive culture that values diversity, collaboration, and innovation. You will have the opportunity to make an impact, develop professionally, and participate in valuable learning experiences. FICO offers competitive compensation, benefits, and rewards programs to encourage you to bring your best every day. You will work in an engaging, people-first environment that promotes work/life balance, employee resource groups, and social events to foster interaction and camaraderie.
Join FICO and be part of a leading organization in Big Data analytics, making a real difference in the business world by helping businesses use data to improve their decision-making processes. FICO's solutions are used by top lenders and financial institutions worldwide, and the demand for our solutions is rapidly growing. As part of the FICO team, you will have the support and freedom to develop your skills, grow your career, and contribute to changing the way businesses operate globally. Explore how you can fulfill your potential by joining FICO at www.fico.com/Careers.
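As a hedged illustration of the Infrastructure as Code and GitOps practices this role calls for, the sketch below wraps `terraform plan -detailed-exitcode` to detect drift between declared and live infrastructure (exit code 2 means pending changes). The working directory path is a placeholder assumption.

```python
# Minimal sketch: detect infrastructure drift with `terraform plan -detailed-exitcode`.
# Exit code 0 = no changes, 1 = error, 2 = changes pending; the directory path is a placeholder.
import subprocess
import sys


def check_drift(workdir: str) -> bool:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
    )
    if result.returncode == 1:
        sys.exit("terraform plan failed")
    return result.returncode == 2   # True when live state has drifted from the code


if __name__ == "__main__":
    print("drift detected" if check_drift("./environments/dev") else "in sync")
```

In a GitOps setup this kind of check would usually run in CI on a schedule, with drift reported back to the repository rather than applied automatically.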
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a DevOps Engineer with over 5 years of experience, you will be responsible for automating system configurations and application deployments using Ansible playbooks. You will also be tasked with provisioning and managing cloud infrastructure using Terraform, while adhering to infrastructure-as-code principles. Your role will involve deploying, configuring, and maintaining Kubernetes clusters, whether on-premises or cloud-managed, to support scalable and containerized applications. Additionally, you will manage Kubernetes clusters using Rancher, which includes tasks such as provisioning, upgrades, monitoring, and user access control.
Collaboration with development, operations, and security teams is a key aspect of this position, as you will work together to ensure efficient DevOps workflows. Monitoring and troubleshooting system performance, deployments, and infrastructure issues will also be within your scope of responsibilities.
Key Skills:
- Ansible
- Docker
- Kubernetes
- Cluster
- Rancher
- Terraform
If you are a proactive and experienced DevOps professional who enjoys working collaboratively in a dynamic environment, this role offers the opportunity to contribute to the success of the organization by implementing and optimizing DevOps best practices.
Job Code: GO/JC/548/2025
Recruiter Name: Christopher
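A minimal sketch of driving an Ansible playbook from Python with ansible-runner, the kind of deployment automation this role describes. It assumes `pip install ansible-runner` and a project directory containing site.yml plus an inventory; all paths and variables are illustrative.

```python
# Minimal sketch: run an Ansible playbook programmatically with ansible-runner.
# The private_data_dir layout, playbook name, inventory path, and extra vars are illustrative.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="./deploy",           # expects ./deploy/project/site.yml, etc.
    playbook="site.yml",
    inventory="./deploy/inventory/hosts.ini",
    extravars={"app_version": "1.4.2"},    # hypothetical variable consumed by the playbook
)

print(f"status={result.status} rc={result.rc}")
for event in result.events:
    if event.get("event") == "runner_on_failed":
        print("failed task:", event["event_data"].get("task"))
```

The same run could be triggered from a CI/CD job or a Rancher/Kubernetes operator, depending on how deployments are orchestrated in the environment.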
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
As a Network Architect, you will be responsible for delivering high-level network consulting services, analyzing complex network requirements, and recommending tailored solutions for clients.
Responsibilities:
- Engage with clients to understand their business objectives, technical requirements, and network architecture needs.
- Conduct thorough audits and assessments of existing network infrastructure, identifying areas for improvement and potential risks.
- Design and implement customized network solutions aligned with client requirements, industry best practices, and technology trends.
- Develop detailed network design documents, including diagrams, configurations, and migration plans.
- Oversee the deployment and configuration of network solutions, ensuring successful deployment within defined timelines.
- Provide technical guidance and troubleshooting expertise to resolve complex network issues.
- Conduct network performance analysis, monitoring, and capacity planning to optimize network performance and scalability.
- Stay updated with emerging technologies and industry trends related to modern networks and make recommendations to enhance the network infrastructure.
- Collaborate with cross-functional teams, including project managers, engineers, and system administrators, to ensure successful project delivery.
- Mentor and guide junior network engineers and technicians.
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 12-14 years of experience in network consulting, design, and implementation, preferably in a client-facing role.
- Strong knowledge of network fundamentals, including the OSI and TCP/IP models, subnetting, and Layer-2 and Layer-3 technologies.
- Proficient in network protocols and technologies such as TCP/IP, DNS, OSPF, BGP, MPLS, VLANs, VPNs, and firewalls.
- Hands-on experience with network equipment from leading vendors, such as Cisco, Juniper, and Arista.
- Proficient in security elements such as firewalls, IDS/IPS, etc.
- Strong expertise in modern network technologies, including SD-WAN, virtualization, cloud networking, SDN, and network automation.
- Expertise in Ansible for seamless automation of IP network configurations, optimizing the deployment and management of networking infrastructure.
- Expertise in Terraform to define and provision IP network resources, ensuring consistent and scalable network architectures.
- Proficient in cloud networking technologies and Secure Access Service Edge (SASE).
- Familiarity with network monitoring and management tools, such as Wireshark, SolarWinds, ThousandEyes, and LiveNX.
- Excellent analytical and problem-solving skills, with the ability to assess complex network requirements and propose effective solutions.
- Strong communication skills, capable of articulating technical concepts to both technical and non-technical stakeholders.
- Industry certifications from vendors like Cisco, Juniper, Zscaler, Palo Alto, and Fortinet are highly desirable.
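As a small illustration of the subnetting fundamentals listed in the requirements above, the sketch below uses Python's standard ipaddress module to carve an address block into per-site /24 segments. The 10.20.0.0/16 block and the site names are arbitrary illustrative values.

```python
# Minimal sketch: carve an address block into per-site subnets with the stdlib ipaddress module.
# The 10.20.0.0/16 block and the site names are arbitrary illustrative values.
import ipaddress

block = ipaddress.ip_network("10.20.0.0/16")
sites = ["chennai-dc", "bengaluru-dc", "noida-branch"]

for site, subnet in zip(sites, block.subnets(new_prefix=24)):
    usable_hosts = subnet.num_addresses - 2   # exclude network and broadcast addresses
    print(f"{site}: {subnet} ({usable_hosts} usable hosts)")
```

A plan generated this way could then be pushed to devices through the Ansible or Terraform automation the posting mentions.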
EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Karnataka
On-site
The Senior ADC Migration Engineer will be responsible for the end-to-end migration of complex Application Delivery Controller (ADC) configurations from Citrix NetScaler to F5 BIG-IP platforms. A key focus of this role will involve expert analysis, translation, and re-implementation of custom Citrix LUA scripts into equivalent F5 iRules (Tcl), as well as leveraging other F5 native features. This role requires deep technical expertise in both Citrix NetScaler and F5 BIG-IP, along with strong scripting and problem-solving abilities.
Responsibilities:
- Conduct a thorough analysis of existing Citrix NetScaler configurations, including Virtual Servers, Services, Policies, Profiles, and custom LUA scripts, to understand their functionality and dependencies.
- Expertly analyze complex Citrix LUA scripts and translate their functionality into optimized F5 iRules (Tcl) or alternative F5 features.
- Design, configure, and implement equivalent F5 BIG-IP configurations, primarily focusing on LTM (Local Traffic Manager) and APM (Access Policy Manager) objects.
- Map and convert Citrix policies and profiles to their F5 counterparts.
- Develop and execute comprehensive test plans to ensure functional parity and optimal performance post-migration, including load testing and security validation.
- Create detailed documentation of migrated configurations, iRules, and architectural changes.
- Diagnose and resolve complex issues arising during the migration process and post-migration.
- Work closely with application owners, network architects, security teams, and project managers to ensure seamless migration and minimal business disruption.
- Advocate and implement F5 best practices for security, performance, and maintainability.
- Potentially mentor junior team members on F5 BIG-IP technologies and migration strategies.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 7 years of hands-on experience with Application Delivery Controllers (ADCs).
- Extensive experience (5 years) with Citrix NetScaler/ADC platforms, including advanced configuration, policy creation, and expert-level proficiency in Citrix LUA scripting.
- Extensive experience (5 years) with F5 BIG-IP platforms, including LTM and strong proficiency in F5 iRules (Tcl). Experience with APM is highly desirable.
- Demonstrable experience in successfully migrating ADC configurations between different vendor platforms (Citrix to F5 preferred).
Technical Proficiency:
- Deep understanding of networking protocols (TCP/IP, HTTPS, DNS, SSL/TLS).
- Strong command of the Tcl scripting language for iRules development.
- Strong understanding of security concepts related to ADCs, SSL offloading, and authentication/authorization.
- Familiarity with automation tools and scripting (e.g., Python, Ansible) for ADC configuration management is a plus.
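A hedged sketch of the configuration-analysis step described above: it scans an exported NetScaler ns.conf for `add lb vserver` lines and prints an inventory that could feed a migration plan. The file path and the exact line format are assumptions about a typical export, not a guaranteed parser for every NetScaler version.

```python
# Minimal sketch: inventory load-balancing virtual servers from an exported NetScaler config.
# Assumes lines of the common form "add lb vserver <name> <protocol> <ip> <port> ..." in ns.conf;
# the path and regex are illustrative and would need adjusting to the real export.
import re
from pathlib import Path

VSERVER_RE = re.compile(r"^add lb vserver (\S+) (\S+) (\S+) (\d+)")


def inventory_vservers(conf_path: str):
    entries = []
    for line in Path(conf_path).read_text().splitlines():
        match = VSERVER_RE.match(line.strip())
        if match:
            name, protocol, vip, port = match.groups()
            entries.append({"name": name, "protocol": protocol, "vip": vip, "port": int(port)})
    return entries


if __name__ == "__main__":
    for vs in inventory_vservers("ns.conf"):
        print(f"{vs['name']}: {vs['protocol']} {vs['vip']}:{vs['port']}")
```

An inventory like this is only the starting point; the LUA-to-iRules translation itself still requires manual, expert review of each script.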
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a member of the Customer Success team at Innovaccer, your primary goal is to empower our customers and help them achieve success by utilizing our platform to meet their organization's business objectives. If you are passionate about assisting customers in realizing the value they desire through technology, then this role is perfect for you.
Your responsibilities will include:
- Designing, modeling, implementing, and managing large-scale systems using Snowflake, PostgreSQL, and MongoDB.
- Ensuring provisioning, 24x7 availability, reliability, performance, security, maintenance, upgrades, and cost optimization of databases.
- Conducting capacity planning for large-scale database clusters.
- Automating database provisioning, deployments, routine administration, maintenance, and upgrades.
- Addressing business-critical incidents (P0/P1) within the SLA, identifying the root cause, and implementing permanent solutions.
- Synchronizing data between different data stores, such as PostgreSQL to Elasticsearch and Snowflake to Elasticsearch.
- Designing, documenting, and benchmarking Snowflake or MongoDB.
- Performing database maintenance, backups, health checks, alerting, and monitoring.
- Establishing processes and best practices, and ensuring their adherence.
- Identifying and optimizing long-running queries to enhance database performance and reduce costs.
Qualifications and Requirements:
- 4-8 years of relevant experience.
- Ability to work in a dynamic environment and adapt to changing business requirements.
- Proficiency in SQL query writing and experience with Python or any scripting language in various database environments.
- Demonstrated expertise in cloud environments like AWS, Azure, and GCP.
- In-depth knowledge of at least two of MongoDB, Redis, or Elasticsearch.
- Familiarity with PostgreSQL, Snowflake, or MySQL is advantageous.
- Experience in setting up high availability, replication, and incremental backups for different data stores.
- Knowledge of database security best practices, including encryption, auditing, and Role-Based Access Control (RBAC).
- Understanding of database design principles, partitioning/sharding, and query optimization.
- Proven troubleshooting skills for resolving database performance issues in production environments.
- Experience with cloud-managed and self-hosted databases, managing medium to large-scale production systems.
- Proficiency in tools such as Terraform, Jenkins, and Ansible is beneficial.
- Familiarity with database monitoring stacks like Prometheus and Grafana.
- Expertise in Docker and Kubernetes is mandatory.
- Proactive nature with the ability to explore and provide solutions to complex technical challenges.
In addition to a challenging and rewarding work environment, Innovaccer offers competitive benefits to support your personal and professional growth:
- Generous leave policy of up to 40 days.
- Industry-leading parental leave policy.
- Sabbatical opportunities for skill development or personal pursuits.
- Comprehensive health insurance coverage for you and your family.
- Care Program with vouchers for significant life events and moments of need.
- Financial assistance through salary advances and personal loans during times of need.
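A minimal sketch of the long-running-query check mentioned above, using psycopg2 against PostgreSQL's pg_stat_activity view. The connection details and the five-minute cutoff are placeholder assumptions.

```python
# Minimal sketch: list queries active for more than five minutes via pg_stat_activity.
# Connection parameters and the cutoff interval are placeholder assumptions.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="monitor", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, now() - query_start AS runtime, left(query, 120)
        FROM pg_stat_activity
        WHERE state = 'active'
          AND now() - query_start > interval '5 minutes'
        ORDER BY runtime DESC
        """
    )
    for pid, runtime, query in cur.fetchall():
        print(f"pid={pid} runtime={runtime} query={query}")
conn.close()
```

The offending queries would then go through EXPLAIN/ANALYZE and indexing or partitioning work, rather than being killed blindly.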
Posted 1 week ago
1.0 - 31.0 years
1 - 2 Lacs
Vaishali Nagar, Jaipur
On-site
Job Description
Job Title: Linux/Windows System Administrator
Location: Vaishali Nagar, Jaipur
Responsibility and Duties:
• Graduation with a technical degree (B.Tech, M.Tech, BCA, MCA) or equivalent; certifications such as MCSA, RHCSA, or RHCE preferred.
• Basic installation and server monitoring.
• Installation and configuration of web hosting control panels like SolidCP and Plesk.
• Installation and maintenance of virtual servers.
• Manage, coordinate, and implement software upgrades, patches, and fixes on servers.
• Prioritize and manage multiple open tickets at once.
• Knowledge of server management, Docker, Kubernetes, Samba, and Ansible.
• Linux/Windows server administration, management, setup, and migration.
• System backups.
• MSSQL and MySQL administration.
• Server security.
• Candidates will be responsible for troubleshooting tickets, monitoring the server infrastructure, and handling escalated tickets of low to medium complexity.
• Deliverables will include ticket ownership, response, resolution, and updates, and handling escalated tickets from various levels.
• Collaborate with fellow system administrators and support team members.
• Thorough knowledge of Linux/Windows server environments.
• Experience in the web hosting industry preferred.
• Outstanding troubleshooting skills.
• Ability to multitask and handle pressure.
• Work on the installation and modification of Apache, DNS, PHP, MySQL, Perl, and DNS management.
• Provide client support and technical issue resolution via e-mail, phone, and other electronic media.
Salary: Negotiable (Minimum 15k – 20k)
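As a small illustration of the basic server monitoring duties above, the sketch below checks disk usage on a few Linux mount points with Python's standard library. The mount points and the 85% threshold are illustrative assumptions.

```python
# Minimal sketch: warn when a mount point crosses a disk-usage threshold.
# The mount points and the 85% threshold are illustrative assumptions.
import shutil

THRESHOLD_PERCENT = 85

for mount in ("/", "/var", "/home"):
    usage = shutil.disk_usage(mount)
    used_pct = usage.used / usage.total * 100
    status = "WARN" if used_pct >= THRESHOLD_PERCENT else "ok"
    print(f"{status} {mount}: {used_pct:.1f}% used "
          f"({usage.free // (1024**3)} GiB free)")
```

In a hosting environment a check like this would typically run from cron and raise a ticket or e-mail alert instead of printing to the console.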
Posted 1 week ago