
6849 Logging Jobs - Page 19

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the job portal.

5.0 years

0 Lacs

Noida

On-site

Product Management at Innovaccer

Our product team is a dynamic group of skilled individuals who transform ideas into real-life solutions. They mastermind new product creation, development, and launch, ensuring alignment with the overall business strategy. Additionally, we are leveraging AI across all our solutions, revolutionizing healthcare and shaping the future to make a meaningful impact on the world.

About the Role

Innovaccer is building the Gravity Platform, a next-generation healthcare data and application platform that powers interoperability, intelligence, and secure collaboration at scale. The InCore services form the foundation of Gravity, enabling identity, access, security, governance, and data control for all applications, services, and AI-driven tools on our platform. We are looking for an experienced Product Manager to own and drive InCore components that power platform-scale capabilities such as:
- User management, authentication (SSO, SMART on FHIR), and tenant management
- Metadata and a unified asset catalog (metastore)
- Access governance (RBAC/ABAC) for platform, data, APIs, and AI/ML entities
- Centralized multi-channel communication service (email, SMS, Slack, WhatsApp, IVR, etc.)
- Billing metering, audit logging, security, and compliance controls
- Intelligent enterprise search for platform assets and data
- External app integrations (EHR, partner apps, AI tools)

This is a high-impact role requiring cross-functional leadership, technical depth, and user-centric thinking to define, deliver, and scale platform services critical to healthcare transformation.

A Day in the Life
- Define the vision and strategy for InCore services aligned with Gravity platform goals, regulatory requirements (e.g., FHIR, SMART on FHIR, MARS-E, HIPAA), and customer needs.
- Own the roadmap for key components such as IAM, SSO, metadata, access governance, security and compliance, usage metering, audit logging, and AI access control.
- Drive design and delivery of enterprise-scale access governance and security frameworks for apps, data, and AI/ML assets.
- Lead platform-level integrations, including SMART on FHIR, external apps, SSO (OpenID Connect, SAML), and partner tools.
- Lead delivery of a centralized communication service that enables apps and services to send messages (email, SMS, Slack, WhatsApp, IVR) with analytics, templating, and compliance controls.
- Work with engineering, design, security, compliance, and customer teams to deliver high-quality, scalable solutions.
- Define success metrics and continuously measure outcomes to guide iterations.
- Ensure platform self-service capabilities (e.g., access policy setup, SSO configuration, audit dashboards, billing reports) that delight customers and reduce operational overhead.
- Stay ahead of platform trends in identity, access, governance, compliance, and AI safety.

What You Need
- 5+ years of product management experience (ideally in platform, infrastructure, or security products).
- Demonstrated success managing core platform services (identity and access management, RBAC/ABAC, SSO, metadata/catalogs, usage metering, audit logging, security frameworks, or enterprise search).
- Experience designing SaaS platform services at scale, with multi-tenant architectures and external integrations (EHR, third-party apps, etc.).
- Familiarity with standards such as FHIR, SMART on FHIR, OpenID Connect, SAML, and OAuth.
- Strong technical understanding of API design, distributed systems, security controls, and enterprise architecture.
- Proven ability to work cross-functionally across engineering, design, security, legal, compliance, and GTM teams.
- Excellent communication, stakeholder management, and problem-solving skills.

Preferred Skills
- Prior experience in healthcare platforms or data platforms.
- Exposure to AI governance frameworks and managing access/security for AI entities (e.g., LLM agents, ML models).
- Experience working on developer platforms or cloud infrastructure products.
- Familiarity with billing metering and enterprise usage reporting for SaaS.

We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
- Generous Leaves: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury, and extending support to the family members who matter most.
- Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need.
- Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com.
Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer

Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
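The access-governance capability this posting centers on (RBAC/ABAC across platform, data, APIs, and AI/ML entities) can be illustrated with a minimal role-based check. This is a generic sketch; the role names, resource types, and function are illustrative and not part of Innovaccer's actual platform.

```python
# Minimal RBAC sketch: roles map to sets of (resource_type, action) permissions.
# All names here are illustrative; a real platform would back this with policy
# storage, ABAC attribute evaluation, and audit logging.
ROLE_PERMISSIONS = {
    "platform_admin": {("dataset", "read"), ("dataset", "write"), ("ml_model", "deploy")},
    "data_analyst":   {("dataset", "read")},
}

def is_allowed(roles, resource_type, action):
    """Return True if any of the caller's roles grants (resource_type, action)."""
    return any((resource_type, action) in ROLE_PERMISSIONS.get(r, set())
               for r in roles)
```

In practice a deny decision would also emit an audit-log entry, which is where the posting's audit-logging component connects to access governance.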

Posted 4 days ago

Apply

8.0 years

0 Lacs

Uttar Pradesh

On-site

We are looking for a highly skilled and motivated DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will be responsible for managing our infrastructure and CI/CD pipelines and for automating processes to ensure smooth deployment cycles. The ideal candidate will have a strong understanding of cloud platforms (AWS, Azure, GCP), version control tools (GitHub, GitLab), and CI/CD tools (GitHub Actions, Jenkins, Azure DevOps, and Argo CD with GitOps methodologies), and the ability to work in a fast-paced environment.

RESPONSIBILITIES
- Design, implement, and manage CI/CD pipelines using GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Manage and automate the deployment of applications on cloud platforms such as AWS, GCP, and Azure.
- Maintain and optimize cloud-based infrastructure, ensuring high availability, scalability, and performance.
- Utilize GitHub and GitLab for version control, branching strategies, and managing code repositories.
- Collaborate with development, QA, and operations teams to streamline the software delivery process.
- Monitor system performance and resolve issues related to automation, deployments, and infrastructure.
- Implement security best practices across CI/CD pipelines, cloud resources, and other environments.
- Troubleshoot and resolve infrastructure issues, including scaling, outages, and performance degradation.
- Automate routine tasks and infrastructure management to improve system reliability and developer productivity.
- Stay up to date with the latest DevOps practices, tools, and technologies.

REQUIRED SKILLS
- At least 8 years' experience as a DevOps Engineer.
- Proven experience as a DevOps Engineer, Cloud Engineer, or in a similar role.
- Expertise in CI/CD tools, including GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Strong proficiency with GitHub and GitLab for version control, repository management, and collaborative development.
- Extensive experience working with cloud platforms such as AWS, Azure, and Google Cloud Platform (GCP).
- Solid understanding of infrastructure-as-code (IaC) tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK stack).
- Experience in scripting languages such as Python, Bash, or PowerShell.
- Strong knowledge of networking, security, and performance optimization in cloud environments.
- Familiarity with Agile development methodologies and collaboration tools.

EDUCATION
B.Tech/M.Tech/MBA/BE/MCA degree
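Automating routine tasks, as this posting describes, typically means wrapping flaky steps (a deployment health check, a cloud API call) in retry logic. A generic exponential-backoff helper, as a hedged sketch rather than any specific tool's API:

```python
import time

def retry(func, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call func(); on exception, retry with exponential backoff
    (base_delay, 2*base_delay, 4*base_delay, ...). Re-raises the last
    exception once attempts are exhausted. `sleep` is injectable for tests."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))
```

A real automation script would narrow the caught exception types and add jitter so many retrying clients do not synchronize.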

Posted 4 days ago

Apply

10.0 years

20 - 30 Lacs

Meerut

On-site

In this role, you will be a key driver in building and maintaining our robust, scalable, and secure infrastructure. You will be responsible for designing, implementing, and maintaining the infrastructure and tools necessary for the development and deployment of software applications, for automating infrastructure deployments, for managing cloud environments, and for optimizing our CI/CD pipelines. As a leader, you will mentor junior engineers, drive best practices, and contribute to the continuous improvement of our DevOps processes.

ROLES & RESPONSIBILITIES
- Infrastructure Management: Design, build, and maintain highly available and scalable infrastructure on cloud platforms (GCP, OpenStack).
- Automation & Scripting: Develop and maintain automation scripts using languages like Python, Bash, or PowerShell for infrastructure provisioning, configuration management, and application deployments. Design and implement automation tools and frameworks for continuous integration and deployment.
- SLO/SLI Management: Define, measure, and improve Service Level Objectives (SLOs) and Service Level Indicators (SLIs).
- CI/CD Pipeline Management: Design, implement, and optimize CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or Azure DevOps.
- Database & Data Management: Manage database systems (e.g., MySQL) and filesystems, including data transfers, backups, recovery, and security. Implement and maintain data management strategies.
- Networking & Routing: Design and implement network infrastructure, including routing, firewalls, load balancing, and VPNs.
- Monitoring & Logging: Implement and maintain monitoring and logging solutions using tools like Prometheus, Grafana, the ELK stack, or Datadog.
- Documentation & Incident Management: Create and maintain comprehensive documentation for infrastructure, processes, and procedures. Lead incident response and troubleshooting efforts, ensuring timely resolution of issues.
- Collaboration & Mentorship: Collaborate with development and QA teams to ensure smooth integration of DevOps processes, and mentor junior DevOps engineers.
- Security & Compliance: Implement and maintain security best practices and ensure compliance with relevant regulations and the confidentiality of data.

MUST HAVE SKILLS
- Scripting: Proficiency in at least one scripting language (Python, Bash, PowerShell).
- Cloud Platforms: Extensive experience with any cloud platform (e.g., AWS, Azure, GCP).
- CI/CD Tools: Expertise in managing and optimizing CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or Azure DevOps, plus Docker and Kubernetes. Experience with Git and Git workflows.
- Database Management: Proficiency in managing relational and/or NoSQL databases, including data transfers, backups, recovery, and security.
- Networking, Routing & Logging: Strong understanding of networking concepts, including TCP/IP, DNS, routing, firewalls, and load balancing. Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack, Datadog).
- Operating Systems: Strong understanding of Linux and/or Windows operating systems.
- Security Best Practices: Knowledge of security principles and best practices in cloud and infrastructure management.

QUALIFICATIONS
- Bachelor's degree in Computer Science or a related field.
- 10+ years of experience in software development, DevOps, QA or automation, server management, and database fault tolerance.
- Proficiency in scripting languages such as PHP, Python, Ruby, or Shell.
- Excellent communication and teamwork skills.
- Relevant certifications in DevOps or software development are a plus.
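The SLO/SLI responsibility mentioned in this posting ultimately reduces to error-budget arithmetic: an availability target implies a fixed amount of tolerable downtime per period. A small sketch of that calculation (the function name is illustrative):

```python
def error_budget_minutes(slo_target, period_days=30):
    """Minutes of allowed downtime in a period for a given availability SLO.
    For example, a 99.9% SLO over 30 days permits roughly 43.2 minutes
    of downtime (0.001 * 30 * 24 * 60)."""
    total_minutes = period_days * 24 * 60
    return (1.0 - slo_target) * total_minutes
```

Teams then track an SLI (e.g., fraction of successful requests) against this budget and slow feature releases when the budget is close to exhausted.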

Posted 4 days ago

Apply

7.0 years

4 - 5 Lacs

Noida

On-site

Country: India
Working Schedule: Full-Time
Work Arrangement: Virtual
Commutable Distance Required: No
Relocation Assistance Available: No
Posted Date: 10-Jul-2025
Job ID: 10079

Description and Requirements

Position Summary
We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities
- Perform engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability.
- Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps.
- Develop and implement automation using technologies such as Ansible, Python, and Shell.
- Manage CI/CD deployments and maintain code repositories.
- Utilize Infrastructure/Configuration as Code practices to streamline processes.
- Work closely with development teams to integrate database and observability/logging tools effectively.
- Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0, and above) on Linux, both on-premises and cloud-based.
- Design and develop physical layers of databases to support various application needs; implement backup, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes; and implement database and infrastructure security best practices to meet compliance requirements.
- Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues.
- Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput.
The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS.
- Debug production issues by analyzing logs directly and using tools like Splunk.
- Work in an Agile model with an understanding of Agile concepts and Azure DevOps.
- Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education
- Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.
- MongoDB Certified DBA or Splunk Certified Administrator is a plus.
- Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (in years)
- 7+ years of total IT experience, with 4+ years of relevant experience in MongoDB and working experience as a Splunk administrator.

Technical Skills
- In-depth experience with either MongoDB or Splunk, with a preference for exposure to both.
- Strong enthusiasm for learning and adopting new technologies.
- Experience with automation tools like Ansible, Python, and Shell.
- Proficiency in CI/CD deployments, DevOps practices, and managing code repositories.
- Knowledge of Infrastructure/Configuration as Code principles.
- Developer experience is highly desired; data engineering skills are a plus; experience with other database technologies and observability tools is a plus.
- Extensive experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
- Working experience in database performance tuning with MongoDB tools and techniques.
- Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
- Extensive experience in backup and recovery strategy design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik.
- Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes.
- Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS.
- Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus.
- Ability to perform application administration for a single security information management system using Splunk.
- Working knowledge of Splunk Search Processing Language (SPL), Splunk architecture, and its various components (indexer, forwarder, search head, deployment server).
- Extensive experience with both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance.
- Experience managing infrastructure security policy to industry best-practice standards by designing, configuring, and implementing privileges and policies using RBAC, for both the database and Splunk.
- Scripting skills and automation experience using DevOps practices, repos, and Infrastructure as Code.
- Working experience with containers (AKS and OpenShift) is a plus.
- Working experience with cloud platforms (Azure, Cosmos DB) is a plus.
- Strong knowledge of ITSM processes and tools (ServiceNow).
- Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements
- Strong problem-solving abilities and a proactive approach to identifying and resolving issues.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.
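The backup-and-recovery strategy described above (mongodump/mongorestore plus Rubrik) is usually paired with a retention policy deciding which snapshots to keep. A simplified grandfather-father-son style sketch in pure Python; the policy parameters are illustrative, not any particular backup tool's defaults:

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, keep_daily=7, keep_weekly=4):
    """Given snapshot dates, keep the most recent `keep_daily` dailies plus
    the most recent `keep_weekly` Monday snapshots. Illustrative retention
    policy only, not a specific backup product's API."""
    dates = sorted(set(snapshot_dates), reverse=True)  # newest first
    keep = set(dates[:keep_daily])                     # recent dailies
    mondays = [d for d in dates if d.weekday() == 0]   # weekly anchors
    keep.update(mondays[:keep_weekly])
    return sorted(keep, reverse=True)
```

Everything not returned is eligible for pruning; a real recovery strategy would also verify restores (e.g., periodic mongorestore drills), since an unverified backup is not a backup.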
About MetLife

Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 4 days ago

Apply

5.0 years

3 - 5 Lacs

Ahmedabad

On-site

Job Description: Senior ASP.NET Developer with Cloud Expertise. Build fail-safe healthcare systems on Azure at scale.

Location: Ahmedabad, India (Onsite)
Experience Level: 5+ Years
Employment Type: Full-time

About Ajmera Infotech
Ajmera Infotech is a planet-scale engineering powerhouse with 120+ elite developers shipping mission-critical software for NYSE-listed clients. We architect high-stakes systems where failure is not an option: think HIPAA-regulated platforms, multi-cloud scale, and real-time data pipelines.

Why You'll Love It
- Code-first, TDD culture: engineers write specs, not just code.
- Healthcare impact: your work safeguards patient data and powers critical decisions.
- Deep Azure immersion: PaaS, AKS, Functions, Event Grid, and more.
- Microservices done right: service boundaries, observability, resilience patterns.
- Career acceleration: mentorship, vertical mobility, and architectural exposure.

Key Responsibilities
- Architect and develop secure, scalable .NET microservices hosted on Azure.
- Implement mission-critical backend features with performance and compliance top of mind.
- Lead Azure-native service integration (e.g., Azure Functions, App Services, Key Vault, Service Bus).
- Apply domain-driven design, TDD/BDD, and CI/CD pipelines to ship reliable code.
- Collaborate across DevSecOps, product, and compliance teams to uphold HIPAA standards.
- Monitor, troubleshoot, and continuously improve systems in production.

Requirements

Must-Have Skills
- 5–12 years of .NET development experience (C# preferred).
- Strong command of Azure PaaS, including App Services, Azure Functions, and Cosmos DB.
- Proven track record with microservices, event-driven architecture, and API design.
- TDD/BDD, Git workflows, CI/CD (Azure DevOps or GitHub Actions).
- Hands-on experience with authentication, authorization, and encryption best practices.
- Understanding of HIPAA, the OWASP Top 10, and secure coding practices.

Nice-to-Have Skills
- Azure Kubernetes Service (AKS), Docker, Infrastructure as Code (ARM/Bicep/Terraform).
- Messaging systems like Azure Event Grid, Service Bus, or Kafka.
- Experience with performance tuning, load testing, and cost optimization in cloud apps.
- Familiarity with logging and telemetry (App Insights, ELK, Grafana).
- Past contributions to healthcare products or EMR/EHR systems.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Andhra Pradesh

On-site

a. Responsible for building integrations to pull data into ServiceNow, including configuring APIs, middle layers, staging tables, and data transformation logic. Integration Developers specialize in connecting ServiceNow with external systems; Data Engineers handle data pipelines and transformations.
b. Lead the design and implementation of complex ServiceNow integrations using REST and SOAP APIs, Integration Hub, and ETL tools to consume data from external sources into ServiceNow.
c. Develop and configure staging tables, middle-layer services, and data transformation workflows to ensure data quality, integrity, and usability within ServiceNow.
d. Collaborate closely with Business Analysts, Developers, and stakeholders to understand data requirements and translate them into technical integration designs.
e. Build and optimize data ingestion jobs and transformation scripts to feed accurate and timely data into ServiceNow reporting and dashboard modules.
f. Implement error handling, logging, and monitoring mechanisms for integration workflows to ensure reliability and maintainability.
g. Support the configuration and customization of ServiceNow Workspaces, reports, and dashboards by providing clean and well-structured data. Requires 7+ years of hands-on experience.
h. Excellent problem-solving and troubleshooting skills.
i. Strong communication and collaboration skills.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
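The staging-table pattern this posting describes (ingest an external record, validate and normalize it, then load it into ServiceNow) can be sketched as a pure transform step with error capture. The field names and priority mapping below are hypothetical, not an actual ServiceNow schema:

```python
def to_staging_row(record):
    """Map one external record to a staging-table row, normalizing fields.
    Returns (row, None) on success or (None, error_message) so an ingestion
    job can log the failure and continue. Field names are hypothetical."""
    errors = []
    number = str(record.get("ticket_id", "")).strip()
    if not number:
        errors.append("missing ticket_id")
    priority_map = {"low": 3, "medium": 2, "high": 1}
    priority = priority_map.get(str(record.get("priority", "")).lower())
    if priority is None:
        errors.append("unknown priority: %r" % record.get("priority"))
    if errors:
        return None, "; ".join(errors)
    return {"u_external_id": number, "u_priority": priority}, None
```

Returning errors as data rather than raising keeps one bad record from aborting a batch, which is the error-handling behavior item f above asks for.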

Posted 4 days ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

- Proven experience in testing web applications across various platforms in the banking domain.
- Strong understanding of testing methodologies, including performance testing and API testing.
- Proficiency in bug tracking tools and processes.
- Effective communication and problem-solving skills.
- Familiarity with Agile methodologies and workflows.
- In-depth knowledge of monitoring, logging, and performance tuning tools.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 4 days ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 12+ Years
Location: Pune

Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at the L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficiency in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
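The "fallback logic, circuit breakers, and context caching" this posting lists for AI pipeline resilience follow a well-known pattern: after repeated failures, stop calling the unhealthy endpoint and serve a fallback instead. A minimal sketch (no vendor API assumed; production breakers add half-open probing and time-based reset):

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; while open, call()
    returns the fallback without touching the protected endpoint."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.threshold

    def record(self, success):
        # Any success resets the consecutive-failure count.
        self.failures = 0 if success else self.failures + 1

    def call(self, func, fallback):
        if self.is_open:
            return fallback()
        try:
            result = func()
        except Exception:
            self.record(False)
            return fallback()
        self.record(True)
        return result
```

For a GenAI endpoint, `fallback` might return a cached answer or a degraded non-LLM response, which is where context caching connects to the breaker.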

Posted 4 days ago

Apply

5.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are looking for a Senior Cyber Security Engineer to drive end-to-end security architecture, operations, and culture across DeHaat's tech landscape. You will work closely with engineering, DevOps, data, and compliance teams to lead security initiatives and strengthen our defenses across cloud, applications, and infrastructure.

Key Responsibilities
- Conduct web and Android application Vulnerability Assessments and Penetration Testing (VAPT) following OWASP and industry standards.
- Conduct network pentesting on cloud infrastructure.
- Perform secure source code reviews using tools such as SonarQube and Semgrep, and recommend remediations.
- Develop and integrate DevSecOps pipelines, embedding security into the CI/CD lifecycle.
- Implement and manage SIEM solutions such as Wazuh and other threat detection/logging platforms.
- Design and enforce cloud security configurations, including AWS WAF and Cloudflare for DDoS mitigation and application protection.
- Work with development teams to integrate security best practices and review threat models and secure architecture designs.
- Ensure compliance with industry standards such as PCI-DSS and ISO 27001, and help support audit readiness.
- Provide detailed security findings, risk analysis, and actionable recommendations to stakeholders and developers.
- Stay updated on the latest threats, vulnerabilities, and technologies.

Requirements
- 2–4 years (junior) or 5–6 years (senior) of experience in cybersecurity, with hands-on expertise in cloud and application security.
- Deep understanding of AWS security services (IAM, VPC, KMS, GuardDuty, etc.).
- Experience with SIEMs, WAFs, endpoint protection, and vulnerability management tools.
- Proficiency in secure SDLC, DevSecOps, and scripting (Python, Bash).
- Familiarity with industry frameworks (OWASP, NIST, MITRE) and regulatory standards.
- Certifications like CISSP, OSCP, or AWS Security Specialty are a plus.
- Strong communication, leadership, and cross-functional collaboration skills.
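Secure source code review tools like SonarQube and Semgrep, mentioned in this posting, are at heart rule engines that match risky patterns in code. A toy hard-coded-credential scan gives the flavor; the regex rules are illustrative and far weaker than real semantic analyzers:

```python
import re

# Toy rules for hard-coded credentials. Real tools (e.g., Semgrep) use
# semantic rules, not just regexes; these patterns are illustrative only.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # shape of an AWS access key ID
]

def scan_lines(lines):
    """Return (line_number, line) pairs that match any secret pattern."""
    return [(i, line) for i, line in enumerate(lines, 1)
            if any(p.search(line) for p in SECRET_PATTERNS)]
```

Findings like these would feed the "detailed security findings and actionable recommendations" deliverable, typically with severity and remediation guidance attached.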

Posted 4 days ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Apply Here - https://beyondroot.keka.com/careers/jobdetails/31270

We are seeking a highly skilled DevSecOps Engineer to join our team and enhance the security posture of our development and deployment processes. You will be responsible for embedding security throughout the DevOps pipeline and across the infrastructure, ensuring best practices are implemented in CI/CD, infrastructure automation, container security, and monitoring. The ideal candidate is experienced with AWS, Kubernetes, Jenkins, and a suite of security and monitoring tools.

Key Responsibilities:
Design, implement, and manage CI/CD pipelines for automated builds, testing, and deployments.
Use Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation) to provision and manage infrastructure.
Automate manual operational tasks through scripting and configuration management tools (e.g., Ansible, Bash, Python).
Deploy, monitor, and maintain applications in cloud environments such as AWS.
Set up and manage monitoring, alerting, and logging systems using tools like Prometheus, Grafana, ELK, or Datadog.
Collaborate with development and QA teams to optimize the software development lifecycle.
Implement DevSecOps practices to integrate security into CI/CD and cloud workflows.
Perform routine system maintenance, upgrades, and troubleshooting.

Required Skills and Qualifications:
3-6+ years in a DevSecOps, DevOps, or Security Engineer role.
Strong hands-on experience with AWS services and security configurations.
Proficiency with Jenkins and GitLab CI for pipeline automation.
Deep understanding of Docker and container orchestration with Kubernetes.
Experience with SonarQube, CodeQL, and OWASP security practices.
Familiarity with monitoring and observability tools like Datadog, the ELK Stack (Elasticsearch, Logstash, Kibana), and New Relic.
Proficiency in Git workflows and secure development practices.
Strong scripting experience (e.g., Bash, Python).
Knowledge of secure coding practices, threat modeling, and compliance frameworks (e.g., CIS, NIST).

Job Types: Full-time, Permanent
Pay: ₹70,000.00 - ₹90,000.00 per month
Benefits: Flexible schedule, paid sick time
Schedule: Day shift, Monday to Friday, morning shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 3 years (Required)
Location: Mohali, Punjab (Required)
Work Location: In person
Speak with the employer: +91 9817558892

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Salem, Tamil Nadu

On-site

Job Title: Lead Application Support Engineer
Company: RunLoyal
Location: Salem

Summary: RunLoyal, a rapidly growing vertical SaaS company based in Atlanta, GA, seeks a passionate and experienced Lead Application Support Engineer to join our dynamic team. As a Lead Application Support Engineer, you will play a vital role in providing exceptional technical support to our US customers, ensuring the smooth operation of our SaaS product.

Responsibilities:
● Develop and implement comprehensive support policies and procedures to streamline customer support operations.
● Provide first-line technical support to US customers via phone, email, and chat, resolving issues efficiently and effectively.
● Troubleshoot complex technical issues, escalating them to senior engineers when necessary.
● Collaborate with the development team to identify and resolve bugs, ensuring product stability and performance.
● Create and update the existing support documentation and knowledge base.
● Stay up-to-date with the latest industry trends and technologies.
● Must have strong general analytical skills.

Qualifications:
● Bachelor's degree in Computer Science or Information Technology, or Master of Computer Applications.
● Minimum of 4-5 years of experience as a Lead Application Support Engineer in a SaaS-based company (preferably US-based).
● Proven ability to troubleshoot and resolve complex technical issues.
● Excellent written and verbal communication.
● Strong interpersonal, analytical, and problem-solving skills.
● Ability to work independently and as part of a team; a self-starter with a strong sense of ownership.
● Must be able to work in a 24/7 environment (complete night shift, US working hours).
● Great proficiency in MySQL and AWS (any certifications in SQL and AWS would be a plus) and basic Linux skills.
● Exposure to ticketing, monitoring, logging, and alerting tools (please mention the tools handled).
● Willingness to work weekends and night shifts; must have worked in a 24x7 setup.
Additional Requirements:
● Passionate about being part of building something great and revolutionizing the pet care industry with our unique and innovative product.
● Love for pets.

Cultural Expectations:
● We prioritize the well-being of our pet care providers and the pets they care for. This means being responsive and available when they need us, including holidays, weekends, and occasional late hours.
● We have an unyielding commitment to serving our customers, working diligently until they are fully satisfied.
● We focus on attention to detail and quality in everything we do, ensuring excellence in our products and services.
● We are passionate about building something extraordinary and revolutionizing the pet care industry with our innovative solutions.
● A love for pets is at the heart of what we do.

Our Values:
● Kindness: We assume positive intent, celebrate co-workers' success, avoid toxic behaviors, and call out bad acting when we see it.
● Trust: We are authentic, humble, and empathetic. Empathy is the cornerstone of building trust, and in a world that is certain to be full of change, trust is a requirement.
● Fearlessness: We are bold, honest, direct, and candid. We dare to challenge assumptions and push boundaries, and we are not afraid when someone challenges us. When we make mistakes, which we will, they are good opportunities to learn.
● Discourse, Not Dissonance: We encourage constructive discourse and welcome challenges. We strive to create an environment where the best ideas rise to the top and data drives decisions.
● Understanding, Not Consensus: As leaders, we stand firm in our informed convictions until overturned by data. We embrace healthy disagreement but commit to the outcome once a decision is made.
● Ownership: We empower each other to solve problems and take initiative to achieve our goals. We are purposeful and intentional in our thinking, knowing that we are individually accountable for our impact on the company's results.
● Curiosity: We are passionate about learning and constantly seek opportunities to grow and develop. We adapt and mature with the ever-changing landscape of our industry.

Benefits:
● Competitive salary and benefits package.
● Opportunity to work with cutting-edge technology.
● Fast-paced and dynamic work environment.
● Chance to make a real impact on a growing company.

RunLoyal is an equal opportunity employer, and we value diversity at our company. We don't discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

To Apply: Interested? Contact 6385599102
Job Type: Full-time
Shift: US shift
Application Question(s): Must be able to work in a 24/7 environment
Experience: MySQL and AWS: 4 years (Required)
Location: Salem, Tamil Nadu (Required)
Work Location: In person

Posted 4 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Requirements

Position Summary
We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities
Handle engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability.
Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps.
Develop and implement automation using technologies such as Ansible, Python, and Shell.
Manage CI/CD deployments and maintain code repositories.
Utilize Infrastructure/Configuration as Code practices to streamline processes.
Work closely with development teams to integrate database and observability/logging tools effectively.
Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux, both on-premises and cloud-based.
Design and develop physical layers of databases to support various application needs; implement backup, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes; and implement database and infrastructure security best practices to meet compliance requirements.
Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues.
Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput.
Build, maintain, and standardize the Splunk platform as the Senior Splunk System Administrator, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS.
Debug production issues by analyzing logs directly and using tools like Splunk.
Work in an Agile model with an understanding of Agile concepts and Azure DevOps.
Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education
Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.
MongoDB Certified DBA or Splunk Certified Administrator is a plus.
Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (In Years)
7+ years of total IT experience, with 4+ years of relevant experience in MongoDB and working experience as a Splunk administrator.

Technical Skills
In-depth experience with either MongoDB or Splunk, with a preference for exposure to both.
Strong enthusiasm for learning and adopting new technologies.
Experience with automation tools like Ansible, Python, and Shell.
Proficiency in CI/CD deployments, DevOps practices, and managing code repositories.
Knowledge of Infrastructure/Configuration as Code principles.
Developer experience is highly desired; data engineering skills are a plus.
Experience with other database technologies and observability tools is a plus.
Extensive experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
Working experience in database performance tuning with MongoDB tools and techniques.
Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Extensive experience in database backup and recovery strategy design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik.
Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes.
Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS.
Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus.
Perform application administration for a single security information management system using Splunk.
Working knowledge of Splunk Search Processing Language (SPL), architecture, and its various components (indexer, forwarder, search head, deployment server).
Extensive experience in both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance.
Manage infrastructure security policy per industry best practice by designing, configuring, and implementing privileges and policies using RBAC on both the database and Splunk.
Scripting skills and automation experience using DevOps, repos, and Infrastructure as Code.
Working experience with containers (AKS and OpenShift) is a plus.
Working experience with cloud platforms (Azure, Cosmos DB) is a plus.
Strong knowledge of ITSM processes and tools (ServiceNow).
Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements
Strong problem-solving abilities and a proactive approach to identifying and resolving issues.
Excellent communication and collaboration skills.
Ability to work in a fast-paced environment and manage multiple priorities effectively.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers.
With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us!
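The listing's requirement to debug production issues "by analyzing the logs directly" can be sketched in plain Python. This is an illustrative, stdlib-only example; the log layout (`TIMESTAMP LEVEL message`) and function name are assumptions, and in a real Splunk deployment this aggregation would be an SPL search instead.

```python
from collections import Counter

def error_rate(log_lines):
    """Count log levels and return (level counts, fraction of ERROR lines).
    Assumes a simple 'TIMESTAMP LEVEL message' layout."""
    levels = Counter(line.split()[1] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels, (levels["ERROR"] / total if total else 0.0)

sample = [
    "2024-05-01T10:00:00 INFO  replica set healthy",
    "2024-05-01T10:00:05 ERROR primary election failed",
    "2024-05-01T10:00:07 INFO  retrying connection",
    "2024-05-01T10:00:09 WARN  high replication lag",
]
levels, rate = error_rate(sample)
```

The same shape of question (how many errors, at what rate) is what a Splunk dashboard or alert would answer continuously over indexed data.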

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000 and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

Job Details
As a Senior SRE / Observability Engineer, you will be part of the Atlas Platform Engineering team and will:
Create and maintain observability standards and best practices.
Review the current observability platform, identify areas for improvement, and guide the team in enhancing monitoring, logging, tracing, and alerting capabilities.
Expand the observability stack across multiple clouds, regions, and clusters, managing all observability data.
Design and implement monitoring solutions for complex distributed systems to provide deep insights into systems and services, aiming at complete visibility of digital operations.
Support the ongoing evaluation of new capabilities in the observability stack, conducting proofs of concept, pilots, and tests to validate their suitability.
Assist teams in creating clear, informative, and actionable dashboards to improve system visibility.
Automate monitoring and alerting processes, including enrichment strategies and ML-driven anomaly detection where applicable.
Provide technical leadership to the observability team with clear priorities, ensuring agreed outcomes are achieved in a timely manner.
Work closely with R&D and product development teams to understand their requirements and challenges and ensure seamless visibility into system and service performance.
Work closely with the Traffic Management team to identify and standardise on existing and new observability tools as part of a holistic solution.
Conduct training sessions and create documentation for internal teams.
Support the definition of SLIs (service level indicators) and SLOs (service level objectives) for the Atlas services, and keep track of the error budget of each service.
Participate in the emergency response process.
Conduct RCAs (root cause analysis).
Help to automate repetitive tasks and reduce toil.

Qualifications:

People And Communication Qualifications
Be a strong team player.
Have good collaboration and communication skills.
Ability to translate technical concepts for non-technical audiences.
Problem-solving and analytical thinking.

Technical Qualifications - General
Familiarity with cloud platforms (ideally Azure).
Familiarity with Kubernetes and Istio as the architecture on which the observability and Atlas services run, and how they integrate and scale.
Experience with infrastructure as code and automation.
Knowledge of common programming languages and debugging techniques.
A strong, hands-on technical background: Linux and scripting languages (Bash, Python, Golang).
Significant understanding of DevOps principles.

Technical Qualifications - Observability
Strong understanding of observability principles (metrics, logs, traces).
Experience with APM tools and distributed tracing.
Proficiency in log aggregation and analysis.
Knowledge and hands-on experience with monitoring, logging, and tracing tools such as Prometheus, Grafana, Datadog, New Relic, Sumo Logic, the ELK Stack, or others.
Knowledge of OpenTelemetry, including the OTel collector and code instrumentation.
Experience designing and building unified observability platforms that let teams use data (metrics, logs, and traces) to determine quickly whether their application or service is operating as desired.
Technical Qualifications - SRE
Understanding of the Google SRE principles.
Experience defining SLIs and SLOs.
Experience performing RCAs (root cause analysis).
Experience in system performance.
Experience in incident response.
Knowledge of status tools, such as Atlassian Statuspage or similar.
Knowledge of incident management and paging tools, such as PagerDuty or similar.
Knowledge of ITIL (Information Technology Infrastructure Library) processes.

Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world.
It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us? We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
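The SLO and error-budget tracking responsibilities this listing describes reduce to simple arithmetic: an availability SLO over a window implies a fixed budget of allowed downtime. A minimal sketch (function names are illustrative, not from any SRE tooling):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO over a window.
    E.g. a 99.9% SLO over 30 days permits (1 - 0.999) * 30 * 24 * 60 minutes."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is blown)."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - downtime_minutes / budget

budget = error_budget_minutes(0.999, 30)  # 43.2 minutes for 99.9% over 30 days
```

In practice the downtime figure would come from SLI measurements in the monitoring stack; the arithmetic above is what turns those measurements into an error-budget decision.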

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About ACA: ACA was founded in 2002 by four former SEC regulators and one former state regulator. The founders saw a need for investment advisers to receive expert guidance on existing and new regulations. Over the years, ACA has grown both organically and by acquisition to expand our GRC business and technology solutions. Our services now include GIPS standards verification, cybersecurity and technology risk, regulatory technology, ESG advisory, AML and financial crimes, financial and regulatory reporting, and Mirabella for establishing EU operations.

Job Summary: We are seeking a highly skilled and detail-oriented Level 3 Support Analyst with a strong foundation in financial concepts, including transactions, holdings, and securities. This role is integral to the daily operations of our technology support team, with a specific focus on the personal trading components of ACA Technology platforms. This includes identifying issues, troubleshooting, offering guidance on corrective measures, and managing system configurations, as well as maintaining data integrity and providing support to clients. The ideal candidate will have a strong analytical mindset, excellent communication skills, and the ability to troubleshoot and resolve data-related issues within MS SQL efficiently.

Job Duties
Perform in-depth analysis, troubleshooting, and diagnosis of incidents related to clients' trades, holdings, and securities.
Fully manage support cases by logging, monitoring, updating, prioritizing, and resolving calls, emails, and indirect inquiries promptly.
Troubleshoot complex issues, identify root causes, and implement effective solutions.
Execute fixes, code enhancements, and optimizations within MS SQL with minimal assistance.
Work collaboratively with developers and product owners to identify corrective actions or effective workarounds for incident resolution.
Provide expert guidance on corrective actions and system configuration adjustments.
Collaborate with internal teams and clients to resolve data-related issues efficiently.
Provide recommendations to key stakeholders and leadership, using insights to advocate for both internal and external users.
Utilize MS SQL to analyze, diagnose, and resolve technical and data anomalies.
Document processes, resolutions, and best practices to enhance team knowledge.

Preferred Education and Experience
Proficient in analytical skills, adept at resolving issues through troubleshooting.
Familiarity with financial concepts, including understanding transactions, holdings, and securities.
Bachelor's degree, with a preference for a concentration in technology or finance.

Qualifications:
Excellent command of the English language, with strong reading, writing, and comprehension skills.
Strong proficiency in MS SQL, with the ability to write, interpret, and optimize complex SQL queries and stored procedures.
Solid financial analytical skills, with a working knowledge of financial concepts such as transactions, holdings, and securities.
Understanding of API calls and their role in data integration and system interoperability.
Demonstrated ability to work independently, manage multiple priorities, and effectively organize tasks in a dynamic environment.
Proven success in fast-paced, small-team environments, with a collaborative and adaptable mindset.
Highly motivated and goal-oriented, with a proactive approach to self-learning and professional development.
Willingness to contribute to both internal initiatives and client-facing projects.

Nice to Have:
Experience with AWS services, particularly AWS CloudWatch.
Familiarity with New Relic for application performance monitoring.
Proficiency with Postman for API testing and validation.

Why join our team? We are the leading governance, risk, and compliance (GRC) advisor in financial services.
When you join ACA, you'll become part of a team whose unique combination of talent includes the industry's largest group of former regulators, compliance professionals, legal professionals, and GIPS® standards verifiers, along with practitioners in cybersecurity, ESG, and regulatory technology. Our team enjoys an entrepreneurial work environment, offering innovative and tailored solutions for our clients. We encourage creative thinking and making the most of your experience at ACA by offering multiple career paths. We foster a culture of growth by focusing on continuous learning through inquiry, curiosity, and transparency. If you're ready to be part of an award-winning, global team of thoughtful, talented, and committed professionals, you've come to the right place.

What we commit to: ACA is an equal opportunity employer that values diversity. We conduct our business without regard to actual or perceived age, race, color, religion, disability, caregiver, marital or partnership status, pregnancy (including childbirth, breastfeeding, or related medical conditions), ancestry, national origin and citizenship, sex, gender identity and expression, sexual orientation, sexual and reproductive health decisions, military or veteran status, creed, genetic predisposition, carrier status or any other category protected by federal, state and local law. ACA is firmly committed to a policy of nondiscrimination, which applies to recruiting, hiring, placement, promotions, training, discipline, terminations, layoffs, recall, transfers, leaves of absence, compensation and all other terms and conditions of employment. Here at ACA, we have created a variety of programs to promote ACA's culture of inclusivity and work hard to ensure that all our employees have an equal opportunity to contribute to ACA and feel that ACA is exactly where they belong.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Role: Python Developer
Job Type: Contract
Duration: 6 months
Work mode: Onsite (6 days working on a hybrid mode)
Location: Gurugram

Role Description
Build and improve the backend services that power our chatbots, telephony flows, CRM automations, lead-allocation engine, database APIs, FAQ pipelines, and third-party API integrations. You'll work in Python with FastAPI, AWS Lambda, and message queues, integrating platforms such as Ozonetel, Freshworks, and third-party supplier APIs, while shipping reliable, scalable features for real users.

Key Responsibilities
• Design, code, and ship new Python services and API endpoints using FastAPI or similar frameworks
• Integrate third-party platforms (Ozonetel, Freshworks, payment providers) and handle secure data flow
• Write unit and integration tests; troubleshoot and fix bugs quickly
• Monitor performance, optimise queries, and follow good logging and alerting practices
• Keep task lists updated and share progress in regular team meetings
• Learn and apply CI/CD, AWS/GCP/Azure basics, and other DevOps practices as you grow

Qualifications
• 0–2 years of software development experience (internships count)
• Strong coding skills in Python and understanding of RESTful APIs
• Familiarity with FastAPI, Flask, or Django, plus SQL/NoSQL databases
• Basic knowledge of AWS Lambda (or any serverless stack), message queues, and Git
• Clear written and spoken English
• Bachelor's degree in Computer Science, Engineering, or a related field
• AWS, GCP, or Azure experience is a plus
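Integrating third-party platforms reliably, as this role requires, usually means handling transient failures with retries. Here is a minimal, stdlib-only sketch of retry with exponential backoff; the function names are illustrative assumptions, not part of any vendor SDK the listing mentions.

```python
import time

def call_with_retries(fn, retries=3, base_delay=0.01):
    """Call a flaky third-party operation, retrying with exponential backoff.
    Re-raises the last exception once `retries` attempts are exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            # Back off 1x, 2x, 4x, ... the base delay between attempts.
            time.sleep(base_delay * (2 ** attempt))
```

A real integration would narrow the caught exception type (e.g. HTTP 5xx or timeouts only) and log each failed attempt, following the listing's logging and alerting guidance.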

Posted 4 days ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Job Title: Site Reliability Engineer (SRE)
Location: Coimbatore (Tamil Nadu candidates preferred)
Department: Technology / Infrastructure / DevOps
Employment Type: Full-time

Job Summary:
We are seeking an experienced Site Reliability Engineer (SRE) who will play a critical role in ensuring the reliability, performance, and scalability of our payment systems. The ideal candidate will possess deep expertise in DevOps automation, enterprise monitoring, and cloud platforms, along with a strong background in card payment systems. This role requires hands-on technical skills, a passion for problem-solving, and the ability to collaborate across teams in a fast-paced, dynamic environment.

Key Responsibilities:

Reliability & Performance
Ensure the reliability, availability, and performance of critical payment platforms and services.
Drive root cause analysis (RCA) and implement long-term solutions to prevent recurrence of incidents.
Manage capacity planning, scalability, and performance tuning across cloud and on-prem environments.
Lead and participate in the on-call rotation, providing timely support and issue resolution.

DevOps Automation & CI/CD
Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub, and other DevOps tools.
Automate infrastructure deployment, configuration, and monitoring, following Infrastructure as Code (IaC) principles.
Enhance automation for routine operational tasks, incident response, and self-healing capabilities.

Monitoring & Observability
Implement and manage enterprise monitoring solutions including Splunk, Dynatrace, Prometheus, and Grafana.
Build real-time dashboards, alerts, and reporting to proactively identify system anomalies.
Continuously improve observability, logging, and tracing across all environments.

Cloud Platforms & Infrastructure
Work with AWS, Azure, and PCF (Pivotal Cloud Foundry) environments, managing cloud-native services and infrastructure.
Design and optimize cloud architecture for reliability and cost efficiency.
Collaborate with cloud security and networking teams to ensure secure and compliant infrastructure.

Payment Systems Expertise
Apply your understanding of card payment systems to ensure platform reliability and compliance.
Troubleshoot payment-related issues, ensuring minimal impact on transaction flows and customer experience.
Collaborate with product and development teams to ensure alignment with business objectives.

Posted 4 days ago

Apply

5.0 years

9 - 12 Lacs

New Delhi, Delhi, India

On-site

Job Title - DevOps Engineer – App Infrastructure & Scaling (Immediate Joiner)

Role Overview
We are seeking an experienced and highly motivated DevOps Engineer to join our growing technology team. In this role, you will be responsible for designing, implementing, and maintaining scalable and secure cloud infrastructure that powers our mobile and web applications. You will play a critical role in ensuring system reliability, performance, and cost efficiency across environments.

Key Responsibilities
• Design, configure, and manage cloud infrastructure on Google Cloud Platform (GCP)
• Implement robust horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems
• Develop and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or GitLab CI
• Set up real-time monitoring, crash alerting, logging systems, and health dashboards using industry-leading tools
• Manage and optimize Redis, job queues, caching layers, and backend request loads
• Automate data backups, enforce secure access protocols, and implement disaster recovery systems
• Collaborate with the Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load
• Conduct infrastructure security audits and recommend best practices to prevent downtime and security breaches
• Monitor and optimize cloud usage and billing, ensuring a cost-effective and scalable architecture

Required Skills & Qualifications
• 3–5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP
• Strong proficiency with Docker, Kubernetes, NGINX, and load balancing strategies
• Proven experience with CI/CD pipelines and tools such as GitHub Actions, Jenkins, or GitLab CI
• Familiarity with monitoring tools like Grafana, Prometheus, New Relic, or Datadog
• Deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms
• Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure
• Working knowledge of Redis, Socket.IO, and message queuing systems (e.g., RabbitMQ, Kafka)

Preferred Qualifications
• Google Cloud Professional certification or equivalent is a plus
• Experience in optimizing systems for high-concurrency, low-latency environments
• Familiarity with Infrastructure as Code (IaC) tools such as Terraform or Ansible

Skills: cloud,redis,laravel,gitlab,infrastructure as code (iac),architecture,firebase,docker,newrelic,app,ci,kubernetes,grafana,socket.io,rabbitmq,load,gcp,github actions,datadog,gitlab ci,cloud infrastructure,php,jenkins,ansible,prometheus,terraform,nginx,ci/cd,ci/cd pipelines,devops,api architecture,cd,google cloud platform (gcp),infrastructure,kafka,google cloud platform
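The required skills call out API architecture with rate limiting and fallback mechanisms. As a rough illustration of the token-bucket pattern commonly used for this (a hypothetical sketch, not taken from the posting or any specific stack):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: allows `rate` requests per second,
    with bursts of up to `capacity` requests."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=3)
results = [bucket.allow() for _ in range(5)]
# The first `capacity` calls pass; later calls are throttled until tokens refill.
```

A reverse proxy such as NGINX can enforce a similar policy at the edge (its `limit_req` module); an in-process limiter like this is more useful for per-tenant limits and graceful fallback responses.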

Posted 4 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior DevOps Engineer (Azure) Location: Ahmedabad, Gujarat, India Experience: 5+ Years Duration: 1 year Job Overview: We are looking for a Senior DevOps Engineer who is highly skilled and detail-oriented with a strong background in Azure cloud infrastructure . This role is ideal for professionals who are passionate about automation, scalability, and driving best practices in CI/CD and cloud deployments. You will be responsible for managing and modernizing medium-sized, multi-tier environments, building and maintaining CI/CD pipelines, and ensuring application and infrastructure reliability and security. Key Responsibilities: Design, implement, and maintain efficient and secure CI/CD pipelines. Lead infrastructure development and operational support in Azure with high availability and performance. Utilize containerization technologies such as Docker and orchestration tools like Kubernetes. Implement and enforce Git best practices across development teams. Use Infrastructure as Code (IaC) tools, preferably Terraform, to provision and manage cloud resources. Establish and follow DevSecOps practices, ensuring security compliance throughout the development lifecycle. Monitor application and infrastructure performance using tools like Azure Monitor, Managed Grafana, and other observability tools. Implement and support multi-tenancy SaaS architecture and deployment best practices. Collaborate with software development teams to align with DevOps best practices. Troubleshoot and resolve issues across development, staging, and production environments. Provide leadership in infrastructure planning, design reviews, and incident management. Work with multiple cloud platforms such as Azure, AWS, and GCP, with a focus on Azure DevOps. Required Skills & Experience: 5+ years of hands-on DevOps experience. Strong communication and interpersonal skills. Expertise in Azure cloud services; experience with AWS is a plus. 
Proficient in Kubernetes, including deployment, scaling, and maintenance. Experience with web servers such as Nginx and Apache. Familiarity with the Azure Well-Architected Framework. Hands-on experience with monitoring and observability tools (e.g., Azure Monitor, Grafana). Strong understanding of DevSecOps principles and security best practices. Proven experience in debugging and troubleshooting infrastructure and applications. Experience working with multi-tenant SaaS platforms. Proficient in performance monitoring and tuning. Experience with Azure dashboarding, logging, and monitoring tools (Managed Grafana preferred). Preferred Qualifications: Azure certifications such as AZ-400 or AZ-104. Experience with cloud migration and application modernization. Familiarity with tools like Prometheus, Grafana, ELK Stack, or similar. Leadership or mentoring experience is a plus.
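Troubleshooting and reliability work of the kind this role describes often leans on retrying flaky dependencies with exponential backoff. A minimal sketch (the function name and defaults are illustrative, not from the posting):

```python
import random

def backoff_delays(base: float = 0.5, cap: float = 30.0,
                   attempts: int = 6, jitter: bool = False):
    """Yield exponential backoff delays in seconds for retrying a flaky
    dependency: base * 2**n, capped at `cap`, with optional full jitter."""
    for n in range(attempts):
        delay = min(cap, base * (2 ** n))
        # Full jitter spreads retries out so clients don't retry in lockstep.
        yield random.uniform(0, delay) if jitter else delay

delays = list(backoff_delays())
# → [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

In practice the same idea is usually configured in the platform (e.g., Kubernetes liveness probes and restart policies, or retry policies in a service mesh) rather than hand-rolled.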

Posted 4 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Key Responsibilities
• Develop and manage end-to-end ML pipelines from training to production.
• Automate model training, validation, and deployment using CI/CD.
• Ensure scalability and reliability of AI systems with Docker & Kubernetes.
• Optimize ML model performance for low latency and high availability.
• Design and maintain cloud-based/hybrid ML infrastructure.
• Implement monitoring, logging, and alerting for deployed models.
• Ensure security, compliance, and governance in AI model deployments.
• Collaborate with Data Scientists, Engineers, and Product Managers.
• Define and enforce MLOps best practices (versioning, reproducibility, rollback).
• Maintain the model registry and conduct periodic pipeline reviews.

Experience & Skills
• 5+ years in MLOps, DevOps, or Cloud Engineering.
• Strong experience in ML model deployment, automation, and monitoring.
• Proficiency in Kubernetes, Docker, Terraform, and cloud platforms (AWS, Azure, GCP).
• Hands-on with CI/CD tools (GitHub Actions, Jenkins).
• Expertise in ML frameworks (TensorFlow, PyTorch, MLflow, Kubeflow).
• Understanding of APIs, microservices, and infrastructure-as-code.
• Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog).
• Strong analytical and debugging skills.
• Preferred: Cloud or MLOps certifications, real-time ML, ETL, and AI ethics knowledge.

Tools & Technologies
• Cloud & Infrastructure: AWS, GCP, Azure, Terraform, Kubernetes, Docker
• MLOps & Model Management: MLflow, Kubeflow, TFX, SageMaker
• CI/CD & Automation: GitHub Actions, Jenkins, ArgoCD, Airflow
• Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog
• Collaboration & Documentation: Slack, Confluence, JIRA, Notion

Why Join Yavar?
Join us at an exciting growth phase, working with cutting-edge AI technology and talented teams. This role offers competitive compensation, equity participation, and the opportunity to shape a world-class engineering organization. 
Your leadership will be crucial in turning our innovative vision into reality, creating products that reshape how enterprises harness artificial intelligence. *At Yavar, talent knows no boundaries. While experience matters, we value your drive for excellence and ability to execute. Ready to build the future of enterprise AI? We're eager to start a conversation.* To apply, please contact: digital@yavar.ai Location: Chennai / Coimbatore.
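The responsibilities above mention maintaining a model registry and enforcing rollback as an MLOps best practice. Production setups would use something like MLflow's model registry; as a toy illustration of the versioning-and-rollback idea (all names and metadata are hypothetical):

```python
class ModelRegistry:
    """Minimal in-memory model registry sketch: register immutable versions,
    promote one to production, and roll back to the previous promotion."""
    def __init__(self):
        self.versions = {}   # version -> artifact metadata
        self.history = []    # promotion history, latest last

    def register(self, version: str, metadata: dict):
        if version in self.versions:
            raise ValueError(f"version {version} already registered")
        self.versions[version] = metadata

    def promote(self, version: str):
        if version not in self.versions:
            raise KeyError(version)
        self.history.append(version)

    def rollback(self) -> str:
        if len(self.history) < 2:
            raise RuntimeError("no earlier promotion to roll back to")
        self.history.pop()
        return self.history[-1]

    @property
    def production(self):
        return self.history[-1] if self.history else None

reg = ModelRegistry()
reg.register("v1", {"auc": 0.91})
reg.register("v2", {"auc": 0.94})
reg.promote("v1")
reg.promote("v2")
reg.rollback()   # v2 misbehaves in production → back to v1
```

The key design point is that versions are immutable and promotion is an append-only log, so rollback is just "re-point production at the previous entry" rather than redeploying artifacts.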

Posted 4 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Job Description – Senior Manager, Data Engineer
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of the company IT operating model, Tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview
For the Senior Data Engineer role, we are looking for a professional with experience in designing, developing, and maintaining data pipelines. We aim to make data reliable, governed, secure, and available for analytics within the organization. As part of a team, this role will be responsible for data management across a broad range of activities, from data ingestion into cloud data lakes and warehouses to quality control, metadata management, and orchestration of machine learning models. We are also forward-looking and plan to bring innovations like data mesh and data fabric into our ecosystem of tools and processes.

What Will You Do In This Role
• Design, develop, and maintain data pipelines to extract data from a variety of sources and populate the data lake and data warehouse.
• Develop the various data transformation rules and data modeling capabilities.
• Collaborate with Data Analysts, Data Scientists, and Machine Learning Engineers to identify and transform data for ingestion, exploration, and modeling.
• Work with the data governance team to implement data quality checks and maintain data catalogs.
• Use orchestration, logging, and monitoring tools to build resilient pipelines.
• Use test-driven development methodology when building ELT/ETL pipelines.
• Develop pipelines to ingest data into cloud data warehouses.
• Analyze data using SQL.
• Use serverless AWS services like Glue, Lambda, and Step Functions.
• Use Terraform code to deploy on AWS.
• Containerize Python code using Docker.
• Use Git for version control and understand various branching strategies.
• Build pipelines to work with large datasets using PySpark.
• Develop proofs of concept using Jupyter Notebooks.
• Work as part of an agile team.
• Create technical documentation as needed.

What Should You Have
• 4-10 years of relevant experience
• Good experience with AWS services like S3, ECS, Fargate, Glue, Step Functions, CloudWatch, Lambda, and EMR
• Any AWS developer or architect certification
• Agile development methodology
• SQL; proficiency in Python and PySpark
• Good with Git, Docker, Terraform
• Ability to work in cross-functional teams
• Bachelor's Degree or equivalent experience in a relevant field such as Engineering (preferably Computer Engineering) or Computer Science

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. 
Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Remote
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R329038
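This role centers on SQL analysis and data quality checks before data is loaded downstream. A small self-contained illustration of that kind of check, using SQLite as a stand-in warehouse (the table and columns are invented for the example):

```python
import sqlite3

# In-memory SQLite table standing in for a warehouse staging table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE patients (id INTEGER PRIMARY KEY, mrn TEXT, visit_date TEXT)"
)
conn.executemany(
    "INSERT INTO patients (mrn, visit_date) VALUES (?, ?)",
    [("A1", "2024-01-03"), ("A2", None), ("A1", "2024-02-11"), (None, "2024-03-05")],
)

# Completeness check: count rows violating NOT-NULL expectations before loading.
null_mrn, null_date = conn.execute(
    "SELECT SUM(mrn IS NULL), SUM(visit_date IS NULL) FROM patients"
).fetchone()
# null_mrn == 1, null_date == 1 → fail the pipeline run or quarantine the rows
```

In a real pipeline the same assertion would typically live in a quality-check step (e.g., a Glue job or a dbt/Great Expectations test) and gate the load rather than run ad hoc.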

Posted 4 days ago

Apply

4.0 years

0 Lacs

Thane, Maharashtra, India

On-site

DevOps Engineer - Kubernetes Specialist
Experience: 4 - 8 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' Clients)
Must-have skills: Kubernetes, CI/CD, Google Cloud

Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence. 
Key Responsibilities & Required Skills

Kubernetes Infrastructure & Deployment
Responsibilities:
• Design, deploy, and manage production-grade Kubernetes clusters
• Perform cluster upgrades, patching, and maintenance with minimal downtime
• Deploy and manage multiple microservices with ingress controllers and networking
• Configure storage solutions and persistent volumes for stateful applications
Required Skills:
• 3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE)
• Strong experience with Google Cloud Platform (GCP) and GKE-specific features
• Deep understanding of Docker, container orchestration, and GCP networking concepts
• Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies

CI/CD, Monitoring & Automation
Responsibilities:
• Design and implement robust CI/CD pipelines for Kubernetes deployments
• Implement comprehensive monitoring, logging, and alerting solutions
• Leverage AI tools and automation to improve team efficiency and task speed
• Create dashboards and implement GitOps workflows
Required Skills:
• Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms
• Experience with Prometheus, Grafana, ELK stack, or similar monitoring solutions
• Knowledge of Infrastructure as Code tools (Terraform, Ansible)
• Familiarity with AI/ML tools for DevOps automation and efficiency improvements

Cost Optimization & Application Management
Responsibilities:
• Analyze and optimize resource utilization across Kubernetes workloads
• Implement right-sizing strategies for services and batch jobs
• Deploy and manage Java-based applications and MySQL databases
• Configure horizontal/vertical pod autoscaling and resource management
Required Skills:
• Experience with resource management, capacity planning, and cost optimization
• Understanding of Java application deployment and MySQL database administration
• Knowledge of database operators, StatefulSets, and backup/recovery solutions
• Proficiency in scripting languages (Bash, Python, or Go)

Preferred Qualifications
• Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build)
• Knowledge of GKE advanced features (Workload Identity, Binary Authorization, Config Connector)
• Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus
• Knowledge of container security tools and chaos engineering
• Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd)
• Familiarity with AI-powered monitoring and predictive analytics platforms

Key Competencies
• Strong problem-solving skills with an innovative mindset toward AI-driven solutions
• Excellent communication and collaboration abilities
• Ability to work in fast-paced, agile environments with attention to detail
• Proactive approach to identifying issues using modern tools and AI assistance
• Ability to mentor team members and promote AI adoption for team efficiency

Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply! and register or log in on our portal
2. Upload an updated resume and complete the screening form
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client:
Ripplehire is a recruitment SaaS for companies to identify the right candidates from employees' social networks and gamify the employee referral program with contests and referral bonuses to engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) Staff Pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is a tool available on a subscription-based pricing model.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. 
Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
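The cost-optimization duties in this posting hinge on autoscaling behavior. Kubernetes' Horizontal Pod Autoscaler computes desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the configured replica bounds; a small sketch of that rule (parameter names are illustrative):

```python
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float, min_r: int = 1, max_r: int = 10) -> int:
    """Kubernetes HPA scaling rule: ceil(current * current/target),
    clamped to the configured min/max replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_r, min(max_r, desired))

# 4 pods averaging 90% CPU against a 60% target → scale out to 6 pods.
hpa_desired_replicas(4, current_metric=90, target_metric=60)  # → 6
```

Right-sizing works the other way around: lowering requests (the denominator behind utilization metrics) so the same traffic needs fewer, better-packed pods.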

Posted 4 days ago

Apply

7.0 years

1 - 1 Lacs

Bengaluru, Karnataka

Remote

Python Full Stack Developer – Team Lead (7+ Years – Bangalore – Hybrid)
Location: Bangalore
Work Model: Hybrid (3 days from office)
Experience Required: 7+ years
Role Type: Individual Contributor
Client: US-based multinational banking institution

Role Summary
We are seeking a highly skilled Python Full Stack Developer – Team Lead to join a fast-paced banking technology team. This role demands hands-on experience with backend development in Python, frontend in React.js, strong SQL expertise in PostgreSQL or MySQL, and exposure to cloud deployment (AWS or Azure). The candidate must also demonstrate experience in leading and mentoring 5–6 developers, conducting code reviews, and coordinating work within Agile teams. This is a hands-on IC role with clear leadership expectations, not a pure management position. Angular, Docker, and CI/CD tools like Jenkins/GitHub Actions are nice-to-have and not critical.

Skill Expectations
• Frontend Alternative (Angular): Exposure to Angular components, routing, and TypeScript bindings preferred. Ability to debug or maintain existing Angular code is sufficient.
• Version Control (Git): Must know how to use Git for branching, merging, conflict resolution, and pull request workflows. CLI or GUI familiarity accepted.
• Containerization (Docker): Should understand the concept of containers. Able to write simple Dockerfiles and run containers locally for development or testing.
• CI/CD Automation (Jenkins / GitHub Actions): Exposure to basic pipeline creation or consumption. Not required to build complex pipelines but should understand how build and deploy stages work.
• DevOps Awareness (environments, release cycles, rollback): Conceptual familiarity with release workflows, staging environments, and basic deployment health checks. Hands-on not required.
• Backend Development (Python; Flask or Django preferred): Must have independently developed modular services with routing, input validation, structured error handling, and logging. Should demonstrate strong understanding of request lifecycle, middleware usage, and packaging reusable modules.
• Frontend Development (React.js): Must have built React components from scratch. Should be comfortable with React Hooks (useState, useEffect), Context API, conditional rendering, props drilling, reusable components, and state management patterns. Integration with REST APIs is essential.
• Database Engineering (PostgreSQL or MySQL, either one): Should have independently written optimized SQL queries, including complex JOINs, subqueries, indexing strategies, and views. Must be capable of designing normalized schemas and writing migration scripts. Experience with query performance debugging is expected.
• API Design & Integration (RESTful APIs): Must have designed or extended REST APIs with a clear understanding of REST verbs, URI structuring, authentication (e.g., JWT, OAuth2), pagination, versioning, and error code handling. Should be able to both consume and expose APIs.
• Cloud Deployment (AWS or Azure): Must have participated in deployments involving services like AWS EC2, Lambda, S3, CloudWatch or Azure App Services, Blob Storage, Functions. Not expected to manage infrastructure end-to-end but must understand deployment flow and troubleshooting basics.
• Team Leadership (task allocation, mentoring, code review): Must have led a team of 5–6 developers in an Agile environment. Should have experience assigning tasks, mentoring juniors, conducting code reviews with quality gates, and coordinating standups. Ownership of delivery at a module level is required.

Job Types: Full-time, Contractual / Temporary, Freelance
Contract length: 12 months
Pay: ₹110,000.00 - ₹160,000.00 per month
Benefits: Health insurance, Work from home
Schedule: Day shift
Work Location: Hybrid remote in Bengaluru Rural, Karnataka
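The API design expectations here include pagination and error handling. A framework-agnostic sketch of offset pagination as it might back a Flask or Django endpoint (the helper name and defaults are illustrative):

```python
def paginate(items, page: int, per_page: int = 20):
    """Offset pagination sketch for a REST endpoint: returns one page of
    results plus the metadata a client needs to request the next page."""
    if page < 1 or per_page < 1:
        # In a real API this would map to an HTTP 400 response.
        raise ValueError("page and per_page must be >= 1")
    total = len(items)
    start = (page - 1) * per_page
    return {
        "data": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": total,
        "has_next": start + per_page < total,
    }

paginate(list(range(45)), page=3, per_page=20)
# → last partial page (items 40-44), has_next False
```

Offset pagination is simple but drifts under concurrent inserts; cursor (keyset) pagination on an indexed column is the usual fix at larger scale.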

Posted 4 days ago

Apply

8.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Role – Lead / Senior DevOps Engineer
Skills – Mandatory: Microsoft Azure (expert level), CI/CD Pipeline Development, Infrastructure as Code (IaC), Containerisation, AWS
Skills – Primary: Cloud Networking; Backup, HA, and Disaster Recovery; Monitoring & Logging; Security & Compliance
Skills – Good to have: Multi-Cloud Exposure, Cost Optimisation, Exposure to the Render cloud platform
Qualification: BTech/MCA
Total Experience: 8+ years
Relevant Experience: 5+ years
Work Location: TVM/Kochi (work-from-office job); candidates from Kerala and Tamil Nadu preferred

Job Purpose (both Onsite / Offshore)
We are seeking a highly skilled and experienced Senior Cloud / DevOps Engineer with deep expertise in Microsoft Azure to join our dynamic technology team. The ideal candidate will have a strong foundation in cloud architecture, CI/CD pipeline development, infrastructure automation, and DevSecOps practices. This role is critical in driving the design, implementation, and optimization of scalable, secure, and resilient cloud-based solutions using Azure services. The selected candidate will play a key role in enabling continuous delivery, improving deployment frequency, and enhancing platform reliability. In addition to hands-on engineering responsibilities, the role involves close collaboration with development, operations, and security teams to ensure that best practices in infrastructure-as-code, monitoring, security compliance, and cost optimization are consistently applied across environments. The engineer will also contribute to developing reusable DevOps templates and frameworks to accelerate project delivery and ensure operational excellence.

Job Description / Duties and Responsibilities
• Collaborate with development teams to establish and enhance continuous integration and delivery (CI/CD) pipelines, including source code management, build processes, and deployment automation.
• Publish and disseminate CI/CD best practices, patterns, and solutions. 
• Design, configure, and maintain cloud infrastructure components using platforms such as Azure, AWS, GCP, and other cloud providers. • Implement and manage infrastructure as code (IaC) using tools like Terraform or Bicep or CloudFormation to ensure consistency, scalability, and repeatability of deployments. • Monitor and optimize cloud-based systems, addressing performance, availability, and scalability issues. • Implement and maintain containerization and orchestration technologies like Docker and Kubernetes to enable efficient deployment and management of applications. • Collaborate with cross-functional teams to identify and resolve operational issues, troubleshoot incidents, and improve system reliability. • Establish and enforce security best practices and compliance standards for cloud infrastructure and applications. • Automate infrastructure provisioning, configuration management, and monitoring tasks using tools like Ansible, Puppet, or Chef. • Ensure that the service’s uptime and response time SLAs/OLAs are met or surpassed. • Build or maintain CICD building blocks and shared libraries proactively for app and development teams to enable quicker build and deployment. • Actively participate in bridge calls with team members and contractors/vendors to prevent or quickly address problems. • Troubleshoot, identify, and fix problems in the DevSecOps domain. • Ensure incident tracking tools are updated in accordance with established norms and processes, gather all essential data and document any discoveries and concerns. • Align with technological Systems/Software Development Life Cycle (SDLC) processes and industry standard service management principles (such as ITIL) • Create and publish engineering platforms and solutions. Job Specification / Skills and Competencies • Expertise in any one of Azure/AWS/GCP. Azure is a mandatory requirement. • 8+ years of related job experience. 
• Strong experience in containerization and container orchestration technologies - Docker, Kubernetes, etc. • Strong experience with infrastructure automation tools like Terraform/Bicep/CloudFormation, etc. • Knowledge of DevOps automation (Terraform, GitHub, GitHub Actions) • Good knowledge of monitoring/observability tools and processes, including CloudWatch, ELK stack, CloudTrail, Kibana, Grafana, and Prometheus; infra monitoring using Nagios or Zabbix • Experience working with Operations teams in an Agile development model and all SDLC phases • Comprehensive technical expertise in a variety of DevSecOps toolkits, including Ansible, Jenkins, Artifactory, Jira, Black Duck, Terraform, Git/version control software, or comparable technologies • Familiarity with information security frameworks and standards • Exposure to the Render cloud platform is desirable and considered a plus • Familiarity with API security, container security, and Azure cloud security • Excellent analytical and interpersonal skills • Strong debugging/troubleshooting skills • Adherence to Information Security Management policies and procedures. Interested candidates, please send your resume to: gigin.raj@greenbayit.com Mobile: 8943011666
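IaC tools such as Terraform apply resources in dependency order by building a graph and walking it topologically. A small illustration of that idea using Python's stdlib graphlib (the resource names are hypothetical, not a real Terraform configuration):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph of the kind an IaC tool resolves internally:
# each resource maps to the set of resources it depends on.
resources = {
    "resource_group": set(),
    "vnet": {"resource_group"},
    "subnet": {"vnet"},
    "aks_cluster": {"subnet", "resource_group"},
    "app_deployment": {"aks_cluster"},
}

# static_order() yields every resource only after all of its dependencies.
order = list(TopologicalSorter(resources).static_order())
```

The same graph also determines what can be created in parallel (independent branches) and the reverse order used when destroying resources.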

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role - AWS Cloud Engineer
Experience - 4 to 8 Yrs
Location - Chennai, Bangalore, Hyderabad, Mumbai, Indore

JD – AWS Cloud Engineer
Cloud Infrastructure:
• AWS services: EC2, S3, VPC, IAM, Lambda, RDS, Route 53, ELB, CloudFront, Auto Scaling
• Serverless architecture design using Lambda, API Gateway, and DynamoDB
Containerization:
• Docker and orchestration with ECS or EKS (Kubernetes)
Infrastructure as Code (IaC):
• Terraform (preferred), AWS CloudFormation
• Hands-on experience creating reusable modules and managing cloud resources via code
Automation & CI/CD:
• Jenkins, GitHub Actions, GitLab CI/CD, AWS CodePipeline
• Automating deployments and configuration management
Scripting & Programming:
• Proficiency in Python, Bash, or PowerShell for automation and tooling
Monitoring & Logging:
• CloudWatch, CloudTrail, Prometheus, Grafana, ELK stack
Networking:
• VPC design, Subnets, NAT Gateway, VPN, Direct Connect, Load Balancing
• Security Groups, NACLs, and route tables
Security & Compliance:
• IAM policies and roles, KMS, Secrets Manager, Config, GuardDuty
• Implementing encryption, access controls, and least privilege policies
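The security bullet above calls for least-privilege IAM policies. One cheap automated check is flagging Allow statements that grant wildcard actions or resources; a sketch of such a lint (not a substitute for purpose-built tools like IAM Access Analyzer):

```python
import json

def find_wildcard_statements(policy_json: str):
    """Scan an IAM policy document and return the indices of Allow
    statements whose Action or Resource is a bare "*"."""
    policy = json.loads(policy_json)
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list in IAM JSON.
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(i)
    return findings

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
})
find_wildcard_statements(policy)  # → [1]
```

Note the scoped `arn:aws:s3:::logs/*` resource is deliberately not flagged; only bare `"*"` entries are, which is the coarsest least-privilege violation.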

Posted 4 days ago

Apply

0 years

0 Lacs

Tamil Nadu, India

Remote

Job Title: Endpoint Management Expert (Microsoft Intune, Microsoft Defender, Power Automate, Power Apps, Microsoft Purview, Microsoft Exchange Admin, SharePoint/Teams Admin)
Location: [Bangalore/Coimbatore/Hybrid/Remote]
Employment Type: [Full-Time / Contract]
Department: IT / End-User Computing

About the Role: We are seeking a highly skilled Endpoint Management Expert with deep expertise across Microsoft Intune, Microsoft Defender, Power Automate, Power Apps, Microsoft Purview, Microsoft Exchange Administration, and SharePoint/Teams Administration. This role covers comprehensive cross-platform device management including Windows, macOS, iOS, and Android. You will be central to shaping and advancing our endpoint technology landscape by modernizing management processes, securing our hybrid work environments, and driving operational excellence across the digital workplace. From deploying applications and managing devices to enforcing compliance and security policies, you'll ensure that our users—wherever they are—have a seamless, reliable, and secure experience.

Key Responsibilities:
• Lead the design, deployment, and management of endpoint devices across Windows, Apple, and Android using Microsoft Intune and related tools.
• Implement and maintain security policies and threat protection using Microsoft Defender across all managed endpoints.
• Develop, automate, and optimize workflows and processes leveraging Power Automate and Power Apps to improve operational efficiency and compliance.
• Manage and govern data protection, classification, and compliance frameworks using Microsoft Purview to ensure organizational and regulatory requirements are met.
• Administer and support Microsoft Exchange Online, ensuring email service availability, security, and compliance.
• Oversee configuration, management, and user support for SharePoint Online and Microsoft Teams, driving collaboration while enforcing governance policies.
• Manage device and application compliance policies, Conditional Access, and access controls in conjunction with Azure AD/Entra to secure hybrid work environments.
• Package, deploy, and update enterprise applications using Intune and other deployment technologies.
• Monitor endpoint health and security posture, proactively identifying and resolving issues to minimize user impact.
• Collaborate with IT security, networking, and service desk teams to address incidents, implement improvements, and ensure a seamless user experience.
• Maintain detailed documentation of configurations, policies, automation scripts, and workflows.
• Stay current on Microsoft endpoint management and security technologies and best practices to continuously enhance the organization's digital workplace capabilities.

Essential Skills & Experience:
• Proven experience designing and implementing JML (Joiner, Mover, Leaver) lifecycle automation using Microsoft Power Platform (Power Automate, Power Apps) integrated with enterprise identity management systems such as Entra ID/Azure AD.
• Strong expertise in Power Automate for building complex automated workflows, including multi-stage approvals, notifications, and API integrations.
• Hands-on experience creating Power Apps forms integrated with SharePoint Online and Microsoft Teams to capture structured data using dynamic dropdowns and data validation.
• Solid knowledge of Microsoft Entra ID (Azure AD) user lifecycle management, including provisioning, group membership management, license assignment, and access revocation.
• Experience integrating with third-party HR and recruitment systems such as JobAdder via APIs to synchronize user data and retrieve unique user identifiers.
• Familiarity managing license assignments and security group memberships based on user roles, client mappings, and business logic stored in SharePoint or other configuration sources.
• Ability to generate and manipulate JSON payloads for automated provisioning and API communication.
• Experience implementing approval workflows for sensitive or privileged access with multi-level escalation.
• Strong understanding of audit logging and compliance, including tracking all JML activities in SharePoint for traceability and reporting purposes.
• Proficiency in designing email notification flows targeting HR, IT, line managers, and other stakeholders throughout the JML process lifecycle.
• Experience working with service accounts and adhering to security best practices to ensure least privilege and secure automation execution.
• Knowledge of email archiving automation for mailbox cleanup following leaver events, including compliance with retention policies and email isolation.
• Familiarity with business processes around user onboarding, role changes, and offboarding in hybrid cloud environments.
• Experience building dynamic, configurable systems supporting multiple user types (e.g., Associates, Corporate Users), license types, and client-specific access requirements.
• Strong collaboration skills to work effectively with HR, IT security, compliance, and business teams to define requirements and deliver scalable solutions.
• Excellent documentation skills for process workflows, automation designs, and technical configurations.
• Experience in Endpoint Management and Device Security, with a focus on Microsoft Intune.
• Deep technical expertise managing Windows enterprise environments.
• Proven experience managing and securing macOS, iOS, and Android devices using Microsoft Intune (MDM/MAM).
• Hands-on experience with application packaging and deployment using industry-standard tools and formats (MSIX, MSI, App-V, Win32 apps).
• Proficiency in PowerShell scripting for automation, policy enforcement, and issue resolution.
• Strong troubleshooting skills in device connectivity, policy conflicts, and compliance failures.
• Expertise in Azure AD, Conditional Access, Windows Autopilot, and Microsoft 365 tools.

Preferred Qualifications:
• Proven track record designing and implementing Joiner, Mover, Leaver (JML) automation using Microsoft Power Platform (Power Automate, Power Apps) integrated with enterprise identity management systems such as Entra ID/Azure AD.
• Advanced expertise in building complex Power Automate workflows incorporating approvals, notifications, multi-stage escalation, and API integrations.
• Hands-on experience developing Power Apps forms integrated with SharePoint Online and Microsoft Teams, with dynamic data-driven controls and validations.
• In-depth knowledge of user lifecycle management in Microsoft Entra ID/Azure AD, including provisioning, license assignment, group memberships, and access revocation.
• Experience integrating with third-party HR/recruitment systems (e.g., JobAdder) using APIs for user synchronization and unique identifier management.
• Skilled in managing license and security group assignments based on role, client mappings, and configurable business logic.
• Proficiency in creating and manipulating JSON payloads for automated provisioning and API communication.
• Experience implementing and managing approval workflows for sensitive or high-privilege access requests with robust governance controls.
• Strong understanding of audit logging, compliance requirements, and traceability, with experience logging JML activities for audit and reporting purposes.
• Expertise in designing and implementing automated email notification flows to keep HR, IT, line managers, and other stakeholders informed throughout the user lifecycle.
• Knowledge of service account management and security best practices to enforce least privilege and secure automation execution.
• Familiarity with email archiving and mailbox cleanup automation to comply with retention policies post-leaver processing.
• Experience with endpoint management and security, including hands-on management of Windows 11, macOS, iOS, and Android devices via Microsoft Intune (MDM/MAM).
• Proficiency in application packaging and deployment using industry standards and formats.
• Strong scripting skills, especially
PowerShel l, for automation, policy enforcement, and troubleshooting .Deep expertise with Azure AD Conditional Access, Windows Autopilot, and Microsoft 36 5 tools .Excellent troubleshooting skills for device connectivity, policy conflicts, and compliance issues .Proven ability to collaborate effectively across HR, IT security, compliance, and business teams to design and implement scalable, compliant, and user-centric solutions .Strong documentation skills for technical configurations, workflows, and automation designs .Why You’ll Love Working With Us :You’ll build automation that makes important processes faster, easier, and more secure .Work with the latest Microsoft tools like Power Automate, Intune, and Entra ID .Manage devices across Windows, macOS, iOS, and Android for a seamless user experience .Collaborate with different teams to create solutions that really help the business .Keep learning new skills and grow your career in a supportive environment .Take ownership of key projects that improve security and compliance .Enjoy a flexible, hybrid work environment that values your ideas and effort .Health Insurance, EPF s
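To give candidates a concrete sense of the "JSON payloads for automated provisioning" skill the posting asks for, here is a minimal sketch (in Python for illustration; the role itself emphasizes Power Automate and PowerShell). The field names follow the shape of a Microsoft Entra ID/Graph-style user object, but the helper function, its parameters, and the sample values are hypothetical, not part of this role description:

```python
import json

def build_joiner_payload(display_name, upn, department, usage_location="IN"):
    """Assemble a user-provisioning payload for a joiner event.

    Field names mirror a Graph-style 'user' resource; the helper itself
    is illustrative only, not an official API client.
    """
    payload = {
        "accountEnabled": True,
        "displayName": display_name,
        "userPrincipalName": upn,
        "mailNickname": upn.split("@")[0],
        "department": department,
        # usageLocation is typically required before a license can be assigned
        "usageLocation": usage_location,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": "<generated-initial-password>",
        },
    }
    return json.dumps(payload)

# Hypothetical joiner record; the resulting JSON string would be the
# request body sent to the provisioning API.
body = build_joiner_payload("Asha Rao", "asha.rao@contoso.com", "IT")
print(json.loads(body)["mailNickname"])
```

In a real JML flow, the same payload would be assembled in Power Automate from the HR/recruitment system's record (e.g., synced from JobAdder) and posted by a least-privilege service account, with the request logged to SharePoint for audit traceability.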

Posted 4 days ago
