6.0 years
0 Lacs
India
On-site
What You’ll Do
- Design and deliver high-traffic, high-performance solutions that provide enriched data back to engineering systems
- Define egress strategies and architectures to expose analytical data from the public cloud
- Collaborate with stakeholders and engineering partners to define the architecture and implement solutions
- Act as a subject matter expert for technical guidance, solution design, and best practices
- Develop scalable solutions that implement REST, and use GraphQL and API gateways to provide user-friendly interfaces
- Develop streaming data pipelines for custom ingestion, processing, and egress to the public cloud
- Design and implement Kafka/Pub-Sub services to publish events adhering to catalog messaging standards
- Develop containerized solutions and CI/CD pipelines, and use orchestration services such as Kubernetes
- Define key metrics and troubleshoot using logs in Datadog, APM, Kibana, Grafana, Stackdriver, etc.
- Manage and mentor associates; ensure the team is challenged, exposed to new opportunities, and learning, while still delivering on ambitious goals
- Develop a technical center of excellence within the analytics organization through training, mentorship, and process innovation
- Build, lead, and mentor a team of highly talented data professionals working with petabyte-scale datasets and rigorous SLAs

What You’ll Need
- A graduate of a computer science, mathematics, engineering, or physical science degree program with 6+ years of relevant industry experience
- 6+ years of programming experience with at least one language or framework such as Python, Go, JavaScript, or React
- Experience leading the design and implementation of medium- to large-scale complex projects
- Experience building high-performance, scalable, and fault-tolerant services and applications
- Experience with service-oriented architecture (REST and GraphQL) and the ability to architect scalable microservices
- Experience with web frameworks such as Django, FastAPI, Flask, Spring, Grails, or Struts
- Experience with NoSQL solutions (MongoDB, HBase/Bigtable, Aerospike) and caching technologies (Redis, Memcached)
- Experience building big data pipelines using cloud-computing technologies preferred
- Experience developing at scale on cloud platforms such as Google Cloud Platform (preferred), AWS, Azure, or Snowflake
- Experience with real-time data streaming tools such as Kafka, Kinesis, Pub/Sub, Apache Storm, or similar
- Experience with Docker containers and Kubernetes orchestration
- Experience with unit testing frameworks and CI/CD implementation (Buildkite preferred)
- Experience with monitoring and logging tools such as Datadog, Grafana, Kibana, Splunk, Stackdriver, etc.
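The role above centers on publishing events to Kafka or Pub/Sub while adhering to messaging standards. As a minimal sketch of that pattern in Python, here is an event-envelope builder; the field names and schema are illustrative assumptions, not an actual catalog standard.

```python
import json
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    """Minimal event envelope for publishing to Kafka or Pub/Sub.

    The envelope fields here are illustrative, not a real messaging standard.
    """
    event_type: str
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> bytes:
        # Kafka and Pub/Sub client libraries both accept raw bytes as the body.
        return json.dumps({
            "event_id": self.event_id,
            "event_type": self.event_type,
            "produced_at": self.produced_at,
            "payload": self.payload,
        }).encode("utf-8")

# A real producer would hand event.to_json() to a Kafka or Pub/Sub publish call.
event = Event(event_type="user.updated", payload={"user_id": 42})
body = json.loads(event.to_json())
```

Stamping every event with an ID and a UTC timestamp at produce time is what makes downstream deduplication and ordering checks possible, which is why envelope standards usually mandate both.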
Posted 15 hours ago
5.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Senior (CTM – Threat Detection & Response)

Key capabilities:
- Experience working with Splunk Enterprise, Splunk Enterprise Security, and Splunk UEBA
- Minimum of Splunk Power User certification
- Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, or PowerShell
- Perform remote and on-site gap assessments of the SIEM solution
- Define evaluation criteria and approach based on the client requirement and scope, factoring in industry best practices and regulations
- Conduct interviews with stakeholders and review documents (SOPs, architecture diagrams, etc.)
- Evaluate the SIEM based on the defined criteria and prepare audit reports
- Good experience providing consulting to customers during the testing, evaluation, pilot, production, and training phases to ensure a successful deployment
- Understand customer requirements and recommend best practices for SIEM solutions
- Offer consultative advice on security principles and best practices related to SIEM operations
- Design and document a SIEM solution to meet customer needs
- Experience onboarding data into Splunk from various sources, including unsupported (in-house-built) sources, by creating custom parsers
- Verify data from log sources in the SIEM, following the Common Information Model (CIM)
- Experience parsing and masking data prior to ingestion into the SIEM
- Provide support for data collection, processing, analysis, and operational reporting systems, including planning, installation, configuration, testing, troubleshooting, and problem resolution
- Assist clients in fully optimizing the SIEM system capabilities as well as the audit and logging features of the event log sources
- Assist clients with technical guidance to configure in-scope end log sources for integration with the SIEM
- Experience handling big data integration via Splunk
- Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans
- Hands-on experience developing and customizing Splunk Apps and Add-ons
- Build advanced visualizations (interactive drilldowns, glass tables, etc.)
- Build and integrate contextual data into notable events
- Experience creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
- Capability to develop advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that provide near-real-time visibility into the performance of client applications
- Experience installing, configuring, and using premium Splunk apps and add-ons such as ES, UEBA, and ITSI
- Sound knowledge of configuring alerts and reports
- Good exposure to automatic lookups, data models, and creating complex SPL queries
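One capability listed above is masking data prior to SIEM ingestion. A minimal Python sketch of that idea follows; the regex patterns are illustrative examples only, since the fields to mask would come from the client's data-handling policy.

```python
import re

# Illustrative masking rules; a real deployment would mask whatever fields the
# client's data-handling policy mandates (these two patterns are examples).
MASKS = [
    (re.compile(r"\b\d{16}\b"), "****CARD****"),           # 16-digit card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "***@***"),   # email addresses
]

def mask_event(raw: str) -> str:
    """Mask sensitive values in a raw log line before forwarding it to the SIEM."""
    for pattern, replacement in MASKS:
        raw = pattern.sub(replacement, raw)
    return raw

line = "user=alice@example.com card=4111111111111111 action=login"
masked = mask_event(line)
```

In Splunk specifically, the same effect is usually achieved at ingest time with props/transforms (SEDCMD or TRANSFORMS), so this sketch only shows the masking logic, not the integration point.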
- Create, modify, and tune SIEM rules to adjust alert and incident specifications to meet client requirements
- Work with the client SPOC on correlation rule tuning (per the use case management life cycle) and on incident classification and prioritization recommendations
- Experience creating custom commands, custom alert actions, adaptive response actions, etc.

Qualification and experience:
- Minimum of 5 to 11 years’ experience, with a depth of network architecture knowledge that will translate to deploying and integrating a complicated security intelligence solution into global enterprise environments
- Strong oral, written, and listening skills are an essential component of effective consulting
- Strong background in network administration; the ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary
- Must have knowledge of vulnerability management, and of Windows and Linux basics, including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security, and troubleshooting

Good to have:
- Experience designing and implementing Splunk with a focus on IT operations, application analytics, user experience, application performance, and security management
- Multiple cluster deployment and management experience per vendor guidelines and industry best practices
- Troubleshoot Splunk platform and application issues; escalate issues and work with Splunk support to resolve them
- Certification in a SIEM solution such as IBM QRadar, Exabeam, or Securonix will be an added advantage
- Certifications in a core security-related discipline will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 15 hours ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Linux TSR (Technical Support Resident Engineer)
Required Technical Skill Set: L2 Linux, L1–L2 DevOps, CloudOps
Desired Experience Range: 3 to 5 years
Location of Requirement: Chennai, Ahmedabad (Gandhi Nagar)

Desired Competencies (Technical/Behavioral)

Must-Have (ideally at least 3 years of hands-on experience with Linux)
· Compute: Demonstrates a deep understanding of compute concepts, including virtualization, operating systems, system administration, performance, networking, and troubleshooting.
· Web Technologies: Experience with web technologies and protocols (HTTP/HTTPS, REST APIs, SSL/TLS) and experience troubleshooting web application issues.
· Operating Systems: Strong proficiency in Linux (e.g., RHEL, CentOS, Ubuntu) system administration. Experience with Windows Server administration is a plus.
· GCP Proficiency: Experience with GCP core services, particularly Compute Engine, networking (VPC, subnets, firewalls, load balancing), storage (Cloud Storage, Persistent Disk), and related services within the compute domain.
· Security: Solid understanding of security best practices for securing compute resources in cloud environments, including IAM implementation, access control, vulnerability management, and protection against unauthorized access and data exfiltration.
· Monitoring and Logging: Experience with monitoring tools for troubleshooting and performance analysis.
· Scripting and Automation: Proficiency in scripting (e.g., Bash, Python, PowerShell) for system administration, automation, and API interaction. Experience with automation tools (e.g., Terraform, Ansible, Jenkins, Cloud Build) is essential.
· Networking: Solid understanding of networking concepts and protocols (TCP/IP, DNS, BGP, routing, load balancing) and experience troubleshooting network issues.
· Problem-Solving Skills: Excellent analytical and problem-solving skills, with the ability to identify and resolve complex technical issues.
· Communication Skills: Strong communication and collaboration skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.

Good-to-Have
● Minimum of 3 years’ experience implementing both on-premises and cloud-based infrastructure
● Certifications related to Linux/CloudOps, preferably Associate Cloud Engineer
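The must-have list above pairs Linux system administration with scripting for automation. A minimal sketch of that kind of task in Python follows: a disk-usage check of the sort a resident engineer might wire into monitoring. The warning threshold is an illustrative default, not a stated requirement.

```python
import shutil
import socket

def disk_usage_report(path: str = "/", warn_pct: float = 90.0) -> dict:
    """Report disk usage for a mount point and flag it when usage crosses
    a threshold. The 90% threshold is an illustrative default."""
    usage = shutil.disk_usage(path)  # returns (total, used, free) in bytes
    used_pct = usage.used / usage.total * 100
    return {
        "host": socket.gethostname(),
        "path": path,
        "used_pct": round(used_pct, 1),
        "warning": used_pct >= warn_pct,
    }

report = disk_usage_report("/")
```

In practice a check like this would be scheduled (cron, systemd timer) and its output shipped to the monitoring stack rather than printed, but the measurement itself is this simple.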
Posted 15 hours ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS!

TCS is hiring for Amazon Web Services (AWS), DevOps, Kubernetes, and Terraform.

Job Location: Chennai
Experience Range: 8–12 Years

Job Description:
- Maintains in-depth knowledge of the AWS DevOps cloud platform, provides detailed advice regarding its application, and executes specialized tasks
- Core experience in AWS; CI experience (Git, Jenkins, GitLab); Bash, PowerShell, and build automation; container experience in Docker; AWS DevOps; CKA and CKAD certifications
- Extensive hands-on experience with CI image building for both Linux and Windows containers
- Knowledge of best-practice standards for the CI image building process for both Linux and Windows containers
- Significant experience with SaaS and web-based technologies
- Skilled in continuous integration and continuous deployment using AWS DevOps services
- Automation skills in Python or Bash are an added advantage
- Skilled with containerization platforms using Docker and Kubernetes
- Familiar with architecture/design patterns and reusability concepts
- Skilled in SOLID design principles and TDD
- Familiar with application security via the OWASP Top 10 and common mitigation strategies
- Detailed knowledge of database design and object/relational database technology
- Good experience with MS Fabric

AWS DevOps Implementation:
- Lead the design and implementation of CI/CD pipelines using AWS DevOps
- Configure and manage build agents, release pipelines, and deployment environments in AWS DevOps
- Establish and maintain robust CI processes to automate code builds, testing, and deployment
- Integrate automated testing into CI pipelines for comprehensive code validation

Infrastructure as Code (IaC) – Terraform:
- Use Infrastructure as Code principles to manage and provision infrastructure components on AWS
- Implement and maintain IaC templates

Monitoring and Optimization:
- Implement monitoring and logging solutions to track the performance and reliability of CI/CD pipelines
- Continuously optimize CI/CD processes for efficiency, speed, and resource utilization

Security and Compliance:
- Implement security best practices within CI/CD pipelines
- Ensure compliance with industry standards and regulatory requirements in CI/CD processes

Troubleshooting and Support:
- Provide expert-level support for CI/CD-related issues
- Troubleshoot and resolve build and deployment failures promptly
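The posting above emphasizes CI image building for Linux and Windows containers. A small Python sketch of one step in that process follows: composing a container image tag from CI metadata. The naming convention is an assumption for illustration, not an AWS or TCS standard.

```python
import re

def build_image_tag(repo: str, branch: str, commit_sha: str) -> str:
    """Compose a container image tag from CI metadata, as a CI image-building
    step might. The <repo>:<branch>-<shortsha> convention is illustrative."""
    # Docker tags may only contain [a-zA-Z0-9_.-]; sanitize the branch name
    # (slashes in branch names like "feature/login-page" are common).
    safe_branch = re.sub(r"[^a-zA-Z0-9_.-]", "-", branch)
    short_sha = commit_sha[:7]  # short SHAs keep tags readable but unique enough
    return f"{repo}:{safe_branch}-{short_sha}"

tag = build_image_tag("myapp", "feature/login-page", "3f9d2c1a8b7e6f5a4d3c2b1a")
```

Embedding the commit SHA in the tag ties every deployed image back to an exact source revision, which is what makes rollbacks and audit trails in a CI/CD pipeline tractable.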
Posted 15 hours ago
5.0 - 11.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
Posted 15 hours ago
7.0 - 10.0 years
0 Lacs
Karnataka, India
On-site
Who You’ll Work With
You’ll be joining a dynamic, fast-paced global EADP (Enterprise Architecture & Developer Platforms) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we’re passionate about pushing boundaries and setting new standards in platform development.

Who We Are Looking For
We are looking for an ambitious Lead Software Engineer – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS cloud services, Kubernetes, DevOps, Databricks, Python, and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders, and operate with urgency and integrity.

Key Skills & Traits
- Deep expertise in Kubernetes, AWS services, and full-stack development
- Working experience designing and building production-grade microservices in any programming language, preferably Python
- Experience building end-to-end CI/CD pipelines to build, test, and deploy to different AWS environments such as Lambda, EC2, ECS, and EKS
- Experience with AI/ML, with proven knowledge of building chatbots using LLMs
- Familiarity with software engineering best practices, including unit tests, code review, version control, and production monitoring
- Strong experience with React and Node.js
- Proficiency in managing cloud-native platforms, with a strong PaaS (Platform as a Service) focus
- A proactive approach, with the ability to work independently in a fast-paced, agile environment
- Strong collaboration and problem-solving skills
- Mentoring the team through complex technical problems

What You’ll Work On
You will play a key role in shaping and delivering Nike’s next-generation platforms. As a Lead Software Engineer, you’ll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You’ll also be responsible for leading the adoption of open-source and agile methodologies within the organization.

Day-to-Day Activities:
- Deep working experience with Kubernetes, AWS services, Databricks, AI/ML, etc.
- Working experience with infrastructure-as-code tools such as Helm, Kustomize, or Terraform
- Implementation of open-source projects in Kubernetes
- Set up monitoring, logging, and alerting for Kubernetes clusters
- Implement Kubernetes security best practices such as RBAC, network policies, and pod security policies
- Experience with container runtimes like Docker
- Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation
- Design, implement, and maintain robust CI/CD pipelines using Jenkins for efficient software delivery
- Manage and optimize Artifactory repositories for efficient artifact storage and distribution
- Architect, deploy, and manage AWS EC2 instances, Lambda functions, Auto Scaling Groups (ASGs), and Elastic Block Store (EBS) volumes
- Collaborate with cross-functional teams to ensure seamless integration of DevOps practices into the software development lifecycle
- Monitor, troubleshoot, and optimize AWS resources to ensure high availability, scalability, and performance
- Implement security best practices and compliance standards in the AWS environment
- Develop and maintain scripts in Python, Groovy, and Shell for automation and core engineering tasks
- Deep expertise in at least one of Python, React, or Node.js
- Good knowledge of CI/CD pipelines and DevOps tools: Jenkins, Docker, Kubernetes, etc.
- Collaborate with product managers to scope new features and capabilities
- Strong collaboration and problem-solving skills
- 7–10 years of experience designing and building production-grade platforms
- Technical expertise in Kubernetes, AWS cloud services, and cloud-native architectures
- Proficiency in Python, Node.js, React, SQL, and AWS
- Strong understanding of PaaS architecture and DevOps tools such as Kubernetes, Jenkins, Terraform, and Docker
- Familiarity with governance, security features, and performance optimization
- Keen attention to detail, a growth mindset, and the desire to explore new technologies
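The day-to-day activities above include implementing Kubernetes RBAC. As a minimal illustration, the Python sketch below builds a read-only Role manifest as a dict (the kind of object a platform tool might template and apply); the resource names and app name are illustrative assumptions.

```python
def rbac_role(namespace: str, app: str) -> dict:
    """Build a minimal read-only Kubernetes RBAC Role manifest as a dict.
    The resources and verbs shown are illustrative least-privilege defaults."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": f"{app}-reader", "namespace": namespace},
        "rules": [
            {
                "apiGroups": [""],  # "" means the core API group (pods, services, ...)
                "resources": ["pods", "pods/log"],
                "verbs": ["get", "list", "watch"],  # read-only: no create/delete
            }
        ],
    }

# "platform" and "checkout" are hypothetical namespace and app names.
role = rbac_role("platform", "checkout")
```

A Role like this only grants access within its own namespace; pairing it with a RoleBinding for a service account, and reserving ClusterRoles for genuinely cluster-wide needs, is the usual least-privilege pattern.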
Posted 16 hours ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Our client is a global technology consulting and digital solutions company that enables enterprises to reimagine business models and accelerate innovation through digital technologies. Powered by more than 84,000 entrepreneurial professionals across more than 30 countries, it caters to over 700 clients with its extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes.

Job Title: Golang Developer
Location: (PAN India) – Bangalore (Global Village Tech Park) / Hyderabad (Rai Durg) / Mumbai (Powai / Mahape) / Chennai (DLF IT Park) / Pune (Shivajinagar) / Noida (Candor Techspace, Industrial Area) / Gurgaon (Ambience Island, DLF Phase 3) / Kolkata (Merlin Infinite, Salt Lake Electronics Complex)
Experience: 5 to 12 Years
Employment Type: Contract to Hire
Work Mode: Hybrid
Notice Period: Immediate Joiners Only

Job Description:
We are looking for a skilled lead with backend development experience to design, build, and maintain scalable services.
- Develop and maintain backend services using Java/Golang (mandatory)
- Design, implement, and optimize cloud-based solutions on AWS (preferred), GCP, or Azure
- Work with SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB) for data persistence
- Architect and develop Kubernetes-based microservices, caching solutions, and messaging systems like Kafka
- Implement monitoring, logging, and alerting using tools like Grafana, CloudWatch, Kibana, and PagerDuty
- Participate in on-call rotations, handle incident response, and contribute to operational playbooks
- Adhere to software design principles, algorithms, and best practices to ensure technical excellence
- Write clear, well-documented code and contribute to technical documentation
- Hands-on experience as a backend developer
- Proficiency in Java/Golang (preferred) and JavaScript
- Experience with at least one cloud provider: AWS (preferred), GCP, or Azure
- Strong understanding of data structures, algorithms, and software design principles
- Familiarity with microservice architectures, containerization, and Kubernetes
- Knowledge of caching solutions and event-driven architectures using Kafka
- Ability to work in a globally distributed environment and contribute to high-availability services
- Strong communication skills, with an emphasis on technical documentation

Mandatory Skills: Golang
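The posting above lists caching solutions among the required skills. To illustrate the pattern rather than any particular product, here is a tiny in-process TTL cache sketch in Python; a production backend service would normally reach for Redis or Memcached instead.

```python
import time

class TTLCache:
    """Tiny in-process cache with a per-entry time-to-live.
    A sketch of the caching pattern the role describes; production systems
    would typically use Redis or Memcached rather than this."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "alice"})
hit = cache.get("user:42")     # fresh entry: cache hit
time.sleep(0.06)
miss = cache.get("user:42")    # past the TTL: evicted, cache miss
```

The TTL is what bounds staleness: reads within the window hit the cache, and anything older falls through to the backing store, which is the same trade-off a Redis `SETEX` makes.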
Posted 16 hours ago
5.0 - 11.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
Posted 16 hours ago
5.0 - 11.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Senior (CTM – Threat Detection & Response) KEY Capabilities: Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA Minimum of Splunk Power User Certification Good knowledge in programming or Scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, Bash, etc. Perform remote and on-site gap assessment of the SIEM solution. Define evaluation criteria & approach based on the Client requirement & scope factoring industry best practices & regulations Conduct interview with stakeholders, review documents (SOPs, Architecture diagrams etc.) Evaluate SIEM based on the defined criteria and prepare audit reports Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions. 
Offer consultative advice on security principles and best practices related to SIEM operations. Design and document a SIEM solution to meet customer needs. Experience onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers. Verification of log-source data in the SIEM, following the Common Information Model (CIM). Experience in parsing and masking of data prior to ingestion into the SIEM. Provide support for data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution. Assist clients to fully optimize SIEM system capabilities as well as the audit and logging features of the event log sources. Assist clients with technical guidance to configure in-scope end log sources to be integrated with the SIEM. Experience handling big data integration via Splunk. Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans. Hands-on experience in development and customization of Splunk Apps & Add-ons. Builds advanced visualizations (interactive drilldowns, glass tables etc.). Build and integrate contextual data into notable events. Experience creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks. Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that provide near-real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk apps and add-ons such as ES, UEBA, ITSI etc. Sound knowledge of configuring alerts and reports. Good exposure to automatic lookups, data models and creating complex SPL queries.
Create, modify and tune SIEM rules to adjust alert and incident specifications to meet client requirements. Work with the client SPOC on correlation rule tuning (per the use-case management life cycle), incident classification, and prioritization recommendations. Experience in creating custom commands, custom alert actions, adaptive response actions etc. Qualification & experience: Minimum of 5 to 11 years' experience, with a depth of network architecture knowledge that will translate to deploying and integrating a complex security intelligence solution into global enterprise environments. Strong oral, written and listening skills are essential to effective consulting. Strong background in network administration; ability to work at all layers of the OSI model, including being able to explain communication at any level. Must have knowledge of Vulnerability Management, and Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience designing and implementing Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment and management experience per vendor guidelines and industry best practices; troubleshooting Splunk platform and application issues, escalating and working with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
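The listing above asks for experience in masking data prior to SIEM ingestion. A minimal sketch of that idea in Python — the field patterns and placeholder values are illustrative assumptions, not taken from the posting; a real deployment would follow a reviewed, client-approved masking policy:

```python
import re

# Illustrative patterns for common sensitive fields.
MASKS = [
    (re.compile(r"\b\d{16}\b"), "****CARD****"),              # 16-digit card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "***@***"),  # email addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "x.x.x.x"),  # IPv4 addresses
]

def mask_event(raw: str) -> str:
    """Mask sensitive values in a raw log line before forwarding to the SIEM."""
    for pattern, replacement in MASKS:
        raw = pattern.sub(replacement, raw)
    return raw
```

In practice this kind of transform would run in the ingestion path (for example a heavy forwarder or an ingest-time transform) rather than as a standalone script.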
Posted 16 hours ago
5.0 - 11.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Senior (CTM – Threat Detection & Response) Key capabilities: Experience working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA. Minimum of Splunk Power User certification. Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations. Conduct interviews with stakeholders; review documents (SOPs, architecture diagrams etc.). Evaluate the SIEM based on the defined criteria and prepare audit reports. Good experience providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions.
Offer consultative advice on security principles and best practices related to SIEM operations. Design and document a SIEM solution to meet customer needs. Experience onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers. Verification of log-source data in the SIEM, following the Common Information Model (CIM). Experience in parsing and masking of data prior to ingestion into the SIEM. Provide support for data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution. Assist clients to fully optimize SIEM system capabilities as well as the audit and logging features of the event log sources. Assist clients with technical guidance to configure in-scope end log sources to be integrated with the SIEM. Experience handling big data integration via Splunk. Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans. Hands-on experience in development and customization of Splunk Apps & Add-ons. Builds advanced visualizations (interactive drilldowns, glass tables etc.). Build and integrate contextual data into notable events. Experience creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks. Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that provide near-real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk apps and add-ons such as ES, UEBA, ITSI etc. Sound knowledge of configuring alerts and reports. Good exposure to automatic lookups, data models and creating complex SPL queries.
Create, modify and tune SIEM rules to adjust alert and incident specifications to meet client requirements. Work with the client SPOC on correlation rule tuning (per the use-case management life cycle), incident classification, and prioritization recommendations. Experience in creating custom commands, custom alert actions, adaptive response actions etc. Qualification & experience: Minimum of 5 to 11 years' experience, with a depth of network architecture knowledge that will translate to deploying and integrating a complex security intelligence solution into global enterprise environments. Strong oral, written and listening skills are essential to effective consulting. Strong background in network administration; ability to work at all layers of the OSI model, including being able to explain communication at any level. Must have knowledge of Vulnerability Management, and Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience designing and implementing Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment and management experience per vendor guidelines and industry best practices; troubleshooting Splunk platform and application issues, escalating and working with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 16 hours ago
5.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Senior (CTM – Threat Detection & Response) Key capabilities: Experience working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA. Minimum of Splunk Power User certification. Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc. Perform remote and on-site gap assessments of the SIEM solution. Define evaluation criteria & approach based on the client requirement & scope, factoring in industry best practices & regulations. Conduct interviews with stakeholders; review documents (SOPs, architecture diagrams etc.). Evaluate the SIEM based on the defined criteria and prepare audit reports. Good experience providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment. Understand customer requirements and recommend best practices for SIEM solutions.
Offer consultative advice on security principles and best practices related to SIEM operations. Design and document a SIEM solution to meet customer needs. Experience onboarding data into Splunk from various sources, including unsupported (in-house built) sources, by creating custom parsers. Verification of log-source data in the SIEM, following the Common Information Model (CIM). Experience in parsing and masking of data prior to ingestion into the SIEM. Provide support for data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution. Assist clients to fully optimize SIEM system capabilities as well as the audit and logging features of the event log sources. Assist clients with technical guidance to configure in-scope end log sources to be integrated with the SIEM. Experience handling big data integration via Splunk. Expertise in SIEM content development, including developing processes for automated security event monitoring and alerting along with corresponding event response plans. Hands-on experience in development and customization of Splunk Apps & Add-ons. Builds advanced visualizations (interactive drilldowns, glass tables etc.). Build and integrate contextual data into notable events. Experience creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks. Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that provide near-real-time visibility into the performance of client applications. Experience in installation, configuration and usage of premium Splunk apps and add-ons such as ES, UEBA, ITSI etc. Sound knowledge of configuring alerts and reports. Good exposure to automatic lookups, data models and creating complex SPL queries.
Create, modify and tune SIEM rules to adjust alert and incident specifications to meet client requirements. Work with the client SPOC on correlation rule tuning (per the use-case management life cycle), incident classification, and prioritization recommendations. Experience in creating custom commands, custom alert actions, adaptive response actions etc. Qualification & experience: Minimum of 5 to 11 years' experience, with a depth of network architecture knowledge that will translate to deploying and integrating a complex security intelligence solution into global enterprise environments. Strong oral, written and listening skills are essential to effective consulting. Strong background in network administration; ability to work at all layers of the OSI model, including being able to explain communication at any level. Must have knowledge of Vulnerability Management, and Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting. Good to have: experience designing and implementing Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management; multiple cluster deployment and management experience per vendor guidelines and industry best practices; troubleshooting Splunk platform and application issues, escalating and working with Splunk support to resolve them. Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage. Certifications in a core security-related discipline will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 16 hours ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
MLOps Engineer (Azure)
Location: Ahmedabad, Gujarat
Experience: 3–5 years
Immediate joiners will be preferred.
Job Summary: We are seeking a skilled and proactive MLOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.
Key Responsibilities:
MLOps:
· Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
· Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow.
· Manage model versioning, performance tracking, and rollback strategies.
· Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.
DataOps:
· Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
· Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
· Monitor and optimize data workflows for performance and cost efficiency.
· Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.
Required Skills:
· Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure Storage solutions.
· Proficiency in Python, Bash, and scripting for automation.
· Experience with Docker, Kubernetes, and containerized deployments in Azure.
· Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
· Familiarity with monitoring, logging, and alerting in cloud environments.
· Knowledge of data modeling, data warehousing, and SQL.
Preferred Qualifications:
· Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
· Experience with Databricks, Delta Lake, or Apache Spark on Azure.
· Exposure to security best practices in ML and data environments (e.g., identity management, network security).
Soft Skills:
· Strong problem-solving and communication skills.
· Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
· Passion for automation, optimization, and driving operational excellence.
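The MLOps responsibilities above mention model versioning and rollback strategies. As an illustrative sketch only — a real pipeline would use the Azure ML or MLflow model registry rather than this toy in-memory class, and the version names and quality gate are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Toy registry showing gated promotion and rollback of model versions."""
    versions: list = field(default_factory=list)    # all registered (version, metric) pairs
    production: list = field(default_factory=list)  # promotion history, newest last

    def register(self, version: str, metric: float) -> None:
        self.versions.append((version, metric))

    def promote_latest(self, min_metric: float) -> bool:
        """Promote the newest version only if it clears the quality gate."""
        version, metric = self.versions[-1]
        if metric >= min_metric:
            self.production.append(version)
            return True
        return False

    def rollback(self) -> str:
        """Revert production to the previously promoted version."""
        if len(self.production) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.production.pop()
        return self.production[-1]
```

The point of the sketch is the shape of the workflow: every candidate is registered, promotion is gated on a validation metric, and the promotion history is what makes rollback cheap.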
Posted 16 hours ago
4.0 - 6.0 years
0 Lacs
Greater Chennai Area
On-site
Greetings from DSRC!
DSRC provides competitive compensation that is revised purely on performance, flexible work hours, and a friendly work environment. At DSRC you will have the opportunity not only to learn but also to explore your area of interest with respect to technology, and to effectively use the skills acquired over a few years of IT experience.
Experience: 4 to 6 years
Requirement: Linux System Administrator
Working from home is available on an optional basis.
Key Responsibilities
Linux System Administration
Install, configure, manage and maintain Linux servers (Ubuntu, RHEL, CentOS, or similar) in production and staging environments.
Perform server migrations, system upgrades, patches, and kernel tuning.
Troubleshoot performance issues and resolve system-related incidents.
Manage user accounts, permissions, and access control mechanisms in Linux environments.
Virtualization and Containerization Management
Deploy, manage and maintain virtualization infrastructure such as VMware, Hyper-V etc.
Perform capacity planning, resource allocation, and performance tuning for virtual machines and containers.
Build, deploy, and orchestrate containerized applications using Docker and Kubernetes.
Ensure seamless orchestration and scaling of container workloads in production environments.
Design scalable container infrastructure with Helm charts, namespaces, and network policies.
Networking Expertise
Configure and manage networking services and protocols including TCP/IP, SSH, HTTP/HTTPS, FTP, NFS, SMB, DNS, DHCP, VPN.
Configure and manage firewalls and routing.
Troubleshoot network-related issues impacting Linux systems and virtual environments.
Diagnose and resolve network connectivity issues and implement firewall rules and NAT.
Security and Hardening
Implement robust security measures including OS hardening, patch monitoring and management, firewall configuration, intrusion detection/prevention, and compliance adherence.
Conduct regular security audits and vulnerability assessments.
Harden operating systems using industry best practices (CIS benchmarks, SELinux, AppArmor).
Implement and manage security tools like Fail2Ban, auditd, and antivirus solutions.
Conduct regular vulnerability assessments and ensure system compliance.
Automation and Scripting
Develop and maintain automation scripts using Bash, Python, Ansible, Perl or similar tools to automate system installation, configuration, monitoring, provisioning of virtual machines with required software, and reporting.
Streamline operational workflows through scripting and configuration management tools.
Develop and maintain deployment pipelines for system provisioning and software rollouts.
Backup and Disaster Recovery
Design, implement, and test backup and disaster recovery strategies to ensure data integrity, high availability and business continuity.
Monitoring & Logging
Set up, deploy and manage monitoring tools to track system and application performance. Analyze metrics and optimize resource utilization.
Monitor and troubleshoot issues related to hardware, software, network protocols, and storage systems in multi-layered environments.
Proactively monitor infrastructure health and implement solutions to ensure system reliability and uptime.
Collaborate with cross-functional teams to assess system capacity, conduct performance tuning, and support application scalability.
Provide technical support and root cause analysis for system-related incidents.
Required Skills and Qualifications
Bachelor's degree in engineering.
4 years of professional experience in Linux system administration.
Strong proficiency with multiple Linux distributions (Ubuntu, RHEL, CentOS etc.).
Extensive experience with virtualization technologies (VMware, KVM, libvirt, QEMU etc.).
Proven expertise with container technologies: Docker and Kubernetes.
Solid understanding of file systems, storage environments, and network protocols (TCP/IP, SSH, HTTP/HTTPS, FTP, DNS, DHCP, VPN).
Solid experience in Linux security best practices, OS hardening, and compliance.
Hands-on scripting experience with shell scripting or Python; experience with automation tools like Ansible or Puppet.
Familiarity with system monitoring and logging tools like the ELK Stack, Fluentd, Graylog etc.
Experience with backup and disaster recovery planning.
Experience configuring monitoring tools like Nagios, Prometheus, Grafana, Zabbix etc.
Experience in system documentation, providing technical support to users, and collaborating with IT teams to improve infrastructure and processes.
Knowledge of cloud platforms (AWS, Azure, GCP) is a plus.
Strong analytical and troubleshooting skills.
Experience in database tuning and capacity planning is a plus.
Excellent verbal and written communication skills.
Self-motivated, organized, and capable of managing multiple priorities simultaneously.
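The responsibilities above include Python automation scripts and security tools like Fail2Ban. A small hedged sketch of that combination — the log-line format below is a typical OpenSSH failure line from `/var/log/auth.log`, which is an assumption; a real script would match the distribution's actual syslog layout:

```python
import re
from collections import Counter

# Typical OpenSSH failure line; adjust to the host's real log format.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins_by_ip(lines, threshold=3):
    """Return source IPs with at least `threshold` failed SSH logins --
    the kind of signal a Fail2Ban jail or firewall rule would act on."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(2)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

In a cron job, the resulting IP list could feed an nftables set or a Fail2Ban action rather than just being printed.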
Posted 16 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview If you love to design scalable fault-tolerant systems that can run efficiently with high performance and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop & maintain end-to-end automated provisioning & deployment systems for PDI solutions. You will also partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system, and with the Solution Automation team to bring test automation to the deployment automation pipeline. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell scripting & Bash, database administration, bare-metal virtualization technologies, and public cloud environments in AWS. Key Responsibilities Promote and evangelize infrastructure-as-code (IaC) design thinking every day. Design, build, and manage cloud infrastructure using AWS services.
Implementing infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources.
Working with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). Managing and scaling containerized applications using AWS services like Amazon ECR and AWS Fargate.
Employing IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner. Automating the creation and configuration of AWS resources using infrastructure templates.
Implementing monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs. Configuring alarms and alerts to proactively detect and respond to issues.
Implementing strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Elastic Disaster Recovery, or multi-region deployments.
Qualifications
7-9 years' experience in a DevOps role
1+ years leading DevOps initiatives
AWS Services: in-depth understanding and hands-on experience with various AWS services, including but not limited to:
o Compute: EC2, Lambda, ECS, EKS, Fargate, ELB
o Networking: VPC, Route 53, CloudFront, Transit Gateway, Direct Connect
o Storage: S3, EBS, EFS
o Database: RDS, MSSQL
o Monitoring: CloudWatch, CloudTrail
o Security: IAM, Security Groups, KMS, WAF
Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible
Experience with container technologies like Docker and container orchestration platforms like Kubernetes.
Ability to build and manage containerized applications and deploy them to production environments.
Familiar with containerization (Docker) and cloud orchestration (Kubernetes or Swarm).
Preferred Qualifications
Working experience in Windows and Linux systems, CLI and scripting.
Familiar with build automation in Windows and Linux, and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven) and artifact repositories (Artifactory, Nexus).
Familiarity with version control systems: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts.
Behavioral Competencies: Ensures Accountability; Manages Complexity; Communicates Effectively; Balances Stakeholders; Collaborates Effectively.
PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
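The core of the declarative, version-controlled IaC approach this role describes is reconciliation: compare the desired state against what actually exists and derive the minimal set of changes. A hedged Python sketch of that idea (resource names and attributes are invented for illustration; Terraform and CloudFormation do this with far more machinery, state files, and dependency graphs):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute a Terraform-style plan: which resources to create, update,
    or destroy so that `actual` converges on `desired`."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "destroy": sorted(set(actual) - set(desired)),
        "update": sorted(
            name for name in set(desired) & set(actual)
            if desired[name] != actual[name]
        ),
    }
```

Because the plan is a pure function of two states, running it twice against a converged environment yields an empty change set, which is the idempotency property IaC tooling relies on.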
Posted 17 hours ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
Job Information
Date Opened: 06/23/2025
Industry: IT Services
Job Type: Full time
Salary: 16 - 24 LPA
Work Experience: 5-8 Years
City: Pune City
State/Province: Maharashtra
Country: India
Zip/Postal Code: 411001
About Us
CCTech's mission is to transform human life through the democratization of technology. We are a well-established digital transformation company building applications in the areas of CAD, CFD, Artificial Intelligence, Machine Learning, 3D Webapps, Augmented Reality, Digital Twin, and other enterprise applications. We have two business divisions: product and consulting. simulationHub is our flagship product and the manifestation of our vision. Currently, thousands of users use our CFD app in their upfront design process. Our consulting division, with partners such as Autodesk Forge, AWS and Azure, is helping the world's leading engineering organizations, many of them in the Fortune 500 list of companies, achieve digital supremacy.
Job Description
We are looking for a Senior IAM Expert to architect, implement, and maintain authentication and authorization platforms across commercial and FedRAMP environments. You will drive feature parity, compliance mapping, and seamless environment transitions.
Responsibilities:
Architect and manage PingFederate and Okta-based AuthN/AuthZ solutions for both commercial and FedRAMP accounts.
Lead the migration of AuthN/AuthZ flows from ID-Core and Okta to PingFederate, including PAT and SSA integrations.
Configure and maintain multi-realm IDP instances (e.g., INT vs. Prod), manage claim mappings, and secure secrets in vaults.
Ensure compliance with FedRAMP controls (FIPS encryption, audit logging) and SOC2 requirements.
Collaborate with automation and SRE teams to integrate identity flows into CI/CD pipelines and smoke tests.
Develop end-to-end test suites for authentication, authorization, MFA, and token lifecycle scenarios.
Create and maintain detailed runbooks, architecture diagrams, and developer onboarding guides.
Requirements
5+ years in identity management, IAM engineering, or security engineering roles.
Deep expertise with PingFederate, Okta, or equivalent enterprise IDP platforms.
Strong understanding of OAuth2/OIDC protocols, SAML, and token-based authentication.
Experience with compliance frameworks (FedRAMP, SOC2, PCI-DSS).
Proficiency in scripting (Python, Bash) for automation and integration tests.
Excellent communication, design-documentation, and stakeholder-management skills.
Benefits
Opportunity to work with a dynamic and fast-paced IT organization.
Make a real impact on the company's success by shaping a positive and engaging work culture.
Work with a talented and collaborative team.
Be part of a company that is passionate about making a difference through technology.
Preferred
Familiarity with AWS Cognito, Azure AD B2C, or similar cloud identity services.
Prior experience with serverless identity integrations and Lambda-based extensions.
Knowledge of directory services and federation protocols.
Hands-on experience with disaster-recovery planning for identity systems.
Education
B.E./B.Tech or M.E./M.Tech in Computer Science, Software Engineering, or a related field.
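The role above centers on token-based authentication. To make the mechanics concrete, here is a minimal stdlib-only sketch of minting and verifying a compact HS256 JWT — purely illustrative: production flows would use an IDP such as PingFederate or Okta, typically with asymmetric signing (RS256) and full claim validation (`exp`, `aud`, `iss`), none of which this toy covers:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, per the JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Mint a compact header.payload.signature JWT signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_hs256(token: str, secret: bytes) -> bool:
    """Constant-time signature check of a compact HS256 JWT."""
    signing_input, _, sig = token.encode().rpartition(b".")
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

The constant-time comparison (`hmac.compare_digest`) is the one detail worth keeping even in a sketch: a naive `==` on signatures can leak timing information.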
Posted 17 hours ago
5.0 years
0 Lacs
India
On-site
We are thrilled to invite an experienced Power Platform Developer to join our dynamic team. You will play a pivotal role in implementing a Strategic Workforce Planning System that empowers HR Core, HR Client Services, and VPs. Your expertise will enhance supportability, safeguard data integrity, and cater to the evolving needs of our organization. In this role, you'll be leveraging cutting-edge technologies like Microsoft Azure, Azure DevOps, Databricks, Data Lake Storage, Power BI, and Power Platform (PowerApps, Power Automate, Dataverse) to drive innovation and efficiency.
Key Responsibilities
Design and implement streamlined dataflows within PowerApps to automate and integrate business processes seamlessly.
Develop and maintain robust Dataverse logic for efficient data access and management.
Build and customize approval workflows and implement effective audit logging mechanisms.
Collaborate closely with stakeholders to understand their requirements and translate them into scalable, technical solutions.
Optimize existing workflows for improved performance, reliability, and security.
Work alongside UI/UX designers to develop responsive and user-friendly interfaces.
Conduct meticulous testing and debugging to ensure seamless functionality and user experience.
Required Skills & Experience
Master's degree or equivalent experience in a relevant field.
At least 5 years of hands-on experience in application development or workflow automation.
Demonstrated success in delivering enterprise solutions using Microsoft Power Platform.
Strong proficiency with Dataverse, Power BI, Power Automate, and PowerApps.
Experience with Azure Logic Apps, Azure DevOps, and Common Data Service (CDS).
Expertise in data modeling, DAX, Power Fx, and API & custom connector integrations.
Familiarity with SharePoint, AI Builder, and Power Virtual Agents is a plus.
- Exceptional interpersonal and communication skills, with the ability to simplify complex concepts for non-technical stakeholders.
- Experience working in multicultural, collaborative environments with a high tolerance for ambiguity.

Nice to Have
- Certifications in Microsoft Power Platform or Azure technologies.
- Experience with Azure Databricks and Azure Data Lake Storage.
- Knowledge of security, compliance, CI/CD pipelines, and data governance practices.

Skills: power virtual agents, power bi, data lake storage, ai builder, azure logic apps, dataverse, power automate, microsoft power bi, api integrations, azure, microsoft power platform, power fx, custom connector integrations, azure devops, common data service (cds), sharepoint, powerapps, power platform, dax, databricks, data modeling
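As one illustration of the "API and custom connector integrations" skill above, here is a hedged sketch of composing a Dataverse Web API (OData) query in Python. The contoso org URL is hypothetical, real calls also require an OAuth bearer token, and query values are left unencoded for readability:

```python
# Hypothetical org URL -- replace with your environment's base URL.
BASE = "https://contoso.api.crm.dynamics.com/api/data/v9.2"

def dataverse_query(entity_set, select=None, filter_=None, top=None):
    """Assemble an OData query URL for the Dataverse Web API."""
    params = []
    if select:
        params.append("$select=" + ",".join(select))
    if filter_:
        params.append("$filter=" + filter_)
    if top is not None:
        params.append("$top=" + str(top))
    return BASE + "/" + entity_set + ("?" + "&".join(params) if params else "")

url = dataverse_query("accounts", select=["name", "revenue"],
                      filter_="revenue gt 100000", top=5)
print(url)
```

A custom connector would wrap the same `$select`/`$filter`/`$top` options behind connector parameters.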
Posted 18 hours ago
4.0 - 15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: DevOps Engineer / SRE
Location: Pune, Maharashtra
Experience: 4 to 15 years
Education: BE/BTech in Computer Science or related field
Notice Period: Immediate to 15 days preferred

About the Role:
We are seeking a passionate and experienced DevOps Engineer / Site Reliability Engineer (SRE) to join our dynamic team in Pune. The ideal candidate will have a solid foundation in DevOps practices, cloud platforms, and programming, with a strong focus on automation, scalability, and system reliability. This role is ideal for engineers from product-based companies, edtech/e-learning platforms, or emerging tech startups who are ready to make a real impact.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using GitHub Actions, Argo CD, and other modern DevOps tools.
- Deploy, monitor, and scale infrastructure using Kubernetes and Terraform.
- Manage and optimize cloud infrastructure on AWS or Azure.
- Collaborate with development teams to ensure smooth code deployments and system reliability.
- Work with multiple programming languages (at least three among Python, Java, TypeScript, or Ruby on Rails) for automation and scripting.
- Maintain and query relational and non-relational databases such as PostgreSQL, SQL, Elasticsearch, Kafka, or DBL.
- Apply regular expressions (regex) for log analysis, validation, and text-processing tasks.
- Optional: contribute to front-end/back-end development using React or Node.js.

Must-Have Skills:
- Hands-on experience with Kubernetes, Terraform, and CI/CD tools (especially GitHub Actions, Argo CD).
- Strong command of AWS or Azure cloud platforms.
- Proficiency in at least three of the following: Python, Java, TypeScript, Ruby on Rails (RoR).
- Strong understanding of SQL/PostgreSQL, Elasticsearch, Kafka, and other relevant data technologies.
- Solid grasp of regular expressions (regex) for automation and scripting tasks.

Good to Have:
- Exposure to React or Node.js development.
- Familiarity with monitoring, logging, and alerting tools.
Preferred Background:
- Experience working in product-based companies, eLearning/edtech platforms, or emerging technology startups.
- A proactive and collaborative mindset, with strong problem-solving skills.

Why Join Us?
- Be a part of a forward-thinking team driving innovation in the tech space.
- Work in a fast-paced environment with ample opportunities to grow and lead.
- Enjoy a flexible and inclusive culture where your contributions are valued.

📩 Apply Now if you're a DevOps enthusiast looking to join a company that values ownership, agility, and technical excellence!
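The "regex for log analysis" skill called out above can be sketched as follows; the nginx-style access-log format is an assumption for illustration:

```python
import re

# Hypothetical combined-log-style line format, assumed for illustration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    """Return the named fields of one log line, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

lines = [
    '10.0.0.1 - - [23/Jun/2025:10:00:00 +0000] "GET /health HTTP/1.1" 200 12',
    '10.0.0.2 - - [23/Jun/2025:10:00:01 +0000] "POST /api/v1/items HTTP/1.1" 500 87',
    'malformed line',
]
parsed = [p for p in (parse_line(l) for l in lines) if p]
errors = [p for p in parsed if p["status"].startswith("5")]
print(len(parsed), len(errors))  # 2 1
```

Named groups keep downstream filtering (e.g. counting 5xx responses, as here) readable compared with positional capture groups.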
Posted 18 hours ago
3.0 years
0 Lacs
Godhra, Gujarat, India
On-site
Backend Developer - Job Description
Location: Ahmedabad, India (flexible for the right talent)
Application form: https://forms.gle/yH6JC45wDRfwRV5C8

About Tarrina Health
Tarrina Health is an early-stage startup working to bring health to Bharat. Our goal is to improve access to affordable, evidence-based, high-quality health products in small towns and rural India. We do this by creating a modern, digitized distribution channel for small-town and rural pharmacies, addressing critical issues: under 20% of 925M rural Indians have reliable medicine access, and 72% of spurious drugs are found in rural pharmacies.

Our Work Culture
Our purpose-driven culture champions healthcare equity. We value:
- A health-first approach: no compromises on quality
- Integrity: we do what we promise
- Cross-functional collaboration
- Community-informed solutions
- Customer centricity
- Continuous learning and adaptation

What You Will Do
As a Backend Developer, you'll design, build, and maintain core systems for our healthcare supply chain platform. You'll create scalable, reliable, and secure backend services, directly impacting healthcare access for millions in rural India.

Key Responsibilities
- Develop and maintain scalable backend services (Java, Python).
- Design and implement APIs for healthcare supply chain stakeholders.
- Architect secure, scalable, highly available, and performant backend systems.
- Design, test, and deploy A/B experiments.
- Collaborate with product, design, and engineering on new features.
- Deploy and support backend services in cloud environments.
- Utilize basic frontend skills (HTML, HTMX, JSX, TS) for UI integration and debugging.
- Implement and optimize search (Elasticsearch/Solr).
- Conduct code reviews, ensuring high code quality and security.
- Develop and execute comprehensive tests (unit, integration, E2E).
- Perform root-cause analysis (RCA) for defects and communicate findings.
- Implement logging, monitoring, and alerting for backend services.
- Actively participate in agile ceremonies.
- Work independently and take project ownership, with strong problem-solving skills and attention to detail.

Required Qualifications, Capabilities, and Skills
- 3+ years of software development experience (backend or full-stack).
- Proficient in backend development (Java, Python).
- Experience with frameworks like Spring Boot, Django, Flask, or FastAPI.
- Expertise in RESTful API and microservice design (versioning, security, OpenAPI/Swagger).
- Skilled in designing scalable, performant, and reliable systems (caching, load balancing, fault tolerance, DB optimization).
- Proficient with SQL and NoSQL databases (design, schema, optimization, migration).
- Strong grasp of web security (OWASP Top 10) and auth mechanisms (OAuth 2.0, JWT).
- Experience with cloud platforms (AWS, GCP).
- Familiar with containerization (Docker, Kubernetes).
- Familiar with CI/CD and DevOps practices.
- Experience with observability tools (logging, monitoring, alerting; e.g., ELK Stack).
- Foundational knowledge of statistics and data science concepts.
- Experience working in an Agile/Scrum development environment.
- Proven experience with Domain-Driven Design (DDD) in microservices architectures.
- Familiarity with event-driven architectures, CQRS, and Saga patterns for complex workflows.
- Experience with infrastructure-as-code tools (Terraform, CloudFormation) and automated DB migrations (Flyway, Liquibase).

Preferred Qualifications, Capabilities, and Skills
- Experience in healthcare IT or supply chain management systems.
- Experience with real-time data processing and event-driven architectures.
- Familiarity with GraphQL.
- Experience developing applications for emerging markets or resource-constrained environments.
- Knowledge of geospatial technologies and location-based services.
- Experience with message queuing systems (Kafka, RabbitMQ, etc.).
- Experience with high-volume asynchronous data I/O pipelines in a microservice architecture.
- Contributions to open-source projects.
- Foundational knowledge of machine learning.
- Experience with data analytics and visualization tools.

What You'll Build
- Scalable backend infrastructure for our healthcare supply chain platform.
- Robust APIs for seamless integration between manufacturers, distributors, and pharmacies.
- Inventory management and logistics tracking systems for rural environments.
- Secure data management systems for PII and healthcare information.
- Real-time analytics and reporting tools for supply chain visibility.
- Optimization engines for delivery routing in challenging geographies.
- Automated quality control and verification systems.
- Data sync mechanisms for areas with intermittent connectivity.
- AI-driven demand forecasting to prevent stockouts.

Benefits and Perks
- Medical coverage
- Competitive salary
- Vacation and leaves of absence (flexible and special)
- Developmental opportunities through education and professional workshops
- Employee referral program
- Premium access to development tools and services
- Opportunity to make a meaningful impact on healthcare access in rural India
- Work on challenging technical problems in a purpose-driven organization
- Growth opportunities in a rapidly expanding organization

Equal Opportunity Statement
Tarrina Health is an equal opportunity employer.

Application Process:
- Fill out the Google Form below
- Complete a take-home technical test
- Technical interview with our Tech Lead
- Behavioural interview with our CEO and Tech Advisor
- Reference check
- Offer

CLICK HERE TO FILL OUT THE APPLICATION FORM: https://forms.gle/yH6JC45wDRfwRV5C8
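The data-sync-over-intermittent-connectivity work described above can be sketched as a last-write-wins merge of offline edits. The record shape and timestamps below are invented for illustration, not Tarrina Health's actual schema:

```python
def merge_records(server, incoming):
    """Merge offline edits into the server store; the newer updated_at wins."""
    merged = dict(server)
    for rec_id, rec in incoming.items():
        current = merged.get(rec_id)
        if current is None or rec["updated_at"] > current["updated_at"]:
            merged[rec_id] = rec
    return merged

server = {
    "rx-1": {"stock": 40, "updated_at": 100},
    "rx-2": {"stock": 12, "updated_at": 180},
}
offline_edits = {
    "rx-1": {"stock": 35, "updated_at": 150},  # newer -> replaces server copy
    "rx-2": {"stock": 9,  "updated_at": 160},  # older -> server copy kept
    "rx-3": {"stock": 7,  "updated_at": 155},  # new record -> added
}
merged = merge_records(server, offline_edits)
print(merged["rx-1"]["stock"], merged["rx-2"]["stock"], "rx-3" in merged)  # 35 12 True
```

Last-write-wins is the simplest policy; pharmacies editing the same record concurrently may need field-level merging or conflict queues instead.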
Posted 18 hours ago
3.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting.
Must-Have Skills: Cloud Automation DevOps
Good-to-Have Skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary:
As an Infra Tech Support Practitioner, you will engage in the ongoing technical support and maintenance of production and development systems and software products. Your typical day will involve addressing various technical issues, providing solutions for configured services across multiple platforms, and ensuring the smooth operation of hardware and software systems. You will work both remotely and onsite, collaborating with team members to troubleshoot and resolve issues effectively, while adhering to established operating models and processes.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the implementation of technology at the operating-system level across all server and network areas.
- Engage in basic and intermediate troubleshooting for hardware and software support.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Cloud Automation DevOps.
- Strong understanding of cloud infrastructure and services.
- Experience with automation tools and scripting languages.
- Familiarity with monitoring and logging tools for system performance.
- Knowledge of network protocols and security best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Cloud Automation DevOps.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.
Posted 19 hours ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
About The Role
Grade Level (for internal use): 11
Job Title: Senior DevOps Engineer
Location: Ahmedabad, India

About Us
ChartIQ, a division of S&P Global, provides a powerful JavaScript library that enables sophisticated data visualization and charting solutions for financial market participants. Our library is designed to run seamlessly in any browser or browser-like environment, such as a web-view, and empowers users to interpret and interact with complex financial datasets. By transforming raw data into compelling visual narratives, ChartIQ helps traders, analysts, and decision-makers uncover insights, identify key relationships, and spot critical opportunities in real time.

Role Overview
As a DevOps Engineer at ChartIQ, you'll play a critical role not only in building, maintaining, and scaling the infrastructure that supports our development and QA needs, but also in driving new, exciting cloud-based solutions that expand and enhance our platform's capabilities to meet the growing needs of our financial services customers. Your work will ensure that the platforms used by our team remain available, responsive, and high-performing. You will also contribute to light JavaScript programming, assist with QA testing, and troubleshoot production issues. Working in a fast-paced, collaborative environment, you'll wear multiple hats and support the infrastructure for a wide range of development teams.

This position is based in Ahmedabad, India, and requires working hours that overlap with teams in the US; the preferred working hours extend until 12 noon EST to ensure effective collaboration across time zones.

Key Responsibilities
- Design, implement, and manage infrastructure using Terraform or other Infrastructure-as-Code (IaC) tools.
- Leverage AWS or equivalent cloud platforms to build and maintain scalable, high-performance infrastructure that supports data-heavy applications and JavaScript-based visualizations.
- Understand component-based architecture and cloud-native applications.
- Implement and maintain site-reliability practices, including monitoring and alerting using tools like Datadog, ensuring the platform's availability and responsiveness across all environments.
- Design and deploy high-availability architecture to support continuous access to alerting engines.
- Support and maintain configuration-management systems like ServiceNow CMDB.
- Manage and optimize CI/CD workflows using GitHub Actions or similar automation tools.
- Work with OIDC (OpenID Connect) integrations across Microsoft, AWS, GitHub, and Okta to ensure secure access and authentication.
- Contribute to QA testing (both manual and automated) to ensure high-quality releases and stable operation of our data visualization tools and alerting systems.
- Participate in light JavaScript programming tasks, including HTML and CSS fixes for our charting library.
- Assist with deploying and maintaining mobile applications on the Apple App Store and Google Play Store.
- Troubleshoot and manage network issues, ensuring smooth data flow and secure access to all necessary environments.
- Collaborate with developers and other engineers to troubleshoot and optimize production issues.
- Help with the deployment pipeline, working with various teams to ensure smooth software releases and updates for our library and related services.

Required Qualifications
- Proficiency with Terraform or other Infrastructure-as-Code tools.
- Experience with AWS or other cloud services (Azure, Google Cloud, etc.).
- Solid understanding of component-based architecture and cloud-native applications.
- Experience with site-reliability tools like Datadog for monitoring and alerting.
- Experience designing and deploying high-availability architecture for web-based applications.
- Familiarity with ServiceNow CMDB and other configuration-management tools.
- Experience with GitHub Actions or other CI/CD platforms to manage automation pipelines.
- Strong understanding and practical experience with OIDC integrations across platforms like Microsoft, AWS, GitHub, and Okta.
- Solid QA testing experience, including manual and automated testing techniques (beginner/intermediate).
- JavaScript, HTML, and CSS skills to assist with troubleshooting and web app development.
- Experience deploying and maintaining mobile apps on the Apple App Store and Google Play Store that utilize web-based charting libraries.
- Basic network-management skills, including troubleshooting and ensuring smooth network operations for data-heavy applications.
- Knowledge of package-publishing tools such as Maven, npm (Node), and CocoaPods to ensure seamless dependency management and distribution across platforms.

Additional Skills and Traits for Success in a Startup-Like Environment
- Ability to wear multiple hats: adapt to the ever-changing needs of a startup environment within a global organization.
- Self-starter with a proactive attitude, able to work independently and manage your time effectively.
- Strong communication skills to work with cross-functional teams, including engineering, QA, and product teams.
- Ability to work in a fast-paced, high-energy environment.
- Familiarity with agile methodologies and working in small teams with a flexible approach to meeting deadlines.
- Basic troubleshooting skills to resolve infrastructure or code-related issues quickly.
- Knowledge of containerization tools such as Docker and Kubernetes is a plus.
- Understanding of DevSecOps and basic security practices is a plus.

Preferred Qualifications
- Experience with CI/CD pipeline management, automation, and deployment strategies.
- Familiarity with serverless architectures and AWS Lambda.
- Experience with monitoring and logging frameworks, such as Prometheus, Grafana, or similar.
- Experience with Git, version-control workflows, and source-code management.
- Security-focused mindset, experience with vulnerability scanning, and managing secure application environments.

What We Offer
- Competitive salary and benefits package.
- Flexible work schedule with remote work options.
- The opportunity to work in a collaborative, creative, and innovative environment.
- Hands-on experience with cutting-edge technologies and tools that power sophisticated financial data visualizations and charting solutions.
- Professional growth and career advancement opportunities.
- A dynamic startup culture within a global organization, where your contributions directly impact the product and the financial industry.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.

Our Benefits Include
- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing-education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 312974
Posted On: 2025-06-23
Location: Ahmedabad, Gujarat, India
Posted 22 hours ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
AEM DevOps Engineer - Job Description
Airline domain experience required.

Responsibilities
- AEM Infrastructure Management: Design, deploy, and maintain secure, scalable, and highly available AEM environments (author, publish, and dispatcher) across development, staging, and production systems.
- Automation & CI/CD: Develop and manage automated CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Manager to streamline AEM deployments and shorten release cycles.
- Monitoring & Performance Optimization: Implement robust monitoring, logging, and alerting systems (e.g., New Relic) to proactively identify performance issues, ensure high availability, and optimize system performance.
- Cloud & Containerization: Deploy and manage AEM and supporting applications on cloud platforms (AWS, Azure) using containerization and orchestration tools such as Docker and Kubernetes.
- Security & Compliance: Apply security best practices, including role-based access control, secure dispatcher configuration, SSL, and vulnerability assessments, to maintain compliance with organizational and industry standards.
- Cross-functional Collaboration: Collaborate with AEM developers, system administrators, QA, and business teams to troubleshoot infrastructure issues, optimize configurations, and ensure seamless integration with enterprise systems.
- Backup & Disaster Recovery: Design and implement comprehensive backup, restore, and disaster-recovery strategies for AEM environments to ensure business continuity.
- DevOps & Agile Enablement: Promote a DevOps culture by supporting automation, agile workflows, continuous improvement, and efficient collaboration between development and operations.

Requirements
- 3+ years of hands-on experience as a DevOps Engineer or in a similar role managing Adobe Experience Manager environments.
- Strong understanding of AEM architecture (6.5+), including author/publish topology, dispatcher setup, performance tuning, and upgrade/patching strategies.
- Experience with cloud platforms (AWS, Azure), including deployment, scaling, and automation.
- Hands-on experience with Docker and Kubernetes for container orchestration.
- Expertise in CI/CD pipeline management using Jenkins, GitLab, or AEM Cloud Manager.
- Proficiency with scripting languages such as Bash or PowerShell.
- Familiarity with monitoring tools such as Nagios or New Relic.
- Strong understanding of networking, load balancing, and security in cloud and hybrid environments.
- Comfortable working in Agile teams and supporting DevOps practices.

Preferred Qualifications
- Experience with AEM as a Cloud Service and cloud-native deployment models.
- Familiarity with Adobe Marketing Cloud tools (Analytics, Target, Campaign).
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, Azure DevOps Engineer Expert).
- Experience implementing high-availability and disaster-recovery strategies for AEM.
- Advanced knowledge of AEM Dispatcher configuration and performance tuning.
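The monitoring and CI/CD responsibilities above typically include a post-deployment smoke check across the AEM tiers. A minimal Python sketch of the gating logic; the `/health/ready` paths are illustrative assumptions, not Adobe-documented endpoints:

```python
# Expected (tier, path, status) triples a pipeline might verify after a deploy.
CHECKS = [
    ("author", "/health/ready", 200),
    ("publish", "/health/ready", 200),
    ("dispatcher", "/", 200),
]

def evaluate(results):
    """results maps (tier, path) -> observed HTTP status.
    Returns the list of failing checks; an empty list means healthy."""
    return [
        (tier, path, results.get((tier, path)))
        for tier, path, expected in CHECKS
        if results.get((tier, path)) != expected
    ]

observed = {
    ("author", "/health/ready"): 200,
    ("publish", "/health/ready"): 503,  # publish tier still warming up
    ("dispatcher", "/"): 200,
}
print(evaluate(observed))  # [('publish', '/health/ready', 503)]
```

A real pipeline would fetch the statuses over HTTP and fail the release stage whenever `evaluate` returns a non-empty list.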
Posted 23 hours ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
- Design, implement, and maintain scalable cloud infrastructure, primarily on AWS, with some exposure to Azure.
- Manage and optimize CI/CD pipelines using Jenkins and Git-based version control systems (GitHub/GitLab).
- Build and maintain containerized applications using Docker, Kubernetes (including AWS EKS), and Helm.
- Automate infrastructure provisioning and configuration using Terraform and Ansible.
- Implement GitOps-style deployment processes using Argo CD and similar tools.
- Ensure observability through monitoring and logging with Prometheus, Grafana, Datadog, Splunk, and Kibana.
- Develop automation scripts using Python, Shell, and Go.
- Implement and enforce security best practices in CI/CD pipelines and container-orchestration environments using tools like Trivy, OWASP, SonarQube, Aqua Security, Cosign, and HashiCorp Vault.
- Support blue/green deployments and other advanced deployment strategies.

Required Qualifications:
- 6-8 years of professional experience in a DevOps, SRE, or related role.
- Strong hands-on experience with AWS (EC2, S3, IAM, EKS, RDS, Lambda, Secrets Manager).
- Solid experience with CI/CD tools (Jenkins, GitHub/GitLab, Maven).
- Proficiency with containerization and orchestration tools: Docker, Kubernetes, Helm.
- Experience with Infrastructure-as-Code tools: Terraform and Ansible.
- Proficiency in scripting languages: Python, Shell, and Go.
- Strong understanding of observability, monitoring, and logging frameworks.
- Familiarity with security practices and tools integrated into DevOps workflows.
- Excellent problem-solving and troubleshooting skills.

Certifications (good to have):
- AWS Certified DevOps Engineer
- Certified Kubernetes Administrator (CKA)
- Azure Administrator/Developer certifications
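The blue/green deployment support mentioned above hinges on a health-gated cutover decision. A minimal Python sketch; the color names and flat health-check lists are illustrative (real setups flip an ALB target group or a Kubernetes service selector):

```python
def choose_active(current, candidate, health):
    """Promote the candidate color only if every one of its checks passed.
    health maps a color name to a list of boolean check results."""
    checks = health.get(candidate, [])
    if checks and all(checks):
        return candidate  # cut traffic over to the new color
    return current        # keep serving from the old color

health = {
    "blue":  [True, True, True],   # currently live
    "green": [True, False, True],  # new release failed one probe
}
print(choose_active("blue", "green", health))  # blue

health["green"] = [True, True, True]
print(choose_active("blue", "green", health))  # green
```

Failing closed (an unknown or empty check list keeps the old color live) is the important property here; the rollback path is simply never promoting.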
Posted 1 day ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Who are we?
Securin is a leading product-based company, backed by services, in the cybersecurity domain, helping hundreds of customers worldwide gain resilience against emerging threats. Our products are powered by accurate vulnerability intelligence, human expertise, and automation, enabling enterprises to make crucial security decisions to manage their expanding attack surfaces. Securin is built on a foundation of in-depth penetration testing and vulnerability research to help organizations continuously improve their security posture. Our team of intelligence experts is one of the best in the industry, and our comprehensive portfolio of tech-enabled solutions includes Attack Surface Management (ASM), Vulnerability Intelligence (VI), Penetration Testing, and Vulnerability Management. These solutions allow our customers to gain complete visibility of their attack surfaces, stay informed of the latest security threats and trends, and proactively address risks.

What do we promise?
We are a highly effective tech-enabled cybersecurity solutions provider and promise continual security-posture improvement, enhanced attack-surface visibility, and proactive, prioritised remediation for every one of our client businesses.

What do we deliver?
Securin helps organizations identify and remediate the most dangerous exposures, vulnerabilities, and risks in their environment. We deliver predictive and definitive intelligence and facilitate proactive remediation to help organizations stay a step ahead of attackers. By utilising our cybersecurity solutions, our clients can have a proactive and holistic view of their security posture and protect their assets from even the most advanced and dynamic attacks. Securin has been recognized by national and international organizations for its role in accelerating innovation in offensive and proactive security.
Our combination of domain expertise, cutting-edge technology, and advanced tech-enabled cybersecurity solutions has made Securin a leader in the industry.

Job Location: IIT Madras Research Park, A block, Third floor, 32, Tharamani, Chennai, Tamil Nadu 600113
Work Mode: Hybrid (work from the Chennai office 2 days a week)
Compensation: Up to 35 LPA

Responsibilities:
● Design & Development: Architect, implement, and maintain Java microservices processing high-volume data streams.
● Pipeline Engineering: Build and optimize ingestion pipelines (Kafka, Flink, Beam, etc.) to ensure low-latency, high-throughput data flow.
● Secure Coding: Embed secure-coding standards (OWASP, SAST/DAST integration, threat modeling) into the SDLC; implement authentication, authorization, encryption, and audit logging.
● Performance at Scale: Identify and resolve performance bottlenecks (JVM tuning, GC optimization, resource profiling) in distributed environments.
● Reliability & Monitoring: Develop health checks, metrics, and alerts (Prometheus/Grafana), and instrument distributed tracing (OpenTelemetry).
● Collaboration: Work closely with product managers, data engineers, SREs, and security teams to plan features, review designs, and conduct security/code reviews.
● Continuous Improvement: Champion CI/CD best practices (GitOps, automated testing, blue/green deployments) and mentor peers in code quality and performance tuning.

Requirements
● Strong Java Expertise: 8+ years of hands-on experience with Java 11+; deep knowledge of concurrency, memory management, and JVM internals.
● Secure Coding Practices: Proven track record implementing OWASP Top Ten mitigations, performing threat modeling, and integrating SAST/DAST tools.
● Big Data & Streaming: Hands-on with Kafka (producers/consumers, schema registry) and Spark or Flink for stream/batch processing.
● System Design at Scale: Experience designing distributed systems (microservices, service mesh) with high availability and partition tolerance.
● DevOps & Automation: Skilled in containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines (Jenkins).
● Cloud Platforms: Production experience on AWS; familiarity with managed services (MSK, EMR, GKE, etc.).
● Testing & Observability: Expertise in unit/integration testing (JUnit, Mockito), performance testing (JMH), logging (ELK/EFK), and monitoring stacks.
● Collaboration & Communication: Effective communicator; able to articulate technical trade-offs and evangelize best practices across teams.

Qualifications:
● Bachelor's or Master's in Computer Science, Engineering, or a related field.
● 6-10 years of software development experience, with at least 3 years focused on data-intensive applications.
● Demonstrated contributions to production-critical systems serving thousands of TPS (transactions per second).
● Strong analytical and problem-solving skills; comfortable working in fast-paced, agile environments.

Nice to Have:
● Open-source contributions to streaming or security projects.
● Experience with infrastructure as code (Terraform, CloudFormation).

Why should we connect?
We are a bunch of passionate cybersecurity professionals building a culture of security. Today, cybersecurity is no longer a luxury but a necessity, with a global market value of $150 billion. At Securin, we live by a people-first approach. We firmly believe that our employees should enjoy what they do. We provide our employees a hybrid work environment with competitive, best-in-industry pay, along with an environment to learn, thrive, and grow. Our hybrid working environment allows employees to work from the comfort of their homes or from the office if they choose. For the right candidate, this will feel like your second home.
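The stream/batch processing requirement above (Kafka with Spark or Flink) centers on windowed aggregation. Here is a pure-Python sketch of a tumbling-window count with invented event fields; a real pipeline would express the same logic as a Flink or Beam window operator:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per (window_start, key) over fixed, non-overlapping windows.
    events is an iterable of (timestamp_secs, key) pairs."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented event stream: (timestamp, feed name).
events = [
    (3, "cve-feed"), (7, "cve-feed"), (9, "exploit-feed"),
    (12, "cve-feed"), (19, "exploit-feed"), (21, "cve-feed"),
]
result = tumbling_window_counts(events, window_secs=10)
print(result[(0, "cve-feed")], result[(10, "cve-feed")], result[(20, "cve-feed")])  # 2 1 1
```

Tumbling windows are the simplest case; production pipelines also handle out-of-order events via watermarks, which this sketch omits.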
If you are passionate about cybersecurity just as we are, we would love to connect and share ideas.
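The pipeline-engineering and performance responsibilities above center on per-window aggregation of high-volume event streams. As a hedged illustration (plain Java with no Kafka/Flink dependency; the class and method names are hypothetical, not from any Securin codebase), a per-key tumbling-window counter of the kind such a pipeline computes might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a per-key tumbling-window event counter,
// the kind of aggregation an ingestion pipeline (e.g. Kafka -> Flink)
// performs before egress. Window assignment is derived from the event
// timestamp, so the logic is deterministic and unit-testable without
// a running broker.
class WindowCounter {
    private final long windowMillis;                    // tumbling window size
    private final Map<String, Long> counts = new HashMap<>();

    WindowCounter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Assigns the event to its tumbling window and returns the updated
    // count for that (key, window) pair.
    long record(String key, long eventTimeMillis) {
        long windowStart = eventTimeMillis - (eventTimeMillis % windowMillis);
        return counts.merge(key + "@" + windowStart, 1L, Long::sum);
    }
}
```

A production pipeline would add watermarking and state eviction (e.g. windowed-state TTL in Flink) so that late events and unbounded key cardinality do not exhaust memory.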
Posted 1 day ago
12.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Product Owner is responsible for the definition and delivery of a part of a product from a customer and market requirement point of view - regarding content, prioritization, quality, and customer excitement for a given cost and time frame. Responsible for that part of the product through the product life cycle, from definition to phase-out. Provides the associated development teams with priorities and expertise regarding the product and ensures completeness and consistency of the derived requirements within that part of the product. Works with one or several development teams.
Internal Interactions: Product Line Manager, Product Manager, Project Manager, Quality Manager, Development Team (e.g. Scrum Master, System Analyst, Architect, Developers, Testers), Usability Engineer, other stakeholders (e.g. Business Units), SCM (Enabling) & Customer Service teams, Technical Writers, etc.
External Interactions: Customers - Hospital Administrators, IT Administrators, etc.
What are my tasks?
Elicit and collect stakeholder requests
Define and prioritize Market Requirements
Analyze Market Requirements (e.g. initiate and manage concepts for complex Market Requirements)
Derive, prioritize, and communicate Software Requirements
Create Software Requirement Specifications (i.e. the problem part)
Coach and support the development team on questions, and resolve conflicts regarding features and requirements
Analyze and decide on complaints and bugs/charms
Achieve commitments with and motivate development teams; assist development teams in attaining a maximum effective, sustainable pace for development
Ensure quality by evaluating the results of iterations and either approving/accepting or rejecting results based on DONEness criteria
Support effort estimations of development teams
Analyze change request entries and prioritize them with other product backlog items
Coordinate cross-feature-area development with peers to facilitate prioritized product development
Deliver input for project management
Support roll-out of the system: presentations, workshops, and training for sales and engineering
What do I need to know to qualify for this job?
Qualification: A Bachelor's / Master's degree in Engineering and/or MCA or equivalent.
Work Experience: 12 to 15 years.
Desired Knowledge & Experience:
Knowledge of medical product infrastructure, deployment technologies, and non-medical software development.
Know-how of containers, Kubernetes, Docker, and other containerization technologies.
Strong experience in virtual appliances, centralized logging, databases, and cloud hosting of applications.
Exposure to CI/CD pipelines.
Healthcare market - product know-how and customer understanding.
Knowledge of clinical workflows and Healthcare IT, especially in the area of Radiology; knowledge of healthcare industry standards like DICOM and IHE is desirable.
Basic understanding of legal regulations and standards applicable to medical devices, affecting safety aspects (i.e. FDA 21 CFR 820 QSR, MDR, ISO 13485).
Exposure to Agile methodology.
Good programming skills; should have spent most of their career in software programming roles.
Thorough experience in Requirements Engineering, Usability Engineering, Cyber Security, and feature definition activities.
Product Lifecycle Management & software development cycle experience.
What experience do I need to have?
Professional: Several years of experience in the medical device / healthcare industry (e.g. as a Product Owner, System Engineer, System Analyst, Technology Lead, Architect, etc.). Several years of experience in IT product or solution business and service topics.
Project / Process: Several years of experience in requirements engineering and SW development. Ideally, IT integration experience. Experience in agile development projects, preferably in a Product Owner role.
Leadership: Experience with managing internationally staffed teams, managing and balancing different stakeholder expectations, and managing product definitions. Ideally, several years' experience in a technical leadership role, communicating direction and coaching others.
Intercultural: Experience with international/intercultural teams; conducting workshops with international development partners and customers.
What else do I need to be strong at?
Self-driven; takes initiative
Decision-making skills
Result orientation
Self-motivated; provides motivation and inspiration to the team
Strong analytical and problem-solving skills
Strong team player and networking skills
Strong written and oral communication skills
Strong interpersonal skills
Strong customer focus
Posted 1 day ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As an Engineering Manager, you will lead a high-impact engineering team developing and scaling Rakuten SixthSense's Data Observability platform. This role requires a strong technical background in high-scale systems, full-stack cloud engineering, and DevOps.
Key Responsibilities
Lead and mentor a team of 5-8 engineers, driving innovation.
Define and execute the technical vision for full-stack observability, ensuring high availability and performance.
Architect and implement scalable solutions for real-time monitoring, data quality, and operational efficiency.
Champion DevOps best practices, including CI/CD pipelines, automation, logging, and Kubernetes-based deployments.
Ensure compliance with data governance standards, security, and scalability in hybrid cloud environments (AWS, GCP, Azure).
Oversee the end-to-end product lifecycle, from concept to deployment, ensuring seamless integration with enterprise systems.
Collaborate with product, operations, and business stakeholders, aligning engineering efforts with strategic objectives.
Drive Agile execution, ensuring rapid iterations, high-quality deliverables, and continuous improvement.
What You Bring
8-12 years of software development experience, with 4+ years in technical leadership roles.
Expertise in Java, Spring, Hibernate, and microservices architecture.
Experience with any cloud platform (AWS, GCP, Azure) and hybrid cloud strategies.
Proficiency in containerization, Kubernetes, CI/CD pipelines, and infrastructure as code.
Full-stack experience building RESTful APIs, real-time monitoring, and automation frameworks.
Proven track record of leading large-scale projects and handling post-production challenges.
Hands-on DevOps mindset, ensuring automation, quality, and reliability in data observability platforms.
Experience using observability platforms.
Excellent problem-solving, debugging, and system architecture skills.
Nice to Have
Previous experience in AIOps or real-time analytics platforms.
Exposure to AI-driven automation and anomaly detection in data engineering.
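The real-time-monitoring responsibility above typically reduces to exposing latency metrics in Prometheus' cumulative-histogram form. As a hedged sketch in plain Java (a production service would use the official Prometheus Java client rather than hand-rolling this, and the bucket bounds here are purely illustrative):

```java
// Illustrative sketch of Prometheus-style cumulative histogram
// bucketing for request latencies. Each bucket counts observations
// less than or equal to its upper bound, and the implicit +Inf bucket
// equals the total observation count - the same shape the /metrics
// exposition format reports.
class LatencyHistogram {
    private final double[] upperBounds; // sorted bucket bounds, in seconds
    private final long[] counts;        // cumulative count per bucket
    private long total;                 // == the implicit +Inf bucket
    private double sum;                 // running sum, for averages

    LatencyHistogram(double[] upperBounds) {
        this.upperBounds = upperBounds;
        this.counts = new long[upperBounds.length];
    }

    void observe(double seconds) {
        total++;
        sum += seconds;
        for (int i = 0; i < upperBounds.length; i++) {
            if (seconds <= upperBounds[i]) {
                counts[i]++; // cumulative: every bound >= value is bumped
            }
        }
    }

    long bucketCount(int i) { return counts[i]; }
    long totalCount()       { return total; }
    double sumSeconds()     { return sum; }
}
```

Histograms of this shape let an alerting rule derive approximate quantiles server-side (e.g. PromQL's histogram_quantile), which is why cumulative buckets are preferred over raw per-bucket counts.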
Posted 1 day ago
The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.
The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.
In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.
As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!