
4344 Logging Jobs - Page 5

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

9.0 years

2 Lacs

Thiruvananthapuram

On-site

Source: Glassdoor

9 - 12 Years | 1 Opening | Trivandrum

Role description

Job Title: Azure Infrastructure Architect
Experience: 7+ Years
Industry: Information Technology / Cloud Infrastructure / Consulting

Job Summary: We are looking for a skilled and experienced Azure Infrastructure Architect to design, implement, and manage scalable, secure, and resilient cloud infrastructure solutions on Microsoft Azure. The ideal candidate will have a strong background in cloud architecture, infrastructure automation, and enterprise IT systems, with a focus on delivering high-availability and cost-effective solutions.

Key Responsibilities:
- Design and implement Azure-based infrastructure solutions aligned with business and technical requirements.
- Lead cloud migration and modernization initiatives from on-premises to Azure.
- Define and enforce best practices for cloud security, networking, identity, and governance.
- Develop Infrastructure as Code (IaC) using tools like ARM templates, Bicep, or Terraform.
- Collaborate with application architects, DevOps teams, and security teams to ensure seamless integration.
- Monitor and optimize cloud infrastructure for performance, scalability, and cost-efficiency.
- Provide technical leadership and mentorship to junior engineers and support teams.
- Stay up to date with the latest Azure services and industry trends.

Required Skills & Qualifications:
- 7+ years of experience in IT infrastructure, with at least 3 years in Azure cloud architecture.
- Strong expertise in:
  - Azure IaaS and PaaS services (VMs, VNets, Azure AD, Load Balancers, App Services, etc.)
  - Azure networking (NSGs, VPN, ExpressRoute, Azure Firewall)
  - Identity and access management (Azure AD, RBAC, Conditional Access)
  - Infrastructure automation (ARM, Bicep, Terraform, PowerShell)
  - Monitoring and logging (Azure Monitor, Log Analytics, Application Insights)
- Experience with hybrid cloud environments and on-premises integration.
- Familiarity with DevOps tools and CI/CD pipelines (Azure DevOps, GitHub Actions).
- Excellent problem-solving and communication skills.

Skills: Azure, Microsoft Azure, Azure PaaS

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
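As a flavour of the Azure inventory and monitoring work this role describes, here is a minimal Python sketch using the Azure SDK (azure-identity and azure-mgmt-compute) to count virtual machines per region. It is purely illustrative, not UST's tooling; the subscription ID is a placeholder and the SDK packages are assumed to be installed.

```python
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; supply a real one in practice.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# Count VMs per Azure region as a quick inventory/cost-review input.
per_region = Counter()
for vm in compute.virtual_machines.list_all():
    per_region[vm.location] += 1

for region, count in sorted(per_region.items()):
    print(f"{region}: {count} VM(s)")
```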

Posted 1 day ago

Apply

1.0 years

3 - 3 Lacs

Guruvāyūr

On-site

Source: Glassdoor

Responsibilities:
- Trains, cross-trains, and retrains all front office personnel.
- Participates in the selection of front office personnel.
- Schedules the front office staff and supervises workload during shifts.
- Evaluates the job performance of each front office employee.
- Maintains working relationships and communicates with all departments.
- Maintains master key control.
- Verifies that accurate room status information is maintained and properly communicated.
- Resolves guest problems quickly, efficiently, and courteously.
- Updates group information; maintains, monitors, and prepares group requirements and relays information to appropriate personnel.
- Reviews and completes the credit limit report.
- Works within the allocated budget for the front office.
- Receives information from the previous shift manager and passes on pertinent details to the incoming manager.
- Checks cashiers in and out and verifies banks and deposits at the end of each shift.
- Enforces all cash handling, check-cashing, and credit policies.
- Conducts regularly scheduled meetings of front office personnel.
- Wears the proper uniform at all times and requires all front office employees to do the same.
- Upholds the hotel's commitment to hospitality.
- Prepares performance reports related to the front office.
- Maximizes room revenue and occupancy by reviewing status daily; analyses rate variance, monitors credit reports and maintains close observation of the daily house count.
- Monitors the selling status of the house daily (i.e. flash report, allowances, etc.).
- Monitors high-balance guests and takes appropriate action.
- Ensures implementation of all hotel policies and house rules.
- Operates all aspects of the front office computer system, including software maintenance, report generation and analysis, and simple configuration changes.
- Prepares revenue and occupancy forecasting.
- Ensures logging and delivery of all messages, packages, and mail in a timely and professional manner.
- Ensures that employees are, at all times, attentive, friendly, helpful and courteous to all guests, managers and other employees.
- Monitors all VIPs, special guests and requests.
- Maintains the required par levels of all front office and stationery supplies.
- Reviews daily front office work and activity reports generated by Night Audit.
- Reviews the front office log book and guest feedback forms on a daily basis.
- Maintains an organised and comprehensive filing system with documentation of purchases, vouchering, schedules, forecasts, reports and tracking logs.
- Performs other duties as requested by management.

Qualifications:
- A minimum of 1 year of experience as a Front Office Manager in a reputed hotel.
- Strong customer satisfaction and customer service skills.
- Experience in managing front office operations.
- Excellent communication skills, both written and verbal.
- Leadership and team management skills.
- Ability to work in a dynamic, fast-paced environment.

Interested candidates may share their recent CV with a photo.

Job Types: Full-time, Permanent
Pay: ₹28,000.00 - ₹32,000.00 per month
Benefits: Food provided
Schedule: Day shift / Morning shift / Rotational shift
Supplemental Pay: Yearly bonus
Language: English (Required)
Work Location: In person

Posted 1 day ago

Apply

4.0 years

2 - 6 Lacs

Pune

On-site

Source: Glassdoor

Role: Senior Site Reliability Engineer
Location: Pune

Onit, Inc. is looking for a Site Reliability Engineer L2 to join our Core Infrastructure team. This role will help to ensure the reliability of a diverse set of applications across our AWS infrastructure. To be successful in this role you will need to collaborate and pair with team members, have strong technical skills, and a passion for technology. The individual we seek is skilled in observability, excellent at troubleshooting, and has strong problem-solving skills. You must be able to multi-task in a fast-paced environment and be a self-starter with the ability to work independently.

Responsibilities:
- Troubleshoot deployment failures and infrastructure issues across our full AWS infrastructure stack (EKS, RDS). This includes dev, test, and production environments.
- Create and maintain monitors for uptime and performance using Datadog, CloudWatch and other monitoring tools.
- Find ways to help reduce errors in systems and reduce noise in monitors and alerts.
- Work with others on user stories to improve system health; help create and prioritize work/stories.
- Participate in standups with the US and India teams.
- Help define runbooks and automation to solve production problems.
- Troubleshoot applications from a configuration and logging perspective.
- Assist with responding to and analyzing security events from security tooling.
- Help train others to take on SRE responsibilities.
- Assist with performance optimization by identifying performance bottlenecks and making recommendations on improvements.
- Verify systems are monitored, backed up, and following best practices, via audits and automation.
- Investigate how to take better advantage of the tools we use for monitoring and security.

Requirements:
- Bachelor's degree in computer science or equivalent experience is required.
- 4+ years of experience with the following:
  - AWS (EC2, EKS, ECS, S3, RDS, CloudWatch, CloudTrail, IAM, AWS CLI, etc.); experience with containers and EKS is a must.
  - Linux (CentOS, Amazon Linux, Ubuntu)
  - Git source code management (GitLab, GitHub)
  - Bash shell scripting or other scripting/programming experience
  - SaaS-based web application experience
  - Relational database performance and monitoring (Postgres RDS preferred)
  - Jenkins or similar CI/CD tooling
- A solid understanding of the components that make up production systems (memory, CPU, disk space, disk I/O, network I/O, etc.) is required.
- Strong experience with monitoring, alerting, and log aggregation tools: Datadog, AWS CloudWatch, PagerDuty, Statuspage.
- Ability to read and interpret application server logs, outputs, CloudTrail and other critical logging output.
- Excellent troubleshooting skills required.

Nice to Have Skills:
- Prior application coding and debugging experience (Ruby, Python, etc.)
- Terraform and/or CloudFormation
- Experience troubleshooting application integrations
- Other technologies: Cloudflare, Amazon GuardDuty, CrowdStrike

About Onit: Onit creates solutions that transform best practices into smarter workflows, better processes and operational efficiencies. We do this for legal, compliance, sales, IT, HR and finance departments. We specialize in enterprise legal management, matter management, spend management, contract management and legal holds. We also specialize in AI/ML (NLP) based models for our platform for contract reviews. Onit partners with businesses to build custom enterprise-wide software solutions that can be implemented quickly, are easy to use, and drive better decisions.
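To make the "monitors for uptime and performance" responsibility concrete, here is a small, illustrative Python sketch using boto3 to create a CloudWatch CPU alarm. The instance ID and SNS topic ARN are placeholders, and this is a generic example rather than Onit's actual monitoring setup.

```python
import boto3

# Hypothetical instance ID and SNS topic; replace with real resources.
INSTANCE_ID = "i-0123456789abcdef0"
ALARM_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:sre-alerts"

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALARM_TOPIC_ARN],
    AlarmDescription="Example uptime/performance monitor for an EC2 instance.",
)
```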

Posted 1 day ago

Apply

0 years

1 - 3 Lacs

Pune

Remote

Source: Glassdoor

Position: Presales Executive
Location: Pune

Key Responsibilities:

Pre-Sales Activities:
- Call clients and understand their business needs, pain points, and objectives.
- Present tailored product demos showcasing WappBiz's features and benefits.
- Collaborate with the sales team to convert marketing-qualified leads (MQLs) into sales-qualified leads (SQLs).
- Respond to product-related queries and RFPs with compelling proposals.
- Maintain CRM hygiene by logging interactions, call notes, and next steps.

Onboarding & Customer Success:
- Manage end-to-end onboarding for new clients, from account setup to training.
- Assist in API integrations or coordinate with tech support if needed.
- Provide product walkthroughs and documentation to ensure smooth adoption.
- Collect feedback to continuously improve the onboarding experience.
- Act as the voice of the customer to internal teams (tech, sales, support).

Cross-Functional Collaboration:
- Coordinate with the product, tech, and support teams to deliver on client expectations.
- Work closely with marketing to provide feedback on client pain points and feature requests.

Ideal Candidate Profile:
- Strong communication and presentation skills, both written and verbal.
- Tech-savvy with a basic understanding of APIs, CRM tools, and digital marketing workflows.
- Problem-solver mindset with a customer-first approach.
- Prior experience in WhatsApp Marketing, CRM, MarTech, or SaaS tools is a strong plus.

Tools You'll Use:
- CRM (e.g., Freshsales, HubSpot, Odoo)
- Zoom/Google Meet for demos
- WhatsApp Business API
- Google Workspace (Docs, Sheets, Slides)

Why Join Us?
- Be part of a fast-growing SaaS platform backed by a passionate tech team.
- Opportunity to grow into a Customer Success or Sales Enablement role.
- Work with exciting brands across industries like real estate, healthcare, and retail.
- Collaborative, learning-first culture with a focus on innovation.

Job Types: Full-time, Fresher
Pay: ₹15,000.00 - ₹25,000.00 per month
Benefits: Health insurance
Compensation Package: Performance bonus
Schedule: Day shift
Work Location: Remote

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Source: Glassdoor

Key Responsibilities:
- Deliver instructor-led training sessions on Linux system administration and DevOps tools and practices.
- Develop, update, and maintain comprehensive training materials, including presentations, labs, assessments, and projects.
- Cover topics including:
  - Filesystem management, permissions, user/group management
  - Shell scripting (Bash), process management, networking
  - Package management and system performance tuning
  - Containerization with Docker and orchestration with Kubernetes
  - Infrastructure as Code using Ansible and Terraform
  - Monitoring and logging tools like Prometheus, Grafana, ELK
  - Cloud deployment basics (AWS, Azure, or GCP)
- Evaluate learner performance through assignments, assessments, and project reviews.
- Stay updated with the latest in Linux, DevOps, and cloud technologies and integrate them into the curriculum.

Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field.
- Strong knowledge of Linux.
- Proficiency in cloud environments (AWS, Azure, or GCP).
- Strong communication, presentation, and interpersonal skills.
- Experience: a minimum of 6 months of experience; freshers may also apply.

Job Type: Full-time
Schedule: Day shift / Morning shift
Work Location: In person
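For a sense of the system-administration fundamentals in the curriculum above (process management, disk monitoring, scripting), here is a minimal lab-style Python sketch that reports root-filesystem usage and the top memory-consuming processes. It assumes a Linux host with the procps `ps` utility and is an editorial illustration, not part of any specific course material.

```python
import shutil
import subprocess

# Disk usage for the root filesystem.
usage = shutil.disk_usage("/")
print(f"/ : {usage.used / usage.total:.0%} used "
      f"({usage.free // 2**30} GiB free of {usage.total // 2**30} GiB)")

# Five largest processes by resident memory (Linux procps `ps`).
out = subprocess.run(
    ["ps", "-eo", "pid,comm,rss", "--sort=-rss"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

print("Top processes by RSS:")
for line in out[1:6]:          # skip the header row, keep the top five
    print("  " + line.strip())
```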

Posted 1 day ago

Apply

0 years

0 Lacs

Mumbai

On-site

Source: Glassdoor

Role Purpose

- Minimum 1-1.5 years of experience on the Avaya voice platform.
- 24x7 support: monitoring and providing break-fix services to DHL offices in Chennai.
- Configuration of extensions and Agent IDs.
- Knowledge of Avaya command lines to monitor trunk status, PRI lines and Avaya devices; should know how to take traces.
- Providing technical support to end users on Avaya and related queries.
- Should be well experienced in Avaya IP phone installation, configuration and troubleshooting, and familiar with Avaya phone configuration techniques.
- Should have hands-on experience with Avaya software such as 1x telecommuter or other softphone techniques.
- Should have knowledge of vectors, VDNs and announcements.
- Should know ACM (Avaya Communication Manager) and gateway functions in detail.
- Troubleshooting and logging calls for L2 incidents, following up with the service provider until closure of the incident.
- Hands-on experience with Avaya dialer techniques, Skype, AudioCodes, etc.
- Coordination with Malaysia and the purge team (DHL ITS team) for L2/L3 support.
- Monitoring of trunks and PRI lines (MTNL, BSNL, TATA, Reliance, Airtel, etc.).
- Attending to monitoring and breakdown calls for PRI/BRI leased lines at L1 level and following up with the service provider.
- Working with internal and external stakeholders to address incidents.
- Familiarity with service desk ticketing applications.
- Vendor management.
- Managing end-to-end user queries related to Avaya and related devices.
- Reasonable English.
- Ability to work in a window of 8 am to 8 pm (9-hour rotational shift).
- Ready to work a 6-day week.
- Readiness to work on week-offs sometimes, on a need basis.
- Avaya voice skills experience is a must.
- A client interview will take place, after which the position will be closed.
- Candidates who have worked in a call-centre environment will be the right fit for this requirement.
- Candidates from VIS Network and AGC will be preferred.

Summary of requirements:
1) 24x7 support availability; 8x8 support; 9 hours working, Monday to Saturday; rotational shifts except night.
2) Monitoring and providing break-fix services to DHL offices in Chennai.
3) Knowledge of Avaya command lines to monitor trunk status, PRI lines and Avaya devices.
4) Providing technical support to end users on Avaya and related queries.
5) Familiarity with Avaya phone configuration techniques and knowledge of vectors, VDNs and announcements.
6) L2 incident follow-up with the service provider until closure.
7) Hands-on experience with Ameyo dialer techniques, Skype, AudioCodes, etc.
8) Coordination with Malaysia and the purge team (DHL ITS team) for L2/L3 support.
9) Monitoring of trunks and PRI lines (MTNL, BSNL, TATA, Reliance, Airtel, etc.).
10) Working with internal and external stakeholders to address incidents.
11) Familiarity with service desk ticketing applications.
12) Vendor management.
13) Managing end-to-end user queries related to Avaya, audio, Ameyo dialer and related devices.
14) Good English.
15) Ability to work in a window of 9 am to 6 pm (9-hour rotational shift).
16) Ready to work a 6-day week.
17) Readiness to work on week-offs sometimes, on a need basis.

Posted 1 day ago

Apply

0 years

4 - 7 Lacs

Chennai

On-site

Source: Glassdoor

We are looking for a highly experienced and motivated Senior Cloud Security Engineer with deep expertise in Amazon Web Services (AWS) security. This role is critical in assessing, designing and implementing security best practices across our AWS environments. The ideal candidate will evaluate our current cloud security posture, identify gaps and execute remediation strategies aligned with industry standards and compliance requirements.

Key Responsibilities:
- Assess the AWS security posture by reviewing cloud architecture, configurations, IAM policies, networking and data protection mechanisms.
- Design and implement AWS security best practices, including least privilege, encryption, monitoring, logging and compliance controls.
- Collaborate with DevOps, Cloud Engineering and Application teams to embed security in CI/CD pipelines and infrastructure as code.
- Conduct threat modeling, risk assessments and vulnerability management for AWS-hosted applications and services.
- Lead efforts to harden AWS accounts and services such as EC2, S3, Lambda, RDS, VPC and IAM.
- Define and implement guardrails and automated policies using tools like AWS Config, Security Hub, Macie, GuardDuty and Control Tower.
- Respond to security incidents, investigate root causes and implement corrective actions in AWS environments.
- Document and maintain security standards, runbooks and reference architectures.
- Stay current with evolving threats, AWS services and industry regulations such as NIST, ISO 27001 and CIS Benchmarks.

Required Qualifications:
- Minimum five years of experience in cloud security with a focus on AWS.
- Deep knowledge of AWS security architecture, services and tools.
- Hands-on experience with IAM, KMS, CloudTrail, Config, WAF, Shield and VPC security.
- Familiarity with the AWS Well-Architected Framework and the CIS AWS Foundations Benchmark.
- Strong understanding of network security, encryption, logging and monitoring, and incident response in cloud environments.
- Experience with infrastructure as code (such as Terraform or CloudFormation) and integrating security controls.
- Knowledge of regulatory and compliance frameworks such as SOC 2, HIPAA, GDPR and FedRAMP.
- Strong scripting or programming skills (such as Python or Bash) for automating security tasks.

Preferred Qualifications:
- AWS Security Specialty certification or equivalent AWS certifications.
- Experience working in multi-account AWS organizations and governance setups.
- Exposure to other cloud platforms such as Azure or GCP.
- Background in DevSecOps or experience integrating security into CI/CD processes.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
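As an illustration of the kind of automated guardrail and audit work described above, the following Python sketch uses boto3 to flag S3 buckets that lack a public access block or a default encryption configuration. It is a generic example under stated assumptions (boto3 credentials available, the listed error codes), not Virtusa's tooling; note that newer buckets often already have account-level defaults applied.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Flag buckets with no bucket-level public access block configured.
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")

    # Flag buckets without a default server-side encryption configuration.
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption configured")
```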

Posted 1 day ago

Apply

3.0 years

3 - 5 Lacs

Chennai

On-site

Source: Glassdoor

Job Description: We are looking for a Databricks Setup Specialist to lead the deployment and configuration of Databricks on AWS, focusing on PrivateLink integration and Unity Catalog administration. This role involves collaboration with AWS DevOps and Security teams to create a secure and compliant data platform.

Key Responsibilities:
- Deploy Databricks workspaces on AWS with secure access via PrivateLink.
- Integrate Databricks with VPCs, IAM roles, and security groups alongside AWS DevOps engineers.
- Administer Unity Catalog, managing workspace binding, permissions, and audit logging.
- Implement best practices for identity federation and role-based access controls (RBAC).
- Troubleshoot configuration and connectivity issues to minimize downtime.
- Document setup procedures and provide training for users and administrators.

Required Qualifications:
- 3+ years of experience with Databricks on AWS.
- Expertise in AWS services like VPC, PrivateLink, IAM, and S3.
- Experience with Unity Catalog in enterprise settings.
- Familiarity with DevOps tools for infrastructure automation.
- Understanding of data governance and compliance requirements.
- Strong problem-solving and communication skills.

Preferred Qualifications:
- Experience with multi-workspace Databricks environments.
- Knowledge of Lakehouse architecture and data lineage tools.
- Relevant Databricks and AWS certifications.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
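For a feel of routine Unity Catalog administration, here is a minimal Python sketch that lists catalogs through the Databricks REST API. The workspace URL and token come from environment variables, and the endpoint path and response shape follow the publicly documented Unity Catalog API; treat both as assumptions to verify against your workspace version.

```python
import os

import requests

# Placeholders supplied via environment, e.g. "https://<workspace>.cloud.databricks.com"
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.1/unity-catalog/catalogs",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Print each catalog name with its (optional) comment.
for catalog in resp.json().get("catalogs", []):
    print(catalog.get("name"), "-", catalog.get("comment", ""))
```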

Posted 1 day ago

Apply

3.0 years

2 - 8 Lacs

Bengaluru

On-site

Source: Glassdoor

What We Do

At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For

Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment.

Our Impact

Site Reliability Engineering (SRE) is an engineering discipline that combines software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems. At Goldman Sachs, SRE is responsible for improving the availability and reliability of some of the firm’s most critical platform services, and ensures they meet the requirements of our internal and external users. We are looking for engineers who are motivated to collaborate with our businesses to build and run sustainable production systems, which can evolve and adapt to changes in our fast-paced, global business environment. The SRE team develops and maintains platforms and tools which help other engineering teams in Goldman Sachs to build and operate reliable and resilient systems. The platforms we offer range from central logging and tracing to monitoring and alerting, and we provide tools to drive adoption and improvements to capacity planning, operational readiness assessments, production incident postmortems, SLIs/SLOs, and deployment automation including canary releases. The products and services we provide to our internal customers are used by thousands of engineers every day. We believe that reliability is the most important feature of any system, and we are devoted to giving our engineers the platforms and tools they need to build and operate reliable products.

How You Will Fulfil Your Potential

As a developer in the SRE team, you will work with internal customers, product owners, and SREs to design, develop, and support the platforms and tools we provide to other engineering teams to enable them to run reliable large-scale production systems spanning cloud and on-prem datacenters.

Responsibilities
- Design, develop, and support SRE platforms and tools.
- Collaborate with other teams to onboard them onto SRE-owned platforms and tools and help them implement SRE best practices.
- Adhere to and drive SRE disciplines and processes across the global team.
- Create and support automation solutions and build out monitoring and alerting to improve the reliability of the platforms and tools we operate.

Basic Qualifications
- Degree in computer science or engineering with at least 3 years of industry experience.
- Proficiency in at least one major programming language, preferably Java or Go and JavaScript/TypeScript.
- Excellent programming skills including debugging, testing, and optimizing code.
- Strong problem-solving/analytical skills.
- Experience with algorithms and data structures as well as software and system design.
- Experience automating operational tasks.
- Comfortable with technical ownership, managing multiple stakeholders, and working as part of a global team.

Preferred Experience
- Experience with distributed systems design, maintenance, and troubleshooting.
- Experience with databases/data stores like PostgreSQL, MongoDB, and Elasticsearch.
- Proficiency in using Terraform for infrastructure deployment and management.
- Knowledge of cloud-native solutions in AWS or GCP.
- Systems experience in Linux and networking, especially in scaling for performance and debugging complex distributed systems.
- Experience with monitoring and alerting systems.

ABOUT GOLDMAN SACHS

At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

© The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex, national origin, age, veteran status, disability, or any other characteristic protected by applicable law.
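The posting above centres on SLIs, SLOs and deployment reliability; as a minimal, self-contained illustration of the underlying error-budget arithmetic (with made-up request counts and a 99.9% availability objective), here is a short Python sketch. It is an editorial example, not Goldman Sachs code.

```python
# Availability SLI/SLO arithmetic: how much of a 30-day error budget is spent?
SLO_TARGET = 0.999              # 99.9% availability objective (illustrative)
WINDOW_MINUTES = 30 * 24 * 60   # 30-day rolling window

total_requests = 12_400_000     # hypothetical counts from a metrics store
failed_requests = 9_300

sli = 1 - failed_requests / total_requests          # measured availability
error_budget = 1 - SLO_TARGET                       # allowed failure ratio
budget_spent = (failed_requests / total_requests) / error_budget

print(f"SLI (availability): {sli:.5f}")
print(f"Error budget spent: {budget_spent:.1%}")
print(f"Allowed downtime in window: {error_budget * WINDOW_MINUTES:.0f} minutes")
```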

Posted 1 day ago

Apply

7.5 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-Have Skills: Cloud Infrastructure, AWS Architecture
Good-to-Have Skills: NA
Minimum Experience Required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems. Your typical day will involve collaborating with various teams to troubleshoot and resolve complex problems, ensuring the seamless operation of essential applications and infrastructure. You will engage in proactive monitoring and maintenance, contributing to the overall efficiency and reliability of business processes while adapting to the evolving needs of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Develop and implement best practices for application support and incident management.

Professional & Technical Skills:
- Must-Have Skills: proficiency in Cloud Infrastructure and AWS Architecture.
- Strong understanding of cloud service models, including IaaS, PaaS, and SaaS.
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Familiarity with monitoring and logging tools to ensure system health and performance.
- Ability to troubleshoot and resolve issues in a cloud environment.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Cloud Infrastructure.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
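To make the monitoring-and-logging aspect of this support role concrete, here is a small illustrative Python sketch that pulls the last hour of ERROR events from a CloudWatch Logs group via boto3. The log group name is a hypothetical placeholder, and this is a generic example rather than the client's actual tooling.

```python
import time

import boto3

logs = boto3.client("logs")

# Hypothetical log group; replace with the application's real group name.
LOG_GROUP = "/app/payments-api"
ONE_HOUR_AGO_MS = int((time.time() - 3600) * 1000)

# Fetch ERROR-level events from the past hour for quick triage.
resp = logs.filter_log_events(
    logGroupName=LOG_GROUP,
    filterPattern="ERROR",
    startTime=ONE_HOUR_AGO_MS,
)

for event in resp.get("events", []):
    print(event["timestamp"], event["message"].rstrip())
```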

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate level) troubleshooting.
Must-Have Skills: Cloud Automation DevOps
Good-to-Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Infra Tech Support Practitioner, you will engage in the ongoing technical support and maintenance of production and development systems and software products. Your typical day will involve addressing various technical issues, providing solutions for configured services across multiple platforms, and ensuring the smooth operation of hardware and software systems. You will work both remotely and onsite, collaborating with team members to troubleshoot and resolve issues effectively, while adhering to established operating models and processes.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the implementation of technology at the operating-system level across all server and network areas.
- Engage in basic and intermediate-level troubleshooting for hardware and software support.

Professional & Technical Skills:
- Must-Have Skills: proficiency in Cloud Automation DevOps.
- Strong understanding of cloud infrastructure and services.
- Experience with automation tools and scripting languages.
- Familiarity with monitoring and logging tools for system performance.
- Knowledge of network protocols and security best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Cloud Automation DevOps.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description: TRIROPE TECHNOLOGIES specializes in delivering high-quality, scalable, and secure application development solutions across various industries including finance, healthcare, retail, education, and logistics. Our team collaborates to deliver mobile, web, and enterprise applications that solve real-world challenges and enhance customer engagement.

Role: Java Developer
Experience: 3-5 years (immediate joiners preferred)
Location: Chennai (on-site)

Key Responsibilities:
- Design, develop, test, and deploy scalable Java-based applications.
- Build and maintain RESTful APIs and microservices using Spring Boot.
- Write clean, maintainable, and efficient code following best practices.
- Optimize performance and troubleshoot application issues.
- Work closely with front-end developers, testers, and DevOps teams.
- Participate in code reviews, architectural discussions, and agile ceremonies.
- Write unit and integration tests to ensure application reliability.

Technical Skills:
- Strong proficiency in Java 8 or above.
- Good understanding of OOP principles, design patterns, and collections.
- Hands-on experience with Spring Boot, Spring MVC, and Spring Security.
- Familiarity with Hibernate or JPA.
- RESTful API design and development.
- JSON, XML, and Postman/Swagger/OpenAPI documentation.
- Experience with MySQL and MongoDB; ability to write complex SQL queries and optimize database performance.
- Git, Maven/Gradle, Jenkins (CI/CD).
- Familiarity with Docker and cloud platforms (AWS/GCP/Azure) is a plus.
- Logging (Log4j, SLF4J).

Soft Skills:
- Strong problem-solving and analytical thinking abilities.
- Excellent communication and teamwork skills.
- Ability to work independently and manage multiple priorities.
- Eagerness to learn new technologies and adapt to changing requirements.

Nice to Have:
- Experience with Dockerized deployments.
- Familiarity with RabbitMQ and Kafka.
- Working knowledge of TypeScript.
- Exposure to AWS services (Lambda, S3, EC2).

Interested candidates can apply directly on LinkedIn or send a resume to hr@trirope.com.

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Senior SRE (Engineering & Reliability)

Job Summary: We are seeking an experienced and dynamic Site Reliability Engineering (SRE) lead to oversee the reliability, scalability, and performance of our critical systems. As a Senior SRE, you will play a pivotal role in establishing and implementing SRE practices, leading a team of engineers, and driving automation, monitoring, and incident response strategies. This position combines software engineering and systems engineering expertise to build and maintain high-performing, reliable systems.

Experience: 5-10 years

Key Responsibilities:

Reliability & Performance:
- Lead efforts to maintain high availability and reliability of critical services.
- Define and monitor SLIs, SLOs, and SLAs to ensure business requirements are met.
- Proactively identify and resolve performance bottlenecks and system inefficiencies.

Incident Management & Response:
- Establish and improve incident management processes and on-call rotations.
- Lead incident response and root cause analysis for high-priority outages.
- Drive post-incident reviews and ensure actionable insights are implemented.

Automation & Tooling:
- Develop and implement automated solutions to reduce manual operational tasks.
- Enhance system observability through metrics, logging, and distributed tracing tools (e.g., Prometheus, Grafana, Elastic APM).
- Optimize CI/CD pipelines for seamless deployments.

Collaboration:
- Partner with software engineering teams to improve the reliability of applications and infrastructure.
- Work closely with product and engineering teams to design scalable and robust systems.
- Ensure seamless integration of monitoring and alerting systems across teams.

Leadership & Team Building:
- Manage, mentor, and grow a team of SREs.
- Promote SRE best practices and foster a culture of reliability and performance across the organization.
- Drive performance reviews, skills development, and career progression for team members.

Capacity Planning & Cost Optimization:
- Perform capacity planning and implement autoscaling solutions to handle traffic spikes.
- Optimize infrastructure and cloud costs while maintaining reliability and performance.

Skills & Qualifications:

Required Skills:
- Technical expertise:
  - Experience with cloud platforms (AWS/Azure/GCP) and Kubernetes.
  - Hands-on knowledge of infrastructure-as-code tools like Terraform/Helm/Ansible.
  - Proficiency in Java.
  - Expertise in distributed systems, databases, and load balancing.
- Monitoring & observability:
  - Proficient with tools like Prometheus, Grafana, Elastic APM, or New Relic.
  - Understanding of metrics-driven approaches for system monitoring and alerting.
- Automation & CI/CD:
  - Hands-on experience with CI/CD pipelines (e.g., Jenkins, Azure Pipelines, etc.).
  - Skilled in automation frameworks and tools for infrastructure and application deployments.
- Incident management:
  - Proven track record in handling incidents, post-mortems, and implementing solutions to prevent recurrence.

Leadership & Communication Skills:
- Strong people management and leadership skills with the ability to inspire and motivate teams.
- Excellent problem-solving and decision-making skills.
- Clear and concise communication, with the ability to translate technical concepts for non-technical stakeholders.

Preferred Qualifications:
- Experience with database optimization, Kafka, or other messaging systems.
- Knowledge of autoscaling techniques.
- Previous experience in an SRE, DevOps, or infrastructure engineering leadership role.
- Understanding of compliance and security best practices in distributed systems.
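Since the role above emphasises metrics-driven observability with Prometheus, here is a minimal, illustrative Python sketch using the prometheus_client library to expose a request counter and a latency histogram. The metric names and the simulated workload are placeholders, not a description of the employer's systems.

```python
# Minimal service instrumentation with prometheus_client:
# a request counter and a latency histogram exposed on :8000/metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                       # observe wall-clock latency
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)   # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle_request()
```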

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Not just a job, but a career.

Yokogawa, award winner for ‘Best Asset Monitoring Technology’ and ‘Best Digital Twin Technology’ at the HP Awards, is a leading provider of industrial automation, test and measurement, information systems and industrial services in several industries. Our aim is to shape a better future for our planet through supporting the energy transition, (bio)technology, artificial intelligence, industrial cybersecurity, etc. We are committed to the United Nations Sustainable Development Goals by utilizing our ability to measure and connect.

About The Team: Our 18,000 employees work in over 60 countries with one corporate mission, to "co-innovate tomorrow". We are looking for dynamic colleagues who share our passion for technology and care for our planet. In return, we offer you great career opportunities to grow yourself in a truly global culture where respect, value creation, collaboration, integrity, and gratitude are highly valued and exhibited in everything we do.

This role is responsible for the overall infrastructure and datacentre management of Yokogawa India Limited, where the candidate will manage and develop the roadmap, re-design infrastructure upgrades, refreshes, migrations, and operational tasks. The candidate must have 8-12 years of IT industry experience and be proficient in enterprise IT, including servers, storage, virtualization, and networking concepts across on-premises, hybrid, and cloud environments. He/she must have hands-on experience working in both on-premises and cloud delivery models such as IaaS/SaaS/PaaS.

Key responsibilities include, but are not limited to:
- Manage and support heterogeneous infrastructure consisting of physical and virtual servers, storage, backup, datacentre, high availability, disaster recovery, and business continuity plans.
- Assist in the architecture, design, build, and deployment of servers and cloud infrastructure within Microsoft Azure.
- Maintain on-premises servers and datacentre servers.
- Be well versed in scripting for automation via Terraform; provision and manage Azure infrastructure using PowerShell automation.
- Strong understanding of Azure Cloud and Active Directory.
- Create architectural and migration roadmaps for on-premises servers and applications on Azure.
- Strong understanding of the different Azure IaaS services and platform and their associated limitations.
- Understanding and exposure to Azure Cloud, with hands-on experience of Azure Scale Sets, Load Balancer, Azure networking, Azure Monitor, Alerts, and Log Analytics.
- Infrastructure monitoring and logging (e.g., tools like SCCM, Nagios, ManageEngine, InfraON, Grafana, Splunk).
- Strong knowledge of Intune and its maintenance.
- Maintain the regulatory systems related to the ISMS (Information Security Management System) policy.
- Printer management and governance.
- Responsible for managing, installing, and configuring new servers and storage systems for the business as per requirements.
- Experience in managing virtual machines (VMs) and Hyper-V, and supporting cloud migration activities.
- Expertise in managing and supporting Microsoft 365 workloads: Exchange Online, Intune, MS Teams, SharePoint, AIP, ATP, Azure AD Connect, Defender, license, and user management.
- Experience in managing and supporting Active Directory, DNS, SMTP, DHCP, Group Policy, printer management, and business-critical systems.
- Embrace a culture of continuous service improvement and service excellence.
- Strong expertise in automating operational tasks, innovation, value-adds, and process and cost optimization that improve resource productivity as well as end-user experience.
- Monitor the installed systems (hardware, software, and services) for operational metrics; ensure installed systems are optimized in terms of availability, stability, integrity, performance and scalability, with necessary security updates, patches, etc.
- Strong understanding of the ITIL framework, with in-depth knowledge and experience in incident, problem, change and knowledge management, exposure to ticketing tools (like SNOW, Remedy), and other disciplines related to service delivery.
- Analyse current technologies used within the company and determine ways to improve.
- Develop and produce infrastructure best practices and standards.
- Respond to production events, restoring services quickly and efficiently; lead root cause analysis and remediation.
- Review infrastructure requirements and make recommendations on improvements or cost optimizations.
- Support the team in audit-related activities.
- Develop, document, and maintain operational procedures.
- Support successful execution through participation in organization-wide IT transformation projects.

Qualifications And General Skills / Experience:
- Education: Master's/Bachelor's degree in Engineering in Computer Science or IT, or equivalent.
- Technical certification: the candidate must have MCSE, MCSA, MTA, Solution Architect, Azure, AWS or an equivalent credential.
- Process: ITIL knowledge is essential, and ITIL V3 certification will be an added advantage.
- Soft skills: excellent verbal and written communication, problem-solving and presentation skills, and a proactive attitude.
- Collaborate with others within the Infrastructure team as well as the Application Support teams on design and strategy.
- Based on business need, flexibility to work outside business hours, on weekends, and on holidays.
- Ability to work independently and as a team member, and to establish and maintain cooperative working relationships with coworkers.
- Ability to connect on-premises technology and services to cloud offerings.
- Stay current on technology and industry trends.

Yokogawa is an Equal Opportunity Employer. Yokogawa wants a diverse, equitable and inclusive culture. We will actively recruit, develop, and promote people from a variety of backgrounds who differ in terms of experience, knowledge, thinking styles, perspective, cultural background, and socioeconomic status. We will not discriminate based on race, skin color, age, sex, gender identity and expression, sexual orientation, religion, belief, political opinion, nationality, ethnicity, place of origin, disability, family relations or any other circumstances. Yokogawa values differences and enables everyone to belong, contribute, succeed, and demonstrate their full potential.

Are you being referred to one of our roles? If so, ask your connection at Yokogawa about our Employee Referral process!

Posted 1 day ago

Apply

18.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Capabilities Required:
- Total 18+ years of experience, with a minimum of 8 years leading an application support team.
- Solid experience of leading teams supporting Cat1/critical production applications.
- Experience of supporting critical applications in the payments domain:
  - Payment engines, payment gateways
  - Reconciliation and investigation
  - Payment integration
- Experience in payment products such as FIS OPF, NPP, CBIS, GPP, PAG.
- Experience of supporting any of the payment schemes such as NPP, RTGS, cross-border payments, SWIFT, Fast Payments, CHAPS, SEPA, etc.
- Detailed knowledge of change/problem/incident management.
- Solid experience of managing recoveries and major issues.
- Effective communicator who can work under pressure.
- Precise questioning and answering skills are a must in this role.
- Experience of running disaster recovery/simulation and BCP exercises.
- Open to 24x7 support.

Tools and Technologies:
- Experience in using ServiceNow for incident and problem management and the CMDB.
- Understanding of incident governance and production change governance.
- Well versed with logging and monitoring tools like Splunk, AppDynamics, DX APM, CloudWatch, etc.
- Experience in traversing Java and Spring Boot code in Git and making code fixes in production.
- Good understanding of cloud technologies in both AWS and Azure; capable of supporting microservices and third-party products in cloud and on-prem environments.
- Familiarity with the following technology elements:
  - Docker containers, Kubernetes
  - Kafka, MQ
  - Unix shell scripting
  - Control-M
  - REST APIs and API gateways like Kong
  - Databases like Oracle, PostgreSQL, etc.
  - Load balancers and API proxies
  - Security tools like CyberArk, HashiCorp Vault, Snyk, Checkmarx, etc.
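As a small illustration of the log-driven triage this support role calls for, here is a self-contained Python sketch that tallies failed transactions by scheme and error code from JSON-lines log records. The log format and field names are hypothetical stand-ins for whatever Splunk or CloudWatch would actually return.

```python
import json
from collections import Counter

# Hypothetical structured (JSON-lines) payment log; field names are assumptions.
SAMPLE_LOG = [
    '{"ts": "2024-05-01T10:00:01Z", "txn_id": "T1001", "scheme": "SEPA", "status": "SETTLED"}',
    '{"ts": "2024-05-01T10:00:02Z", "txn_id": "T1002", "scheme": "SWIFT", "status": "FAILED", "error": "TIMEOUT"}',
    '{"ts": "2024-05-01T10:00:03Z", "txn_id": "T1003", "scheme": "SEPA", "status": "FAILED", "error": "INVALID_IBAN"}',
]

# Count failures per (scheme, error) pair for a quick incident summary.
failures = Counter()
for line in SAMPLE_LOG:
    event = json.loads(line)
    if event["status"] == "FAILED":
        failures[(event["scheme"], event.get("error", "UNKNOWN"))] += 1

for (scheme, error), count in failures.most_common():
    print(f"{scheme}: {error} x{count}")
```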

Posted 1 day ago

Apply

7.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview

Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery and measurable quality? If this is you, then join an energetic team of DevOps Engineers building next-generation development applications for PDI!

As a DevOps Engineer, you will partner with a team of senior engineers in the design, development and maintenance of our CI/CD DevOps platform for new and existing PDI solutions. The platform will be used internally by the engineering teams, providing them an internal pipeline to work with POCs, alphas, betas and release-candidate environments, as well as supporting the pipeline into our production stage and release environments managed by our CloudOps Engineers and running hybrid clouds composed of PDI datacenter-based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing and building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity. You will be supporting automated deployment mechanisms, writing hybrid cloud infrastructure as code, automated testing, source control integration and lab environment management. You will review, recommend and implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will be responsible for maintaining order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops…) are trained in operating procedures and best practices. With the variety of environments, platforms, technologies and languages, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration as well as bare-metal virtualization technologies and public cloud environments (AWS).

Key Responsibilities
- Support pre-production services: engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live.
- Scale and evolve systems: ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity.
- Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments; drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency.
- Evangelize across teams: work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration in addition to DevOps best practices.
- Optimize CI/CD pipelines: lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases.
- Enhance observability and monitoring: implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems.
- Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions.
- Mentor and guide team members: provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team.

Qualifications
- 7-10 years in DevOps or related software engineering, or an equivalent combination of education and experience.
- Proven expertise in AWS cloud services; experience with other cloud platforms (Azure, GCP) is a plus.
- Advanced proficiency in Infrastructure as Code (IaC) using Terraform, with experience managing complex, multi-module setups for provisioning infrastructure across environments.
- Strong experience with configuration management tools, particularly Ansible (preferred) and/or Chef, for automating system and application configurations.
- Expertise in implementing CI/CD best practices (Jenkins, CircleCI, TeamCity, or GitLab).
- Experience with version control systems (e.g., Git, Bitbucket), and developing branching strategies for large-scale, multi-team projects.
- Familiar with containerization (Docker) and cloud orchestration (Kubernetes, ECS, EKS, Helm).
- Functional understanding of various logging and observability tools (Grafana, Loki, Fluent Bit, Prometheus, ELK stack, Dynatrace, etc.).
- Familiar with build automation in Windows and Linux, and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven) and artifact repositories (Artifactory, Nexus).
- Working experience in Windows and Linux systems, CLI and scripting.
- Programming experience with one or more of Python, Groovy, Go, C#, Ruby, PowerShell.
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions.
- Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams.

Preferred Qualifications
- Domain experience in the convenience retail industry, ERP, logistics or financial transaction processing solutions.
- Any relevant certifications are a plus.
- Any other experience with common Cloud Operations/DevOps tools and practices is a plus.

Behavioral Competencies: Cultivates Innovation, Decision Quality, Manages Complexity, Drives Results, Business Insight.

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements.
Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
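As a taste of the CI/CD pipeline work described in this role, here is a small, illustrative Python wrapper around `terraform plan -detailed-exitcode`, whose documented exit codes are 0 for no changes, 1 for errors, and 2 for pending changes. The working directory is a hypothetical placeholder, and this is a generic sketch, not PDI's pipeline code.

```python
# A CI-style gate around `terraform plan` using its -detailed-exitcode contract.
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
    cwd="infra/live/staging",   # hypothetical Terraform working directory
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("No infrastructure changes; skipping apply stage.")
elif result.returncode == 2:
    print("Changes detected; posting plan for review:")
    print(result.stdout)
else:
    print("terraform plan failed:", result.stderr, file=sys.stderr)
    sys.exit(1)
```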

Posted 1 day ago

Apply

0.0 years

0 Lacs

Ahmedabad, Gujarat

Remote

Source: Indeed

About the Role: Grade Level (for internal use): 11 Job Title: Senior DevOps Engineer Location: Ahmedabad, India About Us: ChartIQ , a division of S&P Global , provides a powerful JavaScript library that enables sophisticated data visualization and charting solutions for financial market participants. Our library is designed to run seamlessly in any browser or browser-like environment, such as a web-view, and empowers users to interpret and interact with complex financial datasets. By transforming raw data into compelling visual narratives, ChartIQ helps traders, analysts, and decision-makers uncover insights, identify key relationships, and spot critical opportunities in real-time. Role Overview: As a DevOps Engineer at ChartIQ , you'll play a critical role not only in building, maintaining, and scaling the infrastructure that supports our Development our Development and QA needs , but also in driving new, exciting cloud-based solutions that will add to our offerings. Your work will ensure that the platforms used by our team remain available, responsive, and high-performing. In addition to maintaining the current infrastructure, you will also contribute to the development of new cloud-based solutions , helping us expand and enhance our platform's capabilities to meet the growing needs of our financial services customers. You will also contribute to light JavaScript programming , assist with QA testing , and troubleshoot production issues. Working in a fast-paced, collaborative environment, you'll wear multiple hats and support the infrastructure for a wide range of development teams. This position is based in Ahmedabad, India , and will require working overlapping hours with teams in the US . The preferred working hours will be until 12 noon EST to ensure effective collaboration across time zones. Key Responsibilities: Design, implement, and manage infrastructure using Terraform or other Infrastructure-as-Code (IaC) tools. Leverage AWS or equivalent cloud platforms to build and maintain scalable, high-performance infrastructure that supports data-heavy applications and JavaScript-based visualizations. Understand component-based architecture and cloud-native applications. Implement and maintain site reliability practices , including monitoring and alerting using tools like DataDog , ensuring the platform’s availability and responsiveness across all environments. Design and deploy high-availability architecture to support continuous access to alerting engines. Support and maintain Configuration Management systems like ServiceNow CMDB . Manage and optimize CI/CD workflows using GitHub Actions or similar automation tools. Work with OIDC (OpenID Connect) integrations across Microsoft , AWS , GitHub , and Okta to ensure secure access and authentication. Contribute to QA testing (both manual and automated) to ensure high-quality releases and stable operation of our data visualization tools and alerting systems. Participate in light JavaScript programming tasks, including HTML and CSS fixes for our charting library. Assist with deploying and maintaining mobile applications on the Apple App Store and Google Play Store . Troubleshoot and manage network issues , ensuring smooth data flow and secure access to all necessary environments. Collaborate with developers and other engineers to troubleshoot and optimize production issues. Help with the deployment pipeline , working with various teams to ensure smooth software releases and updates for our library and related services. 
Required Qualifications: Proficiency with Terraform or other Infrastructure-as-Code tools. Experience with AWS or other cloud services (Azure, Google Cloud, etc.). Solid understanding of component-based architecture and cloud-native applications. Experience with site reliability tools like DataDog for monitoring and alerting. Experience designing and deploying high-availability architecture for web based applications. Familiarity with ServiceNow CMDB and other configuration management tools. Experience with GitHub Actions or other CI/CD platforms to manage automation pipelines. Strong understanding and practical experience with OIDC integrations across platforms like Microsoft , AWS , GitHub , and Okta . Solid QA testing experience, including manual and automated testing techniques (Beginner/Intermediate). JavaScript , HTML , and CSS skills to assist with troubleshooting and web app development. Experience with deploying and maintaining mobile apps on the Apple App Store and Google Play Store that utilize web-based charting libraries. Basic network management skills, including troubleshooting and ensuring smooth network operations for data-heavy applications. Knowledge of package publishing tools such as Maven , Node , and CocoaPods to ensure seamless dependency management and distribution across platforms. Additional Skills and Traits for Success in a Startup-Like Environment: Ability to wear multiple hats : Adapt to the ever-changing needs of a startup environment within a global organization. Self-starter with a proactive attitude, able to work independently and manage your time effectively. Strong communication skills to work with cross-functional teams, including engineering, QA, and product teams. Ability to work in a fast-paced, high-energy environment. Familiarity with agile methodologies and working in small teams with a flexible approach to meeting deadlines. Basic troubleshooting skills to resolve infrastructure or code-related issues quickly. Knowledge of containerization tools such as Docker and Kubernetes is a plus. Understanding of DevSecOps and basic security practices is a plus. Preferred Qualifications: Experience with CI/CD pipeline management , automation, and deployment strategies. Familiarity with serverless architectures and AWS Lambda . Experience with monitoring and logging frameworks, such as Prometheus , Grafana , or similar. Experience with Git , version control workflows, and source code management. Security-focused mindset , experience with vulnerability scanning, and managing secure application environments. What We Offer: Competitive salary and benefits package. Flexible work schedule with remote work options. The opportunity to work in a collaborative, creative, and innovative environment. Hands-on experience with cutting-edge technologies and tools that power sophisticated financial data visualizations and charting solutions. Professional growth and career advancement opportunities. A dynamic startup culture within a global organization, where your contributions directly impact the product and the financial industry. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence . 
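To make the OIDC qualification above more concrete, here is a generic, hedged sketch of requesting an access token from an OIDC/OAuth 2.0 identity provider (for example Okta or Microsoft Entra ID) using the client-credentials grant and the requests library. The issuer URL, client ID, secret, and scope are hypothetical placeholders; real integrations would use the provider-specific endpoints and proper secret storage.

```python
"""Sketch of a client-credentials token request against an OIDC-compliant
identity provider. All endpoint and credential values are placeholders."""
import requests

TOKEN_URL = "https://example.okta.com/oauth2/default/v1/token"  # hypothetical issuer
CLIENT_ID = "my-client-id"          # placeholder
CLIENT_SECRET = "my-client-secret"  # placeholder

def fetch_token(scope: str = "api.read") -> str:
    """Exchange client credentials for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

if __name__ == "__main__":
    token = fetch_token()
    print(token[:16] + "...")  # avoid printing the full token
```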
What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 312974 Posted On: 2025-06-23 Location: Ahmedabad, Gujarat, India

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.
Role Overview: If you love to design scalable, fault-tolerant systems that run efficiently with high performance and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop, and maintain end-to-end automated provisioning and deployment systems for PDI solutions. You will partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system, and collaborate with the Solution Automation team to bring test automation into the deployment automation pipeline. Given the variety of environments, platforms, technologies, and languages, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration, bare-metal virtualization technologies, and public cloud environments in AWS.
Key Responsibilities: Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day. Design, build, and manage cloud infrastructure using AWS services. Implement infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources. Work with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). Manage and scale containerized applications using AWS services like Amazon ECR and AWS Fargate. Employ IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner. Automate the creation and configuration of AWS resources using infrastructure templates. Implement monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs. Configure alarms and alerts to proactively detect and respond to issues. Implement strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Disaster Recovery, or multi-region deployments.
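As a concrete, illustrative companion to the alarm-and-alerting responsibility above (and the CloudWatch tooling listed in the qualifications below), the boto3 sketch that follows defines a CPU-utilization alarm that notifies an SNS topic. The instance ID, topic ARN, and thresholds are placeholders; this is not PDI's actual configuration.

```python
"""Illustrative boto3 sketch: a CloudWatch alarm on average EC2 CPU
utilization that notifies an SNS topic. All identifiers are placeholders."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",                     # placeholder name
    AlarmDescription="Average CPU above 80% for three 5-minute periods",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder
)
print("Alarm created or updated.")
```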
Qualifications 7-9 years’ experience in DevOps role 1+ years leading DevOps initiatives AWS Services: In-depth understanding and hands-on experience with various AWS services, including but not limited to: o Compute: EC2, Lambda, ECS, EKS, Fargate, ELB o Networking: VPC, Route 53, CloudFront, TransitGateway, DirectConnect o Storage: S3, EBS, EFS o Database: RDS, MSSQL o Monitoring: CloudWatch, CloudTrail o Security: IAM, Security Groups, KMS, WAF Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible Experience with container technologies like Docker and container orchestration platforms like Kubernetes. Ability to build and manage containerized applications and deploy them to production environments Familiar with containerization (Docker), cloud orchestration (Kubernetes or Swarm) Preferred Qualifications Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible Experience with container technologies like Docker and container orchestration platforms like Kubernetes. Ability to build and manage containerized applications and deploy them to production environments Familiar with containerization (Docker), cloud orchestration (Kubernetes or Swarm) Working experience in Windows and Linux systems, CLI and scripting Familiar with build automation in Windows and Linux and familiar with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven) and artifact repositories (Artifactory, Nexus) Familiarity with version control system: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts. Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you, and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off [time away] and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.
Role Overview: If you love to design scalable, fault-tolerant systems that run efficiently with high performance and are eager to learn new technologies and develop new skills, then we have a great opportunity for you: join our PDI family and work closely with other talented PDI engineers to deliver solutions that delight our customers every day! As a DevOps Engineer III, you will design, develop, and maintain end-to-end automated provisioning and deployment systems for PDI solutions. You will partner with your engineering team to ensure these automation pipelines are integrated into our standard PDI CI/CD system, and collaborate with the Solution Automation team to bring test automation into the deployment automation pipeline. Given the variety of environments, platforms, technologies, and languages, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration, bare-metal virtualization technologies, and public cloud environments in AWS.
Key Responsibilities: Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day. Design, build, and manage cloud infrastructure using AWS services. Implement infrastructure-as-code practices with tools like Terraform or Ansible to automate the provisioning and configuration of resources. Work with container technologies like Docker and container orchestration platforms like Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS). Manage and scale containerized applications using AWS services like Amazon ECR and AWS Fargate. Employ IaC tools like Terraform and AWS CloudFormation to define and deploy infrastructure resources in a declarative and version-controlled manner. Automate the creation and configuration of AWS resources using infrastructure templates. Implement monitoring and logging solutions using Grafana or the ELK Stack to gain visibility into system performance, resource utilization, and application logs. Configure alarms and alerts to proactively detect and respond to issues. Implement strategies for disaster recovery and high availability using AWS services like AWS Backup, AWS Disaster Recovery, or multi-region deployments.
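To make the container-scaling responsibility above more tangible, the boto3 sketch below reads an ECS service's current state and adjusts its desired task count. The cluster and service names are placeholders and the scaling rule is deliberately simplistic; it is an illustration, not PDI's production autoscaling logic.

```python
"""Illustrative boto3 sketch that scales an ECS service by raising its
desired task count. Cluster and service names are placeholders."""
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

CLUSTER = "payments-cluster"   # placeholder cluster name
SERVICE = "checkout-api"       # placeholder service name

def scale_out(step: int = 1, max_tasks: int = 10) -> int:
    """Increase desiredCount by `step`, capped at `max_tasks`; return the new target."""
    service = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
    current = service["desiredCount"]
    target = min(current + step, max_tasks)
    if target != current:
        ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=target)
    return target

if __name__ == "__main__":
    print("desired task count is now", scale_out())
```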
Qualifications 7-9 years’ experience in DevOps role 1+ years leading DevOps initiatives AWS Services: In-depth understanding and hands-on experience with various AWS services, including but not limited to: o Compute: EC2, Lambda, ECS, EKS, Fargate, ELB o Networking: VPC, Route 53, CloudFront, TransitGateway, DirectConnect o Storage: S3, EBS, EFS o Database: RDS, MSSQL o Monitoring: CloudWatch, CloudTrail o Security: IAM, Security Groups, KMS, WAF Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible Experience with container technologies like Docker and container orchestration platforms like Kubernetes. Ability to build and manage containerized applications and deploy them to production environments Familiar with containerization (Docker), cloud orchestration (Kubernetes or Swarm) Preferred Qualifications Familiar with some cross-platform provisioning technologies and IaC tools: Terraform, Ansible Experience with container technologies like Docker and container orchestration platforms like Kubernetes. Ability to build and manage containerized applications and deploy them to production environments Familiar with containerization (Docker), cloud orchestration (Kubernetes or Swarm) Working experience in Windows and Linux systems, CLI and scripting Familiar with build automation in Windows and Linux and familiar with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven) and artifact repositories (Artifactory, Nexus) Familiarity with version control system: Git, Azure DevOps. Knowledge of branching strategies, merging, and resolving conflicts. Behavioral Competencies: Ensures Accountability Manages Complexity Communicates Effectively Balances Stakeholders Collaborates Effectively PDI is committed to offering a well-rounded benefits program, designed to support and care for you, and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off [time away] and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 1 day ago

Apply

7.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery and measurable quality? If this is you, then join an energetic team of DevOps Engineers building next generation development applications for PDI! As a DevOps Engineer, you will partner with a team of senior engineers in the design, development and maintenance of our CI/CD DevOps platform for new and existing PDI solutions. The platform will be used internally by the engineering teams, providing them an internal pipeline to work with POCs, alpha, betas and release candidate environments, as well as supporting the pipeline into our production stage and release environments managed by our CloudOps Engineers and running hybrid clouds composed of PDI datacenter based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing & building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity. You will be supporting automated deployment mechanisms, writing hybrid cloud infrastructure as code, automated testing, source control integration and lab environment management. You will review, recommend & implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will be responsible for maintaining order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops…) are trained in operating procedures and best practices. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell & bash scripting, database administration as well as bare metal virtualization technologies and public cloud environments ( AWS ). Key Responsibilities Support pre-production services : Engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live. Scale and evolve systems : Ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments. Drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency. 
Evangelize across teams: Work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration in addition to DevOps best practices. Optimize CI/CD pipelines : Lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases. Enhance observability and monitoring : Implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems. Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions. Mentor and guide team members : Provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team Qualifications 7-10 years in DevOps or related software engineering, or equivalent combination of education and experience Proven expertise in AWS cloud services. Experience with other cloud platforms (Azure, GCP) is a plus. Advanced proficiency in Infrastructure as Code (IaC) using Terraform , with experience managing complex, multi-module setups for provisioning infrastructure across environments. Strong experience with configuration management tools, particularly Ansible (preferred), and/or Chef, for automating system and application configurations. Expertise in implementing CI/CD best practices ( Jenkins , Circle CI , TeamCity , or Gitlab ) Experience with version control systems (e.g., Git, Bitbucket), and developing branching strategies for large-scale, multi-team projects. Familiar with containerization ( Docker ) and cloud orchestration ( Kubernetes , ECS , EKS , Helm ) Functional understanding of various logging and observability tools ( Grafana , Loki , Fluentbit , Prometheus , ELK stack , Dynatrace , etc.) Familiar with build automation in Windows and Linux and familiar with the various build tools ( MSBuild , Make ), package managers ( NuGet , NPM , Maven ) and artifact repositories ( Artifactory , Nexus ) Working experience in Windows and Linux systems, CLI and scripting Programming experience with one or more of Python, Groovy, Go , C# , Ruby, PowerShell Bachelor’s degree in computer science, Information Technology, or a related field (or equivalent work experience). Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions. Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams. Preferred Qualifications Domain experience in the Convenience Retail Industry, ERP, Logistics or Financial transaction processing solutions Any relevant certifications are a plus Any other experience with common Cloud Operations/DevOps tools and practices is a plus Behavioral Competencies : Cultivates Innovation Decision Quality Manages Complexity Drives Results Business Insight PDI is committed to offering a well-rounded benefits program, designed to support and care for you, and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off [time away] and, where appropriate, hybrid working arrangements. 
Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
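The observability tooling named in this posting's qualifications (Grafana, Loki, Fluent Bit, the ELK stack) generally ingests structured log lines. As a purely illustrative aside, the sketch below uses only the Python standard library to emit JSON-formatted logs of the kind a shipper such as Fluent Bit could forward; the logger and field names are arbitrary examples.

```python
"""Illustrative structured (JSON-lines) logging with the standard library."""
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("deploy-pipeline")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("deployment started")
try:
    raise RuntimeError("simulated failure")
except RuntimeError:
    log.exception("deployment step failed")
```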

Posted 1 day ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery and measurable quality? If this is you, then join an energetic team of DevOps Engineers building next generation development applications for PDI! As a DevOps Engineer, you will partner with a team of senior engineers in the design, development and maintenance of our CI/CD DevOps platform for new and existing PDI solutions. The platform will be used internally by the engineering teams, providing them an internal pipeline to work with POCs, alpha, betas and release candidate environments, as well as supporting the pipeline into our production stage and release environments managed by our CloudOps Engineers and running hybrid clouds composed of PDI datacenter based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing & building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity. You will be supporting automated deployment mechanisms, writing hybrid cloud infrastructure as code, automated testing, source control integration and lab environment management. You will review, recommend & implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will be responsible for maintaining order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops…) are trained in operating procedures and best practices. With the variety of environments, platforms, technologies & languages, you must be comfortable working in both Windows & Linux environments, including PowerShell & bash scripting, database administration as well as bare metal virtualization technologies and public cloud environments ( AWS ). Key Responsibilities Support pre-production services : Engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live. Scale and evolve systems : Ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments. Drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency. 
Evangelize across teams: Work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration in addition to DevOps best practices. Optimize CI/CD pipelines : Lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases. Enhance observability and monitoring : Implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems. Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions. Mentor and guide team members : Provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team Qualifications 7-10 years in DevOps or related software engineering, or equivalent combination of education and experience Proven expertise in AWS cloud services. Experience with other cloud platforms (Azure, GCP) is a plus. Advanced proficiency in Infrastructure as Code (IaC) using Terraform , with experience managing complex, multi-module setups for provisioning infrastructure across environments. Strong experience with configuration management tools, particularly Ansible (preferred), and/or Chef, for automating system and application configurations. Expertise in implementing CI/CD best practices ( Jenkins , Circle CI , TeamCity , or Gitlab ) Experience with version control systems (e.g., Git, Bitbucket), and developing branching strategies for large-scale, multi-team projects. Familiar with containerization ( Docker ) and cloud orchestration ( Kubernetes , ECS , EKS , Helm ) Functional understanding of various logging and observability tools ( Grafana , Loki , Fluentbit , Prometheus , ELK stack , Dynatrace , etc.) Familiar with build automation in Windows and Linux and familiar with the various build tools ( MSBuild , Make ), package managers ( NuGet , NPM , Maven ) and artifact repositories ( Artifactory , Nexus ) Working experience in Windows and Linux systems, CLI and scripting Programming experience with one or more of Python, Groovy, Go , C# , Ruby, PowerShell Bachelor’s degree in computer science, Information Technology, or a related field (or equivalent work experience). Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions. Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams. Preferred Qualifications Domain experience in the Convenience Retail Industry, ERP, Logistics or Financial transaction processing solutions Any relevant certifications are a plus Any other experience with common Cloud Operations/DevOps tools and practices is a plus Behavioral Competencies : Cultivates Innovation Decision Quality Manages Complexity Drives Results Business Insight PDI is committed to offering a well-rounded benefits program, designed to support and care for you, and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off [time away] and, where appropriate, hybrid working arrangements. 
Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bhiwandi, Maharashtra, India

Remote

Linkedin logo

Company Description: A leading company in wastewater management, treatment, and recycling packaged solutions. The company has developed turnkey and affordable point-of-source, decentralized wastewater management, treatment, and recycling packaged solutions. Our patented designs are modular, easily scalable, smaller in footprint, and can cater to domestic as well as industrial wastewater treatment and recycling needs. Further, our systems are fully automated, backed by the INDRA SMART automation and INDRA SPECTRUM analytics platforms, enhancing system efficiency and delivering consistent performance. Key benefits also include low energy consumption, no added chemical requirements, minimal sludge generation, water recovery of more than 95%, high operational efficiency, and lower maintenance.
Role Description: As an Automation Programmer, you will be responsible for developing, configuring, and maintaining PLC, SCADA, and IoT systems used in Effluent Treatment Plants (ETP) and Sewage Treatment Plants (STP). Your role is critical in ensuring automated and reliable plant performance, remote monitoring, and data-driven decision-making.
Responsibilities: 1. Design, program, and test PLC logic for automation of wastewater treatment plant processes. 2. Develop and configure PLC and SCADA systems for real-time plant monitoring, control, and alarms. 3. Integrate IoT devices for remote monitoring, data logging, and cloud communication (see the sketch after this description). 4. Collaborate with process engineers to translate treatment logic and control requirements into functional automation. 5. Conduct on-site installation, commissioning, and troubleshooting of control systems. 6. Ensure proper communication between field instruments, PLCs, HMIs, SCADA systems, and cloud servers. 7. Maintain documentation for PLC/SCADA architecture, code, and network configurations. 8. Provide remote and on-site technical support to field teams and clients. 9. Implement cybersecurity and fail-safe protocols for control systems. 10. Support data analytics, reporting, and dashboard development through IoT integration.
Desired Candidate Profile: 1. Degree in Instrumentation, Electronics, Electrical, or Automation Engineering. 2. 2–4 years of hands-on experience with PLCs (e.g., Siemens, Allen-Bradley, Delta), SCADA (e.g., Wonderware, WinCC, InduSoft), and IoT platforms. 3. Experience in wastewater or process automation is preferred. 4. Knowledge of industrial communication protocols (MODBUS, TCP/IP, MQTT, RS485). 5. Proficiency in HMI design and logic development.
Competencies Required: 1. Strong logical and troubleshooting skills in automation systems. 2. Ability to read and interpret P&IDs, control schematics, and wiring diagrams. 3. Familiarity with sensor calibration and field instrumentation (flow, level, pH, DO, etc.). 4. Understanding of remote data acquisition and cloud integration techniques. 5. Excellent coordination and communication skills to support field deployments.
Salary: Up to 5 LPA. Interested candidates can apply to contact@absinternational.co.in or WhatsApp 8108313813.
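Responsibility 3 above (IoT integration for remote monitoring and data logging) can be sketched briefly: the example below publishes sensor readings to an MQTT broker using the paho-mqtt library (1.x-style constructor shown; version 2.x additionally takes a callback-API-version argument). The broker address, topic, and sensor values are hypothetical placeholders, not the company's actual telemetry scheme.

```python
"""Sketch of publishing plant telemetry to an MQTT broker for remote data
logging. Broker, topic, and readings are placeholders."""
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # placeholder broker address
TOPIC = "plant/stp-01/telemetry"     # placeholder topic

def read_sensors() -> dict:
    """Stand-in for real field instrumentation (flow, level, pH, DO)."""
    return {"flow_m3h": 12.4, "ph": 7.1, "do_mgl": 4.8, "ts": int(time.time())}

client = mqtt.Client()                        # paho-mqtt 1.x style constructor
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()                           # background network loop
client.publish(TOPIC, json.dumps(read_sensors()), qos=1)
client.loop_stop()
client.disconnect()
```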

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

hackajob is collaborating with Comcast to connect them with exceptional tech professionals for this role. Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Required Skills and Qualifications: Strong proficiency in Core Java, including multithreading, design patterns, and data structures. Experience with SQL and database management. Hands-on experience with DevOps tools such as Kubernetes, Docker, and Terraform. Familiarity with CI/CD pipelines and version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to work in a fast-paced, agile environment. Preferred Qualifications: Experience with cloud platforms (e.g., AWS, Azure, GCP). Knowledge of microservices architecture and RESTful APIs. Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Job Description Core Responsibilities Job Description: Java Full Stack Developer with DevOps Skills Position: Java Developer with DevOps Skills Location: CIEC, Comcast Type: Full-time Experience Level: 5-8years Job Summary: We are seeking a skilled Java Developer with 5+ years of strong DevOps capabilities to join our dynamic team. The ideal candidate will have a solid background in Core Java, including multithreading, design patterns, and data structures, as well as experience with SQL and DevOps tools such as Kubernetes, Docker, and Terraform. Key Responsibilities Develop and maintain high-performance Java applications. Implement multithreading and design patterns to optimize application performance. Design and manage data structures to ensure efficient data handling. Write and optimize SQL queries for database interactions. Collaborate with DevOps teams to deploy and manage applications using Kubernetes, Docker, and Terraform. Participate in code reviews and contribute to continuous improvement of the development process. Troubleshoot and resolve application issues in a timely manner. Work closely with cross-functional teams to deliver high-quality software solutions. Required Skills And Qualifications Strong proficiency in Core Java, including multithreading, design patterns, and data structures. Experience with SQL and database management. Hands-on experience with DevOps tools such as Kubernetes, Docker, and Terraform. Familiarity with CI/CD pipelines and version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to work in a fast-paced, agile environment. Preferred Qualifications Experience with cloud platforms (e.g., AWS, Azure, GCP). Knowledge of microservices architecture and RESTful APIs. Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Education Bachelor's degree in computer science, Engineering, or a related field. 
Disclaimer This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality - to help support you physically, financially and emotionally through the big milestones and in your everyday life. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Description: Machine Learning Engineer - LLM, Agentic AI, Computer Vision, and MLOps
Location: Ahmedabad
Experience: 4 to 6 years
Employment Type:
About Us: Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, Computer Vision, MLOps, and pushing the boundaries of Agentic AI, we'd love to have you on board.
Responsibilities: Research and Development: Design, develop, and fine-tune machine learning models across LLM, computer vision, and Agentic AI use cases. Model Optimization: Fine-tune and optimize pre-trained models, ensuring performance, scalability, and minimal latency. Computer Vision: Build and deploy vision models for object detection, classification, OCR, and segmentation. Integration: Work closely with software and product teams to integrate models into production-ready applications. Data Engineering: Develop robust data pipelines for structured, unstructured (text/image/video), and streaming data. Production Deployment: Deploy, monitor, and manage ML models in production using DevOps and MLOps practices. Experimentation: Prototype and test new AI approaches such as reinforcement learning, few-shot learning, and generative AI. DevOps Collaboration: Collaborate with the DevOps team to ensure CI/CD pipelines, infrastructure-as-code, and scalable deployments are in place. Technical Mentorship: Support and mentor junior ML and data engineers.
Core Technical Skills: Strong Python skills for machine learning and computer vision. Hands-on experience with PyTorch, TensorFlow, Hugging Face, Scikit-learn, OpenCV. Deep understanding of LLMs (e.g., GPT, BERT, T5) and Computer Vision architectures (e.g., CNNs, Vision Transformers, YOLO, R-CNN). Strong knowledge of NLP tasks, image/video processing, and real-time inference. Experience in cloud platforms: AWS, GCP, or Azure. Familiarity with Docker, Kubernetes, and serverless deployments. Proficiency in SQL, Pandas, NumPy, and data wrangling.
DevOps & MLOps Skills: Experience with CI/CD tools such as GitHub Actions, GitLab CI, Jenkins, etc. Knowledge of Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Pulumi. Familiarity with container orchestration and Kubernetes-based ML model deployment. Hands-on experience with ML pipelines and monitoring tools: MLflow, Kubeflow, TFX, or Seldon. Understanding of model versioning, model registry, and automated testing/validation in ML workflows. Exposure to observability and logging frameworks (e.g., Prometheus, Grafana, ELK).
Skills (Good to Have): Knowledge of Agentic AI systems and use cases. Experience with generative models (e.g., GANs, VAEs) and RL-based architectures. Prompt engineering and fine-tuning for LLMs in specialized domains. Working with vector databases (e.g., Pinecone, FAISS, Weaviate). Distributed data processing using Apache Spark.
Foundational Skills: Strong foundation in mathematics, including linear algebra, probability, and statistics. Deep understanding of data structures and algorithms. Comfortable handling large-scale datasets, including images, video, and multi-modal data.
Soft Skills: Strong analytical and problem-solving mindset. Excellent communication skills for cross-functional collaboration. Self-motivated, adaptive, and committed to continuous learning. (ref:hirist.tech)
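Much of the model work described above starts from pre-trained checkpoints. As a minimal illustration, the sketch below runs inference with a Hugging Face pipeline; the default public sentiment model it downloads is an arbitrary choice for demonstration and is unrelated to Tecblic's actual models.

```python
"""Illustrative inference with a pre-trained Hugging Face pipeline."""
from transformers import pipeline

# Downloads a small public sentiment-analysis model on first run.
classifier = pipeline("sentiment-analysis")

texts = [
    "The new dashboard makes anomaly triage much faster.",
    "Inference latency regressed after the last deploy.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```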

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Description: As a Node.js Developer, you will be responsible for developing and maintaining backend services and APIs, ensuring high performance, scalability, and reliability of our applications. You will work closely with DevOps, front-end engineers, and product managers to deliver cloud-native applications on AWS and manage relational databases with optimized SQL.
Responsibilities: Design, develop, and maintain backend services and RESTful APIs using Node.js and Express.js. Write efficient, reusable, and secure JavaScript code (ES6+) on both server-side and client-side (if full stack). Work with AWS services such as Lambda, API Gateway, S3, EC2, RDS, DynamoDB, IAM, and CloudWatch. Build and maintain SQL queries, stored procedures, functions, and schema designs (PostgreSQL, MySQL, or MS SQL). Optimize database queries for performance and reliability. Implement serverless functions and microservices architectures on AWS. Collaborate with cross-functional teams to define, design, and ship new features. Participate in code reviews, unit testing, and system integration. Ensure application security, scalability, and performance. Integrate third-party APIs and data sources. Monitor and troubleshoot production issues, perform root cause analysis, and apply fixes. Automate deployments and workflows using CI/CD pipelines. Document technical specifications, systems design, and processes.
Technical Skills:
Languages & Frameworks: Strong in JavaScript (ES6+); hands-on experience with Node.js, Express.js, or similar frameworks; familiarity with TypeScript is a plus.
Cloud & DevOps (AWS): AWS Lambda, API Gateway, S3, RDS, EC2, CloudFormation or Terraform; IAM roles and security policies; CloudWatch for logging and monitoring; experience in CI/CD tools like GitHub Actions, Jenkins, or AWS CodePipeline.
Databases & Data Handling: Proficient in SQL, writing optimized queries, stored procedures, and views; experience with relational databases like PostgreSQL, MySQL, or MS SQL Server; familiarity with NoSQL databases (DynamoDB, MongoDB) is a plus.
API Development & Integration: RESTful API design and implementation; familiarity with GraphQL is a bonus; OAuth2, JWT, and API key-based authentication.
Testing & Code Quality: Unit testing frameworks like Jest, Mocha, or Chai; knowledge of Postman and Swagger/OpenAPI.
Version Control & Tools: Git; Agile/Scrum development.
Qualifications: Bachelor's/Master's degree in Computer Science, Information Technology, or related fields. AWS Certified Developer or AWS Certified Solutions Architect is a plus. Strong analytical and debugging skills. Excellent written and verbal communication. Ability to work independently and in a team-oriented, collaborative environment. (ref:hirist.tech)

Posted 1 day ago

Apply

Exploring Logging Jobs in India

The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Chennai

These cities are known for their thriving industries where logging professionals are actively recruited.

Average Salary Range

The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.

Related Skills

In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.

Interview Questions

  • What is logging and why is it important in software development? (basic)
  • Can you explain the difference between logging levels such as INFO, DEBUG, and ERROR? (medium; see the sketch after this list)
  • How do you handle log rotation in a large-scale application? (advanced)
  • Have you worked with any logging frameworks like Log4j or Logback? (basic)
  • Describe a challenging logging issue you faced in a previous project and how you resolved it. (medium)
  • How do you ensure that log files are secure and comply with data protection regulations? (advanced)
  • What are the benefits of structured logging over traditional logging methods? (medium)
  • How would you optimize logging performance in a high-traffic application? (advanced)
  • Can you explain the concept of log correlation and how it is useful in troubleshooting? (medium)
  • Have you used any monitoring tools for real-time log analysis? (basic)
  • How do you handle log aggregation from distributed systems? (advanced)
  • What are the common pitfalls to avoid when implementing logging in a microservices architecture? (medium)
  • How do you troubleshoot a situation where logs are not being generated as expected? (medium)
  • Have you worked with log parsing tools to extract meaningful insights from log data? (medium)
  • How do you handle sensitive information in log files, such as passwords or personal data? (advanced)
  • What is the role of logging in compliance with industry standards such as GDPR or HIPAA? (medium)
  • Can you explain the concept of log enrichment and how it improves log analysis? (medium)
  • How do you handle logging in a multi-threaded application to ensure thread safety? (advanced)
  • Have you implemented any custom log formats or log patterns in your projects? (medium)
  • How do you perform log monitoring and alerting to detect anomalies or errors in real-time? (medium)
  • What are the best practices for logging in cloud-based environments like AWS or Azure? (medium)
  • How do you integrate logging with other monitoring and alerting tools in a DevOps environment? (medium)
  • Can you discuss the role of logging in performance tuning and optimization of applications? (medium)
  • What are the key metrics and KPIs you track through log analysis to improve system performance? (medium)
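
As referenced in the questions on logging levels and log rotation above, here is a minimal sketch using only the Python standard library. The file name, size limit, and backup count are arbitrary examples, not a recommendation for any specific system.

```python
"""Minimal sketch of logging levels plus size-based log rotation."""
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("orders")
logger.setLevel(logging.DEBUG)  # capture everything; handlers filter further

# Rotate after ~1 MB, keeping 5 old files: orders.log.1 ... orders.log.5
handler = RotatingFileHandler("orders.log", maxBytes=1_000_000, backupCount=5)
handler.setLevel(logging.INFO)  # DEBUG records are dropped by this handler
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s"))
logger.addHandler(handler)

logger.debug("cart recalculated")            # filtered out at the handler
logger.info("order 1042 placed")             # routine event
logger.error("payment gateway timed out")    # needs attention
```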

Closing Remark

As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies