5.0 - 10.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Key Responsibilities
- Design, implement, and manage our AWS infrastructure, with a strong emphasis on automation, resiliency, and cost-efficiency.
- Develop and oversee scalable data pipelines for event processing, transformation, and delivery.
- Implement and manage stream processing frameworks such as Kinesis, Kafka, or MSK (a minimal producer sketch follows this listing).
- Handle orchestration and ETL workloads, employing services like AWS Glue, Athena, Databricks, Redshift, or Apache Airflow.
- Implement robust network, storage, and backup strategies for growing workloads.
- Monitor, debug, and resolve production issues related to data and infrastructure in real time.
- Implement IAM controls, logging, alerts, and security best practices across all components.
- Provide deployment automation (Docker, Terraform, CloudFormation) and collaborate with application engineers to enable smooth delivery.
- Build SOPs for support and set up a functioning 24x7 support system (including hiring the right engineers) to ensure system uptime and availability.

Required Technical Skills
- 5+ years of experience with AWS services (VPC, EC2, S3, Security Groups, RDS, Kinesis, MSK, Redshift, Glue).
- Experience designing and managing large-scale data pipelines with high-throughput workloads.
- Ability to gracefully handle workloads of 5 billion events/day and 1M+ concurrent users.
- Familiarity with scripting (Python, Terraform) and automation practices (Infrastructure as Code).
- Familiarity with network fundamentals, Linux, scaling strategies, and backup routines.
- Collaborative team player, able to work with engineers, data analysts, and stakeholders.

Preferred Tools & Technologies
- AWS: EC2, S3, VPC, Security Groups, RDS, Redshift, DocumentDB, MSK, Glue, Athena, CloudWatch
- Infrastructure as Code: Terraform, CloudFormation
- Scripted automation: Python, Bash
- Container orchestration: Docker, ECS or EKS
- Workflow orchestration: Apache Airflow, Dagster
- Streaming frameworks: Apache Kafka, Kinesis, Flink
- Other: Linux, Git, security best practices (IAM, Security Groups, ACM)

Education
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Relevant professional certifications in cloud platforms or data technologies

Please share your updated resume at kshipra.garg@wowjobs.biz.
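As a minimal sketch of the producer side of such a streaming pipeline, assuming boto3 as the AWS SDK and a hypothetical stream name and region (not this employer's actual pipeline):

```python
import json
import boto3

# Hypothetical region; real code would take this from configuration.
kinesis = boto3.client("kinesis", region_name="ap-south-1")

def put_event(event: dict, stream: str = "clickstream-events") -> None:
    """Send one event to a Kinesis stream, partitioned by user id so that
    events for the same user land on the same shard in order."""
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

if __name__ == "__main__":
    put_event({"user_id": 42, "action": "page_view", "page": "/pricing"})
```

At billions of events per day, a real producer would batch records with put_records and handle per-record throttling errors rather than sending one record per call.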
Posted 3 weeks ago
2.0 - 7.0 years
4 - 6 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Develop scalable microservices using Java Spring Boot. Design and implement REST APIs and integrate with frontend and external services. Deploy and manage services using AWS services like EC2, S3, Lambda, RDS, and ECS.

Required Candidate Profile
- Use CI/CD pipelines for automated builds and deployments (e.g., Jenkins, GitHub Actions)
- Collaborate with frontend, QA, DevOps, and business teams
- Write unit and integration tests to ensure code quality
Posted 3 weeks ago
2.0 - 4.0 years
6 - 7 Lacs
Mumbai Suburban
Work from Office
We are the PERFECT match if you...
Are a graduate with a minimum of 2-4 years of technical product support experience with the following skills:
- Clear logical thinking and good communication skills. We believe in individuals who are high on ownership and like to operate with minimum management.
- An ability to "understand" data and analyze logs to help investigate production issues and incidents
- Hands-on experience with cloud platforms (GCP/AWS)
- Experience creating dashboards and alerts with tools like Metabase, Grafana, Prometheus
- Hands-on experience writing SQL queries (a minimal triage-query sketch follows this listing)
- Hands-on experience with log monitoring tools (Kibana, Stackdriver, CloudWatch)
- Knowledge of a scripting language like Elixir/Python is a plus
- Experience in Kubernetes/Docker is a plus
- Has actively worked on documenting RCAs and creating incident reports
- Good understanding of APIs, with hands-on experience using tools like Postman or Insomnia
- Knowledge of ticketing tools such as Freshdesk/GitLab

Here's what your day would look like...
- Defining monitoring events for IDfy's services and setting up the corresponding alerts
- Responding to alerts: triaging, investigating, and resolving issues
- Learning about various IDfy applications and understanding the events emitted
- Creating analytical dashboards for service performance and usage monitoring
- Responding to incidents and customer tickets in a timely manner
- Occasionally running service recovery scripts
- Helping improve the IDfy Platform by providing insights based on investigations and root cause analysis

Get in touch with ankit.pant@idfy.com
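Since the role leans on hands-on SQL for production investigations, here is a minimal sketch of the kind of triage query a support engineer might run; the table name, columns, and connection details are illustrative assumptions, not IDfy's actual schema:

```python
import psycopg2

# Hypothetical connection details for illustration only.
conn = psycopg2.connect(host="localhost", dbname="ops", user="support", password="***")

ERROR_RATE_SQL = """
SELECT date_trunc('minute', created_at) AS minute,
       count(*) FILTER (WHERE status_code >= 500) AS errors,
       count(*) AS total
FROM api_request_log
WHERE created_at > now() - interval '1 hour'
GROUP BY 1
ORDER BY 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(ERROR_RATE_SQL)
    for minute, errors, total in cur.fetchall():
        if total and errors / total > 0.05:  # flag minutes with >5% error rate
            print(f"{minute}: {errors}/{total} requests failed")
```

The same aggregation, charted in Metabase or Grafana, is what a corresponding dashboard panel would typically be built on.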
Posted 3 weeks ago
8.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Hiring: AWS DevOps Engineer | Bangalore | Exp: 8+ Yrs | AWS Certified | Manage CI/CD, automate infra with Terraform/CloudFormation, monitor systems, and ensure high availability
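As a minimal sketch of driving CloudFormation from Python with boto3, with a hypothetical stack name and a toy inline template (real templates would live in version control and be fed through a CI/CD pipeline):

```python
import boto3

cfn = boto3.client("cloudformation")

# Toy template kept inline for the sketch.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

# Hypothetical stack name for illustration.
cfn.create_stack(StackName="demo-app-stack", TemplateBody=TEMPLATE)

# Block until the stack reaches CREATE_COMPLETE (or raise on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-app-stack")
print("stack created")
```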
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Title: Technical Specialist | Department: Enterprise Engineering - Data Management Team | Location: Bangalore | Level: Grade 4

Introduction
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our data management platform team in Enterprise Engineering and feel like you're part of something bigger.

About your team
The Enterprise Data Management Team has been formed to execute FIL's data strategy and make it a data-driven organization. The team is responsible for providing standards and policies and managing central data projects, working with the data programmes of various business functions across the organization in a hub-and-spoke model. The capabilities of this team include data cataloguing and data quality tooling. The team also ensures the adoption of tooling, enforces the standards, and delivers on foundational capabilities.

About your role
The successful candidate is expected to be part of the Enterprise Engineering team and work on the data management platform. We are looking for a skilled Technical Specialist to join our dynamic team to build and deliver capabilities for the data management platform to realise the organisation's data strategy.

Key Responsibilities
- Create scalable solutions for data management, ensuring seamless integration with data sources, consistent metadata management, and reusable data quality rules and frameworks.
- Develop robust APIs to facilitate the efficient retrieval and manipulation of data from a range of internal and external data sources (a minimal FastAPI sketch follows this listing).
- Integrate with diverse systems and platforms, ensuring data flows smoothly and securely between sources and our data management ecosystem.
- Design and implement self-service workflows to empower data role holders, enhancing accessibility and usability of the data management platform.
- Collaborate with the product owner to understand requirements and translate them into technical solutions that promote data management and operational excellence.
- Work with data engineers within the team, guiding them with technical direction and establishing coding best practices.
- Mentor junior team members, fostering a culture of continuous improvement and technical excellence.
- Implement DevOps pipelines and ensure smooth, automated deployment of data management solutions.
- Monitor performance and reliability, proactively addressing issues and optimizing system performance.
- Stay up to date with emerging technologies, especially in GenAI, and incorporate advanced technologies to enhance existing frameworks and workflows.

Experience and Qualifications Required
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed University
- 7+ years of relevant industry experience
- Experience of the complete SDLC cycle
- Experience working with multi-cultural and geographically disparate teams

Essential Skills (Technical)
- Strong proficiency in Python, with a good understanding of its ecosystems.
- Experience with Python libraries and frameworks such as Pandas, Requests, Flask, and FastAPI, and with web development concepts.
- Experience with RESTful APIs and microservices architecture.
- Deep understanding of AWS cloud services such as EC2, S3, Lambda, and RDS, and experience deploying and managing applications on AWS.
- Understanding of software development principles and design patterns.
- Experience with Jenkins pipelines; hands-on experience writing testable code and unit tests.
- Stay up to date with the latest releases and features to optimize system performance.

Desirable Skills & Experience
- Experience with database systems like Oracle, AWS RDS, DynamoDB
- Ability to implement test-driven development
- Understanding of data management concepts and their implementation using Python
- Good knowledge of Unix scripting and the Windows platform
- Ability to optimize data workflows for performance and efficiency
- Ability to analyse complex problems in a structured manner and demonstrate multitasking capabilities

Personal Characteristics
- Excellent interpersonal and communication skills
- Self-starter with the ability to handle multiple tasks and priorities
- Maintains a positive attitude that promotes teamwork within the company and a favourable image of the team
- Must have an eye for detail and the ability to analyse/relate to the business problem in hand
- Ability to develop and maintain good relationships with stakeholders
- Flexible and positive attitude, openness to change
- Self-motivation is essential; should demonstrate commitment to high-quality solutions
- Ability to discuss both business and related technology/systems at various levels
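Given the role's emphasis on Python, FastAPI, and reusable data quality rules, here is a minimal sketch of what a rule-management endpoint could look like; the route names, rule model, and in-memory store are illustrative assumptions, not FIL's actual platform:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Data Management Platform API")

# In-memory store standing in for a metadata catalogue (illustrative only).
RULES: dict[str, dict] = {}

class QualityRule(BaseModel):
    name: str
    dataset: str
    expression: str  # e.g. "null_ratio(email) < 0.01"

@app.post("/rules")
def create_rule(rule: QualityRule) -> dict:
    """Register a reusable data quality rule against a dataset."""
    if rule.name in RULES:
        raise HTTPException(status_code=409, detail="rule already exists")
    RULES[rule.name] = rule.model_dump()  # pydantic v2 API
    return RULES[rule.name]

@app.get("/rules/{name}")
def get_rule(name: str) -> dict:
    if name not in RULES:
        raise HTTPException(status_code=404, detail="rule not found")
    return RULES[name]
```

Run with `uvicorn app:app`; a production version would back the store with a database and add authentication.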
Posted 3 weeks ago
3.0 - 7.0 years
10 - 19 Lacs
Bengaluru
Work from Office
Good understanding of different databases and SQL. Programming: Java and React. Understanding of SDLC and DevOps. Knowledge of Linux and AWS is an add-on.
Posted 3 weeks ago
1.0 - 6.0 years
4 - 8 Lacs
New Delhi, Hyderabad, Delhi / NCR
Work from Office
The Cloud Sales Specialist - AWS is an individual contributor role and will identify, sell, and maintain sales relationships within customer accounts and partners in his/her assigned territory. The Cloud Sales Specialist will provide feedback to the region's leadership based on interactions with clients, prospects, and other market players. This position requires interaction with other internal departments such as the Partner Ecosystem, Sales Operations, Customer Success Managers, and Partner Success Managers.

Key Responsibilities:
1. Primarily a hunter and hustler personality with 3-8 years of experience in the SME & Enterprise segment. Strong enterprise sales background in the solutions/SaaS space, ideally with knowledge of AWS platforms.
2. Sell the Microsoft Cloud product and services to new and existing clients. Identify and properly qualify Cloud opportunities. Present Cloud solutions at the executive level (C-level executives). Lead negotiations and overcome objections for deal closure. Manage complex sales cycles and multiple engagements simultaneously. Work with partner sales consultants to discover, identify, and meet customer requirements.
3. Prepare accurate BOQs, sales forecasts, and sales cycle reporting. Provide hand-holding to ensure the success of potential or current clients. Leverage and enhance partner relationships to drive additional value and revenue.
4. Forge strong working relationships with partners. Encourage and develop increased awareness of Microsoft Cloud services among partners. Collaborate with channel partners' executive, sales, and technical teams. Develop and execute successful targeted territory development plans/GTM to help achieve growth and revenue. Monitor and report sales activity within the system.
5. Generate new ARR and long-term TCVs by landing new clients. Create a territory-specific sales strategy aligned to Redington Limited's GTM plans and execute on it. Grow the business by signing new partnerships and leveraging existing ones.

Educational Qualification / Experience Desired:
1. Bachelor's degree in engineering or another relevant discipline, or a bachelor's degree in management or English.
2. Prior work experience in a sales position working with solutions that include business analytics and Data & AI will be a plus. AZ-900 and other cloud certifications are a plus.
3. Proven track record of consistently exceeding corporate objectives and quotas. Proven prospecting and sales cycle management skills.
4. Superb written and verbal communication skills. Strong teamwork and interpersonal abilities.
5. Experience and training in a value-based enterprise sales methodology (Solution Selling, Customer Centric Selling, etc.). Previous sales methodology training, e.g., MEDDIC, SPIN, Challenger Sales.
Posted 3 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Bengaluru
Work from Office
Information architecture; Django framework, Python; Docker; system architecture; IAM concepts & protocols (OAuth2, SAML); Agile/Scrum; AWS cloud services (IAM, Lambda); FastAPI; CI/CD; REST API design; monitoring & logging (CloudWatch, ELK). Contact: 9140679821
Posted 3 weeks ago
10.0 - 15.0 years
12 - 20 Lacs
Coimbatore
Work from Office
Why Whizlabs? At Whizlabs, we empower you to lead, grow, and thrive:
- Freedom to Innovate: Drive engineering initiatives with autonomy and lead a culture of continuous improvement.
- Work with Visionaries: Collaborate with our CEO-led R&D team and passionate developers who love solving meaningful problems.
- Build for the Future: Work on projects using the MERN stack, AI integrations, and cloud technologies that scale.
- Employee Benefits: Enjoy complimentary daily lunches (including non-veg options) and comprehensive medical insurance for you and your family.

What You'll Be Doing
As a Technical Manager at Whizlabs, you will:
- Lead Cross-Functional Teams: Oversee engineering teams working on diverse products and features, ensuring high performance and alignment with company goals.
- Drive Project Delivery: Manage the full software development lifecycle: planning, execution, code reviews, testing, and deployment.
- Architect Scalable Solutions: Guide design decisions and technical architecture across the stack (MERN), cloud platforms, and AI integrations.
- Mentor and Develop Talent: Nurture the growth of developers and engineers through coaching, feedback, and career guidance.
- Ensure Code Quality and Efficiency: Promote engineering best practices and introduce processes that improve code quality, deployment speed, and team productivity.
- Collaborate with Leadership: Work closely with product, design, and executive teams to define product roadmaps and align engineering efforts with strategic priorities.
- Solve Complex Problems: Tackle high-impact challenges that require a combination of technical depth, leadership, and business acumen.

What You'll Bring
We're looking for a Technical Manager who brings:
- Technical Mastery: Proficiency in Node.js, React.js, MongoDB, and Express.js (MERN) and experience with modern backend/frontend practices.
- Cloud & AI Know-How: Exposure to AWS, Azure, or GCP and a good understanding of AI/ML integration within products.
- Leadership & Team Building: Proven experience managing engineering teams and nurturing talent across diverse projects.
- Strategic Thinking: Ability to align tech initiatives with business objectives and contribute to company-wide growth and innovation.
- Execution Excellence: Strong project management skills, with a track record of delivering scalable, high-quality software.
- Communication & Collaboration: Excellent interpersonal and communication skills to work effectively with stakeholders across departments.

What We're Offering
At Whizlabs, this is more than just a job; it's a mission:
- Lead with Purpose: Contribute to life-changing learning solutions that impact millions globally.
- Grow in Leadership: Strengthen your leadership muscles in an organization that values growth, innovation, and excellence.
- Shape the Future of Learning: Join a team driven by purpose, vision, and impact.

Optional: You can share a 2-minute video introduction along with your resume. Send both to careers@whizlabs.com.
Posted 3 weeks ago
4.0 - 7.0 years
6 - 9 Lacs
Noida
Work from Office
Strong hands-on experience in Java & J2EE. Good skills working with Kubernetes/Docker. Good AWS cloud skills. A very strong command of data structures and algorithms, plus how the Java collections framework uses them. Strong in object-oriented design principles and functional programming.

Mandatory Competencies: Java - Core Java | Fundamental Technical Skills - Spring Framework/Hibernate/JUnit etc. | Beh - Communication | Cloud - AWS | DevOps - Kubernetes
Posted 3 weeks ago
3.0 - 6.0 years
4 - 6 Lacs
Mumbai
Work from Office
What you will do for Sectona

Key Responsibilities:
- Cloud Infrastructure Management: Design, implement, and maintain cloud infrastructure on AWS. Manage compute resources, storage, and networking components in AWS. Provision, configure, and monitor EC2 instances, S3 storage, and VPCs.
- Operating System Management: Configure and manage Windows and Unix-based VMs (Linux/Ubuntu). Perform patch management, security configuration, and system upgrades. Ensure high availability and performance of cloud-hosted environments.
- Active Directory Integration: Implement and manage Active Directory (AD) services, including AWS Directory Service, within the cloud environment. Integrate on-prem AD with AWS using AWS Managed AD or AD Connector.
- Networking: Design and manage secure network architectures, including VPCs, subnets, VPNs, and routing configurations. Implement network security best practices (firewalls, security groups, NACLs). Troubleshoot and resolve network connectivity issues, ensuring optimal network performance.
- Storage Solutions: Implement scalable storage solutions using AWS S3, EBS, and Glacier. Manage backup and recovery strategies for cloud-hosted environments.
- Database Management: Manage relational (RDS, Aurora) and NoSQL (DynamoDB) databases in the cloud. Ensure database performance, security, and high availability.
- Load Balancer & Auto Scaling: Configure and manage AWS Elastic Load Balancers (ELB) to distribute traffic across instances. Implement Auto Scaling policies to ensure elasticity and high availability of applications.
- Performance Tuning: Monitor system performance and apply necessary optimizations. Identify and resolve performance bottlenecks across compute, network, storage, and database layers (a minimal CloudWatch alarm sketch follows this listing).
- Security & Compliance: Implement security best practices in line with AWS security standards (IAM, encryption, security groups, etc.). Regularly audit cloud environments for compliance with internal and external security regulations.

Skills and experience you require:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- 4+ years of hands-on experience with AWS cloud platforms, including EC2, S3, VPC, RDS, Lambda, and IAM.
- Proficient in managing both Windows and Unix/Linux servers in a cloud environment.
- Strong experience with Active Directory integration in a cloud infrastructure.
- Solid understanding of cloud networking, VPC design, and security groups.
- Knowledge of cloud storage solutions like EBS, S3, and Glacier.
- Experience with cloud-based databases: RDS (MySQL and MS SQL Server).
- Familiarity with load balancing technologies (Elastic Load Balancer) and Auto Scaling in AWS.
- Experience with cloud monitoring tools such as AWS CloudWatch, CloudTrail, or third-party tools.
- Familiarity with cloud services in Azure (e.g., VMs, Azure AD, Azure Storage) and GCP.
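For the monitoring and performance tuning duties above, here is a minimal sketch of creating a CloudWatch CPU alarm with boto3; the instance id and SNS topic ARN are placeholders, not real resources:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when average CPU on one instance exceeds 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-demo",          # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic; a real one would page the on-call engineer.
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```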
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Ahmedabad
Work from Office
Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Enterprise Data Organization is seeking a Software Developer to handle software design, development, and maintenance for data processing applications. This person will be part of a development team that manages and supports the internal and external applications supporting the business portfolio, and is expected to handle data processing and big data application development. We have teams made up of people who learn how to work effectively together while working with the larger group of developers on our platform.

What's in it for you:
- Opportunity to contribute to the development of a world-class Platform Engineering team.
- Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
- Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
- Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
- Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
- Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
- Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation.
- Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions.
- Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL).
- Build data models, achieve performance tuning, and apply data architecture concepts.
- Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies.
- Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality.
- Provide operations support to resolve issues proactively and with utmost urgency.
- Effectively manage time and multiple tasks.
- Communicate effectively, especially in writing, with the business and other technical groups.

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent.
- Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development.
- Proficiency with object-oriented programming.
- Nice to have: knowledge of Grafana, Kibana, big data, Kafka, GitHub, EMR, Terraform, AI/ML.
- Advanced SQL programming skills.
- Highly recommended skill set in Databricks, Spark, and Scala technologies.
- Understanding of database performance tuning on large datasets.
- Ability to manage multiple priorities efficiently and effectively within specific timeframes.
- Excellent logical, analytical, and communication skills, with strong verbal and writing proficiencies.
- Knowledge of fundamentals or the financial industry highly preferred.
- Experience in conducting application design and code reviews.
- Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
- Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.
Posted 3 weeks ago
8.0 - 10.0 years
12 - 18 Lacs
Noida
Work from Office
Primary Role Function:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Experience with AWS cloud services: EC2, Glue, RDS, Redshift.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this listing).
- Experience with object-oriented/object function scripting languages: Python.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Write high-quality, well-documented code according to accepted standards based on user requirements.

Knowledge:
- Thorough, in-depth knowledge of design and analysis methodology and application development processes.
- Solid knowledge of databases.
- Programming experience with extensive business knowledge.
- University degree in Computer Science, Engineering, or equivalent industry experience.
- Solid understanding of SDLC and QA requirements.

Mandatory Competencies: Data on Cloud - AWS S3 | Cloud - AWS | Python - Airflow | Python - Python | DevOps - Docker
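For the workflow management tools listed above, here is a minimal Airflow DAG sketch with two chained tasks; the DAG id and task bodies are illustrative placeholders, not this team's actual pipeline:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    """Placeholder extract step; a real task would pull from a source system."""
    print("extracting batch for", context["ds"])

def load_to_redshift(**context):
    """Placeholder load step; a real task would COPY from S3 into Redshift."""
    print("loading batch for", context["ds"])

with DAG(
    dag_id="daily_events_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load  # load runs only after extract succeeds
```

The `retries`/`retry_delay` defaults are the usual first line of defense against transient source-system failures.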
Posted 3 weeks ago
2.0 - 3.0 years
4 - 6 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Title: Spring Boot & AWS Developer
Location: Hyderabad, Bangalore, Pune
Experience: 2 to 4 years
Job Type: Full-Time
Notice Period: 30 to 60 days
Functional Area: Software Development
Role Category: Programming & Design
Employment Type: Permanent

Job Description
We are looking for a skilled and motivated Spring Boot & AWS Developer to join our team. The ideal candidate will have solid experience in developing backend applications using Spring Boot and deploying and managing them in the AWS cloud environment.

Preferred candidate profile
- Design, develop, and maintain scalable microservices using Spring Boot.
- Develop RESTful APIs and ensure high performance and responsiveness.
- Work with AWS services like EC2, Lambda, S3, RDS, API Gateway, etc.
- Deploy, monitor, and troubleshoot applications in the AWS Cloud.
- Collaborate with DevOps teams to automate CI/CD pipelines.
- Write clean, maintainable code with proper unit testing.
- Participate in code reviews and Agile/Scrum development processes.
- Optimize applications for performance and scalability.
Posted 3 weeks ago
3.0 - 8.0 years
20 - 35 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Preferred coding skills: Java, JavaScript, and TypeScript. Other desired technical skills include Python. AWS infrastructure experience is required: EC2, CloudWatch, Lambda, and the networking side of AWS. A bachelor's degree in Computer Science or an equivalent degree is required. Minimum 3 years of SDE experience is required.
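Since the posting lists Lambda alongside Python as a desired skill, here is a minimal hypothetical Lambda handler sketch; the event shape assumes an API Gateway proxy integration:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration.
    Reads an optional ?name= query parameter and returns a JSON greeting."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```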
Posted 3 weeks ago
7.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Hybrid
Are you the kind who thrives under pressure, enjoys digging deep into database issues, and believes in getting it done right, not just fast? We're looking for a Senior Production Support Engineer with solid PL/SQL and Linux expertise to handle critical incidents, minimize downtime, and ensure smooth performance for our enterprise-grade systems at CNDT. This isn't just another support role. You'll be the guardian of uptime, the go-to problem-solver, and a strategic voice in improving system resilience.

Responsibilities:
- Take ownership of production incidents, identify root causes, and restore services fast
- Perform in-depth PL/SQL troubleshooting and resolve database-related bugs, performance issues, and inconsistencies
- Respond promptly to system alerts, monitor logs, and handle escalations (a minimal log-triage sketch follows this listing)
- Provide 24/7 on-call support on a rotating schedule
- Study system documentation and application flows to provide proactive support
- Collaborate with internal tech teams and external vendors to resolve issues quickly and effectively
- Actively contribute to reducing MTTR (Mean Time To Restoration) and improving service reliability
- Guide junior engineers and support resources; be a technical SME for designated apps
- Participate in root cause analysis, documenting findings and lessons learned
- Suggest and implement process improvements, automation, and optimizations
- Maintain database and server health, track performance, and ensure space management
- Participate in project planning meetings and advocate production-readiness
- Perform regular maintenance tasks and uphold data integrity

Requirements:
- Bachelor's degree in Computer Science, IT, or a related field
- 7-10 years of experience in PL/SQL development/support
- Strong knowledge of Oracle databases, stored procedures, packages, and query tuning
- Experience in production support, incident handling, and performance optimization
- Minimum 5+ years working in Linux environments
- Good command of shell scripting, with a bonus if you know Python, Perl, or Ruby
- Ability to perform log analysis and lead service restoration efforts
- Effective communicator who can handle pressure and lead during outages

Nice-to-Haves:
- Knowledge of Oracle 19c, data warehousing, or ETL flows
- Familiarity with cloud platforms (AWS/OCI)
- Exposure to ITIL processes or tools like ServiceNow, Jira
- Understanding of CI/CD and DevOps principles
- Experience working with monitoring tools (e.g., Grafana, Nagios, Prometheus)

Why Join Us?
- Flexible hybrid work setup
- Competitive CTC as per current market trends
- Work with mission-critical systems that truly impact businesses
- Learn, grow, and be part of a collaborative engineering culture
- Leadership that backs innovation and ownership
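For the log analysis and scripting duties above, here is a minimal Python sketch that tallies Oracle error codes in an alert log to speed up triage; the log path and format are illustrative assumptions:

```python
import re
from collections import Counter

# Oracle errors follow the ORA-NNNNN pattern in alert logs.
ORA_ERROR = re.compile(r"\b(ORA-\d{5})\b")

def summarize_errors(path: str, top: int = 10) -> None:
    """Count distinct Oracle error codes in a log file and print the most
    frequent ones, giving the on-call engineer a fast starting point."""
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            counts.update(ORA_ERROR.findall(line))
    for code, n in counts.most_common(top):
        print(f"{code}: {n} occurrences")

# Hypothetical log path for illustration.
summarize_errors("/u01/app/oracle/diag/alert_PROD.log")
```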
Posted 4 weeks ago
10.0 - 20.0 years
12 - 22 Lacs
Noida
Work from Office
Server Infrastructure Lead to oversee Windows & Linux server environments across an on-premise data center and the AWS and OCI clouds. Manage all matters related to server infrastructure, ensuring operational excellence and security compliance. Required candidate profile: experience with Windows & Linux server infrastructure in an enterprise environment; on-premise and cloud server operations; server hardening, patch management, security, monitoring, backup, and disaster recovery.
Posted 4 weeks ago
5.0 - 8.0 years
10 - 15 Lacs
Pune
Work from Office
*****AWS Cloud Admin***** *****Pune (Maharashtra)***** *****Work From Office***** *****Early Joiner Preferred***** *****AWS Certification is Mandatory*****

We are hiring for a leading fintech company that specializes in consumer lending solutions in India.

Requirements:
- Minimum 4 years' experience with the Amazon Web Services (AWS) cloud platform; knowledge of Google Cloud Platform (GCP) would be nice to have.
- Experience with Linux/Unix operating systems, and web and application servers like Apache, Tomcat, and Nginx.
- Knowledge of scripting languages such as Python and Shell/Bash.
- Knowledge of SQL, Postgres, and Mongo databases is a must.
- Monitoring and alerting based on Grafana and Prometheus.
- Should be able to write automation scripts using the Python SDK for AWS services.
- Previous working experience in a startup environment is a plus.
- Excellent communication skills and coordination with the team to expedite tasks on time.
- Should have worked on at least one cost optimization project on AWS, and should come up with ideas and optimization plans and duly take responsibility for their execution (a minimal sketch follows this listing).
- Experience with the orchestration of containers using Kubernetes is a plus, using services like EKS and ECR.
- Should be an avid learner and a self-starter, able to take up new implementations and ideas as a challenge, and open to learning something new all the time.

AWS cloud engineer responsibilities:
- Designing and implementing secure network solutions that meet business requirements.
- Creating and configuring virtualized systems in the AWS environment.
- Performing infrastructure upgrades and updates to maximize system efficiency while minimizing downtime.
- Deploying applications in AWS using EC2, AWS Lambda, and AWS Fargate with ECS and ECR.
- Creating blueprints using CloudFormation templates for common workloads.
- Maintaining, testing, and implementing disaster recovery procedures.
- Implementing automation using scripting languages (e.g., Python) to manage AWS services.
- Building tools for deployment, monitoring, and troubleshooting of system resources in an AWS environment.
- Developing software components in Python or any other high-level language that interact with AWS cloud services by leveraging the AWS APIs.

Graduation in B.Tech, MCA, or B.Sc. in Computer Science, with certifications in AWS.
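For the cost optimization requirement above, here is a minimal boto3 sketch that lists unattached EBS volumes, a common first pass when hunting idle spend; the region and credentials are assumed to come from the environment:

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any instance but
# still accrue charges, so they are prime candidates for snapshot-and-delete.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        print(vol["VolumeId"], vol["Size"], "GiB", vol["AvailabilityZone"])
```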
Posted 4 weeks ago
6.0 - 10.0 years
11 - 16 Lacs
Noida
Work from Office
AWS (certification preferred), Ansible, Terraform, Shell scripting. Location: NAB Office, WFO 3-4 days a week. Mandatory Competencies: DevOps - Ansible | DevOps - Cloud AWS | DevOps - Shell Scripting | Beh - Communication | DevOps - Terraform
Posted 4 weeks ago
7.0 - 12.0 years
8 - 10 Lacs
Jaipur
Work from Office
Role Clarity | Senior Manager - Information Technology
Department: Business Transformation
Reporting To: CTO

Role Definition: The Sr. Manager IT Infrastructure & Operations is responsible for ensuring 24x7 secure, stable, and connected infrastructure across all DBCL centers: Central Lab, Regional Labs, Collection Centers (CCs), HLMs, POC Labs, and the Corporate Office. The role leads uptime delivery, network continuity, cybersecurity enforcement, and infrastructure compliance per the standard IT security framework.

Deliverables:
- Uninterrupted IT infrastructure across all DBCL locations
- Cloud (AWS and GCP), email security, firewall (Sophos, Fortinet), EDR, XDR, and DLP security implementation
- Secure network segmentation and architecture
- External audit compliance
- Centralized uptime, patch, and access management
- License audits and SOP governance for infrastructure
- Role-based access control (RBAC) on all systems (a minimal IAM audit sketch follows this listing)
- IAM/DC implementation
- Smooth IT infrastructure and security operations
- Implementation of an ITSM framework

Tasks and Activities:

Network & Infrastructure Uptime Management
- Monitor internet uptime status across all laboratories and collection centers.
- Coordinate with internet service providers (ISPs) to minimize downtime; maintain alternate connectivity plans.
- Ensure routers, switches, access points, and LAN cabling are functional and standardized across all centers.

Endpoint Device Governance
- Implement and regularly update antivirus and EDR/XDR tools on all desktops and laptops used for business processes.
- Tag every endpoint (PC, laptop) to a center/employee with asset code and configuration details.
- Conduct monthly patch rollouts for Windows/Linux systems across Labs and CCs.
- Restrict admin rights and unauthorized software installations via centralized policy.
- Maintain endpoint assets with maintenance history and audit trails.

Email & Communication Security Management
- Administer all email IDs under the authorized domain; ensure 2FA, user-specific access controls, and mailbox storage limits are enforced.
- Regularly run spam filter and phishing attempt audits for all users.
- Define and update official mailing groups.
- Configure auto-forwarding restrictions, suspicious login alerts, and mailbox activity monitoring.
- Ensure business continuity of email during server upgrades or failures.

Cloud Infrastructure & Data Security
- Manage the cloud environment (AWS).
- Configure firewall rules, access policies, encryption protocols, and IAM roles.
- Review security group alerts weekly and resolve flagged issues.
- Back up key data assets to secure cloud storage; test restore capability quarterly.
- Document change logs for server configuration changes, user additions, and permissions granted.
- CloudWatch/S3/SES/EC2/RDS cost controls.

Firewall, VPN, and Access Management
- Conduct firewall configuration reviews for all DBCL locations.
- Ensure only authorized ports are open, IP restrictions are applied, and logs are stored.
- Maintain VPN credentials and usage logs for regional users or partners who access LIMS from remote sites.
- Implement network segmentation between admin systems, lab analyzers, and customer kiosks.

Patch, Antivirus & License Compliance
- Maintain a patch management calendar with classification: critical, moderate, optional.
- Automate antivirus definition updates and audit compliance for every endpoint weekly.
- Maintain a license inventory: endpoint security, AWS, firewall tools, monitoring software.
- Flag expiring licenses 30 days in advance and initiate the renewal approval process.

IT SOP Enforcement & Audit Readiness
- Create SOPs for each core infra function: device setup, access granting, patch updates, incident handling.
- Conduct monthly internal audits for compliance: access logs, system backup logs, antivirus scan status, endpoint updates.
- Prepare documentation and logs in required formats for NABL, ISO, and external audit inspections.
- Coordinate with internal Quality & Audit teams during surveillance audits.

Incident & Escalation Management
- Maintain an incident register with severity level, assigned engineer, response time, and RCA.
- Act as the escalation point for field engineers when repeated failures or access issues are reported.
- Follow the defined escalation SOP: inform the CTO and Admin head of physical infra risks.

New Center IT Setup & Closure
- Plan the IT readiness checklist for each new CC/HLM launch: internet, router, PC with LIMS, printer, barcode scanner.
- Coordinate LAN setup, device configuration, VPN setup, and system testing before go-live.
- During closure, ensure secure data wipe, device collection, access revocation, and handover to Assets/Store.

Coordination with Procurement & Admin
- Share infrastructure requirement specs (router models, firewall specs, endpoint configs) with Purchase for procurement.
- Collaborate with Admin on physical security of data rooms, installation of surveillance equipment, and maintenance contracts (UPS, wiring, routers).
- Validate incoming infra assets (QC check) before tagging and deployment.

Success Metrics:
- 100% infra uptime (all sites)
- 100% adherence to internal IT SOPs and process non-negotiables
- CSAT score of 4.8 from key internal users
- Zero critical bugs or vulnerabilities during external audits
- 100% completion of documentation updates and policy reviews per the governance calendar
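For the RBAC and access management deliverables above, here is a minimal boto3 sketch that flags IAM users without an MFA device, a typical periodic audit check; it assumes AWS credentials with IAM read permissions are available in the environment:

```python
import boto3

iam = boto3.client("iam")

# Flag IAM users with no MFA device registered; a common audit finding
# that would feed the monthly internal compliance review.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"{name}: no MFA device registered")
```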
Posted 4 weeks ago
5.0 - 7.0 years
4 - 8 Lacs
Pune
Work from Office
We are looking for a skilled PostgreSQL Expert with 5 to 7 years of experience in the field. The ideal candidate should have GCP Cloud SQL knowledge and experience with DB DDL, DML, and production support. This position is located in Pune.

Roles and Responsibility
- Design, develop, and implement database architectures using PostgreSQL.
- Develop and maintain databases on GCP Cloud SQL.
- Ensure high availability and performance of database systems (a minimal query-plan sketch follows this listing).
- Troubleshoot and resolve database-related issues.
- Collaborate with cross-functional teams to identify and prioritize database requirements.
- Implement data security and access controls.

Job Requirements
- Strong knowledge of PostgreSQL and GCP Cloud SQL.
- Experience with DB DDL, DML, and production support.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
- Familiarity with database design principles and best practices.
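For the production support and performance duties above, here is a minimal psycopg2 sketch that pulls a query plan to check whether an index is actually being used; the connection details and table are illustrative assumptions:

```python
import psycopg2

# Hypothetical Cloud SQL connection details for illustration only.
conn = psycopg2.connect(host="10.0.0.5", dbname="appdb", user="dba", password="***")

with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE reveals whether the planner chose an index scan
    # or fell back to a sequential scan on this predicate.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,)
    )
    for (line,) in cur.fetchall():
        print(line)
```

A "Seq Scan" line on a large table is the usual cue to add or fix an index on the filtered column.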
Posted 4 weeks ago
4.0 - 7.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled AWS Data Engineer with 4 to 7 years of experience in data engineering, preferably in the employment firm or recruitment services industry. The ideal candidate should have a strong background in computer science, information systems, or computer engineering.

Roles and Responsibility
- Design and develop solutions based on technical specifications.
- Translate functional and technical requirements into detailed designs.
- Work with partners on regular updates, requirement understanding, and design discussions.
- Lead a team, providing technical/functional support, conducting code reviews, and optimizing code/workflows.
- Collaborate with cross-functional teams to achieve project goals.
- Develop and maintain large-scale data pipelines using the AWS Cloud platform services stack (a minimal PySpark sketch follows this listing).

Job Requirements
- Strong knowledge of Python/PySpark.
- Experience with AWS Cloud platform services such as S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc.
- Basic SQL knowledge and exposure to data warehousing concepts (data warehouse, data lake, dimensions, etc.).
- Excellent communication skills and the ability to work in a fast-paced environment.
- Ability to lead a team and provide technical/functional support.
- Strong problem-solving skills and attention to detail.
- A B.E./Master's degree in Computer Science, Information Systems, or Computer Engineering is required.

The company offers a dynamic and supportive work environment, with opportunities for professional growth and development. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
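For the pipeline work above, here is a minimal PySpark sketch of an S3-to-S3 batch transform; the bucket names, schema, and filter logic are illustrative assumptions, not this employer's actual data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Hypothetical bucket/prefixes; the s3:// scheme assumes an EMR-style
# Hadoop configuration with S3 credentials already wired in.
raw = spark.read.json("s3://example-raw/events/2024-01-01/")

daily = (
    raw.filter(F.col("event_type") == "purchase")
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("daily_spend"))
)

# Columnar parquet output keeps downstream Athena/Redshift scans cheap.
daily.write.mode("overwrite").parquet("s3://example-curated/daily_spend/2024-01-01/")
```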
Posted 4 weeks ago
3.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with 3 to 6 years of experience building data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. The ideal candidate should have hands-on experience with Databricks, Spark, SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.

Roles and Responsibility
- Design and develop large-scale data pipelines using Databricks, Spark, and SQL (a minimal Databricks sketch follows this listing).
- Optimize data operations using Databricks and Python.
- Develop solutions to meet business needs, reflecting a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
- Evaluate alternative risks and solutions before taking action.
- Utilize all available resources efficiently.
- Collaborate with cross-functional teams to achieve business goals.

Job Requirements
- Experience working on projects involving data engineering and processing.
- Proficiency in large-scale data operations using Databricks and overall comfort with Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience with S3 data lakes as the storage tier.
- An ETL background with Talend or AWS Glue is a plus; cloud warehouse experience with Snowflake is a huge plus.
- Strong analytical and problem-solving skills.
- Relevant experience with ETL methods and retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform.
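For the Databricks pipeline work above, here is a minimal sketch mixing the DataFrame API with Spark SQL, a routine pattern in Databricks notebooks; the bronze/silver table names are illustrative assumptions:

```python
from pyspark.sql import SparkSession

# In Databricks notebooks `spark` is predefined; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical medallion-style table names for illustration.
orders = spark.read.table("bronze.orders")
orders.createOrReplaceTempView("orders")

top_products = spark.sql("""
    SELECT product_id, count(*) AS order_count
    FROM orders
    WHERE order_date >= date_sub(current_date(), 7)
    GROUP BY product_id
    ORDER BY order_count DESC
    LIMIT 10
""")

# Persist the curated result as a managed table for downstream consumers.
top_products.write.mode("overwrite").saveAsTable("silver.top_products_weekly")
```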
Posted 4 weeks ago
6.0 - 11.0 years
12 - 16 Lacs
Noida
Work from Office
We are looking for a skilled professional with 6 to 15 years of experience in the setup and management of MDM platforms on AWS. The ideal candidate will have expertise in setting up new MDM platforms, managing existing ones on the AWS cloud, and configuring the Reltio SaaS platform.

Roles and Responsibility
- Set up and manage MDM platforms on the AWS cloud.
- Configure the Reltio SaaS platform and ensure seamless integration with other systems.
- Collaborate with operations teams to ensure platform stability, reliability, and scalability.
- Work closely with stakeholders to understand MDM platform issues, requirements, and the roadmap.
- Define platform architecture and design, and drive implementation.
- Ensure all required integrations (upstream and downstream) from the MDM platform integration point of view are enabled.

Job Requirements
- Minimum 6 years of experience in the setup and management of MDM platforms and AWS.
- Strong knowledge of MDM integrations and experience with Unix shell scripting, Elasticsearch, Kibana, Logstash, and SSO/AD/SSL (a minimal Elasticsearch query sketch follows this listing).
- Experience with Agile/Scrum processes and ceremonies; ability to act as an EBX product lead.
- Familiarity with Oracle and Postgres databases.
- A good understanding of DevOps, Jenkins, and CI/CD concepts is a plus.
- Ability to work collaboratively with various stakeholders to understand MDM platform issues and requirements.
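For the Elasticsearch/Kibana/Logstash stack above, here is a minimal sketch of querying recent error documents with the official Python client; the endpoint, index name, and document fields are illustrative assumptions:

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and index name for illustration.
es = Elasticsearch("http://localhost:9200")

# Fetch up to 20 ERROR-level documents from the last hour; the same query
# is what a Kibana saved search for platform triage would express.
resp = es.search(
    index="mdm-audit-logs",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    size=20,
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))
```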
Posted 4 weeks ago
10.0 - 12.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Senior Java Developer with strong expertise in Java and the Spring Boot framework. The ideal candidate should have extensive experience with AWS cloud services and deploying applications in a cloud environment. This position is located in Hyderabad and requires 10 to 12 years of experience.

Roles and Responsibility
- Design, develop, and deploy high-quality software applications using Java and Spring Boot.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale Java-based systems with scalability and performance.
- Troubleshoot and resolve complex technical issues efficiently.
- Participate in code reviews and contribute to improving overall code quality.
- Stay updated with the latest trends and technologies in Java and related fields.

Job Requirements
- Strong hands-on experience with Apache Kafka (producers/consumers, topics, partitions); a minimal consumer sketch follows this listing.
- Deep knowledge of PostgreSQL, including schema design, indexing, and query optimization.
- Experience with JUnit test cases and developing unit/integration test suites.
- Familiarity with code coverage tools such as JaCoCo or SonarQube.
- Excellent verbal and written communication skills to explain complex technical concepts clearly.
- Demonstrated leadership skills with experience managing, mentoring, and motivating technical teams.
- Proven experience in stakeholder management, including gathering requirements, setting expectations, and delivering technical solutions aligned with business goals.
- Familiarity with microservices architecture and RESTful API design.
- Experience with containerization (Docker) and orchestration platforms like Kubernetes (EKS).
- Strong understanding of CI/CD pipelines and DevOps practices.
- Solid problem-solving skills with the ability to handle complex technical challenges.
- Familiarity with monitoring tools like Prometheus and Grafana, and log management.
- Experience with version control systems (Git) and Agile/Scrum methodologies.
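The posting's stack is Java, but as a language-neutral illustration of the Kafka consumer-group concepts it asks about (topics, partitions, offsets), here is a minimal sketch using the kafka-python package; the topic, brokers, and group id are placeholders:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic, brokers, and consumer group for illustration.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["localhost:9092"],
    group_id="order-processors",       # consumers in one group share partitions
    auto_offset_reset="earliest",      # start from the oldest retained record
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # topic/partition/offset uniquely identify each record's position.
    print(message.topic, message.partition, message.offset, message.value)
```

The equivalent Java consumer uses the same group/partition/offset semantics via org.apache.kafka.clients.consumer.KafkaConsumer.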
Posted 4 weeks ago