0 years
0 Lacs
Hyderābād
On-site
Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant - Oracle Cloud Infrastructure Administrator. This position is for a technical, hands-on consultant to assist with planning, designing, executing, and administering Oracle Cloud Infrastructure.

Responsibilities
Perform administration of OCI virtualization systems using native Oracle Cloud Infrastructure services, including analysis, tuning, and troubleshooting (see the scripting sketch at the end of this posting).
Apply hands-on experience with Oracle Cloud technologies such as IAM, VCN, peering, routing, FastConnect, load balancer setup, Compute, autoscaling, Block Volume, backup and restore, File and Object Storage, Oracle Databases on OCI, and Oracle Autonomous Database.
Provision and decommission systems as part of administration activities.
Work with application, server, and database engineers on troubleshooting, tuning, and scaling activities.
Assist with migration of environments across datacenters as necessary, and assist with cloud-based initiatives.
Take responsibility for systems maintenance, system upgrades, infrastructure design and layout, DR design and implementation, and physical-to-virtual migrations.
Work closely with Product Development teams and provide feedback to improve product quality.
Develop and maintain Standard Operating Procedures and documentation.
Provide escalation support for database-related issues.
Coordinate and manage OCI compute instances.
Experience in deploying and migrating software computing infrastructure (storage, networking, compute, applications, middleware, security) and migrating on-premises workloads to Oracle Cloud Infrastructure.
Experience in migrating virtual machines from on-premises NTT infrastructure to Oracle Cloud IaaS.
Should have experience in setting up cloud network components: firewall, certificates, VLBR, VCN, IP addresses, and security rules.
Management of OCI IAM: cloud user, policy, role, compartment, and access management.
Cloud security management; manage service instances, maintain capacity and schedules, notifications, and alerts.
Patch and upgrade cloud instances.
Knowledge of Oracle Cloud IaaS and PaaS products and solutions.
Coordination with client infrastructure and networking teams.
Identity (IDCS) configuration for SSO and federation, password policy management, and MFA management.

Nice to have
Expertise in managing ERP databases in a cloud environment, preferably RDS.
Proficiency in Oracle Apps, RAC, ASM, Data Guard, Oracle Clusterware, RMAN, and OEM.
Proficiency in Unix shell scripting.
Sound communication skills.
Certification in Oracle Cloud Infrastructure is helpful.
Experience in OCI administration along with AWS and Azure.

Qualifications
Bachelor's degree in Computer Science, Information Systems, Engineering, or related fields, or equivalent professional experience.

Preferred qualifications
Very good written, presentation, and verbal communication skills, with experience in a customer-facing role. In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.

Why join Genpact?
Lead AI-first transformation – Build and scale AI solutions that redefine industries
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 21, 2025, 8:29:37 AM
Unposting Date: Ongoing
Master Skills List: Consulting
Job Category: Full Time
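The responsibilities above lean heavily on day-to-day OCI administration. As a rough illustration of the kind of scripting such a role involves, the sketch below lists compute instances with the OCI Python SDK. The compartment OCID is a hypothetical placeholder, and a real environment would add pagination and error handling.

```python
# Minimal sketch: inventory OCI compute instances in a compartment using the
# official Python SDK. Assumes ~/.oci/config is set up; COMPARTMENT_ID is a
# placeholder to replace with a real OCID.
import oci

COMPARTMENT_ID = "ocid1.compartment.oc1..example"  # hypothetical OCID

config = oci.config.from_file()  # reads ~/.oci/config
compute = oci.core.ComputeClient(config)

# List instances and report their lifecycle state (useful for routine
# administration checks such as finding stopped or orphaned instances).
for instance in compute.list_instances(compartment_id=COMPARTMENT_ID).data:
    print(f"{instance.display_name}: {instance.lifecycle_state}")
```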
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled LLM Engineer with 3 to 6 years of experience in software development and 1-2 years in LLM solution development. The ideal candidate will have strong experience working with Python, LLM solution patterns and tools (RAG, vector DBs, agentic workflows, LoRA, etc.), cloud platforms (AWS, Azure, GCP), and DevOps tools. They will be responsible for designing and developing scalable software solutions, leading architecture design, and ensuring the performance and reliability of our systems.

Responsibilities:
• Take ownership of architecture design and development of scalable and distributed software systems.
• Translate business requirements into technical requirements.
• Own technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation.
• Apply data-driven decision-making with a focus on achieving product goals.
• Design, develop, and deploy LLM-based pipelines involving patterns like RAG, agentic workflows, and PEFT (e.g., LoRA, QLoRA) — a minimal sketch follows this posting.
• Be responsible for the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment.
• Utilize AWS/Azure services like IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, ACR, etc.
• Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery. Develop DevOps scripts for automation and monitoring.
• Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices.

Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience). At least 5 years of experience in software development, with relevant work experience in LLM app development.
• Strong coding skills with proficiency in Python and JavaScript.
• Experience with API frameworks, both stateless and stateful, such as FastAPI and Django.
• Well versed in implementation of WebSockets, gRPC, and access management using JWT (Azure AD, IDM preferred).
• Proficient in cloud platforms, specifically AWS, Azure, or GCP.
• Knowledge and hands-on experience with front-end development (React JS, Next JS, Tailwind CSS) preferred.
• Strong experience with LLM patterns like RAG, vector DBs, hybrid search, agent development, agentic workflows, prompt engineering, etc.
• Strong experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy).
• Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services (Redshift, RDS, S3).
• Experience in production deployments involving thousands of users.
• Strong understanding of scalable application design principles and experience with security best practices and compliance with privacy regulations.
• Good knowledge of software engineering practices like version control (Git), DevOps (Azure DevOps preferred), and Agile or Scrum.
• Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
• Experience with the SDLC and development best practices.
• Experience with Agile methodology for continuous product development and delivery.
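For readers unfamiliar with the RAG pattern this posting centres on, here is a minimal, self-contained sketch in plain Python. The toy bag-of-words "embedding", the sample documents, and the prompt format are illustrative stand-ins; a production pipeline would use a trained embedding model, a vector database, and an LLM SDK.

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query, then
# build an augmented prompt for an LLM. Everything here is a stand-in for the
# real components (embedding model, vector DB, LLM API).
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts (stand-in for a dense vector).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = [
    "LoRA fine-tunes a model by training low-rank adapter matrices.",
    "RAG retrieves relevant context and injects it into the prompt.",
    "Autoscaling adds or removes compute based on load.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG work?"))  # this prompt would go to an LLM API
```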
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Core Competencies
Excellent knowledge of EKS, Kubernetes, and related AWS components
Kubernetes networking
Kubernetes DevOps, including deployment of Kubernetes (EKS) clusters using IaC (Terraform) and CI/CD pipelines
EKS secret management, autoscaling, and lifecycle management
EKS security using AWS native services
Excellent understanding of AWS cloud services like VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs, Security Groups, etc.
AWS component deployment using Terraform
Application onboarding on Kubernetes using ArgoCD
AWS CodePipeline, CodeBuild, CodeCommit
HashiCorp stack, HashiCorp Packer
Bitbucket and Git; profound cloud technology, network, security, and platform expertise (AWS, Google Cloud, or Azure)
Good documentation and communication skills
Good understanding of ELK, CloudWatch, Datadog

Roles & Responsibilities
Manage project-driven integration and day-to-day administration of cloud solutions
Develop prototypes, design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome
Conduct peer reviews and maintain coding standards
Drive automation using CI/CD with Jenkins or ArgoCD
Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads
Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts
Use Ansible and Python to perform routine tasks like user management and security hardening
Provide professional technical consultancy to migrate and transform existing on-premises applications to the public cloud, and support all cloud-related programmes and existing environments
Design and deploy Direct Connect networking between AWS and the datacentre
Train and develop AWS expertise within the organisation
Proven troubleshooting skills to resolve issues related to cloud network, storage, and performance management

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 5 days ago
0.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Role Overview
Join our dynamic team as a Backend + DevOps Engineer. You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.

What You'll Do
Build scalable async processing pipelines for document classification, extraction, and validation (a minimal sketch follows this posting)
Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows
Design and implement APIs for document upload, processing status, and results retrieval
Manage Kubernetes deployments with autoscaling based on document processing load
Implement monitoring and observability for complex multistage document workflows
Optimize database performance for high-volume document metadata and processing results
Build CI/CD pipelines for safe deployment of processing algorithms and business rules

Technical Requirements
Must Have:
5+ years backend development (Python or Go)
Strong experience with async processing (Celery, Temporal, or similar)
Docker containerization and orchestration
Cloud platforms (AWS/GCP/Azure) with cost optimization experience
API design and development (REST/GraphQL)
Database optimization (MongoDB, PostgreSQL)
Production monitoring and debugging

Nice to Have:
Kubernetes experience
Experience with document processing or ML pipelines
Infrastructure as Code (Terraform/CloudFormation)
Message queues (SQS, RabbitMQ, Kafka)
Performance optimization for high-throughput systems

Job Type: Full-time
Experience: Python: 5 years (Required); DevOps: 5 years (Preferred)
Location: Bangalore, Karnataka (Required)
Work Location: In person
Speak with the employer: +91 9258692828
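As an illustration of the async pipeline pattern this posting describes, here is a minimal Celery sketch that chains classification, extraction, and validation stages. The broker URL and the task bodies are placeholders; a real pipeline would add retries, timeouts, and result persistence.

```python
# Minimal Celery sketch of a multistage document pipeline: classify ->
# extract -> validate, wired together with a chain. Broker URL and task
# bodies are illustrative placeholders.
from celery import Celery, chain

app = Celery("docs", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def classify(doc_id: str) -> dict:
    # Placeholder: run a classifier and return the document type.
    return {"doc_id": doc_id, "type": "invoice"}

@app.task
def extract(meta: dict) -> dict:
    # Placeholder: pull fields based on the detected document type.
    meta["fields"] = {"total": "1,000.00"}
    return meta

@app.task
def validate(meta: dict) -> dict:
    # Placeholder: apply business rules before results are stored.
    meta["valid"] = "total" in meta["fields"]
    return meta

# Each stage's return value feeds the next task; stages scale out
# independently across workers.
pipeline = chain(classify.s("doc-123"), extract.s(), validate.s())
result = pipeline.apply_async()
```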
Posted 5 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Lead Software Engineer – Performance Engineering, you will drive the strategy, design, and execution of performance engineering initiatives across highly distributed systems. You will lead technical efforts to ensure reliability, scalability, and responsiveness of business-critical applications. This role requires deep technical expertise, hands-on performance testing experience, and the ability to mentor engineers while collaborating cross-functionally with architecture, SRE, and development teams.

Responsibilities:
Define, implement, and enforce SLAs, SLOs, and performance benchmarks for large-scale systems.
Lead performance testing initiatives including load, stress, soak, chaos, and scalability testing (see the sketch after this posting).
Design and build performance testing frameworks integrated into CI/CD pipelines.
Analyze application, infrastructure, and database metrics to identify bottlenecks and recommend optimizations.
Collaborate with cross-functional teams to influence system architecture and improve end-to-end performance.
Guide the implementation of observability strategies using monitoring and APM tools.
Optimize cloud infrastructure (e.g., autoscaling, caching, network tuning) for cost-efficiency and speed.
Tune databases and messaging systems (e.g., PostgreSQL, Kafka, Redis) for high throughput and low latency.
Mentor engineers and foster a performance-first culture across teams.
Lead incident response and postmortem processes related to performance issues.
Drive continuous improvement initiatives using data-driven insights and operational feedback.

Required Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
8+ years of experience in software/performance engineering, with 2+ years in a technical leadership role.
Expertise in performance testing tools such as JMeter, k6, Gatling, or Locust.
Strong knowledge of distributed systems, cloud-native architecture, and microservices.
Proficiency in scripting and automation using Python, Go, or Shell.
Experience with observability and APM tools (e.g., Datadog, Prometheus, New Relic, AppDynamics).
Deep understanding of SQL performance, caching strategies, and tuning for systems like PostgreSQL and Redis.
Familiarity with CI/CD pipelines, container orchestration, and IaC tools (e.g., Kubernetes, Terraform).
Strong communication skills and experience mentoring and leading technical teams.
Ability to work cross-functionally and make informed decisions in high-scale, production environments.
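Of the load-testing tools named above, Locust is the Python-native option, so a minimal sketch is shown below. The endpoint paths and request weights are hypothetical; a real test would mirror production traffic mixes and run distributed across workers.

```python
# Minimal Locust load-test sketch (run with: locust -f this_file.py --host=<URL>).
# Endpoints and weights are hypothetical placeholders.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task(3)  # weighted 3x: most simulated traffic hits the read path
    def get_status(self):
        self.client.get("/api/status")

    @task(1)
    def upload_document(self):
        self.client.post("/api/documents", json={"name": "perf-test.pdf"})
```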
Posted 6 days ago
0 years
0 Lacs
India
On-site
Job description Company Description Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management. Why this role matters Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, whiteboarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You’ll be the critical bridge between product, engineering, and customers, owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users. Outcomes we expect Map current backend + frontend, flag structural debt, and publish an Architecture Gap Report Define naming & layering conventions, linter / formatter rules, and a lightweight ADR process Ship reference architecture for new modules Lead cross-team design reviews; no major feature ships without architecture sign-off The eventual goal is to have Evallo run in a fully observable, autoscaling environment with < 10 % infra cost waste. Monitoring dashboards should trigger < 5 false positives per month. Day-to-day Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns—monolith vs. microservice, event vs. REST, cache vs. DB index—based on cost, team maturity, and scale targets. Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80 % automated coverage before QA touches them. Scalability & Cost Optimization: Design load-testing scenarios that mimic 5 k concurrent tutoring sessions; guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste. Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception. Technical Mentorship: Run fortnightly architecture clinics; level-up engineers on domain-driven design and performance profiling. Act as a tie-breaker on competing technical proposals, keeping debates respectful and evidence-based. Qualifications 3+ yrs engineering experience, 2+ yrs in a dedicated architecture or staff-level role on a high-traffic SaaS product. Proven track record designing multi-tenant systems that scaled beyond 50 k users or 1k RPM. Deep knowledge of Node.js / TypeScript (our core stack), MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ). Fluency in AWS (preferred) or GCP primitives—EKS, Lambda, RDS, CloudFront, IAM. Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry). Excellent written communication; you can distill technical trade-offs in one page for execs and in one diagram for engineers.
Posted 6 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Jivi
Jivi is transforming primary healthcare with an AI-powered clinical agentic platform designed for 8 billion people. Our flagship product, a super health app, combines an AI doctor and longevity coach. It provides a full-stack solution covering sickness, prevention, and wellness. In just six months, 500,000 users from 170+ countries have already used Jivi. The company was founded by Ankur Jain (BharatPe, WalmartLabs, Stanford), GV Sanjay Reddy (Reddy Ventures, Aragen), and Andrew Ng's AI Fund (Coursera, DeepLearning). Together, they bring deep expertise in AI, medicine, and scaling billion-dollar ventures.

Jivi is powered by groundbreaking research in Large Language Models (LLMs). Our MedX model is ranked #1 globally, surpassing OpenAI and Google Gemini in diagnostic accuracy. Additionally, our AudioX model boasts the lowest word error rate for Indic languages. Jivi’s health knowledge base, one of the largest in the world, plays a key role in training these models. In the spirit of fostering innovation, we’ve open-sourced these models on Hugging Face for the AI community. Jivi has been recognized for its innovation with awards such as NASSCOM’s Digital Adoption Pioneer Award and the IndiaAI Mission Award. We are proud to be a global leader in AI healthcare.

Job Overview
We are looking for a skilled DevOps Engineer to join our growing engineering team. You will be responsible for supporting and managing cloud infrastructure, CI/CD pipelines, and Kubernetes-based workloads, primarily on AWS. This is a hands-on role that requires solid experience in DevOps best practices, cloud troubleshooting, and automation.

Key Responsibilities
Manage and support AWS infrastructure services such as EC2, EKS, RDS, S3, IAM, etc.
Handle day-to-day operations of Kubernetes (EKS), including pods, services, volumes, autoscaling, and cluster maintenance.
Design, implement, and maintain CI/CD pipelines using tools like GitHub Actions, ArgoCD, etc.
Implement Infrastructure as Code (IaC) using Terraform for reproducible and auditable infrastructure deployments.
Support and troubleshoot Linux servers, Docker containers, Git workflows, and shell scripting tasks.
Monitor and analyze logs and metrics for performance and incident troubleshooting (a small sketch follows this posting).

Technical Skills
Strong hands-on experience with Linux, Docker, Git, and basic shell scripting
Familiarity with cloud troubleshooting in AWS environments
Understanding of CI/CD principles and experience with Git-based workflows
Basic knowledge of logs, metrics, and monitoring tools

Good to Have
AWS certifications (e.g., AWS Certified Solutions Architect, DevOps Engineer)
Experience with AI/ML workloads, including GPU/NVIDIA-based training and inferencing environments
Exposure to DevSecOps tools such as Bandit, SonarQube, etc.
Familiarity with compliance frameworks like ISO 27001 and HIPAA, or experience working in regulated environments

Experience and Qualifications
Work Experience: Minimum 3+ years of experience in DevOps engineering.
Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).

Why Work with Jivi?
Make a Global Impact: Shape AI-driven healthcare solutions that transform billions of lives and revolutionize global wellness.
Accelerate Your Career: Enjoy competitive salaries, growth opportunities, and the chance to take on leadership roles as Jivi scales.
Lead in a High-Growth Environment: Own key initiatives, influence company strategy, and drive impactful health projects in a dynamic, fast-paced setting.
Collaborate with the Best: Work alongside top professionals in AI and healthcare, learn from experts, and contribute to breakthrough innovations.

Jivi’s Products: Jivi is available as a mobile app or as an AI assistant on WhatsApp. You can access Jivi via the iOS app, the Android app, or WhatsApp.

Jivi in Media:
Economic Times - https://tinyurl.com/m3kep5at
Reuters - https://tinyurl.com/mpcs6dpx
Inc42 - https://tinyurl.com/emsdas55
Many more - https://www.jivi.ai/news
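As a flavour of the monitoring work described above, here is a small boto3 sketch that pulls the last hour of CPU metrics for an EC2 instance from CloudWatch. The instance ID is a placeholder, and credentials are assumed to come from the standard AWS configuration chain.

```python
# Sketch: fetch the last hour of average CPU utilization for one EC2 instance
# from CloudWatch. INSTANCE_ID is a placeholder; credentials come from the
# usual AWS config/environment chain.
from datetime import datetime, timedelta, timezone
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,               # 5-minute buckets
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```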
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should have expert-level proficiency in Python and Python frameworks, or Java. You must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Deep experience with key AWS services is required: Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), Monitoring (CloudWatch, X-Ray, CloudTrail), and NoSQL databases like Cassandra and PostgreSQL.

You should have very strong hands-on knowledge of using Python for integrations between systems through different data formats. Expertise in deploying and maintaining applications in AWS, along with hands-on experience in Kinesis streams and auto-scaling, is essential (a small Lambda sketch follows this posting). Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are key responsibilities.

Strong problem-solving and debugging skills are necessary for this role. You should also have the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are a must. Comfort working in agile teams with modern development practices, and collaborating with business and other teams to understand business requirements and work on project deliverables, is expected. Participation in requirements gathering and understanding, designing a solution based on the available framework and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are required. An AWS certification such as AWS Certified Solutions Architect or Developer is preferred.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.

Qualifications:
- Bachelor's degree or foreign equivalent required from an accredited institution. Consideration will be given to three years of progressive experience in the specialty in lieu of every year of education.
- At least 8+ years of Information Technology experience.
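As a small illustration of the Lambda-plus-messaging stack listed above, the sketch below shows an SQS-triggered Lambda handler that fans results out to SNS. The topic ARN and the business logic are placeholders; in practice, the function, queue, and topic would be wired together in CloudFormation or CDK.

```python
# Sketch: SQS-triggered AWS Lambda handler that processes each record and
# publishes a result notification to SNS. TOPIC_ARN is a placeholder; the
# trigger wiring would live in CloudFormation/CDK.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:doc-events"  # hypothetical

def handler(event, context):
    # SQS batch events deliver records under the "Records" key.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # Placeholder business logic: tag the message as processed.
        payload["status"] = "processed"
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(payload))
    return {"processed": len(event["Records"])}
```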
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

What will you do at Fynd?
Build scalable services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code; decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
Give back to the open-source community through contributions on code and blog posts
This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
You know how to take ownership of things and get them done end to end
You have prior experience developing and working on consumer-facing web/app products
Hands-on experience in Python, with in-depth knowledge of asyncio and generators and their use in event-driven scenarios (a short sketch follows this posting)
Thorough knowledge of async programming using callbacks, promises, and async/await
Someone from an Android development background would be preferred
Good working knowledge of MongoDB, Redis, PostgreSQL
Good understanding of Data Structures, Algorithms, and Operating Systems
You've worked with AWS services in the past and have experience with EC2, ELB, AutoScaling, CloudFront, S3
Experience with a frontend stack (HTML, CSS) would be an added advantage
You might not have experience with all the tools that we use, but you can learn those given guidance and resources

What do we offer?
Growth: Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture: Community and team-building activities. We host weekly, quarterly and annual events/parties.
Wellness: Mediclaim policy for you + parents + spouse + kids, and an experienced therapist for better mental health, improved productivity and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
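Since the posting calls out asyncio in event-driven scenarios, here is a minimal illustrative sketch: an async producer/consumer pair wired through a queue. The event names and delays are invented for the example.

```python
# Minimal asyncio sketch of an event-driven producer/consumer: events flow
# through a queue and are handled concurrently. Event names and delays are
# invented for illustration.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(5):
        await queue.put(f"event-{i}")   # emit an event
        await asyncio.sleep(0.1)        # simulate an event source
    await queue.put(None)               # sentinel: no more events

async def consumer(queue: asyncio.Queue) -> None:
    while True:
        event = await queue.get()
        if event is None:
            break
        print(f"handled {event}")       # placeholder handler

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```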
Posted 1 week ago
14.0 years
3 - 5 Lacs
Hyderābād
On-site
Country: India
Working Schedule: Full-Time
Work Arrangement: Hybrid
Relocation Assistance Available: No
Posted Date: 16-Jul-2025
Job ID: 10136

Description and Requirements

Job Responsibilities

Database Management and Administration:
Lead the administration, monitoring, and maintenance of IBM DB2 (UDB/LUW) databases, ensuring high availability and optimal performance.
Perform regular database backups, restores, and disaster recovery planning.
Monitor and troubleshoot database performance issues, optimizing queries and database structure.
Design and implement database solutions, upgrades, and patches.

Backup and Recovery Management:
Implement and manage comprehensive backup strategies for IBM DB2 (UDB/LUW) databases, both on-premises and in the cloud, using backup tools (Rubrik).
Conduct disaster recovery exercises and ensure business continuity in the event of data loss or system failures.

Performance Tuning and Optimization:
Conduct database performance assessments, identifying and resolving bottlenecks.
Optimize queries, indexes, and other database objects to improve system efficiency (a small maintenance sketch follows this posting).
Monitor resource usage (CPU, memory, disk) and implement strategies to ensure resource optimization.

Cloud Integration:
Manage IBM DB2 (UDB/LUW) database instances deployed in cloud environments such as AWS, Azure, or IBM Cloud infrastructure.
Ensure proper database configuration, migration, and optimization within the cloud infrastructure.
Implement cloud-specific features such as autoscaling, disaster recovery, and security measures for cloud databases.
Collaborate with cloud architects to design scalable, secure, and cost-effective cloud database architectures.

Database Programming:
Very good experience in designing, coding, optimizing, and tuning SQL and PL/SQL queries.
Strong ability to debug code and provide appropriate solutions to developers and application teams.

Automation Implementation:
Lead the design, implementation, and maintenance of automated infrastructure solutions using Infrastructure as Code tools like Ansible, Elastic, and Terraform.
Lead the development and management of Azure DevOps CI/CD pipelines to automate infrastructure deployments using Ansible and Elastic.
Automate database health checks, monitoring, and alerting systems to proactively address potential issues.

Security and Compliance:
Implement robust database security policies to safeguard data, including access control, encryption, and auditing.
Ensure compliance with data privacy laws and regulations such as GDPR, HIPAA, or SOC 2.
Conduct periodic security audits and vulnerability assessments on IBM DB2 (UDB/LUW) databases.

Collaboration:
Work closely with cross-functional teams, including developers, system administrators, network engineers, and security specialists.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor’s degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience. IBM DB2 (UDB/LUW) Certified DBA and a cloud certification (Azure, AWS, or OCI) are preferable.

Experience (in years): 14+ years of total IT experience and 10+ years of relevant experience in UDB database administration.

Technical Skills:
10+ years of strong work experience with database design, installation, configuration, and implementation; knowledge of all key IBM DB2/LUW utilities such as HADR, reorg, runstats, and Load on Linux/Unix/Windows.
Expertise in Unix and Linux operating systems and shell scripting.
Expertise in database migration, upgrades, and patching.
Strong experience in cloud computing (Azure, AWS RDS, IBM Cloud Pak).
Experience administering IBM Informix databases is a big plus.
Experience with backups, restores, and recovery models, and implementation of backup strategies (including RTO and RPO) with tools such as Rubrik, NetWorker, and BCV, is mandatory.
Very good experience in managing database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Experience in IBM DB2 LUW replication (DB2 SQL Replication and Q Replication, a queue-based replication), as well as using third-party tools for replication.
Experience with performance tuning and optimization (PTO), using native monitoring and troubleshooting tools (explain plan, DB2 reorg, DB2 runstats).
Strong knowledge of clustering, High Availability (HA/HADR), and Disaster Recovery (DR) options for DB2 LUW.
Strong knowledge of data security (user access, groups, and roles) and data encryption (at rest and in transit) for DB2 LUW.
Ability to work closely with IBM PMR to resolve any ongoing production issues.
Experience in cloud environments, especially IBM Cloud and Azure, is a big plus.
Good knowledge of ITIL methodologies (Change, Incident, Problem, and Service Management) using ServiceNow tools.
Strong database analytical skills to improve application and database performance.
Automation tooling and programming skills such as Ansible, Perl, and shell scripting.
Strong knowledge of database monitoring with observability tools (Elastic).
Understanding of modern IT infrastructure such as cloud architecture, as well as the Agile DevOps framework.
Participates in a 24x7 pager rotation, providing Subject Matter Expert support to the on-call DBA as needed.
Strong experience managing geographically distributed and culturally diverse workgroups, with strong team management, leadership, and coaching skills.
Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders.

Other Critical Requirements:
Project management experience is required, following the Agile methodology to deliver project and operational excellence.
DB2/LUW database administration/advanced database administration for experts certification is preferred.
Demonstrated ability to work independently and in a team environment.
Ability to work 24x7 rotational shifts to support production, development, and test databases.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.

Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
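As a taste of the routine-maintenance automation this posting asks for, here is a small Python wrapper that runs standard DB2 LUW commands (runstats and an online backup) through the db2 command-line processor. The database, table, and backup path are placeholders, and the exact statement syntax should be checked against the DB2 version in use.

```python
# Sketch: automate routine DB2 LUW maintenance via the db2 CLP from Python.
# Database/table names and the backup path are placeholders; verify the
# statement syntax against your DB2 version before use.
import subprocess
import tempfile

DB_NAME = "SAMPLE"            # hypothetical database
TABLE = "APP.ORDERS"          # hypothetical table
BACKUP_PATH = "/backup/db2"   # hypothetical target

statements = f"""
CONNECT TO {DB_NAME};
-- Refresh optimizer statistics so the query planner has current data.
RUNSTATS ON TABLE {TABLE} WITH DISTRIBUTION AND DETAILED INDEXES ALL;
CONNECT RESET;
-- Online backup keeps the database available while it runs.
BACKUP DATABASE {DB_NAME} ONLINE TO {BACKUP_PATH} COMPRESS;
"""

with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as script:
    script.write(statements)
    path = script.name

# db2 -t: statements end with ';', -v: echo each statement, -f: run from file.
subprocess.run(["db2", "-tvf", path], check=True)
```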
Posted 1 week ago
2.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Overview
We are seeking a highly skilled Full Stack Developer with at least 2.5 years of experience to join our product engineering team. In this role, you will be responsible for building and maintaining a microservices-based multi-tenant framework using the MERN stack, while also enabling seamless integrations with Python-based applications that leverage Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), and the Azure AI Library.

Key Responsibilities
Design, develop, and maintain microservices using Node.js/Express.js in a multi-tenant architecture.
Build end-to-end solutions using the MERN stack (MongoDB, Express.js, React.js, Node.js).
Integrate front-end components with backend logic for seamless performance and UX.
Develop and integrate with Python-based services and APIs (Flask/FastAPI) that implement RAG workflows and LLMs (a minimal FastAPI sketch follows this posting).
Leverage the Azure AI Library for embedding AI-powered features into applications.
Implement and maintain RESTful APIs for internal services and external third-party integrations.
Optimize backend performance with efficient code, data structures, and caching mechanisms.
Use Azure (or AWS) cloud services for deployment, monitoring, autoscaling, and system health.
Work with both relational (MySQL/PostgreSQL) and NoSQL (MongoDB) databases to manage distributed data.
Follow DevOps practices, CI/CD pipelines, and containerization (Docker; Kubernetes is a plus).
Ensure data security, system availability, and compliance in a cloud-native environment.
Conduct code reviews, debug issues, and optimize applications for performance and scalability.
Collaborate closely with product managers, designers, and cross-functional engineering teams.

Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 2.5 years of experience as a Full Stack or Backend Developer.
Hands-on expertise in the MERN stack (MongoDB or MySQL, Express.js, React.js, Node.js).
Experience with microservices architecture and REST API development.
Practical exposure to Python web frameworks like Flask or FastAPI.
Experience integrating or developing AI capabilities using RAG, LLMs, or Azure AI Services.
Familiarity with cloud platforms (Azure preferred; AWS/GCP acceptable).
Working knowledge of containerization tools (Docker; Kubernetes is a plus).
Proficiency in database design, performance tuning, and managing distributed data systems.
Strong understanding of multi-tenant architecture, application security, and scalability.
Version control experience (preferably Git).
Excellent communication, collaboration, and analytical skills.
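As a small illustration of the Python service side described above, here is a minimal FastAPI sketch of a tenant-aware endpoint that would front a RAG workflow. The X-Tenant-ID header convention and the stubbed answer logic are invented for the example.

```python
# Minimal FastAPI sketch: a tenant-aware endpoint that would front a RAG
# workflow. The X-Tenant-ID header convention and the stubbed answer are
# invented for illustration.
from fastapi import FastAPI, Header
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

@app.post("/v1/ask")
def ask(query: Query, x_tenant_id: str = Header(...)):
    # Placeholder: a real service would retrieve tenant-scoped context here
    # and call an LLM with the augmented prompt.
    return {"tenant": x_tenant_id, "answer": f"stub answer to: {query.question}"}

# Run with: uvicorn app:app --reload
```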
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Overview
Job Title: Cloud Engineer
Location: Bangalore, India
Corporate Title: AVP

Role Description
A Google Cloud Platform (GCP) Engineer is responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud. Here’s a detailed role description in points:

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best-in-class leave policy
Gender-neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for those 35 yrs. and above

Your Key Responsibilities
Cloud Infrastructure Management – Design, deploy, and manage scalable, secure, and cost-effective cloud environments on GCP.
Automation & Scripting – Develop Infrastructure as Code (IaC) using Terraform, Deployment Manager, or other tools.
Security & Compliance – Implement security best practices and IAM policies, and ensure compliance with organizational and regulatory standards.
Networking & Connectivity – Configure and manage VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking.
CI/CD & DevOps – Set up CI/CD pipelines using Cloud Build, Jenkins, GitHub Actions, or similar tools for automated deployments.
Monitoring & Logging – Implement monitoring and alerting using Stackdriver (Cloud Operations), Prometheus, or third-party tools.
Cost Optimization – Analyze and optimize cloud spending by leveraging committed use discounts, autoscaling, and right-sizing resources.
Disaster Recovery & Backup – Design backup, high availability, and disaster recovery strategies using Cloud Storage, snapshots, and multi-region deployments.
Database Management – Deploy and manage GCP databases like Cloud SQL, BigQuery, Firestore, and Spanner.
Containerization & Kubernetes – Deploy and manage containerized applications using GKE (Google Kubernetes Engine) and Cloud Run.

Team / Division Overview
The Platform Engineering Team is responsible for building and maintaining the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity.
Design and manage scalable, secure, and cost-effective cloud infrastructure (GCP, AWS, Azure).
Implement Infrastructure as Code (IaC) using Terraform.
Implement security best practices for IAM, networking, encryption, and secrets management.
Ensure regulatory compliance (SOC 2, ISO 27001, PCI-DSS) by automating security checks.
Manage API gateways, service meshes, and secure service-to-service communication.
Enable efficient workload orchestration using Kubernetes and serverless technologies.

Your Skills and Experience
Strong experience with GCP services like Compute Engine, Cloud Storage, IAM, networking, Kubernetes, and serverless technologies (a small sketch follows this posting).
Proficiency in scripting (Python, Bash) and Infrastructure as Code (Terraform, CloudFormation).
Knowledge of DevOps practices, CI/CD tools, and GitOps workflows.
Understanding of security, IAM, networking, and compliance in cloud environments.
Experience with monitoring tools like Stackdriver, Prometheus, or Datadog.
Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus.

How We’ll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
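As a tiny illustration of scripting against GCP (one of the skills listed above), the sketch below enumerates Cloud Storage buckets with the official Python client. It assumes Application Default Credentials are configured, for example via `gcloud auth application-default login`.

```python
# Sketch: enumerate Cloud Storage buckets and their storage classes using the
# official google-cloud-storage client. Assumes Application Default
# Credentials are configured in the environment.
from google.cloud import storage

client = storage.Client()  # picks up project/credentials from the environment

for bucket in client.list_buckets():
    # Storage class and location matter for the cost-optimization work
    # described in the responsibilities above.
    print(f"{bucket.name}: {bucket.storage_class}, location={bucket.location}")
```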
Posted 1 week ago
0 years
0 Lacs
India
On-site
Job description

Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, whiteboarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You’ll be the critical bridge between product, engineering, and customers—owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
Map current backend + frontend, flag structural debt, and publish an Architecture Gap Report
Define naming & layering conventions, linter/formatter rules, and a lightweight ADR process
Ship reference architecture for new modules
Lead cross-team design reviews; no major feature ships without architecture sign-off
The eventual goal is to have Evallo run in a fully observable, autoscaling environment with < 10% infra cost waste. Monitoring dashboards should trigger < 5 false positives per month.

Day-to-day
Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns—monolith vs. microservice, event vs. REST, cache vs. DB index—based on cost, team maturity, and scale targets.
Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions; guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
Technical Mentorship: Run fortnightly architecture clinics; level-up engineers on domain-driven design and performance profiling. Act as a tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
5+ yrs engineering experience, 2+ yrs in a dedicated architecture or staff-level role on a high-traffic SaaS product.
Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM.
Deep knowledge of Node.js / TypeScript (our core stack), MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ).
Fluency in AWS (preferred) or GCP primitives—EKS, Lambda, RDS, CloudFront, IAM.
Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry).
Excellent written communication; you can distill technical trade-offs in one page for execs and in one diagram for engineers.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should possess expert-level proficiency in Python and Python frameworks, or Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Your deep experience should cover key AWS services such as Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail). Moreover, you should be proficient in NoSQL databases like Cassandra and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats.

Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling. Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are also key aspects of this role. You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers.

Excellent communication skills, both written and verbal, are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering and understanding, designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai. To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution. Alternatively, three years of progressive experience in the specialty can be considered in lieu of each year of education. A minimum of 8+ years of Information Technology experience is required for this role.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Core Competencies
Excellent knowledge of EKS, Kubernetes, and related AWS components
Kubernetes networking
Kubernetes DevOps, including deployment of Kubernetes (EKS) clusters using IaC (Terraform) and CI/CD pipelines
EKS secret management, autoscaling, and lifecycle management
EKS security using AWS native services
Excellent understanding of AWS cloud services like VPC, EC2, ECS, S3, EBS, ELB, Elastic IPs, Security Groups, etc.
AWS component deployment using Terraform
Application onboarding on Kubernetes using ArgoCD
AWS CodePipeline, CodeBuild, CodeCommit
HashiCorp stack, HashiCorp Packer
Bitbucket and Git; profound cloud technology, network, security, and platform expertise (AWS, Google Cloud, or Azure)
Good documentation and communication skills
Good understanding of ELK, CloudWatch, Datadog

Roles & Responsibilities
Manage project-driven integration and day-to-day administration of cloud solutions
Develop prototypes, design and build modules and solutions for cloud platforms in iterative agile cycles; develop, maintain, and optimize the business outcome
Conduct peer reviews and maintain coding standards
Drive automation using CI/CD with Jenkins or ArgoCD
Drive cloud solution automation and integration activity for the cloud provider (AWS) and tenant (project) workloads
Build and deploy AWS cloud infrastructure using CloudFormation and Terraform scripts
Use Ansible and Python to perform routine tasks like user management and security hardening
Provide professional technical consultancy to migrate and transform existing on-premises applications to the public cloud, and support all cloud-related programmes and existing environments
Design and deploy Direct Connect networking between AWS and the datacentre
Train and develop AWS expertise within the organisation
Proven troubleshooting skills to resolve issues related to cloud network, storage, and performance management

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: Python + DevOps
Experience: 5+ years
Location: Bangalore
Budget: 2 LPM

Job Description: You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.

What You'll Do
⦁ Build scalable async processing pipelines for document classification, extraction, and validation
⦁ Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows
⦁ Design and implement APIs for document upload, processing status, and results retrieval
⦁ Manage Kubernetes deployments with autoscaling based on document processing load (see the sketch after this posting)
⦁ Implement monitoring and observability for complex multistage document workflows
⦁ Optimize database performance for high-volume document metadata and processing results
⦁ Build CI/CD pipelines for safe deployment of processing algorithms and business rules

Technical Requirements
Must Have:
⦁ 5+ years backend development (Python or Go)
⦁ Strong experience with async processing (Celery, Temporal, or similar)
⦁ Docker containerization and orchestration
⦁ Cloud platforms (AWS/GCP/Azure) with cost optimization experience
⦁ API design and development (REST/GraphQL)
⦁ Database optimization (MongoDB, PostgreSQL)
⦁ Production monitoring and debugging

Nice to Have:
⦁ Kubernetes experience
⦁ Experience with document processing or ML pipelines
⦁ Infrastructure as Code (Terraform/CloudFormation)
⦁ Message queues (SQS, RabbitMQ, Kafka)
⦁ Performance optimization for high-throughput systems

Interested candidates can apply through: https://thexakal.com/share-job?jobId=686e09563a69611b52ad693f
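The autoscaling bullet above is commonly implemented with a HorizontalPodAutoscaler. Below is a hedged sketch using the official Kubernetes Python client; the deployment name, namespace, and CPU target are placeholders.

```python
# Sketch: create a HorizontalPodAutoscaler for a document-processing worker
# with the official Kubernetes Python client. Names, namespace, and the CPU
# target are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="doc-worker-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        min_replicas=2,
        max_replicas=10,
        # Scale out when average CPU across pods exceeds 70%.
        target_cpu_utilization_percentage=70,
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="doc-worker"
        ),
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```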
Posted 1 week ago
3.0 - 7.0 years
0 - 0 Lacs
Ahmedabad, Gujarat
On-site
As a Cloud Administrator (AWS), your primary responsibility will be to install, support, and maintain cloud/on-premise server infrastructure while ensuring optimal performance and availability of services. You will need a solid working knowledge of Kubernetes to manage Linux-based Kubernetes clusters on AWS. Your role will also involve participating in calls, performing quality audits, building a knowledge database, engaging with clients, and providing training to the team. A combination of technical expertise and interpersonal skills is essential to excel in this position.
Your duties and responsibilities will include answering technical queries through various channels, logging all issues and resolutions, performing Linux server administration and configuration, maintaining system security, installing, configuring, and fine-tuning cloud infrastructure, monitoring performance, troubleshooting incidents and outages, and ensuring system security through access controls and backups. You will also be responsible for upgrading systems, monitoring backups, training staff on new technologies, maintaining technical documentation, providing 24/7 technical support, and contributing to IT team meetings.
To be successful in this role, you should have at least 2 years of international experience in configuring, managing, and automating cloud environments (AWS/Azure), along with an additional 3+ years of Linux experience. You should be familiar with Elastic Load Balancers, auto-scaling, Virtual Private Cloud, routing, cloud databases, IAM, ACM, and SSM. Strong knowledge of networking principles, virtualization administration, scripting, multi-tier system configurations, disaster recovery, and data integrity is crucial. Additionally, you must possess excellent analytical, problem-solving, communication, organizational, and time-management skills.
The ideal candidate will hold a Bachelor's degree in Computer Science, Information Technology, or a related field, plus relevant certifications such as AWS Cloud Practitioner, AWS Solutions Architect – Associate, Red Hat Certified System Administrator/Engineer, and ITIL knowledge. A willingness to learn new technologies, follow established procedures, and take ownership of tasks is highly valued. With 3-5 years of experience, you can expect a salary ranging from 40,000 to 60,000 per month. If you meet the qualifications and possess the required skills, we encourage you to apply for this challenging and rewarding position in cloud administration.
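Much of the routine work listed above (backups, monitoring, upgrades) is typically scripted. As a hedged illustration, this boto3 sketch snapshots every EBS volume carrying a hypothetical Backup=daily tag; the tag scheme and region are assumptions, not details from this posting.

```python
# Minimal sketch of a scripted backup task: snapshot all EBS volumes tagged
# Backup=daily. The tag convention and region are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumption: example region

def snapshot_tagged_volumes() -> None:
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["daily"]}]
    )["Volumes"]
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"daily backup of {vol['VolumeId']}",
        )
        print(f"started snapshot {snap['SnapshotId']} for {vol['VolumeId']}")

if __name__ == "__main__":
    snapshot_tagged_volumes()
```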
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays Working (1st and 3rd)
🔧 Key Responsibilities
Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
Containerize applications and optimize Docker images for performance and security.
Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response.
Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
Automate routine tasks with scripting languages (Python, Bash, etc.).
Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure (a minimal autoscaling sketch follows this posting).
Collaborate closely with development teams to enable DevSecOps best practices.
Participate in on-call rotations, handle outages calmly, and conduct postmortems.
🧰 Must-Have Technical Skills
Kubernetes (EKS, Helm, Operators)
Docker & Docker Compose
Terraform (modular, state management, remote backends)
AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
Linux system administration
CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
Site Reliability Engineering practices
Load balancing, autoscaling, and HA architectures
💡 Good-To-Have
GCP or Azure exposure
Service Mesh (Istio, Linkerd)
Secrets management (Vault, AWS Secrets Manager)
Security hardening of containers and infrastructure
Chaos engineering exposure
Knowledge of networking (DNS, firewalls, VPNs)
👤 Soft Skills
Strong problem-solving attitude; calm under pressure
Good documentation and communication skills
Ownership mindset with a drive to automate everything
Collaborative and proactive with cross-functional teams
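For the autoscaling responsibility above, here is a minimal sketch using the official kubernetes Python client to create a CPU-based HorizontalPodAutoscaler; the deployment name, namespace, and thresholds are illustrative assumptions.

```python
# Minimal sketch: create a CPU-based HPA with the official kubernetes client.
# Deployment name, namespace, and thresholds are example assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"  # assumption
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```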
Posted 1 week ago
3.0 - 6.0 years
4 - 8 Lacs
Gurgaon
Remote
Experience Required: 3-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays Working (1st and 3rd)
Key Responsibilities
Design, implement, and maintain highly available and scalable infrastructure using AWS Cloud Services.
Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
Containerize applications and optimize Docker images for performance and security.
Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response (see the error-budget sketch after this posting).
Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
Automate routine tasks with scripting languages (Python, Bash, etc.).
Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
Collaborate closely with development teams to enable DevSecOps best practices.
Participate in on-call rotations, handle outages calmly, and conduct postmortems.
Must-Have Technical Skills
Kubernetes (EKS, Helm, Operators)
Docker & Docker Compose
Terraform (modular, state management, remote backends)
AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
Linux system administration
CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
Site Reliability Engineering practices
Load balancing, autoscaling, and HA architectures
Good-To-Have
GCP or Azure exposure
Service Mesh (Istio, Linkerd)
Secrets management (Vault, AWS Secrets Manager)
Security hardening of containers and infrastructure
Chaos engineering exposure
Knowledge of networking (DNS, firewalls, VPNs)
Soft Skills
Strong problem-solving attitude; calm under pressure
Good documentation and communication skills
Ownership mindset with a drive to automate everything
Collaborative and proactive with cross-functional teams
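The SLI/SLO bullet above implies error-budget arithmetic. Here is a minimal, dependency-free sketch of that calculation; the SLO figure is only an example.

```python
# Minimal sketch of SLO error-budget arithmetic (example figures only).
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Total allowed downtime for the period, in minutes."""
    return (1.0 - slo) * period_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float,
                     period_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo, period_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime:
print(error_budget_minutes(0.999))    # 43.2
print(budget_remaining(0.999, 10.0))  # ~0.77 of the budget left
```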
Posted 1 week ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
DevOps Engineer - Kubernetes Specialist
Experience: 4 - 8 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent (*Note: This is a requirement for one of Uplers' clients)
Must-have skills: Kubernetes, CI/CD, Google Cloud
Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.
Role Overview
We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence.
Key Responsibilities & Required Skills
Kubernetes Infrastructure & Deployment
Responsibilities:
Design, deploy, and manage production-grade Kubernetes clusters
Perform cluster upgrades, patching, and maintenance with minimal downtime
Deploy and manage multiple microservices with ingress controllers and networking
Configure storage solutions and persistent volumes for stateful applications
Required Skills:
3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE)
Strong experience with Google Cloud Platform (GCP) and GKE-specific features
Deep understanding of Docker, container orchestration, and GCP networking concepts
Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies
CI/CD, Monitoring & Automation
Responsibilities:
Design and implement robust CI/CD pipelines for Kubernetes deployments
Implement comprehensive monitoring, logging, and alerting solutions
Leverage AI tools and automation to improve team efficiency and task speed
Create dashboards and implement GitOps workflows
Required Skills:
Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms
Experience with Prometheus, Grafana, the ELK stack, or similar monitoring solutions
Knowledge of Infrastructure as Code tools (Terraform, Ansible)
Familiarity with AI/ML tools for DevOps automation and efficiency improvements
Cost Optimization & Application Management
Responsibilities:
Analyze and optimize resource utilization across Kubernetes workloads (a right-sizing sketch follows at the end of this posting)
Implement right-sizing strategies for services and batch jobs
Deploy and manage Java-based applications and MySQL databases
Configure horizontal/vertical pod autoscaling and resource management
Required Skills:
Experience with resource management, capacity planning, and cost optimization
Understanding of Java application deployment and MySQL database administration
Knowledge of database operators, StatefulSets, and backup/recovery solutions
Proficiency in scripting languages (Bash, Python, or Go)
Preferred Qualifications
Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build)
Knowledge of GKE advanced features (Workload Identity, Binary Authorization, Config Connector)
Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus
Knowledge of container security tools and chaos engineering
Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd)
Familiarity with AI-powered monitoring and predictive analytics platforms
Key Competencies
Strong problem-solving skills with an innovative mindset toward AI-driven solutions
Excellent communication and collaboration abilities
Ability to work in fast-paced, agile environments with attention to detail
Proactive approach to identifying issues using modern tools and AI assistance
Ability to mentor team members and promote AI adoption for team efficiency
Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.
How to apply for this opportunity - an easy 3-step process:
1. Click on Apply! and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Our Client: Ripplehire is a recruitment SaaS for companies to identify the right candidates from employees' social networks and to gamify the employee referral program with contests and referral bonuses that engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) staff pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is a tool available on a subscription-based pricing model.
About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
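For the right-sizing and cost-optimization responsibilities in this posting, here is a minimal sketch using the kubernetes Python client to surface per-container CPU and memory requests so over-provisioned workloads stand out. Cluster access via kubeconfig is assumed; comparing requests against actual usage from a metrics source is left out.

```python
# Minimal right-sizing sketch: list CPU/memory requests for every container.
from kubernetes import client, config

def cpu_to_millicores(value: str) -> int:
    # Kubernetes CPU quantities are millicores ("500m") or whole cores ("2", "0.5").
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)

config.load_kube_config()  # assumption: kubeconfig available locally
for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        req = (c.resources.requests or {}) if c.resources else {}
        cpu = cpu_to_millicores(req.get("cpu", "0"))
        print(f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}: "
              f"cpu={cpu}m, memory={req.get('memory', 'unset')}")
```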
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We're Hiring: FinOps Consultant at OpsLyft
Are you passionate about cloud cost optimization and FinOps strategies? Do you enjoy collaborating with finance, engineering, and cloud teams to drive financial efficiency? If so, we'd love to have you on board. At OpsLyft, we help businesses maximize efficiency and reduce cloud expenses through automation, actionable insights, and real-time cost governance.
What You'll Do:
Partner with clients to develop and implement FinOps best practices.
Analyze cloud costs and provide data-driven cost-saving strategies.
Set up budgeting, forecasting, and governance frameworks.
Optimize AWS, Azure, and GCP costs (RIs, Savings Plans, auto-scaling).
Automate cost monitoring and anomaly detection with engineering teams.
Recommend and implement FinOps tools like Kubecost, Apptio, CloudHealth.
What We're Looking For:
Strong FinOps and cloud cost management experience (AWS, Azure, GCP).
Ability to analyze cost trends, budgets, and financial reports.
Experience working with cross-functional teams to drive cost transparency.
Nice-to-have: Scripting and automation skills (Python, Bash, Terraform).
Bonus: FinOps certifications (FCP, AWS, GCP Cloud Financial Management).
Why Join Us?
Make an impact - help businesses save millions on cloud costs.
Work with top FinOps and engineering leaders in cutting-edge cloud financial management.
Stay ahead of the curve in the fast-evolving FinOps ecosystem.
Interested? Let's talk! Send your resume to hr@opslyft.com.
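As a hedged illustration of the cost-analysis and anomaly-detection work described above, this boto3 sketch pulls daily per-service spend from AWS Cost Explorer and flags naive day-over-day spikes; the 50% threshold is an assumption for illustration, not a feature of any FinOps tool named here.

```python
# Minimal sketch: daily per-service cost from Cost Explorer with a naive spike rule.
import boto3
from datetime import date, timedelta

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

end = date.today()
start = end - timedelta(days=14)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

previous: dict[str, float] = {}
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        service = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        # assumption: flag >50% day-over-day jumps on services spending >$1/day
        if service in previous and previous[service] > 1.0 and cost > previous[service] * 1.5:
            print(f"{day['TimePeriod']['Start']}: {service} jumped "
                  f"{previous[service]:.2f} -> {cost:.2f} USD")
        previous[service] = cost
```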
Posted 1 week ago
2.0 - 3.8 years
5 - 7 Lacs
Hyderābād
Remote
DevOps II JD
Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products provides solutions to our customers that help them better manage their business, boost their productivity and efficiencies, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. We are headquartered in Denver, Colorado, with offices across the US, Canada, and India.
JOB DESCRIPTION
Does building out a top-tier DevOps team, and everything that comes with it, sound intriguing? This is a DevOps senior software engineer/team lead role embedded in an energetic DevOps agile team. Our DevOps teams are tightly coupled and integrated with the culture, tools, practices and patterns of the rest of our software engineering organization. They not only "keep the lights on" for our systems and networks, but also empower our other development teams with cutting-edge tools and capabilities to bring Vertafore products to market as quickly as possible. All of this is accomplished with cutting-edge, lean-agile software development methodologies.
Core Requirements and Responsibilities:
Essential job functions include but are not limited to the following:
You will lead the team in building out our continuous delivery infrastructure and processes for all our products, utilizing state-of-the-art technologies.
You will be hands-on, leading the architecture and design of the frameworks for the automated continuous deployment of application code, and the operational and security monitoring and care of the infrastructure and software platforms.
You and your team will serve as the liaison between the agile development teams, SaaS operations, and external cloud providers for deployment, operational efficiency, security, and business continuity.
Why Vertafore is the place for you: *Canada Only
The opportunity to work in a space where modern technology meets a stable and vital industry
Medical, vision & dental plans
Life, AD&D
Short Term and Long Term Disability
Pension Plan & Employer Match
Maternity, Paternity and Parental Leave
Employee and Family Assistance Program (EFAP)
Education Assistance
Additional programs - Employee Referral and Internal Recognition
Why Vertafore is the place for you: *US Only
The opportunity to work in a space where modern technology meets a stable and vital industry
We have a Flexible First work environment! Our North America team members use our offices for collaboration, community and team-building, with members asked to sometimes come into an office and/or travel depending on job responsibilities. Other times, our teams work from home or a similar environment.
Medical, vision & dental plans
PPO & high-deductible options
Health Savings Account & Flexible Spending Account options: Health Care FSA, Dental & Vision FSA, Dependent Care FSA, Commuter FSA
Life, AD&D (Basic & Supplemental), and Disability
401(k) Retirement Savings Plan & Employer Match
Supplemental plans - Pet Insurance, Hospital Indemnity, and Accident Insurance
Parental Leave & Adoption Assistance
Employee Assistance Program (EAP)
Education & Legal Assistance
Additional programs - Tuition Reimbursement, Employee Referral, Internal Recognition, and Wellness
Commuter Benefits (Denver)
The selected candidate must be legally authorized to work in the United States.
The above statements are intended to describe the general nature and level of work being performed by people assigned to this job. They are not intended to be an exhaustive list of all the job responsibilities, duties, skills, or working conditions. In addition, this document does not create an employment contract, implied or otherwise, other than an "at will" relationship.
Vertafore strongly supports equal employment opportunity for all applicants regardless of race, color, religion, sex, gender identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, sexual orientation, genetic information, or any other characteristic protected by state or federal law.
The Professional Services (PS) and Customer Success (CX) bonus plans are quarterly monetary bonus plans based upon individual and practice performance against specific business metrics. Eligibility is determined by several factors, including start date, good standing in the company, and active status at the time of payout.
The Vertafore Incentive Plan (VIP) is an annual monetary bonus for eligible employees based on both individual and company performance. Eligibility is determined by several factors, including start date, good standing in the company, and active status at the time of payout.
Commission plans are tailored to each sales role, but common components include quota, MBOs and ABPMs. Salespeople receive their formal compensation plan within 30 days of hire.
Vertafore is a drug-free workplace and conducts pre-employment drug and background screenings. We do not accept resumes from agencies, headhunters or other suppliers who have not signed a formal agreement with us.
We want to make sure our recruiting process is accessible for everyone. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact recruiting@vertafore.com. Just a note, this contact information is for accommodation requests only.
Knowledge, Skills, Abilities and Qualifications:
Bachelor's degree in Computer Science (or a related technical field) or equivalent practical experience
2 - 3.8 years professional experience in DevOps
Excellent communication and interpersonal skills, and the ability to work with other developers, business analysts, testing specialists and product owners to create stellar software products
A strong sense of ownership
Strong diagnostic, analytical, and design skills
Closely follow industry trends and the open source community, identifying and proactively advocating for cutting-edge tools that would optimize operational performance and/or reduce operating costs
Experience in regulated environments
Care about quality and know what it means to ship high-quality code and infrastructure
Be a curious and avid learner
Communicate clearly to explain and defend design decisions
Self-motivated and an excellent problem-solver
Driven to improve, personally and professionally
Mentor and inspire others to raise the bar for everyone around them
Love to collaborate with peers, designing pragmatic solutions
Operate best in a fast-paced, flexible work environment
Experience with Agile software development
Experience in mission-critical cloud operations and/or DevOps engineering
Experience with AWS technologies and/or developing with distributed systems using Ansible, Puppet, or Jenkins
Strong understanding of and experience working with Windows, Unix and Linux operating systems, specifically troubleshooting and administration
Experience operating and tuning relational and NoSQL databases
Strong experience with Terraform and Jenkins
Experience performing support and administrative tasks within Amazon Web Services (AWS), Azure, OpenStack, or other cloud infrastructure technologies
Proficiency in managing systems across multiple sites, including fail-over redundancy and autoscaling (knowledge of best practices and IT operations in an always-up, always-available service)
Experience deploying, maintaining and managing secure systems
A background in software development, preferably web applications
Proficiency in monitoring and logging tools such as the ELK Stack (Elasticsearch, Logstash, and Kibana)
Experience with build & deploy tools (Jenkins)
Knowledge of IP networking, VPNs, DNS, load balancing and firewalling
Enjoy solving problems through the entire application stack
Experience on the front lines of production outages, both working to resolve the outage and root-causing the problem to provide long-term resolution or early-identification strategies
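Given the ELK Stack proficiency listed above, here is a minimal sketch with the official elasticsearch Python client that pulls the last hour of ERROR-level logs; the endpoint, index pattern, and field names are assumptions matching a typical Logstash setup, not details from this posting.

```python
# Minimal sketch: query recent ERROR logs from Elasticsearch.
# Endpoint, index pattern, and field names are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumption: local dev endpoint

resp = es.search(
    index="logstash-*",
    query={
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    size=10,
    sort=[{"@timestamp": {"order": "desc"}}],
)

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("message"))
```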
Posted 1 week ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.
Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).
Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.
Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
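For the MLflow deployment step in section 6 above, here is a minimal sketch that logs a model and registers a new version; the experiment path, model name, and toy training data are illustrative assumptions.

```python
# Minimal MLflow sketch: train a toy model, log it, and register a version.
# Experiment path and registered model name are example assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

mlflow.set_experiment("/Shared/demo")  # assumption: example experiment path
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="model")
    uri = f"runs:/{run.info.run_id}/model"

# Each call creates a new version under the registered model name,
# which is what CI/CD rollback strategies typically key off.
mlflow.register_model(uri, "demo-classifier")  # assumption: example name
```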
Posted 1 week ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.
Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).
Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.
Roles & Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
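This posting repeatedly touches cluster management and autoscaling. As a hedged sketch of one way to script that, the snippet below creates an autoscaling Databricks cluster through the REST API using requests; the workspace host, token, runtime version, and node type are all assumptions.

```python
# Minimal sketch: create an autoscaling Databricks cluster via the REST API.
# Host/token come from the environment; runtime and node type are assumptions.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-....azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

payload = {
    "cluster_name": "etl-autoscale",        # assumption: example name
    "spark_version": "14.3.x-scala2.12",    # assumption: example runtime
    "node_type_id": "Standard_DS3_v2",      # assumption: Azure node type
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "autotermination_minutes": 30,          # cost control: stop idle clusters
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("created cluster:", resp.json()["cluster_id"])
```

In practice the same payload is usually captured in Terraform or a Databricks Asset Bundle so the cluster definition is versioned alongside the jobs that run on it.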
Posted 1 week ago
2.0 - 5.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
Take ownership of architecture design and development of scalable and distributed software systems.
Translate business requirements into technical requirements.
Oversee technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation.
Apply data-driven decision-making with a focus on achieving product goals.
Design and develop data ingestion and processing pipelines capable of handling large-scale events.
Own the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment.
Utilize AWS/Azure services such as IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, ACR, etc.
Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery.
Develop DevOps scripts for automation and monitoring.
Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
2-5 years of experience in software development, with relevant work experience.
Strong coding skills with proficiency in Python, Java, or C++.
Experience with API frameworks, both stateless and stateful, such as FastAPI, Django, Spring, and Spring Boot.
Proficient in cloud platforms, specifically AWS, Azure, or GCP.
Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services.
Strong understanding of scalable application design principles, and experience with security best practices and compliance with privacy regulations.
Good knowledge of software engineering practices like version control (Git), DevOps (Azure DevOps preferred) and Agile or Scrum.
Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
Experience with the SDLC and development best practices.
Preferred: experience with AI/ML-based product development.
Experience with Agile methodology for continuous product development and delivery.
Why you might want to join us:
Be part of shaping one of the most exciting AI companies.
Opportunity to learn from a peer group of experts spanning AI, computer vision and robotics to data engineering and systems engineering.
Sharp, motivated co-workers in a fun office environment.
Our motto:
Put employees first. We only succeed when our employees succeed.
Think big. Be ambitious and have audacious goals.
Aim for excellence. Quality and excellence count in everything we do.
Own it and get it done. Results matter!
Embrace each other's differences.
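The qualifications above call out stateless API frameworks such as FastAPI. Here is a minimal sketch of such a service, with a liveness endpoint for the Kubernetes deployments also mentioned; the event payload model is an illustrative assumption.

```python
# Minimal FastAPI sketch: a stateless service with a health probe.
# The Event model and route names are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ingest-api")

class Event(BaseModel):
    source: str
    payload: dict

@app.get("/healthz")
def health() -> dict:
    # Liveness/readiness probe target for a Kubernetes deployment.
    return {"status": "ok"}

@app.post("/events")
def ingest(event: Event) -> dict:
    # In a real pipeline this would enqueue the event for async processing.
    return {"accepted": True, "source": event.source}
```

Run locally with `uvicorn main:app --reload`, assuming the file is saved as main.py.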
Posted 1 week ago