5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Position: Lead Software Engineer

✅ Key Responsibilities

🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing); a minimal circuit-breaker sketch follows this listing.
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.

💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).

📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.

👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.

🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tooling (linters, static analysis) and process optimization.

📌 What You’ll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficient in: React.js, React Native, Node.js, Express, AWS, Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience is a bonus

🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
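The high-availability bullet above names the circuit-breaker pattern. As a rough illustration only (not part of the posting, and in Python rather than this role's Node.js stack), a minimal breaker might look like the sketch below; the class name and thresholds are illustrative assumptions.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: trip after N consecutive failures,
    reject calls while open, and allow a retry after a cool-down period."""

    def __init__(self, max_failures=5, reset_timeout=30.0):
        self.max_failures = max_failures     # failures before the breaker opens
        self.reset_timeout = reset_timeout   # seconds to wait before a trial call
        self.failures = 0
        self.opened_at = None                # None means the breaker is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cool-down elapsed: allow one trial call (half-open state).
            self.opened_at = None
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0                           # success resets the count
        return result

# Usage: breaker = CircuitBreaker(); breaker.call(some_downstream_call, arg)
```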
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
GeekyAnts India Pvt Ltd · Services · 251-500 Employees · 4.5 rating · Bengaluru, Karnataka

About the company: GeekyAnts is a design and development studio that specializes in building web and mobile solutions that drive innovation and transform industries and lives. They hold expertise in state-of-the-art technologies like React, React Native, Flutter, Angular, Vue, NodeJS, Python, Svelte and more. GeekyAnts has worked with around 500+ clients across the globe, delivering tailored solutions to a wide array of industries such as Healthcare, Finance, Education, Banking, Gaming, Manufacturing, Real Estate and more. They are trusted tech partners of some of the world's top corporate giants and have helped small to mid-sized companies realize their vision and transform digitally. They have also been a registered service supplier for Google LLC since 2017. Their services range from Web & Mobile Development, UI/UX Design, Business Analysis, Product Management, DevOps, QA, and API Development to Delivery & Support and more. In addition, GeekyAnts is the brains behind React Native's best-known UI library, NativeBase (15,000+ GitHub stars), as well as BuilderX, Vue Native, Flutter Starter, apibeats, and numerous other open-source contributions. GeekyAnts has offices in India (Bangalore) and the UK (London).

Senior Software Engineer III · 5 vacancies · Posted 7 months ago · Salary not disclosed · 5+ years experience · Bengaluru, Karnataka

Job Description: We are seeking an experienced Senior Software Engineer III who thrives in a tech-agnostic, cloud-first environment and can architect scalable backend systems with confidence and clarity. Your expertise in backend engineering principles, system design, and performance optimization at scale will be the cornerstone of your success, not your familiarity with any one programming language. This role requires strategic thinking, hands-on backend capability, and the ability to make technology decisions aligned with business goals. If you choose the right tools for the problem, based not on preference but on what scales, we want to talk to you.
Responsibilities
- Architect and lead the design of scalable, resilient, and cloud-native backend systems
- Optimize backend systems for performance, reliability, and cost-efficiency, particularly in high-scale environments
- Make cloud-first design decisions, leveraging the best of AWS and modern cloud architectures (e.g., serverless, container orchestration, managed services)
- Collaborate with product, design, and engineering teams to translate business goals into technical solutions
- Evaluate and introduce technologies based on project needs, not personal preferences
- Champion engineering excellence, including clean code, reusable design patterns, and robust APIs
- Conduct technical design and code reviews, mentor developers, and drive continuous improvement
- Communicate effectively across engineering and non-engineering stakeholders, breaking down complex ideas simply

Required Skills
- 6+ years of backend development experience with strong system-level thinking
- Proven track record in system design, architecture decisions, and design trade-offs
- Strong understanding of performance tuning, distributed systems, data modeling, and API design
- Experience working across multiple tech stacks or programming languages
- Ability to quickly adapt to any backend framework, language, or ecosystem
- Deep experience with AWS and cloud-native architecture principles (e.g., stateless services, managed databases, autoscaling, serverless)
- Proficiency in DevOps, CI/CD pipelines, and containerization (e.g., Docker, ECS/EKS)
- Experience optimizing systems for throughput, latency, and scalability at production scale
- Familiarity with Spring Boot is a strong plus

Educational Qualifications
- B.Tech / B.E. degree in Computer Science

Rounds Description
- One-to-One In-person Interview: You will be talking directly with the HR team at GeekyAnts for communication assessment and review (HR discussion).
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Senior SRE (Engineering & Reliability)

Job Summary: We are seeking an experienced and dynamic Site Reliability Engineering (SRE) lead to oversee the reliability, scalability, and performance of our critical systems. As a Senior SRE, you will play a pivotal role in establishing and implementing SRE practices, leading a team of engineers, and driving automation, monitoring, and incident response strategies. This position combines software engineering and systems engineering expertise to build and maintain high-performing, reliable systems.

Experience: 5-10 years

Key Responsibilities:

Reliability & Performance:
• Lead efforts to maintain high availability and reliability of critical services.
• Define and monitor SLIs, SLOs, and SLAs to ensure business requirements are met (a small error-budget sketch follows this listing).
• Proactively identify and resolve performance bottlenecks and system inefficiencies.

Incident Management & Response:
• Establish and improve incident management processes and on-call rotations.
• Lead incident response and root cause analysis for high-priority outages.
• Drive post-incident reviews and ensure actionable insights are implemented.

Automation & Tooling:
• Develop and implement automated solutions to reduce manual operational tasks.
• Enhance system observability through metrics, logging, and distributed tracing tools (e.g., Prometheus, Grafana, Elastic APM).
• Optimize CI/CD pipelines for seamless deployments.

Collaboration:
• Partner with software engineering teams to improve the reliability of applications and infrastructure.
• Work closely with product and engineering teams to design scalable and robust systems.
• Ensure seamless integration of monitoring and alerting systems across teams.

Leadership & Team Building:
• Manage, mentor, and grow a team of SREs.
• Promote SRE best practices and foster a culture of reliability and performance across the organization.
• Drive performance reviews, skills development, and career progression for team members.

Capacity Planning & Cost Optimization:
• Perform capacity planning and implement autoscaling solutions to handle traffic spikes.
• Optimize infrastructure and cloud costs while maintaining reliability and performance.

Skills & Qualifications:

Required Skills:
• Technical expertise:
  o Experience with cloud platforms (AWS / Azure / GCP) and Kubernetes.
  o Hands-on knowledge of infrastructure-as-code tools like Terraform, Helm, or Ansible.
  o Proficiency in Java.
  o Expertise in distributed systems, databases, and load balancing.
• Monitoring & observability:
  o Proficient with tools like Prometheus, Grafana, Elastic APM, or New Relic.
  o Understanding of metrics-driven approaches for system monitoring and alerting.
• Automation & CI/CD:
  o Hands-on experience with CI/CD pipelines (e.g., Jenkins, Azure Pipelines).
  o Skilled in automation frameworks and tools for infrastructure and application deployments.
• Incident management:
  o Proven track record in handling incidents, post-mortems, and implementing solutions to prevent recurrence.

Leadership & Communication Skills:
• Strong people management and leadership skills with the ability to inspire and motivate teams.
• Excellent problem-solving and decision-making skills.
• Clear and concise communication, with the ability to translate technical concepts for non-technical stakeholders.

Preferred Qualifications:
• Experience with database optimization, Kafka, or other messaging systems.
• Knowledge of autoscaling techniques.
• Previous experience in an SRE, DevOps, or infrastructure engineering leadership role.
• Understanding of compliance and security best practices in distributed systems.
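To make the SLI/SLO bullet in the listing above concrete, here is a generic Python sketch (not from the posting) of how an availability SLO translates into an error budget and a burn rate; the function name, request counts, and window lengths are illustrative assumptions.

```python
def error_budget_report(total_requests, failed_requests,
                        slo_target=0.999, window_days=30, elapsed_days=7):
    """Summarize how much of an availability error budget has been spent.

    slo_target = 0.999 means at most 0.1% of requests may fail
    over the rolling window.
    """
    allowed_failure_ratio = 1.0 - slo_target
    observed_failure_ratio = failed_requests / total_requests
    # Fraction of the budget consumed so far.
    budget_spent = observed_failure_ratio / allowed_failure_ratio
    # Burn rate > 1.0 means the budget will be exhausted before the window ends.
    burn_rate = budget_spent / (elapsed_days / window_days)
    return {
        "availability": 1.0 - observed_failure_ratio,
        "budget_spent": budget_spent,
        "burn_rate": burn_rate,
    }

# Example: 10M requests, 7,000 failures, one week into a 30-day window.
print(error_budget_report(10_000_000, 7_000))
# availability ≈ 0.9993, budget_spent ≈ 0.70, burn_rate = 3.0, which is alert-worthy.
```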
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company where product innovation and mature software engineering are embedded in our core DNA. Our core values of Respect, Fairness, Growth, Agility, and Inclusiveness guide everything we do. We continually expand our expertise in digital strategy, design, architecture, and product management to offer measurable results and outside-the-box thinking.

About The Role
We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions. You will also be expected to apply modern LLMOps practices, handle schema-constrained generation, optimize cost and latency trade-offs, mitigate hallucinations, and ensure robust safety, personalization, and observability across GenAI systems.

Responsibilities

Generative AI Pipeline Development
- Design and implement scalable and modular pipelines for data ingestion, transformation, and orchestration across GenAI workloads.
- Manage data and model flow across LLMs, embedding services, vector stores, SQL sources, and APIs.
- Build CI/CD pipelines with integrated prompt regression testing and version control.
- Use orchestration frameworks like LangChain or LangGraph for tool routing and multi-hop workflows.
- Monitor system performance using tools like Langfuse or Prometheus.

Data and Document Ingestion
- Develop systems to ingest unstructured (PDF, OCR) and structured (SQL, APIs) data.
- Apply preprocessing pipelines for text, images, and code.
- Ensure data integrity, format consistency, and security across sources.

AI Service Integration
- Integrate external and internal LLM APIs (OpenAI, Claude, Mistral, Qwen, etc.).
- Build internal APIs for smooth backend-AI communication.
- Optimize performance through fallback routing to classical or smaller models based on latency or cost budgets.
- Use schema-constrained prompting and output filters to suppress hallucinations and maintain factual accuracy.

Retrieval-Augmented Generation (RAG) Pipelines (a minimal hybrid-retrieval sketch follows this listing)
- Build hybrid RAG pipelines using vector similarity (FAISS/Qdrant) and structured data (SQL/API).
- Design custom retrieval strategies for multi-modal or multi-source documents.
- Apply post-retrieval ranking using DPO or feedback-based techniques.
- Improve contextual relevance through re-ranking, chunk merging, and scoring logic.

LLM Integration and Optimization
- Manage prompt engineering, model interaction, and tuning workflows.
- Implement LLMOps best practices: prompt versioning, output validation, caching (KV store), and fallback design.
- Optimize generation using temperature tuning, token limits, and speculative decoding.
- Integrate observability and cost monitoring into LLM workflows.

Backend Services Ownership
- Design and maintain scalable backend services supporting GenAI applications.
- Implement monitoring, logging, and performance tracing.
- Build RBAC (Role-Based Access Control) and multi-tenant personalization.
- Support containerization (Docker, Kubernetes) and autoscaling infrastructure for production.

Required Skills And Qualifications

Education
- Bachelor’s or Master’s in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience
- 5+ years of experience in AI/ML engineering with end-to-end pipeline development.
- Hands-on experience building and deploying LLM/RAG systems in production.
- Strong experience with public cloud platforms (AWS, Azure, or GCP).

Technical Skills
- Proficient in Python and libraries such as Transformers, SentenceTransformers, PyTorch.
- Deep understanding of GenAI infrastructure, LLM APIs, and toolchains like LangChain/LangGraph.
- Experience with RESTful API development and version control using Git.
- Knowledge of vector DBs (Qdrant, FAISS, Weaviate) and similarity-based retrieval.
- Familiarity with Docker, Kubernetes, and scalable microservice design.
- Experience with observability tools like Prometheus, Grafana, or Langfuse.

Generative AI Specific Skills
- Knowledge of LLMs, VAEs, Diffusion Models, GANs.
- Experience building structured + unstructured RAG pipelines.
- Prompt engineering with safety controls, schema enforcement, and hallucination mitigation.
- Experience with prompt testing, caching strategies, output filtering, and fallback logic.
- Familiarity with DPO, RLHF, or other feedback-based fine-tuning methods.

Soft Skills
- Strong analytical, problem-solving, and debugging skills.
- Excellent collaboration with cross-functional teams: product, QA, and DevOps.
- Ability to work in fast-paced, agile environments and deliver production-grade solutions.
- Clear communication and strong documentation practices.

Preferred Qualifications
- Experience with OCR, document parsing, and layout-aware chunking.
- Hands-on experience with MLOps and LLMOps tools for Generative AI.
- Contributions to open-source GenAI or AI infrastructure projects.
- Knowledge of GenAI governance, ethical deployment, and usage controls.
- Experience with hallucination-suppression frameworks like Guardrails.ai, Rebuff, or Constitutional AI.

Experience and Shift
- Experience: 5+ years
- Shift Time: 2:30 PM to 11:30 PM IST
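The RAG section of the listing above mentions hybrid retrieval over vector similarity and structured sources. As a rough sketch under stated assumptions (brute-force NumPy similarity standing in for FAISS/Qdrant, a keyword filter standing in for a SQL lookup, and made-up data), the merging step might look like this:

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Brute-force vector similarity; real pipelines would use FAISS or Qdrant."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    top = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in top]

def hybrid_retrieve(query_vec, doc_vecs, doc_texts, sql_rows, keyword, k=3):
    """Merge unstructured hits (vector similarity) with structured hits
    (a keyword filter standing in for a SQL/API lookup) into one context."""
    vector_hits = [doc_texts[i] for i, _ in cosine_top_k(query_vec, doc_vecs, k)]
    structured_hits = [row for row in sql_rows if keyword.lower() in row.lower()]
    # De-duplicate while preserving order, then hand the merged context to the LLM prompt.
    seen, context = set(), []
    for chunk in vector_hits + structured_hits:
        if chunk not in seen:
            seen.add(chunk)
            context.append(chunk)
    return context

# Toy usage with random embeddings; production code would call a real embedding model.
rng = np.random.default_rng(0)
docs = ["refund policy...", "shipping times...", "warranty terms..."]
vecs = rng.normal(size=(len(docs), 8))
print(hybrid_retrieve(rng.normal(size=8), vecs, docs,
                      ["orders table: refund SLA 5 days"], "refund"))
```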
Posted 1 week ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About The Role
As a Machine Learning Operations Engineer, you will work on deploying, scaling, and optimizing backend algorithms, robust and scalable data ingestion pipelines, machine learning services, and data platforms to support analysis of vast amounts of text and analytics data. You will apply your technical knowledge and Big Data analytics on Onclusive's billions of online content data points to solve challenging marketing problems. ML Ops Engineers are integral to the success of Onclusive.

Your Responsibilities
- Design and build scalable machine learning services and data platforms.
- Utilize benchmarks, metrics, and monitoring to measure and improve services.
- Manage a system currently processing data on the order of tens of millions of jobs per day.
- Research, design, implement, and validate cutting-edge algorithms to analyze diverse sources of data to achieve targeted outcomes.
- Work with data scientists and machine learning engineers to implement ML, AI, and NLP techniques for article analysis and attribution.
- Deploy, manage, and optimize inference services on autoscaling fleets with GPUs and specialized inference hardware.

Who you are
- A degree (BS, MS, or Ph.D.) in Computer Science or a related field, accompanied by hands-on experience.
- Proficiency in Python, showcasing your understanding of Object-Oriented Programming (OOP) principles.
- Solid knowledge of containerisation (Docker preferable).
- Experience working with Kubernetes.
- Experience in Infrastructure as Code (IaC) for AWS, with a preference for Terraform.
- Knowledge of Version Control Systems (VCS), particularly Git and GitHub, alongside familiarity with CI/CD, preferably GitHub Actions.
- Understanding of release management, embracing rigorous testing, validation, and quality assurance protocols.
- Good understanding of ML principles.
- Data engineering experience (Airflow, dbt, Meltano) is highly desired.
- Exposure to deep learning tech stacks like Torch / TensorFlow.

What we can offer
We are a global, fast-growing company which offers a variety of opportunities for you to develop your skill set and career. In exchange for your contribution, we can offer you:
- Competitive salary and benefits.
- Hybrid working in a team that is passionate about the work we deliver and supporting the development of those that we work with.
- A company focus on wellbeing and work-life balance, including initiatives such as flexible working and mental health support.

We want the best talent available, regardless of race, religion, gender, gender reassignment, sexual orientation, marital status, pregnancy, disability or age.
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Location: Bangalore / Hyderabad / Chennai / Pune / Gurgaon
Mode: Hybrid (3 days/week from office)
Relevant Experience: 7+ years (must)
Role Type: Individual Contributor
Client: US-based multinational banking institution

Role Summary
We are hiring a seasoned DevOps Engineer (IC) to drive infrastructure automation, deployment reliability, and engineering velocity for AWS-hosted platforms. You'll play a hands-on role in building robust CI/CD pipelines, managing Kubernetes (EKS or equivalent), and implementing GitOps, infrastructure as code, and monitoring systems.

Must-Have Skills & Required Depth

AWS Cloud Infrastructure: Independently provisioned core AWS services (EC2, VPC, S3, RDS, Lambda, SNS, ECR) using the CLI and Terraform. Configured IAM roles, security groups, tagging standards, and cost-monitoring dashboards. Familiar with basic networking and serverless deployment models.

Containerization (EKS / Kubernetes): Deployed containerized services to Amazon EKS or equivalent. Authored Helm charts; configured ingress controllers, pod autoscaling, resource quotas, and health probes. Troubleshot deployment rollouts, service routing, and network policies.

Infrastructure as Code (Terraform / Ansible / AWS SAM): Created modular Terraform configurations with remote state, reusable modules, and drift detection. Implemented Ansible playbooks for provisioning and patching. Used AWS SAM for packaging and deploying serverless workloads.

GitOps (Argo CD / equivalent): Built and managed GitOps pipelines using Argo CD or similar tools. Configured application sync policies, rollback strategies, and RBAC for deployment automation.

CI/CD (Bitbucket / Jenkins / Jira): Developed multi-stage pipelines covering build, test, scan, and deploy workflows. Used YAML-based pipeline-as-code and integrated Jira workflows for traceability.

Scripting (Bash / Python): Wrote scripts for log rotation, backups, service restarts, and automated validations. Experienced in handling conditional logic, error management, and parameterization. (A small log-rotation sketch follows this listing.)

Operating Systems (Linux): Proficient in Ubuntu/CentOS system management, package installation, and performance tuning. Configured Apache or NGINX for reverse proxying, SSL, and redirects.

Datastores (MySQL / PostgreSQL / Redis): Managed relational and in-memory databases for application integration, backup handling, and basic performance tuning.

Monitoring & Alerting (tool-agnostic): Configured metrics collection, alert rules, and dashboards using tools like CloudWatch, Prometheus, or equivalent. Experience designing actionable alerts and telemetry pipelines.

Incident Management & RCA: Participated in on-call rotations. Handled incident bridges, triaged failures, communicated status updates, and contributed to root cause analysis and postmortems.

Nice-to-Have Skills (skill and expected depth)

Kustomize / FluxCD: Exposure to declarative deployment strategies using Kustomize overlays or FluxCD for GitOps workflows.

Kafka: Familiarity with event-streaming architecture and basic integration/configuration of Kafka clusters in application environments.

Datadog (or equivalent): Experience with Datadog for monitoring, logging, and alerting. Configured custom dashboards, monitors, and anomaly detection.

Chaos Engineering: Participated in fault-injection or resilience-testing exercises. Familiar with chaos tools or simulations for validating system durability.

DevSecOps & Compliance: Exposure to integrating security scans in pipelines, secrets management, and contributing to compliance audit readiness.

Build Tools (Maven / Gradle / NPM): Experience integrating build tools with CI systems. Managed dependency resolution, artifact versioning, and caching strategies.

Backup / DR Tooling (Veeam / Commvault): Familiar with backup scheduling, data restore processes, and supporting DR drills or RPO/RTO planning.

Certifications (AWS / Terraform): Certifications such as AWS Certified DevOps Engineer, Developer Associate, or HashiCorp Certified Terraform Associate are preferred.
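The scripting bullet above lists log rotation among typical automation tasks. A minimal sketch using only the Python standard library's RotatingFileHandler is shown below; the file path, size limit, and logger name are illustrative assumptions, not requirements from the posting.

```python
import logging
from logging.handlers import RotatingFileHandler

def build_rotating_logger(path="/var/log/myapp/app.log",
                          max_bytes=10 * 1024 * 1024, backups=5):
    """Return a logger that rolls the file over at roughly 10 MB,
    keeping five numbered backups (app.log.1 ... app.log.5)."""
    handler = RotatingFileHandler(path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

if __name__ == "__main__":
    log = build_rotating_logger(path="app.log")   # local path for a dry run
    for i in range(3):
        log.info("service heartbeat %d", i)
```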
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us
At Edifyer Technologies, we're building a category-defining, AI-first product at the intersection of conversational AI, real-time 3D, and experiential, hyper-personalised learning. Our mission is to transform how organizations learn by making workforce training intelligent. If you're a bold builder ready to shape the core of a high-impact product, this is your moment.

Who We're Looking For
We're seeking a Technical Co-founder & CTO who thrives in early-stage chaos, enjoys solving complex technical challenges, and is eager to lead the AI/ML strategy from the ground up. This is more than a job: it's an opportunity to join as a co-founder, build and own key technical systems, and directly influence the trajectory of the company.

Core Responsibilities
- Define and evolve the long-term technical vision, architecture, and system design for scalability
- Drive the transition from MVP prototypes to enterprise-grade, production-ready systems
- Collaborate closely with product and design leads to rapidly prototype and iterate on user-centric features
- Fine-tune and serve 6B-13B parameter Large Language Models (LLMs) using Triton and TensorRT
- Integrate retrieval-augmented generation (RAG) pipelines and AI safety layers (filters, guardrails, etc.)
- Design real-time pipelines (STT → LLM → TTS → Unity) using WebSockets or gRPC (a minimal WebSocket relay sketch follows this listing)
- Optimize for ultra-low latency to ensure a smooth frame budget across Unity experiences
- Partner with SDK leads to expose clean, developer-friendly C# APIs
- Enable microphone capture and viseme-driven lip sync for WebGL/WebXR environments
- Set up spot-GPU orchestration using Kubernetes (K8s) and Terraform-based Infrastructure as Code (IaC)
- Build and manage CI/CD pipelines (blue-green deployments); set up monitoring dashboards for cost and latency
- Implement OAuth2/JWT-based auth, secure secret management, and rate limiting
- Lead security hardening (OWASP) and lay the groundwork for SOC 2 Type I compliance
- Hire and mentor a small founding tech team on ESOP + cash terms
- Cultivate a strong engineering culture focused on velocity, quality, and ownership
- Manage relationships with key vendors (GPU providers, cloud infra, AI APIs) to optimize cost and reliability
- Engage directly with early customers and partners to gather feedback, debug live issues, and validate technical direction

What We're Offering
- Opportunity to become a Co-founder + significant equity ownership: your contribution deserves long-term upside. Edifyer Technologies will be your company too.
- Creative and technical freedom: you'll have a blank canvas to build, experiment, and ship without red tape.
- High-impact mission: your work will lay the foundation for the next generation of enterprise learning platforms from India that will transform the learning culture of organisations worldwide.
- Next-gen tech stack: you get to work with cutting-edge LLMs, STT/TTS, ASR, scalable cloud infra, and more, and build a world-class system.

Ideal Candidate Profile
- 7+ years building distributed or real-time ML systems; 2+ years of technical leadership.
- Hands-on LLM or speech experience: Triton, TensorRT, Riva, Whisper, or similar, with demonstrated < 1 s latency.
- Deep Python (FastAPI) & C# expertise; comfort with microservices, gRPC, WebSockets.
- Unity/WebGL chops: profiling, memory tuning, audio pipelines, viseme/blendshape animation.
- Cloud-native engineering: Docker, K8s, autoscaling, Terraform/Pulumi, Prometheus/Grafana.
- Security mindset: OAuth2, TLS everywhere, moderation gates, GDPR awareness.
- Entrepreneurial stamina: persistence and an optimistic outlook even in the face of challenges or setbacks.

How to Apply
Email us your GitHub, portfolio, or any past projects you're proud of. We're more interested in your curiosity, initiative, and raw capability.
Email: contact@edifyer.io
Subject: Applying for Technical Co-founder & CTO – [Your Name]
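The real-time pipeline bullet in the listing above (STT → LLM → TTS over WebSockets) could be skeletonized roughly as below. This is a sketch, not the company's implementation: it assumes a recent version of the third-party websockets package (single-argument handler) and uses placeholder fake_stt / fake_llm / fake_tts functions where real models would sit.

```python
import asyncio
import websockets  # third-party: pip install websockets

def fake_stt(frame):
    return f"transcript({len(frame)} bytes)"   # placeholder for a real STT call

def fake_llm(text):
    return f"answer to: {text}"                # placeholder for the LLM step

def fake_tts(text):
    return text.encode("utf-8")                # placeholder for TTS synthesis

async def pipeline_handler(ws):
    """Receive audio/text frames from a client, pass them through the
    placeholder STT -> LLM -> TTS stages, and stream the result back."""
    async for message in ws:
        text = fake_stt(message)
        reply = fake_llm(text)
        audio = fake_tts(reply)
        await ws.send(audio)   # a Unity/WebGL client would consume this frame

async def main():
    async with websockets.serve(pipeline_handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```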
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Job Title: Senior DevOps Engineer
Location: Anywhere (UST)
Experience: 6 to 10 Years

Mandatory Skills & Experience
- Azure DevOps
- ArgoCD
- Kubernetes (services, ingress, secrets, RBAC, autoscaling)
- Scripting (Bash, PowerShell, or Python)
- Hands-on experience deploying and managing applications built with Java Spring Boot and Python
- Strong understanding of CI/CD pipelines, secrets management, and secure deployment practices
- Infrastructure-as-Code (IaC) experience, preferably with Terraform

Job Description
We are looking for a Senior DevOps Engineer with 6 to 10 years of experience to manage and optimize deployment pipelines, infrastructure, and the application lifecycle. The candidate will work extensively with Azure DevOps, ArgoCD, and Kubernetes to deploy and manage containerized applications, primarily Java Spring Boot and Python-based.

Key Responsibilities
- Develop, maintain, and optimize CI/CD pipelines using Azure DevOps and ArgoCD.
- Deploy and manage containerized applications on Kubernetes clusters with a solid understanding of core Kubernetes concepts including services, ingress, secrets, RBAC, and autoscaling.
- Write and maintain automation scripts in Bash, PowerShell, or Python to streamline deployment and operations.
- Implement secrets management and ensure secure application deployment.
- Use Infrastructure-as-Code tools such as Terraform to provision and manage infrastructure.
- Collaborate with development, QA, and security teams to ensure smooth and secure deployments.
- Troubleshoot issues in deployment pipelines and Kubernetes environments.

Qualifications
- 6 to 10 years of proven experience in DevOps roles.
- Expertise with Azure DevOps, ArgoCD, and Kubernetes administration.
- Experience deploying Java Spring Boot and Python applications in containerized environments.
- Proficient in scripting and automation.
- Experience with Infrastructure-as-Code, preferably Terraform.
- Strong problem-solving skills and ability to work collaboratively.

Skills: Azure DevOps, ArgoCD, Kubernetes, Scripting
Posted 1 week ago
7.5 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: SAP Hybris Commerce
Good-to-have skills: NA
Minimum experience required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with business objectives, ensuring that the solutions provided are effective and efficient. Your role will require you to stay updated with industry trends and best practices to continuously improve application performance and user experience.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Strong understanding of e-commerce platforms and their architecture.
- Experience with integration of third-party services and APIs.
- Familiarity with agile methodologies and project management tools.
- Ability to troubleshoot and resolve technical issues efficiently.
- Performance engineering fundamentals: in-depth knowledge of latency, throughput, concurrency, scalability, and resource utilization; performance metrics such as CPU usage, memory consumption, disk I/O, and network latency; understanding of bottlenecks in multi-tiered architectures; JVM tuning (GC optimization, thread pools); database tuning (indexing, query optimization, DB connection pools).
- Monitoring & observability: knowledge of Dynatrace, New Relic, Prometheus, Grafana.
- Resource tuning: pods, autoscaling, memory/CPU optimization, load balancing, cluster configuration.
- Knowledge of Akamai caching and APG caching.
- SAP Commerce Cloud CCV2 experience is good to have.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP Hybris Commerce.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.
Posted 1 week ago
12.0 years
0 Lacs
Greater Kolkata Area
On-site
Skills: AWS Solution Architect - GenAI
Location: Kolkata
Experience: 12 - 22 Years

Job Description
- 15+ years of hands-on IT experience in the design and development of complex systems
- Minimum of 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS and SaaS platforms
- At least 4+ years of hands-on experience in cloud-native architecture design and implementation of distributed, fault-tolerant enterprise applications for the cloud
- Experience in application migration to AWS cloud using refactoring, rearchitecting and re-platforming approaches
- 3+ years of proven experience using AWS services in architecting PaaS solutions
- AWS Certified Architect

Technical Skills
- Deep understanding of Cloud Native and Microservices fundamentals
- Deep understanding of GenAI usage and LLM models; hands-on experience creating agentic flows using AWS Bedrock; hands-on experience using Amazon Q for Dev/Transform
- Deep knowledge and understanding of AWS PaaS and IaaS features
- Hands-on experience with AWS services, e.g., EC2, ECS, S3, Aurora DB, DynamoDB, Lambda, SQS, SNS, RDS, API Gateway, VPC, Route 53, Kinesis, CloudFront, CloudWatch, AWS SDK/CLI, etc.
- Strong experience in designing and implementing core services like VPC, S3, EC2, RDS, IAM, Route 53, Autoscaling, CloudWatch, AWS Config, CloudTrail, ELB, AWS Migration services, VPN/Direct Connect
- Hands-on experience enabling cloud PaaS app and data services like Lambda, RDS, SQS, MQ, Step Functions, AppFlow, SNS, EMR, Kinesis, Redshift, Elasticsearch and others
- Experience automating and provisioning cloud environments using APIs, CLI and scripts
- Experience deploying, managing and scaling applications using CloudFormation / AWS CLI
- Good understanding of AWS security best practices and the Well-Architected Framework
- Good knowledge of migrating on-premise applications to AWS IaaS
- Good knowledge of AWS IaaS (AMI, pricing model, VPC, subnets, etc.)
- Good to have: experience in cloud data processing and migration, and advanced analytics (AWS Redshift, Glue, AWS EMR, AWS Kinesis, Step Functions)
- Creating, deploying, configuring and scaling applications on AWS PaaS
- Experience in Java programming: Spring, Spring Boot, Spring MVC, Spring Security and multi-threaded programming
- Experience working with Hibernate or other ORM technologies along with JPA
- Experience working with modern web technologies such as Angular, Bootstrap, HTML5, CSS3, React
- Experience in modernization of legacy applications to modern Java applications
- Experience with DevOps tools: Jenkins/Bamboo, Git, Maven/Gradle, Jira, SonarQube, JUnit, Selenium, automated deployments and containerization
- Knowledge of relational databases and NoSQL databases, e.g., MongoDB, Cassandra
- Hands-on experience with the Linux operating system
- Experience in full life-cycle agile software development
- Strong analytical and troubleshooting skills
- Experience in Python, Node and Express JS (optional)

Main Duties
- The AWS architect takes the company's business strategy and outlines the technology systems architecture needed to support that strategy.
- Responsible for analysis, evaluation and development of enterprise long-term cloud strategic and operating plans to ensure that EA objectives are consistent with the enterprise's long-term business objectives.
- Responsible for the development of architecture blueprints for related systems.
- Responsible for recommendations on cloud architecture strategies, processes and methodologies.
- Involved in the design and implementation of best-fit solutions with respect to the Azure and multi-cloud ecosystem.
- Recommends and participates in activities related to the design, development and maintenance of the Enterprise Architecture (EA).
- Conducts and/or actively participates in meetings related to the designated project(s).
- Participates in client pursuits and is responsible for the technical solution.
- Shares best practices and lessons learned, and constantly updates the technical system architecture requirements based on changing technologies and knowledge of recent, current and upcoming vendor products and solutions.
- Collaborates with all relevant parties in order to review the objectives and constraints of each solution and determine conformance with the EA.
- Recommends the most suitable technical architecture and defines the solution at a high level.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. The client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Description: As a PE SME, you will focus on performance engineering to ensure system reliability, capacity and scalability of the core platform (based on AWS and on-prem) products and infrastructure, and develop and maintain test strategy and test development in line with the Agile development process. In this role, you will lead performance testing and engineering efforts for the Security Platform product lines and engineering disciplines. The successful candidate also acts as the point of contact for key stakeholders.

Skills and Experience:
- Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g., AWS)
- Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS, AWS key services, and DevOps tools like CloudFormation and Terraform
- Performance engineering and testing of web apps (Linux)
- Performance testing and tuning of web-based applications (a small latency-measurement sketch follows this listing)
- Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus, Lighthouse, and monitoring/logging tools (such as AppDynamics, New Relic, Splunk, Datadog)
- Windows / UNIX / Linux / web / database / network performance monitors to diagnose performance issues, along with JVM tuning and heap-analysis skills
- Docker, Kubernetes and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods and nodes; vertical/horizontal pod autoscaling concepts; high availability
- Performance testing and engineering activity planning, estimating, designing, executing and analysing output from performance tests
- Working in an agile environment, a "DevOps" team or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
- Programming and scripting language skills in Java, Shell, Scala, Groovy, Python and knowledge of security mechanisms such as OAuth
- Tools like GitHub, Jira and Confluence
- Assisting resiliency and production support teams with performance incident root cause analysis
- Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgment and decision making, being able to rationalize and present the background and reasoning for the direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills

Job Title: Performance Test Engineer
Key Skills: Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g., AWS), CloudFormation, Terraform, JMeter, Micro Focus Performance Center, BrowserStack, Taurus, Lighthouse, AppDynamics, New Relic, Splunk, Datadog, Jenkins and CI/CD pipelines, Kubernetes
Job Locations: Any Virtusa
Experience: 5-7 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 10 Days
Payroll: People Prime Worldwide
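The listing above centers on performance testing with tools such as JMeter. Purely as an illustration (not a replacement for those tools), a tiny standard-library Python script can show the core idea of firing concurrent requests and reporting latency percentiles. The URL, user count, and request volume are made-up assumptions; only run it against systems you own.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_get(url):
    """Fetch a URL once and return its latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def mini_load_test(url, users=10, requests_per_user=20):
    """Fire users * requests_per_user GETs with a small thread pool and
    report p50/p95 latency, a toy stand-in for a JMeter/LoadRunner scenario."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_get, [url] * users * requests_per_user))
    latencies.sort()
    p50 = statistics.median(latencies)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return {"samples": len(latencies), "p50_ms": round(p50, 1), "p95_ms": round(p95, 1)}

if __name__ == "__main__":
    # Point this at a test environment you own; never load-test third-party sites.
    print(mini_load_test("http://localhost:8080/health"))
```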
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. The client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Performance Test Engineer
Key Skills: Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g., AWS), CloudFormation, Terraform, JMeter, Micro Focus Performance Center, BrowserStack, Taurus, Lighthouse, AppDynamics, New Relic, Splunk, Datadog, Jenkins and CI/CD pipelines, Kubernetes
Job Locations: Any Virtusa
Experience: 5-7 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 10 Days
Payroll: People Prime Worldwide

Job Description: As a PE SME, you will focus on performance engineering to ensure system reliability, capacity and scalability of the core platform (based on AWS and on-prem) products and infrastructure, and develop and maintain test strategy and test development in line with the Agile development process. In this role, you will lead performance testing and engineering efforts for the Security Platform product lines and engineering disciplines. The successful candidate also acts as the point of contact for key stakeholders.

Skills and Experience:
- Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g., AWS)
- Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS, AWS key services, and DevOps tools like CloudFormation and Terraform
- Performance engineering and testing of web apps (Linux)
- Performance testing and tuning of web-based applications
- Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus, Lighthouse, and monitoring/logging tools (such as AppDynamics, New Relic, Splunk, Datadog)
- Windows / UNIX / Linux / web / database / network performance monitors to diagnose performance issues, along with JVM tuning and heap-analysis skills
- Docker, Kubernetes and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods and nodes; vertical/horizontal pod autoscaling concepts; high availability
- Performance testing and engineering activity planning, estimating, designing, executing and analysing output from performance tests
- Working in an agile environment, a "DevOps" team or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
- Programming and scripting language skills in Java, Shell, Scala, Groovy, Python and knowledge of security mechanisms such as OAuth
- Tools like GitHub, Jira and Confluence
- Assisting resiliency and production support teams with performance incident root cause analysis
- Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgment and decision making, being able to rationalize and present the background and reasoning for the direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills
Posted 1 week ago
17.0 - 20.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Developer Experience is a growing department within the Global Technology division of Bank of America. We drive modernization of technology tools and processes and Operational Excellence work across Global Technology. The organization operates in a very dynamic and fast-paced global business environment. As such, we value versatility, creativity, and innovation provided through individual contributors and teams that come from diverse backgrounds and experiences. We believe in an Agile SDLC environment with a strong focus on technical excellence and continuous process improvement.

Job Description
We are seeking a strategic and hands-on Principal Engineer to drive the design, modernization, and delivery of secure enterprise-grade applications at scale. In this role, you will shape architectural decisions, introduce modern engineering practices, and influence platform and product teams to build secure, scalable, and observable systems. This is a high-impact technical leadership role for a proven engineer passionate about cloud-native architecture, developer experience, and responsible innovation.

Responsibilities
- Lead architecture, design and development of modern, distributed applications using a modern tech stack, frameworks, and cloud-native patterns.
- Provide hands-on leadership in designing system components, APIs, and integration patterns, ensuring high performance, security, and maintainability.
- Define and enforce architectural standards, reusable patterns, coding practices and technical governance across engineering teams.
- Guide the modernization of legacy systems into modern architectures, optimizing for resilience, observability, and scalability.
- Integrate secure-by-design principles across the SDLC through threat modeling, DevSecOps practices, and zero-trust design.
- Drive engineering effectiveness by enhancing observability and developer metrics and promoting runtime resiliency.
- Champion the responsible adoption of Generative AI tools to improve development productivity, code quality and automation.
- Collaborate with product owners, platform teams, and stakeholders to align application design with business goals.
- Champion DevSecOps, API-first design, and test automation to ensure high-quality and secure software delivery.
- Evaluate and introduce new tools, frameworks, and design patterns that improve engineering efficiency and platform consistency.
- Mentor and guide engineers through design reviews, performance tuning and technical deep dives.

Requirements

Education: Graduation / Post Graduation: BE/B.Tech/MCA
Certifications (if any): NA
Experience Range: 17 to 20 years

Foundational Skills
- Proven expertise in architecting large-scale distributed systems, with a strong focus on Java-based cloud-native applications using Spring Boot, Spring Cloud and API-first design; experience defining reference architectures, reusable patterns, and modernization blueprints.
- Deep hands-on experience with container orchestration platforms like Kubernetes/OpenShift, including service mesh, autoscaling, observability and cost-aware architecture.
- In-depth knowledge of relational and NoSQL data platforms (e.g., Oracle, PostgreSQL, MongoDB, Redis), including data modeling for microservices, transaction patterns, distributed consistency, caching strategies, and query performance optimization.
- Expertise in CI/CD pipelines, GitOps and DevSecOps practices for secure, automated application delivery; strong understanding of API lifecycle, runtime resiliency, and multi-environment release strategies.
- Strong grasp of threat modeling, secure architecture principles, and zero-trust application design, with experience integrating security throughout the software development lifecycle.
- Demonstrated experience using GenAI tools (e.g., GitHub Copilot) to enhance the software development lifecycle (prompt engineering for code generation, automated test creation, refactoring, and architectural validation), with an emphasis on responsible use, prompt design and maximizing engineering efficiency.

Desired Skills
- Experience modernizing legacy applications to modern cloud-native architectures (e.g., microservices, event-driven).
- Experience with big data platforms or architectures supporting real-time or large-scale transactional systems is a big plus.
- Exposure to AI/ML workflows, including integration with ML APIs and orchestration of AI-powered features.
- Demonstrated ability to explore emerging technologies like platform engineering, internal developer tooling and AI-augmented architecture.

Work Timings: 11:30 AM to 8:30 PM IST
Job Location: Mumbai, Chennai, Hyderabad
Posted 1 week ago
6.0 - 7.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In quality engineering at PwC, you will focus on implementing leading-practice standards of quality in software development and testing processes. In this field, you will use your experience to identify and resolve defects, optimise performance, and enhance user experience.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Job Description: Performance Test Lead (Senior Associate)

Strategic
- Partner with application, delivery, and infrastructure architects to collaborate on recommendations and fine-tuning opportunities, and provide input into autoscaling recommendations.
- Drive and coordinate the implementation and execution of performance standards and best practices across applications.
- Lead a group of performance test engineers; coach and mentor the team on day-to-day activities, including using monitoring tools for bottleneck analysis.

Tactical
- Demonstrate knowledge of high-level application architecture design to create a detailed performance test strategy.
- 6-7+ years of experience and proficiency with performance tools JMeter, BlazeMeter and LoadRunner.
- 3+ years of hands-on experience with monitoring tools like Datadog, App Insights, and Splunk to help teams with bottleneck analysis and fine-tuning recommendations.
- Experience with mobile performance testing tools like HeadSpin is nice to have.
- Responsible for planning performance testing with project/product teams as a core element of pre-production testing.
- Leverage performance monitoring tools to analyze bottlenecks and provide recommendations.
- Work closely with the DevOps/production support team to create workload model patterns for load emulation similar to production (a small workload-model calculation follows this listing).
- Collaborate with project/product teams to incorporate performance testing as a practice.
- Day-to-day management of the vendor teams to ensure performance testing and execution are being performed smoothly.
- Drive and coordinate the implementation and execution of performance standards and best practices across applications.
- As a senior performance test lead, guide junior engineers and handle stakeholder management, budgeting, application performance testing, and architectural reviews.
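For the workload-modelling bullet above, a common back-of-the-envelope approach (not specific to this posting) is Little's Law: concurrency N equals throughput X times the time each user spends per iteration (response time plus think time). A tiny illustrative calculation with made-up numbers:

```python
def required_virtual_users(target_tps, avg_response_s, think_time_s):
    """Little's Law: concurrency N = throughput X * time per iteration
    (response time + think time). Used to size a load-test scenario."""
    return target_tps * (avg_response_s + think_time_s)

def pacing_per_user(target_tps, virtual_users):
    """Seconds between iterations for each virtual user to hit the target rate."""
    return virtual_users / target_tps

# Example: production peak is 50 requests/s, responses average 0.8 s,
# and users think for 4.2 s between actions.
vusers = required_virtual_users(target_tps=50, avg_response_s=0.8, think_time_s=4.2)
print(vusers)                       # 250 virtual users
print(pacing_per_user(50, vusers))  # 5.0 s pacing per user
```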
Posted 1 week ago
10.0 years
0 Lacs
Noida
On-site
JOB DESCRIPTION

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Our Company
As one of the world’s most innovative software companies whose products touch billions of people around the world, Adobe empowers everyone, everywhere to imagine, create, and bring any digital experience to life. From creators and students to small businesses, global enterprises, and nonprofit organizations — customers choose Adobe products to ideate, collaborate, be more productive, drive business growth, and build remarkable experiences. Our 30,000+ employees worldwide are creating the future and raising the bar as we drive the next decade of growth. We’re on a mission to hire the very best and believe in creating a company culture where all employees are empowered to make an impact. At Adobe, we believe that great ideas can come from anywhere in the organization. The next big idea could be yours.

The Opportunity
Adobe is revolutionizing digital experiences by empowering users to craft, manage, and share content effortlessly.

What you'll Do
Build high-quality and performant solutions and features using web technologies. Drive solutioning and architecture discussions in the team and technically guide and mentor the team. Partner with product management on the technical feasibility of features, caring about user experience as well as performance. Stay proficient in emerging industry technologies and trends, bringing that knowledge to the team to influence product direction. Use a combination of data and instinct to make decisions and move at a rapid pace. Craft a culture of collaboration and shared accomplishments, having fun along the way.

What you need to succeed
Strong technical background and analytical abilities, with experience developing services based on Java/Javascript and web application experience. An interest in and ability to learn new technologies. Demonstrated results working in a diverse, global, team-focused environment. 10+ years of relevant experience in software engineering, with 1+ year as Tech Lead/Architect for engineering teams. Proficiency in the latest technologies like Web Components and TypeScript (or other Javascript frameworks). Familiarity with MVC frameworks and concepts such as HTML, DOM, CSS, REST, AJAX, responsive design, and development with tests. Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, serverless architecture, etc., or a similar technology stack. Able to define APIs and integrate them into web applications using XML, JSON, SOAP/REST APIs. Knowledge of software fundamentals including design principles and analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them. Ability to work proactively and independently with minimal supervision.

At Adobe, we believe in creating a company culture where all employees are empowered to make an impact.
Learn more about Adobe life, including our values and culture, focus on people, purpose and community, Adobe For All, comprehensive benefits programs, the stories we tell, the customers we serve, and how you can help us change the world through personalized digital experience. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 1 week ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We're hiring a Python SDE 1 to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate on our core platform, which handles everything from onboarding a seller to serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure. Some Specific Requirements: At least 2+ years of development experience. Prior experience developing and working on consumer-facing web/app products. Solid experience in Python and in building web/app-based tech products. Experience in at least one of the following frameworks: Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado. Working knowledge of MySQL, MongoDB, Redis, Aerospike. Good understanding of data structures, algorithms, and operating systems. You've worked with core AWS services and have experience with EC2, ELB, AutoScaling, CloudFront, S3, ElastiCache. Understanding of Kafka, Docker, Kubernetes. Knowledge of Solr and Elasticsearch. Attention to detail. You can dabble in frontend codebases using HTML, CSS, and JavaScript. You love doing things efficiently; the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves. You might not have experience with all the tools that we use, but you can learn them given guidance and resources.
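For illustration only (not part of the listing), here is a minimal sketch of the kind of Redis-cached Flask endpoint such a role builds; the route, the `get_product_from_db` helper, and the local Redis location are hypothetical placeholders.

```python
# Minimal sketch of a cached REST endpoint, assuming a local Redis instance and
# a hypothetical get_product_from_db() lookup; all names are placeholders.
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, db=0)


def get_product_from_db(product_id: int) -> dict:
    # Placeholder for a real MySQL/MongoDB lookup.
    return {"id": product_id, "name": f"product-{product_id}"}


@app.route("/products/<int:product_id>")
def get_product(product_id: int):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:
        return jsonify(json.loads(cached))
    product = get_product_from_db(product_id)
    cache.setex(key, 300, json.dumps(product))  # cache for 5 minutes
    return jsonify(product)


if __name__ == "__main__":
    app.run(debug=True)
```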
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Our Company As one of the world’s most innovative software companies whose products touch billions of people around the world, Adobe empowers everyone, everywhere to imagine, create, and bring any digital experience to life. From creators and students to small businesses, global enterprises, and nonprofit organizations — customers choose Adobe products to ideate, collaborate, be more productive, drive business growth, and build remarkable experiences. Our 30,000+ employees worldwide are creating the future and raising the bar as we drive the next decade of growth. We’re on a mission to hire the very best and believe in creating a company culture where all employees are empowered to make an impact. At Adobe, we believe that great ideas can come from anywhere in the organization. The next big idea could be yours. The Opportunity Adobe is revolutionizing digital experiences by empowering users to craft, manage, and share content effortlessly. What You'll Do Build high-quality and performant solutions and features using web technologies. Drive solutioning and architecture discussions in the team and technically guide and mentor the team. Partner with product management on the technical feasibility of features, staying passionate about user experience as well as performance. Stay proficient in emerging industry technologies and trends, bringing that knowledge to the team to influence product direction. Use a combination of data and instinct to make decisions and move at a rapid pace. Craft a culture of collaboration and shared accomplishments, having fun along the way. What you need to succeed Strong technical background and analytical abilities, with experience developing services based on Java/JavaScript and web applications. An interest in and ability to learn new technologies. Demonstrated results working in a diverse, global, team-focused environment. 10+ years of relevant experience in software engineering, with 1+ year as a Tech Lead/Architect for engineering teams. Proficiency in the latest technologies like Web Components and TypeScript (or other JavaScript frameworks). Familiarity with MVC frameworks and concepts such as HTML, DOM, CSS, REST, AJAX, responsive design, and development with tests. Experience with AWS and knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, Serverless Architecture, etc., or a similar technology stack. Able to define APIs and integrate them into web applications using XML, JSON, and SOAP/REST APIs. Knowledge of software fundamentals, including design principles and analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them. Ability to work proactively and independently with minimal supervision. At Adobe, we believe in creating a company culture where all employees are empowered to make an impact.
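As a hedged illustration of the AWS integration skills the posting names (S3, SQS), a small boto3 sketch follows; the bucket name, queue URL, and `publish_render_job` helper are invented for the example and are not from the posting.

```python
# Illustrative sketch only: persist a JSON payload to S3 and notify a worker
# queue via SQS. Bucket, queue URL, and account ID are made-up placeholders.
import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")


def publish_render_job(document_id: str, payload: dict) -> None:
    # Store the document payload, then enqueue a message for downstream workers.
    s3.put_object(
        Bucket="example-render-assets",  # hypothetical bucket
        Key=f"documents/{document_id}.json",
        Body=json.dumps(payload).encode("utf-8"),
        ContentType="application/json",
    )
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/render-jobs",  # placeholder
        MessageBody=json.dumps({"documentId": document_id}),
    )
```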
Learn more about Adobe life, including our values and culture, focus on people, purpose and community, Adobe For All, comprehensive benefits programs, the stories we tell, the customers we serve, and how you can help us change the world through personalized digital experience. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 2 weeks ago
8.0 years
0 Lacs
India
On-site
Job Summary: We are looking for an experienced Cloud Platform Lead to spearhead the design, implementation, and governance of scalable, secure, and resilient cloud-native platforms on Azure. This role requires deep technical expertise in Azure services, Kubernetes (AKS), containers, Application Gateway, Front Door, WAF, and API management, along with the ability to lead cross-functional initiatives and define cloud platform strategy and best practices.
Key Responsibilities:
● Lead the architecture, development, and operations of Azure-based cloud platforms across environments (dev, staging, production).
● Design and manage Azure Front Door, Application Gateway, and WAF to ensure global performance, availability, and security.
● Design and implement the Kubernetes platform (AKS), ensuring reliability, observability, and governance of containerized workloads.
● Drive adoption and standardization of Azure API Management for secure and scalable API delivery.
● Collaborate with security and DevOps teams to implement secure-by-design cloud practices, including WAF rules, RBAC, and network isolation.
● Guide and mentor engineers in Kubernetes, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC).
● Define and implement monitoring, logging, and alerting best practices using tools like Azure Monitor, ELK, Signoz.
● Evaluate and introduce tools, frameworks, and standards to continuously evolve the cloud platform.
● Participate in cost optimization and performance tuning initiatives for cloud services.
Required Skills & Qualifications:
● 8+ years of experience in cloud infrastructure or platform engineering, including at least 4+ years in a leadership or ownership role.
● Deep hands-on expertise with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management.
● Strong experience with Kubernetes and Azure Kubernetes Service (AKS), including networking, autoscaling, and security.
● Proficient with Docker and container orchestration principles.
● Infrastructure-as-Code experience with Terraform, ARM Templates, or Bicep.
● Excellent understanding of cloud security, identity (AAD, RBAC), and compliance.
● Experience building and guiding CI/CD workflows using tools like Azure DevOps, Bitbucket CI/CD, or similar.
Education: B Tech / BE / M Tech / MCA
Job Type: Full-time
Schedule: Day shift
Application Question(s): What is your total years of experience? What is the relevant years of experience? What is your current CTC? What is your expected CTC? How long is the notice period? How many years of experience in Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management? How many years of experience in Terraform, ARM Templates, or Bicep? How many years of experience in Kubernetes and Azure Kubernetes Service (AKS)? How many years of experience in designing and implementing Azure architecture for production-grade applications on Kubernetes? How many years of experience in Docker and container orchestration principles?
Work Location: In person
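By way of illustration, a minimal sketch of creating the kind of autoscaling policy (a Horizontal Pod Autoscaler) this role governs on AKS, using the official Kubernetes Python client; the namespace, deployment name, and thresholds are assumptions, not values from the posting.

```python
# Sketch: create an autoscaling/v1 HPA for a hypothetical "orders-api" Deployment.
# Works against any conformant cluster (including AKS) given valid kubeconfig access.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-api-hpa", namespace="prod"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="prod", body=hpa)
```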
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Experience: 3 to 6 years Location: Mumbai (Onsite) Openings: 2 About the Role: We are looking for hands-on and automation-driven Associate Cloud Engineers to join our DevOps team at Gray Matrix. You will be responsible for managing cloud infrastructure, CI/CD pipelines, containerized deployments, and ensuring platform stability and scalability across environments. Key Responsibilities: Design, build, and maintain secure and scalable infrastructure on AWS, Azure, or GCP. Set up and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins. Manage Dockerized environments, ECS, EKS, or Kubernetes clusters for microservice-based deployments. Monitor and troubleshoot production and staging environments, ensuring uptime and performance. Work closely with developers to streamline release cycles and automate testing, deployments, and rollback procedures. Maintain infrastructure as code using Terraform or CloudFormation. What We’re Looking For: 3–6 years of experience in DevOps or cloud engineering roles. Strong knowledge of Linux system administration, networking, and cloud infrastructure (preferably AWS). Experience with Docker, Kubernetes, Nginx, and monitoring tools like Prometheus, Grafana, or CloudWatch. Familiarity with Git, scripting (Shell/Python), and secrets management tools. Ability to debug infrastructure issues, logs, and deployments across cloud-native stacks. Bonus Points: Certification in AWS/GCP/Azure DevOps or SysOps. Exposure to security, cost optimization, and autoscaling setups. Work Mode: Onsite – Mumbai Reporting To: Senior Cloud Engineer / Lead Cloud Engineer
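As a rough, hypothetical example of the monitoring automation this role covers, a boto3 snippet that creates a CloudWatch CPU alarm; the instance ID, region, and SNS topic ARN are placeholders.

```python
# Illustrative only: alarm when average EC2 CPU stays above 80% for 10 minutes,
# notifying a (placeholder) SNS topic used for ops alerts.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",  # placeholder alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute evaluation window
    EvaluationPeriods=2,        # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder topic
)
```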
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Design, provision, and document a production-grade AWS micro-service platform for an Apache-powered ERP implementation—hitting our 90-day “go-live” target while embedding DevSecOps guard-rails the team can run without you. Key Responsibilities Cloud Architecture & IaC Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, Route 53. Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling. CI/CD & GitOps Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan). Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.). DevSecOps Controls Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, ECR image scanning. Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email. Automation & DR Define backup plans (RDS PITR, EBS, S3 Std-IA → Glacier). Document cross-Region fail-over run-book (Route 53 health-checks). Standard Operating Procedures Draft SOPs for patching, scaling, on-call, incident triage, budget monitoring. Knowledge Transfer (KT) Run 3× 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill). Produce “Day-2” wiki: diagrams (Mermaid), run-books, FAQ. Required Skill Set 8+ yrs designing AWS micro-service / Kubernetes architectures (ideally EKS on Graviton). Expert in Terraform, Helm, GitHub Actions, Argo CD. Hands-on with RDS MariaDB, Kafka (MSK), Redis, SageMaker endpoints. Proven DevSecOps background: OPA, IAM least-privilege, vulnerability scanning. Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff. Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge. How Success Is Measured Go-live readiness — production cluster passes load, fail-over, and security tests by Day 75. Zero critical CVEs exposed in the final Trivy scan. 99% IaC coverage — manual console changes not permitted. Team self-sufficiency — internal staff can recreate the stack from scratch using docs + KT alone.
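A minimal sketch of the S3 lifecycle policy the brief describes (Standard-IA, then Glacier) follows; it is written with boto3 rather than Terraform for brevity, and the bucket name, prefix, transition windows, and retention period are assumptions.

```python
# Sketch: transition objects to STANDARD_IA after 30 days, GLACIER after 180,
# and expire after ~7 years. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="erp-archive-example",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-documents",
                "Status": "Enabled",
                "Filter": {"Prefix": "documents/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 2555},  # roughly seven years
            }
        ]
    },
)
```

In a Terraform-first setup the same rule would normally live in the S3 module rather than be applied imperatively; the snippet only shows the shape of the policy.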
Posted 2 weeks ago
6.0 - 11.0 years
10 - 16 Lacs
Bengaluru
Work from Office
What You'll Do Developing, testing, debugging, and troubleshooting of (containerized) applications hosted on Kubernetes. Manage Kubernetes clusters, including deployment, scaling, and maintenance of containerized applications. Design, implement, and manage cloud infrastructure using AWS and Azure Cloud to ensure high availability and scalability. Collaborate with development and operations teams to enhance our CI/CD pipelines for efficient code deployment and testing. Implement and maintain monitoring, logging, and alerting solutions for system and application health. Automate infrastructure provisioning and configuration using Terraform IaC. Stay current with emerging DevOps technologies and industry best practices. What You'll Bring Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). At least 6+ years of relevant experience as a DevOps Engineer. In-depth knowledge of AWS/Azure services like IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, ACR, etc. Strong knowledge of containers (Docker) and container orchestration; hands-on Kubernetes experience with AKS/EKS. Hands-on experience with Linux and the concepts around it. Strong CI/CD skills, preferably Azure DevOps. Experience in setting up logging and monitoring functionalities with tools such as Prometheus, Loki, Promtail, Grafana. Good to have coding skills for automating regular activities using Python/Bash. Good to have experience with Terraform for cloud infrastructure automation. Good to have an understanding of end-to-end product development and AI/ML concepts. Experience in source code management: GitLab/GitHub/Bitbucket, preferably Azure DevOps. Ability to collaborate with cross-functional teams. Good communication skills to explain technical ideas to non-technical people. Additional Skills: Understanding of DevOps CI/CD and data security; experience in designing on cloud platforms; willingness to travel to other global offices as needed to work with the client or other internal project teams.
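To illustrate the "Python for automating regular activities" item, a small assumed-context script that queries the Prometheus HTTP API for pods that are not ready; the Prometheus URL, metric, and namespace are placeholders.

```python
# Sketch: count not-ready pods via Prometheus (kube-state-metrics' metric
# kube_pod_status_ready). The server URL and namespace are assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"


def pods_not_ready(namespace: str) -> float:
    query = f'sum(kube_pod_status_ready{{condition="false", namespace="{namespace}"}})'
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An empty result vector means nothing matched, i.e. zero not-ready pods.
    return float(result[0]["value"][1]) if result else 0.0


if __name__ == "__main__":
    print(f"Pods not ready in production: {pods_not_ready('production')}")
```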
Posted 2 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description We are looking for a skilled and proactive Full Stack .NET Developer with 4+ years of experience to join our engineering team at ELLKAY Software Pvt. Ltd. Your position as a Software Engineer will be instrumental in coding, testing, and maintaining software applications that power our organisation's product(s). As a crucial part of our development team, you will also be responsible for developing and integrating cloud services and deployments, as well as building scalable and performant solutions on cloud platforms such as AWS or Azure. Key Responsibilities Develop efficient C# client applications (.NET Framework) and robust APIs and services using .NET Core that interact seamlessly with backend services. Develop and maintain cloud-native applications using services offered by AWS or Azure. Design clean architecture that enables easy maintenance and scalability, and identify/manage performance bottlenecks of the application and APIs. Implement monitoring solutions using Prometheus and Grafana and centralized logging like the ELK stack to gain insight into application performance and health. Follow best practices for security measures like HTTPS, JWT authentication, and secure storage of sensitive information. Participate in code reviews, architecture discussions, and design. Skills And Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of experience as a Full Stack .NET Developer on Microsoft applications and working experience on cloud-based applications. Efficient communication skills and the ability to work collaboratively within a team and closely with cross-functional teams. Skills Strong experience with .NET Core / .NET 8 and C# development. Solid understanding of DevOps principles and hands-on experience with CI/CD pipelines (e.g., Azure DevOps, Jenkins, GitHub Actions). Familiarity with containerization (Docker) and orchestration (Kubernetes) deployment to managed services (e.g., ConfigMaps, Secrets, Horizontal Pod Autoscaling). Experience with either Azure or AWS cloud. Skills Experience with monitoring and logging tools like Prometheus, Grafana, Azure Monitor, or CloudWatch. Experience with databases such as PostgreSQL. Experience with event-driven architectures or microservices. Cloud certification is a plus (e.g., AWS Developer Associate, Azure Developer Associate).
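As a language-agnostic illustration of the JWT authentication practice mentioned above (the role itself is .NET), a short PyJWT sketch follows; the signing key, claims, and 30-minute lifetime are placeholder choices, not values from the posting.

```python
# Hypothetical illustration of issuing and validating a short-lived HS256 JWT.
import datetime

import jwt  # PyJWT

SECRET = "replace-with-a-securely-stored-secret"  # e.g. pulled from a secrets manager


def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")


def validate_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```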
Posted 2 weeks ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Profile Our client is a global IT services company with offices in India and the United States. It helps businesses with digital transformation, provides IT collaborations, and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in the fields of Data, IoT, AI, Cloud Infrastructure and SAP, it helps accelerate digital transformation through key practice areas - IT staffing on demand, innovation and growth by focusing on cost and problem solving. Location & work – New Delhi (On-Site), WFO Employment Type - Full Time Profile – Platform Engineer Preferred experience – 3-5 Years The Role: We are looking for a highly skilled Platform Engineer to join our infrastructure and data platform team. This role will focus on the integration and support of Posit for data science workloads, managing R language environments, and leveraging Kubernetes to build scalable, reliable, and secure data science infrastructure. Responsibilities: Integrate and manage Posit Suite (Workbench, Connect, Package Manager) within containerized environments. Design and maintain scalable R environment integration (including versioning, dependency management, and environment isolation) for reproducible data science workflows. Deploy and orchestrate services using Kubernetes, including Helm-based Posit deployments. Automate provisioning, configuration, and scaling of infrastructure using IaC tools (Terraform, Ansible). Collaborate with Data Scientists to optimize R runtimes and streamline access to compute resources. Implement monitoring, alerting, and logging for Posit components and Kubernetes workloads. Ensure platform security and compliance, including authentication (e.g., LDAP, SSO), role-based access control (RBAC), and network policies. Support continuous improvement of DevOps pipelines for platform services. Must-Have Qualifications Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Minimum 3+ years of experience in platform, DevOps, or infrastructure engineering. Hands-on experience with Posit (RStudio) products including deployment, configuration, and user management. Proficiency in R integration practices in enterprise environments (e.g., dependency management, version control, reproducibility). Strong knowledge of Kubernetes, including Helm, pod security, and autoscaling. Experience with containerization tools (Docker, OCI images) and CI/CD pipelines. Familiarity with monitoring tools (Prometheus, Grafana) and centralized logging (ELK, Loki). Scripting experience in Bash, Python, or similar. Preferred Qualifications Experience with cloud-native Posit deployments on AWS, GCP, or Azure. Familiarity with Shiny apps, RMarkdown, and their deployment through Posit Connect. Background in data science infrastructure, enabling reproducible workflows across R and Python. Exposure to JupyterHub or similar multi-user notebook environments. Knowledge of enterprise security controls, such as SSO, OAuth2, and network segmentation. Application Method Apply online on this portal or on email at careers@speedmart.co.in
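For illustration, a minimal health-check sketch of the kind this role might script, using the official Kubernetes Python client; the "posit" namespace is an assumption rather than a value from the posting.

```python
# Sketch: list pods in a hypothetical "posit" namespace and report phase and
# container restart counts, assuming kubeconfig access to the cluster.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_namespaced_pod(namespace="posit").items:
    statuses = pod.status.container_statuses or []  # may be None while pending
    restarts = sum(cs.restart_count for cs in statuses)
    print(f"{pod.metadata.name:50s} phase={pod.status.phase} restarts={restarts}")
```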
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.
Job Title: Performance Tester
Key Skills: AWS, JMeter, AppDynamics, New Relic, Splunk, DataDog
Job Locations: Chennai, Pune
Experience: 65-7
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate
Job Description: Experience, Skills and Qualifications:
• Performance engineering, testing, and tuning of cloud-hosted digital platforms (e.g. AWS)
• Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS and AWS key services, and DevOps tools like CloudFormation and Terraform
• Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
• Performance engineering toolsets such as JMeter, Microfocus Performance Center, BrowserStack, Taurus, Lighthouse
• Monitoring/logging tools (such as AppDynamics, New Relic, Splunk, DataDog)
• Windows / UNIX / Linux / web / database / network performance monitors to diagnose performance issues, along with JVM tuning and heap analysing skills
• Docker, Kubernetes, and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods & nodes, vertical/horizontal pod autoscaling concepts, high availability
• Performance testing and engineering activity planning, estimating, designing, executing, and analysing output from performance tests
• Working in an agile environment, a "DevOps" team, or a similar multi-skilled team in a technically demanding function
• Jenkins and CI-CD pipelines, including pipeline scripting
• Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
• Programming and scripting language skills in Java, Shell, Scala, Groovy, Python, and knowledge of security mechanisms such as OAuth
• Tools like GitHub, Jira & Confluence
• Assisting with resiliency production support teams and performance incident root cause analysis
• Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
• High levels of judgment and decision making, being able to rationalize and present the background and reasoning for direction taken
• Strong stakeholder management and excellent communication skills
• Extensive knowledge of risk management and mitigation
• Strong analytical and problem-solving skills
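As a sketch of the scripted load testing this role involves, here is a short Locust scenario (Locust is a Python load-testing tool, used here only as a stand-in for the JMeter suites named in the listing); the host and endpoints are placeholders.

```python
# Illustrative Locust scenario: browse-heavy traffic mix against a staging host.
from locust import HttpUser, task, between


class CatalogUser(HttpUser):
    host = "https://staging.example.com"  # placeholder target
    wait_time = between(1, 3)             # think time between requests


    @task(3)
    def browse_catalog(self):
        self.client.get("/api/products")

    @task(1)
    def view_product(self):
        self.client.get("/api/products/42")
```

A run such as `locust -f loadtest.py --headless -u 100 -r 10` would ramp to 100 simulated users at 10 users per second and report latency percentiles and failure rates.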
Posted 2 weeks ago
10.0 years
0 Lacs
Delhi
On-site
Purpose of role: Mid-level leadership role in Service Management. Maintain excellent service uptime levels for both external client-facing services and internal high-impact tools. Manage engineers and architects within the team and provide higher management with a clear high-level overview of the team’s activities and progress. Seniority is based on years of experience, knowledge, and skill-set. This role is also hands-on in the day-to-day operations of the team. Experience: 10+ years for Senior Manager (12+ years for Director position) Role: Technical, Sr. Manager / Director (MC) Knowledge and Skill-set: Degree in Computer Science, Software Engineering, IT or related discipline. 10+ years’ professional experience in infrastructure (on-premise/cloud) / Linux administration / networking / client project implementations, and experience in leading an infrastructure team. Must have a strong background in cloud infrastructure, from serverless up to containerization. Must have a general idea about (but not limited to): cloud infrastructure, Continuous Integration/Continuous Deployment. Must be an expert in infrastructure best practices and practise them where applicable. Must have in-depth knowledge of AWS (or similar), including: AutoScaling, S3, CloudFront, Route53, IAM, Certificate Manager, DynamoDB/MongoDB and RDS. Must have in-depth knowledge of Jenkins or other CI/CD environments. Must be familiar with cost optimisation for both client and internal projects. Must have the ability to develop and manage a budget. Must have, at least, the following certification(s): AWS Certified Solutions Architect - Associate (Professional will be preferred). Must have an understanding of software development processes, tools, and skill in at least two languages (back-end/front-end/scripting/JS). Strong written and verbal communication skills in English; must also be able to explain solutions simply to other team members and clients, who don’t necessarily have to be technical. Experience with containerisation and orchestration. Responsibilities: Lead the Infrastructure Operations team. Act as an escalation point for the Infrastructure Operations team. Act as mentor and escalation point for the Support Engineering team. Analyse system requirements. Recommend alternative technologies where applicable. Work closely with higher management and provide high-level reporting of the team’s activities. Document his/her work in a clear and concise manner. Always be on the lookout for gaps in general day-to-day operations. Provide suggestions for where things can be automated. Work closely with internal stakeholders (Delivery Managers, Engineers, Support, Products, and QA) to implement the best solutions for clients and define clear roadmaps and milestones. Work closely with the Engineering Directors to define architecture standards, policies and processes, and governing methodologies on aspects including (but not limited to) infrastructure, efficiency, security, and reliability. Draft, review, and manage proposals and commercial contracts. Carry out management tasks such as resourcing, budgeting, proposal and commercial contract preparation/review, mentoring, etc. Country: India
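As a hedged sketch of one cost-optimisation check such a role might own, the snippet below lists unattached EBS volumes with boto3; the region is an assumption and the output depends entirely on the account queried.

```python
# Illustrative only: find EBS volumes in "available" status (not attached to any
# instance), which are a common source of avoidable storage spend.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is a placeholder

paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

total_gb = sum(volume["Size"] for volume in unattached)  # Size is reported in GiB
print(f"{len(unattached)} unattached volumes, {total_gb} GiB of unused storage")
```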
Posted 2 weeks ago