
29063 GCP Jobs - Page 22

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 years

27 - 28 Lacs

Gurugram, Haryana, India

On-site

.NET Technical Lead
Location: HCL Gurugram (Work from Office)
Notice Period: 30 Days
Experience: 10+ years
Responsibilities:
- Lead the full software development lifecycle: design, development, testing, deployment, and maintenance of .NET applications.
- Drive architectural design and implementation using C#, ASP.NET MVC, Web API, Entity Framework, and SQL Server.
- Mentor and guide a team of developers; ensure code quality through reviews and best practices.
- Collaborate with cross-functional teams (product, UI/UX, QA, DevOps) in an Agile environment.
- Plan, coordinate, and execute CI/CD pipelines using tools like Jenkins.
- Integrate cloud-native solutions and services (Azure, AWS, or GCP) into applications.
- Engage with stakeholders to translate business requirements into robust technical solutions.
- Troubleshoot, debug, and optimize performance of existing systems.
Required Skills & Expertise:
- Backend: Strong experience with C#, ASP.NET MVC, Web API, Entity Framework, LINQ, and SQL Server.
- Frontend: Proficiency in JavaScript frameworks (AngularJS, React, or Vue.js) and familiarity with HTML5, CSS3, and Bootstrap.
- Cloud & DevOps: Practical knowledge of cloud platforms (Azure, AWS, GCP) and CI/CD tools (Jenkins or equivalent).
- Agile Methodologies: Solid understanding of Scrum/Kanban, sprint planning, and iterative delivery.
- Leadership & Communication: Excellent team management, collaboration, and stakeholder engagement skills.
- Independence & Ownership: Ability to work autonomously, drive technical decisions, and own deliverables end-to-end.
Bonus Qualifications:
- Certifications in Microsoft Azure/AWS/GCP or DevOps tools.
- Experience with microservices, containerization (Docker, Kubernetes), or event-driven architectures.
- Domain expertise in enterprise applications, e-commerce, or B2B platforms.
Skills: C#, ASP.NET MVC, Web API, Entity Framework, LINQ, SQL Server, JavaScript frameworks (AngularJS, React, Vue.js), HTML5, CSS3, Bootstrap, Azure, AWS, GCP, Jenkins, CI/CD, DevOps, Agile (Scrum, Kanban), containerization (Docker, Kubernetes)

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

Remote

Job Description: SDE-II – Python Developer
Job Title: SDE-II – Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time
Job Summary: We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions.
Key Responsibilities:
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Utilize data handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.
Required Qualifications:
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command over at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfortable working in Linux environments.
Must-Have Skills:
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.
Good-to-Have Skills:
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).
Company Overview: We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you’ll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.
Benefits and Perks:
• Competitive Salary: Earn up to ₹6–10 LPA based on skills and experience.
• Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance.
• Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects.
• Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.
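The listing above centers on building REST services with FastAPI, Django, or Flask. As a rough illustration only (the endpoint path, the Item model, and the in-memory store are hypothetical, not from the posting), a minimal FastAPI sketch might look like this:

```python
# Minimal FastAPI sketch: a validated create/read pair of endpoints.
# The Item model and the in-memory dict are placeholders for a real database layer.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_items: dict[int, Item] = {}  # stand-in for a real database

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    if item_id in _items:
        raise HTTPException(status_code=409, detail="Item already exists")
    _items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]
```

Running it with `uvicorn main:app --reload` also serves interactive OpenAPI docs at /docs, the kind of API documentation workflow the posting alludes to.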

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Purpose
Manage Electro-mechanical and Fire & Safety operations to ensure the quality and deliverables in a timely and cost-effective manner at all office locations of Bangalore. The role would be responsible for building, deploying, and scaling machine learning and AI solutions across GMR’s verticals. This role will build and manage advanced analytics initiatives, predictive engines, and GenAI applications, with a focus on business outcomes, model performance, and intelligent automation. Reporting to the Head of Automation & AI, you will operate in a high-velocity, product-oriented environment with direct visibility of impact across airports, energy, infrastructure, and enterprise functions.
ORGANISATION CHART
Key Accountabilities (Accountabilities / Key Performance Indicators):
- AI & ML Development: Build and deploy models using supervised, unsupervised, and reinforcement learning techniques for use cases such as forecasting, predictive scenarios, dynamic pricing & recommendation engines, and anomaly detection, with exposure to broad enterprise functions and business (a minimal supervised-learning sketch follows this listing). Lead development of models, NLP classifiers, and GenAI-enhanced prediction engines. Design and integrate LLM-based features such as prompt pipelines, fine-tuned models, and inference architecture using Gemini, Azure OpenAI, LLaMA, etc. KPI: Program Plan vs Actuals.
- End-to-End Solutioning: Translate business problems into robust data science pipelines with emphasis on accuracy, explainability, and scalability. Own the full ML lifecycle, from data ingestion and feature engineering to model training, evaluation, deployment, retraining, and drift management. KPI: Program Plan vs Actuals.
- Cloud, ML & Data Engineering: Deploy production-grade models using AWS, GCP, or Azure AI platforms and orchestrate workflows using tools like Step Functions, SageMaker, Lambda, and API Gateway. Build and optimise ETL/ELT pipelines, ensuring smooth integration with BI tools (Power BI, QlikSense, or similar) and business systems. Data compression and familiarity with cloud FinOps will be an advantage, as will experience with tools like Kafka, Apache Airflow, or similar. KPI: 100% compliance to processes.
KEY ACCOUNTABILITIES - Additional Details
EXTERNAL INTERACTIONS: Consulting and Management Services providers; IT Service Providers / Analyst Firms; Vendors.
INTERNAL INTERACTIONS: GCFO and Finance Council, Procurement Council, IT Council, HR Council (GHROC); GCMO/BCMO.
FINANCIAL DIMENSIONS / Other Dimensions
EDUCATION QUALIFICATIONS: Engineering
Relevant Experience:
- 5-8 years of hands-on experience in machine learning, AI engineering, or data science, including deploying models at scale.
- Strong programming and modelling skills in languages such as Python and SQL, and ML frameworks like scikit-learn, TensorFlow, XGBoost, PyTorch.
- Demonstrated ability to build models using supervised, unsupervised, and reinforcement learning techniques to solve complex business problems.
Technical & Platform Skills:
- Proven experience with cloud-native ML tools: AWS SageMaker, Azure ML Studio, Google AI Platform.
- Familiarity with DevOps and orchestration tools: Docker, Git, Step Functions, Lambda, Google AI, or similar.
- Comfort working with BI/reporting layers, testing, and model performance dashboards.
Mathematics and Statistics: Linear algebra, Bayesian methods, information theory, statistical inference, clustering, regression, etc.
- Collaborate with Generative AI and RPA teams to develop intelligent workflows.
- Participate in rapid prototyping, technical reviews, and internal capability building.
NLP and Computer Vision: Knowledge of Hugging Face Transformers, spaCy, or similar NLP tools; YOLO, OpenCV, or similar for computer vision.
COMPETENCIES: Personal Effectiveness; Social Awareness; Entrepreneurship; Problem Solving & Analytical Thinking; Planning & Decision Making; Capability Building; Strategic Orientation; Stakeholder Focus; Networking; Execution & Results; Teamwork & Interpersonal Influence
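The accountabilities above mention building models with supervised learning in scikit-learn, TensorFlow, or XGBoost. A minimal sketch of the supervised piece, using scikit-learn on synthetic data (the dataset and the random-forest choice are illustrative assumptions, not GMR's actual setup):

```python
# Sketch of a supervised-learning workflow: train a classifier and report metrics.
# Synthetic data stands in for the forecasting/anomaly-detection datasets mentioned above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split before any deployment or drift monitoring.
print(classification_report(y_test, model.predict(X_test)))
```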

Posted 1 day ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: SRE Engineer with GCP Cloud
Location: Hyderabad & Ahmedabad
Work Model: Hybrid, 3 days from office
Experience: 7+ years
Job Overview: Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of the systems. The role incorporates aspects of software engineering and operations, using DevOps skills to come up with efficient ways of managing and operating applications. The role will require a high level of responsibility and accountability to deliver technical solutions.
Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services.
Roles and Responsibilities:
· Define and measure Service Level Indicators (SLIs), Service Level Objectives (SLOs), and manage error budgets across services.
· Lead incident management for critical production issues – drive root cause analysis (RCA) and post-mortems.
· Create and maintain runbooks and standard operating procedures for high-availability services.
· Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption.
· Coordinate cross-functional war-room sessions during major incidents and maintain response logs.
· Develop and improve automated system recovery, alert suppression, and escalation logic.
· Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture.
· Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems.
· Analyse performance metrics and conduct regular reliability reviews with engineering leads.
· Participate in capacity planning, failover testing, and resilience architecture reviews.
Mandatory:
· Cloud: GCP (GKE, Load Balancing, VPN, IAM)
· Observability: Prometheus, Grafana, ELK, Datadog
· Containers & Orchestration: Kubernetes, Docker
· Incident Management: On-call, RCA, SLIs/SLOs
· IaC: Terraform, Helm
· Incident Tools: PagerDuty, OpsGenie
Nice to Have:
· GCP Monitoring, SkyWalking
· Service Mesh, API Gateway
· GCP Spanner, MongoDB (basic)
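The SLI/SLO and error-budget bookkeeping described above is mostly simple arithmetic. A hedged illustration with made-up request counts, assuming a 99.9% availability SLO over a 30-day window:

```python
# Error-budget arithmetic for a 99.9% availability SLO.
# The request counts are illustrative, not real telemetry.
SLO_TARGET = 0.999
total_requests = 12_000_000
failed_requests = 9_500

sli = 1 - failed_requests / total_requests           # measured availability (the SLI)
error_budget = (1 - SLO_TARGET) * total_requests     # failures allowed by the SLO
budget_remaining = 1 - failed_requests / error_budget

print(f"SLI: {sli:.5f}")
print(f"Error budget: {error_budget:.0f} failed requests")
print(f"Budget remaining: {budget_remaining:.1%}")
```

With these numbers the budget is 12,000 allowed failures, 9,500 are already spent, and roughly 21% of the budget remains, which is the kind of signal that feeds alerting and release decisions.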

Posted 1 day ago

Apply

12.0 years

0 Lacs

India

On-site

Job Title: Solution/Software Architect
Reports To: Project Manager
Overview: We are seeking a highly skilled Solution/Software Architect to lead the design and delivery of secure, scalable, and high-performing software solutions. This role involves engaging with clients, crafting architectural plans, and guiding development teams through implementation.
Key Responsibilities:
- Design and implement software architecture for complex business needs.
- Lead teams, provide technical leadership, and create architectural blueprints.
- Evaluate tools, technologies, and processes for best outcomes.
- Ensure solutions follow secure-by-design principles and modern DevOps practices.
- Collaborate across teams for integration, quality, and performance.
- Stay updated with technology and security trends.
- Support pre-sales, documentation, and risk mitigation efforts.
Required Skills & Experience:
- 8–12 years total experience, including 4+ years as an architect.
- Proficiency in Node.js, Python, PHP, JavaScript, and microservices.
- Hands-on with MySQL, MongoDB, Redis, Nginx/Apache.
- DevOps tools: Git, Jenkins, Docker; SonarQube (optional).
- Strong grasp of cloud (AWS, Azure, GCP) and security principles.
- Experience with secure SDLC, threat modeling, OWASP Top 10.
- Knowledge of DNS, BGP, Linux kernel settings is a plus.
- Security certifications (CISSP, CEH, CCSP) are a bonus.

Posted 1 day ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Title: Senior Backend Developer (Python, FastAPI & MongoDB)
Location: Bengaluru, India
Company Overview: IAI Solution Pvt Ltd (www.iaisolution.com) operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains. We are seeking a Senior Backend Developer who thrives in fast-paced environments, enjoys solving complex technical challenges, and is passionate about building scalable, high-performance backend systems that power real-world AI applications.
Position Summary: We are looking for a Senior Backend Developer with 3 to 5 years of professional experience in Python-based development, especially using FastAPI and MongoDB. The ideal candidate is skilled in building and maintaining scalable, high-performance back-end services and APIs, has a strong understanding of modern database design (SQL & NoSQL), and has experience integrating backend services with cloud platforms. Experience or interest in AI/ML projects is a strong plus, as our products often interface with LLMs and real-time AI pipelines.
Key Responsibilities:
- Design, build, and maintain robust backend services using Python and FastAPI.
- Develop and maintain scalable RESTful APIs for internal tools and third-party integrations.
- Work with MongoDB, PostgreSQL, and Redis to manage structured and unstructured data efficiently (a MongoDB indexing and aggregation sketch follows this listing).
- Collaborate with frontend, DevOps, and AI/ML teams to deliver secure and performant backend infrastructure.
- Implement best practices in code architecture, performance optimization, logging, and monitoring.
- Ensure APIs and systems are production-ready, fault-tolerant, and scalable.
- Handle API versioning, documentation (Swagger/OpenAPI), and error management.
- Optimize queries, indexes, and DB schema for high-performance data access.
- Maintain clean code with emphasis on object-oriented principles and modular design.
- Troubleshoot production issues and deliver timely fixes and improvements.
Qualifications:
- Overall Experience: 3 to 5 years in backend software development.
- Python: Strong proficiency with object-oriented programming.
- Frameworks: Hands-on experience with FastAPI (preferred), Django.
- Databases: MongoDB (schema design, aggregation pipelines, and indexing); familiarity with SQL databases (PostgreSQL/MySQL); experience with Redis and optionally Supabase.
- API Development: Proficient in building and documenting REST APIs; strong understanding of HTTP, request lifecycles, and API security.
- Testing & Debugging: Strong debugging and troubleshooting skills using logs and tools.
- Performance & Scalability: Experience optimizing backend systems for latency, throughput, and reliability.
- Tools: Git, Docker, Linux commands for development environments.
Must-Have Skills:
- Proficiency in Python and object-oriented programming
- Strong hands-on experience with FastAPI (or similar async frameworks)
- Knowledge of MongoDB for schema-less data storage and complex queries
- Experience building and managing REST APIs in production
- Comfortable working with Redis, PostgreSQL, or other data stores
- Experience with Dockerized environments and Git workflows
- Solid grasp of backend architecture, asynchronous programming, and performance tuning
- Ability to write clean, testable, and maintainable code
Good-to-Have Skills:
- Experience with asynchronous programming using async/await
- Integration with third-party APIs (e.g., Firebase, GCP, Azure services)
- Basic understanding of WebSocket and real-time backend patterns
- Exposure to AI/ML pipelines, model APIs, or vector DBs (e.g., FAISS)
- Basic DevOps exposure: GitHub Actions, Docker Compose, Nginx
- Familiarity with JWT, OAuth2, and backend security practices
- Familiarity with CI/CD pipelines and versioning
- Basic understanding of GraphQL or gRPC is a plus
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field
- Demonstrated experience delivering production-grade backend services
- Experience working in agile teams and using tools like Jira
- Familiarity with Agile/Scrum methodologies and sprint cycles
- Interest or experience in AI/ML-integrated systems is a plus
Perks & Benefits:
- Competitive salary with performance-based bonuses
- Opportunity to work on AI-integrated platforms and intelligent products
- Access to latest tools, cloud platforms, and learning resources
- Support for certifications and tech conferences
- Flexible working hours and hybrid work options
- Wellness initiatives and team-building activities
Job Type: Full-time
Pay: Up to ₹1,500,000.00 per year
Benefits: Health insurance; Paid sick time; Provident Fund
Schedule: Fixed shift
Ability to commute/relocate: Bangalore City, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Experience: Python: 1 year (Required); FastAPI: 1 year (Required)
Location: Bangalore City, Karnataka (Required)
Work Location: In person
Speak with the employer: +91 9003562294
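Since the role leans heavily on MongoDB schema design, indexing, and aggregation pipelines, here is a small pymongo sketch; the "orders" collection, its field names, and the connection string are assumptions for illustration, not details from the posting:

```python
# Sketch of the MongoDB indexing and aggregation work the listing mentions.
# Collection and field names ("orders", "user_id", "total", "created_at") are hypothetical.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["app_db"]["orders"]

# Compound index to support frequent per-user, most-recent-first queries.
orders.create_index([("user_id", ASCENDING), ("created_at", DESCENDING)])

# Aggregation pipeline: total spend per user, top 10 spenders.
pipeline = [
    {"$group": {"_id": "$user_id", "total_spend": {"$sum": "$total"}}},
    {"$sort": {"total_spend": -1}},
    {"$limit": 10},
]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["total_spend"])
```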

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Purpose: Royal Enfield is looking for a skilled Backend Developer to join our team in Chennai. The ideal candidate will have strong experience in backend technologies, cloud platforms, and building scalable microservices-based applications.
Position Overview:
Location: Chennai
Position Title: App Developer Cloud
Reports to: Manager - CGI
Function: Information Technology
What you’ll do (Key Responsibilities):
- Design, develop, and maintain backend services using Spring Boot
- Build and manage microservices architecture
- Work with MongoDB, Kafka, and Redis Cache to develop efficient and scalable solutions
- Develop and integrate RESTful APIs
- Collaborate with cross-functional teams to deliver high-quality software
- Deploy applications on cloud platforms such as AWS, GCP, or Azure
- Write clean, efficient, and well-documented code
- Apply strong problem-solving and logical thinking skills in development tasks
Requirements:
- Proficiency in Spring Boot and Java
- Hands-on experience with MongoDB, Kafka, Redis
- Hands-on experience with cloud platforms (AWS/GCP/Azure)
- Strong understanding of microservices architecture
- Solid coding and logical problem-solving abilities
- Good communication and teamwork skills
Qualification: Bachelor’s or Master's degree in Information Technology.
Ready to Join Us? Apply via our website today. Join our trailblazing team and be a part of our legacy! “So why wait? Join us and experience the freedom of embracing the road, riding with pure motorcycling passion.”

Posted 1 day ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Linux and DevOps Manager
Experience: 8 to 10 Years
Location: Chennai
Employment Type: Full-Time
Job Summary: The Linux and DevOps Manager's responsibilities include leading a team of engineers to architect and develop automated solutions, manage system upgrades, and ensure that all systems are working optimally.
Key Responsibilities:
- Developing and maintaining Linux infrastructure
- Developing and implementing IT projects and infrastructures
- Designing and maintaining cloud-based applications
- Overseeing continuous integration and continuous delivery (CI/CD) and DevOps architecture
- Implementing automation and orchestration of tools and processes to minimize delivery time and increase efficiency
- Managing a team of engineers and developers, fostering a collaborative work environment
- Monitoring system performance and troubleshooting issues
- Communicating and working closely with customers to solve their requirements
- Collaborating with team members to improve system consistency, stability, and efficiency
- Conducting technical reviews and audits
- Maintaining communication with relevant departments to ensure software development projects are aligned with company goals
- Implementing industry best practices for system hardening and configuration management
Required Skills:
- Experience with different Linux distributions
- Experience with different applications such as Apache, Tomcat, and email systems
- Experience with different relational and non-relational databases
- Containerization technologies (e.g., Docker, LXC, Rocket)
- Proven work experience as a DevOps Engineer or similar software engineering role
- Experience with Kubernetes, Jenkins, Ansible, Puppet, Chef, or similar tools
- Understanding of cloud technologies such as AWS, GCP, or Azure
- Knowledge of scripting languages such as Python, Bash, or Perl
- Experience with network infrastructure, database, and application services
- Excellent problem-solving skills and ability to manage complex systems
- Familiarity with a wide range of systems engineering tools, including source code repository hubs, continuous integration services, issue tracking, test automation, deployment automation, development team collaboration, and project management
- Minimum 5 years of operations and deployment experience (development, automation, and support)
- At least 3 years of professional experience as a technical leader with a proven record of innovating solutions for a team of at least 5 engineers
Nice to Have: Certifications in AWS, Azure, or GCP
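The posting asks for scripting (Python, Bash, or Perl) to automate routine checks. As one hedged example of that kind of glue script, the service names and health endpoints below are placeholders, and real alerting (email, PagerDuty, etc.) would replace the print calls:

```python
# Simple service health-check loop of the sort this role would automate and schedule.
# Service names and URLs are placeholders; exit code feeds cron/monitoring.
import sys
import urllib.request

SERVICES = {
    "web": "http://localhost:8080/healthz",
    "api": "http://localhost:9000/healthz",
}

def check(name: str, url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    print(f"{name}: {'OK' if ok else 'DOWN'}")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in SERVICES.items()]
    sys.exit(0 if all(results) else 1)
```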

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: GCP Data Engineer
Location: Remote
Experience Required: 8 Years
Position Type: Freelance / Contract
As a Senior Data Engineer with a focus on pipeline migration from SAS to Google Cloud Platform (GCP) technologies, you will tackle intricate problems and create value for our business by designing and deploying reliable, scalable solutions tailored to the company’s data landscape. You will lead the development of custom-built data pipelines on the GCP stack, ensuring seamless migration of existing SAS pipelines. Additionally, you will mentor junior engineers, define standards and best practices, and contribute to strategic planning for data initiatives.
Responsibilities:
● Lead the design, development, and implementation of data pipelines on the GCP stack, with a focus on migrating existing pipelines from SAS to GCP technologies.
● Develop modular and reusable code to support complex ingestion frameworks, simplifying the process of loading data into data lakes or data warehouses from multiple sources.
● Mentor and guide junior engineers, providing technical oversight and fostering their professional growth.
● Work closely with analysts, architects, and business process owners to translate business requirements into robust technical solutions.
● Utilize your coding expertise in scripting languages (Python, SQL, PySpark) to extract, manipulate, and process data effectively.
● Leverage your expertise in various GCP technologies, including BigQuery, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, and Vertex AI, to enhance data warehousing solutions.
● Lead efforts to maintain high standards of development practices, including technical design, solution development, systems configuration, testing, documentation, issue identification, and resolution, writing clean, modular, and sustainable code.
● Understand and implement CI/CD processes using tools like Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker.
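To make the BigQuery portion of such an ingestion pipeline concrete, here is a hedged sketch using the google-cloud-bigquery client; the project, dataset, table, and bucket names are placeholders, not details from the posting:

```python
# Sketch of one BigQuery step in an ingestion pipeline: load a CSV from GCS, then query it.
# Project, dataset, table, and bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/sales_2024.csv",
    "my-project.analytics.sales",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # block until the load completes

query = """
    SELECT region, SUM(amount) AS total
    FROM `my-project.analytics.sales`
    GROUP BY region
    ORDER BY total DESC
"""
for row in client.query(query).result():
    print(row.region, row.total)
```

In a real migration this step would typically be wrapped in an orchestrator (GCP Workflows, Cloud Scheduler, or similar) rather than run as a standalone script.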

Posted 1 day ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

🚀 We’re Hiring: Generative AI Architect | Noida (Hybrid)
Experience: 10+ years (with 2–3 years in GenAI/LLMs)
Type: Full-Time | Hybrid (Noida)
Are you passionate about shaping the future of AI? We’re on the lookout for a Generative AI Architect to lead the design, development, and deployment of next-generation GenAI solutions that power enterprise-scale applications. This is your opportunity to work at the intersection of AI innovation and real-world impact—building intelligent systems using cutting-edge models like GPT, Claude, LLaMA, and Mistral.
🔍 What You’ll Do:
- Architect and implement secure, scalable GenAI solutions using LLMs.
- Design state-of-the-art RAG pipelines using LangChain, LlamaIndex, FAISS, etc.
- Lead prompt engineering and build reusable modules (e.g., chatbots, summarizers).
- Deploy on cloud-native platforms: AWS Bedrock, Azure OpenAI, GCP Vertex AI.
- Integrate GenAI into enterprise products in collaboration with cross-functional teams.
- Drive MLOps best practices for CI/CD, monitoring, and observability.
- Explore the frontier of GenAI—multi-agent systems, fine-tuning, and autonomous agents.
- Ensure compliance, security, and data governance in all AI systems.
✅ What We’re Looking For:
- 8+ years in AI/ML, with 2–3 years in LLMs or GenAI.
- Proficiency in Python, Transformers, LangChain, OpenAI SDKs.
- Experience with Vector Databases (Pinecone, Weaviate, FAISS).
- Hands-on with cloud platforms: AWS, Azure, GCP.
- Knowledge of LLM orchestration (LangGraph, AutoGen, CrewAI).
- Familiarity with tools like MLflow, Docker, Kubernetes, FastAPI.
- Strong understanding of GenAI evaluation metrics (BERTScore, BLEU, GPTScore).
- Excellent communication and architectural leadership skills.
🌟 Nice to Have:
- Experience fine-tuning open-source LLMs (LoRA, QLoRA).
- Exposure to multi-modal AI systems (text-image, speech).
- Domain knowledge in BFSI, Healthcare, Legal, or EdTech.
- Published research or open-source contributions in GenAI.
📍 Location: Noida (Hybrid)
🌐 Apply Now and be part of the GenAI transformation. Let’s build the future—one intelligent system at a time. 💡
#GenerativeAI #LLM #AIArchitect #MachineLearning #LangChain #VertexAI #GenAIJobs #PromptEngineering #Hiring #TechJobs #AIInnovation
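The RAG pipelines mentioned above boil down to an embed, index, retrieve, generate loop. A minimal retrieval sketch using sentence-transformers and FAISS follows; the toy corpus is invented, and the final LLM call is left as a placeholder because the posting allows several providers (Bedrock, Azure OpenAI, Vertex AI):

```python
# Minimal retrieval step of a RAG pipeline: embed documents, index them in FAISS,
# and pull the top-k chunks to ground an LLM prompt. The corpus is a toy example,
# and generate_answer() is a placeholder for whichever LLM API is actually used.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within 5 business days.",
    "Refunds require manager approval above 10,000 INR.",
    "Support hours are 9am to 6pm IST, Monday to Friday.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "How long does invoice processing take?"
q_vec = encoder.encode([query], normalize_embeddings=True)
_, ids = index.search(np.asarray(q_vec, dtype="float32"), 2)

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = generate_answer(prompt)  # placeholder for the chosen LLM call
print(prompt)
```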

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description: Java Fullstack Developer
- Hands-on with Java (Spring, Hibernate, MVC) (MUST)
- Good working knowledge and web framework experience with Spring MVC, Spring Boot, Microservices (MUST)
- Strong hands-on experience with one of the JS frameworks: Angular/React
- Hands-on with REST-based web services
- Knowledge of Agile (Scrum, Kanban)
- Strong exposure to design patterns (IoC, MVC, Singleton, Factory)
- Work experience with any of the following clouds (AWS, Azure, or GCP) will be an advantage
- Continuous Testing: TDD, LeanFT, Cucumber, Gherkin, JBoss
- Possess excellent written and verbal communication skills
- Experience with code quality tools like Sonar, Checkstyle, FindBugs will be a plus
Work Location: Pune/Bengaluru/Chennai/Hyderabad
Mandatory Skills: Fullstack Java Enterprise
Experience: 3-5 Years
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 day ago

Apply

0 years

10 - 16 Lacs

Pune, Maharashtra, India

On-site

Job Description
We are seeking a skilled Generative AI Engineer with a strong background in Python to join our dynamic team. In this role, you will integrate backend development expertise with the latest advancements in AI to create impactful solutions. If you excel in a fast-paced environment and enjoy tackling complex challenges, we encourage you to apply.
Key Responsibilities
Generative AI Development:
- Develop and implement generative AI models using frameworks like LangChain or LlamaIndex.
- Apply prompt engineering techniques to design effective queries and ensure optimal LLM responses for diverse use cases.
- Master advanced LLM functionalities, including prompt optimization, hyperparameter tuning, and response caching.
- Implement Retrieval-Augmented Generation (RAG) workflows by integrating vector databases like Pinecone, Weaviate, Supabase, or PGVector for efficient similarity searches (a pgvector query sketch follows this listing).
- Work with embeddings and build solutions that leverage similarity search for personalized query resolution.
- Explore and process multimodal data, including image and video understanding and generation.
- Integrate observability tools for monitoring and evaluating LLM performance to ensure system reliability.
Backend Engineering:
- Build and maintain scalable backend systems using Python frameworks such as FastAPI, Django, or Flask.
- Design and implement RESTful APIs for seamless communication between systems and services.
- Optimize database performance with relational databases (PostgreSQL, MySQL) and integrate vector databases (Pinecone, PGVector, Weaviate, Supabase) for advanced AI workflows.
- Implement asynchronous programming and adhere to clean code principles for maintainable, high-quality code.
- Seamlessly integrate third-party SDKs and APIs, ensuring robust interoperability with external systems.
- Develop backend pipelines for handling multimodal data processing, supporting text, image, and video workflows.
- Manage and schedule background tasks with tools like Celery, cron jobs, or equivalent job queuing systems.
- Leverage containerization tools such as Docker for efficient and reproducible deployments.
- Ensure security and scalability of backend systems with adherence to industry best practices.
Qualifications (Essential):
- Strong Programming Skills: Proficiency in Python and experience with backend frameworks like FastAPI, Django, or Flask.
- Generative AI Expertise: Knowledge of frameworks like LangChain, LlamaIndex, or similar tools, with experience in prompt engineering and Retrieval-Augmented Generation (RAG).
- Data Management: Hands-on experience with relational databases (PostgreSQL, MySQL) and vector databases (Pinecone, Weaviate, Supabase, PGVector) for embeddings and similarity search.
- Machine Learning Knowledge: Familiarity with LLMs, embeddings, and multimodal AI applications involving text, images, or video.
- Deployment Experience: Proficiency in deploying AI models in production environments using Docker and managing pipelines for scalability and reliability.
- Testing and Debugging: Strong skills in writing and managing unit and integration tests (e.g., Pytest), along with application debugging and performance optimization.
- Asynchronous Programming: Understanding of asynchronous programming concepts for handling concurrent tasks efficiently.
Preferred:
- Cloud Proficiency: Familiarity with platforms like AWS, GCP, or Azure, including serverless applications and VM setups.
- Frontend Basics: Understanding of HTML, CSS, and optionally JavaScript frameworks like Angular or React for better collaboration with frontend teams.
- Observability and Monitoring: Experience with observability tools to track and evaluate LLM performance in real time.
- Cutting-Edge Tech: Awareness of trends in generative AI, including multimodal AI applications and advanced agentic workflows.
- Security Practices: Knowledge of secure coding practices and backend system hardening.
- Certifications: Relevant certifications in AI, machine learning, or cloud technologies are a plus.
Skills: Retrieval-Augmented Generation (RAG), LangChain, LlamaIndex, LLMs, GenAI, Python, FastAPI, Django, Flask, asynchronous programming, PostgreSQL, MySQL, PGVector, Pinecone, Weaviate, Supabase, Docker, Pytest, AWS, GCP, Azure
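As a small illustration of the PGVector similarity search the role calls for, a hedged psycopg2 sketch follows; the "documents" table, "embedding" column, and connection string are assumptions made for the example:

```python
# Sketch of a pgvector similarity search (PostgreSQL) for RAG retrieval.
# Table, column, and connection details are assumptions, not from the posting.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/appdb")

def top_k_similar(query_embedding: list[float], k: int = 5):
    # pgvector expects a literal like '[0.1,0.2,...]' when passed as text.
    vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, content
            FROM documents
            ORDER BY embedding <=> %s::vector  -- cosine distance operator from pgvector
            LIMIT %s
            """,
            (vec_literal, k),
        )
        return cur.fetchall()

# rows = top_k_similar(embed("reset my password"))  # embed() is whichever model the service uses
```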

Posted 1 day ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a skilled and passionate Full Stack Engineer to join our high-performing engineering team. You will be responsible for building scalable, secure, and high-performance applications that support our healthcare platforms. This role requires deep expertise in both front-end and back-end development, with a solid focus on microservices, cloud-native architecture, observability, and operational excellence.
Primary Responsibilities:
- Design, develop, and maintain full-stack applications using modern frameworks and tools
- Build and develop UI using ReactJS
- Develop microservices using Java and Spring Boot and integrate with messaging systems like Apache Kafka
- Write comprehensive unit and integration test cases to ensure code quality and reliability
- Implement monitoring and observability using Grafana, Dynatrace, and Splunk
- Deploy and manage applications using Kubernetes (K8s) and GitHub Actions
- Participate in production support activities as needed, including incident resolution and root cause analysis
- Collaborate with cross-functional teams including product, QA, DevOps, and UX
- Participate in code reviews, architecture discussions, and agile ceremonies
- Ensure application performance, scalability, and security
- Stay current with emerging technologies and propose innovative solutions
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 5+ years of experience in full stack development
- Hands-on experience with Kubernetes, Docker, and GitHub Actions for CI/CD
- Experience with Kafka for event-driven architecture
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB)
- Solid proficiency in Java, Spring Boot, and RESTful API development
- Solid understanding of HTML, CSS, JavaScript, and the modern ReactJS front-end framework
- Familiarity with Grafana, Elastic APM, and Splunk for monitoring and logging
- Proven problem-solving skills and attention to detail
- Proven ability to write and maintain unit tests using tools like JUnit, Mockito, etc.
Preferred Qualifications:
- Experience in the healthcare domain or working with HIPAA-compliant systems
- Exposure to DevOps practices and infrastructure as code (e.g., Terraform)
- Knowledge of security best practices in web and microservices development
- Familiarity with cloud platforms like AWS, Azure, or GCP
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 day ago

Apply

5.0 years

8 - 30 Lacs

Viman Nagar, Pune, Maharashtra

On-site

Key Responsibilities:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support NBA (Next Best Action) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring (a small significance-test sketch follows this listing).
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search.
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.
Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.).
- Proficiency in prompt engineering and hands-on experience in building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
Experience: 5+ years
Job Type: Full-time
Pay: ₹800,000.38 - ₹3,000,000.60 per year
Benefits: Health insurance; Paid time off
Schedule: Day shift
Location: Viman Nagar, Pune, Maharashtra (Preferred)
Work Location: In person
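For the A/B testing piece, the core statistical step is often a two-proportion test. A hedged sketch with invented conversion counts, using statsmodels (a library the listing does not name but one that pairs naturally with the pandas/NumPy stack it lists):

```python
# Two-proportion z-test comparing click-through rates of two recommendation variants.
# Conversion and sample counts are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [620, 540]      # variant A, variant B
samples = [10_000, 10_000]    # users exposed to each variant

z_stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level")
else:
    print("No significant difference detected")
```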

Posted 1 day ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Position Summary
Job title: Google SecOps (GCP Chronicle) (Senior Consultant / Consultant)
About: At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks, and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas.
Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte’s clients’ most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions.
The Team: Cyber & Strategic Risk
We help organizations create a cyber-minded culture, reimagine risk to uncover strategic opportunities, and become faster, more innovative, and more resilient in the face of ever-changing threats. We provide intelligence and acuity that dynamically reframes risk, transcending a manual, reactive paradigm. The cyber risk services Identity & Access Management (IAM) practice helps organizations in designing, developing, and implementing industry-leading IAM solutions to protect their information and confidential data, as well as help them build their businesses and supporting technologies to be more secure, vigilant, and resilient.
The IAM team delivers service to clients through the following key areas: user provisioning, access certification, access management and federation, and entitlements management.
Work you’ll do:
- Design, implement, and manage security solutions for Google Cloud Platform (GCP) environments
- Configure and manage GCP security services, such as Cloud Identity and Access Management (IAM), Cloud Key Management Service (KMS), and Cloud Identity-Aware Proxy (IAP)
- Implement and maintain security policies and procedures
- Monitor for security threats and incidents
- Respond to security incidents and vulnerabilities
- Manage security risks and compliance requirements
- Develop and maintain Chronicle SIEM/SOAR playbooks to automate security tasks on GCP
- Investigate and respond to security incidents on GCP using Chronicle SIEM/SOAR
- Work with other security teams to integrate Chronicle SIEM/SOAR with other security solutions on GCP
- Keep up to date on the latest GCP security threats and trends
Required Skills:
- Overall experience of 5 to 8 years for Senior Consultant and 3+ years for Consultant
- 3+ years of experience in cloud security
- 2+ years of experience with GCP
- 2+ years of experience with Chronicle SIEM/SOAR
- Experience with security monitoring and incident response
- Experience with risk management and compliance
- Strong understanding of GCP security best practices
- Experience and understanding of the Apigee API management platform
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Qualification: Bachelor’s degree required, ideally in Computer Science, Cyber Security, Information Security, Engineering, or Information Technology.
How You’ll Grow: At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India.
Deloitte’s culture: Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.
Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters.
This purpose defines who we are and extends to relationships with Deloitte’s clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world.
Recruiting tips: Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success. Check out recruiting tips from Deloitte recruiters.
Benefits: We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family’s well-being needs. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Our people and culture: Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals’ career journeys and be inspired by their stories.
Professional development: You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people.
© 2023. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms.
Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them.
Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 304104

Posted 1 day ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: As a Principal Engineer, you will be the most senior technical leader within our team, tackling complex challenges across both technology and business domains. You will play a pivotal role in shaping our technical strategy and producing our Technology Roadmap. Your leadership will be crucial in coaching and guiding the organization to successful execution. You are adept at diving deep into varied domains, collaborating effectively with Platform, Security, Engineering, Product, and Quality teams to contribute meaningfully to solution design. Your exceptional communication skills will enable you to connect with developers at all levels, conveying complex ideas effectively. Above all, you are driven by the challenge of solving difficult problems within legacy systems and possess the collaborative skills to find optimal solutions. You can envision the bigger picture, establish a target state, and chart the course to get there, empowering others to contribute meaningfully along the way.
What You Will Be Doing:
- Develop and contribute to the technical strategy for the product.
- Guide and lead your team in the execution of the technical roadmap.
- Identify, diagnose, and resolve major technical blockers to ensure project momentum.
- Establish and champion high standards for code quality, actively working to up-skill and enable fellow engineers.
- Define clear goals and objectives for our systems and create comprehensive plans for their execution and evolution.
- Influence and spearhead design efforts for robust solutions that meet both operational and product needs, ensuring alignment with our overarching technical strategy.
- Review technical solutions proposed by various teams, providing constructive feedback and ensuring best practices.
- Provide expert guidance and mentorship on code reviews and help refine code review processes.
- Mentor, guide, and positively influence Technical Leads and Software Engineers, fostering their growth and development.
- Lead complex, cross-departmental projects and project teams from initial conception through to successful completion.
- Collaborate closely with Technical Leads to effectively communicate and embed technical strategy and direction across engineering teams.
- Tackle and solve intricate technical problems, thereby enabling our engineering teams to deliver more effectively and efficiently.
Required Qualifications:
- A minimum of 15 years of software engineering experience, with a substantial portion dedicated to system architecture, design, and the development of platforms, data solutions, and distributed systems.
- Demonstrated expertise in modernizing and uplifting legacy systems, including a proven ability to quantify the commercial impact of these improvements.
- Exceptional ability to clearly and concisely communicate complex technical, architectural, and/or organizational problems, and to propose and drive thorough, iterative, and practical solutions.
- Demonstrable, deep expertise in architecting and developing solutions across a diverse technology stack, including backend systems (PHP/Laravel, Node.js), modern frontend ecosystems (React/Next.js, Vue.js) using TypeScript, and databases (MySQL, MongoDB).
- Proven experience architecting, developing, and maintaining complex and reliable third-party integrations at scale.
- Significant hands-on, full-stack development experience across a diverse range of programming languages and modern frameworks.
- Proficiency in Infrastructure as Code (IaC) principles and tools (e.g., CloudFormation, Terraform, ARM, AWS CDK, or similar).
- Extensive and in-depth knowledge of cloud architecture, major cloud provider services (e.g., AWS, Azure, GCP), and best practices for designing, implementing, and securing cloud-native solutions.
- Proven experience in domain modeling and translating complex business requirements into robust, scalable, and maintainable technical solutions.
- Deep understanding of various software architecture patterns, principles, and frameworks (e.g., microservices, event-driven architectures, Domain-Driven Design) and the ability to discern and apply them effectively to solve complex business problems.
- Substantial experience working on Software-as-a-Service (SaaS) platforms, particularly those operating at significant scale and with high availability requirements.
- Demonstrated commitment to transparent communication, fostering an inclusive engineering culture, and ensuring visibility across projects and technical decisions.
- Proven experience in leading, mentoring, and developing other senior engineers and technical leads.
- Demonstrable expertise in performance analysis, advanced troubleshooting, and optimization of complex, large-scale systems, including a proactive approach to identifying and preventing future issues.
Preferred Qualifications:
- An advanced degree (e.g., Master's or Ph.D.) in Computer Science, Engineering, or a closely related technical discipline.
- Advanced professional certifications in relevant areas, such as cloud architecture (e.g., AWS Certified Solutions Architect Professional, Google Professional Cloud Architect, Azure Solutions Architect Expert) or enterprise architecture.
- Demonstrable deep expertise and thought leadership in a specific technical domain critical to our strategic future (e.g., advanced data analytics and machine learning applications, scalable real-time communication platforms, cutting-edge front-end architectures for user engagement, or specific FinTech/PropTech innovations).
- Experience in driving significant technological innovation, potentially evidenced by patents, leading successful R&D initiatives, or introducing transformative technologies within an organization.
- Active participation in the broader tech community (e.g., open-source contributions, speaking at conferences, writing technical blogs).
- Proven experience in defining, evangelizing, and successfully implementing a long-term technical vision and strategy that has been adopted across a large engineering organization or a significant business unit.
- Experience with multi-cloud or hybrid-cloud environments, and managing the complexities associated with them.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: DevSecOps Engineer
Location: Hyderabad/Chennai - Hybrid
Experience: 6+ years
About us: Conglomerate IT is a certified company and a pioneer in providing premium end-to-end Global Workforce Solutions and IT Services to diverse clients across various domains. Visit us at https://www.conglomerateit.com/. Conglomerate IT's mission is to establish global cross-culture human connections that further the careers of our employees and strengthen the businesses of our clients. We are driven to use the power of our global network to connect businesses with the right people without bias. We provide Global Workforce Solutions with affability.
Job Summary: We are seeking a highly skilled and experienced DevSecOps Engineer to join our dynamic team. In this role, you will be responsible for integrating security into our DevOps pipelines, ensuring robust infrastructure, scalable deployment systems, and secure operations across our cloud environments.
Key Responsibilities:
- Design, implement, and manage scalable and secure Infrastructure as Code (IaC) using tools like Terraform.
- Build, maintain, and support Kubernetes clusters across various environments.
- Develop and enhance CI/CD pipelines using tools such as Azure DevOps or equivalent.
- Collaborate with development, security, and operations teams to implement secure, efficient, and reliable DevSecOps practices.
- Automate routine processes and workflows to increase efficiency and reduce manual errors.
- Monitor and optimize system performance and ensure high availability, scalability, and security.
- Respond to security incidents and operational issues in a timely and effective manner.
- Implement and enforce security best practices across the software development lifecycle.
- Participate in a rotational on-call schedule, providing L3/L4 support during off-hours, weekends, and holidays.
- Develop and maintain system and process documentation.
- Stay current with emerging technologies, tools, and best practices in DevSecOps and cybersecurity.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Security, or a related field.
- 6+ years of experience in a DevSecOps or similar role.
- Strong hands-on experience with cloud platforms such as Azure and Google Cloud Platform (GCP), including security configurations.
- Deep understanding of the Software Development Lifecycle (SDLC) and DevSecOps practices.
- Expertise in CI/CD tools, especially Azure DevOps, GitHub Actions, or similar.
- Proficient in containerization tools including Docker and orchestration platforms like Kubernetes.
- Strong experience with Infrastructure as Code (IaC) and configuration management using tools like Terraform and Ansible.
- Familiarity with monitoring and observability tools and practices (e.g., SLO/SLA/SLI, Distributed Tracing).
- Solid experience with Git and other version control systems.
- Strong problem-solving, communication, and collaboration skills.
Preferred Skills (Nice to Have):
- Knowledge of security standards (e.g., NIST, OWASP, CIS).
- Experience with policy-as-code tools like OPA/Gatekeeper.
- Familiarity with Secrets Management tools (e.g., HashiCorp Vault, Azure Key Vault).
- Experience in scripting languages such as Python, Bash, or PowerShell.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time
Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
· Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
· Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
· Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
· Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
· Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets (a minimal sketch follows this listing).
· Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
· Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
· Lead the implementation of end-to-end automation for:
o Web interfaces (React, Angular, or other SPA frameworks)
o Backend services (REST, GraphQL, WebSockets)
o ML model integration endpoints (real-time inference APIs, batch pipelines)
· Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
· Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
· Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
· Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
· Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
· Lead and mentor a team of automation and QA engineers across multiple projects.
· Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
· Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
· Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
· Review feature specs, AI/ML model update notes, and data schemas for impact analysis.
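For illustration, a minimal sketch of one way to assert on non-deterministic chat output, written with Playwright's Python bindings for consistency with the other examples on this page (the listing itself calls for TypeScript/JavaScript); the URL, selectors, and keyword list are hypothetical.

```python
# Minimal sketch: tolerance-style assertion on a non-deterministic chat response.
# Assumes `pip install pytest playwright` and `playwright install chromium`.
# The app URL and data-testid selectors below are hypothetical placeholders.
from playwright.sync_api import sync_playwright

EXPECTED_KEYWORDS = {"refund", "7", "days"}   # "golden" facts the answer must contain
MIN_OVERLAP = 2                               # tolerance: at least 2 of 3 keywords


def test_chatbot_refund_answer():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.test/chat")                       # hypothetical URL
        page.fill("[data-testid=chat-input]", "What is your refund policy?")
        page.click("[data-testid=chat-send]")
        page.wait_for_selector("[data-testid=chat-reply]")
        reply = page.inner_text("[data-testid=chat-reply]").lower()
        browser.close()

    # Exact-match assertions are brittle for LLM output; check key facts instead.
    overlap = sum(1 for kw in EXPECTED_KEYWORDS if kw in reply)
    assert overlap >= MIN_OVERLAP, f"reply missed expected facts: {reply!r}"
```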
Required Skills and Qualifications:

Technical Skills:
· Strong hands-on expertise with Playwright (TypeScript/JavaScript).
· Experience building custom automation frameworks and utilities from scratch.
· Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
· Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
· Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
· Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
· Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
· Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
· Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
· Proven experience leading QA/Automation teams (4+ engineers).
· Strong documentation, code review, and stakeholder communication skills.
· Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
· Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
· Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
· Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
· Experience with GraphQL, Kafka, or event-driven architecture testing.
· QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
· Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
· Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
· Work on cutting-edge AI platforms shaping the future of [industry/domain].
· Collaborate with world-class AI researchers and engineers.
· Drive the quality of products used by [millions of users / high-impact clients].
· Opportunity to define test automation practices for AI, one of the most exciting frontiers in tech.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a motivated and detail-oriented Software Engineer with 2+ years of experience, primarily in support and maintenance activities. The ideal candidate will be responsible for monitoring, troubleshooting, and resolving issues in production environments, collaborating with cross-functional teams, and ensuring system stability and performance.

Primary Responsibilities:
Provide L1/L2 support for production systems and applications
Monitor system performance and proactively identify issues
Troubleshoot and resolve application- and infrastructure-related problems
Collaborate with development teams to escalate and resolve complex issues
Perform root cause analysis and document findings
Maintain and update support documentation and knowledge base
Participate in deployment activities and post-deployment validation
Ensure adherence to SLAs and timely resolution of support tickets
Automate routine support tasks using scripts or tools (a minimal sketch follows this listing)
Communicate effectively with stakeholders regarding issue status and resolution
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
1+ years of experience as an Application Support Engineer
Equivalent practical experience or certifications may also be considered
Experience with ticketing systems like JIRA, ServiceNow, or similar
Knowledge of scripting languages (e.g., Python, Shell, PowerShell)
Familiarity with monitoring tools (e.g., Splunk, Nagios, Grafana)
Basic understanding of networking and system administration
Basic understanding of the software development lifecycle (SDLC)
Exposure to databases and the ability to write basic SQL queries
Proven problem-solving and analytical skills
Solid communication and documentation abilities

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure, GCP)
Exposure to DevOps tools and CI/CD pipelines

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
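For illustration, a minimal sketch of the kind of routine-support automation mentioned above, using only the Python standard library; the log path, error pattern, and alert threshold are hypothetical.

```python
# Minimal support-automation sketch: scan an application log and summarize error spikes.
# The log path, pattern, and threshold below are hypothetical placeholders.
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/var/log/myapp/application.log")   # hypothetical path
ERROR_PATTERN = re.compile(r"\bERROR\b\s+(?P<code>[A-Z]{3}-\d{3})")
ALERT_THRESHOLD = 10                                 # errors per code before flagging


def summarize_errors(log_path: Path) -> Counter:
    counts: Counter = Counter()
    with log_path.open(encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_PATTERN.search(line)
            if match:
                counts[match.group("code")] += 1
    return counts


if __name__ == "__main__":
    for code, count in summarize_errors(LOG_PATH).most_common():
        flag = "ALERT" if count >= ALERT_THRESHOLD else "ok"
        print(f"{code}: {count} occurrences [{flag}]")
```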
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #Gen

Posted 1 day ago

Apply

18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Brief:
House of Shipping provides business consultancy and advisory services for Shipping & Logistics companies. House of Shipping's commitment to its customers begins with developing an understanding of their business fundamentals. We are hiring on behalf of one of our key US-based clients - a globally recognized service provider of flexible and scalable outsourced warehousing solutions, designed to adapt to the evolving demands of today's supply chains. Currently House of Shipping is looking to identify a high-caliber Warehouse Engineering & Automation Lead. This position is an on-site position based in Hyderabad.

Background and experience:
15–18 years in software engineering roles, with at least 5 years in leadership roles managing cross-functional delivery teams
Experience in designing and scaling platforms for logistics, warehouse automation, or e-commerce
Proven success in leading large product or platform teams through architecture modernization and tech transformation

Job purpose:
To lead the architecture, development, and delivery of enterprise-grade platforms and applications supporting warehouse, logistics, and supply chain operations. This role drives engineering standards, delivery velocity, team growth, and cross-functional alignment with business strategy.

Main tasks and responsibilities:
Own engineering strategy, delivery planning, and execution of large-scale, mission-critical systems across WMS, TMS, and supply chain orchestration platforms
Define and implement best practices in system design, scalability, observability, and modularity
Guide the technical direction of microservices architecture, cloud-native deployments (AWS/GCP), and DevSecOps pipelines
Build engineering roadmaps in collaboration with product, operations, and infrastructure teams
Mentor architects, engineering managers, and tech leads on design decisions, sprint execution, and long-term maintainability
Review and approve solution architectures, integration patterns (REST, EDI, streaming), and platform-wide decisions (a minimal event-streaming sketch follows this listing)
Oversee budgeting, licensing, tooling decisions, and vendor evaluation
Champion a culture of code quality, automated testing, release velocity, and incident-free deployment
Ensure compliance with global security, audit, and data privacy standards (SOC 2, ISO 27001, GDPR where applicable)
Serve as escalation point for cross-platform technical blockers and engineering productivity challenges

Education requirements:
Bachelor's or Master's in Computer Science, Software Engineering, or a related technical field
Preferred: Certifications in Cloud Architecture (AWS/GCP), TOGAF, or Scaled Agile

Competencies and skills:
Technical: Distributed systems, microservices, DevOps, cloud-native development, event-driven architecture
Leadership: Strategic planning, team scaling, stakeholder communication, delivery governance
Tools: Java/Python/Node.js, Kubernetes, Kafka, Git, Terraform, CI/CD (Jenkins/Azure DevOps), logging/monitoring stacks
Strong business acumen in supply chain/logistics workflows and SLAs
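For illustration, a minimal sketch of an event-driven integration of the kind referenced above, using the kafka-python client to publish a warehouse shipment event; the broker address, topic name, and payload schema are hypothetical.

```python
# Minimal event-streaming sketch: publish a shipment-status event to Kafka.
# Assumes `pip install kafka-python` and a broker reachable at the address below.
import json
from datetime import datetime, timezone

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                       # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "event_type": "shipment.status_changed",   # hypothetical event schema
    "shipment_id": "SHP-000123",
    "status": "PICKED",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
}

# Fire-and-forget publish; real systems would add keys, retries, and schema validation.
producer.send("warehouse.shipments", value=event)
producer.flush()
producer.close()
```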

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

Job Title - GCP Administrator
Location - Remote (Hybrid for Chennai & Mumbai)
Experience - 8+ years

We are looking for an experienced GCP Administrator to join our team. The ideal candidate will have strong hands-on experience with IAM administration, multi-account management, BigQuery administration, performance optimization, monitoring, and cost management within Google Cloud Platform (GCP).

Responsibilities:
● Manage and configure roles/permissions in GCP IAM, following the principle of least-privileged access
● Manage the BigQuery service by optimizing slot assignments and SQL queries, adopting FinOps practices for cost control, troubleshooting and resolving critical data queries, etc. (a minimal cost-estimation sketch follows this listing)
● Collaborate with teams such as Data Engineering, Data Warehousing, Cloud Platform Engineering, and SRE for efficient data management and operational practices in GCP
● Create automations and monitoring mechanisms for GCP data-related services, processes, and tasks
● Work with development teams to design the GCP-specific cloud architecture
● Provision and de-provision GCP accounts and resources for internal projects
● Manage and operate multiple GCP subscriptions
● Keep technical documentation up to date
● Proactively stay up to date on GCP announcements, services, and developments

Requirements:
● Must have 5+ years of work experience provisioning, operating, and maintaining systems in GCP
● Must hold a valid certification as either a GCP Associate Cloud Engineer or a GCP Professional Cloud Architect
● Must have hands-on experience with GCP services such as Identity and Access Management (IAM), BigQuery, Google Kubernetes Engine (GKE), etc.
● Must be capable of providing support and guidance on GCP operations and services depending upon enterprise needs
● Must have a working knowledge of Docker containers and Kubernetes
● Must have strong communication skills and the ability to work both independently and in a collaborative environment
● Fast learner and achiever who sets high personal goals
● Must be able to work on multiple projects and consistently meet project deadlines
● Must be willing to work on a shift basis based on project requirements

Good to Have:
● Experience in Terraform automation for GCP infrastructure provisioning
● Experience in Cloud Composer, Dataproc, Dataflow, Storage, and Monitoring services
● Experience in building and supporting any form of data pipeline
● Multi-cloud experience with AWS
● New Relic monitoring

Perks:
● Day off on the 3rd Friday of every month (one long weekend each month)
● Monthly Wellness Reimbursement Program to promote health and well-being
● Paid paternity and maternity leaves

Notice period: Immediate to 30 days
Email to: poniswarya.m@aptita.com
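For illustration, a minimal sketch of a FinOps-style pre-flight check using the google-cloud-bigquery client's dry-run mode to estimate bytes scanned before a query runs; the project ID, table, and byte budget are hypothetical.

```python
# Minimal BigQuery cost-control sketch: dry-run a query and refuse to run it
# if it would scan more data than an agreed budget.
# Assumes `pip install google-cloud-bigquery` and application-default credentials.
from google.cloud import bigquery

MAX_BYTES = 50 * 1024**3  # hypothetical budget: 50 GiB scanned per query

client = bigquery.Client(project="my-analytics-project")   # hypothetical project ID

sql = """
    SELECT order_id, status, updated_at
    FROM `my-analytics-project.sales.orders`                -- hypothetical table
    WHERE updated_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
"""

dry_run = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
estimated = dry_run.total_bytes_processed
print(f"Estimated scan: {estimated / 1024**3:.2f} GiB")

if estimated <= MAX_BYTES:
    rows = client.query(sql).result()        # run for real, within budget
    print(f"Returned {rows.total_rows} rows")
else:
    print("Query exceeds the scan budget; narrow the date range or add partition filters.")
```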

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Job Title: Generative AI Engineer

Job Overview:
We are on the lookout for a creative and technically proficient Generative AI Engineer to join our team. This individual will be responsible for architecting and deploying innovative AI systems centered around large language models (LLMs). The role emphasizes hands-on development in areas like prompt engineering, model customization, and real-world implementation of generative AI capabilities. As part of the team, you will help design and build applications such as AI chat interfaces, intelligent content generators, summarization engines, and other tools that utilize state-of-the-art LLM technologies.

Key Responsibilities:
Build, train, and implement generative AI systems using models such as GPT, BERT, LLaMA, Claude, or similar.
Adapt and fine-tune pre-trained foundation models to meet specific business needs and domains (a minimal LoRA sketch follows this listing).
Integrate generative AI into various products, including conversational agents, summarization tools, and AI content creators.
Conduct exploratory research to remain updated on the latest advancements in LLMs, prompt optimization, and multi-modal AI technologies.
Prepare and manage relevant datasets for model training, evaluation, and testing.
Collaborate with teams from data science, engineering, and product development to ensure smooth integration of AI models into live systems.
Monitor model behavior continuously and make improvements to ensure high accuracy, low latency, and consistent reliability.
Apply responsible AI principles and set up feedback mechanisms to refine model output over time.

Required Skills and Qualifications:
A Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related technical discipline.
Proven experience in applying LLMs and generative AI techniques in production environments.
Strong programming skills in Python and familiarity with machine learning libraries such as PyTorch, TensorFlow, and Hugging Face Transformers.
Solid grounding in NLP methods, including text generation, translation, summarization, and conversational AI.
Hands-on experience with techniques like prompt tuning, LoRA, PEFT, and RLHF for fine-tuning models.
Understanding of cloud-based deployment using platforms such as AWS, GCP, or Azure.
Strong analytical and debugging skills, with attention to performance tuning and cost optimization.
Knowledge of MLOps best practices, including model version control (e.g., MLflow, Git), and container technologies like Docker and Kubernetes is advantageous.
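For illustration, a minimal sketch of a LoRA-style parameter-efficient fine-tuning setup with Hugging Face transformers and peft; the base checkpoint and target modules are hypothetical choices that vary by model family, and the dataset and training loop are omitted.

```python
# Minimal LoRA setup sketch: wrap a causal LM with low-rank adapters via peft.
# Assumes `pip install transformers peft torch`; the checkpoint and target modules
# are illustrative and depend on the model actually used.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # hypothetical checkpoint (gated; any causal LM works)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of total weights

# From here, a standard Trainer or custom loop fine-tunes only the adapter weights,
# and the small adapter can be saved separately with model.save_pretrained(...).
```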

Posted 1 day ago

Apply

0 years

1 Lacs

India

Remote

Selected candidate's day-to-day responsibilities include:
1. Design and develop applications using frontend technologies such as React.js, Angular.js, HTML, and Vue.js
2. Design and develop applications using backend languages such as Python, Node.js, Java, and PHP
3. Develop applications using frameworks such as MERN, MEAN, Flask, Django, Next.js, NestJS, Spring Boot, etc. (a minimal Flask sketch follows this listing)
4. Develop and deploy applications to GCP, AWS, Azure, Linode, and more
5. Develop applications such as web applications, SaaS applications, dashboards, admin dashboards, etc.

Skill(s) required: React.js, Python, Flask, Django, MongoDB, Express.js, Angular, Node.js, Elasticsearch, API, Next.js, Vue.js

Type: Work From Home
Start date: Immediately
Duration: 6 months
Stipend: ₹10,000 to ₹15,000 per month
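For illustration, a minimal sketch of the kind of backend starting point such an internship involves, using Flask; the routes and payload are hypothetical.

```python
# Minimal Flask sketch: a health-check route and one JSON API endpoint.
# Assumes `pip install flask`; run with `python app.py` and visit http://127.0.0.1:5000/health.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # Returning a dict lets Flask serialize it to JSON automatically.
    return {"status": "ok"}


@app.route("/api/projects")
def list_projects():
    # Hypothetical static payload; a real app would query a database here.
    return jsonify([
        {"id": 1, "name": "Admin Dashboard"},
        {"id": 2, "name": "SaaS Billing Portal"},
    ])


if __name__ == "__main__":
    app.run(debug=True)
```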

Posted 1 day ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Bengaluru, Karnataka

On-site

Key Responsibilities:
Deliver training sessions on the following topics:

Basic Database Awareness:
Relational databases: tables, rows, columns
Primary key, foreign key, and data integrity
Introduction to DBMS; setup of local servers (MySQL, SQLite)
Connecting to databases using sqlite3 and mysql-connector (a minimal sqlite3 sketch follows this listing)
SQL queries: SELECT, INSERT, UPDATE, DELETE, JOINs
Introduction to NoSQL (MongoDB, Cassandra)
Basic querying and manipulation in NoSQL
Difference between relational and non-relational databases
Overview of relational vs non-relational data in Azure

Fundamentals of Networking:
Internet basics and core components
Network representations and cable types
Types of networks and networking devices
Introduction to network security and encryption
Awareness of viruses, threats, and network safety

Fundamentals of Cloud:
What is cloud computing: characteristics and usage
Difference between traditional data centers and cloud
Advantages of cloud computing
Cloud models: IaaS, PaaS, SaaS
Introduction to virtualization and hypervisors
Popular cloud platforms: Azure, AWS, GCP (overview)

Other responsibilities:
Manage classroom interactions and practicals
Evaluate student performance and participation
Track attendance and provide feedback to the coordinator
Modify training pace and content based on audience understanding

Requirements:
Bachelor's or Master's degree in Computer Science / IT / related field
Minimum 2–3 years of experience in delivering technical training
Strong hands-on experience with SQL, NoSQL databases, and networking tools
Familiarity with cloud platforms (Azure/AWS) is a must
Excellent communication and presentation skills
Ability to simplify complex topics for beginners
Experience in classroom, college, or corporate training preferred

Preferred Certifications (Optional but a Plus):
Microsoft Certified: Azure Fundamentals
CompTIA Network+
AWS Cloud Practitioner
MongoDB or Oracle SQL certifications

Job Type: Contractual / Temporary
Contract length: 4 days
Pay: ₹400.00 - ₹500.00 per hour
Schedule: Day shift
Experience: Training & development: 2 years (Required); Technical: 2 years (Required)
Location: Bangalore, Karnataka (Required)
Work Location: In person
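For illustration, a minimal sketch of the sqlite3 connectivity topic listed above, using only Python's standard library; the table and sample rows are hypothetical teaching data.

```python
# Minimal sqlite3 teaching sketch: create a table with a primary key, insert rows,
# and run a SELECT with a WHERE clause. Uses only the Python standard library.
import sqlite3

conn = sqlite3.connect(":memory:")   # in-memory DB so the demo leaves no files behind
cur = conn.cursor()

cur.execute(
    """
    CREATE TABLE students (
        id    INTEGER PRIMARY KEY,   -- primary key enforces uniqueness
        name  TEXT NOT NULL,
        marks INTEGER
    )
    """
)
cur.executemany(
    "INSERT INTO students (name, marks) VALUES (?, ?)",
    [("Asha", 82), ("Ravi", 74), ("Meena", 91)],   # hypothetical sample data
)
conn.commit()

cur.execute("SELECT name, marks FROM students WHERE marks >= ? ORDER BY marks DESC", (80,))
for name, marks in cur.fetchall():
    print(name, marks)

conn.close()
```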

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join DAZN – The Ultimate Sports Streaming Experience!

DAZN is revolutionizing the way fans experience sports with cutting-edge streaming technology. As we continue to innovate, we are looking for a Solutions Architect – Streaming & OTT to design and optimize high-performance video streaming architectures. If you have 10+ years of experience in streaming/OTT solutions, encoding, CDN distribution, and playback services, we'd love to hear from you!

📩 Interested? Apply now by sharing your updated resume, current & expected CTC, and notice period. Let's shape the future of sports streaming together! 🚀

Job Title: Solutions Architect – Streaming & OTT
Location: Hyderabad

Role Overview:
We are looking for an experienced Solutions Architect – Streaming & OTT to design, optimize, and support scalable, high-performance video streaming architectures. The ideal candidate will have a deep understanding of end-to-end streaming workflows, encoding/transcoding pipelines, packaging, CDN distribution, and playback services, while ensuring seamless content delivery across a variety of devices and platforms.

Key Responsibilities:
Architect and implement end-to-end streaming solutions, ensuring high availability, low latency, and scalability.
Define technical roadmaps for streaming infrastructure, aligning with business and operational goals.
Optimize video encoding/transcoding pipelines for live and VOD content, ensuring optimal compression efficiency without quality loss (a minimal encoding sketch follows this listing).
Design and implement adaptive bitrate (ABR) streaming strategies to optimize playback across different devices and network conditions.
Architect and integrate multi-CDN strategies, ensuring resilience, redundancy, and global distribution efficiency.
Design and oversee OTT packaging workflows (HLS, DASH, CMAF) and DRM integration for content security.
Provide third-line technical support for streaming technologies, debugging complex playback, latency, and delivery issues.
Work closely with backend, player, and DevOps teams to ensure seamless integration of playback services and analytics solutions.
Stay ahead of emerging trends and advancements in streaming technology, contributing to strategic initiatives and innovation.

Technical Expertise Required:
10+ years of experience in the streaming/OTT industry, with a focus on solution architecture and design.
Proven track record in designing and deploying scalable, high-performance streaming solutions.
Hands-on expertise in video encoding/transcoding (FFmpeg, AWS Media Services, Elemental, Harmonic, etc.).
Strong knowledge of OTT packaging standards (HLS, MPEG-DASH, CMAF) and DRM solutions (Widevine, FairPlay, PlayReady).
Experience working with Content Delivery Networks (CDNs) (Akamai, CloudFront, Fastly, etc.) and designing multi-CDN architectures.
Deep understanding of video player technologies, ABR streaming, and low-latency playback optimizations.
Experience in designing and maintaining backend playback services with APIs for content discovery, recommendations, and analytics.
Familiarity with cloud-based media workflows (AWS, GCP, Azure) and Infrastructure as Code (IaC) methodologies.
Proficiency in networking, HTTP streaming protocols (RTMP, HLS, DASH), and caching strategies for optimal content delivery.
Experience with monitoring and troubleshooting tools (QoE/QoS analytics, log aggregators, and network diagnostics).
Bonus: Prior experience in live sports streaming with expertise in ultra-low-latency streaming (WebRTC, LL-HLS, CMAF-CTE).
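For illustration, a minimal sketch of scripting one rung of an ABR encoding ladder with FFmpeg from Python; the input file, rendition settings, and output paths are hypothetical, and a production pipeline would generate several renditions plus a master playlist.

```python
# Minimal transcoding sketch: produce one 720p HLS rendition of a source file
# by shelling out to FFmpeg. Assumes the `ffmpeg` binary is on PATH.
# Input path, bitrate values, and output directory are hypothetical.
import subprocess
from pathlib import Path

SOURCE = Path("input.mp4")          # hypothetical mezzanine file
OUT_DIR = Path("hls/720p")
OUT_DIR.mkdir(parents=True, exist_ok=True)

cmd = [
    "ffmpeg", "-y",
    "-i", str(SOURCE),
    "-vf", "scale=-2:720",          # 720p height; width auto-chosen to keep aspect ratio even
    "-c:v", "libx264", "-preset", "veryfast", "-profile:v", "main",
    "-b:v", "3000k", "-maxrate", "3300k", "-bufsize", "6000k",
    "-c:a", "aac", "-b:a", "128k",
    "-hls_time", "6",               # 6-second segments
    "-hls_playlist_type", "vod",
    "-f", "hls",
    str(OUT_DIR / "index.m3u8"),
]

subprocess.run(cmd, check=True)
print("Wrote", OUT_DIR / "index.m3u8")
```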

Posted 1 day ago

Apply