0.0 years
0 Lacs
Kolkata, West Bengal
On-site
Job Description: SentientGeeks is seeking a skilled RPA Developer with expertise in Blue Prism to design, develop, and implement robotic process automation solutions.

Key Responsibilities:
• Design, develop, and deploy Blue Prism automation solutions.
• Analyze business processes to identify automation opportunities.
• Develop reusable automation components following best practices.
• Perform testing, debugging, and troubleshooting of RPA bots.
• Collaborate with business and IT teams to optimize automation workflows.
• Maintain and update existing automation solutions.
• Ensure compliance with security and governance standards.

Requirements:
• 3+ years of experience in RPA development using Blue Prism.
• Applicants must be from West Bengal or nearby states (Bihar, Odisha, Jharkhand) or be willing to relocate to Kolkata.
• Strong understanding of the process automation lifecycle and Power Automate.
• Experience in SQL, API integration, and scripting (Python or C#).
• Knowledge of AI/ML-based automation (preferred but not mandatory).
• Experience with or knowledge of Agentic AI is preferred.
• Ability to analyze and document business processes.
• Strong problem-solving and analytical skills.

Why should you join us? Joining SentientGeeks means becoming part of a forward-thinking organization that blends innovation with real-world impact. With a strong reputation for delivering tailored software solutions to global SMEs, SentientGeeks offers a dynamic and collaborative work culture that encourages continuous learning and growth. The company’s diverse technology stack, commitment to Agile best practices, and exposure to modern tools like microservices, cloud platforms, and DevOps pipelines ensure that you stay at the forefront of the industry. More than just technical advancement, SentientGeeks fosters a people-first environment where every idea is valued, career development is supported, and work-life balance is genuinely respected.
Posted 1 day ago
15.0 years
0 Lacs
Chandigarh, India
On-site
Principal Engineer
Experience: 15 - 22 Years
Salary: Competitive
Preferred Notice Period: 60 Days
Location: Bengaluru (Hybrid)
Placement Type: Full-time
(*Note: This is a requirement for one of Uplers' clients)

Must-have skills: C or C++; TCP/IP, SSL/TLS, Deep Packet Inspection, HTTP/HTTPS, Web Application Firewall (WAF), or IPS/IDS; AWS, Azure, Google Cloud, or Kubernetes

One of Uplers' clients is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job Overview
As part of the Inline CASB team, you will have a unique opportunity to work on a world-class CASB solution that provides unparalleled visibility and control for widely used enterprise applications. Netskope Cloud Data Plane engineers architect and design one of the most scalable, high-performance cloud data planes in the world, processing 10+ Gbps of traffic.

What’s in it for you
In this role, you will work on Deep Packet Inspection (DPI) of CASB Inline traffic. You will build core functionality to intercept and inspect CASB Inline traffic, including traffic from Generative AI applications in the data path, invoking essential services like DLP (Data Loss Prevention) and Threat Protection (TSS) and enforcing CASB Inline Real-Time Policies (RTP). You will be instrumental in developing state-of-the-art techniques, including AI/ML, to detect activities and apply advanced policies, all at line rate. This is a high-impact position for a technical leader who excels at solving challenging problems and mentoring a world-class engineering team. If you enjoy diving deep into technical challenges to develop innovative solutions that are scalable, accurate, and high-performing, then this role is for you.
Job Responsibilities
• Understand the various use cases and workflows for native/browser access of SaaS apps and support the app access requirements/use cases via the Netskope reverse proxy solution; maintain and enhance the access control features for the supported SaaS apps.
• Work on re-architecting the deep packet inspection module to make it intelligent and scalable, with the goal of achieving higher accuracy in activity detection across a wide range of SaaS applications.
• Work on identifying a smart, scalable solution to reduce the cost of building and maintaining SaaS app connectors, which are responsible for providing deeper visibility into application activities.
• Work closely with the product management team on support for new apps and on defining new access control use cases.
• Be involved in the complete development lifecycle: understand requirements, define functional specs, develop with high efficacy and quality, and measure efficacy based on production data.
• Identify gaps in existing solutions/processes and bring innovative ideas that help evolve the solution over time.
• Work closely with the technical support team to handle customer escalations; analyze the product gaps that resulted in customer issues and improve signature resiliency and test strategy.

Preferred Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or equivalent strongly preferred.
• Minimum 15 years of work experience.

Preferred Technical Skills (must-have)
• Programming Mastery: Expert proficiency in C/C++ and strong experience with Python.
• Networking Protocol Expertise: Deep understanding of networking protocols, including TCP/IP, HTTP/S, WebSocket, DNS, and TLS/SSL decryption (MITM) techniques. Knowledge of L3 VPNs like IPsec and WireGuard.
• Security Domain Experience (L7 & Network): Proven experience in data plane/data path development for security products (e.g., firewalls, proxies, IDPS, DPI engines).
• Experience in network and web security technologies, including Web Application Firewall (WAF), L7 access policies, web security, IDS/IPS, DNS-based security, and L7 DDoS.
• Must have: Experience with HTTP proxy development.
• System Architecture: Strong understanding of computer architecture concepts like multi-threading, CPU scheduling, and memory management. Good understanding of algorithms and data structures for implementing real-time inline data processing. Good hands-on experience with and knowledge of Linux at a systems level.
• Troubleshooting & Debugging: Strong analytical and troubleshooting skills using debuggers like gdb and tools like Valgrind. Hands-on experience with packet capture technologies (e.g., tcpdump, Wireshark, libpcap) for network traffic analysis and troubleshooting.
• Cloud & Containerization: Strong knowledge of cloud solution architectures (AWS, Azure, GCP). Direct experience with container orchestration (Kubernetes) and Container Network Interface (CNI) plugins. Familiarity with inter-service communication protocols in cloud environments (e.g., gRPC, REST).
• Experience in a CASB, ZTNA, or SSE security environment.
• Contributions to open-source projects.

Additional Technical Skills
• SASE Architecture: Experience working within a SASE (Secure Access Service Edge) architecture is a major plus.
• Authentication & Access Control: Strong knowledge of authentication technologies, including Identity and Access Management, SSO, SAML, OpenID, OAuth2, and MFA.
• Generative AI (GenAI) Platforms: Familiarity with GenAI platforms and APIs and their communication patterns (e.g., OpenAI, Anthropic, Gemini).
• DPDK and VPP architecture knowledge is a plus.
• Testing Methodologies: A proponent of Test-Driven Development (TDD) and knowledge of various unit testing frameworks.
• Advanced Content Analysis: Experience with advanced content analysis or true file type detection.
• Inter-Service Communication: Familiarity with modern cloud protocols like gRPC and REST.
• Security Domain Experience: Experience in a CASB, ZTNA, or SSE security environment.
• Open-Source Contributions: A history of contributions to open-source projects.

How to apply for this opportunity — Easy 3-Step Process:
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
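To make the "deep packet inspection" phrase above concrete, here is a deliberately tiny, hedged sketch of the parse-and-classify idea behind DPI-style activity detection. It is not Netskope's implementation: real inline DPI operates on decrypted TLS streams at line rate in C/C++; the app signatures and heuristics below are hypothetical stand-ins.

```python
# Toy DPI step: parse a raw HTTP request, extract the Host header, and map it
# to a SaaS app plus a coarse activity. Signatures below are hypothetical.
APP_SIGNATURES = {
    "api.openai.com": "OpenAI",
    "www.dropbox.com": "Dropbox",
}

def classify_request(raw: bytes) -> dict:
    """Parse an HTTP/1.1 request and classify app + coarse activity."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="replace")
    lines = head.split("\r\n")
    method, path, _ = lines[0].split(" ", 2)
    headers = dict(
        (k.strip().lower(), v.strip())
        for k, v in (l.split(":", 1) for l in lines[1:] if ":" in l)
    )
    host = headers.get("host", "")
    app = APP_SIGNATURES.get(host, "unknown")
    # Coarse activity heuristic: POST/PUT suggests an upload/send action.
    activity = "upload" if method in ("POST", "PUT") else "browse"
    return {"app": app, "activity": activity, "path": path}

request = (
    b"POST /v1/chat/completions HTTP/1.1\r\n"
    b"Host: api.openai.com\r\n"
    b"Content-Type: application/json\r\n\r\n"
    b'{"model": "gpt-4o"}'
)
print(classify_request(request))
```

A production engine would additionally reassemble TCP streams, handle HTTP/2 and WebSocket framing, and feed matches into policy enforcement (RTP) and DLP services.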
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Engineer – AEP Competency
Years of Experience: 5-8 years
Location: Remote
Job Type: Permanent

Summary: Experienced software engineer using Big Data and cloud technologies to build custom solutions and frameworks for Adobe Experience Cloud customers.

What you will do:
• Develop low-level designs as per the approved tech specification or equivalent inputs.
• Build and deliver technology deliverables as per tech specifications/low-level designs or equivalent inputs.
• Build and deliver technology deliverables compliant with functional requirements.
• Build and deliver technology deliverables in accordance with best practices to ensure compliance with non-functional aspects like scalability, maintainability, and performance.
• Take technology deliverables through the full development lifecycle.
• Maintain and support existing solutions and technology frameworks.
• Attend regular scrum events or equivalent and provide updates on the deliverables.
• Work independently with no or minimal supervision.

Requirements: 6+ years of professional software engineering, mostly focused on the following:
• Developing ETL pipelines involving big data.
• Developing data processing/analytics applications, primarily using PySpark.
• Experience developing applications on cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
• Clear understanding of, and ability to implement, distributed storage, processing, and scalable applications.
• Experience working with SQL and NoSQL databases.
• Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases.
• Proficiency in writing distributed and scalable data processing code using PySpark, Python, and related libraries.
• Experience developing applications that consume services exposed as REST APIs.

Special consideration given for:
• Experience working with container-orchestration systems like Kubernetes.
• Experience working with enterprise-grade ETL tools.
• Experience and knowledge with Adobe Experience Cloud solutions.
• Experience and knowledge with web analytics or digital marketing.
• Experience and knowledge with Google Cloud platforms.
• Experience and knowledge with data science, ML/AI, R, or Jupyter.
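As a minimal illustration of the extract-transform-load pattern this role centers on, here is a sketch using the standard library's sqlite3 as a stand-in warehouse so the example stays self-contained. In practice this step would be a PySpark job writing to cloud storage or a DWH; the event schema and table name below are hypothetical.

```python
# Toy ETL: extract raw events, transform (filter + normalize), load into SQL.
import sqlite3

def run_etl(events: list[dict]) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE page_views (user_id TEXT, page TEXT, ms INTEGER)")
    # Transform: keep valid events only, normalize the page path.
    rows = [
        (e["user_id"], e["page"].lower().rstrip("/"), int(e["duration_ms"]))
        for e in events
        if e.get("user_id") and e.get("duration_ms", 0) > 0
    ]
    conn.executemany("INSERT INTO page_views VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

events = [
    {"user_id": "u1", "page": "/Home/", "duration_ms": 1200},
    {"user_id": "u1", "page": "/pricing", "duration_ms": 300},
    {"user_id": None, "page": "/home", "duration_ms": 500},  # dropped: no user
]
conn = run_etl(events)
total = conn.execute("SELECT COUNT(*), SUM(ms) FROM page_views").fetchone()
print(total)  # (2, 1500)
```

The same filter-and-normalize transform translates almost line for line into a PySpark DataFrame pipeline (`filter`, `withColumn`, `write`), which is where the distributed-processing requirement comes in.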
Posted 1 day ago
7.0 years
0 Lacs
India
On-site
Designation: .NET Engineer
Preferred Experience: 7+ years

Responsibilities:
• Engage directly with Azure developers and IT professionals in forums to answer technical questions and help solve technical problems.
• Solve highly complex problems involving broad, in-depth product knowledge or an in-depth product specialty; this may include support of additional product lines.
• Develop code samples, quick-starts, and how-to guides to help customers understand complex cloud scenarios.
• Collaborate with internal teams to produce software design and architecture.
• Write clean, scalable code using .NET programming languages.
• Test and deploy applications and systems.
• Revise, update, refactor, and debug code.
• Improve existing software.
• Develop documentation throughout the software development life cycle (SDLC).
• Serve as an expert on applications and provide technical support.

Requirements and skills:
• Experience as an ASP.NET Core developer, along with skills in JavaScript, CSS, and Razor Pages.
• Experience with programming and scripting languages/frameworks such as .NET Core, ASP.NET, and the .NET Framework.
• Experience building data pipelines (ETL).
• Expertise in SQL queries.
• Experience working with .NET, WinForms, and WPF.
• Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, and RBAC.

Good to have:
• Knowledge of working with Entity Framework.
• Experience working with and deploying Azure web services.
• Experience with machine learning is a plus.
• Exposure to SSIS is a plus.

Company Overview: Aventior is a leading provider of innovative technology solutions for businesses across a wide range of industries. At Aventior, we leverage cutting-edge technologies like AI, ML Ops, DevOps, and many more to help our clients solve complex business problems and drive growth.
We also provide a full range of data development and management services, including Cloud Data Architecture, Universal Data Models, Data transformation and ETL, Data Lakes, User Management, Analytics and visualization, and automated data capture (for scanned documents and unstructured/semi-structured data sources). Our team of experienced professionals combines deep industry knowledge with expertise in the latest technologies to deliver customized solutions that meet the unique needs of each of our clients. Whether you are looking to streamline your operations, enhance your customer experience, or improve your decision-making process, Aventior has the skills and resources to help you achieve your goals. We bring a well-rounded cross-industry and multi-client perspective to our client engagements. Our strategy is grounded in design, implementation, innovation, migration, and support. We have a global delivery model, a multi-country presence, and a team well-equipped with professionals and experts in the field.
Posted 1 day ago
15.0 years
0 Lacs
India
Remote
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: Any Infosys DC

Cloud Application Principal Engineer with a skill set spanning database development, architectural design, implementation, and performance tuning across both on-premise and cloud technologies.

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
Cloud-Native Architecture & Leadership:
• Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless).
• Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk).
• Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
Full-Stack Development:
• Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication).
• Build modern frontends with ReactJS (state management, performance optimization).
• Design REST/gRPC APIs and event-driven systems (Kafka, SQS).

DevOps & Observability:
• Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines.
• Implement Infrastructure as Code (IaC) using Terraform/CloudFormation.
• Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.

Database & Performance Engineering:
• Optimize PostgreSQL for high throughput, replication, and low-latency queries.
• Troubleshoot database bottlenecks, caching (Redis), and connection pooling.
• Design data migration strategies (on-premise → cloud).

Mentorship & Innovation:
• Mentor junior engineers and conduct architecture reviews.
• Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations).
• Collaborate with CTO/Architects on long-term technical roadmaps.
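The caching responsibility mentioned above follows a standard read-through pattern (Redis in production). Here is a hedged, self-contained sketch of that pattern using a tiny in-process TTL cache; the query function, key format, and TTL are hypothetical.

```python
# Read-through TTL cache: serve from cache while fresh, otherwise hit the
# database and store the result with an expiry.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock makes expiry testable
        self.store = {}             # key -> (value, expires_at)

    def get_or_load(self, key, loader):
        entry = self.store.get(key)
        now = self.clock()
        if entry and entry[1] > now:
            return entry[0]                       # cache hit
        value = loader(key)                       # cache miss: query the DB
        self.store[key] = (value, now + self.ttl)
        return value

calls = []
def slow_db_query(key):
    calls.append(key)               # stands in for a PostgreSQL round trip
    return f"row-for-{key}"

cache = TTLCache(ttl_seconds=30)
print(cache.get_or_load("user:42", slow_db_query))  # loads from "DB"
print(cache.get_or_load("user:42", slow_db_query))  # served from cache
print(len(calls))  # 1 — only one database round trip
```

The design choice worth noting is the injectable clock: expiry logic becomes deterministic under test, the same reason production caches expose TTLs explicitly rather than relying on wall-clock reads scattered through the code.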
Posted 1 day ago
7.0 years
0 Lacs
India
Remote
Senior DevOps (Azure, Terraform, Kubernetes) Engineer
Location: Remote (initial 2–3 months in the Abu Dhabi office, then remote from India)
Type: Full-time | Long-term | Direct Client Hire
Client: Abu Dhabi Government

About The Role
Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, and Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in Azure cloud and DevOps practices.

Key Responsibilities
• Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps, and AKS
• Develop and maintain Infrastructure-as-Code using Terraform
• Manage container orchestration environments using Kubernetes
• Ensure cloud infrastructure is optimized, secure, and monitored effectively
• Collaborate with data science teams to support ML model deployment and operationalization
• Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow)
• Build and maintain automated ML pipelines to streamline model lifecycle management

Required Skills
• 7+ years of experience in DevOps and/or MLOps roles
• Proficient in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
• Strong expertise in Terraform and cloud-native infrastructure (AWS preferred)
• Hands-on experience with Kubernetes, Docker, and microservices
• Solid understanding of cloud networking, security, and monitoring
• Scripting proficiency in Bash and Python

Preferred Skills
• Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines
• Knowledge of model performance monitoring and ML system reliability
• Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP

Skills: argo, terraform, kubernetes, azure
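The data-drift monitoring called out in the MLOps responsibilities can be illustrated with a deliberately simple check: compare a live feature sample against the training-time reference and flag drift when the means diverge by several reference standard deviations. The threshold and sample values below are hypothetical; production systems use richer tests (PSI, KS) via tooling such as MLflow or Evidently.

```python
# Minimal mean-shift drift detector for a single numeric feature.
from statistics import mean, stdev

def mean_shift_drift(reference: list[float], live: list[float],
                     z_threshold: float = 3.0) -> bool:
    ref_mean, ref_std = mean(reference), stdev(reference)
    if ref_std == 0:
        return mean(live) != ref_mean
    # How many reference standard deviations has the live mean moved?
    z = abs(mean(live) - ref_mean) / ref_std
    return z > z_threshold

reference = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # training distribution
stable    = [10.1, 9.9, 10.4, 10.0]               # similar live traffic
shifted   = [25.0, 26.5, 24.8, 25.7]              # e.g. upstream unit change

print(mean_shift_drift(reference, stable))   # False
print(mean_shift_drift(reference, shifted))  # True
```

A pipeline would run this per feature on a schedule and page or trigger retraining when drift persists, which is exactly the "model lifecycle management" automation the posting describes.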
Posted 1 day ago
15.0 years
0 Lacs
India
Remote
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: Any PAN India - Hybrid work model

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
✅ Cloud-Native Architecture & Leadership:
• Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless).
• Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk).
• Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.

✅ Full-Stack Development:
• Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication).
• Build modern frontends with ReactJS (state management, performance optimization).
• Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
✅ DevOps & Observability:
• Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines.
• Implement Infrastructure as Code (IaC) using Terraform/CloudFormation.
• Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.

✅ Database & Performance Engineering:
• Optimize PostgreSQL for high throughput, replication, and low-latency queries.
• Troubleshoot database bottlenecks, caching (Redis), and connection pooling.
• Design data migration strategies (on-premise → cloud).

✅ Mentorship & Innovation:
• Mentor junior engineers and conduct architecture reviews.
• Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations).
• Collaborate with CTO/Architects on long-term technical roadmaps.
Posted 1 day ago
0 years
0 Lacs
India
On-site
About Us
The name Interview Kickstart might have given you a clue. But here's the 30-second elevator pitch: interviews can be challenging. And when it comes to top tech companies like Google, Apple, Facebook, and Netflix, they can be downright brutal. Most candidates don't make it simply because they don't prepare well enough. Interview Kickstart (IK) helps candidates nail the most challenging tech interviews. To keep up with upcoming trends, we are launching our new Agentic AI course.

Requirements
Technical expertise in at least one of the following topics:
• Fundamentals of Agentic AI
• Key AI agent frameworks: AutoGen, LangChain, CrewAI, LangGraph
• Understanding multi-agent systems: reflection, planning, and task automation
• Introduction to the ReAct (Reasoning + Action) framework
• Prompt engineering & function calling
• Building a simple AI agent (code-based for SWEs / low-code for other tech domains): develop a modular AI agent using LangGraph or CrewAI capable of reasoning, decision-making, and tool usage; understand the role of graph-based agent workflows (LangGraph) and multi-agent collaboration (CrewAI); deploy an interactive AI assistant that can execute tasks autonomously
• Building applications with LLMs & agents (advanced): AI agent memory & long-term context; multi-agent collaboration & orchestration; deployment (LLMOps, LangChain, LlamaIndex, etc.); emerging trends: LLMOps, guardrails, and multi-agent systems
• Evaluating & optimizing AI agents: performance & cost efficiency; AI agent performance monitoring & logging; optimizing inference speed & model costs; fine-tuning vs. prompt engineering trade-offs; evaluating agent effectiveness with human feedback
• Designing robust and scalable AI systems for modern applications (for SWEs): introduction to AI system design (scalability, reliability, performance, and cost optimization); common design patterns for AI applications (pipeline, event-driven, and microservices); system architecture for LLM applications (inference engine, data pipeline, API layer, and frontend integration); AI-specific challenges (managing large datasets, optimizing latency, and handling model updates); advanced topics (LLMOps, multi-model orchestration, and AI system security); evaluating AI systems (throughput, reliability, accuracy, and cost-efficiency)
• Building advanced agents (code-based for SWEs): build an advanced horizontal multi-agent system for use cases like: a multi-agent system that automates DevOps workflows, including CI/CD monitoring, infrastructure scaling, and system health diagnostics; a multi-agent AI healthcare assistant to automate medical FAQs, appointment scheduling, and patient history retrieval; an agentic system that analyzes application security risks, detects vulnerabilities, and suggests fixes; an AI-powered multi-agent system that analyzes, summarizes, and extracts key insights from legal documents; an AI-driven multi-agent system for supply chain automation, handling inventory management, demand forecasting, and logistics tracking
• Agentic AI for PMs: build agentic AI systems using low-code/no-code tools for use cases relevant to PMs, such as an AI-powered feature prioritization tool, customer sentiment analysis & roadmap alignment, or AI-driven competitive landscape analysis
• Agentic AI for TPMs: build agentic AI systems using low-code/no-code tools for use cases relevant to TPMs, such as an AI-powered stakeholder management bot, a multi-agent AI system for program risk management, or an AI-driven engineering capacity & resource allocation agent
• Agentic AI for EMs: build agentic AI systems using low-code/no-code tools for use cases relevant to EMs, such as a multi-agent system for engineering productivity & burnout monitoring, a multi-agent AI system for engineering roadmap & strategy planning, or an AI agent for cloud cost optimization in engineering workloads

Preferred Qualifications:
• Prior experience building and deploying LLM or agent-based applications in real-world settings
• Strong proficiency with agent frameworks like LangGraph, CrewAI, or LangChain (code-based for SWEs and low-code or no-code for tech domains)
• Strong understanding of system design principles, especially in AI/ML-based architectures
• Demonstrated ability to explain complex technical topics to diverse audiences
• Experience teaching, mentoring, or creating content for working professionals in tech
• Excellent communication and collaboration skills, with a learner-first mindset
• Bonus: Contributions to open-source AI projects, publications, or prior experience with AI upskilling programs

Responsibilities:
• Instruction Delivery: Conduct lectures, workshops, and interactive sessions to teach machine learning principles, algorithms, and methodologies. Instructors may use various teaching methods, including lectures, demonstrations, hands-on exercises, and group discussions.
• Industry Engagement: Stay current with the latest trends and advancements in machine learning and related fields, engage with industry professionals, and collaborate on projects or internships to provide students with real-world experiences.
• Research and Development: Conduct research in machine learning and contribute to developing new techniques, models, or applications.
• Constantly improve session flow and delivery by working with other instructors, subject matter experts, and the IK team.
• Help the IK team onboard and train other instructors and coaches.
• Have regular discussions with IK's curriculum team on evolving the curriculum.
• Should be willing to work on weekends/evenings and be available per the Pacific time zone.
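The ReAct (Reasoning + Action) framework named in the syllabus can be sketched as a loop of thought → action → observation. Below is a toy, offline version: the "model" is a scripted stub, and the single tool is a hypothetical calculator. In a real agent the thought/action pairs come from an LLM (e.g. via LangGraph or CrewAI) and tools are real function calls.

```python
# Toy ReAct loop with a scripted model stub and one tool.
def calculator(expr: str) -> str:
    # Hypothetical tool: evaluate simple arithmetic (demo-grade guard only).
    allowed = set("0123456789+-*/(). ")
    assert set(expr) <= allowed, "unsupported expression"
    return str(eval(expr))

TOOLS = {"calculator": calculator}

# Scripted stand-in for LLM output: (thought, action, action_input) steps.
SCRIPT = [
    ("I need to compute the total cost.", "calculator", "3 * (10 + 2)"),
]

def react_loop(question: str) -> str:
    observation = None
    for thought, action, action_input in SCRIPT:
        print(f"Thought: {thought}")            # reasoning step
        print(f"Action: {action}[{action_input}]")
        observation = TOOLS[action](action_input)  # act, then observe
        print(f"Observation: {observation}")
    return f"Final Answer: {observation}"

print(react_loop("What is 3 boxes at $12 each?"))  # Final Answer: 36
```

The point of the pattern is that the model sees each observation before producing its next thought, so tool results ground the reasoning instead of the model guessing.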
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Job Role: Senior Backend Developer
Job Type: Full-time
Work mode: Remote

We are seeking a Senior Backend Developer with deep expertise in Python and hands-on experience building and deploying AI-powered systems. This role is ideal for someone who can architect and develop scalable backends that support LLM-based and conversational AI applications in production environments.

Core Requirements:
• 5+ years of software development experience, with at least 2–3 years working on AI/ML applications.
• Advanced Python proficiency, with familiarity across its AI/data science ecosystem.
• Strong understanding of API design, system architecture, and building high-performance, low-latency backend systems.
• Experience integrating and supporting LLM workflows, including prompt engineering, RAG pipelines, and model fine-tuning.
• Practical experience integrating with STT/TTS APIs (e.g., OpenAI TTS, Azure Speech, ElevenLabs).
• Familiarity with conversational AI components like NLU and dialogue management.

Preferred Skills:
• Experience with LangChain or LangGraph to support advanced AI agent workflows.
• Comfortable working with cloud infrastructure (especially AWS), Docker/Kubernetes, and CI/CD pipelines.
• Hands-on experience with vector databases (e.g., Pinecone, Qdrant, pgvector, Milvus).
• Bonus: Exposure to frontend technologies (e.g., React/Vue) and real-time voice/audio streaming applications.
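The vector databases listed above (Pinecone, Qdrant, pgvector, Milvus) all serve the same core operation that RAG pipelines depend on: rank stored embeddings by similarity to a query embedding. Here is a hedged, self-contained sketch of that operation with cosine similarity; the 3-d vectors are toy stand-ins for real model embeddings.

```python
# Brute-force cosine-similarity retrieval over a tiny in-memory "index".
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    # Rank documents by similarity to the query embedding, best first.
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.9, 0.1],
    "returns-howto": [0.7, 0.3, 0.2],
}
query = [0.85, 0.15, 0.05]  # embedding of "how do I return an item?"
print(top_k(query, index))  # most similar docs first
```

A production vector database replaces the brute-force `sorted` with an approximate-nearest-neighbour index (HNSW, IVF) so the same ranking scales to millions of embeddings.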
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company
RADcube – A Stlogics Company is dedicated to innovation in AI & Data. Our mission is to leverage cutting-edge technology to create impactful solutions that drive business success. We foster a collaborative and inclusive culture where creativity and teamwork thrive.

About the Role
We are seeking a highly skilled ML/AI Developer to join our growing team in India. This role involves developing machine learning and deep learning models, building conversational AI applications, and deploying solutions using cloud-native tools.

Work Location: Hyderabad (HITEC City)
Shift Timings: EST time zone
Experience: Minimum 5 years

Responsibilities
• Design and deploy ML/AI models using CNNs, RNNs, Seq2Seq, and LLMs.
• Develop and fine-tune NLP-driven solutions and integrate them into chatbot platforms.
• Build chatbot solutions using Microsoft Bot Framework, Dialogflow, or OpenAI APIs.
• Preprocess and engineer datasets using Python for training and evaluation.
• Deploy AI models and services using AWS or Azure.
• Collaborate cross-functionally with engineering, product, and DevOps teams.
• Explore and apply RLHF, prompt engineering, and newer AI optimization strategies.
• Follow Agile development methodology and participate in sprints.

Qualifications
• Bachelor’s or Master’s in ML Engineering (preferred) or Data Science.
• 5–6 years of experience in ML/AI engineering or data science roles.
• Strong Python programming skills.
• Solid knowledge of algorithms like KNN, SVM, and Naive Bayes.
• Proven work with NLP, CNNs, LLMs, and deep learning architectures.
• Experience working with cloud platforms – AWS or Azure.
• Prior experience integrating chatbots or conversational AI in production.

Required Skills
• Familiarity with Microsoft Bot Framework, Dialogflow, or OpenAI APIs.
• Knowledge of model versioning, monitoring, or MLOps practices.
• Bonus: Open-source contributions or AI research experience.

Equal Opportunity Statement
We are committed to diversity and inclusivity in our hiring practices.
We encourage applications from individuals of all backgrounds and experiences.
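Of the classic algorithms the qualifications list (KNN, SVM, Naive Bayes), k-nearest-neighbours is the easiest to show end to end. Below is a minimal pure-Python sketch with a toy 2-d dataset; in practice one would use scikit-learn's `KNeighborsClassifier`.

```python
# Minimal k-nearest-neighbours classifier: majority vote among the k closest
# training examples by Euclidean distance.
from collections import Counter
from math import dist

def knn_predict(train: list[tuple[list[float], str]],
                point: list[float], k: int = 3) -> str:
    neighbours = sorted(train, key=lambda ex: dist(ex[0], point))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-d dataset: two well-separated clusters.
train = [
    ([1.0, 1.0], "cat"), ([1.2, 0.8], "cat"), ([0.9, 1.1], "cat"),
    ([5.0, 5.0], "dog"), ([5.2, 4.9], "dog"), ([4.8, 5.1], "dog"),
]
print(knn_predict(train, [1.1, 1.0]))  # cat
print(knn_predict(train, [5.1, 5.0]))  # dog
```

KNN has no training phase at all, which is why it shows up in fundamentals interviews: the entire "model" is the stored data plus a distance metric and a vote.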
Posted 1 day ago
5.0 - 10.0 years
20 - 27 Lacs
Navi Mumbai, Bengaluru, Mumbai (All Areas)
Work from Office
Greetings! This is regarding a job opportunity for AI Lead - Agentic AI & RAG with Datamatics Global Services Ltd.
Position: AI Lead - Agentic AI & RAG
Website: https://www.datamatics.com/
Job Location: Mumbai/Bangalore
Job Description: We are seeking a skilled AI Engineer to design and deliver a scalable, secure, and intelligent chatbot solution built entirely on the Microsoft Azure ecosystem. The solution will leverage Azure OpenAI, Azure Foundry, and Azure Cognitive Search to power a guided buying assistant with enterprise-grade compliance and AI-driven intelligence.
Key Responsibilities & Required Expertise:
1. Deep hands-on experience with Retrieval-Augmented Generation (RAG) using Azure Cognitive Search and Azure Cosmos DB, including embedding generation, chunking, indexing, and semantic retrieval.
2. Strong knowledge of agentic AI design using Semantic Kernel, including task planning, tool/function calling, memory management, and autonomous reasoning flows.
3. Proven experience implementing multi-turn conversational agents using Azure OpenAI Service (GPT-4/4o) in inference-only mode, with secure prompt and session management.
4. Skill in designing and optimizing prompt-engineering strategies, including system messages, dynamic templating, and user-intent extraction.
5. Familiarity with Azure AI Studio and Azure Machine Learning for managing model deployments, testing, and monitoring.
6. Strong understanding of embedding models within Azure (e.g., text-embedding-ada-002 or Foundry-specific embeddings) for knowledge-base enrichment and document understanding.
7. Expertise in building secure, compliant AI systems aligned with SOC 2, GDPR, and enterprise RBAC standards, ensuring no client data is used for training or fine-tuning.
8. Proficiency in Python, particularly with Azure SDKs, Semantic Kernel, and REST APIs for integrating AI into enterprise workflows.
9. Experience embedding AI-driven workflows into Azure-hosted applications, including React JS frontends and OutSystems low-code environments.
10. Ability to collaborate closely with architects, product managers, and UX teams to translate procurement workflows into intelligent assistant behavior.
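The RAG loop named in item 1 (chunk, embed, index, retrieve) can be sketched in a few lines of plain Python. This is a toy illustration only: the bag-of-words "embedding" and in-memory list stand in for the Azure OpenAI embedding model and Azure Cognitive Search index, and the sample documents are invented.

```python
# Toy RAG retrieval sketch: chunk documents, "embed" them, index the
# vectors, then return the chunks most similar to a query. In production
# the embeddings would come from an Azure OpenAI embedding model and the
# index would live in Azure Cognitive Search.
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into overlapping word chunks (stand-in for token-based chunking)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def embed(text):
    """Hypothetical embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(docs):
    """'Index' each chunk alongside its vector, as the search service would."""
    return [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(index, query, k=2):
    """Return the top-k chunks by cosine similarity to the query."""
    qv = embed(query)
    return [c for c, v in sorted(index, key=lambda cv: -cosine(qv, cv[1]))[:k]]

index = build_index([
    "purchase orders above 10000 USD require director approval before submission",
    "laptops are ordered through the standard catalog and ship within five days",
])
print(retrieve(index, "who approves a large purchase order?", k=1))
```

In a guided buying assistant, the retrieved chunks would then be injected into the chat prompt as grounding context for the model's answer.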
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad and Chennai (immediate joiners)
Experience: 3 to 5 years
Mandatory skills: MLOps and the model lifecycle, Python, PySpark, GCP (BigQuery, Dataproc & Airflow), and CI/CD
Required Skills and Experience:
- Strong programming skills: proficiency in languages like Python, with experience in libraries like TensorFlow, PyTorch, or scikit-learn.
- Cloud computing: deep understanding of GCP services relevant to ML, such as Vertex AI, BigQuery, Cloud Storage, Dataflow, Dataproc, and others.
- Data science fundamentals: solid foundation in machine learning concepts, statistical analysis, and data modeling.
- Software engineering principles: experience with software development best practices, version control (e.g., Git), and testing methodologies.
- MLOps: familiarity with MLOps principles and practices.
- Data engineering: experience in building and managing data pipelines.
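The "model lifecycle + CI/CD" combination above usually boils down to an automated gate: train a candidate, evaluate it, and promote it only if it beats the current champion. A minimal framework-free sketch follows; the `registry` structure and `promote_if_better` name are illustrative, not from any specific tool, and in the stack above this logic would typically run as an Airflow task calling Vertex AI or BigQuery.

```python
# Toy model-lifecycle gate: train a candidate, evaluate on a holdout set,
# and register it as a new version only if it improves on the champion.

def train(data):
    """'Train' a toy threshold classifier: midpoint between class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return {"threshold": (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2}

def evaluate(model, data):
    """Accuracy of the threshold rule on held-out data."""
    correct = sum((x > model["threshold"]) == bool(y) for x, y in data)
    return correct / len(data)

def promote_if_better(registry, candidate, metric):
    """Append a new version to the registry only if the metric improves."""
    champion = registry[-1] if registry else None
    if champion is None or metric > champion["metric"]:
        registry.append({"version": len(registry) + 1,
                         "model": candidate, "metric": metric})
        return True
    return False

train_set = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
holdout = [(0.15, 0), (0.85, 1)]
registry = []
model = train(train_set)
acc = evaluate(model, holdout)
promoted = promote_if_better(registry, model, acc)
print(promoted, registry[-1]["version"], acc)   # -> True 1 1.0
```

The same gate shape appears in real pipelines regardless of framework: the CI/CD system runs train/evaluate, and only a passing metric triggers deployment of the new model version.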
Posted 1 day ago
3.0 - 5.0 years
5 - 8 Lacs
Chennai, Perungudi
Work from Office
Job Brief
We are seeking an AI Lead Engineer to join our Digital Engineering team, focusing exclusively on Innovation Initiatives. As part of our dynamic group, you will contribute to cutting-edge AI-driven platforms and future-trend projects. The ideal candidate possesses deep expertise in machine learning and AI, with a proven record of developing scalable solutions. This role demands a delivery-focused mindset, emphasizing quick, incremental outputs in an agile environment. Priority will be given to candidates who have demonstrated experience in building small applications using existing libraries and datasets, creating layers or wrappers to develop innovative solutions. We value the ability to integrate and unify existing technologies rather than developing from scratch, as this approach aligns with our fast-paced, innovation-driven environment. You'll balance innovative thinking with practical implementation, driving substantial business impact through rapid prototyping and deployment of AI solutions.
Responsibilities
- Develop cutting-edge AI solutions for future trends
- Rapidly prototype and deliver impactful features (DemoBytes)
- Integrate existing libraries/datasets for novel applications
- Develop select features from scratch when necessary
- Collaborate cross-functionally on AI innovations
- Proactively improve systems based on AI trends
- Participate in agile development processes
- Deliver frequent, incremental outputs
- Balance innovation with practical implementation
Requirements and Skills
- Master's in Computer Science, Engineering, AI, or a related field
- 7+ years of experience developing and deploying machine learning models and AI solutions; has led a team of at least 5
- Strong proficiency in Python; familiarity with Java and R
- Expertise in machine learning, deep learning, natural language processing (NLP), and computer vision
- Hands-on experience with AI frameworks such as TensorFlow, PyTorch, or Keras
- Solid understanding of data structures, algorithms, and software design principles
- Knowledge of RESTful APIs and microservices architecture
- Familiarity with agile development methodologies
- Strong problem-solving skills and ability to work effectively in a team environment
- Excellent communication skills to explain complex AI concepts to both technical and non-technical stakeholders
Preferred Qualifications
- Experience with reinforcement learning and generative AI models
- Knowledge of MLOps practices and tools (e.g., MLflow, Kubeflow)
- Experience with database systems (SQL and NoSQL)
- Contributions to open-source AI projects or research publications in the field of AI
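The "layers or wrappers over existing libraries" approach the brief emphasizes is essentially the adapter pattern: one thin interface unifying heterogeneous backends. A minimal sketch follows; `SentimentLib`, `VisionLib`, and `Predictor` are hypothetical stand-ins, not real libraries.

```python
# Sketch of the wrapper/integration approach: expose one predict() API
# over two pre-existing model interfaces instead of building from scratch.
# SentimentLib and VisionLib stand in for third-party libraries.

class SentimentLib:          # stand-in for an existing NLP library
    def run(self, text):
        return "positive" if "good" in text.lower() else "negative"

class VisionLib:             # stand-in for an existing CV library
    def infer(self, pixels):
        return "bright" if sum(pixels) / len(pixels) > 128 else "dark"

class Predictor:
    """Thin wrapper exposing one predict() over heterogeneous backends."""
    def __init__(self, backend):
        self.backend = backend

    def predict(self, payload):
        # Adapt to whichever method the wrapped library exposes.
        if hasattr(self.backend, "run"):
            return self.backend.run(payload)
        return self.backend.infer(payload)

models = {"sentiment": Predictor(SentimentLib()),
          "vision": Predictor(VisionLib())}
print(models["sentiment"].predict("good service"))   # -> positive
print(models["vision"].predict([200, 220, 210]))     # -> bright
```

The payoff is that downstream code (a DemoByte, a prototype UI) calls one interface, so backends can be swapped without rework.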
Posted 1 day ago
15.0 years
0 Lacs
Vishakhapatnam, Andhra Pradesh, India
On-site
Principal Engineer
Experience: 15 - 22 years
Salary: Competitive
Preferred Notice Period: 60 days
Opportunity Type: Bengaluru (Hybrid)
Placement Type: Full-time
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: C OR C++; TCP/IP OR SSL OR TLS OR deep packet inspection OR HTTP OR HTTPS OR Web Application Firewall (WAF) OR IPS/IDP; AWS OR Azure OR Google Cloud OR Kubernetes
One of Uplers' clients is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.
Job Overview
As part of the Inline CASB team, you will have a unique opportunity to work on a world-class CASB solution that provides unparalleled visibility and control for widely used enterprise applications. Netskope Cloud Data Plane engineers architect and design one of the most scalable, high-performance cloud data planes in the world, processing 10+ Gbps of traffic.
What's in it for you
In this role, you will work on Deep Packet Inspection (DPI) of CASB Inline traffic. You will build core functionality to intercept and inspect CASB Inline traffic, including Generative AI applications, in the data path, invoking essential services like DLP (Data Loss Prevention) and Threat Protection (TSS) and enforcing CASB Inline Real-Time Policies (RTP). You will be instrumental in developing state-of-the-art techniques, including AI/ML, to detect activities and apply advanced policies, all at line rate. This is a high-impact position for a technical leader who excels at solving challenging problems and mentoring a world-class engineering team. If you enjoy diving deep into technical challenges to develop innovative solutions that are scalable, accurate, and high-performing, then this role is for you.
Job Responsibilities
- Understand the various use cases and workflows for native/browser access of SaaS apps and support the app-access requirements/use cases via the Netskope reverse proxy solution; maintain and enhance the access-control features for the supported SaaS apps.
- Work on re-architecting the deep packet inspection module to make it intelligent and scalable, with the goal of achieving higher accuracy in activity detection across a wide range of SaaS applications.
- Identify a smart, scalable solution to reduce the cost of building and maintaining SaaS app connectors, which are responsible for providing deeper visibility into application activities.
- Work closely with the product management team on new app support and on defining new access-control use cases.
- Be involved in the complete development life cycle: understand the various requirements, understand/define functional specs, develop with high efficacy and quality, and measure efficacy based on production data.
- Identify gaps in existing solutions/processes and bring in innovative ideas that help evolve the solution over time.
- Work closely with the technical support team to handle customer escalations; analyze the product gaps that resulted in customer issues and improve signature resiliency and test strategy.
Preferred Qualification
- Bachelor's or Master's degree in Computer Science, Engineering, or equivalent strongly preferred.
- Minimum 15 years of work experience.
Preferred Technical Skills (must-have)
- Programming mastery: expert proficiency in C/C++ and strong experience with Python.
- Networking protocol expertise: deep understanding of networking protocols, including TCP/IP, HTTP/S, WebSocket, DNS, and TLS/SSL decryption (MITM) techniques; knowledge of L3 VPNs like IPsec and WireGuard.
- Security domain experience (L7 & network): proven experience in data plane/data path development for security products (e.g., firewalls, proxies, IDPS, DPI engines); experience in network and web security technologies, including Web Application Firewall (WAF), L7 access policies, web security, IDP/IPS, DNS-based security, and L7 DDoS. Must have: experience with HTTP proxy development.
- System architecture: strong understanding of computer-architecture concepts like multi-threading, CPU scheduling, and memory management; good understanding of algorithms and data structures for implementing real-time inline data processing; solid hands-on knowledge of Linux at a systems level.
- Troubleshooting & debugging: strong analytical and troubleshooting skills using debuggers like gdb and tools like Valgrind; hands-on experience with packet-capture technologies (e.g., tcpdump, Wireshark, libpcap) for network traffic analysis and troubleshooting.
- Cloud & containerization: strong knowledge of cloud solution architectures (AWS, Azure, GCP); direct experience with container orchestration (Kubernetes) and Container Network Interface (CNI) plugins; familiarity with inter-service communication protocols in cloud environments (e.g., gRPC, REST).
- Experience in a CASB, ZTNA, or SSE security environment; contributions to open-source projects.
Additional Technical Skills
- SASE architecture: experience working within a SASE (Secure Access Service Edge) architecture is a major plus.
- Authentication & access control: strong knowledge of authentication technologies, including Identity and Access Management, SSO, SAML, OpenID, OAuth2, and MFA.
- Generative AI (GenAI) platforms: familiarity with GenAI platforms and APIs and their communication patterns (e.g., OpenAI, Anthropic, Gemini).
- DPDK and VPP architecture knowledge is a plus.
- Testing methodologies: a proponent of Test-Driven Development (TDD) and knowledge of various unit-testing frameworks.
- Advanced content analysis: experience with advanced content analysis or true file-type detection.
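DPI of the kind this role describes ultimately comes down to parsing protocol bytes in the data path and applying a policy at line rate. As a toy illustration of the idea (not Netskope's implementation; real engines run in C/C++ on decrypted TLS streams, and the `BLOCKED_HOSTS` policy table here is purely hypothetical), here is a pure-Python parser that classifies an intercepted HTTP request:

```python
# Toy DPI sketch: parse method, path, and Host from a raw HTTP/1.1 request,
# then apply a hypothetical inline real-time policy to the result.

BLOCKED_HOSTS = {"files.example.com"}   # hypothetical policy table

def inspect_http(raw: bytes):
    """Extract method, path, and Host header from raw request bytes."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    request_line, *header_lines = head.split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, headers.get("host", "")

def apply_policy(raw: bytes) -> str:
    """Block uploads (POST/PUT) to hosts on the deny list; allow the rest."""
    method, path, host = inspect_http(raw)
    if host in BLOCKED_HOSTS and method in {"POST", "PUT"}:
        return "block"          # e.g. stop an upload to a risky app
    return "allow"

req = (b"POST /upload HTTP/1.1\r\n"
       b"Host: files.example.com\r\n"
       b"Content-Length: 4\r\n\r\ndata")
print(apply_policy(req))   # -> block
```

A production engine layers activity detection (upload, download, share, prompt-to-a-GenAI-app) on top of this kind of parse, which is exactly where the AI/ML techniques the posting mentions come in.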
How to apply for this opportunity (easy 3-step process):
1. Click Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal besides this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Where Data Does More. Join the Snowflake team. Our Solutions Engineering organization is seeking a Data Platform Architect to join our Field CTO team who can provide leadership in working with both technical and business executives in the design and architecture of the Snowflake Cloud Data Platform as a critical component of their enterprise data architecture and overall ecosystem. In this role you will work directly with the sales team and channel partners to understand the needs of our customers, strategize on how to navigate winning sales cycles, provide compelling value-based demonstrations, support enterprise Proof of Concepts, and ultimately close business. You will leverage your expertise, best practices and reference architectures highlighting Snowflake’s Cloud Data Platform capabilities across Data Warehouse, Data Lake, and Data Engineering workloads. You are equally comfortable in both a business and technical context, interacting with executives and talking shop with technical audiences. 
IN THIS ROLE YOU WILL GET TO:
- Apply your multi-cloud data architecture expertise while presenting Snowflake technology and vision to executives and technical contributors at strategic prospects, customers, and partners
- Work hands-on with prospects and customers to demonstrate and communicate the value of Snowflake technology throughout the sales cycle, from demo to proof of concept to design and implementation
- Immerse yourself in the ever-evolving industry, maintaining a deep understanding of competitive and complementary technologies and vendors and how to position Snowflake in relation to them
- Collaborate with Product Management, Engineering, and Marketing to continuously improve Snowflake's products and marketing
- Help scope security-related feature enhancements on behalf of customers and work with Product Management on ways to improve the product from a data engineering perspective
- Help clarify concepts of data engineering, data warehousing, data collaboration, AI, and machine learning
- Share industry best practices and features of the Snowflake platform; assist in building demos and prototypes using Snowflake in conjunction with other integrated solutions; engage with other relevant Snowflake stakeholders internally
- Collaborate with cross-functional teams, including Sales, Product, Engineering, Marketing, and Support, to drive customer security feature adoption
- Partner with our Product Marketing team to define and support Snowflake's AI and Data Cloud and workload awareness and pipeline building via marketing initiatives, including conferences, trade shows, and events
- Work and collaborate with other Field CTO members in areas such as Enterprise AI, Gen AI, Data Engineering, Security, and Applications; as a Data Engineering expert, you may be involved in collaborative engagements
- Partner with a best-in-the-industry product team, helping to shape the vision and capabilities of the Snowflake Data Marketplace
ON DAY ONE, WE WILL EXPECT YOU TO HAVE:
- 5+ years of architecture and data engineering experience within the Enterprise Data space
- 5+ years of experience within a pre-sales environment
- Outstanding presentation skills for both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
- A broad range of experience with large-scale database and/or data warehouse technology, ETL, analytics, and cloud technologies (for example, Data Lake, Data Mesh, Data Fabric)
- Hands-on development experience with technologies such as SQL, Python, Pandas, Spark, PySpark, Hadoop, Hive, and other big data technologies
- Knowledge and hands-on experience with big data processing tools (NiFi, Spark, etc.) and common optimization techniques
- Knowledge and hands-on experience with CI/CD for data/ML pipelines
- Knowledge and design expertise in stream-processing pipelines
- Knowledge and hands-on experience with containerization and orchestration, including Kubernetes, EKS, AKS, Terraform, and equivalent technologies
- Knowledge and hands-on experience with cloud platforms and services (AWS, Azure, GCP)
- The ability to connect a customer's specific business problems with Snowflake's solutions
- The ability to do deep discovery of a customer's architecture framework and connect it with the Snowflake data architecture
- Prior knowledge of data engineering tools for ingestion, transformation, and curation
- Familiarity with real-time or near-real-time use cases (e.g., CDC) and technologies (e.g., Kafka, Flink), and a deep understanding of integration services and tools for building ETL and ELT data pipelines and their orchestration technologies, such as Matillion, Fivetran, Airflow, Informatica, Azure Data Factory, etc.
- Strong architectural expertise in data engineering to confidently present and demo to business executives and technical audiences, and effectively handle impromptu questions
- A Bachelor's degree (required); a Master's degree in computer science, engineering, mathematics, or related fields, or equivalent experience, is preferred
BONUS POINTS FOR THE FOLLOWING:
- Hands-on expertise with SQL, Python, Java, Scala, and APIs
- AI/ML skills and hands-on experience with model building and training, and MLOps
- Experience selling enterprise SaaS software
Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
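The CDC pattern called out above (change events streamed from a source, e.g. via Kafka, then merged into a warehouse table) reduces to replaying an ordered change log onto a keyed target. A minimal pure-Python sketch of that merge logic follows; the event schema and table layout are illustrative, and in Snowflake itself this would be a MERGE over a stream rather than a dict.

```python
# Toy CDC (change data capture) merge: replay an ordered stream of
# insert/update/delete events onto a target table keyed by primary key.
# Later events win; deletes remove rows; updates merge into existing rows.

def apply_cdc(table, events):
    """Apply change events in order to a {pk: row} table."""
    for ev in events:
        key = ev["pk"]
        if ev["op"] == "delete":
            table.pop(key, None)
        else:                       # "insert" and "update" both upsert
            table[key] = {**table.get(key, {}), **ev["row"]}
    return table

customers = {1: {"name": "Ada", "city": "Pune"}}
stream = [
    {"op": "insert", "pk": 2, "row": {"name": "Raj", "city": "Mumbai"}},
    {"op": "update", "pk": 1, "row": {"city": "Chennai"}},
    {"op": "delete", "pk": 2, "row": None},
]
apply_cdc(customers, stream)
print(customers)   # -> {1: {'name': 'Ada', 'city': 'Chennai'}}
```

Note that event order is what makes the merge correct, which is why the orchestration tools listed above (Airflow, Fivetran, etc.) care so much about exactly-once, in-order delivery.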
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.
Job Title: Senior Software Engineer
Location: Remote
Function/Department: Platform Engineering
Position Summary
We are looking for a talented and motivated Software Engineer with 4-6 years of experience in the MERN stack (MongoDB, Express.js, React.js, Node.js) to join our team. The successful candidate will contribute to the development and maintenance of applications, ensuring their performance and quality.
Responsibilities
- Develop and maintain software applications using the MERN stack.
- Collaborate with cross-functional teams to define, design, and develop new features.
- Troubleshoot, debug, and optimize existing applications for performance and scalability.
- Participate in the full software development lifecycle, including requirement gathering, design, development, testing, and deployment.
- Learn, acquire, and get certified on new technologies as demanded by projects.
- Any exposure to and experience in AI, ML, and Data Science will be an added advantage.
Required Skills
- Proficiency in the development, maintenance, and support of software applications, with proven experience in Node.js and React.js (knowledge of Angular is an additional advantage)
- RDBMS (Oracle/SQL Server/Postgres) and SQL
- NoSQL databases like MongoDB
- Experience with front-end technologies such as HTML, CSS, and JavaScript
- Experience with open-source technologies
- Preferable: knowledge of database management tools like Liquibase/Hibernate
- Well-versed in DevOps (e.g., Azure DevOps) and cloud PaaS
- Familiarity with Docker and Git
- Good understanding of coding standards and the ability to debug
- Good experience with the software development lifecycle and processes
- Strong problem-solving skills
- Excellent written and oral communication skills
- Ability to work effectively in a cross-functional team
Added advantage:
- Exposure to AI/ML frameworks such as Keras and PyTorch, and libraries such as scikit-learn
- Knowledge and practical application of statistical analysis and mathematical modeling concepts and principles
POSITION SPECIFICATIONS
Bachelor's degree in Computer Science, Information Technology, or Electronics and Communication. Candidates from other branches of engineering with proven skills through past experience/projects can also apply.
⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
Posted 1 day ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Amazon Global Selling is a business that enables sellers from across the world to export and sell their products to customers shopping on Amazon.com in the USA, and we are looking for an SE to join us and help deliver strategic goals for Amazon e-commerce systems. The growth of this business depends directly on the ability of sellers across the world to register with Amazon Global Selling and conduct their business in a friction-free manner. A core part of the growth of the Global Selling business is building solutions for sellers from "Emerging Marketplaces" (countries that are relatively newer to the e-commerce domain compared with the USA and Europe, such as India, Vietnam, Thailand, Australia, Singapore, etc.) to register for and grow within the Global Selling business. To drive the growth of this business, the Global Selling Foundations Technology team has a mission to "offer a Global Selling experience that enables new and existing Global Selling Emerging Marketplaces (GSE) sellers to easily start, manage, and grow their business globally". The team comprises three sub-teams, the Exports Compliance Technology team, the Exports Seller Growth Technology team, and the Exports Seller Fulfilment team, which support multiple aspects of Amazon Global Selling's core business, such as registration, payments, compliance, business growth, and cross-border fulfilment, for both desktop and mobile users. The team also builds ML, AI, and LLM solutions for sellers across different Emerging Marketplaces. So join us to build the next-gen platforms that will enable millions of sellers across the world to sell globally.
Key job responsibilities
- Write code to accomplish small tasks or address small-scale business problems.
- Understand and debug minor issues occurring in a service when a trouble ticket (TT) is filed or a PE is raised.
- Handle production deployment and sanity checks.
- Write scripts to automate fixes for commonly occurring issues.
- Understand and work with AWS services like Lambda, SQS, and DDB.
- Automate select systems-administration tasks through the creation and maintenance of scripts and tools.
Basic Qualifications
- 1+ years of software development or 1+ years of technical support experience
- Experience troubleshooting and debugging technical systems
- Experience scripting in modern programming languages
- Bachelor's degree
Preferred Qualifications
- Knowledge of computer science fundamentals such as object-oriented design, operating systems, algorithms, data structures, and complexity analysis
- Knowledge of programming languages such as C/C++, Python, Java, or Perl
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka
Job ID: A3033813
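The Lambda-plus-SQS work described above typically takes the shape of a handler that drains a batch of queue messages and routes each one. A sketch follows: the event shape matches the standard SQS-to-Lambda integration (a `Records` list with `body` and `messageId` fields), while the ticket-routing logic and queue contents are purely illustrative.

```python
# Sketch of a Lambda handler for an SQS batch event: parse each message
# body and decide on a (hypothetical) auto-remediation action.
import json

def route(ticket):
    """Hypothetical triage rule for an incoming ticket payload."""
    if ticket.get("severity", 5) <= 2:
        return "page-oncall"
    return "auto-retry" if ticket.get("retryable") else "backlog"

def handler(event, context=None):
    """Entry point Lambda invokes with a batch of SQS records."""
    results = []
    for record in event["Records"]:
        ticket = json.loads(record["body"])
        results.append({"messageId": record["messageId"],
                        "action": route(ticket)})
    return results

# Simulated SQS event, as the Lambda runtime would deliver it.
event = {"Records": [
    {"messageId": "m1", "body": json.dumps({"severity": 1})},
    {"messageId": "m2", "body": json.dumps({"severity": 4, "retryable": True})},
]}
print(handler(event))
```

Because the handler is a plain function taking a dict, the routing logic can be unit-tested locally without any AWS resources, which suits the "write scripts for automations" part of the role.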
Posted 1 day ago
4.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description
We are looking for an expert in iOS application development with a solid foundation in enterprise and commercial applications, interested in building highly performant mobile apps with Swift. Your primary focus will be on leading a team, proposing reference architectures, creating estimates, and giving input to client proposals. You will also participate in Design Thinking workshops, provide input from a mobile-application perspective, and lead the development of user interfaces and reusable components. You will ensure that these components and the overall application are robust and easy to maintain. You will coordinate with the rest of the team working on different layers of the infrastructure. Therefore, a commitment to collaborative problem solving, sophisticated expandable design, and a quality product is important.
Responsibilities Developing new user interface for iOS through Storyboarding, Swift UI or coding Networking Libraries and integration with third-party frameworks Building reusable components and libraries for future use Translating designs and wireframes into high quality code Ability to optimise the code through the use of instruments, or various techniques of memory profiling Guide the team to follow best industry practices to deliver clean code keeping performance in check Foster teamwork and lead by example Participating in the organization-wide people initiatives Mentoring junior team members and campus freshers People and Stakeholder management by close interaction with client and internal stakeholders Experience 4+ years’ experience in iOS native application development with SwiftUI, Swift and Objective-C Excellent UI/UX and architecture skills Experience in unit testing and ensuring the developed code passes the quality gate from Sonar Experience in identifying code quality issues during code reviews JSON, REST and Web Services, low energy peripheral devices integration Experience in setting up continuous integration processes and automated unit/UI testing Jira, git or other tools Must Have Skills In depth knowledge in latest stable Swift (5+) and Objective C Expertise in iPhone SDK, Cocoa touch frameworks UIKit, foundation, core data, push notification, AVFoundation, Core location, ARKit, Health App integration and APIs Ability to develop a code that meets Americans with Disabilities Act regulatory requirements. 
Ability to perform concurrency and performance testing Ability to organize large-scale front-end mobile application codebases using common mobile design patterns such as MVVM, Clean Swift, MVC or VIPER Must have developed apps using Swift and Objective-C interoperability In-depth understanding of adaptive layouts - iOS storyboards, Auto Layout, size classes Understanding of interactive application development paradigms, GUI, memory management, file I/O, network & socket programming, concurrency and multi-threading Develop cutting-edge functional modules that will be integrated across our iOS application Experience in code versioning tools such as Git or SVN Understanding and implementation of SOLID principles in an iOS application Stay abreast of the latest iOS platform features and propose evolution of the application to take advantage of them Experience in SwiftUI and Apple iOS class libraries Experience with two-way data synchronisation between client and server databases for applications that support offline capability Unit-test code for robustness, including edge cases, usability, and general reliability Continuously discover, evaluate, and implement new technologies to maximize development efficiency Experience in implementing security policies Experience in automation, CI/CD and unit testing frameworks Ability to analyse crash logs and provide fixes Ability to write code that passes multiple quality gates from Fortify, MobSF, Sonar, etc. Good knowledge of fixing quality issues from Fortify and fixing issues found in penetration testing Nice To Have Skills AWS/Azure or any cloud exposure SSO, LDAP, OAuth, SSL integration, Alamofire and StoreKit framework exposure Experience in emerging technologies such as IoT, AI/ML etc. 
Awareness of enterprise Mobile Application Management (MAM)/Mobile Device Management (MDM) frameworks such as Microsoft Intune and Citrix Endpoint Management will be a plus More advanced data handlers such as WebSockets and offline mobile applications Awareness of enterprise mobile applications and data protection policies and methods would be a plus EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 day ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Principal Engineer Experience: 15 - 22 Years Exp Salary: Competitive Preferred Notice Period: 60 Days Opportunity Type: Bengaluru (Hybrid) Placement Type: Full-time (*Note: This is a requirement for one of Uplers' Clients) Must have required skills: C OR C++, TCP/IP OR SSL OR TLS OR Deep packet inspection OR HTTP OR HTTPS OR Web Application Firewall OR WAF OR IPS/IDP OR IDP/IPS, AWS OR Azure OR Google Cloud OR Kubernetes One of Uplers' clients is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Job Overview As part of the Inline CASB team, you will have a unique opportunity to work on a world-class CASB solution that provides unparalleled visibility and control for widely used enterprise applications. Netskope Cloud Data Plane engineers architect and design one of the most scalable, high-performance cloud data planes in the world, processing 10+ Gbps of traffic. What’s in it for you In this role, you will be working on Deep Packet Inspection (DPI) of CASB Inline traffic. You will build core functionality to intercept and inspect CASB Inline traffic, which includes Generative AI applications in the data path, invoking essential services like DLP (Data Loss Prevention) and Threat Protection (TSS) and enforcing CASB Inline Real-Time Policies (RTP). You will be instrumental in developing state-of-the-art techniques, including AI/ML, to detect activities and apply advanced policies, all at line rate. This is a high-impact position for a technical leader who excels at solving challenging problems and mentoring a world-class engineering team. If you enjoy diving deep into technical challenges to develop innovative solutions that are scalable, accurate, and high-performing, then this role is for you. 
Job Responsibilities Understand the various use cases and workflows for native/browser access of SaaS apps and support the app access requirements/use cases via the Netskope reverse proxy solution. Also maintain & enhance the access control features for the supported SaaS apps. Work on re-architecting the deep packet inspection module to make it intelligent and scalable, with the goal of achieving higher accuracy in activity detection across a wide range of SaaS applications. Work on identifying a smart, scalable solution to reduce the cost of building and maintaining SaaS app connectors, which are responsible for providing deeper visibility into application activities. Work closely with the product management team on new app support & to define new access control use cases. Participate in the complete development life cycle: understand various requirements, define functional specs, develop with high efficacy/quality & measure the efficacy based on production data. Identify gaps in existing solutions/processes and bring in innovative ideas that help evolve the solution over time. Work closely with the technical support team to handle customer escalations. Analyze the product gaps that resulted in customer issues and improve the signature resiliency and test strategy. Preferred Qualification Bachelor's or Master's degree in Computer Science, Engineering or equivalent strongly preferred. Minimum 15 years of work experience. Preferred Technical Skills (must-have) Programming Mastery: Expert proficiency in C/C++ and strong experience with Python. Networking Protocol Expertise: Deep understanding of networking protocols, including TCP/IP, HTTP/S, WebSocket, DNS, and TLS/SSL decryption (MITM) techniques. Knowledge of L3 VPNs like IPsec and WireGuard. Security Domain Experience (L7 & Network): Proven experience in data plane/data path development for security products (e.g., Firewalls, Proxies, IDPS, DPI engines). 
Experience in network and web security technologies, including Web Application Firewall (WAF), L7 Access-Policies, Web Security, IDP/IPS, DNS-based security, and L7 DDoS. Must Have: Experience with HTTP proxy development. System Architecture: Strong understanding of computer architecture concepts like multi-threading, CPU scheduling, and memory management. Good understanding of algorithms and data structures for implementing real-time inline data processing. Good hands-on experience and knowledge of Linux at a systems level. Troubleshooting & Debugging: Strong analytical and troubleshooting skills using debuggers like gdb and tools like Valgrind. Hands-on experience with packet capture technologies (e.g., tcpdump, Wireshark, libpcap) for network traffic analysis and troubleshooting. Cloud & Containerization: Strong knowledge of cloud solution architectures (AWS, Azure, GCP). Direct experience with container orchestration (Kubernetes) and Container Network Interface (CNI) plugins. Familiarity with inter-service communication protocols in cloud environments (e.g., gRPC, REST). Experience in a CASB, ZTNA, or SSE security environment. Contributions to open-source projects. Additional Technical Skills SASE Architecture: Experience working within a SASE (Secure Access Service Edge) architecture is a major plus. Authentication & Access Control: Strong knowledge of authentication technologies, including Identity and Access Management, SSO, SAML, OpenID, OAuth2, and MFA. Generative AI (GenAI) Platforms: Familiarity with GenAI platforms and APIs and their communication patterns (e.g., OpenAI, Anthropic, Gemini). DPDK and VPP architecture knowledge is a plus. Testing Methodologies: A proponent of Test-Driven Development (TDD) and knowledge of various unit testing frameworks. Advanced Content Analysis: Experience with advanced content analysis or true file type detection. Inter-Service Communication: Familiarity with modern cloud protocols like gRPC and REST. 
Security Domain Experience: Experience in a CASB, ZTNA, or SSE security environment. Open-Source Contributions: A history of contributions to open-source projects How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
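To make the deep packet inspection requirement above concrete, here is a toy sketch, in Python rather than the C/C++ a real line-rate data plane would use, of the kind of L7 inspection an inline proxy performs: parsing the request line and Host header out of a raw HTTP request so a policy engine can act on them. The request bytes and function name are illustrative, not part of any Netskope product.

```python
# Toy L7 inspector: classify a raw HTTP/1.1 request by method, path and Host.
# Real DPI engines operate on TLS-decrypted byte streams at line rate;
# this only illustrates the parsing step on a hand-built request.

def inspect_http_request(raw: bytes) -> dict:
    """Extract the request line and Host header from a raw HTTP/1.1 request."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    lines = head.split("\r\n")
    method, path, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return {"method": method, "path": path, "host": headers.get("host")}

request = b"GET /v1/chat HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: demo\r\n\r\n"
info = inspect_http_request(request)
```

An inline policy (e.g. "block uploads to unsanctioned GenAI apps") would key off fields like `info["host"]` before forwarding or dropping the flow.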
Posted 1 day ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and experienced Senior Software Engineer to join our team. As a Senior Software Engineer, you will be responsible for developing and maintaining software solutions for our clients. You should have a solid understanding of and working experience with the Azure cloud platform. This includes a deep understanding of Azure fundamentals such as Azure ML, Azure Functions, Blob Storage, ML Studio, AI Foundry, Azure Pipelines, and AKS. You should also have experience with Python environment setup, dependency management (e.g. pip, conda), and API integrations (API keys, OAuth). Additionally, exposure to NLP, machine learning, or data science projects is highly preferred. You should also have an awareness of prompt engineering principles and how LLMs (like GPT, Claude, or LLaMA) are used in real-world applications. A strong understanding of transformer architecture and how it powers modern NLP is also required.
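As a minimal sketch of the "API integrations (API keys, OAuth)" requirement above: attaching an OAuth-style bearer token to an HTTP request using only the Python standard library. The URL and token are placeholders, not a real service.

```python
# Build a GET request carrying a bearer token, using only the stdlib.
# "https://example.com/api/v1/models" and "demo-token" are placeholders.
import urllib.request

def build_authorized_request(url: str, token: str) -> urllib.request.Request:
    """Return a GET request with Authorization and Accept headers attached."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_authorized_request("https://example.com/api/v1/models", "demo-token")
```

In practice the request would be sent with `urllib.request.urlopen(req)` (or a library such as `requests`), and the token would come from a secret store rather than a literal.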
Posted 1 day ago
0.0 - 4.0 years
1 - 5 Lacs
Maharashtra
Work from Office
Responsibilities: Manipulate and preprocess structured and unstructured data to prepare datasets for analysis and model training. Utilize Python libraries like PyTorch, Pandas, and NumPy for data analysis, model development, and implementation. Fine-tune large language models (LLMs) to meet specific use cases and enterprise requirements. Collaborate with cross-functional teams to experiment with AI/ML models and iterate quickly on prototypes. Optimize workflows to ensure fast experimentation and deployment of models to production environments. Implement containerization and basic Docker workflows to streamline deployment processes. Write clean, efficient, and production-ready Python code for scalable AI solutions. Good to Have: Exposure to cloud platforms like AWS, Azure, or GCP. Knowledge of MLOps principles and tools. Basic understanding of enterprise Knowledge Management Systems. Ability to work against tight deadlines. Ability to work on unstructured projects independently. Strong initiative and self-motivation. Strong communication & collaboration acumen. Required Skills: Proficiency in Python with strong skills in libraries like PyTorch, Pandas, and NumPy. Experience in handling both structured and unstructured datasets. Familiarity with fine-tuning LLMs and understanding of modern NLP techniques. Basics of Docker and containerization principles. Demonstrated ability to experiment, iterate, and deploy code rapidly in a production setting. Strong problem-solving mindset with attention to detail. Ability to learn and adapt quickly in a fast-paced, dynamic environment. What we Offer: Opportunity to work on cutting-edge AI technologies and impactful projects. A collaborative and growth-oriented work environment. Competitive compensation and benefits package. A chance to be a part of a team shaping the future of enterprise intelligence.
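A hedged sketch of the preprocessing step described above: standardizing a numeric feature to zero mean and unit variance before model training. This uses only the standard library for illustration; in the role itself the same step would typically be a one-liner with NumPy or Pandas (e.g. `(col - col.mean()) / col.std()`).

```python
# Rescale a numeric column to zero mean and unit (population) variance,
# a common preprocessing step before model training. Stdlib only.
from statistics import mean, pstdev

def standardize(values):
    """Return values shifted and scaled to mean 0 and population std dev 1."""
    mu = mean(values)
    sigma = pstdev(values) or 1.0  # guard against a constant column
    return [(v - mu) / sigma for v in values]

scaled = standardize([10.0, 20.0, 30.0])
```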
Posted 1 day ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role We’re looking for a Senior AI Engineer to bring our AI research to life. You’ll be the bridge between paper and product—scaling prototypes into production systems, integrating models into real-world workflows, and powering domain intelligence for use cases like personalized business insights over structured enterprise data. You’ll work closely with researchers, designers, and product teams to ship features that are fast, reliable, and deeply useful. ⸻ Key Responsibilities • Turn research prototypes into robust, scalable APIs and services. • Build end-to-end AI features that simplify UX and maximize utility. • Collaborate with research and frontend teams to deliver seamless AI-powered workflows. • Own engineering quality: write clean, testable code, build CI/CD pipelines, and implement monitoring and observability. • Contribute to applied research modules—LLM customization, reliability upgrades, and GenAI system integrations. • Mentor junior engineers, share best practices, and help build a culture of engineering excellence. ⸻ Qualifications • 3+ years of experience building and shipping AI systems in production. • Strong fundamentals in data structures, system design, and clean code practices. • Hands-on experience with: • Model serving and inference (e.g., Triton, vLLM, FastAPI) • Retrieval systems (vector DBs, key-value stores) • Relational databases and structured data integration • Logging, monitoring, and debugging for ML pipelines • Familiar with GenAI building blocks: prompt engineering, model APIs, agent workflows. • Skilled with dev tooling (Docker, GitHub Actions, etc.) and cloud platforms (AWS/GCP). • You’ve either been at a Tier 1 college—or built something others now rely on. • [Bonus]: DM Ayush with the keyword “gen-engine” to prove you’re detail-oriented. ⸻ Why Join Genloop? Genloop is a research-first AI company building the next generation of customized, continuously learning AI systems. 
We specialize in adapting open-weight LLMs to enterprise domains—embedding them with domain memory and self-improving intelligence through feedback loops. Our team includes talent from Stanford, Apple, IITs, and top-tier AI firms—united by a shared goal of building production-grade, high-impact AI. ⸻ Compensation & Benefits We offer competitive salaries, meaningful equity, and benefits designed to support great work. Compensation is based on experience, expertise, and location. ⸻ Diversity & Inclusion Genloop is an Equal Opportunity Employer. We’re committed to building a diverse and inclusive workplace where everyone has a voice—and where great ideas can come from anywhere.
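One of the retrieval building blocks mentioned above can be sketched as a toy in-memory vector store with cosine-similarity search. A production system would use a vector database and learned embeddings; the three-dimensional vectors and document names here are made up for the example.

```python
# Toy vector retrieval: find the stored document whose vector is most
# similar (by cosine similarity) to a query vector. Stdlib only;
# the store contents are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_match(query, store):
    """Return the key of the stored vector most similar to the query."""
    return max(store, key=lambda k: cosine(query, store[k]))

store = {
    "pricing_doc": [0.9, 0.1, 0.0],
    "hr_policy":   [0.0, 0.2, 0.9],
}
best = top_match([1.0, 0.0, 0.1], store)
```

The same interface (embed a query, rank stored vectors by similarity) is what a vector DB provides at scale, with approximate nearest-neighbor indexing replacing the linear scan.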
Posted 1 day ago