5.0 years
0 Lacs
Delhi, India
On-site
🚀 We’re Hiring: Data Network Engineer
Location: Bangalore
Company: KerryMe India
Experience: 5+ Years
Employment Type: Full-Time

Are you passionate about building and managing reliable, secure, and high-performance data networks? We’re looking for a Data Network Engineer to join our growing tech team!

🔧 What You’ll Do:
- Design, implement, and manage enterprise data networks (LAN, WAN, VPN)
- Monitor network performance and troubleshoot issues
- Ensure network security and compliance standards
- Collaborate with cross-functional teams to support business infrastructure
- Optimize routing protocols, firewalls, and switching technologies

✅ Requirements:
- Bachelor’s in Computer Science, IT, or a related field
- Strong experience with Cisco/Juniper/Aruba networks
- Hands-on with routing protocols (BGP, OSPF, EIGRP)
- Network certification preferred (CCNA/CCNP)
- Familiarity with cloud networking (AWS/Azure/GCP) is a plus

🌟 Why Join Us?
- Work with a passionate and innovative team
- Competitive salary and benefits
- Opportunities for growth and certifications
- Cutting-edge technology projects

📩 Apply now or tag someone who would be a great fit! Send your resume to: ishan@kerryme.in

#Hiring #NetworkEngineer #ITJobs #DataNetworkEngineer #LinkedInJobs #KerryMeIndia
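The LAN/WAN design work described above leans heavily on subnet planning. As an illustration only (not part of the posting), Python's standard `ipaddress` module can sketch how an enterprise address block is carved into per-site subnets; the 10.20.0.0/16 block here is an invented example:

```python
import ipaddress

def plan_subnets(block: str, new_prefix: int):
    """Split an enterprise address block into equal-sized site subnets."""
    net = ipaddress.ip_network(block)
    return list(net.subnets(new_prefix=new_prefix))

# Carve a /16 campus block into /24 site subnets.
sites = plan_subnets("10.20.0.0/16", 24)
print(len(sites))              # 256 subnets
print(sites[0])                # 10.20.0.0/24
print(sites[0].num_addresses)  # 256 addresses per site
```

The same module handles supernetting, overlap checks, and host iteration, which covers most day-to-day addressing arithmetic before touching router configuration.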
Posted 2 days ago
4.0 years
0 Lacs
Mohali district, India
On-site
Job Summary:
We are seeking a skilled and motivated Java Developer with strong expertise in Spring Boot and prior experience in the banking or financial services domain. The ideal candidate will be responsible for designing, developing, and maintaining backend services for enterprise applications. This is a full-time, on-site role based in Mohali, with occasional travel based on project/client needs.

Key Responsibilities:
- Develop, test, and maintain backend services and APIs using Java and Spring Boot
- Collaborate with front-end developers, QA engineers, and product teams to deliver high-quality solutions
- Ensure best practices in code quality, security, and performance
- Work closely with clients to understand business requirements and implement them effectively
- Troubleshoot and resolve technical issues during development and deployment
- Participate in code reviews, daily stand-ups, and project planning meetings
- Travel to client locations if required, based on project needs

Required Skills:
- 4+ years of hands-on experience in Java development
- Proficiency in Spring Boot and related technologies (Spring MVC, JPA, etc.)
- Strong experience working in the banking or financial domain (mandatory)
- Good understanding of RESTful APIs, microservices architecture, and database systems
- Familiarity with tools like Git, Maven, Jenkins, JIRA, etc.
- Strong problem-solving, debugging, and communication skills

Preferred Qualifications:
- Experience working in Agile/Scrum teams
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus
- Bachelor's degree in Computer Science, Information Technology, or a related field
Posted 2 days ago
0 years
3 - 6 Lacs
Gurgaon
On-site
Role: Cloud Database Engineer IV
Skills: MS SQL + Cloud + any secondary DB
Shift: Must be comfortable with rotational 24x7 shifts
Experience: At least 12 years of relevant experience

Job Profile Summary
The Cloud Database Engineer performs database engineering and administration activities, including design, planning, configuration, monitoring, automation, self-serviceability, alerting, space management, and database backup and recovery. Plans computerized databases, including base definition, structure, documentation, long-range requirements, operational guidelines, and protection, with a capacity to lead and advise on migration and modernization, and to discover and execute workload migrations to the cloud (AWS/Azure/GCP).

Key Responsibilities
- Create, maintain, and use Standard Operating Procedures (SOPs) for migration execution, and ensure long-term technical viability and optimization of production deployments and administration.
- Engage, consult, and deliver based on interactive customer communications, streamlining project deliverables and scope of work.
- Capacity Planning: Forecast future database growth based on usage trends and plan hardware and storage requirements accordingly to ensure scalability and optimal performance.
- Plan, create, manage, and deploy effective High Availability and Disaster Recovery strategies/runbooks.
- Patch Management and Upgrades: Plan and execute database software upgrades, patches, and service packs.
- Troubleshooting and Issue Resolution: Investigate and resolve complex database-related issues, including data corruption, performance problems, and connectivity challenges.
- Automation and Scripting: Contribute to automation scripts and tools to streamline repetitive tasks, improve efficiency, and reduce the risk of human error.
- Monitoring and Alerting: Set up monitoring and alerting systems to proactively identify and address potential database issues before they become critical.
- Performance Analysis and Reporting: Generate performance reports and analysis for stakeholders and management to provide insights into the database environment's health and performance.
- Documentation: Maintain up-to-date documentation of database configurations, procedures, and troubleshooting steps.
- Ticket Handling: Work to resolve incidents, changes, and service requests under the agreed client SLA.
- Problem Management: Responsible for resolving problem tickets and creating detailed RCA reports.
- Participate in 24x7 production support for database operations.
- Hands-on with cloud migration tools such as AWS DMS, SMS, App Migration Service, Migration Hub, Azure Migrate, Data Migration Service, SQL Server DMA, Azure ASR, and AWS DRS.
- Migration to/from SQL Server and other RDBMS platforms for PaaS models such as AWS Aurora, AWS RDS, Azure Database, Azure MI, and GCP Cloud SQL.
- Understanding of cloud basics and duties such as security management, storage management, backup vaults, key vaults, and server/DB monitoring.
- Cost Optimization: Compute and workload analysis, license enhancements and features.

Knowledge
- Proficient in SQL Server architecture, installation and configuration, performance tuning, High Availability and Disaster Recovery (HADR), monitoring, and troubleshooting.
- Database Migrations and Upgrades: Experience planning and executing database migrations and upgrades, including version compatibility, testing, and minimizing downtime.
- Ability to deploy, manage, and troubleshoot HADR configurations in one of the following tech buckets: SQL Server (Always On, FCI, log shipping, replication); MySQL or PostgreSQL (master-slave replication, InnoDB ClusterSet).
- Homogeneous and heterogeneous migrations between various tech buckets (SQL Server and PostgreSQL or MySQL).
- SQL Server in the Cloud: Knowledge of deploying and managing SQL Server in cloud platforms such as Azure SQL Database and Amazon RDS.
- SQL Server Best Practices: Familiarity with industry best practices for SQL Server administration, including configuration settings, maintenance tasks, and disaster recovery strategies.
- Ability to communicate technical information and ideas so others will understand.
- Ability to apply varying leadership skills and traits to create solutions and results in unexpected situations.

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies, across applications, data, and security, to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year, according to Fortune, Forbes, and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers, and deliver the future.

More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
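The backup-and-recovery duties this role centers on can be prototyped without a full SQL Server instance. A minimal sketch using Python's built-in `sqlite3` module (illustrative only; the `accounts` table is an invented example) shows the online-backup pattern, where a live database is copied page by page while it remains open:

```python
import sqlite3

def backup_database(src: sqlite3.Connection, dest: sqlite3.Connection) -> int:
    """Perform an online backup and return the row count in the copy."""
    with dest:
        src.backup(dest)  # page-by-page online copy, safe while src is live
    return dest.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

# Build a toy source database, back it up, and verify the copy.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
src.commit()

dest = sqlite3.connect(":memory:")
print(backup_database(src, dest))  # 2
```

Verifying row counts (or checksums) in the copy, as done here, is the same habit a production runbook formalizes: a backup is only trusted once a restore has been validated.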
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Gurgaon
On-site
Gurgaon, Haryana, India | Job ID 770273

About this opportunity
Technical Sales Specialist, Cognitive Network Solutions (CNS), is part of the Global PreSales Center (Bid Office) organization, responsible for providing technical and domain expertise during the sales cycle for solutions related to the RAN Intelligent Controller (RIC) and rApps. You will interact closely with Sales, Product, and Engineering teams to demonstrate technical feasibility and ensure alignment of proposed solutions with customer requirements.

What you will do
- Provide quality responses to RFx/tenders within the given timeline.
- Develop solution architectures and respond to RFPs/RFIs for rApp-based offerings.
- Support pre-sales RFx deliverables, i.e. Statement of Compliance (SoC), dimensioning, HW/SW configuration in ECP/ACS, solution descriptions, and services estimates.
- Align solution and proposal strategy and provide support to CUs.
- During RFx, engage with responsible customer units to understand their business and technical needs related to RAN optimization and automation.
- Articulate the technical capabilities and business benefits of rApps.
- Create technical documentation including solution blueprints, architecture diagrams, and integration plans.
- Coordinate across SA (NM) / BA (BNEW) to tailor rApp solutions based on customer requirements.
- Collaborate with product management to influence the rApp product roadmap based on market feedback.

You will bring
- Good understanding of the sales process.
- Knowledge of SMO (Service Management and Orchestration) and Non-RT RIC.
- Familiarity with AI/ML frameworks and APIs used in rApp development.
- Understanding of containerization (Docker, Kubernetes) and CI/CD pipelines is a plus.
- Experience with OSS/BSS integration and RAN analytics.
- Deep knowledge of O-RAN specifications, RIC architecture, and the rApp ecosystem.
- Awareness of industry trends in 5G, AI/ML for RAN, and O-RAN Alliance developments.
Preferred Skills
- Interpersonal skills
- Presentation and communication skills
- Teamwork and collaboration
- Analytical thinking
- Relating and networking
- Delivering results and meeting customer expectations
- Adapting and responding to change
- Able to meet tight deadlines
- Problem solving and strategic thinking
- Ability to work independently and across cross-functional teams
- At least 6-10 years of experience in the telecom industry
- Bachelor's degree in Computer Science, Electronics Engineering, or similar
- Fluent in oral and written English

Preferred Certifications (Optional):
- O-RAN Alliance training/certification
- TM Forum or similar telecom standards certifications
- Cloud certifications (AWS/GCP/Azure) are a plus
Posted 2 days ago
7.0 years
7 - 8 Lacs
Gurgaon
On-site
Designation: Project Manager
Location: Gurugram
Job Type: Full-Time
Experience: 7+ years of technical project management, with at least 5 years in a cloud-native environment.

We are looking for someone with prior experience in software development: ideally someone who has previously developed software and is now looking to transition into a Project Manager role. This role requires a strong background in managing technical projects, especially within cloud infrastructure, DevOps, and software delivery. You'll bridge the gap between operations and client-facing teams to ensure seamless project execution and a superior customer experience.

Key Responsibilities:
- Lead the planning and execution of multiple projects simultaneously, ensuring on-time, on-budget, and high-quality delivery.
- Define project scope, objectives, timelines, resource needs, and deliverables.
- Develop detailed project plans and monitor progress using project management tools.
- Coordinate with internal teams and clients to ensure alignment and smooth execution.
- Collaborate with cross-functional teams to translate business objectives into technical requirements and actionable project plans.
- Conduct regular status meetings and provide updates to clients and leadership.
- Identify and resolve issues, risks, and dependencies throughout the project lifecycle.
- Manage change requests and control scope creep using structured change control processes.
- Identify, assess, and mitigate project risks and issues proactively.
- Ensure projects are delivered on time, within budget, and to high standards of quality and customer satisfaction.
- Work closely with DevOps, cloud architects, and product teams to support infrastructure-as-code (IaC), CI/CD pipelines, and deployment processes.
- Coordinate with third-party vendors, cloud service providers (e.g., AWS, Azure, GCP), and integration partners.
Experience in Agile project management is required.

Required Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 7+ years of technical project management experience, with at least 5 years in a cloud-native environment.
- Proven experience with cloud platforms (AWS, Azure, or Google Cloud) and cloud-based deployment models.
- Excellent communication skills with the ability to translate complex technical concepts into clear business terms.
- Strong organizational and time management skills, with the ability to handle multiple priorities.

Job Type: Full-time
Pay: ₹60,673.54 - ₹73,990.41 per month
Benefits: Paid time off, Provident Fund
Schedule: Monday to Friday
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or plan to relocate before starting work (Required)
Experience: Project management: 7 years (Required)
Location: Gurugram, Haryana (Required)
Shift availability: Day Shift (Required)
Posted 2 days ago
5.0 years
12 - 18 Lacs
Hyderābād
On-site
Job Title: Sr. Python Full Stack Developer
Location: Hyderabad
Experience: 5+ years
Mode: 6-month contract + extension

Responsibilities:
- Develop and maintain full-stack applications using Python (Django/Flask/FastAPI)
- Build responsive frontends with React/Vue.js
- Work with databases like PostgreSQL, MySQL, MongoDB
- Implement DevOps practices (Docker, Kubernetes, CI/CD)
- Collaborate with teams for feature development and issue resolution
- Write unit tests and ensure code quality

Requirements:
- 5+ years of experience in Python full stack development
- Strong in Python frameworks and JavaScript (React/Vue)
- Experience with REST APIs, databases, and DevOps
- Good understanding of version control (Git)
- Problem-solving skills and team collaboration

Desired Skills:
- Knowledge of AWS/GCP/Azure
- Experience with microservices architecture
- Familiarity with Agile methodologies

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹100,000.00 - ₹150,000.00 per month
Experience: Python: 5 years (Required)
Work Location: In person
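REST API work of the kind listed above centers on clean serialization between backend models and JSON. A minimal framework-free sketch (illustrative only; it uses the standard library rather than Django/Flask/FastAPI, and the `Task` resource is invented):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Task:
    """A toy resource as a REST API might expose it."""
    id: int
    title: str
    done: bool = False

def to_json(task: Task) -> str:
    """Serialize a Task into the JSON body a GET response would return."""
    return json.dumps(asdict(task))

def from_json(body: str) -> Task:
    """Parse a POST request body back into a Task."""
    return Task(**json.loads(body))

body = to_json(Task(id=1, title="write unit tests"))
print(body)  # {"id": 1, "title": "write unit tests", "done": false}
print(from_json(body) == Task(1, "write unit tests"))  # True
```

The round-trip check at the end is exactly the kind of unit test the posting asks for: serialize, parse back, and assert equality.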
Posted 2 days ago
0 years
5 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work?

At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Vice President, Generative AI Systems Architect

Role Overview:
We are looking for an experienced Systems Architect with extensive experience designing and scaling Generative AI systems to production. This role requires deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, along with the ability to bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines.

Key Responsibilities:
- Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools).
- Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines.
- Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability.
- Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals.
- Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions.
- Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning.
- Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control.
- Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost.
- Work closely with MLOps teams to define the CI/CD processes for model and system updates.
- Contribute to the creation of reference architectures, design templates, and reusable components.
- Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling.

Qualifications we seek in you!

Minimum Qualifications
- Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices.
- Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning.
- Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML.
- Experience with Agentic AI systems or orchestrating multiple LLM agents.
- Experience with multimodal systems (e.g., combining image, text, video, and speech models).
- Knowledge of semantic search, vector databases, and retrieval techniques in RAG.
- Familiarity with Zero Trust architecture and advanced enterprise security practices.
- Experience building developer platforms/toolkits for AI consumption.
- Contributions to open-source AI system frameworks or thought leadership in GenAI architecture.
- Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS.
- Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake).
- Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring.
- Familiarity with system design tradeoffs in latency vs. cost vs. scale for GenAI workloads.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in software/system architecture, with experience in GenAI/AI/ML.
- Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices.
- Strong interpersonal and communication skills; ability to collaborate with and present to technical and executive stakeholders.
- Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer).
- Familiarity with data governance and security best practices.

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Vice President
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Master's / Equivalent
Job Posting: Jul 21, 2025, 2:57:18 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
Posted 2 days ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

- Implement RESTful APIs as per the design specifications.
- Analyze and develop code as per specifications.
- Prepare unit test plans and conduct unit testing.
- Review design/code/unit test results and provide inputs.
- Troubleshoot issues and fix bugs.
- Support production deployment and post-development implementation activities.
- Participate in end-to-end impact analysis, create low-level designs, provide estimates for changes and new development requirements, and perform E2E testing for different projects.
- Review the deliverables and ensure the quality of the product.
- Follow the quality process and quality improvement initiatives.

Requirements

To be successful in this role you should meet the following requirements:
- Relevant work experience of 2-4 years in Java, J2EE, Spring, Spring Boot, AWS and GCP.
- Very strong working experience and expertise in Java, Spring Boot, AWS, GCP, and microservices.
- Hands-on, in-depth experience designing and implementing microservices using the Spring Boot framework.
- Hands-on experience using and developing Continuous Integration pipelines using Jenkins, Maven, GitHub, and other CI tools used in the DevOps ecosystem.
- Deep understanding of various protocols: SOAP, REST, HTTP and JSON.
- Fair knowledge and experience of DevOps and Agile disciplines.
- Knowledge of cloud technologies like PaaS and IaaS.
- Awareness of Kubernetes and Docker is appreciated.
- Awareness of information security and common security frameworks.
- Working experience in APIs and microservices.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by HSBC Software Development India
Posted 2 days ago
7.0 years
2 - 9 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role you will:
- Participate in the design and planning of projects in conjunction with the Leads, other IT teams, business users, and external partners.
- Be primarily accountable for developing the most appropriate IT solutions, in line with the solution design, to meet customer needs, ensuring continuous improvement.
- Be accountable for the operational stability, integrity, and availability of products and services supported by the pod.
- Be accountable for the performance of pod activities and maintaining performance in line with benchmark objectives (including productivity, quality, and effectiveness measures).
- Support the solution architect to ensure that solutions and services are supported by the right architectures and systems.

Requirements

To be successful in this role you should meet the following requirements:
- Graduation in technology (B.E., B.Tech and above) with 7+ years of IT experience.
- Web technologies: Java, Spring, Spring Boot, Python, RESTful services.
- Experience coordinating with third-party product vendors for product upgrades and testing of required new capabilities via proofs of concept, while adhering to HSBC policies and procedures.
- Java/J2EE, Spring Boot, microservices, REST, cloud-native application development.
- Cloud: AWS, SHP, GCP, Kubernetes, Docker, etc.
- Tools: Jira, Confluence.
- Experience in Technology Reference Model maintenance and review.
- Strong communication and presentation skills for effective collaboration with multiple stakeholders/teams across the group.
- Proactive approach to designing solutions.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by HSBC Software Development India
Posted 2 days ago
2.0 years
3 - 5 Lacs
Mohali
On-site
We are looking for a highly motivated GenAI Engineer with strong hands-on experience working with Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) workflows, and production-ready AI applications. You'll help design, build, and extend digital products and creative applications that leverage the latest in LLM technologies. You will play a lead role in product development, offering AI services to clients, client onboarding, and delivery of cutting-edge AI solutions, working with a range of modern AI tools, cloud services, and frameworks.

Experience: 2+ Years
Location: Mohali, Punjab
Work Mode: On-site
Timings: 10:00 AM – 7:00 PM (Day Shift)
Interview Mode: Face-to-Face (On-Site)
Contact: +91-9872993778 (Mon–Fri, 11 AM – 6 PM)

How to Apply: If you're passionate about GenAI and eager to work in a dynamic and innovative team, we want to hear from you! Send your updated resume to hr@webtunix.com

Key Responsibilities:
- Design and implement generative AI solutions using large language models (LLMs), natural language processing (NLP), and computer vision.
- Develop, enhance, and scale digital products leveraging LLMs at their core.
- Lead product development and operations teams to implement GenAI-based solutions.
- Design and manage client onboarding, rollout, and adoption strategies.
- Deliver and maintain enhancements based on client-specific needs.
- Build and maintain RAG pipelines and LLM-based workflows for enterprise applications.
- Manage LLMOps processes across the entire AI lifecycle (prompt design, fine-tuning, evaluation).
- Work with cloud-based GenAI platforms (primarily Azure OpenAI, but also Google, AWS, etc.).
- Implement API integrations, orchestration, and workflow automation.
- Evaluate, fine-tune, and monitor the performance of LLM outputs using observability tools.

Required Qualifications:
- Bachelor's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field, or equivalent hands-on experience.
- Minimum 2 years of hands-on experience in software development or applied machine learning.
- Proficiency in Azure AI services, including: Azure OpenAI (GPT-4, Codex, etc.), Azure Cognitive Services, and Azure Machine Learning for model development and deployment.
- Proven experience with LLM APIs (OpenAI, Azure OpenAI, Gemini, Claude, etc.).
- Solid hands-on experience building and deploying RAG pipelines.
- Proficiency in Python and strong knowledge of Python ecosystems and libraries.
- Familiarity with core GenAI frameworks: LangChain, LangGraph, LiteLLM, LlamaIndex, etc.
- Experience with vector databases: FAISS, Milvus, Azure AI Search, AWS Kendra, etc.
- Practical knowledge of embeddings, model registries (e.g., Hugging Face), and LLM APIs.
- Experience in prompt engineering, tool/function calling, and structured outputs (Pydantic/JSON Schema).
- Exposure to LLM observability tools: LangSmith, LangFuse, Opik, etc.
- Strong Git, API, and cloud platform (AWS, GCP, Azure) experience.

Job Type: Full-time
Pay: ₹25,000.00 - ₹45,000.00 per month
Schedule: Day shift, Monday to Friday
Experience: Gen AI Engineer: 2 years (Required)
Work Location: In person
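The RAG pipelines this role centers on follow one core pattern: embed the query, retrieve the nearest documents, and place them in the prompt. A dependency-free toy sketch of the retrieval step (the hand-rolled 3-dimensional vectors and the documents are invented stand-ins for a real embedding model and a vector database such as FAISS or Milvus):

```python
import math

# Toy "embeddings": in production these come from an embedding model.
DOCS = {
    "reset your password from the account settings page": [0.9, 0.1, 0.0],
    "invoices are emailed on the first of each month":    [0.1, 0.9, 0.1],
    "contact support via the in-app chat widget":         [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding close to the "password" document.
context = retrieve([0.85, 0.15, 0.05], k=1)
prompt = f"Answer using only this context: {context[0]}\n\nQuestion: How do I reset my password?"
print(context[0])  # reset your password from the account settings page
```

A vector database replaces the linear scan with an approximate nearest-neighbor index, but the interface (query vector in, top-k documents out) stays the same.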
Posted 2 days ago
0 years
2 - 4 Lacs
Mohali
On-site
Job description

About the Role:
We're looking for a skilled and motivated Python Developer to join our engineering team. You will be responsible for developing scalable backend systems, APIs, and integrations, and collaborating with cross-functional teams to deliver robust and high-performing solutions.

Experience: 1-3 years
Note: Immediate joiners and candidates located nearby are preferred.

Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Python.
- Work with frameworks like Django, Flask, or FastAPI.
- Build and integrate RESTful and/or GraphQL APIs.
- Write clean, scalable, and well-documented code.
- Collaborate with frontend developers, DevOps, and product teams.
- Implement best practices in software development and testing.

Requirements:
- Strong proficiency in Python and understanding of OOP principles.
- Experience with Django/Flask/FastAPI.
- Knowledge of relational (PostgreSQL, MySQL) and/or NoSQL databases (MongoDB).
- Familiarity with Docker, Git, and CI/CD pipelines.
- Experience with cloud platforms (AWS, GCP, or Azure) is a plus.
- Good problem-solving and communication skills.

Job Type: Full-time
Pay: ₹20,000.00 - ₹35,000.00 per month
Benefits: Paid time off, Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday
Work Location: In person
Speak with the employer: +91 8699032616
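The OOP and clean-code expectations above can be illustrated with a tiny repository class of the kind a Django/Flask/FastAPI service sits on top of (a framework-free sketch; the `User` model and in-memory store are invented for illustration, standing in for a real database layer):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    email: str

class UserRepository:
    """In-memory stand-in for a database-backed repository."""

    def __init__(self):
        self._users: dict[int, User] = {}
        self._next_id = 1

    def create(self, email: str) -> User:
        user = User(id=self._next_id, email=email)
        self._users[user.id] = user
        self._next_id += 1
        return user

    def get(self, user_id: int) -> Optional[User]:
        return self._users.get(user_id)

    def delete(self, user_id: int) -> bool:
        return self._users.pop(user_id, None) is not None

repo = UserRepository()
alice = repo.create("alice@example.com")
print(alice.id)           # 1
print(repo.get(1).email)  # alice@example.com
```

Keeping persistence behind a small interface like this is what makes the API layer easy to unit-test: the repository can be swapped for a real PostgreSQL- or MongoDB-backed one without touching the handlers.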
Posted 2 days ago
0 years
0 - 1 Lacs
Mohali
On-site
DevOps Intern – Job Description
Location: Mohali (Onsite)
Internship Duration: 6 Months
Stipend: ₹10,000/month
Placement Opportunity: Full-Time offer based on performance
Job Overview: We are looking for a motivated DevOps Intern to join our infrastructure and automation team. You will assist in building CI/CD pipelines, managing cloud infrastructure, and ensuring reliable deployments. Candidates with strong fundamentals in Linux, scripting, and cloud concepts are preferred. Top performers will be considered for full-time roles.
Key Responsibilities:
Assist in setting up and maintaining CI/CD pipelines
Work on cloud infrastructure (AWS, GCP, or Azure)
Automate routine operations using shell or Python scripts
Monitor system performance, reliability, and deployments
Manage Docker containers and Kubernetes clusters
Collaborate with development teams for smooth releases
Maintain documentation for DevOps processes and tools
Required Qualifications & Skills:
Pursuing or completed B.Tech / BCA (2024 or 2025 pass-out)
Strong understanding of Linux systems and shell scripting
Familiarity with Docker, Git, and CI/CD tools (like Jenkins, GitHub Actions)
Basic knowledge of cloud platforms (AWS/GCP/Azure)
Good grasp of networking fundamentals and system security
Strong problem-solving skills and attention to detail
Excellent communication and a willingness to learn
Job Types: Full-time, Permanent, Fresher, Internship
Pay: ₹8,086.00 - ₹10,000.00 per month
Benefits: Leave encashment
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus
Application Question(s): Immediate joiner?
Work Location: In person
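As a flavour of the "automate routine operations using shell or Python scripts" duty above, here is a hedged, stdlib-only sketch that scans a deployment log for failing steps; the log format is invented.

```python
# Toy automation of the kind a DevOps intern might write: scan a
# deployment log and report the steps that failed. Each line is assumed
# to look like "<step-name> ... OK|FAIL" (an invented format).
def failed_steps(log: str) -> list[str]:
    failures = []
    for line in log.splitlines():
        parts = line.split()
        if parts and parts[-1] == "FAIL":
            failures.append(parts[0])
    return failures

log = """build OK
unit-tests OK
deploy-staging FAIL
smoke-tests FAIL"""
print(failed_steps(log))  # -> ['deploy-staging', 'smoke-tests']
```

In a real pipeline a check like this would gate the next CI/CD stage or page the on-call engineer.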
Posted 2 days ago
1.0 years
1 - 8 Lacs
Mohali
Remote
Job Title: Python AI Developer
Location: [On-site]
Experience Required: Minimum 1 year
Job Overview: We are looking for a highly motivated and skilled Python AI Developer who has hands-on experience in building AI-driven applications, particularly using RAG (Retrieval-Augmented Generation) frameworks, LLMs (Large Language Models), and modern AI agentic systems. The ideal candidate should have practical experience with frameworks like Ollama, LangChain, or similar, and a solid understanding of AI agents and agentic architectures.
Key Responsibilities:
Design, develop, and deploy AI-powered solutions leveraging RAG pipelines and LLMs.
Work with Ollama, LangChain, or similar frameworks to build intelligent conversational agents.
Implement and optimize AI agent and agentic architectures for various projects.
Integrate AI models with APIs, databases, and knowledge bases to build scalable solutions.
Collaborate with cross-functional teams to translate business requirements into technical implementations.
Write clean, modular, and well-documented Python code.
Perform research and prototyping of new AI technologies to enhance current solutions.
Required Skills & Experience:
Minimum 1 year of experience as an AI/ML or Python Developer.
Strong proficiency in Python and libraries like FastAPI, Flask, or Django.
Hands-on experience with RAG architectures and LLMs (OpenAI, LLaMA, Falcon, etc.).
Experience with Ollama, LangChain, or other AI frameworks.
Knowledge of AI agent and agentic systems (e.g., Auto-GPT, BabyAGI, CrewAI).
Familiarity with vector databases (Pinecone, FAISS, Weaviate, Milvus).
Good understanding of NLP pipelines, embeddings, and model fine-tuning.
Experience with APIs, cloud services (AWS/GCP/Azure), and containerization (Docker).
Problem-solving mindset and ability to work independently on AI projects.
Preferred Qualifications (Nice to Have):
Prior contributions to open-source AI agentic frameworks.
Knowledge of advanced prompt engineering techniques.
Exposure to RAG-enhanced chatbots and document Q&A systems.
Understanding of MLOps and model deployment pipelines.
What We Offer:
Competitive salary based on skills and experience.
Opportunity to work on cutting-edge AI projects with RAG and LLM integrations.
A collaborative, innovation-driven environment with career growth potential.
Flexible working hours and remote-friendly culture.
Job Type: Full-time
Pay: ₹16,480.46 - ₹68,391.47 per month
Experience: Python: 1 year (Preferred)
Location: Mohali, Punjab (Preferred)
Work Location: In person
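The RAG retrieval step mentioned throughout this posting can be sketched in a few lines: rank documents by cosine similarity of their embedding vectors. The vectors below are hand-made toys; a real pipeline would use an embedding model and a vector database such as FAISS or Pinecone.

```python
import math

# Illustrative retrieval step of a RAG pipeline: score documents by
# cosine similarity against a query vector and return the top-k matches.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three documents (invented, 3-dimensional).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector close to the refund-policy embedding:
print(retrieve([0.8, 0.2, 0.1], k=1))  # -> ['refund policy']
```

The retrieved documents would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.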
Posted 2 days ago
0 years
1 - 1 Lacs
Mohali
On-site
About the Role
We are looking for a passionate Data Science fresher who has completed at least 6 months of practical training, internship, or project experience in the data science field. This is an exciting opportunity to apply your analytical and problem-solving skills to real-world datasets while working closely with experienced data scientists and engineers.
Key Responsibilities:
Assist in data collection, cleaning, and preprocessing from various sources.
Support the team in building, evaluating, and optimizing ML models.
Perform exploratory data analysis (EDA) to derive insights and patterns.
Work on data visualization dashboards and reports using tools like Power BI, Tableau, or Matplotlib/Seaborn.
Collaborate with senior data scientists and domain experts on ongoing projects.
Document findings, code, and models in a structured manner.
Continuously learn and adopt new techniques, tools, and frameworks.
Required Skills & Qualifications:
Education: Bachelor’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
Experience: Minimum 6 months of internship/training in data science, analytics, or machine learning.
Technical Skills:
Proficiency in Python (Pandas, NumPy, Scikit-learn, etc.).
Understanding of machine learning algorithms (supervised/unsupervised).
Knowledge of SQL and database concepts.
Familiarity with data visualization tools/libraries.
Basic understanding of statistics and probability.
Soft Skills:
Strong analytical thinking and problem-solving ability.
Good communication and teamwork skills.
Eagerness to learn and grow in a dynamic environment.
Good to Have (Optional):
Exposure to cloud platforms (AWS, GCP, Azure).
Experience with big data tools (Spark, Hadoop).
Knowledge of deep learning frameworks (TensorFlow, PyTorch).
What We Offer:
Opportunity to work on real-world data science projects.
Mentorship from experienced professionals in the field.
A collaborative, innovative, and supportive work environment.
Growth path to become a full-time Data Scientist with us.
Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹15,000.00 per month
Benefits: Health insurance
Schedule: Day shift, Fixed shift, Monday to Friday
Application Question(s): Have you completed your 6-month training?
Education: Bachelor's (Preferred)
Language: English (Preferred)
Work Location: In person
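A minimal taste of the EDA responsibility above, using only the standard library; real work would use Pandas/NumPy, and the sample column here is invented.

```python
import statistics

# Toy exploratory data analysis: summarize one numeric column with the
# basic descriptive statistics an EDA pass would start from.
ages = [23, 25, 31, 22, 29, 35, 28]  # invented sample data

summary = {
    "count": len(ages),
    "mean": round(statistics.fmean(ages), 2),
    "median": statistics.median(ages),
    "stdev": round(statistics.stdev(ages), 2),  # sample standard deviation
    "min": min(ages),
    "max": max(ages),
}
print(summary)
```

Looking at the spread (min/max vs. stdev) before modelling is exactly the kind of pattern-spotting the EDA step exists for.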
Posted 2 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!
Job Description
REQUIREMENTS:
Total experience 7+ years.
Deep understanding of Generative AI fundamentals and transformer-based architectures.
Strong experience in cloud architecture (e.g., AWS, Azure, GCP) for deploying scalable AI systems.
Hands-on experience working with Generative AI models.
Strong working experience in Azure AI.
Proven experience with BERT, GPT, LLaMA, and similar LLMs.
Strong hands-on experience in prompt engineering and RAG techniques.
Experience in fine-tuning and deploying models using frameworks like Hugging Face Transformers, LangChain, or equivalent.
Familiarity with multi-agent AI systems and collaborative model workflows.
Proficient in Python and machine learning libraries (e.g., PyTorch, TensorFlow).
Experience integrating models into enterprise platforms and APIs.
Understanding of MLOps practices and CI/CD pipelines for AI deployment.
Background in Natural Language Processing (NLP) and Knowledge Engineering.
RESPONSIBILITIES:
Understanding the client’s business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements.
Mapping decisions with requirements and translating the same to developers.
Identifying different solutions and narrowing down the best option that meets the client’s requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation.
Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it.
Understanding and relating technology integration scenarios and applying these learnings in projects.
Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decisions taken.
Carrying out POCs to make sure that suggested designs/technologies meet the requirements.
Qualifications
Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
Posted 2 days ago
5.0 - 6.0 years
8 - 15 Lacs
India
On-site
We are seeking a highly skilled Python Developer with expertise in Machine Learning and Data Analytics to join our team. The ideal candidate should have 5-6 years of experience in developing end-to-end ML-driven applications and handling data-driven projects independently. You will be responsible for designing, developing, and deploying Python-based applications that leverage data analytics, statistical modeling, and machine learning techniques.
Key Responsibilities:
Design, develop, and deploy Python applications for data analytics and machine learning.
Work independently on machine learning model development, evaluation, and optimization.
Develop ETL pipelines and process large-scale datasets for analysis.
Implement scalable and efficient algorithms for predictive analytics and automation.
Optimize code for performance, scalability, and maintainability.
Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
Integrate APIs and third-party tools to enhance functionality.
Document processes, code, and best practices for maintainability.
Required Skills & Qualifications:
5-6 years of professional experience in Python application development.
Strong expertise in Machine Learning, Data Analytics, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc.).
Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib.
Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.).
Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.).
Strong experience in developing APIs and microservices using FastAPI, Flask, or Django.
Good understanding of data structures, algorithms, and software development best practices.
Strong problem-solving and debugging skills.
Ability to work independently and handle multiple projects simultaneously.
Good to have: Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications.
Job Type: Full-time
Pay: ₹800,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Experience: Python: 5 years (Required)
Work Location: In person
Expected Start Date: 01/08/2025
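The "develop ETL pipelines" responsibility above can be sketched with a toy, stdlib-only example: extract rows from CSV, transform (type-cast and aggregate), and load into a summary structure. The column names and data are invented; real pipelines would use Spark, Pandas, or a workflow engine.

```python
import csv
import io

# Invented sample data standing in for an extracted CSV export.
RAW = """region,amount
north,120
south,80
north,40"""

def etl(raw: str) -> dict:
    """Extract CSV rows, transform (cast amounts), load into per-region totals."""
    totals: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(raw)):   # extract
        amount = int(row["amount"])                # transform
        totals[row["region"]] = totals.get(row["region"], 0) + amount  # load
    return totals

print(etl(RAW))  # -> {'north': 160, 'south': 80}
```

The same extract/transform/load shape scales up: swap the string source for object storage, the cast for a cleaning step, and the dict for a warehouse table.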
Posted 2 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position: Python Developer (GCP)
Experience: 5+ years
Location: Bangalore / WFO
Job Description
We are looking for a Python Developer with 5+ years of experience and hands-on expertise in Google Cloud Platform (GCP) services. The ideal candidate will be responsible for developing and deploying scalable applications, data processing pipelines, and cloud-native solutions using Python and GCP tools.
Responsibilities
Develop and maintain Python-based applications and services.
Design and implement data pipelines and cloud functions using GCP services such as Cloud Functions, Cloud Run, Pub/Sub, Dataflow, and BigQuery.
Integrate APIs and third-party services.
Optimize performance and scalability of cloud-based applications.
Collaborate with DevOps and data teams to build CI/CD pipelines and manage infrastructure using tools like Terraform or Deployment Manager.
Write clean, maintainable, and well-documented code.
Troubleshoot and resolve technical issues in production and development environments.
Perks & Benefits
Health and Wellness: Healthcare policy covering your family and parents.
Food: Enjoy a scrumptious buffet lunch at the office every day (for Bangalore).
Professional Development: Learn and propel your career. We provide workshops, funded online courses, and other learning opportunities based on individual needs.
Rewards and Recognitions: Recognition and rewards programs in place to celebrate your achievements and contributions.
Why join Relanto?
Health & Family: Comprehensive benefits for you and your loved ones, ensuring well-being.
Growth Mindset: Continuous learning opportunities to stay ahead in your field.
Dynamic & Inclusive: Vibrant culture fostering collaboration, creativity, and belonging.
Career Ladder: Internal promotions and a clear path for advancement.
Recognition & Rewards: Celebrate your achievements and contributions.
Work-Life Harmony: Flexible arrangements to balance your commitments.
Posted 2 days ago
9.0 - 12.0 years
7 - 10 Lacs
Bengaluru
On-site
We help the world run better
At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.
What you’ll do
You will work in the context of the SAP Business Technology Platform (BTP), which provides the multi-cloud foundation and a layer of abstraction for all applications to run on both public clouds (AWS, Azure, GCP, Alibaba Cloud) and private cloud. More specifically, you will work on a specific technical Backing Service, Postgres as a service.
Roles and responsibilities:
Design, code, test, and assure the quality of complex product features with team members
Resolve complex issues within own area of expertise and support others
Work in a multi-cloud environment within a DevOps setup
Use Infrastructure as Code methodologies to automate and manage infrastructure and services
Learn new technologies/patterns and incorporate them into the product with minimal disruption
Perform self-code review and peer review.
Define and formulate coding best practices and guidelines for clean code
Initiate, collaborate, and inculcate positive development practices into an agile team
Work with architects in designing and delivering robust, secure cloud-native applications
WHAT YOU'LL BRING
Excellent university degree (bachelor, diploma, master, PhD) in computer science or a related engineering discipline
9 to 12 years of relevant industry experience
Mandatory: Sound understanding of cloud-native architecture, development, design, and cloud platforms
Good knowledge of software design patterns, including microservices and integrated systems
Excellent hands-on proficiency in NodeJS, TypeScript/JavaScript, and possibly Go
Sound knowledge of relational database management systems with a focus on SQL and administration
Experience running systems on container-based platforms (Kubernetes and/or Cloud Foundry)
Confidence working within Unix/Linux shell environments with a focus on Bash/Python for automation scripting
IaaS experience with at least one of AWS, Azure, GCP
Experience working with CI/CD
Excellent communication and interpersonal skills with fluency in English (written & spoken)
Experience and willingness to work in a project setup following agile principles and practices
MEET YOUR TEAM
The BTP Core Backing Services team comprises individual Scrum teams embedded into an international context, all contributing to the SAP Business Technology Platform. As a cloud platform development team, we design, develop, automate, integrate, operate, and support thoroughly crafted, resilient, secure, scaling, and well-tested backing services built on open source, hyperscaler platform technology, or SAP technology. We are dedicated to agile development practices.
You will be the perfect match for our team if you fulfil the above profile and also love to innovate, influence, and have an impact, and if you remain eager to learn and improve while also enjoying and being apt to mentor and engage others. #SAPBTPEXCareers #SAPInternalT3
Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.
We win with inclusion
SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities.
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com
For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.
EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.
Requisition ID: 420475 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
Posted 2 days ago
10.0 years
3 Lacs
Bengaluru
Remote
Company Description
Our Mission
At Palo Alto Networks® everything starts and ends with our mission: Being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are.
Who We Are
We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included. As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few! At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.
Job Description
Your Career
We’re looking for a Principal Engineer to join the Cortex SRE Production team at our India Development Center (IDC).
This is a high-impact role where you’ll collaborate with global SRE and DevOps teams to enhance visibility, reliability, and automation across our production infrastructure via tooling and platform development.
Your Impact
As a Principal Engineer in the Global SRE Automation group, you will shape the future of infrastructure reliability, scale, and developer productivity. You will lead the design and development of cloud-native automation tools, streamline operational workflows, and embed resilience into every layer of the platform. You will:
Architect and build automation systems that support self-healing, observability, and service-level assurance
Contribute to the developer experience and internal tooling ecosystem, driving reliability through code
Influence the SRE strategy by introducing innovations in cloud-native backend services, Kubernetes automation, and platform engineering
Partner with global teams to deliver reliable infrastructure, integrating AI models, event-driven systems, and data pipelines to unlock operational insights
Set standards for code quality, system design, and operational excellence across the organization
Qualifications
Your Experience
10+ years of experience in Cloud Engineering, DevOps, or Infrastructure Software Development, with a strong focus on automation, reliability, and platform scalability
Deep expertise in AWS and Google Cloud Platform (GCP), with a strong understanding of networking, compute, serverless, and cost-optimization services
Proficient in Python or Go, with a solid grasp of modern backend development frameworks (e.g., Flask, FastAPI, Gin) and cloud-native application design
Hands-on experience building RESTful APIs, microservices, and cloud-native platforms supporting high availability and self-service
Designed and integrated Generative AI and LLM-based pipelines, including Retrieval-Augmented Generation (RAG), into internal tooling and operational systems to enhance developer productivity and incident response
Applied predictive analytics, anomaly detection, and MLOps for use cases such as cost forecasting, capacity planning, and proactive incident management
Built and optimized Cloud FinOps tooling to monitor usage patterns, reduce waste, and provide actionable insights into cloud spend
Developed AI-driven automation agents (bots) for cloud operations, alert triage, knowledge retrieval, and ticket deflection
Strong experience with:
Infrastructure-as-Code: Terraform, CDK
Kubernetes: cluster lifecycle management, Helm/Kustomize, GitOps (ArgoCD)
CI/CD pipelines, observability frameworks (Prometheus, Grafana, ELK), and SRE tooling for incident automation
Proficient in SQL and NoSQL databases, such as PostgreSQL and Elasticsearch
Exposure to Kafka and event-driven architectures for real-time data streaming and integration
Excellent problem-solving, debugging, and systems design skills
Demonstrated leadership in cross-functional engineering teams, including mentoring, architectural guidance, and influencing long-term platform direction
Additional Information
The Team
To stay ahead of the curve, it’s critical to know where the curve is, and how to anticipate the changes we’re facing. For the fastest-growing cybersecurity company, the curve is the evolution of cyberattacks and access technology and the products and services that dedicatedly address them. Our engineering team is at the core of our products – connected directly to the mission of preventing cyberattacks and enabling secure access to all on-prem and cloud applications. They are constantly innovating – challenging the way we, and the industry, think about access and security. These engineers aren’t shy about building products to solve problems no one has pursued before. They define the industry, instead of waiting for directions.
We need individuals who feel comfortable in ambiguity, excited by the prospect of challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment. Our engineering team is provided with an unrivaled chance to create the products and practices that will support our company growth over the next decade, defining the cybersecurity industry as we know it. If you see the potential of how incredible people and products can transform a business, this is the team for you. If the prospect of affecting tens of millions of people, enabling them to work remotely, securely, and easily in ways never done before, thrills you - you belong with us.
Our Commitment
We’re problem solvers that take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com. Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.
Is role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.
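One of the simplest forms of the anomaly detection mentioned in the qualifications above is a z-score rule: flag samples far from the mean in standard-deviation units. This is a hedged, stdlib-only sketch (the latency numbers are invented); production SRE systems would use streaming baselines and seasonality-aware models.

```python
import statistics

# Flag metric samples whose z-score exceeds a threshold, i.e. values
# more than `threshold` population standard deviations from the mean.
def anomalies(samples: list[float], threshold: float = 3.0) -> list[float]:
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a flat series has no outliers by this rule
    return [x for x in samples if abs(x - mean) / stdev > threshold]

latencies = [102, 99, 101, 98, 100, 103, 97, 500]  # ms; 500 is the outlier
print(anomalies(latencies, threshold=2.0))  # -> [500]
```

An automation agent of the kind described above would feed such flags into alert triage rather than paging on raw thresholds.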
Posted 2 days ago
8.0 years
10 Lacs
Bengaluru
On-site
Job Description:
Job Title: Senior Data Scientist
The Purpose of this Role
Are you interested in joining a fast-paced and cutting-edge organization where you can make an immediate impact on the business? Do you want to be an AI/ML advocate, using data to answer important questions and shape strategy for leadership? You can be a part of the Artificial Intelligence Center of Excellence (AI CoE) within Fidelity Workplace Investing (WI) that constantly pushes the potential of data to drive the business forward! At the Fidelity Workplace Investing Artificial Intelligence Center of Excellence (WI AI COE), you won’t be just tinkering around the edges; you’ll be building critical solutions that will benefit our clients and internal stakeholders for years to come.
The Value you Deliver
The WI AI COE is seeking a Senior Data Scientist with experience in AI/ML, Natural Language Processing (NLP), search, virtual assistants/chatbots, large language models (LLMs), agents/agentic flows, deep learning technologies, and predictive and prescriptive analytics.
The Expertise and Skills You Bring
Bachelor’s/Master’s degree in Engineering, Computer Science, Mathematics, Computational Statistics, Operations Research, Machine Learning, or related technical fields
8+ years of prior relevant work experience as a data scientist
Hands-on experience in most of the following: AI/ML projects; developing supervised and unsupervised machine learning algorithms; deep learning models and techniques; NLP; LLMs; RAG; reinforcement learning; knowledge graphs
Experience with fine-tuning Large Language Models (LLMs), prompt engineering, RAG applications, vector databases, and transformer architectures is desirable.
Experience in multivariate statistics and experimentation setup (hypothesis testing, measurements) is also an added advantage.
Proficiency in the entire Predictive Modelling life cycle – From exploratory data analysis, data cleaning, Feature Engineering to Model Building and Validation, Operationalization and Scalability and establishing decision strategies. Programming skills in Python, SQL, and hands on experience on using cloud technology platforms like AWS, Azure or GCP. Exposure to AI-centric tools like PyTorch, Langchain, Langgraphis required. Comfort with Git source control and repo-driven development. Ability to partner closely with a diverse set of business stakeholders to identify needs and deliver AI/ML solutions. Demonstrated expertise in synthesizing and presenting research and technical findings to diverse audiences, with a strong ability to quickly acquire new knowledge and a passion for continuous learning. Implement new technologies in a production environment with product, IT, and data engineering teams. How your Work Impacts the Organization The Team The Fidelity Workplace Investing AI COE is a team of data scientists developing advanced analytics and artificial intelligence solutions to support a variety of different applications for a host of clients, both internal and external, across WI. The team occupies a unique position, able to understand the business problems that both our clients and internal partners face, while applying advanced analytics, machine learning, and visualization to solve problems and provide results-oriented solutions to market. Our team of high caliber scientists, mathematicians and statisticians use rigorous quantitative approaches to ensure that we are efficiently building algorithms and technology relevant to the business or customer experience issue at hand. The Purpose of this Role Are you interested in joining a fast paced and cutting-edge organization where you can make an immediate impact on the business? Do you want to be an AI/ML advocate, using data to answer important questions and shape strategy for leadership? 
You can be a part of the Artificial Intelligence Center of Excellence (AI CoE) within Fidelity Workplace Investing (WI) that constantly pushes the potential of data to drive the business forward! At Fidelity Workplace Investing Artificial Intelligence Center of Excellence (WI AI COE), you won’t be just tinkering around the edges, you’ll be building critical solutions that will benefit our clients and internal stakeholders for years to come. The Value you Deliver The WI AI COE is seeking a Senior Data Scientist with experience in AI/ML, Natural Language Processing (NLP), Search, virtual assistants/chatbots, large language models (LLMs), Agents/ Agentic flows, deep learning related technologies, predictive and prescriptive analytics. The Expertise and Skills You Bring Bachelor’s/ Master’s Degree in Engineering, Computer Science, Mathematics, Computational Statistics, Operations Research, Machine Learning or related technical fields 8+ years of prior relevant work experience as a data scientist Hands on experience in most of the following: AI/ML Projects, Developing Supervised and Unsupervised Machine Learning Algorithms, Deep Learning Models & Techniques, NLP, LLMs, RAG, Reinforcement Learning, Knowledge Graphs Experience in working with finetuning Large Language Models (LLMs), Prompt Engineering, RAG Applications, Vector Databases, Transformer Architectures is a desirable. Experience in Multivariate Statistics, Experimentation Setup (hypothesis testing, measurements) is also an added advantage. Proficiency in the entire Predictive Modelling life cycle – From exploratory data analysis, data cleaning, Feature Engineering to Model Building and Validation, Operationalization and Scalability and establishing decision strategies. Programming skills in Python, SQL, and hands on experience on using cloud technology platforms like AWS, Azure or GCP. Exposure to AI-centric tools like PyTorch, Langchain, Langgraphis required. 
Comfort with Git source control and repo-driven development.
Ability to partner closely with a diverse set of business stakeholders to identify needs and deliver AI/ML solutions.
Demonstrated expertise in synthesizing and presenting research and technical findings to diverse audiences, with a strong ability to quickly acquire new knowledge and a passion for continuous learning.
Experience implementing new technologies in a production environment with product, IT, and data engineering teams.
How your Work Impacts the Organization
The Team
The Fidelity Workplace Investing AI COE is a team of data scientists developing advanced analytics and artificial intelligence solutions to support a variety of applications for a host of clients, both internal and external, across WI. The team occupies a unique position: it understands the business problems that both our clients and internal partners face, while applying advanced analytics, machine learning, and visualization to solve those problems and bring results-oriented solutions to market. Our team of high-caliber scientists, mathematicians and statisticians use rigorous quantitative approaches to ensure that we are efficiently building algorithms and technology relevant to the business or customer-experience issue at hand.
Certifications:
Category: Data Analytics and Insights
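The RAG and vector-database skills listed above rest on a simple retrieval step: embed the query, score it against stored document vectors, and return the closest matches. Below is a minimal illustrative sketch in plain Python; the vectors are made up for demonstration, and a real system would use model-generated embeddings and a vector store rather than hand-written lists.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, docs, k=2):
    """Return indices of the k document vectors most similar to the query vector."""
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine_similarity(query, docs[i]),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings": doc 0 and doc 1 point roughly the same way as the query.
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(top_k([1.0, 0.0], docs, k=2))  # → [0, 1]
```

The retrieved documents would then be concatenated into the LLM prompt as context, which is the "augmented generation" half of RAG.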
Posted 2 days ago
18.0 years
2 - 6 Lacs
Bengaluru
On-site
Site Name: Bengaluru Luxor North Tower
Posted Date: Jul 21 2025
'Be You' at GSK
At GSK, we unite science, technology, and talent to get ahead of diseases together. We are a world leader in infectious diseases, including vaccines and HIV, with the broadest portfolio in the industry, pushing the frontiers of respiratory science and improving the lives of millions of patients, and we are recognised for our ESG leadership, making an impact on some of society's most urgent challenges. We're confident that together we can make a positive impact on the health of more than 2.5 billion people by 2031.
When you set out on your adventure at GSK, we make a deal. You commit to living our values and performing against our Innovation, Performance and Trust priorities. In return, GSK commits to providing the right environment for you to thrive. Together, we build an environment where we can all thrive and focus on what matters most to each of us.
Job purpose:
As the Director, AIML & Decision Science - Global Functions (working closely with the Head of Data & AI for Global Functions and D&A leadership in GSK's Global Capability Center), you will own and manage the AI/ML competency and oversee the design, development and deployment of AI/ML and Data Science products for GSK's Global Functions. Acting as a competency leader for business functions and GSK's Digital & Technology teams (which include subject matter experts and external third-party organizations), you will work closely with senior business leaders (in Finance, Procurement, HR, Legal, Compliance, Risk Management, WREF, etc.) and various technology leaders to oversee the development and implementation of AI/ML projects, ensuring they are delivered on time, within budget, and with the desired outcomes, leveraging GSK's Enterprise Data Platform.
Day-to-day, you will use your AI/ML, data science, business domain and product development expertise to define and refine D&A product visions, strategies and roadmaps, driving the design and delivery of high-quality AI solutions that enable the company to achieve its objectives. The team currently delivers a range of AI and Analytics products, including conversational AI solutions, automation tools, forecasting models, and AI agents designed to achieve business objectives more efficiently than human effort alone.
Key Responsibilities:
Understand the needs, business processes and data requirements of customer groups and users, and manage key business relationships for your AI/ML portfolio
Manage across a portfolio of data products, identify synergies and dependencies, and ensure successful delivery
Combine business understanding with knowledge of established and emerging AI/ML capabilities to identify and cultivate use-cases from ideas into potential D&A products
Build and manage a team of AI/ML professionals, including data scientists, machine learning engineers, and researchers, ensuring they have clear objectives and the resources needed to succeed
Oversee the development and implementation of AI/ML projects, ensuring they are delivered on time, within budget, and with the desired outcomes
Foster a culture of continuous learning and development within the AI/ML team, including mentoring and career growth opportunities
Encourage a culture of innovation and experimentation within the team to explore new AI/ML techniques and applications
Ensure that AI/ML practices adhere to ethical standards and comply with relevant laws and regulations, particularly regarding data privacy and security
Work closely with other departments, such as GP&T, product management, and business units, to integrate AI/ML solutions into the broader business strategy
Stay abreast of the latest developments in AI/ML technology and research to maintain a competitive edge and apply cutting-edge methods to solve business problems
Engage with internal partners to understand and probe business processes, develop hypotheses, bring structure to requests and translate requirements into an analytic approach
Work collaboratively with business leaders to identify critical business problems that could be solved using generative AI, and identify opportunities to leverage the latest Large Language Models (LLMs)
Think and act with agility, using experimentation, user demos and other sources of rapid feedback to iteratively refine the D&A product roadmap and backlog throughout the product lifecycle
Confirm that delivery from the development team meets the business requirements and creates the expected value in line with business KPIs
Coach and guide the wider Data, Analytics and AI teams, promoting a culture of continuous improvement
Basic qualifications:
We are looking for professionals with these required skills to achieve our goals:
PhD in Computer Science, Applied Math, Economics, Statistics, Physics, Systems Biology, Computational Biology, Bioinformatics, or related field
18+ years of technical experience delivering solutions involving Data Science, Machine Learning or Statistics
10+ years managing cross-functional teams and leading the execution of highly visible, complex projects, demonstrating strong project management
Strong relevant experience of Data & Analytics product development (ideally within a cloud environment like Azure)
Domain knowledge in the life science industry is preferred
Inclusion at GSK:
As an employer committed to inclusion, we encourage you to reach out if you need any adjustments during the recruitment process. Please contact our Recruitment Team at IN.recruitment-adjustments@gsk.com to discuss your needs.
#LI-GSK
Skills: Artificial Neural Networks (ANNs), Computer Programming, Deep Learning, Feature Engineering, Google Cloud Platform (GCP) for Machine Learning, Machine Learning Operations, Model Evaluation, Model Validation, Performance Optimizations, Software Development Life Cycle (SDLC)
Why GSK?
Uniting science, technology and talent to get ahead of disease together.
GSK is a global biopharma company with a special purpose – to unite science, technology and talent to get ahead of disease together – so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns – as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology).
Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves – feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition, join us at this exciting moment in our journey to get Ahead Together.
Important notice to Employment businesses/Agencies
GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK.
The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. GSK shall therefore not be liable for any fees arising from such actions or from any referrals by employment businesses/agencies in respect of the vacancies posted on this site.
It has come to our attention that the names of GlaxoSmithKline or GSK or our group companies are being used in connection with bogus job advertisements or through unsolicited emails asking candidates to make payments for recruitment opportunities and interviews. Please be advised that such advertisements and emails are not connected with the GlaxoSmithKline group in any way. GlaxoSmithKline does not charge any fee whatsoever for the recruitment process. Please do not make payments to any individuals/entities in connection with recruitment with any GlaxoSmithKline (or GSK) group company at any worldwide location, even if they claim that the money is refundable. If you come across an unsolicited email from an email address not ending in gsk.com, or a job advertisement which states that you should contact an email address that does not end in "gsk.com", you should disregard it and inform us by emailing askus@gsk.com, so that we can confirm whether the job is genuine.
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
Contributes to software engineering communities of practice and events that explore new and emerging technologies
Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 3+ years applied experience
Strong skills in object-oriented analysis and design (OOAD), data structures, algorithms, and design patterns
Strong knowledge and hands-on experience in key technologies: Java (Spring Boot, Dropwizard or an equivalent framework), containerisation (Docker and Kubernetes) and Oracle DB
Hands-on experience in microservices, RESTful web services development and WebSockets
Experience with messaging and integration frameworks like JMS, RabbitMQ, AMQP, MQ, Kafka
Experience developing with testing frameworks such as JUnit, Mockito, Karma, Protractor, Jasmine, Mocha, Selenium, and Cucumber
Experience with JDBC/JPA frameworks such as Hibernate or MyBatis
Thorough understanding of the System Development Life Cycle and development methodologies including Agile
Experience with SQL databases such as Sybase and Oracle
Command of architecture, design, and business processes
Ability to manage relationships with business stakeholders, and to organize and prioritize within complex delivery programs
Proficiency in a front-end technology such as React/ReactJS, Redux, Angular/AngularJS, ExtJS, jQuery, NodeJS, or other web frameworks
Working experience in a public cloud like AWS, Azure or GCP, and private cloud (Cloud Foundry)
Preferred qualifications, capabilities, and skills
Experience working in a financial services environment
Ability to act as a technology coach and help the team solve technology problems
SRE concepts like monitoring and log tracing
Good to have: NoSQL databases such as HBase and Cassandra, and tools such as Apache Spark
Posted 2 days ago
12.0 - 21.0 years
8 - 12 Lacs
Chennai
Work from Office
Project Overview
The candidate will be working on the Model Development as a Service (MDaaS) initiative, which focuses on scaling machine learning techniques for exception classification, early warning signals, data quality control, model surveillance, and missing value imputation. The project involves applying advanced ML techniques to large datasets and integrating them into financial analytics systems.
Key Responsibilities
Set up data pipelines: configure storage in cloud-based compute environments and repositories for large-scale data ingestion and processing.
Develop and optimize machine learning models:
Implement Machine Learning for Exception Classification (MLEC) to classify financial exceptions.
Conduct missing value imputation using statistical and ML-based techniques.
Develop early warning signals for detecting anomalies in multivariate/univariate time-series financial data.
Build model surveillance frameworks to monitor financial models.
Apply unsupervised clustering techniques for market segmentation in securities lending.
Develop advanced data quality control frameworks using TensorFlow-based validation techniques.
Experimentation & validation: evaluate ML algorithms using cross-validation and performance metrics; implement data science best practices and document findings.
Data quality and governance: develop QC mechanisms to ensure high-quality data processing and model outputs.
Required Skillset
Strong expertise in Machine Learning & AI (supervised and unsupervised learning).
Proficiency in Python, TensorFlow, SQL, and Jupyter Notebooks.
Deep understanding of time-series modeling, anomaly detection, and risk analytics.
Experience with big data processing and financial data pipelines.
Ability to deploy scalable ML models in a cloud environment.
Deliverables & Timeline
Machine Learning for Exception Classification (MLEC): working code & documentation
Missing Value Imputation: implementation & validation reports
Early Warning Signals: data onboarding & anomaly detection models
Model Surveillance: fully documented monitoring framework
Securities Lending: clustering algorithms for financial markets
Advanced Data QC: development of a general-purpose QC library
Preferred Qualifications
Prior experience in investment banking, asset management, or trading desks.
Strong foundation in quantitative finance and financial modeling.
Hands-on experience with TensorFlow, PyTorch, and AWS/GCP AI services.
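Of the techniques above, an early warning signal on a univariate series can be as simple as a trailing-window z-score: flag any point that deviates from the recent mean by more than a threshold number of standard deviations. The sketch below is illustrative only; the window size, threshold, and sample readings are assumptions, not values from the project.

```python
import statistics

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points deviating from the trailing-window mean by more than `threshold` sigmas."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]       # trailing window, excludes the current point
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)   # population std dev of the window
        if sigma == 0:                    # flat history gives no basis for a z-score
            flags.append(False)
            continue
        flags.append(abs(series[i] - mu) / sigma > threshold)
    return flags

# The final reading (45) is far outside the recent range and gets flagged.
readings = [10, 11, 10, 12, 11, 10, 11, 45]
print(zscore_anomalies(readings))  # → [False, False, True]
```

Production systems would layer on seasonality adjustment and multivariate methods, but this trailing-window form is the usual baseline against which fancier detectors are compared.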
Posted 2 days ago
0 years
4 - 6 Lacs
Bengaluru
Remote
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive.
Your day at NTT DATA
As a Senior Associate Cloud Services Architect at NTT DATA, you'll play a critical role in our clients' digital transformation journeys. Working closely with clients, sales, and delivery teams, you will design Managed Services solutions across the cloud technology domain, bringing your expertise to the forefront to co-create value and ensure secure, innovative, and competitive solutions.
Your responsibilities will include developing data models to meet client requirements by integrating technology and service design across the cloud domain. You'll conduct sales presentations and product demonstrations, and help maintain our cutting-edge company products. Along the way, you'll support the service design deliverables, interfacing with internal teams and partners to ensure our solutions are robust and scalable.
Engaging in the complete solution development process means you'll map client requirements against proposed service elements and architectures, coordinating end-to-end solutions that align with technological and service standards. You'll implement consulting approaches to consolidate client needs, prepare transition documents, and define both the current state and the desired future state of our solutions.
Stay engaged with client projects even after implementation, offering continued support and participating in renewals. You'll help us test and validate new design features, provide suggestions for improving existing service offers, and work closely with our sales teams to discuss commercial models with clients.
To thrive in this role, you need to have:
A good understanding of cloud technologies, networking, security, and system administration.
Basic Azure, GCP, and/or AWS infrastructure architect and pre-sales skills.
Basic knowledge of virtualization, hybrid computing environments, storage technology, and cloud-native technologies.
A developing understanding of Managed Services service and delivery models, including cloud, global, and distributed delivery models.
The capability to work with costing models alongside sales and finance stakeholders.
Strong communication and presentation abilities, with the ability to facilitate workshops with clients and internal teams.
The ability to effectively communicate potential improvements and the value of solutions to all stakeholder levels.
A passion for staying updated on emerging trends in cloud technology and managed services.
A bachelor's degree in information technology/systems or a related field.
Workplace type: Remote working
About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.
Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity.
We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 2 days ago
0 years
40 - 42 Lacs
Bengaluru
On-site
Roles and Responsibilities:
Design and implement scalable, secure, and resilient Landing Zones on GCP.
Define and enforce Cloud Governance frameworks, including IAM, policies, VPC design, and logging/monitoring.
Architect and drive Greenfield, Brownfield, and Bluefield GCP transformation/migration projects.
Collaborate with enterprise architecture and security teams to define network topology, service exposure (intranet/internet-facing) and compliance needs.
Design cloud-native applications and modernization strategies using GCP services.
Develop CI/CD pipelines and Infrastructure as Code (IaC) using Terraform or Deployment Manager.
Act as GCP SME and guide development, DevOps, and security teams on GCP best practices.
Required Skills:
Strong experience in Google Cloud Platform (GCP) architecture and services.
Hands-on experience building and managing Landing Zones.
Deep understanding of Governance, Security, IAM, Networking, and Billing structure in GCP.
Experience handling Greenfield (new cloud setup) and Brownfield (existing migration) environments.
Familiarity with the Bluefield (hybrid migration) approach.
Proficiency in Terraform, GCP Deployment Manager or similar IaC tools.
Understanding of intranet- and internet-facing service exposures.
Strong communication and documentation skills.
Preferred Qualifications:
GCP Professional Cloud Architect certification
Experience in cloud migration or hybrid cloud setups
Working knowledge of DevOps and automation tools
Job Type: Full-time
Pay: ₹4,000,000.00 - ₹4,200,000.00 per year
Schedule: Day shift
Work Location: In person
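Governance frameworks like the one described above are often enforced as pre-deployment policy checks over declared resources. The following is a small illustrative validator in Python; the name prefixes and required labels are hypothetical conventions, not GCP requirements, and real enforcement would typically use GCP Organization Policy constraints or a policy engine rather than ad-hoc code.

```python
# Hypothetical governance policy: every resource must carry these labels
# and use an approved name prefix. Adjust to the organization's standard.
REQUIRED_LABELS = {"owner", "cost-center", "environment"}
APPROVED_PREFIXES = ("prj-", "vpc-", "sub-")

def validate_resource(resource):
    """Return a list of policy violations for one resource description
    (a dict with 'name' and 'labels' keys)."""
    violations = []
    name = resource.get("name", "")
    if not name.startswith(APPROVED_PREFIXES):
        violations.append(f"{name or '<unnamed>'}: name lacks an approved prefix")
    missing = REQUIRED_LABELS - set(resource.get("labels", {}))
    if missing:
        violations.append(f"{name or '<unnamed>'}: missing labels {sorted(missing)}")
    return violations

good = {"name": "vpc-prod-shared",
        "labels": {"owner": "netops", "cost-center": "cc42", "environment": "prod"}}
bad = {"name": "myvpc", "labels": {"owner": "netops"}}
print(validate_resource(good))  # → []
print(validate_resource(bad))   # two violations: bad prefix, missing labels
```

Wiring a check like this into the CI stage of a Terraform pipeline (failing the plan when violations are non-empty) is one common way to make governance rules enforceable rather than advisory.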
Posted 2 days ago