
6199 Retrieval Jobs - Page 32

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Position Overview
Job Title: Senior Engineer - Oracle, AVP
Location: Pune, India
Corporate Title: AVP

Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. The Engineer is also responsible for owning the delivery capacity, defining the application strategy, providing the technical product vision, creating the roadmap, and driving its execution. The role may also involve functional oversight of engineering delivery for a diverse suite of applications.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for employees aged 35 and above

Your Key Responsibilities
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and review throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the bank
- Understanding the bank's technology at a deep level
- Collaborating with other product managers, development leads, architects, operations, and key clients (internal and/or external)
- Working with a variety of people across multiple departments and organizations to satisfy the needs of the bank and its clients, in compliance with architectural principles and guidelines as well as legal and regulatory requirements
- Driving the development of technical solutions to ensure they meet business needs and comply with architectural principles, guidelines, and legal and regulatory requirements

Your Skills And Experience
- 10+ years of hands-on Oracle PL/SQL development experience (Oracle 12c, 19c)
- 7+ years of hands-on shell scripting experience: connectivity, housekeeping, archiving, file handling
- Professional experience with Control-M, MQ, reporting suites (BI, Tableau, Cognos), Linux/SLES upgrades, and BPM suite products
- Experience contributing to software design and architecture, including consideration of non-functional requirements (e.g., reliability, scalability, observability, testability)
- Understanding of relevant architecture styles and their trade-offs, e.g., microservices, monolith, batch
- Professional experience building applications on one of the cloud platforms (Azure, AWS, or GCP) and using its major infrastructure components (software-defined networks, IAM, compute, storage, etc.)
- Experience designing and implementing distributed enterprise applications
- Professional experience with at least one CI/CD tool such as TeamCity, Jenkins, or GitHub Actions
- Professional experience with Agile build and deployment practices (DevOps)
- Professional experience defining interface and internal data models, both logical and physical
- Experience working with a globally distributed team requiring remote interaction across locations, time zones, and diverse cultures
- Excellent communication skills (verbal and written)

Ideal to Have
- Experience working on one or more large data integration projects/products
- Experience and knowledge of data engineering topics such as partitioning and optimization for different goals (e.g., retrieval performance vs. insert performance)
- A passion for problem solving with strong analytical capabilities
- Experience with general ledger functionality, reference data, BPM workflow, legacy application decommissioning, etc.
- Understanding of data security principles, data masking, and implementation considerations

Education/Qualifications
- Degree from an accredited college or university with a concentration in Engineering or Computer Science

How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
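The shell-scripting duties listed above (housekeeping, archiving, file handling) boil down to a pattern like the following minimal sketch; the directory layout and the 30-day threshold are illustrative assumptions, not details from the posting:

```python
import shutil
import time
from pathlib import Path

def archive_old_files(src: Path, archive: Path, max_age_days: int = 30) -> list[str]:
    """Housekeeping: move files older than max_age_days from src into archive."""
    archive.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = []
    # Snapshot the directory listing before moving anything out of it.
    for f in list(src.iterdir()):
        if f.is_file() and f.stat().st_mtime < cutoff:
            shutil.move(str(f), str(archive / f.name))
            moved.append(f.name)
    return sorted(moved)
```

In a production setting this kind of job would typically be scheduled (e.g., via Control-M or cron) and would log what it moved rather than just returning the names.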

Posted 1 week ago

Apply

0 years

9 - 18 Lacs

Saket, Delhi, India

Remote

Company: UltraSafe AI
Business Type: Small/Medium Business
Company Type: Product & Service
Business Model: B2B
Funding Stage: Seed
Industry: Emerging Technologies
Salary Range: ₹9-18 Lacs PA

Job Description

About UltraSafeAI
UltraSafeAI is a US-based technology company at the forefront of developing secure, reliable, and explainable AI systems. We specialize in proprietary AI technologies including advanced LLMs, CNNs, VLLMs, intelligent agents, computer vision systems, and cutting-edge ML algorithms. Our focus is on B2B AI adoption, providing end-to-end integration using our proprietary technology stack to automate entire business processes. We create intelligent solutions that prioritize safety, transparency, and human alignment across various industries including healthcare, finance, legal, and enterprise services. Our mission is to enable seamless AI adoption while maintaining the highest standards of safety and ethical considerations.

Position Overview
We're seeking experienced Python Engineers with expertise in building modern AI systems, particularly focusing on Retrieval Augmented Generation (RAG) architectures and agentic AI frameworks. The ideal candidate will have strong experience with FastAPI, vector databases, and orchestrating complex AI agents to solve real-world problems. As a Python Engineer at UltraSafeAI, you'll work on designing and implementing scalable, secure AI systems that power our clients' critical business processes. You'll collaborate with a global team of engineers, researchers, and domain experts to create solutions that set new standards for safety and reliability in AI.

Key Responsibilities
- Design and develop robust FastAPI applications that serve as the backbone for our AI systems
- Implement sophisticated RAG (Retrieval Augmented Generation) architectures utilizing vector databases, embeddings, and reranking techniques
- Build and orchestrate multi-agent AI systems using frameworks like LangGraph, CrewAI, or Agent Development Kit
- Develop intelligent knowledge retrieval systems that enhance LLM capabilities with domain-specific information
- Create scalable, maintainable, and well-tested code that meets our high quality standards
- Contribute to architecture decisions and technology selection for new projects
- Collaborate with cross-functional teams to understand requirements and deliver solutions
- Participate in code reviews and knowledge sharing to elevate the entire engineering team
- Stay current with the rapidly evolving AI landscape and recommend new approaches

Required Qualifications
- Strong expertise in FastAPI development with authentication, async processing, and API design
- Practical experience building RAG systems with vector databases (Pinecone, FAISS, Chroma, Qdrant, etc.)
- Experience with embedding models and techniques for effective information retrieval
- Hands-on experience with at least one agentic framework (LangGraph, CrewAI, Agent Development Kit, etc.)
- Solid understanding of LLMs and their integration into production systems
- Excellent problem-solving skills and attention to detail
- Strong communication skills and ability to work in a distributed team
- Self-motivated with the ability to work independently in a remote environment

Highly Desirable
- Experience with specialized vector database operations and optimizations
- Knowledge of reranking techniques to improve retrieval quality
- Experience building complex multi-agent systems that coordinate specialized AI agents
- Background in specific domains like finance, healthcare, or legal
- Contributions to open-source projects
- Understanding of AI safety considerations and alignment techniques
- Experience with deployment and monitoring of AI systems in production

Why Join UltraSafeAI?
- 100% Remote Work: Work from anywhere in the world with flexible hours
- Cutting-Edge Technology: Build systems using the latest advancements in AI
- Meaningful Impact: Create AI solutions that prioritize safety and human wellbeing
- Continuous Learning: Regular knowledge sharing sessions and education stipend
- Global Team: Collaborate with talented professionals from diverse backgrounds
- Work-Life Balance: Flexible PTO policy and respect for personal time
- Career Growth: Clear paths for advancement in a rapidly growing company
- Modern Tech Stack: Work with the latest tools and technologies in AI development

Remote Work Environment
At UltraSafeAI, we've built a fully distributed team from day one. We believe talent is global, and we've created processes and tools to ensure remote work is efficient, collaborative, and enjoyable:
- Asynchronous-first communication with minimal required meetings
- Collaborative documentation and knowledge management
- Regular virtual team-building events and optional in-person retreats
- Flexible working hours with core collaboration windows

Application Process
1. Initial application review
2. Technical interviews with the engineering team
3. Technical assessment
4. Code review and discussion of your assessment
5. Final interview with leadership
6. Offer

IMPORTANT NOTE REGARDING THE TECHNICAL ASSESSMENT: Our technical assessment is designed to evaluate your actual skills and problem-solving abilities. We expect candidates to complete the assessment independently, without using AI tools or external help. During the interview process, you will be asked to explain your code and may be required to write similar solutions in real time. Any discrepancies between your assessment performance and interview performance will be carefully considered in our hiring decision.

Join Us in Building Safer AI
If you're passionate about creating AI systems that are not only powerful but also safe, transparent, and aligned with human values, we'd love to hear from you. At UltraSafeAI, you'll help shape the future of AI technology while working with a global team of talented, mission-driven professionals.
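The core retrieval-then-augment loop of a RAG system described above can be sketched in a few lines. The bag-of-words "embedding" and in-memory store below are toy stand-ins for a real embedding model and a vector database such as FAISS or Qdrant:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system calls an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the LLM prompt with retrieved context (the 'AG' in RAG)."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A production system would add a reranking pass over the retrieved candidates before prompt assembly, which is exactly the "reranking techniques" requirement in the listing.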

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say, "the stuff dreams are made of," we're not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD's vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what's next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.

Sr. Manager - Data Platform, Engineering - Hyderabad, India

About Warner Bros. Discovery
Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines WarnerMedia's premium entertainment, sports and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.

Roles & Responsibilities
As an Engineering Manager here, you are passionate about using software-based approaches to solve complex data-driven challenges and automate those solutions. Within our organization, you'll lead efforts aimed at scaling our existing data offerings and establish the technical strategy for how we can better equip engineers and leaders with the Data Platform. You'll build a deep understanding of our digital streaming service and use that knowledge, coupled with your engineering, infrastructure, data, and cloud knowledge, to optimize and evolve how we understand our technical ecosystem. To be successful, you'll need to be deeply technical and capable of holding your own with other strong peers. You possess excellent collaboration and diplomacy skills. You have experience practicing infrastructure-as-code, data lake management, AI/ML, and analytics. In addition, you'll have strong systems knowledge and troubleshooting abilities.
- Develop streaming and batch analytics pipelines to build impactful data products around a semantic layer
- Create tools and frameworks that enhance data processing, information retrieval, governance, and data quality in a cost-effective and user-friendly manner
- Promote a culture of experimentation and data-driven innovation
- Inspire and motivate through internal and external presentations and other speaking opportunities
- Own the end-to-end architecture of the data platform, ensuring its efficiency, cost-effectiveness, security, and governance
- Collaborate closely with other engineers to design and build an optimal and cost-efficient platform solution
- Work in partnership with other engineers and managers to design and develop foundational elements of the platform
- Assist in hiring, mentoring, and coaching engineers; help build an engineering team that prioritizes empathy, diversity and inclusion

What To Bring
- Bachelor's degree in computer science or a similar discipline
- 12+ years with a track record of delivering complex software engineering systems and distributed platforms using open-source technologies
- 12+ years of experience and proficiency in building and managing real-time data processing pipelines with streaming platforms like Apache Kafka, AWS Kinesis, or Google Pub/Sub
- 10+ years of strong foundation in distributed data processing concepts and event-driven architectures, understanding of batch and stream processing technologies, and experience building streaming and batch pipelines
- 12+ years of programming experience with proficiency in Java, C, C++ or similar languages
- 8+ years of experience with a wide variety of distributed systems and technologies, such as Apache Flink, Apache Spark, Kafka, Airflow, Kubernetes, Databricks, Snowflake
- 10+ years of cloud (AWS preferred) experience
- 10+ years of experience with containerization (Docker) and orchestration tools (Kubernetes), plus Prometheus, Grafana, Kibana, Elasticsearch, Cassandra, DynamoDB
- 10+ years of experience designing scalable and resilient streaming applications with a microservices architecture
- Experience leading in a highly cross-functional environment, likely collaborating closely with Engineering, Product Management, and/or Data Science
- Strong interpersonal, communication and presentation skills

Nice to Have
- Exposure to tools like Apache Beam or Spark Streaming
- Familiarity with integrating ML models into Flink pipelines

What We Offer
- A great place to work
- Equal opportunity employer
- Fast-track growth opportunities

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you're a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
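The stream-processing work described above (Kafka/Flink-style pipelines) frequently reduces to windowed aggregation. A minimal pure-Python sketch of tumbling-window counts over a finite batch of events, with the event names, timestamps, and window size chosen purely for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(
    events: list[tuple[int, str]], window_secs: int
) -> dict[int, dict[str, int]]:
    """Group (timestamp, key) events into fixed-size tumbling windows, counting per key.

    A streaming engine such as Flink performs the same grouping incrementally
    and emits results as windows close; here we fold over a finite batch just
    to show the semantics.
    """
    windows: dict[int, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to the window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

Real pipelines additionally handle out-of-order events with watermarks and allowed lateness, which is where most of the engineering effort in the role would go.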

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

About Lingaro:
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Requirements:
- At least 8 years of experience as a Data Engineer, including a minimum of 6 years working with GCP cloud-based infrastructure and systems and a minimum of 2 years in a technical leadership role.
- Experience leading technical teams, collaborating with key business stakeholders, and helping drive operations.
- Ability to actively participate in or lead discussions with clients to identify and assess concrete and ambitious avenues for improvement.
- Deep knowledge of Google Cloud Platform and cloud computing services.
- Extensive experience designing, building, and deploying data pipelines in the cloud to ingest data from various sources like databases, APIs, or streaming platforms.
- Proficiency in database management systems, both SQL (BigQuery is a must) and NoSQL; able to design, configure, and manage databases to ensure optimal performance and reliability.
- Programming skills (SQL, Python, other scripting).
- Proficiency in data modeling techniques and database optimization; knowledge of query optimization, indexing, and performance tuning is necessary for efficient data retrieval and processing.
- Knowledge of at least one orchestration and scheduling tool (Airflow is a must).
- Experience with data integration tools and techniques, such as ETL and ELT; able to integrate data from multiple sources and transform it into a format suitable for analysis.
- Excellent communication skills to collaborate effectively with cross-functional teams, including data scientists, analysts, and business stakeholders; ability to convey technical concepts to non-technical stakeholders in a clear and concise manner.
- Tools knowledge: Git, Jira, Confluence, etc.
- Openness to learning new technologies and solutions.
- Experience in a multinational environment and with distributed teams.

Nice to have:
- Certifications in big data technologies and/or cloud platforms.
- Knowledge of AecorSoft data integrator.
- Experience with BI solutions (e.g. Looker, Power BI, Tableau).
- Experience with Apache Spark, especially in a GCP environment.

Tasks:
- Acting as a Senior Consultant/Solution Advisor: you will be part of the team accountable for the design, modeling, and development of the whole GCP data ecosystem for one of our clients (Cloud Storage, Cloud Functions, BigQuery). Involvement throughout the whole process, starting with gathering, analyzing, modeling, and documenting business/technical requirements. The role includes direct contact with clients.
- Guiding and mentoring the data engineering team: providing technical direction, overseeing the design and implementation of data solutions, and ensuring adherence to best practices and quality standards in data engineering projects.
- Training and mentoring less experienced data engineers, providing guidance and knowledge transfer.
- Modeling data from various sources and technologies.
- Troubleshooting and supporting the most complex, high-impact problems to deliver new features and functionalities.
- Designing and optimizing data storage architectures, including data lakes, data warehouses, and distributed file systems.
- Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval.
- Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency.
- Identifying and resolving issues related to data processing, storage, or infrastructure.
- Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations.

Why join us:
- Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
- 100% remote.
- Flexibility regarding working hours.
- Full-time position.
- Comprehensive online onboarding program with a "Buddy" from day 1.
- Cooperation with top-tier engineers and experts.
- Unlimited access to the Udemy learning platform from day 1.
- Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.
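The ETL/ELT pattern named in the requirements reduces to three composable steps: extract, transform, load. A minimal sketch follows; the record shape, the cleaning rules, and the dict-based "warehouse" are invented for illustration (a real pipeline would read from an API or database and write to BigQuery, typically orchestrated by an Airflow DAG):

```python
def extract(rows: list[dict]) -> list[dict]:
    """Pull raw records from a source (here, an in-memory stand-in)."""
    return rows

def transform(rows: list[dict]) -> list[dict]:
    """Normalize field names and types; drop records failing basic quality checks."""
    out = []
    for r in rows:
        if not r.get("id"):
            continue  # reject rows without a primary key
        out.append({"id": int(r["id"]), "country": str(r.get("country", "")).upper()})
    return out

def load(rows: list[dict], warehouse: dict[int, dict]) -> int:
    """Upsert transformed rows into the target store, keyed by id."""
    for r in rows:
        warehouse[r["id"]] = r
    return len(rows)

def run_pipeline(source: list[dict], warehouse: dict[int, dict]) -> int:
    """One pipeline run: extract -> transform -> load, returning rows loaded."""
    return load(transform(extract(source)), warehouse)
```

In ELT the transform step would instead run inside the warehouse as SQL after loading; the composition stays the same.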

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

About Lingaro:
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Requirements:
- At least 8 years of experience as a Data Engineer with Python, GCP, and BigQuery.
- Object-oriented programming, Python, and SQL.
- Strong knowledge of cloud computing platforms (Google Cloud); able to design, build, and deploy data pipelines in the cloud to ingest data from various sources like databases, APIs, or streaming platforms.
- Experience with Cloud Composer or Apache Airflow; knowledge of Dataplex is a plus.
- Experience implementing data-quality checks with frameworks such as CloudDQ or PyDeequ; good knowledge of data-quality dimensions.
- Experience working with GCP cloud-based infrastructure and systems.
- Programming skills (SQL, Python, other scripting); proficiency in data modeling techniques and database optimization. Knowledge of query optimization, indexing, and performance tuning is necessary for efficient data retrieval and processing.
- Proficiency in database management systems, both SQL (BigQuery is a must) and NoSQL; able to design, configure, and manage databases to ensure optimal performance and reliability.
- Experience with data integration tools and techniques, such as ETL and ELT; able to integrate data from multiple sources and transform it into a format suitable for analysis.

Tasks:
- You will be part of the team accountable for the design, modeling, and development of the whole GCP data ecosystem for one of our clients (Cloud Storage, Cloud Functions, BigQuery). Involvement throughout the whole process, starting with gathering, analyzing, modeling, and documenting business/technical requirements. The role includes direct contact with clients.
- Modeling data from various sources and technologies.
- Troubleshooting and supporting the most complex, high-impact problems to deliver new features and functionalities.
- Designing and optimizing data storage architectures, including data lakes, data warehouses, and distributed file systems.
- Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval.
- Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency.
- Identifying and resolving issues related to data processing, storage, or infrastructure.
- Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations.
- Training and mentoring junior data engineers, providing guidance and knowledge transfer.

Why join us:
- Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
- 100% remote.
- Flexibility regarding working hours.
- Full-time position.
- Comprehensive online onboarding program with a "Buddy" from day 1.
- Cooperation with top-tier engineers and experts.
- Unlimited access to the Udemy learning platform from day 1.
- Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.
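The data-quality dimensions mentioned in the requirements (completeness, uniqueness, validity) can each be expressed as a small predicate over a batch of rows. This is a hand-rolled sketch of the idea, not the CloudDQ or PyDeequ API; the column names and the business rule are invented:

```python
def completeness(rows: list[dict], column: str) -> float:
    """Fraction of rows where the column is present and non-null."""
    if not rows:
        return 1.0
    ok = sum(1 for r in rows if r.get(column) is not None)
    return ok / len(rows)

def uniqueness(rows: list[dict], column: str) -> bool:
    """True if no two rows share a value in the column (nulls ignored)."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return len(values) == len(set(values))

def validity(rows: list[dict], column: str, predicate) -> float:
    """Fraction of non-null values satisfying a business rule."""
    values = [r[column] for r in rows if r.get(column) is not None]
    if not values:
        return 1.0
    return sum(1 for v in values if predicate(v)) / len(values)
```

Frameworks like PyDeequ wrap the same checks in declarative "verification suites" and run them at scale on Spark; the dimensions themselves are identical.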

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Organization: Ray Business Technologies is a CMMI Level 3, ISO 27001:2013 certified company and a member of NASSCOM, AIIA, NJTC, and HYSEA.

Product Overview: AIUN is an AI-powered enterprise assistant developed by Ray Business Technologies to automate customer interactions, optimize decision-making, and streamline business workflows. Our customers span industries such as banking, logistics, legal, retail, and education, where managing high inquiry volumes, regulatory compliance, and complex data processing is critical. AIUN addresses inefficiencies in communication and decision-making. Many businesses struggle with responding to repetitive inquiries, tracking compliance updates, and managing vast amounts of data. AIUN automates email responses, extracts key insights from documents, monitors regulatory changes, and ensures seamless transitions between AI and human agents.

Roles and Responsibilities:
● Translate business requirements into analytical solutions
● Perform hands-on data analysis using statistical techniques
● Design and implement AI/ML solutions using advanced frameworks and technologies, ensuring scalability, efficiency, and alignment with business goals
● Develop and optimize GenAI frameworks like AutoGen, and NLP models tailored to specific use cases
● Build and deploy production-ready RAG (Retrieval-Augmented Generation) systems, chatbots, and other AI-driven applications using OpenAI APIs, Ollama, Llama, and LlamaParse
● Leverage Azure cloud services along with event-driven architecture in Python to deliver high-performing AI solutions
● Apply advanced prompt engineering techniques, including Chain of Thought (CoT) prompting, to enhance AI model interactions
● Use Docker effectively, including executing Docker commands for containerization and deployments in cloud environments
● Ensure solutions adhere to best practices in system design, addressing trade-offs, security, performance, and efficiency
● Perform code reviews and regression tests, and triage and fix issues to ensure code quality
● Collaborate with others inside the project team to accomplish project objectives
● Lead a lean team of senior and associate software engineers and data scientists
● Communicate complex technical concepts effectively to both technical and non-technical stakeholders

Qualifications:
● Bachelor's/Master's degree in Engineering, Information Systems, Statistics, Math, Computer Science, or a related field, and 6+ years of engineering work experience
● Proficiency in Python (mandatory)
● Skill set: NLP, NLU, NLI
● Experience working with both structured and unstructured data
● 4+ years of hands-on experience with AI/ML frameworks (e.g., PyTorch, TensorFlow) and programming in Python
● Demonstrated experience with RAG systems, chatbot development, and GenAI technologies like LLM fine-tuning and OpenAI APIs
● Deep understanding of AutoGen components and advanced NLP techniques
● Familiarity with best practices in system design, including security, performance optimization, and scalability

Levels of Responsibility:
● Work under supervision
● Collaborate with the founders and CTO on the vision and direction of the product
● Requires verbal and written communication skills to convey information; may require basic negotiation, influence, tact, etc.
● Tasks do not have defined steps; planning, problem-solving, and prioritization must occur to complete tasks effectively
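Chain-of-Thought prompting, named in the responsibilities above, amounts to seeding the prompt with worked step-by-step examples and asking the model to reason before answering. A minimal prompt-builder sketch; the arithmetic examples are invented for illustration:

```python
def build_cot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot Chain-of-Thought prompt.

    Each example pairs a question with a worked, step-by-step answer, nudging
    the model to produce its own reasoning before stating a final answer.
    """
    parts = []
    for q, worked in examples:
        parts.append(f"Q: {q}\nA: {worked}")
    # The trailing cue asks the model to show its reasoning for the new question.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)
```

The resulting string would be sent as the user message to a chat-completion API; zero-shot CoT keeps only the trailing cue and drops the examples.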

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Role Title - Senior AI/ML Analyst
Role Type - Full time
Role Reports to - Chief Technology Officer
Work Location - Plenome Technologies, 8th floor, E Block, IITM Research Park, Taramani

Job Overview
The Technical Lead will drive our AI strategy and implementation while managing a team of developers. Key responsibilities include architecting LLM solutions, ensuring scalability, implementing multilingual capabilities, and developing healthcare-specific AI models. You will oversee the development of AI agents that can understand and process medical information, interact naturally with healthcare professionals, and handle complex medical workflows. This includes ensuring data privacy, maintaining medical accuracy, and adapting models for different healthcare contexts.

Job Specifications
Educational Qualifications - Any UG/PG graduates
Professional Experience - 4+ years of experience in generative AI, LLMs, and ML development

Key Job Responsibilities

ML applications & training
· Understanding of machine learning concepts and experience with ML frameworks such as PyTorch, TensorFlow, or others
· Experience taking ML applications to production on web or mobile platforms

NLP & feature engineering
· Experience developing customized AI-powered features from scratch to production, involving NLP and other models
· Designing, deploying, and subsequently training multimodal applications based on clinical requirements

LLMs & fine-tuning
· Experience with open-source LLMs (preferably Llama models) and fine-tuning on client data and open-source data
· Experience with LLM frameworks such as LangChain or LlamaIndex, and with vector databases
· Implementing RAG architecture to enhance model accuracy with real-time retrieval from clinical databases and medical literature

Data pipelines & architecture
· Designing end-to-end clinical AI applications, from data ingestion to deployment in clinical settings with integrations
· Experience with Docker and Kubernetes for application serving at large scale, and with developing data pipelines and training workflows

API development
· Experience deploying LLM models on cloud platforms (AWS, Azure, or others)
· Experience with backend and API development for external integrators

Documentation & improvements
· Version control with Git, and ticketing bugs and features with tools like Jira or Confluence

Behavioral competencies

Attention to detail
· Ability to maintain accuracy and precision in financial records, reports, and analysis, ensuring compliance with accounting standards and regulations.

Integrity and Ethics
· Commitment to upholding ethical standards, confidentiality, and honesty in financial practices and interactions with stakeholders.

Time management
· Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals.

Adaptability and Flexibility
· Capacity to adapt to changing business environments, new technologies, and evolving accounting standards, while remaining flexible in response to unexpected challenges.

Communication & collaboration
· Experience presenting to stakeholders and executive teams
· Ability to bridge technical and non-technical communication
· Excellence in written documentation and process guidelines to work with other frontend teams

Leadership competencies

Team leadership and team building
· Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards
· Follow Agile methodology and conduct Scrum meetings for sync-ups

Strategic Thinking
· Ability to develop and implement long-term goals and strategies aligned with the organization's vision
· Ability to adopt new tech and handle tech debt to bring the team up to speed with client requirements

Decision-Making
· Capable of making informed and effective decisions, considering both short-term and long-term impacts
· Insight into resource allocation and sprint building for various projects

Team Building
· Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members

Code reviews
· Troubleshooting, weekly code reviews, feature documentation and versioning, and standards improvement

Improving team efficiency
· Research and integrate AI-powered development tools (GitHub Copilot, Amazon CodeWhisperer)

Added advantage points

Regulatory compliances
· Experience with HIPAA- and GDPR-compliant software and data storage systems
· Experience working with PII data and analytical data in highly regulated domains (finance, healthcare, and others)
· Understanding of healthcare data standards and codes (FHIR, SNOMED) for data engineering

AI safety measures
· Knowledge of privacy protection and anti-data-leakage practices in AI deployments

Interested candidates can share updated resumes to the ID below.
Contact Person - Janani Santhosh - Senior HR Professional
Email ID - careers@plenome.com

Posted 1 week ago

Apply

0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

About the Role
We're building intelligent agents powered by LLMs (like GPT), automation tools (n8n, Loveable, LangChain, Supabase), and external APIs. If you're passionate about AI workflows, prompt engineering, autonomous agents, and real-world use cases, this role is for you.

Responsibilities
Build and deploy AI agents that use LLMs for autonomous decision-making.
Integrate APIs, databases, and business logic into agent workflows.
Develop RAG (Retrieval-Augmented Generation) pipelines using vector databases (e.g., Pinecone, Supabase, Weaviate).
Fine-tune and optimize prompts for specific tasks or domains.
Collaborate with product and engineering to create real-world automation solutions.
Deploy to production.

Required Skills
Hands-on experience with GPT APIs, OpenAI, Gemini, or similar LLMs.
Familiarity with n8n, LangChain, AutoGen, or similar agent frameworks.
Strong understanding of JSON, APIs, webhooks, and async data flows.
Experience in Python or JavaScript (Node.js).

Bonus Points
Knowledge of Supabase, Pinecone, or Redis Vector.
Experience deploying agents via web or mobile interfaces.
Background in workflow automation or chatbot development.
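As an illustration of the RAG pipeline pattern this posting describes, here is a minimal, stdlib-only sketch: a toy bag-of-words embedding stands in for a real embedding model, and an in-memory list stands in for a vector database such as Pinecone or Weaviate. All names, the vocabulary, and the documents are illustrative assumptions, not part of the role.

```python
import math

# Toy embedding: map text onto counts over a tiny fixed vocabulary.
# A real pipeline would call a learned embedding model here.
VOCAB = ["refund", "shipping", "invoice", "password", "reset"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(term)) for term in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in "vector database": documents stored with their embeddings.
DOCS = [
    "To reset your password use the account settings page",
    "Refund requests are processed within five business days",
    "Shipping usually takes three to seven days",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    """Assemble retrieved context plus the question for an LLM call."""
    context = "\n".join(retrieve(query, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In production, `embed` would call an embedding model and `INDEX` would live in a vector store, but the retrieve-then-prompt flow stays the same shape.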

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Engineering at Innovaccer With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point into valuable insights. Join us and be part of a team that's turning the vision of better healthcare into reality—one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world. About The Role Technology that once promised to simplify patient care has, in many cases, created more complexity. At Innovaccer, we tackle this challenge by leveraging the vast amount of healthcare data available and replacing long-standing issues with intelligent, data-driven solutions. Data is the foundation of our innovation. We are looking for a Senior AI Engineer who understands healthcare data and can build algorithms that personalize treatments based on a patient's clinical and behavioral history. This role will help define and build the next generation of predictive analytics tools in healthcare. 
A Day in the Life
Design and build scalable AI platform architecture to support ML development, agentic frameworks, and robust self-serve AI pipelines.
Develop agentic frameworks and a catalog of AI agents tailored for healthcare use cases.
Design and deploy high-performance, low-latency AI applications.
Build and optimize ML/DL models, including generative models such as Transformers and GANs.
Construct and manage data ingestion and transformation pipelines for scalable AI solutions.
Conduct experiments and statistical analysis, and derive insights to guide development.
Collaborate with data scientists, engineers, product managers, and business stakeholders to translate AI innovations into real-world applications.
Partner with business leaders and clients to understand pain points and co-create scalable AI-driven solutions.

Preferred Skills
Proficiency in Python for building scalable, high-performance AI applications.
Experience with Docker, Kubernetes, and AWS/Azure.
LLM optimization and deployment at scale.

Requirements: What You Need
3+ years of software engineering experience with strong API development skills.
3+ years of experience in data science, including at least 1 year building generative AI pipelines, agents, and RAG systems.
Strong Python programming skills, particularly in enterprise application development and optimization.
Experience with frameworks like LangChain, vector databases, embedding models, and Retrieval-Augmented Generation (RAG) design.
Familiarity with at least one ML platform.

Benefits: Here's What We Offer
Generous Leave Policy: Up to 40 days of leave annually.
Parental Leave: One of the industry's best parental leave policies.
Sabbatical Leave: Take time off for upskilling, research, or personal pursuits.
Health Insurance: Comprehensive coverage for you and your family.
Pet-Friendly Office*: Bring your furry friends to our Noida office.
Creche Facility for Children*: On-site care at our India offices.
*Pet-friendly and creche facilities are available at select locations only (e.g., Noida for pets).

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Technical Architect - Python & AI/ML

About Us: Headquartered in Sunnyvale, with offices in Dallas and Hyderabad, Fission Labs is a leading software development company specializing in crafting flexible, agile, and scalable solutions that propel businesses forward. With a comprehensive range of services, including product development, cloud engineering, big data analytics, QA, DevOps consulting, and AI/ML solutions, we empower clients to achieve sustainable digital transformation that aligns seamlessly with their business goals.

Key Responsibilities:
· Design and architect complex Generative AI solutions using AWS technologies
· Develop advanced AI architectures incorporating state-of-the-art GenAI technologies
· Create and implement Retrieval-Augmented Generation (RAG) and GraphRAG solutions
· Architect scalable AI systems using AWS Bedrock and SageMaker
· Design and implement agentic AI systems with advanced reasoning capabilities
· Develop custom AI solutions leveraging vector databases and advanced machine learning techniques
· Evaluate and integrate emerging GenAI technologies and methodologies

Technical Expertise Requirements

Generative AI Technologies · Expert-level understanding of:
o Retrieval-Augmented Generation (RAG)
o GraphRAG methodologies
o LoRA (Low-Rank Adaptation) techniques
o Vector database architectures
o Agentic AI design principles

AWS AI Services · Comprehensive expertise in:
o AWS Bedrock
o Amazon SageMaker
o AWS AI/ML services ecosystem
o Cloud-native AI solution design

Technical Skills
· Advanced Python programming for AI/ML applications
· Deep understanding of:
o Large Language Models (LLMs)
o Machine learning architectures
o AI model fine-tuning techniques
o Prompt engineering
o AI system design and integration

Core Competencies
· Advanced AI solution architecture
· Machine learning model optimization
· Cloud-native AI system design
· Performance tuning of GenAI solutions
· Enterprise AI strategy development

Technical Stack
· Programming Languages: Python (required)
· Cloud Platform: AWS
· AI Technologies: Bedrock, SageMaker, vector databases
· Machine Learning Frameworks: PyTorch, TensorFlow, Hugging Face
· AI Integration Tools: LangChain, LlamaIndex

We Offer:
· Opportunity to work on high-impact business challenges from top global clientele.
· Vast opportunities for self-development, including online university access and sponsored certifications.
· Sponsored tech talks, industry events, and seminars to foster innovation and learning.
· Generous benefits package including health insurance, retirement benefits, flexible work hours, and more.
· Supportive work environment with forums to explore passions beyond work.

This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Join flutrr and Shape the Future of AI-Powered Experiences!

Are you passionate about building intelligent systems that connect with users on a deeper level? Do you thrive on solving complex problems at the intersection of machine learning, recommendation systems, and generative AI? flutrr is looking for a creative and driven AI/ML Engineer to join our growing team in Kolkata! In this role, you will be at the heart of our innovation, designing and building the next generation of multi-modal recommendation and personalization engines that power our platform.

Your Key Responsibilities:
🧠 Design & Build: Architect and develop sophisticated multi-modal recommendation systems leveraging a rich mix of text, images, and user behavior data.
📈 Optimize & Rank: Implement and fine-tune advanced ranking models and end-to-end personalization pipelines to enhance user engagement.
🤖 Embed & Extract: Utilize state-of-the-art models (like CLIP and BERT) and services (OpenAI APIs) to extract powerful text and image embeddings.
✍️ Prompt & Perfect: Take the lead on prompt engineering to create intuitive and effective language-based AI features.
🔍 Retrieve & Rank: Construct efficient retrieval and ranking pipelines using vector databases like FAISS or Pinecone for lightning-fast similarity search.
⚙️ Develop & Deploy: Build, maintain, and deploy robust ML APIs using FastAPI or Flask to serve our models at scale.
🧩 Integrate & Innovate: Seamlessly integrate third-party AI APIs or custom-trained models to continuously push the boundaries of what's possible.

What You Bring to the Table:
Experience: 3-5 years of hands-on experience in an AI/ML engineering role.
Python Proficiency: Strong command of Python and its core data science/ML libraries (e.g., PyTorch, TensorFlow, Scikit-learn, Pandas).
RecSys Expertise: Proven experience with recommendation systems, including collaborative filtering, content-based, hybrid models, and ranking algorithms.
Multi-Modal ML: Skilled in combining diverse data types (structured data, text via BERT/GPT, and images via CLIP/CNNs) to build holistic models.
Vector Search: Familiarity with vector similarity search concepts and tools (FAISS, Pinecone, Milvus).
LLM & Embeddings: Comfortable working with Large Language Models (LLMs), prompt engineering, and various embedding techniques.
API Development: Proficient in building and deploying production-ready APIs with FastAPI or Flask.
RAG Systems: A solid understanding of Retrieval-Augmented Generation (RAG) systems and frameworks like LangChain or LlamaIndex.

✨ Bonus Points For:
Exposure to or a keen interest in Generative AI for text, music, or other media.
Prior experience working with a fast-paced startup.

Why flutrr?
Be part of a dynamic and innovative team in the heart of Kolkata.
Work on cutting-edge AI problems with a real-world impact.
A culture of learning, collaboration, and growth.
Competitive salary and employee benefits.

Ready to make your mark? If you're excited by this challenge, we'd love to hear from you! Please apply directly on LinkedIn or send your resume and a brief introduction to careers@flutrr.com.
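The retrieve-then-rank idea behind personalization pipelines like those above can be sketched in a few lines: after candidate retrieval, blend a content-similarity score with a user-behavior signal into one ranking score. The items, scores, and the 0.7 blend weight here are illustrative assumptions, not flutrr's actual model.

```python
def hybrid_score(similarity: float, engagement: float, alpha: float = 0.7) -> float:
    """Weighted blend of content similarity and a normalized engagement signal."""
    return alpha * similarity + (1 - alpha) * engagement

def rank(candidates: list[dict], k: int = 2) -> list[str]:
    """Re-rank retrieved candidates by the blended score; return top-k ids."""
    scored = sorted(
        candidates,
        key=lambda c: hybrid_score(c["similarity"], c["engagement"]),
        reverse=True,
    )
    return [c["id"] for c in scored[:k]]

# Hypothetical candidates as they might come back from a vector index,
# each carrying a similarity score and a behavior-derived engagement score.
candidates = [
    {"id": "post_a", "similarity": 0.9, "engagement": 0.1},
    {"id": "post_b", "similarity": 0.4, "engagement": 0.9},
    {"id": "post_c", "similarity": 0.7, "engagement": 0.6},
]
```

Production rankers learn the blend (or a full scoring function) from engagement data instead of hand-tuning `alpha`, but the retrieve/score/sort structure is the same.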

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Backend Development
Proficient in Python (2 to 5 years of experience)
Hands-on experience with FastAPI or Flask

Data & Search Technologies
Experience with vector databases such as PGVector, OpenSearch, and FAISS
Implementation of similarity search and embedding-based retrieval

System Design & Performance
Ability to design and maintain scalable, observable, and high-performance services

Bonus Skills (Nice to Have)
Experience with React or frontend integration

Posted 1 week ago

Apply

0 years

1 - 2 Lacs

Chandigarh

On-site

MARUTI Spare parts and Accessories Dispatcher. The key responsibilities include material picking and put-away, arranging materials on shelves, cleaning, packaging materials for deliveries, stocktaking, and ensuring quality standards are met. The dispatcher will assist in receiving, checking, identifying, and storing materials and will retrieve and deliver materials to end users with proper documentation. Key performance indicators include speedy and accurate binning/retrieval of materials and prompt delivery of materials to users. Job Type: Full-time Pay: ₹13,000.00 - ₹18,000.00 per month Benefits: Cell phone reimbursement Work Location: In person

Posted 1 week ago

Apply

3.5 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Job Summary:
We are seeking a skilled and experienced Python Developer to join our dynamic development team. The ideal candidate will have at least 3.5 years of hands-on experience in Python programming and must be proficient in building and integrating REST APIs, working with the Django web framework, and writing optimized SQL queries.

Key Responsibilities:
● Design, develop, and maintain robust, scalable, and high-performance backend services using Python and Django.
● Create and consume RESTful APIs for web and mobile applications.
● Collaborate with frontend developers, QA teams, and product managers to deliver high-quality products.
● Write clean, maintainable, and efficient code following best practices.
● Develop database schemas, write complex SQL queries, and ensure optimal data storage and retrieval performance.
● Debug and troubleshoot issues across applications and services.
● Participate in code reviews and mentor junior developers as required.
● Contribute to all phases of the development lifecycle, from design to deployment.

Required Skills & Qualifications:
● Minimum 3.5 years of professional experience in Python development.
● Strong experience with the Django framework.
● Solid understanding of RESTful API design, development, and integration.
● Good working knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL).
● Familiarity with Git, CI/CD pipelines, and Agile/Scrum methodologies.
● Strong problem-solving skills and the ability to work independently as well as collaboratively.

Good to Have:
● Knowledge of ORM (Object Relational Mapping) tools such as Django ORM or SQLAlchemy.
● Experience with cloud platforms (AWS, Azure, GCP) and containerization tools (Docker, Kubernetes).
● Familiarity with testing frameworks (PyTest, UnitTest) and code quality tools.
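As a small illustration of the schema-design and optimized-retrieval skills this posting asks for, here is a self-contained sketch using Python's built-in sqlite3. The table, columns, and data are hypothetical; a Django project would normally express this through models and the ORM, but the underlying SQL concerns (indexing the filter column, aggregating in the database) are the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        total REAL NOT NULL,
        created TEXT NOT NULL
    )
""")
# An index on the commonly filtered column keeps retrieval fast as data grows.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

rows = [
    ("alice", 120.0, "2024-01-05"),
    ("bob", 80.0, "2024-01-06"),
    ("alice", 45.5, "2024-01-07"),
]
conn.executemany(
    "INSERT INTO orders (customer, total, created) VALUES (?, ?, ?)", rows
)

def customer_total(customer: str) -> float:
    # Aggregate in SQL rather than in Python to minimize data transferred.
    cur = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer = ?",
        (customer,),
    )
    return cur.fetchone()[0]
```

Parameterized queries (the `?` placeholders) also avoid SQL injection, which matters just as much when hand-writing SQL behind a REST API.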

Posted 1 week ago

Apply

0 years

1 - 1 Lacs

Puducherry

On-site

Designation - Warehouse Executive
Location - Casablanca, Pondicherry
Experience - 6 months to 1 year
Qualification - Any Bachelors

· Manage stock movement between the store and warehouse.
· Receive and inspect return parcels, updating inventory details.
· Check product condition and verify brand authenticity.
· Coordinate sales returns and ensure proper documentation.
· Conduct stock audits and generate regular reports.
· Oversee warehousing tasks like storage, retrieval, and dispatch.
· Communicate with vendors and suppliers regarding stock levels.
· Handle documentation for stock allocation and transfers.
· Assist with stock reordering and purchase order management.
· Implement improvements in stock handling and inventory processes.

Interested candidates can send their resume to mercy@hidesign.com

Job Types: Full-time, Permanent
Pay: ₹10,000.00 - ₹15,000.00 per month
Application Question(s):
How many years of warehouse experience do you have?
What is your current in-hand salary per month?
Are you from Pondy?
How many days of notice period need to be served?
Work Location: In person

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description - Business Intelligence Developer (OAC, PowerBI, ETL, Data Modelling)
Competency: Oracle ERP Analytics

We are seeking an experienced Business Intelligence Developer with 8+ years of experience and expertise in Oracle Analytics Cloud (OAC), PowerBI, ETL tools, and data modelling to join our dynamic team. The successful candidate will be responsible for developing and maintaining scalable data models, creating insightful analytics dashboards, and managing ETL workflows to support data-driven decision-making across the organization. They will work closely with customers, data architects, software developers, and business analysts on product development. The candidate will be a highly skilled individual and will be accountable for their career development and growth at EY.

Responsibilities:
Collaborate with stakeholders to understand data requirements and translate business needs into data models.
Design and implement effective data models to support business intelligence activities.
Develop and maintain ETL processes to ensure data accuracy and availability.
Create interactive dashboards and reports using Oracle Analytics Cloud (OAC) and PowerBI.
Work with stakeholders to gather requirements and translate business needs into technical specifications.
Optimize data retrieval and develop dashboard visualizations for performance efficiency.
Ensure data integrity and compliance with data governance and security policies.
Collaborate with IT and data teams to integrate BI solutions into the existing data infrastructure.
Conduct data analysis to identify trends, patterns, and insights that can inform business strategies.
Provide training and support to end-users on BI tools and dashboards.
Document all processes, models, and activities to maintain transparency and facilitate knowledge sharing.
Stay up to date with the latest BI technologies and best practices to drive continuous improvement.

Qualifications:
Bachelor's degree in computer science, Information Systems, Business Analytics, or a related field.
Proven experience with Oracle Analytics Cloud (OAC), PowerBI, and other BI tools.
Strong experience with ETL tools (SSIS, Informatica, Dell Boomi, etc.) and data warehousing solutions.
Proficiency in data modelling techniques and best practices.
Solid understanding of SQL and experience with relational databases.
Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud).
Excellent analytical, problem-solving, and project management skills.
Ability to communicate complex data concepts to non-technical stakeholders.
Detail-oriented with a strong focus on accuracy and quality.
Well-developed business acumen and a strong analytical, problem-solving attitude, with the ability to visualize scenarios, possible outcomes, and operating constraints.
Strong consulting skills with proven experience in client and stakeholder management and collaboration abilities.
Good written and oral communication skills, the ability to make impactful presentations, and expertise with Excel and PowerPoint.
Good to have: knowledge of data security and controls to address customers' data privacy needs, in line with regional regulations such as GDPR, CCPA, etc.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

1.0 years

1 - 2 Lacs

India

On-site

Company name: KLM AXIVA FINVEST LTD.
Corporate Office: Edappally, Kochi
Website: www.klmaxiva.com

We have an opening for a CCTV Surveillance Executive; candidates with experience in a similar profile can apply immediately.

Job description:
Conduct regular surveillance of branch activities to ensure compliance with company policies and procedures.
Identify and report any suspicious or unauthorized activities to the appropriate authorities.
Oversee and manage vault operations, including the secure storage and retrieval of valuables.
Compile and generate daily reports on branch activities, highlighting any irregularities or incidents.
Collaborate with external vendors to ensure the timely provision of services related to surveillance and security.

Pay: ₹13,000 - ₹18,000 per month (no food & accommodation)
Schedule: Rotational shifts: 9.30 AM - 7.00 PM, 7.00 PM - 5.00 AM
Job Type: Permanent
Pay: ₹13,000.00 - ₹20,000.00 per month
Benefits: Provident Fund
Experience: CCTV Surveillance: 1 year (Preferred)
Work Location: In person

Posted 1 week ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Are you a hands-on technical leader driven by a passion for creating exceptional developer tools and infrastructure? As our Developer Enablement Leader, you will be instrumental in shaping the future of our engineering practices. You'll lead the charge in evaluating, implementing, and driving the adoption of cutting-edge technologies and best practices, from CI/CD pipelines and testing frameworks to monitoring solutions. Your work will directly empower our engineers to build high-quality software faster and more efficiently. You will also play a key role in enhancing our internal development platform, ensuring it provides a robust and scalable foundation for all our teams. If you possess deep expertise in DevOps principles, a relentless drive for automation, and a proven history of building and scaling developer infrastructure, we encourage you to apply.

Responsibilities

What You'll Do:

Fuel Developer Productivity: Your primary mission will be to empower our development teams to be as productive and efficient as possible. This means:
Orchestrating Innovation: Conducting experimentation and building products that accelerate developer flow.
Cloud-Native Empowerment: Playing a vital role in enabling developers to build and deploy applications seamlessly on our chosen cloud platforms (GCP, OpenShift, GKE), making the cloud a natural extension of their development workflow.
Inner Source Evangelist: Collaborating with teams to cultivate a culture of knowledge sharing and innovation by encouraging developers to contribute to internal projects and collaborate across team boundaries.
Promoting New Products with AI: Designing, developing, and deploying AI-powered solutions for code acceleration and tech-debt reduction, leveraging Large Language Models (LLMs), agentic AI, and Retrieval-Augmented Generation (RAG).
DevSecOps Champion: Driving the adoption of DevSecOps principles and practices, embedding security into every stage of the development lifecycle.

Lead and Inspire: You'll be a technical leader, mentor, and advocate for our development teams. This means:
Providing Expert Guidance: Sharing your deep knowledge on a variety of topics related to developer tooling, best practices, and emerging technologies.
Participating in Code Reviews: Providing constructive feedback on code quality, architectural alignment, and adherence to best practices.
Staying Ahead of the Curve: Keeping your finger on the pulse of the latest industry trends and emerging technologies in the developer tooling space.
Championing Continuous Improvement: Continuously seeking ways to improve our platform, processes, and the overall developer experience.

Collaborate and Communicate: You'll be a critical bridge between development teams and other stakeholders, ensuring everyone is aligned and working towards a common vision. This means:
Working Closely with Teams: Collaborating with development teams, architects, product managers, security teams, and the "Tools" team (if applicable).
Communicating Effectively: Explaining complex technical concepts clearly to both technical and non-technical audiences.
Presenting at Events: Sharing your knowledge and insights at team meetings, workshops, and conferences, inspiring others to embrace new technologies and best practices.
Acting as a Liaison: Representing the needs of development teams to other departments, ensuring their voices are heard.

Qualifications

What You'll Bring:
A bachelor's degree in computer science or a related field.
12+ years of experience in software development, with a focus on Java.
A deep understanding of object-oriented design principles and patterns.
A proven track record of driving adoption of developer tools and best practices.
Hands-on experience with modern development tools and technologies (e.g., Git, Gradle, Tekton, OpenShift/Kubernetes, SonarQube, Checkmarx, FOSSA).
Experience with cloud platforms (e.g., PCF, Azure, GCP).
Familiarity with agile development methodologies and a passion for Extreme Programming (XP).
Excellent communication, interpersonal, and presentation skills.
Strong problem-solving and analytical skills.
The ability to work independently and as part of a team.

Additional Skills:
Experience with developer enablement initiatives.
Experience with DevSecOps practices.
Experience with API design and development.
Experience with microservices architecture.
Experience mentoring and coaching junior developers.
Knowledge of Large Language Models (LLMs), agentic AI, and Retrieval-Augmented Generation (RAG).

Key Skills:
Java and Spring Boot (mandatory); Angular and React experience is good to have.
Developer enablement.
CI/CD.
Cloud technologies (GCP preferred).
Agile development.
Communication.
Problem-solving.
Technical leadership.
Software architecture patterns.
Test-Driven Development (TDD).
SQL databases (e.g., SQL Server, PostgreSQL, Oracle).
NoSQL databases (e.g., MongoDB, Cassandra).

Why Join the Dev Tools & Enablement Team?
This isn't just about lines of code; it's about empowering an entire organization to innovate and create. You'll be working alongside a passionate team dedicated to making the developer experience the best it can be. We offer a collaborative environment where you can learn, grow, and make a real impact. You'll have the opportunity to shape the future of how we build software, from the ground up.

Ready to Empower Our Developers?
If you're ready to take on this exciting challenge and help us create a world-class development environment, we encourage you to apply. We're looking for someone who is passionate, driven, and committed to making a difference in the lives of our developers. Join us and help us build the future of software development!

Posted 1 week ago

Apply

5.0 years

8 - 9 Lacs

Hyderābād

On-site

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.

Job responsibilities
Develop appropriate level designs and ensure consensus from peers where necessary.
Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
Work with software engineers and teams to design, develop, test, and implement solutions within applications.
Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
Understand leadership objectives and proactively address issues before they impact customers.
Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark.
Monitor and optimize the performance of data systems and pipelines.
Implement best practices for data storage, retrieval, and processing.
Maintain comprehensive documentation of data systems, processes, and workflows.
Ensure compliance with data governance and security policies.

Required qualifications, capabilities, and skills
Formal training or certification in software engineering concepts and 5+ years of applied experience.
Formal training or certification in AWS/Databricks with 10+ years of applied experience.
Expertise in programming languages such as Python and PySpark.
10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
Proficiency in design, architecture, and development using AWS services, Databricks, Spark, Snowflake, etc.
Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
Familiarity with container and container-orchestration technologies such as ECS, Kubernetes, and Docker.
Ability to troubleshoot common Big Data and cloud technologies and issues.
Practical cloud-native experience.

Preferred qualifications, capabilities, and skills
5+ years of experience in leading and developing data solutions in the AWS cloud.
10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.
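To illustrate the extract-transform-load stages such pipelines implement, here is a stdlib-only Python sketch. A real pipeline at this scale would use PySpark on Databricks, but the stage boundaries (parse, validate/clean, write to a target store) are the same; all record fields and sample data are illustrative assumptions.

```python
def extract(raw_lines: list[str]) -> list[dict]:
    """Parse raw CSV-like lines into records."""
    records = []
    for line in raw_lines:
        name, amount = line.split(",")
        records.append({"name": name.strip(), "amount": amount.strip()})
    return records

def transform(records: list[dict]) -> list[dict]:
    """Clean and validate: coerce types, drop rows that fail quality checks."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine bad records here
        if amount >= 0:
            out.append({"name": r["name"].lower(), "amount": amount})
    return out

def load(records: list[dict], store: dict) -> None:
    """Aggregate into the target store (a stand-in for a warehouse table)."""
    for r in records:
        store[r["name"]] = store.get(r["name"], 0.0) + r["amount"]

store: dict = {}
load(transform(extract(["Ada, 10.5", "Bob, oops", "ada, 2.0"])), store)
```

In Spark the same shape appears as `spark.read` (extract), DataFrame transformations with quality filters (transform), and a `write` to Delta or a warehouse (load), with the engine handling distribution.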

Posted 1 week ago

Apply

3.0 years

7 - 10 Lacs

Hyderābād

On-site

About the Job: Sanofi is a pioneering global healthcare company committed to advancing the miracles of science to enhance the well-being of individuals worldwide. Operating in over 100 countries, our dedicated team is focused on reshaping the landscape of medicine, transforming the seemingly impossible into reality. We strive to provide life-changing treatment options and life-saving vaccines, placing sustainability and social responsibility at the forefront of our aspirations. Embarking on an expansive digital transformation journey, Sanofi is committed to accelerating its data transformation and embracing artificial intelligence (AI) and machine learning (ML) solutions. This strategic initiative aims to expedite research and development, enhance manufacturing processes, elevate commercial performance, and deliver superior drugs and vaccines to patients faster, ultimately improving global health and saving lives.

What you will be doing: As a dynamic Data Science practitioner, you are passionate about challenging the status quo and ensuring the development and impact of Sanofi's AI solutions for the patients of tomorrow. You are an influential leader with hands-on experience deploying AI/ML and GenAI solutions, applying state-of-the-art algorithms with technically robust lifecycle management. Your keen eye for improvement opportunities and demonstrated ability to deliver solutions in cross-functional environments make you an invaluable asset to our team.

Main Responsibilities: This role demands a dynamic and collaborative individual with a strong technical background, capable of leading the development and deployment of advanced machine learning while maintaining a focus on meeting business objectives and adhering to industry best practices. Key highlights include:
- Model Design and Development: Lead the development of custom Machine Learning (ML) and Large Language Model (LLM) components for both batch and stream-processing-based AI/ML pipelines. Create model components, including data ingestion, preprocessing, search and retrieval, Retrieval-Augmented Generation (RAG), and fine-tuning, ensuring alignment with technical and business requirements. Develop and maintain full-stack applications that integrate ML models, focusing on both backend processes and frontend interfaces.
- Collaborative Development: Work closely with data engineers, MLOps, software engineers, and other tech team members to collaboratively design, develop, and implement ML model solutions, fostering a cross-functional and innovative environment. Contribute to both backend and frontend development tasks to ensure seamless user experiences.
- Model Evaluation: Collaborate with other data science team members to develop, validate, and maintain robust evaluation solutions and tools for assessing model performance, accuracy, consistency, and reliability during development and User Acceptance Testing (UAT). Implement model optimizations to enhance system efficiency based on evaluation results.
- Model Deployment: Work closely with the MLOps team to facilitate the deployment of ML and GenAI models into production environments, ensuring reliability, scalability, and seamless integration with existing systems. Contribute to the development and implementation of deployment strategies for ML and GenAI models. Implement frontend interfaces to monitor and manage deployed models effectively.
- Internal Collaboration: Collaborate closely with product teams, business stakeholders, and data science team members to ensure the smooth integration of machine learning models into production systems. Foster strong communication channels and cooperation across different teams for successful project outcomes.
- Problem Solving: Proactively troubleshoot complex issues related to machine learning model development and data pipelines. Innovatively develop solutions to overcome challenges, contributing to continuous improvement in model performance and system efficiency.

Key Functional Requirements & Qualifications:
Education and experience: PhD in mathematics, computer science, engineering, physics, statistics, economics, operations research, or a related quantitative discipline with strong coding skills, OR a Master's degree in a relevant domain with 3+ years of data science experience.
Technical skills:
- Disciplined AI/ML development, including CI/CD and orchestration.
- Cloud and high-performance computing proficiency (AWS, GCP, Databricks, Apache Spark).
- Experience deploying models in agile, product-focused environments.
- Full-stack AI application expertise preferred, including experience with front-end frameworks (e.g., React) and backend technologies.
Communication and collaboration: Excellent written and verbal communication, and a demonstrated ability to collaborate with cross-functional teams (e.g., business, product, and digital).

Why Choose Us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs.
Sanofi achieves its mission, in part, by offering rewarding career opportunities which inspire employee growth and development. Our 6 Recruitment Principles clarify our commitment to you and your role in driving your career.
- Our people are responsible for managing their career.
- Sanofi posts all non-executive opportunities for our people.
- We give priority to internal candidates.
- Managers provide constructive feedback to all interviewed internal candidates.
- We embrace diversity to hire the best talent.
- We expect managers to encourage career moves across the whole organization.

Pursue Progress. Discover Extraordinary. Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. So, let's be those people.
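The responsibilities above name Retrieval-Augmented Generation (RAG) as a core component. Stripped of embedding models and vector stores, the retrieve-then-generate flow reduces to scoring documents against a query and splicing the winners into a prompt. The sketch below is a toy version that uses token overlap in place of real embeddings; the corpus and all wording are invented for illustration.

```python
# Minimal RAG skeleton: score documents by token overlap with the query,
# then splice the best match into a grounded prompt. Production systems
# use an embedding model and a vector store instead of token overlap;
# the retrieve-then-generate flow is the same.

def score(query, doc):
    """Fraction of query tokens that also appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query, corpus, k=1):
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Aspirin is a common pain reliever.",
    "Vaccines train the immune system.",
]
prompt = build_prompt("how do vaccines work", corpus)
```

In a real pipeline the resulting prompt would be sent to an LLM; here the interesting part is that only retrieved context, not the whole corpus, reaches the model.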

Posted 1 week ago

Apply

6.0 years

0 Lacs

Telangana

On-site

Overview: As a Senior Software Engineer, you will play a key role in designing, developing, and deploying robust and scalable software solutions. You will work on both front-end and back-end development, with a significant focus on integrating and operationalizing LLMs and agentic AI systems. You will be part of an Agile team, contributing to all phases of the software development lifecycle, from concept and design to testing, deployment, and maintenance.

Responsibilities:
- Design, develop, and maintain high-quality web applications using React or Angular, C#/.NET, and TypeScript/JavaScript.
- Develop and integrate RESTful APIs and backend services.
- Work extensively with SQL Server for database design, development, and optimization.
- Apply strong knowledge of Object-Oriented Programming (OOP), algorithms, and software design patterns to create efficient and maintainable code.
- Develop and implement solutions leveraging Large Language Models (LLMs), including effective prompting techniques and Retrieval-Augmented Generation (RAG) with vector databases.
- Design and build agentic AI systems, potentially involving multi-agent workflows and agent tools (experience with MCP is a plus).
- Contribute to and improve CI/CD pipelines for automated testing and release processes.
- Implement and manage system observation and telemetry to ensure performance and reliability.
- Develop components and scripts using Python and TypeScript as needed.
- Ensure application security by implementing security fundamentals, including OAuth 2.x and other best practices.
- Collaborate effectively within an Agile development environment, participating in sprint planning, stand-ups, and retrospectives.
- Optionally, contribute to cloud-based development initiatives.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 6+ years of professional software development experience.
- Proven experience with front-end frameworks such as React or Angular.
- Experience in C# and the .NET framework.
- Expertise in TypeScript and JavaScript.
- Solid experience with SQL Server, including database design and query optimization.
- Demonstrable experience in designing and developing REST APIs.
- Deep understanding of OOP principles, data structures, algorithms, and software design patterns.
- Hands-on experience with CI/CD tools and release pipelines.
- Experience working in an Agile development environment.
- Demonstrable experience working with LLMs (e.g., model integration, fine-tuning, prompt engineering).
- Experience with agentic frameworks and developing multi-agent workflows is highly desirable.
- Proven ability in crafting effective prompts for LLMs.
- Preferred experience with RAG architecture and vector databases (e.g., Pinecone).
- Familiarity with agent tools and frameworks (MCP experience is a plus).
- Experience with system observation, logging, and telemetry tools.
- Proficiency in Python.
- Strong understanding of security fundamentals (e.g., OAuth 2.x, OWASP).
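Several of the requirements above mention RAG with a vector database such as Pinecone. At its core, a vector database performs nearest-neighbor search over embeddings; that core operation can be sketched with hand-made three-dimensional vectors. Real embeddings come from a model and have hundreds of dimensions, and the document ids below are invented.

```python
# Nearest-neighbor search over embeddings: the core operation a vector
# database performs at scale. Vectors here are tiny hand-made stand-ins.
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, index, k=1):
    """index: list of (doc_id, vector) pairs; returns the k closest ids."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-faq",  [0.1, 0.8, 0.3]),
]
best = top_k([0.85, 0.2, 0.05], index, k=1)
# the query vector points in nearly the same direction as "refund-policy"
```

A managed vector database replaces the linear scan in `top_k` with an approximate index so the same lookup stays fast over millions of vectors.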

Posted 1 week ago

Apply

5.0 years

0 Lacs

Telangana

On-site

Overview: RealPage is at the forefront of the Generative AI revolution, dedicated to shaping the future of artificial intelligence within the PropTech domain. Our Agentic AI team is focused on driving innovation by building next-generation AI applications and enhancing existing systems with Generative AI capabilities. As part of our team, you'll contribute to RealPage's AI go-to-market (GTM) strategy, creating cutting-edge solutions that empower our users and clients. We are seeking a Senior Full Stack AI Developer to help us with the development, deployment, and scaling of advanced AI applications that address real-world challenges. In this role, you will fine-tune pre-trained foundation models, apply prompt engineering and RAG techniques, utilize agentic frameworks, and work closely with design and product teams to create impactful AI solutions that support our business objectives and transform user experiences.

Responsibilities:
- Agentic AI Engineering: Evaluate and utilize appropriate language models (e.g., GPT-4, LLaMA, Gemini, Claude) to develop AI applications such as text generation, summarization, conversational assistants, OCR, generative analytics, copilots, agents, and beyond. Assess and utilize appropriate AI tools, LLMs, vector databases, RAG (Retrieval-Augmented Generation) solutions, and agentic frameworks based on project needs.
- Master Prompt Engineering: Design effective prompts to minimize hallucinations, anticipate and resolve edge cases, and ensure the robustness of generative AI solutions.
- Tailor the Full User Experience: Write code for UIs (e.g., React, Vue, Svelte) and integrations (REST, gRPC, MCP).
- Establish Evaluation Frameworks: Develop and maintain frameworks to validate and measure LLM performance, testing models across a range of capabilities and edge cases for optimal outcomes.
- Leadership and Mentorship: Provide technical leadership and mentorship to ML engineers and data scientists, fostering a collaborative team environment and improving overall team effectiveness.
- Stakeholder Communication: Translate complex needs into clear terms for technical and non-technical stakeholders, ensuring alignment on project goals and expected outcomes.
- Cross-Functional Collaboration: Work with interdisciplinary teams, including AI engineers, software engineers, product managers, and domain experts, to create integrated agentic AI experiences.
- Stay Current in AI Advancements: Track the latest AI research, tools, and trends, and adopt innovative approaches to continuously enhance project outcomes and drive improvement, without shying away from PoCs and prototypes.
- Understand Business Needs: Cultivate a growth mindset and build an understanding of RealPage's business processes, products, and challenges to better align AI solutions with organizational goals.

Qualifications:
- Degree in Computer Science, Machine Learning, Data Science, or a related field, or equivalent industry experience.
- 5+ years in software engineering and/or data science, with at least 2 years in Generative AI, transformers, and large language models (LLMs).
- Strong proficiency in Python with modern Python tooling (e.g., Pydantic, FastAPI, asyncio), plus JavaScript, TypeScript, and SQL, with experience writing production-grade code.
- Proficiency in prompt engineering to enhance model output reliability and quality.
- Familiarity with vector embeddings, RAG architectures, and agentic frameworks for sophisticated generative AI applications.
- Expertise with cloud platforms for AI (AWS, GCP, Azure) and experience with containerization (Docker), orchestration (Kubernetes), and CI/CD practices.
- Demonstrated ability to lead projects and mentor team members in Agile environments, driving collaboration and fostering team effectiveness.
- Exceptional communication skills to effectively convey complex technical subjects to both technical and non-technical stakeholders.
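The "Establish Evaluation Frameworks" responsibility above can be illustrated with a minimal harness: run a suite of prompts through a model callable and score the answers against expected substrings. `fake_model` below is a stand-in for a real API client, and the prompts and expected answers are invented for illustration.

```python
# Minimal LLM evaluation harness: run each test case through a model
# callable and report the fraction of answers containing the expected
# substring. A real framework adds rubric scoring, latency tracking, etc.

def evaluate(model, cases):
    """cases: list of (prompt, must_contain) pairs; returns the pass rate."""
    results = []
    for prompt, must_contain in cases:
        answer = model(prompt)
        results.append(must_contain.lower() in answer.lower())
    return sum(results) / len(results)

def fake_model(prompt):
    """Stand-in for a real LLM client with canned responses."""
    canned = {"capital of France?": "The capital of France is Paris."}
    return canned.get(prompt, "I don't know.")

cases = [
    ("capital of France?", "Paris"),   # the fake model gets this right
    ("capital of Peru?", "Lima"),      # and misses this one
]
pass_rate = evaluate(fake_model, cases)
```

Because `evaluate` only depends on a callable, the same harness can wrap any model client, which is what makes regression testing across model versions practical.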

Posted 1 week ago

Apply

4.0 years

20 Lacs

Hyderābād

On-site

Job Description: We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
- Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads.
- Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services.
- Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
- SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
- Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
- Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
- Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.

Job Type: Full-time
Pay: Up to ₹2,000,000.00 per year
Experience:
- Azure: 4 years (Required)
- Python: 4 years (Required)
- SQL: 4 years (Required)
Work Location: In person
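The "SQL Development" bullet above asks for complex queries for extraction and aggregation. A small, runnable illustration of that kind of query against an in-memory SQLite database follows; the table and column names are invented, and production work would of course target a real warehouse rather than SQLite.

```python
# A small aggregation of the kind the "SQL Development" bullet describes,
# run against an in-memory SQLite database for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# Aggregate revenue per region, highest total first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY REGION ORDER BY total DESC"
).fetchall()
conn.close()
```

The same `GROUP BY`/`ORDER BY` shape carries over directly to warehouse SQL dialects; only the connection setup changes.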

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderābād

On-site

Job Information
Industry: IT Services
Date Opened: 07/03/2025
Salary: Confidential
Job Type: Full time
Work Experience: 1-3 years
City: Hyderabad (hybrid)
State/Province: Telangana
Country: India
Zip/Postal Code: 500081

Job Description: Veltris is a Digital Product Engineering Services partner committed to driving technology-enabled transformation across enterprises, businesses, and industries. We specialize in delivering next-generation solutions for sectors including healthcare, technology, communications, manufacturing, and finance. With a focus on innovation and acceleration, Veltris empowers clients to build, modernize, and scale intelligent products that deliver connected, AI-powered experiences. Our experience-centric approach, agile methodologies, and exceptional talent enable us to streamline product development, maximize platform ROI, and drive meaningful business outcomes across both digital and physical ecosystems. In a strategic move to strengthen our healthcare offerings and expand industry capabilities, Veltris has acquired BPK Technologies. This acquisition enhances our domain expertise, broadens our go-to-market strategy, and positions us to deliver even greater value to enterprise and mid-market clients in healthcare and beyond.

Position: Data Entry Operator

Job Responsibilities and Tasks:
- Accurately input, update, and maintain data in company systems or databases.
- Verify the accuracy of information and resolve any discrepancies.
- Organize and manage data files to ensure easy retrieval and access.
- Follow data entry protocols to meet confidentiality and security standards.
- Collaborate with team members to ensure data consistency across departments.
- Generate reports from data systems as requested by management.
- Perform routine quality checks to ensure data integrity.

Skills and Qualifications:
- Prior experience in data entry or a similar role is preferred but not required.
- Excellent typing speed and accuracy.
- Strong attention to detail and organizational skills.
- Proficiency in Microsoft Office Suite (Excel, Word) or similar software.
- Familiarity with database systems or CRM platforms is a plus.
- Ability to meet deadlines and manage multiple tasks efficiently.

Location/Mode: Hyderabad (hybrid)
Shift timing: 2 pm to 11 pm IST

Preferred Attributes:
- Self-motivated and proactive, with good English communication skills.
- Demonstrated ability to handle sensitive and confidential information with discretion.
- Strong organizational and time management skills.

Disclaimer: The information provided herein is for general informational purposes only and reflects the current strategic direction and service offerings of Veltris. While we strive for accuracy, Veltris makes no representations or warranties regarding the completeness, reliability, or suitability of the information for any specific purpose. Any statements related to business growth, acquisitions, or future plans, including the acquisition of BPK Technologies, are subject to change without notice and do not constitute a binding commitment. Veltris reserves the right to modify its strategies, services, or business relationships at its sole discretion. For the most up-to-date and detailed information, please contact Veltris directly.

Posted 1 week ago

Apply

10.0 - 15.0 years

5 - 10 Lacs

Gurgaon

On-site

Senior Manager EXL/SM/1422447
Services, Gurgaon
Posted On: 15 Jul 2025
End Date: 29 Aug 2025
Required Experience: 10-15 Years

Basic Section
Number Of Positions: 3
Band: C2
Band Name: Senior Manager
Cost Code: D001713
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 1500000.0000 - 2500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid (working partly from home and partly from office)

Organisational
Group: Analytics
Sub Group: Analytics - UK & Europe
Organization: Services
LOB: Analytics - UK & Europe
SBU: Analytics
Country: India
City: Gurgaon
Center: EXL - Gurgaon Center 38

Skills
Skill: DATA SCIENCE
Minimum Qualification: B.COM
Certification: No data available

Job Description
Role Overview: We are seeking a highly skilled and forward-thinking Manager – Data Science & Generative AI to lead data-driven innovation within our organization. This role requires a blend of technical expertise in machine learning and GenAI, team leadership, and strategic thinking to build scalable solutions and drive business transformation through intelligent systems. As a Manager, you will lead cross-functional teams, guide AI/ML project lifecycles, and help deploy Generative AI models into real-world applications, ranging from content generation to intelligent automation.

Key Responsibilities:
- Lead the end-to-end lifecycle of data science and Generative AI projects, from problem scoping and data exploration to model deployment and performance monitoring.
- Manage a team of data scientists and ML engineers, providing technical guidance and mentorship.
- Develop and implement advanced machine learning models including deep learning, NLP, and GenAI architectures (e.g., transformers, LLMs).
- Collaborate with product, engineering, and business teams to identify AI opportunities, define use cases, and deploy innovative solutions.
- Evaluate and fine-tune pre-trained foundation models (e.g., GPT, Claude, LLaMA, Gemini) for domain-specific applications using prompt engineering, fine-tuning, or retrieval-augmented generation (RAG).
- Lead the creation of synthetic data, content generation, and intelligent assistants using GenAI capabilities.
- Drive responsible AI practices and ensure model transparency, fairness, and compliance with ethical and regulatory guidelines.
- Contribute to strategic roadmaps for AI adoption across the enterprise.
- Present findings, models, and strategies to stakeholders and senior leadership.

Required Skills & Experience:
- 6-10 years of experience in Data Science or AI/ML, including at least 2-3 years in a managerial or lead role.
- Strong proficiency in Python and data science libraries (NumPy, Pandas, scikit-learn, TensorFlow, PyTorch).
- Proven experience in building and deploying ML models in production environments.
- Solid experience working with Generative AI tools and platforms (e.g., OpenAI APIs, Hugging Face, LangChain, Vertex AI, Azure OpenAI).
- Expertise in NLP, LLMs, prompt engineering, fine-tuning, and text generation.
- Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps pipelines.
- Strong understanding of data engineering concepts, APIs, and scalable architectures.
- Excellent problem-solving, project management, and communication skills.
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field (PhD is a plus).

Nice to Have:
- Experience with Retrieval-Augmented Generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate).
- Familiarity with model safety, explainability (XAI), and ethical AI frameworks.
- Publications or contributions in AI/ML conferences or open-source projects.
- Hands-on experience in GenAI use cases like document summarization, chatbots, code generation, or marketing automation.

Workflow
Workflow Type: Back Office
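The role above leans heavily on prompt engineering to reduce hallucination. One common pattern is a structured prompt that pins down the model's role, its constraints, and the grounding context before the user's input arrives. The sketch below shows that pattern only; all wording is illustrative and not an EXL template.

```python
# Sketch of the structured-prompt pattern behind "prompt engineering":
# fix the role, the refusal rule, and the grounding context before the
# user question, so the model has fewer ways to hallucinate.

def build_prompt(user_question, context_snippets):
    sections = [
        "You are a careful analyst. Answer only from the context below.",
        "If the context does not contain the answer, say 'Not found.'",
        "Context:",
        *context_snippets,
        f"Question: {user_question}",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    "What was Q2 revenue?",
    ["Q2 revenue was EUR 4.2M.", "Q1 revenue was EUR 3.9M."],
)
```

The explicit "say 'Not found.'" escape hatch is what the "minimize hallucinations" bullets are about: the prompt gives the model a sanctioned answer for missing information instead of inviting a guess.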

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies