
227 LLM Jobs

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Wipro Limited is a prominent technology services and consulting company dedicated to creating innovative solutions that cater to the complex digital transformation requirements of clients. With a comprehensive range of capabilities in consulting, design, engineering, and operations, Wipro assists clients in achieving their ambitious goals and establishing sustainable businesses. With a global presence of over 230,000 employees and business partners across 65 countries, Wipro is committed to supporting customers, colleagues, and communities in thriving amidst an ever-changing world. For more information, please visit www.wipro.com.

As an AI/ML Engineer at Wipro's innovation arm, Lab45, located in Bangalore, you will be at the forefront of developing cutting-edge products, platforms, and solutions. Lab45 aims to leverage cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies such as Blockchain, AR/VR, and Software Defined Vehicles to empower clients and customers in adapting to the digital landscape and achieving greater success. The collaborative environment at Lab45 fosters creativity and accelerates the ideation process throughout Wipro.

In this role, you will play a key part in designing, developing, and maintaining robust backend systems that integrate advanced AI and ML capabilities. Your responsibilities will include implementing and optimizing LangChain for efficient language model chaining, managing data using vector databases to enhance model performance, developing solutions based on large language models, Retrieval-Augmented Generation, and Graph RAG, and collaborating with cross-functional teams to drive innovation in AI-driven solutions. As a part of our dynamic team, you will be expected to stay updated on industry trends in AI and ML technologies and contribute to continuous improvement in our solutions through clean coding practices and effective documentation.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science or a related field, along with a minimum of 5 years of experience as a Software Engineer focusing on backend development. Proficiency in Python programming, familiarity with relevant frameworks and libraries in AI and ML, expertise in LangChain, vector databases, MongoDB, and Pinecone, as well as a deep understanding of LLMs, RAG, Graph RAG, and other Generative AI concepts are essential. Additionally, strong skills in software design, data structures, algorithms, version control systems like Git, CI/CD pipelines, analytical thinking, problem-solving abilities, and excellent communication skills are required. Preferred skills include experience with cloud platforms such as AWS, Google Cloud, and Azure, and knowledge of scalable architecture, containerization, and orchestration technologies like Docker and Kubernetes.

Join Wipro's reinvention journey and be a part of an organization that encourages constant evolution, growth, and empowerment. We are looking for individuals inspired by reinvention, who are ready to design their own career path and contribute to building a modern Wipro. Come to Wipro and realize your ambitions in an inclusive environment where diversity and innovation thrive. Applications from individuals with disabilities are especially encouraged.
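The Lab45 role above centers on LangChain-style model chaining backed by a vector database. As a rough, framework-agnostic illustration of that retrieve-then-generate (RAG) pattern, here is a Python sketch; the embed() and generate() helpers and the sample documents are placeholders standing in for a real embedding model, an LLM call (e.g. via LangChain), and a managed store such as Pinecone or MongoDB.

```python
# Minimal, framework-agnostic sketch of the retrieve-then-generate (RAG) pattern.
# embed() and generate() are placeholders standing in for a real embedding model
# and an LLM call; swap them for production components (LangChain, Pinecone, etc.).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy bag-of-characters embedding used only to keep the sketch self-contained.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorStore:
    """In-memory stand-in for a vector database such as Pinecone."""
    def __init__(self):
        self.items: list[tuple[str, np.ndarray]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scored = sorted(self.items, key=lambda it: float(q @ it[1]), reverse=True)
        return [text for text, _ in scored[:k]]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g. a LangChain chain over an OpenAI model).
    return f"[LLM answer grounded in prompt of {len(prompt)} chars]"

store = VectorStore()
store.add("Lab45 builds AI platforms using LangChain and vector databases.")
store.add("Graph RAG augments retrieval with relationships between documents.")

question = "What does Graph RAG add over plain RAG?"
context = "\n".join(store.search(question))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

A production pipeline would replace the toy embedding with model-generated vectors and persist them in the vector database rather than in memory.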

Posted 9 hours ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

Harness is a high-growth company that is disrupting the software delivery market. Our mission is to enable the 30 million software developers in the world to deliver code to their users reliably, efficiently, securely and quickly, increasing customers' pace of innovation while improving the developer experience. We offer solutions for every step of the software delivery lifecycle to build, test, secure, deploy and manage reliability, feature flags and cloud costs. The Harness Software Delivery Platform includes modules for CI, CD, Cloud Cost Management, Feature Flags, Service Reliability Management, Security Testing Orchestration, Chaos Engineering, and Software Engineering Insights, and continues to expand at an incredibly fast pace. Harness is led by technologist and entrepreneur Jyoti Bansal, who founded AppDynamics and sold it to Cisco for $3.7B. We're backed with $425M in venture financing from top-tier VC and strategic firms, including J.P. Morgan, Capital One Ventures, Citi Ventures, ServiceNow, Splunk Ventures, Norwest Venture Partners, Adage Capital Partners, Balyasny Asset Management, Gaingels, Harmonic Growth Partners, Menlo Ventures, IVP, Unusual Ventures, GV (formerly Google Ventures), Alkeon Capital, Battery Ventures, Sorenson Capital, Thomvest Ventures and Silicon Valley Bank.

We're thrilled to invite passionate backend engineers to join our dynamic product team within the AI Developer Assistant product division. As we continue to expand our cutting-edge Generative AI and machine learning product, your role will be pivotal in driving its growth and evolution. Join us in shaping the future of AI-driven innovation in software delivery, where endless opportunities for professional development and impact await. You'll join the AI Code Assistant engineering team, focusing on end-to-end testing of web applications. The team is developing a Generative AI-powered agent that uses natural language to automate key aspects of developers' daily tasks. This agent will generate test cases, write and maintain automation tests, and significantly reduce the burden on developers, managers, and QA teams.

About the Role:
- Drive end-to-end design and implementation of features
- Design, develop, and maintain integrations with multiple AI and cloud systems like Google Vertex AI, Azure, AWS and OpenAI
- Build GenAI-based agentic architecture for a first-class AI-enabled user experience
- Use prompt engineering techniques to leverage the best LLM models
- Collaborate with product management, engineering teams and machine learning engineers from requirements through to delivery
- Design and implement scalable AI services

About You:
- 3+ years of experience developing backend systems
- 3+ years of experience in Python with some knowledge of JavaScript (preferably TypeScript)
- Experience in REST APIs, RPC, and deploying services
- Experience in data modeling using NoSQL systems like MongoDB or SQL systems like MySQL, Postgres, etc.
- Bachelor's degree, CS preferred, or equivalent professional experience
- Experience working with AWS to build scalable applications
- Preference for candidates with experience in:
  - Building code generation products using LLMs
  - Knowledge of browser automation tools such as Puppeteer

Work Location: Bangalore - Hybrid

What You Will Have At Harness:
- Experience building a transformative product
- End-to-end ownership of your projects
- Competitive salary
- Comprehensive healthcare benefit
- Flexible work schedule
- Paid Time Off and Parental Leave
- Monthly, quarterly, and annual social and team building events
- Monthly internet reimbursement

Harness In The News:
- Harness Grabs a $150m Line of Credit
- Welcome Split!
- SF Business Times - 2024 - 100 Fastest-Growing Private Companies in the Bay Area
- Forbes - 2024 America's Best Startup Employers
- SF Business Times - 2024 Fastest Growing Private Companies Awards
- Fast Co - 2024 100 Best Workplaces for Innovators

Note on Fraudulent Recruiting/Offers: We have become aware that there may be fraudulent recruiting attempts being made by people posing as representatives of Harness. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. If you believe that you have been the target of an interview/offer scam by someone posing as a representative of Harness, please do not provide any personal or financial information and contact us immediately at security@harness.io. You can also find additional information about this type of scam and report any fraudulent employment offers via the Federal Trade Commission's website (https://consumer.ftc.gov/articles/job-scams), or you can contact your local law enforcement agency.
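The AI Code Assistant work described in this listing revolves around prompting an LLM to produce test cases from natural-language feature descriptions. The sketch below shows one plausible shape of that prompt-engineering step; TEST_PROMPT and call_llm() are illustrative assumptions, not the team's actual prompt or model integration.

```python
# Hedged sketch of the prompt-engineering side of a test-generation agent: turn a
# natural-language feature description into candidate end-to-end test cases.
# call_llm() is a placeholder for the actual model call (e.g. Vertex AI or OpenAI).
TEST_PROMPT = """You are a QA assistant. Given the feature description below,
list end-to-end web test cases as numbered steps with expected results.

Feature: {feature}
"""

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM completion request.
    return "1. Open the login page ... Expected: dashboard loads."

def generate_test_cases(feature: str) -> str:
    return call_llm(TEST_PROMPT.format(feature=feature))

print(generate_test_cases("Users can reset their password via an emailed link"))
```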

Posted 11 hours ago

Apply

3.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Solution Architect/Business Development Manager at NTT DATA, you will play a crucial role in specializing in Hyperscalers and cloud-based AI services, particularly Large Language Models (LLMs) offered by major cloud providers. Your responsibilities will include assessing client needs, recommending appropriate cloud AI technologies, sizing opportunities and cloud infrastructure requirements, and collaborating with delivery teams to create end-to-end solutions with accurate costing. You will need to demonstrate deep expertise in cloud-based AI services such as AWS Bedrock, Azure OpenAI Service, Google Vertex AI, and their supported models. Your key roles and responsibilities will include solution architecture & technical leadership, business development, project & delivery leadership, and AI agent development. You will be required to develop compelling proposals and solution presentations for cloud-based AI implementations, nurture client relationships, and lead technical discovery sessions with clients. Additionally, you will need to architect multi-agent systems that leverage cloud platform capabilities, develop frameworks for agent orchestration and governance, and design cloud-native agent solutions that integrate with existing enterprise systems. To be successful in this role, you should have at least 8 years of experience in solution architecture or technical consulting roles, with 3 years of specialized experience working with LLMs and Private AI solutions. A strong understanding of cloud infrastructure sizing, optimization, and cost management for AI workloads is essential, along with the ability to convert business requirements into technical specifications. A bachelor's degree in computer science, AI, or a related field is required, and the ability to travel up to 25% may be necessary. Preferred qualifications include a master's degree or PhD in Computer Science or a related technical field, as well as cloud certifications such as AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, and Google Cloud Professional Cloud Architect. Experience with autonomous agent development using cloud-based AI services, deploying and fine-tuning LLMs on cloud platforms, and prompt engineering and LLM optimization techniques is also desirable. Strong problem-solving abilities, excellent communication skills, and an analytical mindset are essential for this role. This position is based in Delhi or Bangalore and offers a hybrid working environment. NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. With a commitment to helping clients innovate, optimize, and transform for long-term success, NTT DATA invests over $3.6 billion each year in R&D. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and offers services including business and technology consulting, data and artificial intelligence, industry solutions, application development, infrastructure management, and connectivity.,

Posted 13 hours ago

Apply

2.0 - 6.0 years

0 Lacs

vadodara, gujarat

On-site

As a Technical Lead at Qualifacts, your primary responsibilities will involve participating in the design and development of technical solutions for complex web-based EHR systems. You will oversee the day-to-day activities of a sprinting team, including sprint planning, release milestone planning, and performance management. Writing fast, efficient, and high-quality code to deliver value to customers is crucial, with a strong emphasis on thorough testing to ensure minimal bugs. Building and guiding a strong team is essential, providing coaching, mentorship, and accountability to ensure each team member embraces Qualifacts' core values and culture. Your role will involve bringing new ideas, solutions, and feedback to the team, as well as assisting in architecting scalable and high-performance solutions while adhering to best practices in software design and coding standards. Collaboration with Engineering leaders to drive the development life cycle from requirements analysis to implementation and support is key. You will be responsible for ensuring the engineering team understands business requirements based on artifacts delivered by Business Analysts and Solutions Architects. Troubleshooting software applications and providing technical support to achieve development objectives will also be part of your responsibilities. Additionally, you will work closely with cross-functional teams to define technical requirements and ensure timely delivery of software solutions. Participation in Agile/Scrum methodologies, staying updated on emerging technologies and industry trends, and contributing to the continuous improvement of the development process are integral aspects of this role.

In terms of qualifications, you should possess a Bachelor's degree or equivalent in computer science, information systems, business administration, or a related field. With 10+ years of experience as a full stack developer and 2+ years leading a software development team in an Agile environment, strong leadership skills and a customer-focused mindset are essential. Demonstrated success in coaching, mentoring, and supervisory responsibilities, along with a keen sense of priority and urgency, are required. Technical skills including experience with ES6, React, Angular, or similar technologies, strong computer science fundamentals, and a background in software development are necessary. Understanding of test-driven development, a great work ethic, and motivation to learn and improve are key attributes. Experience with Django, OpenAI, LLMs, and web networking is considered a plus.

At Qualifacts, we are an equal opportunity employer dedicated to celebrating diversity and fostering an inclusive environment for all employees. If you are passionate about engineering best practices, self-driven, and accountable, we encourage you to join our team and contribute to our mission of delivering high-quality technical solutions to our customers.

Posted 14 hours ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

The DevSecOps Security engineer will be responsible for enabling security testing services throughout the lifecycle of an application with the required processes and technologies. This includes cultivating a mindset of "secure by design" within the developer community, supporting driving automation via the application's CI/CD Pipeline, and supporting vulnerability remediation. The ideal candidate should have experience in Security testing activities such as SAST, DAST, Container Image scanning, and associated tools. A deep understanding of modern web application architectures including Microservices, SPAs, and APIs is essential. Experience with writing automation scripts, DevOps platforms like Tekton, CloudBuild, Github Actions, and cloud platforms such as GCP, Azure, or AWS is required. Good knowledge of Agile processes, AI/ML, and LLMs is also desired. Qualifications for this role include three or more years of experience in DevSecOps or Application Security Testing, along with an MCA or B.E/B.Tech (Computer Science/IT) or MS-IT degree from an accredited institution. DevSecOps or Application Security related certifications are preferred. Knowledge of Information Security Policies/Frameworks, being a self-starter, strong interpersonal skills, good communication and presentation skills, willingness to learn new technologies, and work flexible hours across time zones are necessary attributes. Position responsibilities involve defining policies and processes to support DevSecOps for the Enterprise, engaging early with developers in the software development lifecycle, identifying and implementing opportunities for automating security testing, facilitating the onboarding of applications into security tools, supporting application teams with vulnerability remediation, spreading awareness about application security and DevSecOps, working closely with security tool vendors, and producing necessary operational and vulnerability metrics for cyber and operations Leadership.,

Posted 14 hours ago

Apply

5.0 - 9.0 years

0 Lacs

telangana

On-site

As a Senior AI Engineer at Teradata, you will play a vital role in shaping the future of enterprise AI by designing and deploying advanced AI agents that integrate deeply with business operations. You will be part of a high-caliber team of AI researchers, engineers, and data scientists, working on cutting-edge AI solutions for large-scale enterprise environments. Your responsibilities will include architecting and implementing Agentic AI systems, building AI observability pipelines, designing data platform components, and integrating LLMs and multi-modal models into robust AI agents. You will collaborate closely with product, research, and MLOps teams to ensure smooth integration between AI agents and user-facing applications. Your role will involve implementing safeguards, feedback loops, and evaluation metrics to ensure AI safety, reliability, and compliance. Additionally, you will stay current with AI research, especially in the areas of reasoning, planning, and autonomous systems, and contribute to the development of reliable and deterministic AI systems. To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. A genuine excitement for AI and large language models (LLMs) is advantageous, along with 5+ years of experience in software architecture, backend systems, or AI infrastructure. Strong engineering skills in Python, Java, or Golang, along with hands-on experience in Machine learning & deep learning frameworks like TensorFlow, PyTorch, and Scikit-learn, are essential. Your background should include experience with LLMs, transformers, AI observability tools, and modern data platform architecture. Familiarity with distributed systems, microservices, cloud platforms, and containerized environments like Docker and Kubernetes is preferred. Bonus points for research experience or contributions to open-source agentic frameworks. Teradata offers a people-first culture, flexible work model, focus on well-being, and commitment to Diversity, Equity, and Inclusion, making it an ideal workplace for passionate AI professionals.,
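The Teradata listing emphasizes safeguards, feedback loops, and evaluation metrics around agent outputs. A minimal sketch of that idea, with generate() and violates_policy() as hypothetical stand-ins for the real model call and evaluators, might look like this:

```python
# Sketch of a lightweight safeguard/feedback loop for agent outputs: validate a
# response against simple rules and retry with feedback before surfacing it.
# generate() and violates_policy() are placeholders, not Teradata's actual stack.
def generate(prompt: str, feedback: str = "") -> str:
    # Stand-in for the underlying LLM/agent call, optionally revised with feedback.
    suffix = f" (revised after: {feedback})" if feedback else ""
    return f"Draft answer to: {prompt}{suffix}"

def violates_policy(text: str) -> str | None:
    # Toy checks standing in for real safety/compliance evaluators.
    if len(text) > 500:
        return "response too long"
    if "guarantee" in text.lower():
        return "overclaiming"
    return None

def answer_with_guardrails(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        draft = generate(prompt, feedback)
        issue = violates_policy(draft)
        if issue is None:
            return draft
        feedback = issue  # feed the evaluation result back into the next attempt
    return "Escalated to human review."

print(answer_with_guardrails("Summarize last quarter's incidents"))
```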

Posted 15 hours ago

Apply

3.0 - 7.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We create intelligent, user-first digital products that redefine industries through the power of AI and engineering excellence. We are looking for a Senior Software Engineer-AI with 4-6 years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment: fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies.

As a Senior Software Engineer-AI at Techvantage.ai, you will be responsible for building and deploying AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs). You will design and implement Agentic AI systems: autonomous agents capable of planning and executing multi-step tasks. You will collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products, write clean, scalable code, and build robust APIs and services to support AI model deployment. You will own feature delivery end-to-end, from research and experimentation to deployment and monitoring, stay current with emerging AI frameworks, tools, and best practices and apply them in product development, and contribute to a high-performing team culture while mentoring junior team members as needed.

In order to excel in this role, we are seeking candidates with 3-6 years of overall software development experience, with at least 3 years specifically in AI/ML engineering. Strong proficiency in Python is required, along with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face). Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio) is highly desirable. Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel) is a plus. Experience building and deploying ML models to production environments is essential. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts is beneficial. Comfort working in a startup-like environment is crucial: self-motivated, adaptable, and willing to take ownership. A solid understanding of API development, version control, and modern DevOps/MLOps practices is expected from the ideal candidate.
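Since the role is built around agents that plan and execute multi-step tasks, a stripped-down Python sketch of the plan-then-execute loop may help; plan() and execute_step() are placeholders for LLM-driven task decomposition and real tool calls, not a specific framework's API.

```python
# Illustrative sketch of the plan-then-execute loop behind an "agentic" system.
# plan() and execute_step() are hypothetical stand-ins; a production agent would
# back them with an LLM (e.g. GPT or Claude) and real tools/APIs.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    done: bool = False
    result: str = ""

def plan(goal: str) -> list[Step]:
    # In practice an LLM decomposes the goal; here we hard-code a toy plan.
    return [Step(f"Research: {goal}"),
            Step(f"Draft output for: {goal}"),
            Step("Review and finalize")]

def execute_step(step: Step, memory: list[str]) -> str:
    # Placeholder for a tool call or LLM completion using accumulated memory.
    return f"completed '{step.description}' with {len(memory)} prior results"

def run_agent(goal: str) -> list[str]:
    memory: list[str] = []
    for step in plan(goal):
        step.result = execute_step(step, memory)
        step.done = True
        memory.append(step.result)  # feed results forward to later steps
    return memory

if __name__ == "__main__":
    for line in run_agent("summarize weekly sales data"):
        print(line)
```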

Posted 15 hours ago

Apply

4.0 - 8.0 years

0 Lacs

kerala

On-site

As a Senior AI Engineer (Tech Lead) at EY, you will have the opportunity to leverage your technical expertise to develop and implement cutting-edge AI solutions. With a minimum of 4 years of experience in Data Science and Machine Learning, including NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture, you will play a crucial role in driving innovation and creating impactful solutions for enterprise industry use cases. Your responsibilities will include contributing to the design and implementation of state-of-the-art AI solutions, leading a team of 4-6 developers, and collaborating with stakeholders to identify business opportunities and define AI project goals. You will stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. By utilizing generative AI techniques, such as LLMs and Agentic Framework, you will develop innovative solutions tailored to specific business requirements. Your role will also involve integrating with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to enhance generative AI capabilities. Additionally, you will be responsible for implementing and optimizing end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Your expertise in data engineering, DevOps, and MLOps practices will be valuable in curating, cleaning, and preprocessing large-scale datasets for generative AI applications. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and demonstrate proficiency in Python, Data Science, Machine Learning, OCR, and document intelligence. Strong collaboration with software engineering and operations teams, along with excellent problem-solving and analytical skills, will be essential in translating business requirements into technical solutions. Moreover, your familiarity with trusted AI practices, data privacy, security, and ethical considerations will ensure the fairness, transparency, and accountability of AI models and systems. Additionally, having a solid understanding of NLP techniques, frameworks like TensorFlow or PyTorch, and cloud platforms such as Azure, AWS, or GCP will be beneficial in deploying AI solutions in a cloud environment. Proficiency in designing or interacting with agent-based AI architectures, implementing optimization tools and techniques, and driving DevOps and MLOps practices will also be advantageous in enhancing the performance and efficiency of AI models. Join EY to build an exceptional experience for yourself and contribute to creating a better working world for all through the power of AI and technology.,

Posted 16 hours ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

Do you aspire to be part of a dynamic team that develops cutting-edge products and machine learning solutions at Microsoft, reaching millions of users every month? The Microsoft Turing team is an innovative group specializing in engineering and applied research, dedicated to advancing deep learning models, large language models, and groundbreaking conversational search experiences. At the forefront of conversational search platform and innovation, the team drives core copilot experiences within Microsoft's ecosystem, spanning BizChat, Office, and Windows.

As a Principal Applied Scientist within the Turing team, you will lead and execute various data science tasks within tight timelines. Your responsibilities will include hands-on activities such as model training, evaluation set creation, infrastructure development for training and evaluation processes, and more. Collaboration with internal and external data science, product, and engineering teams across different time zones will be essential for successful project delivery.

Microsoft's mission revolves around empowering individuals and organizations worldwide to accomplish more. As team members, we embrace a growth mindset, strive for innovation to empower others, and foster collaboration to achieve our collective objectives. Upholding values of respect, integrity, and accountability, we cultivate an inclusive culture where everyone can excel professionally and personally.

In your role as a Principal Applied Scientist, you will:
- Drive projects from inception to implementation, engaging in data analysis, heuristic formulation, model creation using Large Language Models (LLMs), and establishing engineering pipelines for model execution.
- Provide documentation and guidance to junior team members, and collaborate with stakeholders across different time zones to ensure project alignment and timely progress.
- Develop evaluation techniques, datasets, criteria, and metrics for model assessments, often involving State-of-the-Art (SOTA) models and metrics/datasets.
- Engage in hands-on tasks such as pre-training, fine-tuning, and utilization of language models, encompassing dataset preparation, review, and continual refinement. Proficiency in training frameworks, formats, and stacks like Megatron is also required.

This role demands active participation in a diverse, globally dispersed team environment that values collaboration and innovation. You will play a pivotal role in shaping the design, functionality, security, performance, scalability, manageability, and supportability of Microsoft products leveraging our deep learning technology.

Qualifications:

Required Qualifications:
- Bachelor's, Master's, or Doctorate degree in Statistics, Econometrics, Computer Science, Electrical/Computer Engineering, or related field with 8 to 12+ years of relevant experience.
- At least 3 years of experience in delivering team-level outcomes.
- 2+ years of industrial coding experience in languages like C++, C#, C, Java, or Python.
- Previous exposure to data analysis in large-scale systems, pattern identification, or evaluation dataset creation.
- Familiarity with machine learning, deep learning frameworks, Large Language Models (LLMs), and prompting techniques.
- Strong communication skills to convey technical details effectively across organizational boundaries.

Preferred Qualifications:
- 5+ years of experience in creating publications such as patents, libraries, or peer-reviewed academic papers.
- 2+ years of experience in presenting at conferences or industry events as an invited speaker.

#MSAI #Turing #LLMs #Modeltraining #M365CORE

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.

We are looking for an experienced AI Developer with proficiency in Python, Generative AI (GenAI) incorporating Retrieval-Augmented Generation (RAG), Robotic Process Automation (RPA), and advanced document intelligence utilizing OCR and LLMs. In this role, you will be responsible for developing AI-driven solutions that extract valuable insights from intricate documents, whether digital or scanned, by utilizing tools such as Tesseract, Hugging Face Transformers, or similar technologies. Your primary focus will be on creating end-to-end automation and AI integration for intelligent document processing, enhancing decision-making capabilities and optimizing workflow efficiencies throughout the organization.

Key Responsibilities:
- Develop and implement GenAI solutions with RAG pipelines to facilitate intelligent querying and summarization of document repositories.
- Extract and organize data from complex documents (e.g., PDFs, images) using a combination of OCR engines (e.g., Tesseract) and AI-based document and vision-language models (DINO, SmolVLM, etc.).
- Incorporate OCR+LLM pipelines into business applications for processing scanned forms, contracts, and other unstructured documents.
- Automate repetitive, document-centric tasks using RPA tools (e.g., Blue Prism).
- Design and manage Python-based workflows to coordinate document ingestion, extraction, and LLM-powered processing.
- Collaborate across various teams including product, data, and operations to deliver scalable, AI-enhanced document automation solutions.
- Ensure model performance, compliance, and audit readiness for all document-handling workflows.

Required Qualifications:
- Minimum of 4 years of hands-on programming experience with Python.
- Demonstrated expertise in building RAG-based GenAI applications using tools like LangChain, LlamaIndex, or equivalent.
- Proficiency in OCR tools (e.g., Tesseract, PaddleOCR) and transformer-based document models.
- Experience working with LLMs for document understanding, summarization, and Q&A.
- Proficient in RPA development utilizing platforms like Blue Prism.
- Knowledge of vector databases (e.g., FAISS, Pinecone) and embeddings for semantic retrieval.
- Strong understanding of REST APIs, JSON, and data integration workflows.

Please note that unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
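The OCR+LLM pipeline described above follows a simple two-stage shape: extract text with Tesseract, then hand it to a language model for structuring. Below is a hedged sketch under assumptions: pytesseract and Pillow are installed, "scanned_contract.png" is a hypothetical input file, and summarize_with_llm() is a placeholder for the actual GenAI call.

```python
# Sketch of the OCR -> LLM document pipeline: extract raw text with Tesseract,
# then ask an LLM to structure it. summarize_with_llm() is a placeholder and
# "scanned_contract.png" is a hypothetical input file.
from PIL import Image
import pytesseract

def ocr_page(path: str) -> str:
    # Tesseract turns the scanned page into plain text.
    return pytesseract.image_to_string(Image.open(path))

def summarize_with_llm(text: str) -> str:
    # Stand-in for a GenAI call (e.g. a RAG-backed summarization chain).
    prompt = f"Extract parties, dates and obligations from this contract:\n{text[:4000]}"
    return f"[LLM summary for prompt of {len(prompt)} chars]"

if __name__ == "__main__":
    raw_text = ocr_page("scanned_contract.png")
    print(summarize_with_llm(raw_text))
```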

Posted 1 day ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

kolkata, west bengal

On-site

As a Consultant Data Scientist on a contractual role for a duration of 6 months, you should possess a strong understanding of Data Structures and Algorithms. Your proficiency in advanced programming skills, especially in Python, is mandatory for this role. Additionally, having a working knowledge of C++ would be beneficial. You should also have hands-on experience with AI/ML, Deep Learning (CNN, RNN), LLMs, classification, and clustering. It is essential to have a good grasp of algorithm design and development. Familiarity with reinforcement learning, computer vision, or NLP would be an advantage in this position.

The ideal candidate should be open to working part-time and must be comfortable with a contractual arrangement. The budget for this role ranges from 80k to 1.3 Lacs per month. The job type is contractual/temporary, with a contract length of 6 months. In addition to the technical requirements, you should be willing to work in person at the specified work location. As a Consultant Data Scientist, you will have the benefit of a flexible schedule.

If you meet these qualifications and are interested in this opportunity, please send your CV to soumojit.roy@rebaca.com. To apply for this role, please answer the following questions:
- Are you willing to work as a consultant on a 6-month contract?
- On a scale of 1 to 10, how would you rate your proficiency in Python?
- What are your monthly salary expectations?

We look forward to receiving your application and potentially welcoming you to our team as a Consultant Data Scientist.

Posted 2 days ago

Apply

3.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Google Cloud Architect in Pune (Hybrid) with over 10 years of experience, including 3+ years specifically on GCP, you will play a crucial role in leading the design and delivery of comprehensive cloud solutions on Google Cloud Platform. Your responsibilities will involve collaborating with data engineering, DevOps, and architecture teams to create scalable, secure, and cost-effective cloud platforms. Your key responsibilities will include designing scalable data and application architectures utilizing tools such as BigQuery, Dataflow, Composer, Cloud Run, Pub/Sub, and other related GCP services. You will be leading cloud migration, modernization, and CI/CD automation through the use of technologies like Terraform, Jenkins, GitHub, and Cloud Build. Additionally, you will be responsible for implementing real-time and batch data pipelines, chatbot applications using LLMs (Gemini, Claude), and automating reconciliation and monitoring processes. Your role will also involve collaborating closely with stakeholders to ensure technical solutions align with business objectives. The ideal candidate for this role should have a minimum of 3 years of experience working with GCP and possess a strong proficiency in key tools such as BigQuery, Dataflow, Cloud Run, Airflow, GKE, and Cloud Functions. Hands-on experience with Terraform, Kubernetes, Jenkins, GitHub, and cloud-native CI/CD is essential. In addition, you should have a solid understanding of DevSecOps practices, networking, and data architecture concepts like Data Lake, Lakehouse, and Mesh. Proficiency in Python, SQL, and ETL frameworks such as Ab Initio is also required. Preferred qualifications for this role include GCP Certifications (Cloud Architect, DevOps, ML Engineer), experience with Azure or hybrid environments, and domain expertise in sectors like Banking, Telecom, or Retail.,

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

At our organization, we prioritize people and are dedicated to providing cutting-edge AI solutions with integrity and passion. We are currently seeking a Senior AI Developer who is proficient in AI model development, Python, AWS, and scalable tool-building. In this role, you will play a key part in designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems. Your responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing Retrieval-Augmented Generation (RAG) systems, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code, and ensuring the scalability, security, and performance of AI solutions in production. To qualify for this role, you should have at least 5 years of experience in AI/ML development, expertise in Python and AWS, a strong background in machine learning and deep learning, experience in LLMs, NLP, and RAG systems, hands-on experience in building and deploying AI models, proficiency in cloud-based AI solutions, experience in developing AI-powered tools and frameworks, knowledge of mainframe integration and enterprise AI applications, and strong coding skills with a focus on software development best practices. Preferred qualifications include familiarity with MLOps, CI/CD pipelines, and model monitoring, a background in developing AI-based enterprise tools and automation, and experience with vector databases and AI-powered search technologies. Additionally, you will benefit from health insurance, accident insurance, and a competitive salary based on various factors including location, education, qualifications, experience, technical skills, and business needs. You will also be expected to actively participate in monthly team meetings, team-building efforts, technical discussions, peer reviews, contribute to the OP-Wiki/Knowledge Base, and provide status reports to OP Account Management as required. OP is a technology consulting and solutions company that offers advisory and managed services, innovative platforms, and staffing solutions across various fields such as AI, cybersecurity, and enterprise architecture. Our team is comprised of dynamic, creative thinkers who are dedicated to delivering quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, innovative training, and education. We are looking for a technology leader with a strong track record of technical excellence and a focus on process and methodology.,

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

ahmedabad, gujarat

On-site

You will be responsible for designing, developing, and deploying AI agents powered by LLMs such as GPT-4, Claude, and Gemini. Your role will involve integrating AI agents with various tools, APIs, databases, and automation frameworks. You will be required to develop reusable prompt chains and workflows for common tasks and decision-making processes. Additionally, you will utilize frameworks like LangChain, AutoGen, CrewAI, or Semantic Kernel to manage multi-agent architectures. Your tasks will also include fine-tuning or instructing LLMs for specific use-cases or industry applications, as well as optimizing performance, reliability, and cost-efficiency of AI workflows. Collaboration with data scientists, product managers, and engineers to design end-to-end AI solutions is an essential part of this role. Furthermore, you will implement automation in internal tools, customer interactions, or operational pipelines using AI agents.

To be successful in this position, you must have strong experience with LLMs such as OpenAI GPT, Anthropic Claude, or Meta Llama. Hands-on experience with agentic frameworks like LangChain, AutoGen, CrewAI, etc., is required. Proficiency in Python and relevant AI libraries such as Hugging Face Transformers and LangChain is a must. A solid understanding of prompt engineering and retrieval-augmented generation (RAG) is also necessary. Knowledge of automation tools like Zapier, Make, Airflow, or custom Python automation will be beneficial. Previous experience working with APIs, webhooks, and data integrations is essential.

Nice-to-have qualifications include experience with vector databases like Pinecone, Weaviate, and FAISS, and knowledge of fine-tuning or customizing open-source LLMs. Familiarity with cloud platforms such as AWS, GCP, and Azure, and deployment of AI solutions is an added advantage. Experience with UI/UX for chatbot or agent interfaces would also be beneficial. Joining Rysysth Technologies as an AI Agent Developer will provide you with exciting challenges and opportunities to work with cutting-edge technologies in the AI space.
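Much of this role is wiring agents to external tools, APIs, and webhooks. One common pattern is a tool registry plus a dispatcher that executes whatever tool call the LLM selects; in the sketch below the tool names and the choose_tool() stub are illustrative assumptions rather than a specific framework's API.

```python
# Toy sketch of wiring an agent to external tools/APIs: the agent picks a tool
# name plus arguments (normally via LLM function-calling), and a dispatcher
# runs it. Tool names and choose_tool() are illustrative placeholders.
import json
from typing import Callable

def get_weather(city: str) -> str:
    # Placeholder for a real API/webhook call.
    return f"Sunny in {city}"

def create_ticket(title: str) -> str:
    return f"Ticket created: {title}"

TOOLS: dict[str, Callable[..., str]] = {
    "get_weather": get_weather,
    "create_ticket": create_ticket,
}

def choose_tool(user_request: str) -> str:
    # Stand-in for an LLM function-calling response; returns a JSON tool call.
    return json.dumps({"tool": "create_ticket", "args": {"title": user_request}})

def run(user_request: str) -> str:
    call = json.loads(choose_tool(user_request))
    return TOOLS[call["tool"]](**call["args"])

print(run("Follow up on the delayed shipment"))
```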

Posted 2 days ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

NTT DATA is looking for a GCP Python Gen AI LLM RAG Vertex AI to join their team in Hyderabad, Telangana (IN-TG), India. As a part of this inclusive and forward-thinking organization, you will need to have 4+ years of Software Engineering experience or equivalent demonstrated through various means such as work experience, training, military experience, or education. The ideal candidate should have at least 2+ years of working experience with GCP (Google Cloud Platform) or alternate public/hybrid cloud, with a proven track record of delivering products at scale using cloud services and architectures. Additionally, you should possess 2+ years of experience with Python and 3+ years of experience with GenAI, LLMs, RAG, vector databases, and conversational bots. Exposure to Playbooks, Vertex AI, ADK, and Voice AI is also required. It would be beneficial to have knowledge of LangChain and/or LangGraph, and 4+ years of experience in the Contact Center industry, specifically in design, development, testing, integration with vendors, CRMs, and business applications. Proficiency in IVR/IVA, NLU/NLP, Real-Time Omni-channel Agent experience, and customer journey optimization using AI/ML is a plus. Furthermore, familiarity with Node JS, JAVA, Spring Boot, Kafka, Distributed Caches (GemFire, Redis), Elastic Search technologies, GraphQL, NoSQL Databases (Cassandra or Mongo), Graph Databases, and Public Cloud Marketplace services is desirable. Experience with Deep Domain Driven Design and cloud-native Microservices designed for massive scale and seamless resiliency on platforms like PCF/VMWare Tanzu, K8s, or Serverless cloud technologies will be an advantage. NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in over 50 countries and a robust partner ecosystem. Their services encompass business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is also a leading provider of digital and AI infrastructure worldwide, committed to helping clients innovate, optimize, and transform for long-term success. If you are an exceptional, innovative, and passionate individual looking to grow with a prestigious organization, apply now to be a part of NTT DATA's dynamic team.,
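For the Vertex AI and conversational-bot requirements above, a single bot turn might look roughly like the following sketch using the Vertex AI Python SDK; the project ID, region, model name, and retrieved context are assumptions, and error handling is omitted.

```python
# Hedged sketch of calling a Gemini model through the Vertex AI Python SDK for
# one conversational-bot turn. Project ID, region, and model name are assumed
# values; the `context` string stands in for passages retrieved via RAG.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # assumed values
model = GenerativeModel("gemini-1.5-flash")  # assumed model name

context = "Order #1234 shipped on Monday and arrives Thursday."  # retrieved via RAG
reply = model.generate_content(
    "You are a contact-center assistant. Answer using only this context.\n"
    f"Context: {context}\n"
    "Customer: Where is my order?"
)
print(reply.text)
```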

Posted 2 days ago

Apply

1.0 - 5.0 years

0 Lacs

hyderabad, telangana

On-site

NTT DATA is looking for a GCP Python Gen AI LLM RAG Vertex AI to join the team in Hyderabad, Telangana, India. As a potential candidate, you should have at least 4 years of Software Engineering experience or equivalent, demonstrated through work experience, training, military experience, or education. To be considered for this role, you must have a minimum of 2 years of experience working with GCP (Google Cloud Platform) or alternate public/hybrid cloud, with a proven track record of delivering products using cloud services and architectures at scale. Additionally, you should have at least 2 years of experience with Python, and 3 years of experience with GenAI, LLMs, RAG, vector databases, and conversational bots. Experience with Playbooks and Vertex AI is required, along with exposure to ADK and Voice AI. Knowledge of LangChain and/or LangGraph is considered a plus. Furthermore, candidates with 4+ years of Contact Center industry experience, including design, development, testing, integration with vendors, CRMs, and business applications, are preferred. Familiarity with IVR/IVA, NLU/NLP, Real-Time Omni channel Agent experience, customer journey, and CX/AX optimization using AI/ML is advantageous. Proficiency in Node JS, JAVA, Spring Boot, Kafka, Distributed Caches, Elastic Search technologies, GraphQL, and NoSQL Databases is beneficial. Experience with Graph Databases and Public Cloud Marketplace services is also a plus. Deep Domain Driven Design experience with cloud-native Microservices designed for massive scale and seamless resiliency is desirable, preferably deployed on PCF/VMWare Tanzu, K8s, or Serverless cloud technologies. NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. The company is committed to helping clients innovate, optimize, and transform for long-term success. NTT DATA has a diverse team of experts in more than 50 countries and collaborates with a robust partner ecosystem. Their services encompass business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. As a leading provider of digital and AI infrastructure, NTT DATA is dedicated to advancing organizations and society into the digital future confidently and sustainably.,

Posted 2 days ago

Apply

5.0 - 12.0 years

0 Lacs

karnataka

On-site

You are an experienced professional with 5 to 12 years of experience, looking for a Full-Time opportunity as an Azure Generative AI Engineer. Your primary skills include Data Science, Machine Learning, Python, AI, and Azure. You will be responsible for designing, developing, and deploying Generative AI applications using Python, FastAPI, Azure Durable Functions, and Azure OpenAI Services. You must build and implement Retrieval-Augmented Generation (RAG) pipelines for enterprise AI applications and work on end-to-end Generative AI projects. Your role involves integrating AI/ML search and data services such as Azure Cognitive Search, Cosmos DB, and other Azure-native tools. Automation of key components of the RAG workflow for ensuring production-grade performance and scalability is crucial. You will be evaluating, validating, and fine-tuning the outputs of Generative AI models for accuracy and relevance, implementing monitoring, observability, and guardrails to ensure responsible and secure use of AI technologies, and collaborating with cross-functional teams to deliver innovative AI solutions. To qualify for this role, you must have strong programming experience in Python, including frameworks like FastAPI and Flask. A deep understanding of Generative AI, LLMs, and RAG methodologies is essential. Proficiency in Azure cloud services, especially Azure OpenAI, Functions, AI Search, and Cosmos DB is required. Hands-on experience with machine learning, deep learning, and natural language processing (NLP) is a must. Familiarity with ML/DL frameworks such as TensorFlow, PyTorch, and libraries like scikit-learn, spaCy, etc., is expected. You should possess solid knowledge of machine learning algorithms, including GPTs, CNNs, RNNs, k-NN, Naive Bayes, SVM, and Decision Forests, along with experience in evaluating and fine-tuning LLM performance and implementing responsible AI practices. Preferred qualifications include certifications in Azure AI Engineer Associate or related fields. Experience working with managed LLMs beyond Azure OpenAI (e.g., OpenAI API, Anthropic, Cohere, etc.) and knowledge of DevOps, CI/CD, and infrastructure automation for AI pipelines are considered advantageous.,
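The FastAPI plus Azure OpenAI plus RAG stack described above can be reduced to a small endpoint for illustration. This is a sketch under assumptions: the /ask route, the gpt-4o deployment name, the API version, and the retrieve() helper are placeholders, not the project's actual design.

```python
# Hedged sketch of a FastAPI endpoint that answers questions over retrieved
# context using Azure OpenAI. Deployment name, API version, and retrieve()
# are assumptions; a real RAG app would query Azure AI Search or Cosmos DB.
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import AzureOpenAI

app = FastAPI()
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

class Query(BaseModel):
    question: str

def retrieve(question: str) -> str:
    # Placeholder for an Azure AI Search / Cosmos DB lookup returning context.
    return "Relevant passages for: " + question

@app.post("/ask")
def ask(query: Query) -> dict:
    context = retrieve(query.question)
    completion = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query.question}"},
        ],
    )
    return {"answer": completion.choices[0].message.content}
```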

Posted 2 days ago

Apply

3.0 - 18.0 years

0 Lacs

pune, maharashtra

On-site

About the position: We are looking for a Data Architect with 3 to 18 years of experience to take charge of designing and implementing scalable data systems, primarily focusing on Artificial Intelligence (AI), Generative AI (GenAI), and Agentic AI. The ideal candidate should possess in-depth knowledge of modern data architecture, advanced analytics, and intelligent agent frameworks to support the development of autonomous and intelligent systems. Your role will involve collaborating with data scientists, engineers, and business stakeholders to deliver innovative solutions aligned with enterprise objectives.

Key Responsibilities:
- Design and implement enterprise-grade data architectures.
- Lead initiatives related to data modeling, governance, metadata management, data lineage, and master data management.
- Develop scalable data solutions to facilitate real-time inference and autonomous agent systems.
- Architect and deploy end-to-end pipelines to support AI/ML workloads, including data ingestion, feature engineering, and model lifecycle management.
- Work closely with AI research and product teams to operationalize GenAI models and integrate them into business workflows.
- Implement and scale retrieval-augmented generation (RAG) and fine-tuning frameworks, as well as knowledge graphs, in production environments.
- Design data platforms to support multi-agent systems, ensuring seamless orchestration, communication, and memory management among agents.
- Architect data flow and backend systems to support agentic workflows such as task decomposition, context switching, and reinforcement feedback loops, alongside knowledge graphs.
- Utilize frameworks like LangChain, LangGraph, AutoGen, CrewAI, or similar tools to build and manage autonomous and collaborative agents.
- Demonstrate expertise in feedback loop design and development for multi-agent or agentic frameworks.

Key Skills:
- Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field.
- Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Proven track record of working with Generative AI models, LLMs (e.g., GPT, Claude, LLaMA), and orchestration frameworks (LangChain, LlamaIndex, LangGraph).
- Familiarity with multi-agent frameworks (e.g., CrewAI, AutoGen, ReAct, CAMEL) and agentic AI design principles.
- Strong understanding of data governance, security, and compliance frameworks (GDPR, HIPAA, etc.).
- Excellent communication, leadership, and stakeholder management skills.

Preferred Qualifications:
- Experience in building production-grade agentic systems with adaptive learning and decision-making capabilities.
- Knowledge of knowledge graphs, semantic search, and advanced RAG pipelines.
- Certifications in cloud platforms or AI/ML specializations (e.g., AWS Certified Data Analytics, Google ML Engineer).

Join Xoriant: Xoriant is a leading provider of digital engineering services, known for its expertise in building and operating complex platforms and products at scale. With a legacy of three decades in software engineering, we blend modern technology skills in Data & AI (GenAI), cloud & security, domain, and process consulting to address intricate technology challenges. Serving over 100 Fortune 500 companies and tech startups, we aim to support their growth journey. As a "right-sized" company, we bring agility through our 5000+ passionate XFactors from over 20 countries, fostering a culture focused on purpose and employee happiness.

Join us at Xoriant:
- Experience an inclusive workspace where imagination turns into reality every day.
- Contribute to a better future through tech & innovation as part of a passionate team.
- Make a positive impact in the community by volunteering and building a stronger business and society.
- Support your career growth to ensure long-term success by staying curious and driving innovation.
- Prioritize well-being with multiple health benefits and maintain work-life balance.
- Value your work with meaningful rewards and recognitions.
- Celebrate diversity, inclusivity, and togetherness through festivals as one Xoriant Family.
- Connect directly with leaders and voice your opinion through Candid Connects.
- Bring new ideas to the fore and realize them through engineering in a culture of Ideation.

If you believe you have the XFactor, we have a chair dedicated to your name. Apply now to be a part of Xoriant's journey. Visit www.xoriant.com for more information.

Important Notice: If you encounter any suspicious job offers or fraudulent communication bearing Xoriant branding, please contact us at careers@xoriant.com immediately.

Equal Employment Opportunity Statement: At Xoriant, we are dedicated to offering equal employment opportunities to all individuals, irrespective of race, color, religion, gender, national origin, age, disability, or veteran status. Our inclusive workplace values diversity and ensures fair treatment and respect for all employees, fostering a sense of belonging. We strive to create a supportive environment where everyone has the opportunity to succeed and contribute to our collective success.

Qualifications:
- Bachelor's or master's degree in Computer Science or a related field.

Location:
- Primary Location: Pune
- Other Locations: Mumbai

Job Details:
- Role: Data Architect GenAI
- Location: Pune/Hyderabad/Mumbai
- Experience: 3 to 18 years
- Job type: Full-time
- Work type: Hybrid
- Job Posting: Jul 17, 2025, 4:10:40 AM

Primary Skills: AI
Secondary Skills: Generative AI

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You will be responsible for developing, training, and fine-tuning Machine Learning models for AI/ML applications. This includes designing and implementing data pipelines for data processing, model training, and inference. Additionally, you will be deploying models using MLOps and integrating them with cloud infrastructure. Collaboration with product managers and designers to conceptualize AI-driven features will also be a key part of your role. You will also be expected to research and implement various ML and AI techniques to improve performance. To excel in this role, you should have proficiency in Python and ML frameworks such as Scikit-learn, XGBoost, TensorFlow, PyTorch. Experience with SQL and ETL data pipelines, including data processing and feature engineering, will be beneficial. Familiarity with Docker and container-based deployments to create cloud-agnostic products is required. A strong understanding of AI and Machine Learning concepts such as Supervised Learning, Unsupervised Learning, Deep Learning, and Reinforcement Learning is essential. Knowledge of at least one cloud platform (AWS, Azure, GCP) and ML deployment strategies, preferably Azure, is preferred. Exposure to LLMs (e.g., OpenAI, Hugging Face, Mistral) and foundation models will be an advantage. Understanding of various Statistical models is also expected. If you have 5 to 7 years of experience in the relevant field and possess the mentioned skills and qualifications, we would like to hear from you.,
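As a concrete, if simplified, picture of the training workflow this role describes, the following scikit-learn sketch fits a pipeline on synthetic data; a real system would pull features from the SQL/ETL layer and hand the fitted model to an MLOps deployment step (e.g. a Docker image).

```python
# Minimal sketch of a train/evaluate workflow using scikit-learn on synthetic
# data; the pipeline bundles feature scaling with a supervised learner so the
# same object can later be serialized and served behind an inference API.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),              # simple feature engineering step
    ("model", GradientBoostingClassifier()),  # supervised learner (XGBoost-like)
])
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
```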

Posted 2 days ago

Apply

1.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

You are an experienced Reporting GenAI Consultant with a strong background in developing AI-driven reporting solutions. Your role involves building and integrating Generative AI capabilities into BI platforms to enable natural language insights, automated report generation, and interactive dialogue with data. You will leverage your hands-on experience working with LLMs, prompt engineering, and modern data visualization tools to deliver innovative reporting solutions. Your responsibilities include designing, developing, and deploying GenAI-based reporting solutions that generate insights summaries, dashboards, and narrative analytics from structured and unstructured data. You will build natural language interfaces and conversational agents for querying data, enabling users to interact with reports through plain English. Additionally, you will integrate GenAI features like ChatGPT, Azure OpenAI, or Vertex AI with enterprise BI platforms such as Power BI, Tableau, Qlik, ThoughtSpot, etc. Furthermore, you will implement automated insight generation using LLMs to summarize trends, detect anomalies, and generate key takeaways. Collaboration with data engineering and BI teams is crucial to optimize data models and ensure clean, prompt-ready datasets. You will design and fine-tune prompts and templates for contextual report summarization and storytelling, and conduct POCs and pilots to evaluate the feasibility and impact of GenAI-driven reporting use cases. It is essential to ensure that solutions are secure, scalable, and compliant with enterprise governance policies. To excel in this role, you should have 10+ years of experience in Business Intelligence/Analytics with 1-2 years in Generative AI implementations. Strong experience in Power BI with exposure to augmented analytics features is required. Your expertise should include working with LLMs for natural language understanding and summarization, prompt engineering, few-shot learning, and custom summarization models. A good understanding of data storytelling, narrative generation, and auto-generated insights is essential. Experience in integrating APIs for AI models into web or reporting tools is beneficial, along with familiarity with Python or JavaScript for model integration and backend logic. Excellent communication and stakeholder management skills are also necessary for this role. Preferred qualifications include experience with RAG (Retrieval-Augmented Generation), LangChain, or similar frameworks, exposure to voice-based analytics or speech-to-insight solutions, knowledge of data governance, privacy (GDPR/CPRA), and enterprise security standards, as well as familiarity with cloud platforms like Azure.,
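Automated insight generation as described above usually means computing the numbers deterministically and letting the LLM write the narrative. Below is a toy sketch, with narrate() standing in for the actual GenAI call and made-up revenue figures used purely for illustration.

```python
# Sketch of "automated insight generation": compute simple statistics over a
# report dataset, then hand them to an LLM prompt that writes the narrative.
# narrate() is a placeholder for a GenAI call (Azure OpenAI, ChatGPT, etc.).
import pandas as pd

sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [120_000, 135_000, 128_000, 151_000],  # illustrative figures
})

latest, previous = sales["revenue"].iloc[-1], sales["revenue"].iloc[-2]
change_pct = (latest - previous) / previous * 100

prompt = (
    "Write a two-sentence executive summary of this data.\n"
    f"Monthly revenue: {sales.to_dict(orient='records')}\n"
    f"Latest month-over-month change: {change_pct:.1f}%"
)

def narrate(p: str) -> str:
    # Stand-in for the LLM call that turns computed metrics into a narrative.
    return f"[LLM-generated narrative based on prompt of {len(p)} chars]"

print(narrate(prompt))
```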

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer III at JPMorgan Chase within the AI/ML Data Platform team, you will play a crucial role in designing and delivering cutting-edge technology products in a secure, stable, and scalable manner. Your expertise will be instrumental in implementing critical technology solutions across technical domains to support the firm's business objectives. You will collaborate closely with business stakeholders, product teams, and technology experts to develop software solutions aligned with strategic goals.

Your responsibilities will include architecting, designing, and developing AI products using generative AI, natural language processing, and other AI/ML technologies. Working alongside software developers, data scientists, and product teams, you will establish timelines for product features and ensure effective communication with business stakeholders. You will conduct data modeling for AI software solutions, devise data persistence strategies, and create robust data pipelines. You will also set coding standards for repositories, perform code reviews, oversee product deployments on public and private clouds, and manage server costs through monitoring and tuning to keep operations efficient.

To qualify for this role, you should have formal training or certification in software engineering concepts along with a minimum of 3 years of practical experience. Your hands-on experience should cover system design, application development, testing, operational stability, and the Agile SDLC. Proficiency in Python, Java, and JavaScript is essential, along with expertise in technologies such as FastAPI, Spring, agent-building tools, and LLMs. You should also have advanced knowledge of automation and continuous delivery methods and a strong grasp of agile practices such as CI/CD, application resiliency, and security. Demonstrated proficiency in software applications and technical processes related to cloud, AI, ML, and mobile technologies is crucial, as is a deep understanding of the financial services industry, IT systems, microservice design patterns, data structures, algorithms, cloud services such as AWS, and infrastructure-as-code tools such as Terraform.

Preferred qualifications include exposure to Python libraries such as pandas, scipy, and numpy, familiarity with Python concurrency through multiprocessing, and knowledge of grid computing concepts.
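Since the posting calls out FastAPI alongside AI/ML work, here is a minimal, hypothetical sketch of the kind of model-serving endpoint such a role might involve. The route, request schema, and scoring logic are assumptions for illustration, not details from the posting.

```python
# Minimal FastAPI sketch of a model-serving endpoint (illustrative only).
# Run with: uvicorn app:app --reload   (assumes fastapi and uvicorn installed)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-scoring-service")

class ScoreRequest(BaseModel):
    features: list[float]

class ScoreResponse(BaseModel):
    score: float

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder scoring logic; a real service would call a trained model
    # or an LLM client here, behind appropriate resiliency and auth layers.
    value = sum(req.features) / max(len(req.features), 1)
    return ScoreResponse(score=value)
```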

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows.

Your key responsibilities will include building a highly scalable big data platform that processes terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining big data pipelines using technologies such as Spark, Hadoop, Kafka, and Snowflake. You will design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support. You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality, participate in Agile/Scrum processes, discuss software features with product managers, and provide customer support over email or JIRA.

We are looking for candidates with three-plus years of coding experience in Java and backend development, solid computer science fundamentals, expertise in software engineering best practices, hands-on experience with big data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly is essential, as are strong interpersonal and communication skills, including technical communications. You should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute or university.

PubMatic employees globally have returned to our offices via a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks such as healthy snacks, drinks, and catered lunches.

About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.
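To give a flavor of the real-time pipeline work described above, here is a minimal, hypothetical PySpark Structured Streaming sketch that reads events from a Kafka topic. The broker address, topic name, and console sink are illustrative assumptions, and the Kafka connector package must be supplied when submitting the job.

```python
# Illustrative PySpark Structured Streaming job reading from Kafka.
# Submit with the Kafka connector on the classpath, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1 stream.py
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("example-ad-events-stream").getOrCreate()

# Hypothetical broker and topic names for illustration.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "ad-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
decoded = events.select(col("value").cast("string").alias("raw_event"))

# Console sink keeps the sketch self-contained; a real pipeline would write to
# a warehouse such as Snowflake or an object store instead.
query = decoded.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```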

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

Responsibilities:
- Develop mathematical models for trading, forecasting, or data processing, and continuously enhance existing models.
- Analyze large datasets to identify trends and insights, and conduct research on financial markets and economic conditions.
- Utilize strong coding skills in Python and other languages to implement mathematical models, and translate those models into algorithms for practical application.
- Explore unconventional data sources to drive innovation and enhance model performance.
- Manage risk by developing mathematical models to assess and mitigate risks associated with products or strategies.
- Build predictive models from historical data using Monte Carlo modeling techniques.
- Demonstrate proficiency in writing pseudo code and Python code to test mathematical models.
- Apply a good understanding of trading systems and trade execution processes to minimize latency and slippage.
- Use comprehensive knowledge of financial and capital markets to make informed decisions.
- Oversee the development and implementation of investment strategies using mathematical and statistical tools.
- Apply AI, machine learning, and linear latent models to enhance efficiency and productivity, and use data science, mathematics, and statistics to develop or enhance investment strategies.
- Ensure compliance with SEBI regulations and other legal requirements.
- Identify and implement mathematical and statistical models for effective risk management of strategies and products, and establish and monitor internal controls to manage and mitigate risks efficiently.

Requirements:
- Bachelor's degree in Mathematics; Master's degree preferred, or currently pursuing a PhD in Mathematics.
- Relevant experience in mathematics, statistics, computer science, or finance is preferred.
- Minimum of 3-5 years of experience in building mathematical models and data science.
- Proven ability to develop and implement mathematical models in live market strategies.
- High ethical standards and a commitment to prioritizing clients' best interests.

This is a full-time position with a day shift schedule requiring in-person work.
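The posting's reference to Monte Carlo modeling of historical data can be illustrated with a brief sketch: simulating price paths under a geometric Brownian motion assumption with NumPy. The drift, volatility, and horizon values are arbitrary placeholders, not parameters from the role.

```python
# Illustrative Monte Carlo simulation of price paths (geometric Brownian motion).
# All parameters are placeholder values for demonstration only.
import numpy as np

rng = np.random.default_rng(seed=7)

s0 = 100.0      # starting price
mu = 0.05       # assumed annual drift
sigma = 0.20    # assumed annual volatility
days = 252      # one trading year
n_paths = 10_000
dt = 1.0 / days

# Simulate daily log-returns and compound them into price paths.
shocks = rng.standard_normal((n_paths, days))
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
paths = s0 * np.exp(np.cumsum(log_returns, axis=1))

terminal = paths[:, -1]
print(f"mean terminal price: {terminal.mean():.2f}")
print(f"5th percentile (simple VaR-style cut): {np.percentile(terminal, 5):.2f}")
```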

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

The Specialized Analytics Manager provides full leadership and supervisory responsibility, offering operational and service leadership and direction to team(s). You will apply in-depth disciplinary knowledge by providing value-added perspectives or advisory services, and may contribute to the development of new techniques, models, and plans within the area of expertise. Excellent communication and diplomacy skills are required. You will generally be responsible for the volume, quality, and timeliness of end results, with shared responsibility for planning and budgets. Your work will affect an entire area, which in turn affects the overall performance and effectiveness of the sub-function/job family. You will have full supervisory responsibility, ensuring motivation and development of the team through professional leadership, including performance evaluation, compensation, hiring, disciplinary actions and terminations, and direction of daily tasks and responsibilities.

The Data Science Analyst is a developing professional role within the GWFO team. In this role, you will apply specialized knowledge in machine learning, statistical modeling, and data analysis to monitor, assess, analyze, and evaluate processes and data. You will identify opportunities to leverage advanced analytics to improve business outcomes, automate processes, and generate actionable insights, and contribute to the development and deployment of innovative AI solutions, including generative AI and agentic AI applications. You will work with cross-functional stakeholders to gather and process operational data from various sources to examine past business performance and identify areas for improvement.

As a successful candidate, you should ideally have 7+ years of relevant experience in data science, machine learning, or a related field. You should possess advanced process management skills, be organized and detail-oriented, be curious about learning and developing new skill sets (particularly in artificial intelligence), and bring a positive, can-do mindset. Strong programming skills in Python and proficiency in relevant data science libraries such as scikit-learn, TensorFlow, PyTorch, and Transformers are expected. Experience with statistical modeling techniques, building GenAI solutions, agentic AI frameworks, and data visualization tools such as Tableau or Power BI is required, along with strong logical reasoning capabilities, a willingness to learn new skills, and good communication and presentation skills. You should have a Bachelor's/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed; other job-related duties may be assigned as required. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 4 days ago

Apply

6.0 - 13.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

We are seeking a candidate with over 13 years of experience for the role of Technical Project Manager (Data), based in Trivandrum/Kochi. As a Technical Project Manager, you will own the end-to-end delivery of data platform, AI, BI, and analytics projects, ensuring alignment with business objectives and stakeholder expectations. You will develop and maintain comprehensive project plans, roadmaps, and timelines covering data ingestion, transformation, governance, AI/ML models, and analytics deliverables. Leading cross-functional teams of data engineers, data scientists, BI analysts, architects, and business stakeholders to deliver high-quality, scalable solutions within the defined budget and timeframe will be a key aspect of this role. You will also define, prioritize, and manage product and project backlogs covering data pipelines, data quality, governance, AI services, and BI dashboards or reporting tools, and collaborate with business units to capture requirements and translate them into actionable user stories and acceptance criteria for data and analytics solutions. Overseeing BI and analytics areas, including dashboard development, embedded analytics, self-service BI enablement, and ad hoc reporting capabilities, will also be part of your responsibilities. You will ensure that data quality, lineage, security, and compliance requirements are integrated throughout the project lifecycle in collaboration with governance and security teams, and coordinate UAT, performance testing, and user training to ensure successful adoption and rollout of data and analytics products. Acting as the primary point of contact for all project stakeholders, you will provide regular status updates, manage risks and issues, and escalate when necessary. You will facilitate agile ceremonies such as sprint planning, backlog grooming, demos, and retrospectives to foster a culture of continuous improvement, and drive post-deployment monitoring and optimization of data and BI solutions to meet evolving business needs and performance standards.
Primary skills required for this role include:
- Over 13 years of experience in IT, with at least 6 years in roles such as Technical Product Manager, Technical Program Manager, or Delivery Lead
- Hands-on development experience in data engineering, including data pipelines, ETL processes, and data integration workflows
- Proven track record of managing data engineering, analytics, or AI/ML projects end to end
- Solid understanding of modern data architecture: data lakes, warehouses, pipelines, ETL/ELT, governance, and AI tooling
- Hands-on familiarity with cloud platforms (e.g., Azure, AWS, GCP) and DataOps/MLOps practices
- Strong knowledge of Agile methodologies, sprint planning, and backlog grooming
- Excellent communication and stakeholder management skills, including working with senior executives and technical leads

Secondary skills that would be beneficial for this role include:
- Background in computer science, engineering, data science, or analytics
- Experience with, or a solid understanding of, data engineering tools and services in AWS, Azure, and GCP
- Exposure to, or a solid understanding of, BI, analytics, LLMs, RAG, prompt engineering, or agent-based AI systems
- Experience leading cross-functional teams in matrixed environments
- Certifications such as PMP, CSM, SAFe, or equivalent are a plus

If you meet the above requirements and are looking for a challenging opportunity in technical project management within the data domain, we encourage you to apply before the closing date of 18-07-2025.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
