5.0 years
15 - 20 Lacs
Thiruvananthapuram Taluk, India
Remote
Are you passionate about building AI systems that create real-world impact? We are hiring a Senior AI Engineer with 5+ years of experience to design, develop, and deploy cutting-edge AI/ML solutions.

📍 Location: [Trivandrum / Kochi / Remote – customize based on your need]
💼 Experience: 5+ years
💰 Salary: ₹15–20 LPA
🚀 Immediate Joiners Preferred

🔧 What You’ll Do
• Design and implement ML/DL models for real business problems
• Build data pipelines and perform preprocessing for large datasets
• Apply advanced techniques such as NLP, computer vision, and reinforcement learning
• Deploy AI models using MLOps best practices
• Collaborate with data scientists, developers, and product teams
• Stay ahead of the curve with the latest research and tools

✅ What We’re Looking For
• 5+ years of hands-on AI/ML development experience
• Strong Python skills, with experience in TensorFlow, PyTorch, Scikit-learn, and Hugging Face
• Knowledge of NLP, CV, and DL architectures (CNNs, RNNs, Transformers)
• Experience with cloud platforms (AWS/GCP/Azure) and their AI services
• Solid grasp of MLOps: model versioning, deployment, and monitoring
• Strong problem-solving, communication, and mentoring skills

💻 Tech Stack You’ll Work With
• Languages: Python, SQL
• Libraries: TensorFlow, PyTorch, Keras, Transformers, Scikit-learn
• Tools: Git, Docker, Kubernetes, MLflow, Airflow
• Platforms: AWS, GCP, Azure, Vertex AI, SageMaker

Skills: cloud platforms (AWS, GCP, Azure), Docker, computer vision, Git, PyTorch, Airflow, Hugging Face, NLP, ML, AI, deep learning, Kubernetes, MLflow, MLOps, TensorFlow, Scikit-learn, Python, machine learning
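The "preprocessing for large datasets" bullet above typically starts with feature standardization. A plain-Python sketch of z-scoring, with invented toy data (real pipelines would use NumPy or Scikit-learn for this):

```python
from statistics import mean, pstdev

def standardize(values):
    """Z-score a list of numeric values: subtract the mean, divide by the
    population standard deviation. A common preprocessing step before
    feeding features to most ML models."""
    mu = mean(values)
    sigma = pstdev(values)
    if sigma == 0:
        return [0.0 for _ in values]  # a constant feature carries no signal
    return [(v - mu) / sigma for v in values]

raw = [10.0, 20.0, 30.0, 40.0]
scaled = standardize(raw)
print(round(mean(scaled), 6))    # 0.0 (centered)
print(round(pstdev(scaled), 6))  # 1.0 (unit variance)
```

The same transform, fitted on training data and reapplied to test data, is what `StandardScaler` does in Scikit-learn.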
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Scientist – Agentic AI | Cloud AI | AR/VR
Location: Gurgaon - 5 days WFO
Experience: 2 to 5 Years
Employment Type: Full-time

About the Role
We’re looking for a hands-on Data Scientist who’s excited by the next frontier of AI, from Agentic AI systems and multi-agent collaboration to cloud-based AI services and immersive experiences in AR/VR. This is your chance to work on cutting-edge innovation across verticals such as retail, healthcare, gaming, and enterprise tech.

Key Responsibilities
• Build, deploy, and optimize AI/ML models with a focus on Agentic AI and multi-agent coordination frameworks (e.g., AutoGen, LangChain)
• Leverage Google Cloud AI and Azure AI Services (Vision AI, Vertex AI, Azure ML) for scalable training, inference, and deployment
• Apply computer vision techniques for object detection, image classification, and spatial tracking in AR/VR contexts
• Collaborate cross-functionally with product, engineering, and design teams to bring AI-driven AR/VR applications to life
• Utilize Multi-Cloud Pipelines (MCP) for managing end-to-end workflows in a hybrid cloud setup
• Continuously experiment with emerging AI techniques in generative agents, prompt engineering, and memory-based systems

Must-Have Skills
• 2–5 years of hands-on experience in AI/ML model development and deployment
• Strong Python skills with experience in frameworks like TensorFlow, PyTorch, or OpenCV
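The multi-agent coordination mentioned above can be illustrated with a minimal round-robin loop. This is a plain-Python sketch of the pattern only; the class and function names are hypothetical and are not the actual APIs of AutoGen or LangChain:

```python
class Agent:
    """Hypothetical minimal agent: a name plus a message handler."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # callable: message -> reply

    def act(self, message):
        return self.handler(message)

def run_round_robin(agents, task, rounds=1):
    """Pass a task between agents in turn; each sees the latest message
    and appends its contribution to a shared transcript."""
    transcript = [task]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(f"{agent.name}: {agent.act(transcript[-1])}")
    return transcript

planner = Agent("planner", lambda msg: f"plan for ({msg})")
critic = Agent("critic", lambda msg: f"critique of ({msg})")
log = run_round_robin([planner, critic], "detect objects in AR scene")
print(log[-1])
```

Real frameworks add tool use, memory, and termination conditions on top of essentially this loop.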
Posted 2 weeks ago
0.0 - 7.0 years
0 Lacs
Saidapet, Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 07/11/2025
City: Saidapet
Country: India
Job Role: AI/ML Engineering
State/Province: Tamil Nadu
Industry: IT Services
Job Type: Full time
Zip/Postal Code: 600096

Job Description

Introduction to the Role:
Are you passionate about building intelligent systems that learn, adapt, and deliver real-world value? Join our high-impact AI & Machine Learning Engineering team and be a key contributor in shaping the next generation of intelligent applications. As an AI/ML Engineer, you’ll have the unique opportunity to develop, deploy, and scale advanced ML and Generative AI (GenAI) solutions in production environments, leveraging cutting-edge technologies, frameworks, and cloud platforms. In this role, you will collaborate with cross-functional teams including data engineers, product managers, MLOps engineers, and architects to design and implement production-grade AI solutions across domains. If you're looking to work at the intersection of deep learning, GenAI, cloud computing, and MLOps, this is the role for you.

Accountabilities:
• Design, develop, train, and deploy production-grade ML and GenAI models across use cases including NLP, computer vision, and structured data modeling.
• Leverage frameworks such as TensorFlow, Keras, PyTorch, and LangChain to build scalable deep learning and LLM-based solutions.
• Develop and maintain end-to-end ML pipelines with reusable, modular components for data ingestion, feature engineering, model training, and deployment.
• Implement and manage models on cloud platforms such as AWS, GCP, or Azure using services like SageMaker, Vertex AI, or Azure ML.
• Apply MLOps best practices using tools like MLflow, Kubeflow, Weights & Biases, Airflow, DVC, and Prefect to ensure scalable and reliable ML delivery.
• Incorporate CI/CD pipelines (using Jenkins, GitHub Actions, or similar) to automate testing, packaging, and deployment of ML workloads.
• Containerize applications using Docker and orchestrate scalable deployments via Kubernetes.
• Integrate LLMs with APIs and external systems using LangChain, vector databases (e.g., FAISS, Pinecone), and prompt engineering best practices.
• Collaborate closely with data engineers to access, prepare, and transform large-scale structured and unstructured datasets for ML pipelines.
• Build monitoring and retraining workflows to ensure models remain performant and robust in production.
• Evaluate and integrate third-party GenAI APIs or foundation models where appropriate to accelerate delivery.
• Maintain rigorous experiment tracking, hyperparameter tuning, and model versioning.
• Champion industry standards and evolving practices in ML lifecycle management, cloud-native AI architectures, and responsible AI.
• Work across global, multi-functional teams, including architects, principal engineers, and domain experts.

Essential Skills / Experience:
• 4–7 years of hands-on experience in developing, training, and deploying ML/DL/GenAI models.
• Strong programming expertise in Python with proficiency in machine learning, data manipulation, and scripting.
• Demonstrated experience working with Generative AI models and Large Language Models (LLMs) such as GPT, LLaMA, Claude, or similar.
• Hands-on experience with deep learning frameworks like TensorFlow, Keras, or PyTorch.
• Experience in LangChain or similar frameworks for LLM-based app orchestration.
• Proven ability to implement and scale CI/CD pipelines for ML workflows using tools like Jenkins, GitHub, GitLab, or Bitbucket Pipelines.
• Familiarity with containerization (Docker) and orchestration tools like Kubernetes.
• Experience working with cloud platforms (AWS, Azure, GCP) and relevant AI/ML services such as SageMaker, Vertex AI, or Azure ML Studio.
• Knowledge of MLOps tools such as MLflow, Kubeflow, DVC, Weights & Biases, Airflow, and Prefect.
• Strong understanding of data engineering concepts, including batch/streaming pipelines, data lakes, and real-time processing (e.g., Kafka).
• Solid grasp of statistical modeling, machine learning algorithms, and evaluation metrics.
• Experience with version control systems (Git) and collaborative development workflows.
• Ability to translate complex business needs into scalable ML architectures and systems.

Desirable Skills / Experience:
• Working knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search implementation.
• Hands-on experience with prompt engineering, fine-tuning LLMs, or techniques like LoRA, PEFT, and RLHF.
• Familiarity with data governance, privacy, and responsible AI guidelines (bias detection, explainability, etc.).
• Certifications in AWS, Azure, GCP, or ML/AI specializations.
• Experience in high-compliance industries like pharma, banking, or healthcare.
• Familiarity with agile methodologies and working in iterative, sprint-based teams.

Work Environment & Collaboration:
You will be a key member of an agile, forward-thinking AI/ML team that values curiosity, excellence, and impact. Our hybrid work culture promotes flexibility while encouraging regular in-person collaboration to foster innovation and team synergy. You'll have access to the latest technologies, mentorship, and continuous learning opportunities through hands-on projects and professional development resources.

Why Join Us?
• Build and deploy cutting-edge LLM and GenAI applications that solve real-world problems
• Collaborate with thought leaders across engineering, product, and data science
• Work in a dynamic, cloud-native, and automation-driven AI environment
• Accelerate your growth through certification programs and continuous learning
• Be part of an innovation-first team that values openness, agility, and integrity

About Agilisium:
Agilisium is an AWS Advanced Consulting Partner that enables companies to accelerate their "Data-to-Insights" leap. With $25+ million in annual revenue and over 40% year-over-year growth, Agilisium is one of the fastest-growing IT solution providers in Southern California. Our most important asset? People. Talent management plays a vital role in our business strategy. We’re looking for "drivers": big thinkers with a growth-oriented, strategic mindset who are committed to customer obsession and aren’t afraid to experiment with new ideas. We are all about finding and nurturing individuals who are ready to do great work. At Agilisium, you’ll collaborate with great minds while being challenged to meet and exceed your potential.
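The vector-database skills listed in this posting (FAISS, Pinecone, semantic search) come down to nearest-neighbour search over embeddings. A brute-force cosine-similarity sketch, with made-up document IDs and toy 3-dimensional "embeddings" standing in for real model outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k(query, index, k=2):
    """Brute-force nearest neighbours by cosine similarity.
    Libraries like FAISS do the same job with approximate indexes at scale."""
    scored = [(cosine(query, vec), doc_id) for doc_id, vec in index.items()]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]

index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.8, 0.2],
    "api-reference": [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], index, k=1))  # ['refund-policy']
```

In a RAG system, the retrieved document IDs would be used to pull text chunks into the LLM prompt.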
Posted 2 weeks ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Job Summary
We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions.

Key Responsibilities

Architecture & Strategy:
• Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving.
• Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.).
• Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows.
• Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments.

Technical Leadership
• Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies.
• Lead architectural reviews and ensure compliance with security, performance, and governance policies.
• Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps.

Model Lifecycle & Engineering
• Oversee implementation of the model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows.
• Define architecture for Retrieval-Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc.
• Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models.

Data & Infrastructure
• Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores.
• Architect systems for synthetic data generation, augmentation, and real-time streaming inputs.
• Define solutions leveraging data lakes, data warehouses, and graph databases.

Client Engagement / Product Integration
• Interface with business/product stakeholders to align AI strategy with KPIs.
• Collaborate with DevOps teams to integrate models into products via APIs/microservices.

Required Skills & Experience

Core Skills:
• Strong foundation in AI/ML/DL (Scikit-learn, TensorFlow, PyTorch, Transformers, LangChain, etc.)
• Advanced knowledge of generative AI (LLMs, diffusion models, multimodal models, etc.)
• Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes)
• Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar)
• Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design

Architecture & Programming
• Solid experience in architectural patterns (microservices, event-driven, serverless)
• Proficient in Python and optionally Java/Go
• Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana)

Tools & Platforms
• ML lifecycle tools: MLflow, Kubeflow, Vertex AI, SageMaker, Hugging Face, etc.
• Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have)
• Knowledge of security, privacy, and compliance (GDPR, SOC 2, HIPAA, etc.)
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chandigarh
On-site
Senior Zuora Developer - Offshore
Job Title: Senior Zuora Developer
Location: Offshore
No. of Positions: 2

Key Responsibilities
• Lead the implementation of Quote-to-Cash, CPQ, and Billing systems, ensuring alignment with industry best practices.
• Configure Zuora Billing, Subscription Management, and finance settings to optimize performance and accuracy.
• Integrate Zuora with ERP and tax systems such as Salesforce, NetSuite, SAP, Oracle, Vertex, and Avalara for seamless data flow.
• Act as a Subject Matter Expert (SME) on subscription business models and consumption-based billing strategies.
• Collaborate with cross-functional teams and consulting partners to deliver scalable and robust solutions.
• Provide technical guidance, troubleshooting, and support to ensure high availability and reliability of Zuora implementations.

Qualifications
• Certification: Certified Zuora Administrator and other relevant technical certifications.
• Experience: 5+ years of technical experience, with 4+ years specifically in Zuora Billing and Subscription Management.
• Proven track record of implementing Quote-to-Cash, CPQ, and Billing systems for enterprise clients.
• Demonstrated ability to manage complex workflows, configurations, and integrations in Zuora.
• Previous experience working with consulting partners is highly desirable.
Posted 2 weeks ago
0 years
0 Lacs
Budaun Sadar, Uttar Pradesh, India
On-site
MinutestoSeconds is a dynamic organization specializing in outsourcing services, digital marketing, IT recruitment, and custom IT projects. We partner with SMEs, mid-sized companies, and niche professionals to deliver tailored solutions. We would love the opportunity to work with YOU!!

Requirements

About the Role:
We are looking for a highly motivated and innovative AI/ML Engineer to join our growing team. You will play a key role in designing, developing, and deploying machine learning models and AI-driven solutions that solve real-world business problems. This is a hands-on role requiring a deep understanding of ML algorithms, data preprocessing, model optimization, and scalable deployment.

Key Responsibilities:
• Design and implement scalable ML solutions for classification, regression, clustering, and recommendation use cases
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML use cases
• Preprocess large datasets using Python, SQL, and modern ETL tools
• Train, validate, and optimize machine learning and deep learning models
• Deploy models using MLOps best practices (CI/CD, model monitoring, versioning)
• Continuously improve model performance and integrate feedback loops
• Research and experiment with the latest AI/ML trends, including GenAI, LLMs, and transformers
• Document models and solutions for reproducibility and compliance

Required Skills:
• Strong proficiency in Python, with hands-on experience in NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, etc.
• Solid understanding of supervised and unsupervised learning, NLP, and time-series forecasting
• Experience with cloud platforms such as AWS, GCP, or Azure (preferred: SageMaker, Vertex AI, or Azure ML Studio)
• Familiarity with Docker, Kubernetes, and MLOps practices
• Proficient in writing efficient and production-grade code
• Excellent problem-solving and critical-thinking skills

Good to Have:
• Experience with LLMs, generative AI, or OpenAI APIs
• Exposure to big data frameworks like Spark or Hadoop
• Knowledge of feature stores and data versioning tools (like DVC or MLflow)
• Published work, research papers, or contributions to open-source ML projects
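As a toy illustration of the supervised-learning workflow this posting describes (train a classifier, then validate it), here is a nearest-centroid classifier in plain Python. The data is invented and far smaller than anything production-grade; Scikit-learn's `NearestCentroid` implements the same idea:

```python
def fit_centroids(X, y):
    """Compute the per-class mean (centroid) of the training vectors."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        acc = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            acc[j] += v
        counts[yi] = counts.get(yi, 0) + 1
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda c: sqdist(centroids[c], x))

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1]]
y = ["neg", "neg", "pos", "pos"]
model = fit_centroids(X, y)
preds = [predict(model, xi) for xi in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
print(accuracy)  # 1.0
```

Real validation would, of course, score on held-out data rather than the training set.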
Posted 2 weeks ago
4.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
BU - SALT | SL - Tax Tech | Location: Bangalore

1) Proposed designation: Consultant
2) Role Type: Individual Contributor
3) Reporting to: Manager
4) Geo to be supported: US
5) Work Timing: 11:30 AM to 8:30 PM IST

6) Roles and responsibilities
• This hands-on role expects the candidate to apply functional and technical knowledge of SAP and third-party tax engines like OneSource/Vertex
• Establishes and maintains relationships with business leaders; uses deep business knowledge to drive engagement on major tax integration projects
• Gather business requirements, lead analysis, and drive high-level end-to-end design
• Drive configuration and development activities to meet business requirements through a combination of internal and external teams
• Manages external software vendors and system integrators on project implementation, ensuring adherence to established SLAs
• Take a domain lead role on IT projects to ensure that all business stakeholders are included and receive sufficient and timely communications
• Provides leadership to teams and integrates technical expertise and business understanding to create superior solutions for the company and customers
• Consults with team members, other organizations, customers, and vendors on complex issues
• Mentors others in the team on process/technical issues
• Uses formal system implementation methodologies during project preparation, requirements gathering, design, build/test, and deployment
• Experience in handling support incidents, major incidents, and problems would be an added advantage

7) Educational Qualification: BE/B.Tech/MCA
8) Work experience: 4 to 9 years of relevant experience
9) Mandatory Skills: SAP, OneSource, Vertex, indirect tax integration
10) Preferred skills: Indirect tax concepts, SAP native tax

11) Key behavioural attributes/requirements
• Hands-on experience in tax engine integration (OneSource/Vertex/Avalara) with ERP systems (Oracle/SAP)
• Understanding of indirect tax concepts
• Good understanding of O2C and P2P processes
• Hands-on experience in tax engine configurations (OneSource or Vertex)
• Experience in troubleshooting tax-related issues and solution design
• Knowledge of Avalara and VAT compliance is a plus

12) Other Information
• Interview Process: 2 technical rounds + HR round
• Does the job role involve travelling? Yes
• Does the busy season apply to this role? No
Posted 2 weeks ago
3.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
Job Overview
PitchBook’s Product Owner works collaboratively with key Product stakeholders and teams to deliver the department’s product roadmap. This role takes an active part in aligning our engineering activities with product objectives across new product capabilities as well as data and scaling improvements to our core technologies, with a focus on AI/ML data extraction, collection, and enrichment capabilities.

Team Overview
The Data Technology team within PitchBook’s Product organization develops solutions to support and accelerate our data operations processes. This domain impacts core workflows of data capture, ingestion, and hygiene across our core private and public capital markets datasets. This role works on our AI/ML Collections Data Extraction & Enrichment teams, closely integrated with Engineering and Product Management to ensure we are delivering against our product roadmap. These teams provide backend AI/ML services that power PitchBook’s data collection activities and related internal content management systems.

Outline of Duties and Responsibilities
• Be a domain expert for your product area(s) and understand user workflows and needs
• Actively define backlog priority for your team(s) in collaboration with Product and Engineering
• Manage delivery of features according to the product roadmap
• Validate the priority and impact of incoming requirements from Product Management and Engineering
• Break down prioritized requirements into well-structured backlog items for the engineering team to complete
• Create user stories and acceptance criteria that indicate successful implementation of requirements
• Communicate requirements, acceptance criteria, and technical details to stakeholders across multiple PitchBook departments
• Define, create, and manage metrics that represent team performance
• Manage, track, and mitigate risks or blockers of feature delivery
• Support execution of AI/ML collections work related, but not limited, to AI/ML data extraction, collection, and enrichment services
• Support PitchBook’s values and vision
• Participate in various company initiatives and projects as requested

Experience, Skills and Qualifications
• Bachelor's degree in Information Systems, Engineering, Data Science, Business Administration, or a related field
• 3+ years of experience as a Product Manager or Product Owner within AI/ML or enterprise SaaS domains
• A proven record of shipping high-impact data pipeline or data collection-related tools and services
• Familiarity with AI/ML workflows, especially within model development, data pipelines, or classification systems
• Experience collaborating with globally distributed product, engineering, and operations teams across time zones
• Excellent communication skills to drive clarity and alignment between business stakeholders and technical teams
• Bias for action and a willingness to roll up your sleeves and do what is necessary to meet team goals
• Experience translating user-centric requirements and specifications into user stories and tasks
• Superior attention to detail, including the ability to manage multiple projects simultaneously
• Strong verbal and written communication skills, including strong audience awareness
• Experience with shared SDLC and workspace tools like JIRA, Confluence, and data reporting platforms

Preferred Qualifications
• Direct experience with applied AI/ML engineering services
• Strong understanding of supervised and unsupervised ML models, including their training data needs and lifecycle impacts
• Background in fintech supporting content collation, management, and engineering implementation
• Experience with data quality measurement, annotation systems, knowledge graphs, and ML model evaluation
• Exposure to cloud-based ML infrastructure and data pipeline orchestration tools such as AWS SageMaker, GCP Vertex AI, Airflow, and dbt
• Certifications related to Agile Product Ownership / Product Management such as CSPO, PSPO, or POPM are a plus

Working Conditions
The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. This role collaborates with Seattle- and New York-based stakeholders, and typical overlap is between 6:30–8:30 AM Pacific. Limited corporate travel may be required to remote offices or other business meetings and events. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues.

PitchBook Data, Inc (Legal Entity)
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Title: AI/ML Architect / Senior AI/ML Engineer (8+ Years Experience)
Location: [Onsite/Remote/Hybrid – Customize as per your need]
Employment Type: Full-time

🔍 About the Role:
We are seeking a seasoned AI/ML Architect / Senior Engineer with 8+ years of hands-on experience in Artificial Intelligence, Machine Learning, and Data Science. The ideal candidate will have worked across various industries (e.g., healthcare, finance, retail, manufacturing) and demonstrated a deep understanding of the end-to-end ML lifecycle, from data ingestion to model deployment and monitoring. You’ll play a strategic and technical leadership role in designing and scaling intelligent systems while staying ahead of evolving market trends in AI, ML, and GenAI.

🎯 Key Responsibilities:
• Architect, design, and implement scalable AI/ML solutions across multiple domains.
• Translate business problems into technical solutions using data-driven methodologies.
• Lead model development, deployment, and operationalization using MLOps best practices.
• Evaluate and incorporate emerging trends such as Generative AI (e.g., LLMs), AutoML, Federated Learning, and Responsible AI.
• Mentor and guide junior engineers and data scientists.
• Collaborate with product managers, data engineers, and stakeholders for end-to-end delivery.
• Establish best practices in experimentation, model validation, reproducibility, and monitoring.
• Work with the modern data stack and cloud ecosystems (AWS, Azure, GCP).

🧠 Required Skills and Experience:
• 8+ years of experience in AI/ML, Data Science, or related roles.
• Proficient in Python, R, and SQL, plus key libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost, etc.).
• Strong experience with MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
• Expertise in developing, tuning, and deploying ML/DL models in production environments.
• Experience in NLP, computer vision, time-series forecasting, and/or GenAI.
• Familiarity with model explainability (SHAP, LIME), fairness, and bias mitigation techniques.
• Solid knowledge of cloud-based architectures (Azure, AWS, or GCP).
• Experience across domains such as fintech, healthcare, e-commerce, logistics, or manufacturing.

🌐 Preferred Qualifications:
• Master's or Ph.D. in Computer Science, Data Science, Statistics, or a related field.
• Experience integrating AI with business applications (e.g., ERP, CRM, RPA platforms).
• Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines.
• Familiarity with data governance, privacy-preserving AI, and compliance standards (GDPR, HIPAA).

🌟 Why Join Us?
• Work with cross-functional, forward-thinking teams on impactful projects.
• Opportunity to lead initiatives in cutting-edge AI and Industry 4.0 innovations.
• Flexible work culture with continuous learning and growth opportunities.
• Access to the latest tools, cloud infrastructure, and high-compute environments.
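Explainability tools such as SHAP and LIME estimate how much each feature drives a prediction. A much simpler relative, permutation importance, conveys the same idea in a few lines: shuffle one feature and see how far accuracy falls. The model and data below are invented purely for illustration:

```python
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Shuffle one feature column and measure the drop in accuracy.
    A large drop means the model relies heavily on that feature."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, column):
        row[feature] = v
    return base - accuracy(model, X_perm, y)

# Toy model: the label is decided entirely by feature 0; feature 1 is noise.
model = lambda x: int(x[0] > 0.5)
X = [[0.1, 0.9], [0.2, 0.1], [0.8, 0.7], [0.9, 0.3], [0.3, 0.8], [0.7, 0.2]]
y = [model(x) for x in X]

print(permutation_importance(model, X, y, feature=0))
print(permutation_importance(model, X, y, feature=1))  # 0.0, the noise feature
```

SHAP refines this intuition with game-theoretic attribution per individual prediction, but the shuffle-and-measure idea is the gateway concept.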
Posted 2 weeks ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Define solutions to meet performance, capability, and scalability needs.
Must-have skills: Google Cloud Platform Architecture
Good-to-have skills: Google Cloud Machine Learning Services
Minimum 12 years of experience is required
Educational Qualification: 15 years full-time education

Summary:
As an AI/ML lead, you will be responsible for developing applications and systems that utilize AI tools and Cloud AI services. Your typical day will involve applying CCAI and GenAI models as part of the solution, utilizing deep learning, neural networks, and chatbots. You should have hands-on experience in creating, deploying, and optimizing chatbots and voice applications using Google Conversational Agents and other tools.

Roles & Responsibilities:
I. Solutioning and designing CCAI applications and systems utilizing Google Cloud Machine Learning Services, Dialogflow CX, Agent Assist, and Conversational AI.
II. Design, develop, and maintain intelligent chatbots and voice applications using Google Dialogflow CX.
III. Integrate Dialogflow agents with various platforms, such as Google Assistant, Facebook Messenger, Slack, and websites. Hands-on experience with IVR integration and telephony systems such as Twilio, Genesys, and Avaya.
IV. Integrate with IVR systems; proficiency in webhook setup and API integration.
V. Develop Dialogflow CX flows, pages, and webhooks, as well as playbooks and the integration of tools into playbooks.
VI. Create agents in Agent Builder and integrate them into an end-to-end pipeline using Python.
VII. Apply GenAI/Vertex AI models as part of the solution, utilizing deep learning, neural networks, chatbots, and image processing.
VIII. Work with Google Vertex AI for building, training, and deploying custom AI models to enhance chatbot capabilities.
IX. Implement and integrate backend services (using Google Cloud Functions or other APIs) to fulfill user queries and actions.
X. Document technical designs, processes, and setup for various integrations.
XI. Experience with programming languages such as Python/Node.js.

Professional & Technical Skills:
• Must-have skills: CCAI/Dialogflow CX hands-on experience and generative AI understanding.
• Good-to-have skills: Cloud Data Architecture; Cloud ML/PCA/PDE certification.
• Strong understanding of AI/ML algorithms, NLP, and related techniques.
• Experience with chatbots, generative AI models, and prompt engineering.
• Experience with cloud or on-prem application pipelines with production-ready quality.

Additional Information:
1. The candidate should have a minimum of 10 years of experience in Google Cloud Machine Learning Services/GenAI/Vertex AI/CCAI.
2. The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions. 15 years of full-time education is required.
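For the webhook work described in this posting, a Dialogflow CX fulfillment webhook ultimately returns a JSON body. The sketch below builds a minimal text reply; the field names follow the CX `WebhookResponse` shape as commonly documented, but verify them against Google's current reference before relying on this:

```python
import json

def cx_webhook_reply(text):
    """Build a minimal Dialogflow CX webhook response body carrying a
    single text message back to the agent."""
    return {
        "fulfillmentResponse": {
            "messages": [
                {"text": {"text": [text]}}
            ]
        }
    }

body = cx_webhook_reply("Your order has shipped.")
print(json.dumps(body, indent=2))
```

In production this dict would be serialized as the HTTP response of a Cloud Function (or any HTTPS endpoint) registered as the webhook for a CX flow or page.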
Posted 2 weeks ago
8.0 - 11.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION

Roles & Responsibilities
Here are some of the key responsibilities of a Sr. Generative AI Engineer:
• Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
• Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
• Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
• Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
• Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
• Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
• Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
• Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
• Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
• Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory Technical & Functional Skills
• Strong programming skills in Python and frameworks like PyTorch or TensorFlow.
• In-depth knowledge of deep learning (CNNs, RNNs, LSTMs, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms. Familiarity with frameworks like LangGraph/CrewAI/AutoGen to develop, deploy, and evaluate AI agents.
• Ability to test and deploy open-source LLMs from Hugging Face, e.g., Meta LLaMA 3.1, BLOOM, or Mistral AI models.
• Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences.
• Cloud computing experience, particularly with Google or Azure cloud platforms, is essential.
With strong foundation in understating Data Analytics Services offered by Google or Azure ( BigQuery/Synapse) Hands-on ML platforms offered through GCP : Vertex AI or Azure : AI Foundry or AWS SageMaker Large scale deployment of GenAI/DL/ML projects, with good understanding of MLOps /LLM Ops Preferred Technical & Functional Skills Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders Ability to work independently with minimal supervision, and escalate when needed Key behavioral attributes/requirements Ability to mentor junior developers Ability to own project deliverables, not just individual tasks Understand business objectives and functions to support data needs RESPONSIBILITIES Roles & responsibilities Here are some of the key responsibilities of Sr Generative AI Engineer : Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities to enhance AI capabilities. Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments. Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance. 
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures. Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection. Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks. Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement. Mandatory technical & functional skills Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In depth knowledge on Deep Learning - CNN, RNN, LSTM, Transformers LLMs ( BERT, GEPT, etc.) and NLP algorithms. Also, familiarity with frameworks like Langgraph/CrewAI/Autogen to develop, deploy and evaluate AI agents. Ability to test and deploy open source LLMs from Huggingface, Meta- LLaMA 3.1, BLOOM, Mistral AI etc. 
Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google/Azure Cloud Platform, is essential. With strong foundation in understating Data Analytics Services offered by Google or Azure ( BigQuery/Synapse) Hands-on ML platforms offered through GCP : Vertex AI or Azure : AI Foundry or AWS SageMaker Large scale deployment of GenAI/DL/ML projects, with good understanding of MLOps /LLM Ops Preferred Technical & Functional Skills Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders Ability to work independently with minimal supervision, and escalate when needed Key behavioral attributes/requirements Ability to mentor junior developers Ability to own project deliverables, not just individual tasks Understand business objectives and functions to support data needs #KGS QUALIFICATIONS This role is for you if you have the below Educational Qualifications PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence Preferences to research scholars from IITs, NITs and IIITs ( Research Scholars who are submitted their thesis) Work Experience 8 to 11 Years of experience with strong record of publications in top tier conferences and journals
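The "develop, deploy and evaluate AI agents" requirement above can be sketched at its core as a tool-dispatch loop. This toy stands in for frameworks like LangGraph or AutoGen: the tool names are invented, and the keyword-based `pick_tool` stub replaces the LLM that would choose tools in a real agent.

```python
# Toy sketch of an agent's tool-dispatch core. Real frameworks
# (LangGraph, CrewAI, AutoGen) delegate tool selection to an LLM;
# pick_tool below routes by keyword so the example runs standalone.
TOOLS = {
    "web_search": lambda q: f"results for '{q}'",
    "summarize":  lambda q: f"summary of {q}",
}

def pick_tool(request: str) -> str:
    # Stand-in for the LLM's tool-selection step.
    return "web_search" if "find" in request.lower() else "summarize"

def run_agent(request: str) -> str:
    tool = pick_tool(request)
    return TOOLS[tool](request)

print(run_agent("Find recent papers on multimodal models"))
```

Evaluating an agent then amounts to asserting that requests are routed to the expected tool and the tool output is well-formed.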
Posted 2 weeks ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Company Description
Banthry is an AI-powered legal companion designed to revolutionize the legal industry by automating research, drafting, opinion generation, and case management. Our proprietary domain-specific AI models streamline workflows, enabling legal professionals to focus on strategic decision-making and client engagement. Banthry aims to transform how legal professionals operate, enhancing efficiency and productivity.
Role Description
This is a remote role for an Artificial Intelligence Intern (Full Stack). The intern will be responsible for assisting in developing and implementing AI models, working on various agentic projects, writing and testing code, and working on both frontend and backend. Day-to-day tasks will include conducting data analysis, applying machine learning techniques, solving complex problems, building RAG pipelines and AI agents, and improving existing systems.
Qualifications
Strong foundation in computer science and programming
Proficiency with AI tools like Google CLI, Cursor, Vertex, Google Studio, Claude, etc.
Must be able to independently build frontend and backend (use of AI tools is a must for high efficiency and productivity)
Knowledge of and prior experience with fine-tuning LLMs, building RAG pipelines, and agentic AI
Ability to work in a high-pressure, short-deadline environment
Excellent written and verbal communication skills
Experience or knowledge in the legal industry is a plus
Pursuing or completed Bachelor's degree in Computer Science, Data Science, or a related field
Stipend and Work Schedule
Stipend: up to Rs. 12,000
Work days: Monday to Saturday (5-6 hours per day)
2-month internship
Performance-based PPO
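The "building RAG pipelines" line above can be illustrated with a toy version of the retrieval step. This sketch is illustrative only: bag-of-words cosine similarity stands in for the learned embeddings and vector store a real pipeline would use, and the document snippets are invented.

```python
# Toy illustration of the retrieval step in a RAG pipeline.
# Real systems embed text with a model and query a vector store;
# plain bag-of-words cosine similarity keeps this dependency-free.
import math
from collections import Counter

def embed(text):
    """Bag-of-words 'embedding': token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "non disclosure agreement template",
    "case law on data privacy in india",
    "kubernetes cluster autoscaling",
]
print(retrieve("data privacy case law", docs))
```

The retrieved passages would then be prepended to the LLM prompt as grounding context, which is the "generation" half of the pipeline.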
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Salary: As per experience.
Experience: 3-5 years in Machine Learning or AI engineering.
Job Summary: We are seeking a passionate and skilled AI/ML Engineer to join our team to design, develop, and deploy machine learning models and intelligent systems. You will work closely with software developers and product managers to integrate AI solutions into real-world applications.
Key Responsibilities:
· Design, develop, and train machine learning models (e.g., clustering, NLP).
· Basic understanding of AI algorithms and underlying models (RAG/CAG).
· Build scalable pipelines for data ingestion, preprocessing, and model deployment.
· Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Hugging Face.
· Collaborate with cross-functional teams to define business problems and develop AI-driven solutions.
· Monitor model performance and ensure continuous learning and improvement.
· Deploy ML models using Docker, CI/CD, and cloud services like AWS/Azure/GCP.
· Stay updated with the latest AI research and apply best practices to business use cases.
Requirements:
· Bachelor's or Master's degree in Computer Science or a related field.
· Strong knowledge of Python and ML libraries (pandas, NumPy, etc.).
· Experience with NLP, Computer Vision, or Recommendation Systems is a plus.
· Familiarity with model evaluation metrics and handling bias/fairness in ML models.
· Good understanding of REST APIs and cloud-based AI solutions.
· Experience with Generative AI (e.g., OpenAI, LangChain, LLM fine-tuning).
· Experience with any of Azure SQL, Databricks, or Snowflake.
Preferred Skills:
· Experience with vector databases, semantic search, and agentic frameworks.
· Experience using platforms like Azure AI, AWS SageMaker, or Google Vertex AI.
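The "model evaluation metrics" requirement above boils down to arithmetic like the following. scikit-learn provides these as `precision_score`, `recall_score`, and `f1_score`; the hand-rolled version shows the counts explicitly.

```python
# Binary classification metrics computed by hand from
# true-positive / false-positive / false-negative counts.
def prf1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = prf1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Checking these per subgroup (e.g., by region or demographic slice) is one common starting point for the bias/fairness analysis the posting mentions.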
Posted 2 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP BTP Integration Suite, SAP FI S/4HANA Accounting, solid experience in corporate tax regimes, including Sales & Use
Good-to-have skills: No Function Specialty
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing project progress, coordinating with teams, and ensuring successful application development. We are seeking a Senior Tax Technology Specialist to join our team. This role requires a seasoned professional with extensive experience in tax engines, indirect tax management (particularly Vertex O Series), and a strong foundation in SAP systems. The ideal candidate will manage complex tax projects across Sales and Use Tax, helping to streamline tax processes and maintain global compliance.
Roles & Responsibilities:
- Implementing new requirements and maintaining the Vertex O Series system, focusing on the global indirect tax solution
- Configuring and managing Vertex tax rules, rates, and jurisdictions to ensure precise and compliant tax calculations for all transactions
- Supporting mapping updates to tax matrices and conducting end-to-end testing to ensure no regression impacts across jurisdictions (US and OUS)
- Collaborating with IT and finance teams to align tax systems with business needs and compliance requirements
- Developing and maintaining detailed documentation, including SOPs and user guides, for Vertex-related processes
Professional & Technical Skills:
- Must-have skills: Proficiency in SAP integration with Vertex O Series/Sabrix, along with SAP FI CO Finance
- Strong understanding of SAP FI CO Finance
- Extensive experience with Vertex O Series and familiarity with SAP tax-related solutions
- Strong knowledge of tax regimes, including Sales & Use, VAT, GST, HST, and Corporate Tax
- Excellent analytical skills and keen attention to detail
- Good-to-have skills: Experience in BRIM/FICA modules and DRC is beneficial but not mandatory
- Good-to-have skills: Experience in SAP ABAP development, SAP PI/PO, and SAP SD/MM modules
Additional Information:
- The candidate should have a minimum of 8+ years of experience in SAP integration with Vertex O Series/Sabrix.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
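The jurisdiction-based tax calculation described above can be reduced to a rate lookup, shown here as a deliberately simplified sketch. This is not the Vertex O Series or AvaTax API: a real tax engine handles sourcing rules, exemptions, product taxability, and thousands of jurisdictions, and the rate table below is a small illustrative subset.

```python
# Simplified jurisdiction-keyed tax lookup (illustrative only;
# a real engine like Vertex O Series does far more than this).
from typing import Optional

RATES = {  # (country, region) -> rate; small illustrative subset
    ("US", "CA"): 0.0725,
    ("US", "TX"): 0.0625,
    ("DE", None): 0.19,
}

def calculate_tax(amount: float, country: str, region: Optional[str]) -> float:
    rate = RATES.get((country, region))
    if rate is None:
        raise KeyError(f"no rate configured for {country}/{region}")
    return round(amount * rate, 2)

print(calculate_tax(100.0, "US", "CA"))  # 7.25
```

The "end-to-end testing across jurisdictions" responsibility amounts to asserting calculations like these against expected values for every jurisdiction after each rule change.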
Posted 2 weeks ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Roles & responsibilities
Strategic Leadership & Vision
Lead and manage a 100-member AI delivery team to ensure successful project delivery.
Develop and implement AI strategies and solutions in collaboration with Product Leads and Solution Architects.
Ensure all PODs are aligned with project timelines and organizational objectives, delivering high quality and excellent CSAT.
Drive vendor teams to meet project timelines and deliverables.
Stakeholder Engagement & Communication
Collaborate with stakeholders to define project scope, requirements, and deliverables.
Communicate project status, updates, and issues to stakeholders regularly.
Resolve conflicts and provide solutions to ensure smooth project execution.
Project Execution & Delivery Oversight
Monitor project progress and performance (KPIs), ensuring timely and within-budget delivery.
Manage project budgets, resources, and timelines effectively.
Identify and mitigate risks to ensure project success.
Team Management & Culture Building
Provide leadership and guidance to team members.
Foster a collaborative and innovative work environment.
Ensure compliance with industry standards and regulations.
Mandatory technical & functional skills
AI & Machine Learning Expertise
Understanding of supervised, unsupervised, and reinforcement learning; NLP and vision.
Experience with AI/ML platforms (e.g., Azure ML, AWS SageMaker, Google Vertex AI).
Data Engineering & Analytics
Proficiency in data pipelines, ETL processes, and data governance.
Strong grasp of data quality, lineage, and auditability.
Knowledge of big data tools (e.g., Spark, Hadoop, Databricks).
Cloud & Infrastructure
Hands-on experience with cloud platforms (Azure preferred in enterprise audit environments).
Understanding of containerization (Docker, Kubernetes) and CI/CD pipelines.
Audit Domain Knowledge
Familiarity with audit workflows, risk assessment models, and compliance frameworks.
Understanding of regulatory standards (e.g., SOX, GDPR, ISO 27001).
Project & Program Management Tools
Proficiency in tools like JIRA, Confluence, MS Project, and Azure DevOps.
Experience with Agile, Scrum, and SAFe methodologies.
Strategic Planning & Execution
Ability to translate business goals into an AI project roadmap.
Experience in managing multi-disciplinary teams across geographies.
Stakeholder Management
Strong communication and negotiation skills with internal and external stakeholders.
Ability to manage expectations and drive consensus.
Risk & Compliance Management
Proactive identification and mitigation of project risks.
Ensuring compliance with internal audit standards and external regulations.
Leadership & Team Development
Proven ability to lead large teams, mentor senior leads, and foster innovation.
Conflict resolution and performance management capabilities.
Key behavioral attributes/requirements
Demonstrates the ability to think critically and the confidence to solve problems and suggest solutions.
A quick learner who demonstrates adaptability to change, with strong stakeholder and negotiation skills.
Willing and able to deliver under tight timelines based on business needs, including working on weekends.
Willingness to work to delivery timelines, with flexibility to stretch beyond regular hours depending on project criticality.
#KGS
Qualifications
This role is for you if you have the below:
Educational Qualifications
B.Tech or M.Tech in CSE
Work Experience
14+ years of professional relevant experience
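The "data pipelines, ETL processes" line above can be sketched as a minimal extract-transform-load pass. Production pipelines would use Spark/Databricks or an orchestrator such as Airflow, and the record shapes here are invented; only the three-stage structure is the point.

```python
# Toy extract-transform-load pass showing the shape of an ETL job.
def extract():
    # Stand-in for reading from a source system.
    return [{"id": 1, "amount": "120.50"}, {"id": 2, "amount": "80.00"}]

def transform(rows):
    # Cast amounts to float and drop malformed rows (basic data quality).
    out = []
    for row in rows:
        try:
            out.append({"id": row["id"], "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue
    return out

def load(rows, store):
    # Stand-in for writing to a warehouse table keyed by id.
    for row in rows:
        store[row["id"]] = row["amount"]
    return store

store = load(transform(extract()), {})
print(store)  # {1: 120.5, 2: 80.0}
```

Data lineage and auditability, also listed above, come from logging what each stage received and emitted, which this structure makes straightforward.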
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do
Oversee the implementation of detailed technology solutions for clients using company products, outsourced solutions, or proprietary tools/techniques. As a member of the Avalara Implementation team, your goal is to provide world-class service to our customers. You will live by our "cult of the customer" philosophy and will increase the satisfaction of our customers. As part of the Implementation team, you'll focus on new product introductions, with an enhanced focus on customer onboarding. You will work from the Pune office 5 days a week. You will report to the Manager, Implementation (Viman Nagar, Pune).
What Your Responsibilities Will Be
Lead planning and delivery of multiple client implementations simultaneously.
Ensure that customer requirements are defined and met within the configuration and the final deliverable.
Coordinate between internal implementation and technical resources and client teams to ensure smooth delivery.
Assist clients with developing testing plans and procedures.
Train clients on all Avalara products and services, including the ERP and e-commerce integrations (called "AvaTax connectors").
Demo sales and use tax products, including pre-written and custom-built software applications.
Support customers' success by answering application questions, tracking issues, monitoring changes, and resolving or escalating problems according to company guidelines.
Provide training and end-user support during customer onboarding.
Given our clientele based in the US/UK, you should be ready to work in shifts as per business requirements.
What You'll Need To Be Successful
2-5 years of software implementation experience within the B2B sector.
Bachelor's degree (BCA, MCA, B.Tech) from an accredited college or university, or equivalent career experience.
Experience in implementing ERP solutions.
Understanding of tax, tax processes, and data and systems concepts, including complex issues related to them.
Experience in a techno-functional role and the ability to translate requirements into technical configurations.
Flexibility and a willingness to immerse themselves quickly in the details of projects.
Personify the Avalara Success Traits: Ownership, Simplicity, Curiosity, Adaptability, Urgency, Optimism, Humility.
Preferred Qualifications
Ability to install and configure the following ERPs: WooCommerce, Sage 100, Sage Intacct, Dynamics GP, D365 Sales, D365 Business Central, Salesforce Sales Cloud, NetSuite, QuickBooks, along with the ability to explain the various configuration options and demonstrate sales order/invoicing processes.
Experience with tax automation: leading the implementation of tax engines, returns, and/or exemption certificate systems for Avalara, TaxJar, Vertex, or similar software.
Knowledgeable in APIs.
How We'll Take Care Of You
Total Rewards
In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness
Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity
Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
What You Need To Know About Avalara
We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real: we're a billion-dollar business, and we're not slowing down until we've achieved our mission to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear.
It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, which empowers our people to win. We've been different from day one. Join us, and your career will be too.
We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company: we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Summary Position Summary Consultant – Tax Technology Consulting – Oracle EBS Do you have a passion to work for US-based clients of Deloitte Tax and transform their current state of tax to the next generation of tax functions? Are you ready to take the next step in your career to find new methods and processes to assist clients in improving their tax operations? Are you ready to fulfill your potential and want to have a significant impact on global initiatives? If the answer to all the above is “Yes,” come join the Tax Technology Consulting group in Deloitte India (Offices of the U.S), a service line of Deloitte Tax LLP! Deloitte Tax Services India Private Limited commenced operations in June 2004. Since then, nearly all of the Deloitte Tax LLP (“Deloitte Tax”) U.S. service lines have obtained support through Deloitte Tax in India. Deloitte Tax in India offers you opportunities to gain experience in U.S. taxation, a much sought-after career option. At Deloitte, we are leading clients through the tax transformation taking place in the marketplace. We offer a broad range of fully integrated tax services and add greater impact to clients by combining technology and tax technical resources to uncover insights and smarter solutions for navigating an increasingly complex global environment. Work you will do Increasingly complex tax decisions can have a significant effect—positive or negative—on the future of our clients’ business. Our approach combines insight and innovation from multiple disciplines with business and industry knowledge to help our clients excel globally. 
Key responsibilities will be: Conduct client workshops. Gather and document tax requirements for business and perform system fit and gap analysis. Advise clients on tax department strategy/policy, including tax assessment from a people, process, technology, and governance point of view. Drive process improvements, redesign client tax departments, and evaluate automation opportunities. Work on design and development of tax solutions. Conduct user acceptance testing to compile comprehensive test scenarios and identify flaws as well as improvements to newly built systems and processes. Qualification And Experience Required – Full-time Master’s/Bachelor’s in Engineering/Finance/Accounts or equivalent from a reputed university. MBA or Chartered Accountant with experience in Finance, Accounting, Taxation, and Auditing. 3-6 years of experience in Oracle EBS finance modules or Oracle Financials Cloud modules that impact tax. Preferred experience with the following Oracle modules: E-Business Tax/Oracle ERP Cloud tax module (Withholding Tax application), Trading Community Architecture, Order Management/iStore, Accounts Receivables, Purchasing/iExpense, Accounts Payable (Withholding Tax application), Supplier Master/iSupplier Portal, Fixed Assets, Project Accounting, General Ledger, Oracle BI. Financial consolidation processes and applications (e.g., Hyperion applications). Proficiency in MS Office applications, specifically Excel, Word, PowerPoint, and Access. Effective communication with strong relationship management skills. Team player, adhering to the timelines for finishing deliverables. Strong project management and leadership abilities. Relentless focus on quality of work products while adhering to completing deliverables on time. Preferred: Knowledge of business and tax processes, creating functional specifications, identifying and developing requirements for new reports, preparing test scripts, and providing user training and support. Indirect Tax (VAT, Sales/Use) and/or Direct Tax (income, provision), withholding tax experience. Knowledge of country-specific localization capabilities of Oracle EBS and Oracle Fusion applications. Experience with third-party tax software like Vertex, ONESOURCE, SOVOS (Taxware), Avalara, etc. Basic or advanced knowledge of PL/SQL. The Team Tax Technology Consulting (TTC) - Ever expanding regulations and increasing scrutiny on multinational corporations has made it necessary for leading-edge tax departments to serve a critical role in the risk management and overall performance of the enterprise. This has resulted in an opportunity for Deloitte to provide even greater value through our tax services, in helping develop tax departments of the future that are strategic, agile, and focused on creating value for the business. Deloitte's TTC group helps our clients’ tax departments move forward from their current state to the next generation of tax functions and is dedicated to finding new methods and processes to assist clients in improving their tax operations. Deloitte Tax LLP professionals are aligned worldwide to serve our clients’ needs through the TTC group. Deloitte TTC teams include industry, tax, organizational change, technology, and co-sourcing specialists who can help make the necessary connections between our clients’ global strategies and the many options for carrying them out in the tax function. How You Will Grow At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. 
DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India . Deloitte supports your progression through a well-defined career path by providing challenging assignments, mentoring, and targeted trainings. Recent postgraduates begin as a consultant. The career path from there is to senior consultant, then manager, senior manager and onto a path to director, partner, or principal. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306439
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Role & Responsibilities Utilize Google Cloud Platform & Data Services to modernize legacy applications. Understand technical business requirements and define architecture solutions that align to Ford Motor & Credit Companies Patterns and Standards. Collaborate and work with global architecture teams to define analytics cloud platform strategy and build Cloud analytics solutions within enterprise data factory. Provide Architecture leadership in design & delivery of new Unified data platform on GCP. Understand complex data structures in analytics space as well as interfacing application systems. Develop and maintain conceptual, logical & physical data models. Design and guide Product teams on Subject Areas and Data Marts to deliver integrated data solutions. Leverage cloud AI/ML Platforms to deliver business and technical requirements. Provide architectural guidance for optimal solutions considering regional Regulatory needs. Provide architecture assessments on technical solutions and make recommendations that meet business needs and align with architectural governance and standard. Guide teams through the enterprise architecture processes and advise teams on cloud-based design, development, and data mesh architecture. Provide advisory and technical consulting across all initiatives including PoCs, product evaluations and recommendations, security, architecture assessments, integration considerations, etc. Responsibilities Required Skills and Selection Criteria: Google Professional Solution Architect certification. 8+ years of relevant work experience in analytics application and data architecture, with deep understanding of cloud hosting concepts and implementations. 5+ years’ experience in Data and Solution Architecture in analytics space. Solid knowledge of cloud data architecture, data modelling principles, and expertise in Data Modeling tools. 
Experience in migrating legacy analytics applications to cloud platforms and driving business adoption of these platforms to build insights and dashboards, through deep knowledge of traditional and cloud Data Lake, Warehouse, and Mart concepts. Good understanding of domain-driven design and data mesh principles. Experience with designing, building, and deploying ML models to solve business challenges using Python/BQML/Vertex AI on GCP. Knowledge of enterprise frameworks and technologies. Strong in architecture design patterns, experience with secure interoperability standards and methods, architecture tools and processes. Deep understanding of traditional and cloud data warehouse environments, with hands-on programming experience building data pipelines on cloud in a highly distributed and fault-tolerant manner. Experience using Dataflow, Pub/Sub, Kafka, Cloud Run, Cloud Functions, BigQuery, Dataform, Dataplex, etc. Strong understanding of DevOps principles and practices, including continuous integration and deployment (CI/CD), automated testing & deployment pipelines. Good understanding of cloud security best practices and familiarity with different security tools and techniques like Identity and Access Management (IAM), Encryption, Network Security, etc. Strong understanding of microservices architecture. Qualifications Nice to Have Bachelor’s degree in Computer Science/Engineering, Data Science, or related field. Strong leadership, communication, interpersonal, organizing, and problem-solving skills. Good presentation skills with ability to communicate architectural proposals to diverse audiences (user groups, stakeholders, and senior management). Experience in Banking and Financial Regulatory Reporting space. Ability to work on multiple projects in a fast-paced & dynamic environment. Exposure to multiple, diverse technologies, platforms, and processing environments.
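The fault-tolerant pipeline experience described above can be illustrated with a minimal retry-with-backoff sketch. This is plain Python with hypothetical step and function names, not a specific cloud SDK; managed services such as Dataflow layer comparable retry semantics on top of their infrastructure:

```python
import time

def run_with_retries(step, *args, max_attempts=4, base_delay=0.01):
    """Run a pipeline step, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(*args)
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.01s, 0.02s, 0.04s, ...

# Toy flaky step (hypothetical): fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_extract(batch):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return [row.upper() for row in batch]

result = run_with_retries(flaky_extract, ["a", "b"])
print(result)      # ['A', 'B']
print(calls["n"])  # 3 attempts in total
```

In a distributed setting the same idea is usually combined with idempotent steps, so a retried task can safely re-run without duplicating output.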
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderābād
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India . Minimum qualifications: Bachelor’s degree or equivalent practical experience. 5 years of experience with software development in one or more programming languages. 3 years of experience testing, maintaining, or launching software products. 1 year of experience with software design and architecture. Preferred qualifications: 5 years of experience with data structures/algorithms. 1 year of experience in a technical leadership role. Experience developing accessible technologies. About the job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. 
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Participate in, or lead design reviews with peers and stakeholders to decide amongst available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Build large-scale data processing pipelines with appropriate quality/reliability checks. Debug large-scale data pipelines. Build proper monitoring for both the health of data pipelines and quality of data. Treat access/privacy/compliance as first class operators for the data pipelines. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
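The pipeline-monitoring responsibility above ("proper monitoring for both the health of data pipelines and quality of data") can be sketched as a toy batch-level quality check. Field names and metrics here are illustrative assumptions; a production pipeline would export such metrics to a monitoring system rather than return them:

```python
def quality_report(rows, required_fields):
    """Compute simple data-quality metrics (row count, per-field null rate) for a batch."""
    total = len(rows)
    null_counts = {f: sum(1 for r in rows if r.get(f) is None) for f in required_fields}
    return {
        "row_count": total,
        "null_rate": {f: (null_counts[f] / total if total else 0.0) for f in required_fields},
    }

# Hypothetical batch of records with some missing values.
batch = [
    {"user_id": 1, "country": "IN"},
    {"user_id": 2, "country": None},
    {"user_id": None, "country": "US"},
]
report = quality_report(batch, ["user_id", "country"])
print(report["row_count"])             # 3
print(report["null_rate"]["country"])  # ≈ 0.33
```

Alerting would then compare these rates against thresholds, so a sudden spike in nulls fails the pipeline run instead of silently corrupting downstream data.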
Posted 2 weeks ago
0.6 years
1 - 2 Lacs
Jaipur
On-site
Snapmint is on a mission of democratizing no/low-cost installment purchases for the next 200 Mn Indians. Of the 300 million credit-eligible consumers in India, less than 30 million actively use credit cards. Snapmint is reinventing credit for the next 200M consumers by providing them the freedom to buy what they want and pay for it in installments without a credit card. In a short period of time, Snapmint has reached over a million consumers in 2200 cities and has powered over 200 crores worth of purchases. Job Title: Customer Service Executive (Voice/Non-Voice) Department: Customer Support / Service Location: Dev Nagar, Tonk Road Experience Required: 0.6-3 Years Industry: E-commerce / Banking / Customer Service / Call Center Minimum 0.6 year of experience in Email/Chat/In-call and Outbound calling process in any BPO, e-commerce, fintech, or other customer experience organization. Handle inbound and/or outbound customer calls professionally. For Email/Chat, typing speed must be between 30-35 wpm. Proven customer support experience or experience as a Client Service Representative. Strong phone contact handling skills and active listening. Escalate unresolved issues to the appropriate departments or higher authorities. Follow up with customers when necessary and ensure resolution. Graduation and above. Excellent verbal/written communication skills and basic computer knowledge. Age should be a maximum of 28. Work from the office and 6 days of work (roster off). Near Tonk Road, Dev Nagar (maximum travel distance should be 10-12 km). Tamil, Telugu, or Malayalam language skills are a plus. Benefits: Fixed Salary + Incentives Paid training and supportive team environment Career growth opportunities Health insurance PF is included Candidates with experience from Teleperformance, Vertex Cosmos, Girnar, Dealshare, Innovana Thinklabs, and Dr. ITM will be a plus. 
Job Types: Full-time, Permanent Pay: ₹12,000.00 - ₹22,000.00 per month Benefits: Health insurance Life insurance Paid sick time Provident Fund Schedule: Day shift Fixed shift Supplemental Pay: Performance bonus Language: Hindi (Preferred) English (Preferred) Work Location: In person
Posted 2 weeks ago
3.0 years
16 - 20 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
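The retrieval step at the heart of the RAG work described above reduces to nearest-neighbor search over embeddings. A toy pure-Python sketch, with hand-picked stand-in vectors and hypothetical document names; a real system would use a model's embeddings and a vector store such as Pinecone, Weaviate, or FAISS:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, k=2):
    """Return the texts of the k documents most similar to the query embedding."""
    scored = sorted(index, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in scored[:k]]

# Toy 3-dimensional "embeddings"; hypothetical procurement documents.
index = [
    {"text": "invoice matching rules", "embedding": [0.9, 0.1, 0.0]},
    {"text": "supplier onboarding guide", "embedding": [0.0, 1.0, 0.1]},
    {"text": "payment terms policy", "embedding": [0.8, 0.2, 0.1]},
]
query = [1.0, 0.0, 0.0]  # pretend this embeds "how are invoices matched?"
print(retrieve(query, index, k=2))  # ['invoice matching rules', 'payment terms policy']
```

The retrieved texts are then stuffed into the LLM prompt as context; frameworks like LangChain wrap exactly this pattern behind their retriever abstractions.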
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
3.0 years
16 - 20 Lacs
Noida, Uttar Pradesh, India
Remote
Posted 2 weeks ago
3.0 years
16 - 20 Lacs
Agra, Uttar Pradesh, India
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' client - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, rag, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining them. At Sensecloud, we envision a future where Procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards —just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. If you’re ready to help us build the future of procurement, analytics come join the ride. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL databases, vector stores such as Pinecone and Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills:
- 3–5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.
Core Soft Skills:
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves:
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- A collaborative culture that values experimentation, continuous learning, and work-life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
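The RAG and vector-database skills this posting asks for boil down to one core operation: embed a query, rank stored chunks by similarity, and feed the top hits to an LLM as context. A minimal illustrative sketch follows; it uses a toy in-memory store with made-up embeddings rather than any specific framework's API (Pinecone, FAISS, and the LangChain retrievers named above all wrap this same idea):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy in-memory "vector store": text chunk -> pretend embedding.
# Real systems would use an embedding model and a vector database.
store = {
    "Invoices are approved by the finance team.": [0.9, 0.1, 0.0],
    "Suppliers are onboarded via the vendor portal.": [0.1, 0.8, 0.2],
    "Quarterly spend reports are generated in Snowflake.": [0.2, 0.1, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "spend reports" chunk.
context = retrieve([0.25, 0.05, 0.85], k=1)
prompt = ("Answer using only this context:\n" + "\n".join(context)
          + "\n\nQ: Where are spend reports generated?")
```

The prompt string would then go to the LLM; the grounding in retrieved context is what reduces the hallucinations the MLOps bullet above asks candidates to monitor.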
Posted 2 weeks ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Business Analyst Lead – Generative AI
Experience: 7–15 Years
Location: Bangalore
Designation Level: Lead

Role Overview:
We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes.

Key Responsibilities:
1. GenAI Strategy & Opportunity Identification
- Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data).
- Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs such as GPT-4 Enterprise).
- Evaluate ROI and adoption feasibility across departments.

2. Requirements Engineering for GenAI Projects
- Define and document both functional and non-functional requirements tailored to GenAI systems:
  - Accuracy thresholds (e.g., hallucination rate under 5%)
  - Ethical guardrails (e.g., PII redaction, bias mitigation)
  - Latency SLAs (e.g., response time under 2 seconds)
- Develop prompt engineering guidelines, testing protocols, and iteration workflows.

3. Stakeholder Collaboration & Communication
- Translate technical GenAI concepts into business-friendly language.
- Manage expectations on probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review).
- Use storytelling and outcome-driven communication (e.g., "Automated claims triage reduced handling time by 40%").

4. Business Analysis & Process Modeling
- Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI).
- Model current and future business processes using BPMN to reflect human-AI collaboration.

5. Tools & Technical Proficiency
- Hands-on experience with LangChain and LlamaIndex for LLM integration.
- Knowledge of vector databases, RAG architectures, and LoRA-based fine-tuning.
- Experience using Azure OpenAI Studio, Google Vertex AI, and Hugging Face.
- Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI).

6. Governance & Performance Monitoring
- Define KPIs for GenAI performance:
  - Token cost per interaction
  - User trust scores
  - Automation rate and model drift tracking
- Support regulatory compliance with audit trails and documentation aligned with the EU AI Act and other industry standards.

Required Skills & Experience:
- 7–10 years of experience in business analysis or product ownership, with a recent focus on Generative AI or applied ML.
- Strong understanding of the GenAI ecosystem and solution lifecycle, from ideation to deployment.
- Experience working closely with data science, engineering, product, and compliance teams.
- Excellent communication and stakeholder management skills, with a focus on enterprise environments.

Preferred Qualifications:
- Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.AI)
- Familiarity with compliance and AI regulations (GDPR, EU AI Act).
- Experience in BFSI, healthcare, telecom, or other regulated industries.
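The non-functional thresholds this posting specifies (hallucination rate under 5%, responses under 2 seconds, token cost per interaction) are simple to check once interactions are logged. A minimal illustrative sketch follows; the log field names and the price per 1,000 tokens are hypothetical, not tied to any particular vendor or schema:

```python
# Illustrative KPI check over logged GenAI interactions.
# Field names (hallucinated, latency_s, tokens) and pricing are assumptions.
logs = [
    {"hallucinated": False, "latency_s": 1.2, "tokens": 300},
    {"hallucinated": True,  "latency_s": 2.5, "tokens": 450},
    {"hallucinated": False, "latency_s": 0.9, "tokens": 210},
    {"hallucinated": False, "latency_s": 1.8, "tokens": 380},
]

def kpis(logs, cost_per_1k_tokens=0.002):
    """Aggregate the KPIs named in the posting from raw interaction logs."""
    n = len(logs)
    return {
        "hallucination_rate": sum(x["hallucinated"] for x in logs) / n,
        "latency_breach_rate": sum(x["latency_s"] >= 2.0 for x in logs) / n,
        "avg_cost_per_interaction": sum(x["tokens"] for x in logs) / n / 1000 * cost_per_1k_tokens,
    }

report = kpis(logs)

# Flag breaches of the posting's thresholds: <5% hallucinations, <2 s latency.
alerts = [name for name, breached in [
    ("hallucination", report["hallucination_rate"] >= 0.05),
    ("latency", report["latency_breach_rate"] > 0.0),
] if breached]
```

With the toy logs above, one hallucinated answer out of four pushes the rate to 25%, so both the hallucination and latency alerts fire; in practice these checks would feed the audit trails and drift tracking described under Governance & Performance Monitoring.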
Posted 2 weeks ago