12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Define solutions to meet performance, capability, and scalability needs.
Must-have skills: Google Cloud Platform Architecture
Good-to-have skills: Google Cloud Machine Learning Services
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an AI/ML Lead, you will be responsible for developing applications and systems that utilize AI tools and Cloud AI services. Your typical day will involve applying CCAI and Gen AI models as part of the solution, utilizing deep learning, neural networks, and chatbots. You should have hands-on experience in creating, deploying, and optimizing chatbots and voice applications using Google Conversational Agents and other tools.

Roles & Responsibilities:
I. Solution and design CCAI applications and systems utilizing Google Cloud Machine Learning Services, Dialogflow CX, Agent Assist, and Conversational AI.
II. Design, develop, and maintain intelligent chatbots and voice applications using Google Dialogflow CX.
III. Integrate Dialogflow agents with various platforms, such as Google Assistant, Facebook Messenger, Slack, and websites; hands-on experience with IVR integration and telephony systems such as Twilio, Genesys, and Avaya.
IV. Integrate with IVR systems; proficiency in webhook setup and API integration.
V. Develop Dialogflow CX flows, pages, and webhooks, as well as playbooks, including integrating tools into playbooks.
VI. Create agents in Agent Builder and integrate them into the end-to-end pipeline using Python.
VII. Apply GenAI/Vertex AI models as part of the solution, utilizing deep learning, neural networks, chatbots, and image processing.
VIII. Work with Google Vertex AI to build, train, and deploy custom AI models that enhance chatbot capabilities.
IX. Implement and integrate backend services (using Google Cloud Functions or other APIs) to fulfill user queries and actions (a minimal webhook sketch follows this posting).
X. Document technical designs, processes, and setup for various integrations.
XI. Experience with programming languages such as Python/Node.js.

Professional & Technical Skills:
- Must-Have Skills: Hands-on CCAI/Dialogflow CX experience and an understanding of generative AI.
- Good-to-Have Skills: Cloud Data Architecture; Cloud ML/PCA/PDE certification.
- Strong understanding of AI/ML algorithms, NLP, and related techniques.
- Experience with chatbots, generative AI models, and prompt engineering.
- Experience building cloud or on-prem application pipelines with production-ready quality.

Additional Information:
1. The candidate should have a minimum of 10 years of experience in Google Cloud Machine Learning Services/Gen AI/Vertex AI/CCAI.
2. The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions.
15 years full-time education
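For illustration of the webhook fulfillment pattern this role describes, below is a minimal sketch of a Dialogflow CX webhook handler for the Google Cloud Functions Python runtime; the webhook tag, session parameter names, and reply text are illustrative assumptions, not part of the posting.

```python
# A minimal sketch of a Dialogflow CX webhook fulfillment handler for the
# Google Cloud Functions Python runtime. The webhook tag ("order-status") and
# session parameter names are illustrative assumptions, not from the posting.
import functions_framework


@functions_framework.http
def handle_webhook(request):
    """Build a fulfillment response for a Dialogflow CX webhook call."""
    body = request.get_json(silent=True) or {}
    tag = body.get("fulfillmentInfo", {}).get("tag", "")
    params = body.get("sessionInfo", {}).get("parameters", {})

    if tag == "order-status":  # hypothetical tag configured on a CX page
        order_id = params.get("order_id", "unknown")
        reply = f"Order {order_id} is being processed."
    else:
        reply = "Sorry, I could not find that information."

    # Dialogflow CX reads messages from fulfillment_response in the JSON reply.
    return {
        "fulfillment_response": {
            "messages": [{"text": {"text": [reply]}}]
        }
    }
```

Deployed behind an HTTPS endpoint, one handler like this can serve several CX pages by branching on the webhook tag.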
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION

Roles & responsibilities
Here are some of the key responsibilities of a Sr Generative AI Engineer:
Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills
- Strong programming skills in Python and frameworks like PyTorch or TensorFlow.
- In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms; familiarity with frameworks like LangGraph/CrewAI/AutoGen to develop, deploy, and evaluate AI agents.
- Ability to test and deploy open-source LLMs from Hugging Face (Meta Llama 3.1, BLOOM, Mistral AI, etc.); a minimal loading sketch follows this posting.
- Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences.
- Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential.
- Strong foundation in the data analytics services offered by Google or Azure (BigQuery/Synapse).
- Hands-on experience with ML platforms such as GCP Vertex AI, Azure AI Foundry, or AWS SageMaker.
- Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.

Preferred Technical & Functional Skills
- Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders.
- Ability to work independently with minimal supervision, and escalate when needed.

Key behavioral attributes/requirements
- Ability to mentor junior developers.
- Ability to own project deliverables, not just individual tasks.
- Understand business objectives and functions to support data needs.
#KGS

QUALIFICATIONS
This role is for you if you have the below:
Educational Qualifications: PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs, and IIITs (research scholars who have submitted their thesis).
Work Experience: 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals.
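As a concrete illustration of the "test and deploy open-source LLMs" requirement, the sketch below loads an instruction-tuned checkpoint from the Hugging Face Hub with the transformers library; the model ID is only an example, and loading a 7B model this way assumes a GPU plus the accelerate package.

```python
# A minimal sketch: generate text with an open-weight LLM from the Hugging Face Hub.
# The model ID is illustrative; swap in any checkpoint whose licence fits your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-source model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

prompt = "List three evaluation metrics for a text-generation model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```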
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB DESCRIPTION

Roles & responsibilities
Here are some of the key responsibilities of a Sr Generative AI Engineer:
Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills
- Strong programming skills in Python and frameworks like PyTorch or TensorFlow (a minimal training-loop sketch follows this posting).
- In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms; familiarity with frameworks like LangGraph/CrewAI/AutoGen to develop, deploy, and evaluate AI agents.
- Ability to test and deploy open-source LLMs from Hugging Face (Meta Llama 3.1, BLOOM, Mistral AI, etc.).
- Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences.
- Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential.
- Strong foundation in the data analytics services offered by Google or Azure (BigQuery/Synapse).
- Hands-on experience with ML platforms such as GCP Vertex AI, Azure AI Foundry, or AWS SageMaker.
- Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.

Preferred Technical & Functional Skills
- Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders.
- Ability to work independently with minimal supervision, and escalate when needed.

Key behavioral attributes/requirements
- Ability to mentor junior developers.
- Ability to own project deliverables, not just individual tasks.
- Understand business objectives and functions to support data needs.
#KGS

QUALIFICATIONS
This role is for you if you have the below:
Educational Qualifications: PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs, and IIITs (research scholars who have submitted their thesis).
Work Experience: 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals.
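Since this role also asks for strong PyTorch fundamentals, here is a minimal training-loop sketch on synthetic data; the architecture, data, and sizes are placeholders chosen only to show the optimizer/loss/backpropagation pattern.

```python
# A minimal PyTorch training loop on synthetic data (sizes and layers are placeholders).
import torch
from torch import nn

X = torch.randn(512, 32)          # 512 samples, 32 features
y = torch.randint(0, 2, (512,))   # binary labels

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass
    loss.backward()               # backpropagate gradients
    optimizer.step()              # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```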
Posted 1 week ago
0 years
0 Lacs
India
Remote
Company Description
Banthry is an AI-powered legal companion designed to revolutionize the legal industry by automating research, drafting, opinion generation, and case management. Our proprietary domain-specific AI models streamline workflows, enabling legal professionals to focus on strategic decision-making and client engagement. Banthry aims to transform how legal professionals operate, enhancing efficiency and productivity.
Role Description
This is a remote role for an Artificial Intelligence Intern (Full Stack). The intern will be responsible for assisting in developing and implementing AI models, working on various agentic projects, writing and testing code, and working on the frontend and backend. Day-to-day tasks will include conducting data analysis, applying machine learning techniques, solving complex problems, building RAG pipelines and AI agents (a minimal retrieval sketch follows this posting), and improving existing systems.
Qualifications
- Strong foundation in computer science and programming
- Proficient in AI tools like Google CLI, Cursor, Vertex, Google Studio, Claude, etc.
- Must be able to independently build the frontend and backend (use of AI tools is a must for high efficiency and productivity)
- Knowledge of and prior experience with fine-tuning LLMs, building RAG pipelines, and agentic AI
- Ability to work in a high-pressure, short-deadline environment
- Excellent written and verbal communication skills
- Experience or knowledge in the legal industry is a plus
- Pursuing or completed a Bachelor's degree in Computer Science, Data Science, or a related field
Stipend and Work Schedule
- Stipend: up to Rs. 12,000
- Work days: Monday to Saturday (5-6 hours per day)
- 2-month internship
- Performance-based PPO
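To make the "building RAG pipelines" expectation concrete, here is a minimal retrieval step using the sentence-transformers library; the documents, model name, and legal snippets are illustrative placeholders only, not part of the posting.

```python
# A minimal retrieval sketch for a RAG pipeline: embed documents, embed the query,
# and pick the most similar passage to feed to an LLM. All content here is illustrative.
from sentence_transformers import SentenceTransformer, util

docs = [
    "A power of attorney authorises one person to act on behalf of another.",
    "Limitation periods restrict how long after an event a suit may be filed.",
    "Stamp duty is payable on certain instruments at the time of execution.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small example embedding model
doc_emb = model.encode(docs, convert_to_tensor=True)

query = "How long do I have to file a suit?"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]     # cosine similarity to each document
best = int(scores.argmax())
print(f"Context passage: {docs[best]} (score={scores[best]:.2f})")
```

In a full pipeline, the selected passage would be inserted into the LLM prompt as grounding context before generation.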
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Salary: As per experience.
Experience: 3-5 years in machine learning or AI engineering.
Job Summary: We are seeking a passionate and skilled AI/ML Engineer to join our team to design, develop, and deploy machine learning models and intelligent systems. You will work closely with software developers and product managers to integrate AI solutions into real-world applications.
Key Responsibilities:
- Design, develop, and train machine learning models (e.g., clustering, NLP).
- Basic understanding of AI algorithms and underlying models (RAG/CAG).
- Build scalable pipelines for data ingestion, preprocessing, and model deployment (a minimal pipeline sketch follows this posting).
- Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Hugging Face.
- Collaborate with cross-functional teams to define business problems and develop AI-driven solutions.
- Monitor model performance and ensure continuous learning and improvement.
- Deploy ML models using Docker, CI/CD, and cloud services like AWS/Azure/GCP.
- Stay updated with the latest AI research and apply best practices to business use cases.
Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Strong knowledge of Python and ML libraries (pandas, NumPy, etc.).
- Experience with NLP, Computer Vision, or Recommendation Systems is a plus.
- Familiarity with model evaluation metrics and handling bias/fairness in ML models.
- Good understanding of REST APIs and cloud-based AI solutions.
- Experience with Generative AI (e.g., OpenAI, LangChain, LLM fine-tuning).
- Experience with any of Azure SQL, Databricks, and Snowflake.
Preferred Skills:
- Experience with vector databases, semantic search, and agentic frameworks.
- Experience using platforms like Azure AI, AWS SageMaker, or Google Vertex AI.
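As a sketch of the preprocessing-plus-model pipeline idea listed above, the snippet below chains TF-IDF features with a classifier in scikit-learn; the sample texts and labels are synthetic and exist only to show the pattern.

```python
# A minimal scikit-learn pipeline: text preprocessing (TF-IDF) feeding a classifier.
# The training data is synthetic and exists only to show the Pipeline pattern.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["refund not received", "love the product", "delivery was late", "great support"]
labels = [0, 1, 0, 1]  # 0 = complaint, 1 = praise

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),     # preprocessing step
    ("model", LogisticRegression()),  # estimator step
])
clf.fit(texts, labels)
print(clf.predict(["package arrived damaged"]))  # expected to lean towards class 0
```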
Posted 1 week ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP BTP Integration Suite, SAP FI S/4HANA Accounting, solid experience in Corporate Tax regimes, including Sales & Use
Good-to-have skills: No Function Specialty
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing project progress, coordinating with teams, and ensuring successful application development. We are seeking a Senior Tax Technology Specialist to join our team. This role requires a seasoned professional with extensive experience in tax engines, indirect tax management (particularly Vertex O Series), and a strong foundation in SAP systems. The ideal candidate will manage complex tax projects across Sales and Use Tax, helping to streamline tax processes and maintain global compliance.

Roles & Responsibilities:
- Implement new requirements and maintain the Vertex O Series system, focusing on the global indirect tax solution.
- Configure and manage Vertex tax rules, rates, and jurisdictions to ensure precise and compliant tax calculations for all transactions.
- Support mapping updates to tax matrices and conduct end-to-end testing to ensure no regression impacts across jurisdictions (US and OUS).
- Collaborate with IT and finance teams to align tax systems with business needs and compliance requirements.
- Develop and maintain detailed documentation, including SOPs and user guides, for Vertex-related processes.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in SAP integration with Vertex O Series/Sabrix along with SAP FI CO Finance.
- Strong understanding of SAP FI CO Finance.
- Extensive experience with Vertex O Series and familiarity with SAP tax-related solutions.
- Strong knowledge of tax regimes, including Sales & Use, VAT, GST, HST, and Corporate Tax.
- Excellent analytical skills and keen attention to detail.
- Good-to-Have Skills: Experience in BRIM/FICA modules and DRC is beneficial but not mandatory.
- Good-to-Have Skills: Experience in SAP ABAP development, SAP PI/PO, and SAP SD/MM modules.

Additional Information:
- The candidate should have a minimum of 8+ years of experience in SAP integration with Vertex O Series/Sabrix.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education
Posted 1 week ago
14.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Roles & responsibilities
Strategic Leadership & Vision
- Lead and manage a 100-member AI delivery team to ensure successful project delivery.
- Develop and implement AI strategies and solutions in collaboration with Product Leads and Solution Architects.
- Ensure all PODs are aligned with project timelines and organizational objectives, delivering high quality and excellent CSAT.
- Drive vendor teams to meet project timelines and deliverables.
Stakeholder Engagement & Communication
- Collaborate with stakeholders to define project scope, requirements, and deliverables.
- Communicate project status, updates, and issues to stakeholders regularly.
- Resolve conflicts and provide solutions to ensure smooth project execution.
Project Execution & Delivery Oversight
- Monitor project progress and performance (KPIs), ensuring timely and within-budget delivery.
- Manage project budgets, resources, and timelines effectively.
- Identify and mitigate risks to ensure project success.
Team Management & Culture Building
- Provide leadership and guidance to team members.
- Foster a collaborative and innovative work environment.
- Ensure compliance with industry standards and regulations.
Mandatory technical & functional skills
AI & Machine Learning Expertise
- Understanding of supervised, unsupervised, and reinforcement learning; NLP and vision.
- Experience with AI/ML platforms (e.g., Azure ML, AWS SageMaker, Google Vertex AI).
Data Engineering & Analytics
- Proficiency in data pipelines, ETL processes, and data governance.
- Strong grasp of data quality, lineage, and auditability.
- Knowledge of big data tools (e.g., Spark, Hadoop, Databricks).
Cloud & Infrastructure
- Hands-on experience with cloud platforms (Azure preferred in enterprise audit environments).
- Understanding of containerization (Docker, Kubernetes) and CI/CD pipelines.
Audit Domain Knowledge
- Familiarity with audit workflows, risk assessment models, and compliance frameworks.
- Understanding of regulatory standards (e.g., SOX, GDPR, ISO 27001).
Project & Program Management Tools
- Proficiency in tools like JIRA, Confluence, MS Project, and Azure DevOps.
- Experience with Agile, Scrum, and SAFe methodologies.
Strategic Planning & Execution
- Ability to translate business goals into AI project roadmaps.
- Experience in managing multi-disciplinary teams across geographies.
Stakeholder Management
- Strong communication and negotiation skills with internal and external stakeholders.
- Ability to manage expectations and drive consensus.
Risk & Compliance Management
- Proactive identification and mitigation of project risks.
- Ensuring compliance with internal audit standards and external regulations.
Leadership & Team Development
- Proven ability to lead large teams, mentor senior leads, and foster innovation.
- Conflict resolution and performance management capabilities.
Key behavioral attributes/requirements
- Demonstrates the ability to think critically and the confidence to solve problems and suggest solutions.
- Is a quick learner and demonstrates adaptability to change, with strong stakeholder and negotiation skills.
- Should be willing to and capable of delivering under tight timelines, based on business needs, including working on weekends.
- Willingness to work based on delivery timelines and flexibility to stretch beyond regular hours depending on project criticality.
#KGS
Qualifications
This role is for you if you have the below:
Educational Qualifications: B.Tech or M.Tech in CSE
Work Experience: 14+ years of professional relevant experience
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do
Oversee the implementation of detailed technology solutions for clients using company products, outsourced solutions, or proprietary tools/techniques. As a member of the Avalara Implementation team, your goal is to provide world-class service to our customers. You will live by our cult-of-the-customer philosophy and will increase the satisfaction of our customers. As part of the Implementation Team, you will focus on New Product Introductions, with an enhanced focus on customer onboarding. You will work from the Pune office 5 days a week. You will report to the Manager, Implementation (Viman Nagar, Pune).
What Your Responsibilities Will Be
- Lead planning and delivery of multiple client implementations simultaneously.
- Ensure that customer requirements are defined and met within the configuration and the final deliverable.
- Coordinate between internal implementation and technical resources and client teams to ensure smooth delivery.
- Assist clients with developing testing plans and procedures.
- Train clients on all Avalara products and services, including the ERP and e-commerce integrations (called "AvaTax connectors").
- Demo sales and use tax products, including pre-written and custom-built software applications.
- Support customers' success by answering application questions, tracking issues, monitoring changes, and resolving or escalating problems according to company guidelines.
- Provide training and end-user support during customer onboarding.
- Given our clientele is based in the US/UK, be ready to work in shifts as per business requirements.
What You'll Need To Be Successful
- 2-5 years of software implementation within the B2B sector.
- Bachelor's degree (BCA, MCA, B.Tech) from an accredited college or university, or equivalent career experience.
- Experience in implementing ERP solutions.
- Understanding of tax, tax processes, data and systems concepts, and the complex issues related to them.
- Experience in a techno-functional role and the capability to translate requirements into technical configurations.
- Flexibility and a willingness to immerse yourself in the details of projects quickly.
- Personify the Avalara Success Traits: Ownership, Simplicity, Curiosity, Adaptability, Urgency, Optimism, Humility.
Preferred Qualifications
- Install and configure the following ERPs: WooCommerce, Sage 100, Sage Intacct, Dynamics GP, D365 Sales, D365 Business Central, Salesforce Sales Cloud, NetSuite, QuickBooks, along with the ability to explain the various configuration options and demonstrate sales order/invoicing processes.
- Experience with tax automation: lead the implementation of tax engines, returns, and/or exemption certificate systems for Avalara, TaxJar, Vertex, or similar software.
- Knowledgeable in APIs.
How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
What You Need To Know About Avalara
We're defining the relationship between tax and tech.
We’ve already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion dollar business - and we’re not slowing down until we’ve achieved our mission - to be part of every transaction in the world. We’re bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we’ve designed, that empowers our people to win. We’ve been different from day one. Join us, and your career will be too. We’re An Equal Opportunity Employer Supporting diversity and inclusion is a cornerstone of our company — we don’t want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national orientation, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Summary
Position Summary
Consultant - Tax Technology Consulting - Oracle EBS
Do you have a passion to work for US-based clients of Deloitte Tax and transform their current state of tax to the next generation of tax functions? Are you ready to take the next step in your career to find new methods and processes to assist clients in improving their tax operations? Are you ready to fulfill your potential and want to have a significant impact on global initiatives? If the answer to all the above is "Yes," come join the Tax Technology Consulting group in Deloitte India (Offices of the U.S), a service line of Deloitte Tax LLP!
Deloitte Tax Services India Private Limited commenced operations in June 2004. Since then, nearly all of the Deloitte Tax LLP ("Deloitte Tax") U.S. service lines have obtained support through Deloitte Tax in India. Deloitte Tax in India offers you opportunities to gain experience in U.S. taxation, a much sought-after career option.
At Deloitte, we are leading clients through the tax transformation taking place in the marketplace. We offer a broad range of fully integrated tax services and add greater impact to clients by combining technology and tax technical resources to uncover insights and smarter solutions for navigating an increasingly complex global environment.
Work you will do
Increasingly complex tax decisions can have a significant effect, positive or negative, on the future of our clients' business. Our approach combines insight and innovation from multiple disciplines with business and industry knowledge to help our clients excel globally.
Key responsibilities will be:
- Conduct client workshops
- Gather and document tax requirements for the business and perform system fit and gap analysis
- Advise clients on tax department strategy/policy, including tax assessment from a people, process, technology, and governance point of view
- Drive process improvements, redesign client tax departments, and evaluate automation opportunities
- Work on the design and development of tax solutions
- Conduct user acceptance testing to compile comprehensive test scenarios and identify flaws as well as improvements to newly built systems and processes
Qualification And Experience Required
- Full-time Master's/Bachelor's in Engineering/Finance/Accounts or equivalent from a reputed university
- MBA or Chartered Accountant with experience in Finance, Accounting, Taxation, and Auditing
- 3-6 years of experience with Oracle EBS finance modules or Oracle Financials Cloud modules that impact tax.
- Preferred experience with the following Oracle modules:
  E-Business Tax / Oracle ERP Cloud tax module (Withholding Tax application)
  Trading Community Architecture
  Order Management / iStore
  Accounts Receivable
  Purchasing / iExpense
  Accounts Payable (Withholding Tax application)
  Supplier Master / iSupplier Portal
  Fixed Assets
  Project Accounting
  General Ledger
  Oracle BI
- Financial consolidation processes and applications (e.g., Hyperion applications)
- Proficiency in MS Office applications, specifically Excel, Word, PowerPoint, and Access
- Effective communication with strong relationship management skills
- Team player, adhering to the timelines for finishing deliverables
- Strong project management and leadership abilities
- Relentless focus on quality of work products while adhering to completing deliverables on time
Preferred:
- Knowledge of business and tax processes, creating functional specifications, identifying and developing requirements for new reports, preparing test scripts, and providing user training and support
- Indirect Tax (VAT, Sales/Use) and/or Direct Tax (income, provision), withholding tax experience
- Knowledge of country-specific localization capabilities of Oracle EBS and Oracle Fusion applications
- Experience with third-party tax software like Vertex, ONESOURCE, SOVOS (Taxware), Avalara, etc.
- Basic or advanced knowledge of PL/SQL
The Team
Tax Technology Consulting (TTC) - Ever-expanding regulations and increasing scrutiny on multinational corporations have made it necessary for leading-edge tax departments to serve a critical role in the risk management and overall performance of the enterprise. This has resulted in an opportunity for Deloitte to provide even greater value through our tax services, in helping develop tax departments of the future that are strategic, agile, and focused on creating value for the business. Deloitte's TMC group helps our clients' tax departments move forward from their current state to the next generation of tax functions and is dedicated to finding new methods and processes to assist clients in improving their tax operations. Deloitte Tax LLP professionals are aligned worldwide to serve our clients' needs through the TMC group. Deloitte TMC teams include industry, tax, organizational change, technology, and co-sourcing specialists who can help make the necessary connections between our clients' global strategies and the many options for carrying them out in the tax function.
How You Will Grow
At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.
Deloitte supports your progression through a well-defined career path by providing challenging assignments, mentoring, and targeted trainings. Recent postgraduates begin as a consultant.
The career path from there is to senior consultant, then manager, senior manager, and on to a path to director, partner, or principal.
Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.
Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 306439
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Role & Responsibilities
- Utilize Google Cloud Platform and data services to modernize legacy applications.
- Understand technical business requirements and define architecture solutions that align with Ford Motor & Credit Companies patterns and standards.
- Collaborate and work with global architecture teams to define the analytics cloud platform strategy and build cloud analytics solutions within the enterprise data factory.
- Provide architecture leadership in the design and delivery of the new Unified Data Platform on GCP.
- Understand complex data structures in the analytics space as well as interfacing application systems.
- Develop and maintain conceptual, logical, and physical data models.
- Design and guide product teams on subject areas and data marts to deliver integrated data solutions.
- Leverage cloud AI/ML platforms to deliver business and technical requirements.
- Provide architectural guidance for optimal solutions considering regional regulatory needs.
- Provide architecture assessments on technical solutions and make recommendations that meet business needs and align with architectural governance and standards.
- Guide teams through the enterprise architecture processes and advise teams on cloud-based design, development, and data mesh architecture.
- Provide advisory and technical consulting across all initiatives including PoCs, product evaluations and recommendations, security, architecture assessments, integration considerations, etc.
Responsibilities
Required Skills and Selection Criteria:
- Google Professional Solution Architect certification.
- 8+ years of relevant work experience in analytics application and data architecture, with a deep understanding of cloud hosting concepts and implementations.
- 5+ years' experience in data and solution architecture in the analytics space.
- Solid knowledge of cloud data architecture and data modeling principles, and expertise in data modeling tools.
- Experience migrating legacy analytics applications to a cloud platform and driving business adoption of these platforms to build insights and dashboards, with deep knowledge of traditional and cloud data lake, warehouse, and mart concepts.
- Good understanding of domain-driven design and data mesh principles.
- Experience designing, building, and deploying ML models to solve business challenges using Python/BQML/Vertex AI on GCP.
- Knowledge of enterprise frameworks and technologies.
- Strong in architecture design patterns, with experience in secure interoperability standards and methods, architecture tools, and processes.
- Deep understanding of traditional and cloud data warehouse environments, with hands-on programming experience building data pipelines on the cloud in a highly distributed and fault-tolerant manner.
- Experience using Dataflow, Pub/Sub, Kafka, Cloud Run, Cloud Functions, BigQuery, Dataform, Dataplex, etc. (a short BigQuery query sketch follows this posting).
- Strong understanding of DevOps principles and practices, including continuous integration and deployment (CI/CD) and automated testing and deployment pipelines.
- Good understanding of cloud security best practices and familiarity with different security tools and techniques like Identity and Access Management (IAM), encryption, network security, etc.
- Strong understanding of microservices architecture.
Qualifications
Nice to Have
- Bachelor's degree in Computer Science/Engineering, Data Science, or a related field.
- Strong leadership, communication, interpersonal, organizing, and problem-solving skills.
- Good presentation skills with the ability to communicate architectural proposals to diverse audiences (user groups, stakeholders, and senior management).
- Experience in the Banking and Financial Regulatory Reporting space.
- Ability to work on multiple projects in a fast-paced and dynamic environment.
- Exposure to multiple, diverse technologies, platforms, and processing environments.
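As a small illustration of the BigQuery-based analytics work listed above, the sketch below runs an aggregation query with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and authentication via application-default credentials is assumed.

```python
# A minimal sketch of querying BigQuery from Python inside an analytics pipeline.
# Project, dataset, and table names are placeholders; application-default
# credentials are assumed to be configured in the environment.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project ID

query = """
    SELECT region, COUNT(*) AS vehicle_count
    FROM `my-analytics-project.sales_mart.vehicle_sales`   -- hypothetical table
    WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY vehicle_count DESC
"""

for row in client.query(query).result():  # blocks until the query job finishes
    print(f"{row.region}: {row.vehicle_count}")
```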
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India.
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience with software development in one or more programming languages.
- 3 years of experience testing, maintaining, or launching software products.
- 1 year of experience with software design and architecture.
Preferred qualifications:
- 5 years of experience with data structures/algorithms.
- 1 year of experience in a technical leadership role.
- Experience developing accessible technologies.
About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.
Responsibilities
- Participate in, or lead, design reviews with peers and stakeholders to decide amongst available technologies.
- Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).
- Build large-scale data processing pipelines with appropriate quality/reliability checks (a small quality-gate sketch follows this posting).
- Debug large-scale data pipelines.
- Build proper monitoring for both the health of data pipelines and the quality of data.
- Treat access/privacy/compliance as first-class operators for the data pipelines.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
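To ground the "quality/reliability checks" responsibility above, here is a small sketch of a batch-level quality gate a pipeline stage might run before publishing data; the thresholds and field names are illustrative assumptions, not Google's.

```python
# A minimal data-quality gate sketch: reject a batch that is too small, too sparse,
# or contains duplicate keys. Thresholds and field names are illustrative only.
from dataclasses import dataclass


@dataclass
class BatchStats:
    row_count: int
    null_user_ids: int
    duplicate_keys: int


def passes_quality_gate(stats: BatchStats, min_rows: int = 1000) -> bool:
    """Return True only if the batch meets basic completeness and uniqueness checks."""
    if stats.row_count < min_rows:
        return False
    if stats.null_user_ids / max(stats.row_count, 1) > 0.01:  # more than 1% missing IDs
        return False
    return stats.duplicate_keys == 0


print(passes_quality_gate(BatchStats(row_count=50_000, null_user_ids=120, duplicate_keys=0)))
```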
Posted 1 week ago
0.6 years
1 - 2 Lacs
Jaipur
On-site
Snapmint is on a mission of democratizing no/low-cost installment purchases for the next 200 Mn Indians. Of the 300 million credit-eligible consumers in India, less than 30 million actively use credit cards. Snapmint is reinventing credit for the next 200M consumers by providing them the freedom to buy what they want and pay for it in installments without a credit card. In a short period of time, Snapmint has reached over a million consumers in 2200 cities and has powered over 200 crores worth of purchases.
Job Title: Customer Service Executive (Voice/Non-Voice)
Department: Customer Support / Service
Location: Dev Nagar, Tonk Road
Experience Required: 0.6-3 years
Industry: E-commerce / Banking / Customer Service / Call Center
- Minimum 0.6 years of experience in an email/chat/in-call and outbound calling process in any BPO, e-commerce, fintech, or other customer experience organization.
- Handle inbound and/or outbound customer calls professionally.
- For email/chat, typing speed must be between 30-35 wpm.
- Proven customer support experience or experience as a Client Service Representative.
- Strong phone contact handling skills and active listening.
- Escalate unresolved issues to the appropriate departments or higher authorities.
- Follow up with customers when necessary and ensure resolution.
- Graduation and above.
- Excellent verbal/written communication skills and basic computer knowledge.
- Age should be a maximum of 28.
- Work from the office, 6 days a week (roster off).
- Near Tonk Road, Dev Nagar (maximum travel distance should be 10-12 km).
- Tamil, Telugu, or Malayalam language skills are a plus.
Benefits:
- Fixed salary + incentives
- Paid training and supportive team environment
- Career growth opportunities
- Health insurance
- PF included
Candidates with experience from Teleperformance, Vertex Cosmos, Girnar, Dealshare, Innovana Thinklabs, and Dr. ITM will be a plus.
Job Types: Full-time, Permanent
Pay: ₹12,000.00 - ₹22,000.00 per month
Benefits: Health insurance, life insurance, paid sick time, Provident Fund
Schedule: Day shift, fixed shift
Supplemental Pay: Performance bonus
Language: Hindi (Preferred), English (Preferred)
Work Location: In person
Posted 1 week ago
3.0 years
16 - 20 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
3.0 years
16 - 20 Lacs
Noida, Uttar Pradesh, India
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
3.0 years
16 - 20 Lacs
Agra, Uttar Pradesh, India
Remote
Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is Looking for: Join the Team Revolutionizing Procurement Analytics at SenseCloud Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Business Analyst Lead – Generative AI Experience: 7–15 Years Location: Bangalore Designation Level: Lead Role Overview: We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes. Key Responsibilities: 1. GenAI Strategy & Opportunity Identification Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data). Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs like GPT-4 Enterprise). Evaluate ROI and adoption feasibility across departments. 2. Requirements Engineering for GenAI Projects Define and document both functional and non-functional requirements tailored to GenAI systems: Accuracy thresholds (e.g., hallucination rate under 5%) Ethical guardrails (e.g., PII redaction, bias mitigation) Latency SLAs (e.g., <2 seconds response time) Develop prompt engineering guidelines, testing protocols, and iteration workflows. 3. Stakeholder Collaboration & Communication Translate technical GenAI concepts into business-friendly language. Manage expectations on probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review). Use storytelling and outcome-driven communication (e.g., “Automated claims triage reduced handling time by 40%.”) 4. Business Analysis & Process Modeling Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI). Model current and future business processes using BPMN to reflect human-AI collaboration. 5. Tools & Technical Proficiency Hands-on experience with LangChain, LlamaIndex for LLM integration. Knowledge of vector databases, RAG architectures, LoRA-based fine-tuning. Experience using Azure OpenAI Studio, Google Vertex AI, Hugging Face. Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI). 6. Governance & Performance Monitoring Define KPIs for GenAI performance: Token cost per interaction User trust scores Automation rate and model drift tracking Support regulatory compliance with audit trails and documentation aligned with EU AI Act and other industry standards. Required Skills & Experience: 7–10 years of experience in business analysis or product ownership, with recent focus on Generative AI or applied ML. Strong understanding of the GenAI ecosystem and solution lifecycle from ideation to deployment. Experience working closely with data science, engineering, product, and compliance teams. Excellent communication and stakeholder management skills, with a focus on enterprise environments. Preferred Qualifications: Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.ai) Familiarity with compliance and AI regulations (GDPR, EU AI Act). Experience in BFSI, healthcare, telecom, or other regulated industries.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: RocketFrog.ai is an AI Studio for Business, delivering cutting-edge AI solutions across the Healthcare, Pharma, BFSI, Hi-Tech, and Consumer Services industries. We specialize in Agentic AI, Deep Learning models, and AI-driven Product Development to create real business impact. 🚀 Ready to take a Rocket Leap with Science? Role Overview: We are looking for an Agentic AI Engineer with 3 to 5 years of experience in designing, developing, and deploying AI-powered applications. The ideal candidate will have strong hands-on experience in Agentic AI frameworks, Prompt Engineering, and machine learning, with a keen interest in building intelligent systems that drive business outcomes. Key Responsibilities: Collaborate closely with business analysts to understand requirements and translate them into actionable AI agent workflows. Design and implement Agentic AI agents that solve complex business problems, ensuring they are scalable, efficient, and aligned with business goals. Build and optimize end-to-end AI systems, focusing on seamless automation, intelligent decision-making, and process improvements. Drive the development of intelligent workflows by integrating AI agents with existing enterprise systems and platforms. Lead the iterative improvement of AI-driven applications, constantly refining the models to enhance performance and business outcomes. Participate in client discussions to understand their challenges, propose AI-powered solutions, and ensure successful deployment and adoption. Continuously evaluate and improve the AI system’s accuracy, reliability, and responsiveness in real-world business environments. Contribute to the continuous learning culture by staying up-to-date with the latest AI research, tools, and frameworks. Required Skills & Expertise: AI/ML Engineering: Hands-on experience in building AI-driven applications with a focus on intelligent workflows and decision-making. Agentic AI Frameworks: Experience with frameworks such as LangGraph or LiveKit to design and develop Agentic AI solutions. LLM and Prompt Engineering: Expertise in optimizing interactions with large language models through advanced prompt engineering techniques. MCP & A2A: Proficiency with the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication frameworks. RAG (Retrieval-Augmented Generation): Ability to apply RAG techniques to enhance AI response generation and improve model performance. System Architecture: Strong knowledge of building scalable architectures using tools like FastAPI, Uvicorn, and Celery. AI/ML Platforms: Experience with platforms like Hugging Face for applying pre-trained models to business applications. Communication Skills: Ability to communicate complex AI/ML concepts clearly to both technical and non-technical stakeholders. Preferences: Working knowledge of Amazon Bedrock, Google Vertex AI, or Microsoft Azure AI Services. Mathematical Foundation: Strong foundation in data structures, probability & statistics, and machine learning. Required Background: Bachelor’s or Master’s degree in Computer Science, IT, Data Science, or AI. 3 to 5 years of work experience as an AI Engineer. Why Join RocketFrog.ai? 🚀 Be part of the growth journey of RocketFrog.ai! Work in an AI-driven company that is making a real business impact. Collaborate with leading AI experts and work on cutting-edge technologies. Enjoy a dynamic, fast-paced environment with career growth opportunities. 👉 If you are passionate about AI & innovation, let’s connect!
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. We are looking for a Site Reliability Engineer (SRE) with a strong background in Google Cloud Platform (GCP) and Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.). The ideal candidate will be responsible for ensuring the reliability, performance, and scalability of our on-premises and cloud-based systems, along with a focus on reducing Google Cloud costs. What You’ll Do Work in a DevSecOps environment responsible for building and running large-scale, massively distributed, fault-tolerant systems. Work closely with development and operations teams to build highly available, cost-effective systems with extremely high uptime metrics. Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot issues. Create new tools and scripts designed for auto-remediation of incidents and establishing end-to-end monitoring and alerting on all critical aspects. Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management. What Experience You Need BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required 2-5 years of experience in software engineering, systems administration, database administration, and networking. 1+ years of experience developing and/or administering software in the public cloud Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives. Experience in languages such as Python, Bash, Java, Go, JavaScript, and/or Node.js Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.) Proficiency with continuous integration and continuous delivery tooling and practices Cloud Certification Strongly Preferred What Could Set You Apart You have experience designing, analyzing, and troubleshooting large-scale distributed systems. You take a systems problem-solving approach, coupled with strong communication skills and a sense of ownership and drive You have experience managing Infrastructure as Code via tools such as Terraform or CloudFormation You are passionate about automation with a desire to eliminate toil whenever possible You’ve built software or maintained systems in a highly secure, regulated, or compliant industry You thrive in and have experience and passion for working within a DevOps culture and as part of a team We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress.
As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 week ago
2.0 years
0 Lacs
Mohali district, India
On-site
Location: Mohali, Punjab Job Type: Full-Time Exp: Minimum of 2 years of experience in advanced AI development. About RChilli RChilli is a leader in AI-driven HR technology, powering next-generation recruitment solutions globally. We thrive on innovation, agility, and a mission to revolutionize the way HR teams work with intelligent automation. As we expand our capabilities in Agentic AI, we are looking for a passionate technologist to lead and drive this initiative. Position Summary We are urgently seeking a hands-on, visionary professional to lead the development and deployment of Agentic AI systems. This role is central to our next phase of AI innovation and requires deep technical acumen in building autonomous AI agents integrated with leading cloud platforms. Key Responsibilities Design and Architect Agentic AI solutions aligned with business goals. Lead the End-to-End Development of AI agents, from ideation through production deployment. Integrate multi-agent systems across cloud environments including AWS, Google Cloud, and Azure. Ensure scalable, secure, and reliable deployments of AI systems. Collaborate with Product, Engineering, and DevOps teams to maintain high availability and performance of AI solutions. Stay ahead of AI trends to introduce cutting-edge innovations into the product lifecycle. Technical Requirements Proven track record in designing and deploying Agentic AI systems. Proficiency with cloud-native agent development platforms: AWS Bedrock Google Vertex AI Agent Builder Azure AI Studio & Azure AI Agent Service Deep understanding of cloud architecture, APIs, serverless frameworks, and deployment strategies. Familiarity with LLMs, prompt engineering, and orchestrating autonomous agents for complex tasks. Strong programming background (e.g., Python, Node.js) and experience with model fine-tuning and orchestration tools. Nice to Have Experience with tools such as LangChain, AutoGen, CrewAI, or similar agent frameworks. Background in MLOps and CI/CD for AI systems. Contributions to open-source AI agent frameworks. What We Offer A chance to lead a frontier role in cutting-edge AI development. Work with a global team of innovators. Competitive compensation aligned with market standards.
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title - Indirect Tax Manager/Senior Manager - S&C GN-CFO&EV Management Level: 07-Manager/06-Senior Manager Location: Gurgaon, Mumbai, Bangalore, Pune, Hyderabad Must have skills: OneSource, Vertex, or Sabrix implementation Good to have skills: Avalara, Indirect Tax functional experience Experience: 8+ years Educational Qualification: MBA (Finance) or CA or CMA Job Summary: Identify opportunities by building your own network within the firm to drive business development activities. Lead project delivery, client conversations, and pitch proposals, and manage stakeholders on the project, both internal and external. Prepare business cases and provide solution options, project plans, estimates, staffing requirements, and execution approach for tax opportunities to the stakeholders. Lead a team of experienced resources and guide members on project execution as per timelines. Lead the solution design and implementation aspects of engagement(s), ensuring high quality within constraints of time and budget. Coordinate with client organizations and work towards maintaining and enhancing effective client relationships. Be responsible for performance management of resources, support recruitment, and drive other people initiatives including training. Develop key thought leadership material on the tax function or other related transformation projects. Roles & Responsibilities: Leadership skills to boost efficiency and productivity of the team Ability to collaborate with geographically dispersed teams Ability to solve complex business problems and deliver client delight Strong writing skills to build points of view on current industry trends Good analytical and problem-solving skills with an aptitude to learn quickly Excellent communication, interpersonal, and presentation skills Cross-cultural competence with an ability to thrive in a dynamic consulting environment Professional & Technical Skills: MBA from a Tier-1 B-school. CA or CPA 8+ years of work experience, preferably in financial areas such as order to cash, source to pay, and record to report with tax relevance Must have at least 3 full-lifecycle implementations of Enterprise Resource Planning (ERP) or tax technology: Tax Type - VAT, GST, SUT, WHT, Digital Compliance Reporting ERP - SAP or Oracle Tax Technologies - Vertex O Series, OneSource, SOVOS Tax add-on tools - Vertex Accelerator, OneSource Global Next, LCR-Dixon Deep understanding of multiple tax types and business processes Must have experience in handling a team of 5-10 resources independently Experience in digital reporting, compliance, and e-invoicing solutions Exposure to working in a globally distributed workforce environment, both onshore and offshore Additional Information: An opportunity to work on transformative projects with key G2000 clients Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies. Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional. Personalized training modules to develop your strategy & consulting acumen to grow your skills, industry knowledge, and capabilities Opportunity to thrive in a culture that is committed to accelerating equality for all. Engage in boundaryless collaboration across the entire organization. About Our Company | Accenture
Posted 1 week ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title : AI/ML Engineer Job Summary We are seeking a talented and passionate AI/ML Engineer with at least 3 years of experience to join our growing data science and machine learning team. The ideal candidate will have hands-on experience in building and deploying machine learning models, data preprocessing, and working with real-world datasets. You will collaborate with cross-functional teams to develop intelligent systems that drive business value. Key Responsibilities Design, develop, and deploy machine learning models for various business use cases. Analyze large and complex datasets to extract meaningful insights. Implement data preprocessing, feature engineering, and model evaluation pipelines. Work with product and engineering teams to integrate ML models into production environments. Conduct research to stay up to date with the latest ML and AI trends and technologies. Monitor and improve model performance over time. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Minimum 3 years of hands-on experience in building and deploying machine learning models. Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost. Experience with training, fine-tuning, and evaluating ML models in real-world applications. Proficiency in Large Language Models (LLMs), including experience using or fine-tuning models like BERT, GPT, LLaMA, or open-source transformers. Experience with model deployment, serving ML models via REST APIs or microservices using frameworks like FastAPI, Flask, or TorchServe. Familiarity with model lifecycle management tools such as MLflow, Weights & Biases, or Kubeflow. Understanding of cloud-based ML infrastructure (AWS SageMaker, Google Vertex AI, Azure ML, etc.). Ability to work with large-scale datasets, perform feature engineering, and optimize model performance. Strong communication skills and the ability to work collaboratively in cross-functional teams. (ref:hirist.tech)
Posted 1 week ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Experience Required 12 - 20 years Job Description About Axis My India Axis My India Limited is India’s leading consumer data intelligence company, committed to enabling data-driven decision-making. It has launched the "A" App, a people empowerment platform designed to enhance the lives of a billion citizens by providing access to authentic information and practical solutions for everyday needs. The app follows a PHYGITAL model, leveraging Axis My India’s vast network of 5,000+ locations across 700 districts. Powered by Google Cloud and Google Gen AI, it continuously learns and improves to deliver better user experiences. The company is spearheaded by Mr. Pradeep Gupta, a leading name in market research and India’s top psephologist. Job Description The Senior Technical Project Delivery Manager will lead the end-to-end execution of the People Empowerment Platform (Super-App), ensuring timely delivery, technical excellence, and alignment with business goals. This role demands strong project management, cloud-native architecture expertise (GCP), and Agile delivery leadership. The manager will collaborate with cross-functional teams, vendors, and partners to deliver a scalable, secure, and AI-powered platform. Requirements Project & Delivery Management Lead Agile-based project execution across scope, schedule, cost, and quality. Manage CI/CD pipelines, DevOps practices, and performance monitoring. Ensure timely delivery of milestones and risk mitigation. Cloud-Native Architecture & Development Oversee microservices deployment on GCP using Kubernetes (GKE). Optimize AI/ML workflows, data pipelines, and real-time analytics. Ensure secure, scalable, and high-availability architecture. Technology Oversight Manage full-stack delivery: Flutter (frontend), Node.js (backend), GCP services. Ensure seamless integration of tools like BigQuery, Vertex AI, Looker, and Jenkins. Drive observability, automation, and performance optimization. Vendor & Stakeholder Management Collaborate with system integrators, Google Cloud partners, and consultants. Conduct technical due diligence and negotiate SLAs and contracts. Align internal and external teams for smooth execution. Security & Compliance Enforce SSDLC, cloud security, and data privacy standards. Implement disaster recovery and failover strategies. Monitor compliance with GDPR and other regulatory frameworks. Leadership & Process Improvement Mentor technical teams and promote Agile, DevOps, and innovation. Continuously improve delivery processes and cost efficiency. Contribute to strategic planning and platform evolution. Key Attributes Strong leadership and stakeholder management Deep expertise in GCP, microservices, and DevOps Analytical mindset with a focus on performance and scalability High accountability, ownership, and collaborative spirit CI/CD automation, Kubernetes orchestration, and DevOps pipeline optimization. Real-time AI/ML-driven insights using BigQuery, Looker, and Vertex AI. Experience And Qualification Strong understanding of the industry and familiarity with research products and services. Excellent communication and interpersonal skills, with the ability to build rapport and negotiate effectively with stakeholders at all levels. Strategic thinker with a demonstrated ability to analyze market trends, identify opportunities, and develop targeted strategies to drive growth.
Education B.E./B.Tech in Computer Science or IT (MBA preferred) 8+ years in technical project management and cloud-based delivery 5+ years of hands-on development experience Certifications preferred: PMP, SAFe, GCP, ITIL Benefits Competitive salary and benefits package Opportunity to make significant contributions to a dynamic company Walking distance from Chakala metro station, making commuting easy and convenient At Axis My India, we value discipline and focus. Our team members wear the brand on their sleeves, adhere to a no-mobile policy during work hours, and work from our office with alternate Saturdays off. If you thrive in a structured environment and are committed to excellence, we encourage you to apply.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India . Minimum qualifications: Bachelor’s degree or equivalent practical experience. 5 years of experience with software development in one or more programming languages. 3 years of experience testing, maintaining, or launching software products. 1 year of experience with software design and architecture. Preferred qualifications: 5 years of experience with data structures/algorithms. 1 year of experience in a technical leadership role. Experience developing accessible technologies. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Participate in, or lead design reviews with peers and stakeholders to decide amongst available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Build large-scale data processing pipelines with appropriate quality/reliability checks. Debug large-scale data pipelines. Build proper monitoring for both the health of data pipelines and quality of data. Treat access/privacy/compliance as first class operators for the data pipelines. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
🚀 Why Headout? We're a rocketship: 9-figure revenue, record growth, and profitable With $130M in revenue, guests in 100+ cities, and 18 months of profitability, Headout is the fastest-growing marketplace in the travel industry, and we're just getting started. We've raised $60M+ from top-tier investors and are building a durable company for the long term — because that's what our mission needs and deserves. We're growing, profitable and nowhere near done. What we do is important In an increasingly digital world, there is a desperate need to augment our human experience by getting us to interact with the real world around us and the people in it. At Headout, our mission is to be the easiest, fastest, and most delightful way to head out to a real-life experience — from immersive tours to museums to live events and everything in between. Why now? The foundation is strong. The opportunity ahead is even bigger. We've hit profitability, built momentum, and proven the model — but there's so much more to build. If you're looking to join a company where the trajectory is steep and your impact is real, this is the moment. Our culture Reinventing the travel industry isn't easy, but that's the fun part. We care deeply about ownership, craft, and impact, and we're here to do the best work of our careers. We won't pretend like it's for everyone but if you're a builder who loves solving tough problems, you'll feel right at home. Read more about our unique values here: https://bit.ly/HeadoutPlaybook 👩💻 The Role As a Machine Learning Engineer at Headout, you will play a pivotal role in developing AI-powered solutions that enhance our platform and create exceptional experiences for travelers worldwide. At Headout, we firmly believe that intelligent algorithms can transform how people discover and engage with travel experiences. Collaborating closely with multifaceted teams across the organization, you'll design, develop, and deploy sophisticated ML models across various applications including recommendations, search optimization, pricing, and operational efficiency. 🌟 What makes the role stand out? Global Impact: Your algorithms will serve millions of travelers across 190+ countries, optimizing experiences throughout the customer journey - from discovery and decision-making to post-purchase engagement. Diverse AI Applications: Work on a variety of machine learning projects spanning different domains. One day you might be improving our recommendation engine, the next you could be optimizing search rankings or developing forecasting models for operational planning. End-to-End Ownership: Take ML solutions from ideation to production. You'll help identify opportunities where ML can add value, design solutions, implement models, and measure real-world impact. Data-Rich Environment: Leverage rich, multi-dimensional data from user behavior, transaction patterns, content characteristics, and operational metrics to build comprehensive models that drive meaningful outcomes. Tangible Results: See the concrete impact of your work through key business metrics. Your models will contribute to increased conversion rates, enhanced user engagement, optimized operations, and improved customer satisfaction. Technical Innovation: As machine learning and AI technologies evolve, you'll be at the forefront of evaluating and implementing new approaches that keep Headout competitive in a dynamic industry. 
🎯 What skills you need to have You have a minimum of 4 years of experience in machine learning engineering across different applications such as recommendations, classification, prediction, or natural language processing A strong foundation in ML fundamentals and techniques is essential. Proficiency in Python is a must, with experience in frameworks like TensorFlow, PyTorch, scikit-learn, or similar ML libraries (Hugging Face, XGBoost, etc.) You have practical experience taking ML models from development to production, including data preprocessing, feature engineering, model training, evaluation, and deployment Experience with A/B testing and experimentation frameworks to measure and validate the impact of ML solutions You possess strong problem-solving skills and the ability to translate business requirements into effective technical implementations Your communication skills enable you to work effectively with cross-functional teams and explain complex technical concepts to non-technical stakeholders Familiarity with large-scale data processing technologies (Spark, Kafka, Flink, etc.) and cloud-based ML services (Vertex AI, SageMaker, etc.) is a plus EEO statement At Headout, we don't just accept differences — we celebrate them, support them, and thrive on them for the benefit of our employees, our partners, and the community at large. Headout provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, or disability. During the interview process, if you need assistance or an accommodation due to a disability, you may contact the recruiter assigned to your application or email us at life@headout.com. Privacy policy Please note that once you apply for this job profile, your personal data will be retained for a period of one (1) year. Headout shall process this data for recruitment purposes only. Once the relevant job profile is filled or once the time period of one (1) year from the date of the job application has passed, whichever is later, Headout shall either delete your data or inform you that it shall keep it in its database for future roles. In compliance with the relevant privacy laws, you have the right to request access to your personal data, to request that your personal data be rectified or erased, and to request that the processing of your personal data be restricted. If you have any concerns or questions about the way Headout handles your data, you can contact our Data Protection Officer for more information.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India . Minimum qualifications: Bachelor’s degree or equivalent practical experience. 5 years of experience with software development in one or more programming languages. 3 years of experience testing, maintaining, or launching software products. 1 year of experience with software design and architecture. Preferred qualifications: 5 years of experience with data structures/algorithms. 1 year of experience in a technical leadership role. Experience developing accessible technologies. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Participate in, or lead design reviews with peers and stakeholders to decide amongst available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Build large-scale data processing pipelines with appropriate quality/reliability checks. Debug large-scale data pipelines. Build proper monitoring for both the health of data pipelines and quality of data. Treat access/privacy/compliance as first class operators for the data pipelines. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago