
1334 Vertex Jobs - Page 31

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: We are seeking an experienced and highly motivated Senior Data Scientist with a strong background in Generative AI, agentic systems, and end-to-end deployment of AI solutions. This role demands proficiency not only in building intelligent systems but also in operationalizing them using MLOps best practices and cloud-based deployment strategies. The ideal candidate will thrive at the intersection of data science, software engineering, and infrastructure.

Key Responsibilities: Design, develop, and deploy advanced AI-powered chatbots and agent-based solutions using frameworks like LangChain and LangGraph. Build and maintain backend APIs and user-facing interfaces using FastAPI and Streamlit. Leverage Snowflake for data integration, transformation, and analytics. Implement and manage MLOps pipelines for model training, validation, versioning, and monitoring. Deploy models and applications on AWS and GCP, ensuring scalability, reliability, and security. Use Redis for caching and real-time data access in production-grade applications. Collaborate with cross-functional teams to translate business needs into technical solutions. Prepare and deliver effective presentations to stakeholders and leadership teams. Ensure code quality, testing, and documentation in Python environments.

Required Skills: 5–8 years of hands-on experience in Data Science, with a focus on deployment. Proven experience in chatbot development, agentic AI, and LLM-based frameworks. Strong Python programming skills, including use of PyTorch, LangChain, and LangGraph. Proficient in MLOps practices: CI/CD for ML, model monitoring, version control, and automated retraining. Experience deploying solutions on cloud platforms (AWS, GCP). Backend and app development experience with FastAPI, Streamlit, and Redis. Strong working knowledge of Snowflake and data engineering workflows. Excellent communication and presentation skills, with the ability to convey technical topics to non-technical audiences. Experience with containerization tools like Docker and orchestration with Kubernetes. Exposure to prompt engineering and LLM fine-tuning.

Nice to Have: Familiarity with tools like MLflow, DVC, Vertex AI. Experience working in agile product teams or startup environments.
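As a rough illustration of the FastAPI-plus-Redis pattern this role calls for, here is a minimal sketch of an API endpoint that caches chatbot answers; the route, cache-key scheme, five-minute TTL, and the answer_question() placeholder are assumptions for the example, not details from the posting.

```python
# Illustrative sketch only: a FastAPI endpoint that caches answers in Redis.
import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def answer_question(question: str) -> dict:
    # Placeholder for an LLM/agent call (e.g. a LangChain or LangGraph pipeline).
    return {"question": question, "answer": "stub answer"}


@app.get("/ask")
def ask(question: str):
    key = f"qa:{question}"
    hit = cache.get(key)
    if hit:                      # serve the cached answer if present
        return json.loads(hit)
    result = answer_question(question)
    cache.setex(key, 300, json.dumps(result))  # cache for 5 minutes
    return result
```

Caching at the API layer like this keeps repeated questions from re-invoking the model, which is usually the dominant latency and cost driver in such services.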

Posted 1 month ago

Apply

20.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

DGM/General Manager - F&A (Vertex Homes, Hyderabad)
Qualification: MBA Finance / CA / ICWAI. Experience: 20 years and above. Exposure: knowledge of real estate is mandatory. Location: Hyderabad - Madhapur (Telangana).

Job Description - Role & Responsibilities: Knowledge of balance sheets and TDS returns. Coordinate with the IT Department at the time of scrutiny. Knowledge of CMA data to be provided to banks. Preparing cash flow statements and other reports for submission to banks. Coordinate with internal and external auditors for finalization of books. Preparation of statements required for MIS data. ROC filing knowledge. Bills verification; payments to vendors and contractors. Coordinate with sites for POs, work orders, TDS deductions, etc. Proficient in tax regulations, including income tax compliance. Arranging monthly meetings with departmental heads. Coordinating with the CRM team for collection follow-up. (ref:iimjobs.com)

Posted 1 month ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.

Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments.

Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus.

What We Offer: Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package - no constraints for the right candidate. (ref:hirist.tech)
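A minimal sketch of the kind of MLOps step this role covers: tracking a training run with MLflow, assuming a tracking server or local store is already configured. The experiment name, model, and parameters are placeholders, not anything specified in the posting.

```python
# Log parameters, a metric, and the model artifact for one training run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-rf")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                 # record hyperparameters
    mlflow.log_metric("accuracy", acc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the trained model artifact
```

Runs logged this way can later be compared in the MLflow UI and promoted through a model registry, which is the versioning and monitoring workflow the posting alludes to.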

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Senior Generative AI Engineer. Location: Chennai (on-site). Exp: 3 - 5 years.

FarmwiseAI is a leading Geospatial AI company based in Chennai, specializing in AI-driven agriculture solutions that enable data-driven decision-making for governments, lenders, and businesses. Founded in 2020, we deliver real-time advisory, automated land mapping, and crop-monitoring products at scale to foster sustainable development. As an AI-first organization, we embed AI assistance across our entire product lifecycle, from brainstorming and architecture to testing and deployment, to empower every team member to leverage AI in their day-to-day work.

We are hiring a Generative AI Engineer to build, deploy, and optimize multimodal AI services across text, speech, and vision. You'll work on RAG, synthetic data generation, agent workflows, and integrate STT/TTS/OCR with scalable backend systems.

Generative Pipelines: Design applications for RAG, CAG, text classification, summarization, image/video generation, OCR, and synthetic data generation.
Multimodal Integration: Work with STT, TTS, IVR, OCR, and vision inputs to enable seamless AI interactions.
AI Agent Workflows: Develop modular, multi-step orchestrations for document, conversational, and data-based user journeys.
Containerization & Deployment: Collaborate with DevOps to containerize services, manage Kubernetes orchestration, and implement CI/CD for agile delivery.
Observability: Instrument services using OpenTelemetry, Prometheus, and logging tools to ensure SLO-driven production reliability.
Collaboration: Work cross-functionally with product, data science, and frontend teams to define APIs (REST/GraphQL) and ensure smooth integration.
Documentation & Mentorship: Participate in architecture reviews, write clear documentation, and mentor junior engineers and interns.

Bachelor's/Master's in Computer Science, Data Science, IT, or a related field. 2 - 3 years of experience building AI/ML products in Python. Must be proficient in AI-first coding tools like Claude Code, Cursor, Roocode, etc. Proven experience in deploying GenAI applications and agents in production. Strong hands-on with vector search, embedding-based retrieval, STT, TTS, OCR/vision. Familiarity with Docker, Kubernetes, frontend development, and CI/CD workflows. Strong debugging, performance tuning, and cost-optimization skills. Excellent communication, teamwork, and mentoring abilities.

Languages & Tools (mandatory): Python (pandas, scikit-learn, PyTorch, TensorFlow, etc.), Git/GitHub, AWS or GCP.
Generative AI stack (mandatory): LangChain, LlamaIndex, transformers, frontier LLMs (OpenAI, Anthropic, Gemini models) and open models (DeepSeek, Qwen, Llama and Phi models).
Vector stores: FAISS, Pinecone, Qdrant, Weaviate, etc.
Keyword index: Elasticsearch, Apache Solr, Typesense, etc.
Validation frameworks: Pydantic, Instructor, etc.
LLM abstraction libraries: LiteLLM.
Asynchronous or parallel programming: asyncio, joblib, etc.
API frameworks: FastAPI, Flask, etc.
FE prototyping: Streamlit, Gradio, etc.
Agentic AI frameworks (mandatory, at least 1): Google Agents Development Kit, LangGraph, OpenAI Agents SDK, PydanticAI.
Speech & Vision (nice-to-have): OpenAI Realtime Voice API/Whisper; ElevenLabs/Smallest.ai TTS; LlamaParse/JinaAI/Mistral OCR.
Observability & Monitoring (nice-to-have): OpenTelemetry, Prometheus, LangSmith, Pydantic Logfire.
Cloud & DevOps (nice-to-have): Docker, Kubernetes, GitHub Actions.
Domain experience in AgriTech, FinTech, HRTech or EduTech. Experience and profound interest in reading and implementing research papers. Open-source contributions or published evaluation suites. Exposure to managed cloud AI services (Vertex AI, Bedrock, JumpStart). Familiarity with React/Next.js integration. (ref:hirist.tech)
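To illustrate the embedding-based retrieval piece of the stack above, here is a bare-bones sketch using sentence-transformers with FAISS; the model checkpoint, documents, and query are assumptions, and a real RAG service would add chunking, metadata filtering, and an LLM generation step on top.

```python
# Encode a few documents, index them, and retrieve the closest matches to a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Soil moisture sensors report hourly readings.",
    "Crop insurance claims require geotagged field photos.",
    "NDVI time series can flag early signs of crop stress.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])        # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = encoder.encode(["How do I detect crop stress early?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
print([docs[i] for i in ids[0]])
```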

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Greater Lucknow Area

On-site

Job Description: We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities: Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks. Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring. Leverage transfer learning, foundation models, or self-supervised approaches where suitable. Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure. Continuously monitor model performance and implement retraining workflows to ensure accuracy over time. Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems. Write clean, well-documented, and reusable code to support agile experimentation and the long-term platform.

Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Technical Skills: Languages: Expert in Python; good knowledge of R or Java is a plus. ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX. Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe). Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.). Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data. NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred. Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML. Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference. CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies: Strong analytical and systems thinking; able to break down business problems into ML components. Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders. Proven ability to work cross-functionally with designers, engineers, product managers, and analysts. Demonstrated bias for action, rapid experimentation, and iterative delivery of impact. (ref:hirist.tech)
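As a sketch of the transfer-learning approach mentioned above, the snippet below reuses a pretrained ResNet-18 backbone from torchvision (a 0.13+ release is assumed for the weights API) and trains only a new classification head; the class count and dummy batch are placeholders.

```python
# Freeze a pretrained backbone and train only the replacement classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of target classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                               # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)       # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```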

Posted 1 month ago

Apply

10.0 - 14.0 years

35 - 65 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a highly experienced Principal Architect with 10 to 14 years of experience to join our team. The ideal candidate will have expertise in Vertex AI, project stakeholder management, Google Apigee, risk management, delivery management, Cloud Storage, Google Data Studio, and Looker. This role requires domain experience in Utilities and offers a hybrid work model with day shifts.

Responsibilities: Lead the design and implementation of complex architectural solutions using Vertex AI and Google Apigee. Oversee project stakeholder management to ensure alignment with business objectives and successful project delivery. Provide risk management strategies to identify, assess, and mitigate potential risks throughout the project lifecycle. Manage delivery processes to ensure timely and efficient completion of projects. Utilize Cloud Storage solutions to manage and store data securely and efficiently. Develop and maintain data visualizations and reports using Google Data Studio and Looker. Collaborate with cross-functional teams to ensure seamless integration of architectural solutions. Conduct regular reviews and assessments of architectural frameworks to ensure they meet industry standards and best practices. Implement continuous improvement initiatives to enhance the efficiency and effectiveness of architectural processes. Provide technical guidance and mentorship to junior team members. Ensure compliance with regulatory requirements and industry standards in the Utilities domain. Communicate effectively with stakeholders to provide updates on project progress and address any concerns. Drive innovation and stay updated with the latest technological advancements in the field.

Qualifications: Possess a strong background in Vertex AI and Google Apigee with hands-on experience. Demonstrate expertise in project stakeholder management and risk management. Have a proven track record in delivery management and cloud storage solutions. Show proficiency in creating data visualizations and reports using Google Data Studio and Looker. Exhibit domain knowledge and experience in the Utilities sector. Display excellent communication and collaboration skills. Hold a relevant certification in cloud architecture or a related field.

Certifications Required: Google Professional Cloud Architect; Certified Project Management Professional (PMP).

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

This is a remote position. BayApps is looking for an Oracle EBS Financials Functional Analyst to be the focal point for support and enhancement of Oracle EBS Financials business processes. Key activities for this role will include business process refinement, solution design, configuring EBS modules, testing, and end-user support for key Finance modules in a global Oracle environment. The candidate will be a part of the Finance Solutions Delivery organization and will have technical ownership of all aspects, from project implementation to process enhancements to sustaining support.

Responsibilities: Work closely with business stakeholders and users to gather end-user requirements and communicate IT priorities and delivery status to the business units. Development of test scenarios and test cases; orchestrate the execution and test-run validation of functional user testing. Design and development of third-party integrations and operational workflows, plus the development and execution of roll-out strategies, cut-over plans, end-user training and support, and end-user documentation. Understand, communicate, and educate on the complexities, interdependencies, and data flow of business processes across Oracle EBS finance modules, including GL, AP, AR, CM, FA and EBTax. Development of clear functional business requirements/specifications. Troubleshooting production issues through discussion with end users and technical resources, including problem recognition, research, isolation, and resolution steps. Maintain the health and effectiveness of the Oracle platform over time. Take ownership of issues and work with business users and the development team to find resolutions. Provide day-to-day functional support and troubleshooting, including table-level SQL research queries. Drive open and comprehensive communications with key stakeholders, managing their expectations through clear and frequent communications. Maintain and modify configuration, security, and access of Oracle modules. Create and maintain application and process documentation, as well as training materials. Guide and lead testing activities from unit testing to production validation.

Requirements / Qualifications: Minimum of 8 years of experience with Oracle R12.2 Financials modules, including GL, AP, AR, XLA, CM, FA, EBTax, iExpense, AGIS, Advanced Collections. Experience working on Oracle Enterprise Command Centers, Lockbox Payments, and customer e-payments such as Credit Card and ACH. Good understanding of financial tables and SQL technology. Strong subledger accounting knowledge is a must; should be able to analyze and identify root causes of accounting issues and issues arising during period close. Experience with the following modules will be considered a plus: Inventory, Purchasing, OM, Service Contracts, Installed Base. Experience with the following tools is a plus: DOMO, Vertex, OneSource, Pagero, Revpro, GetPaid, CyberSource, Runpayments. Experience with Salesforce is a plus. Must be an effective communicator (written and oral) across all levels of the organization, including users, developers, and management. Must have experience documenting requirements and developing system/user test plans.

Posted 1 month ago

Apply

8.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About Us: Vertex Institute of AI is an early-stage, mission-driven EdTech startup poised to revolutionize how AI education is delivered in India. We believe that empowering students with future-ready skills is the key to unlocking India's demographic dividend. We have a proven product-market fit, a handful of prestigious university clients, and a vision to become the most trusted partner for academic institutions and students across the country. Now, we're looking for a passionate and experienced sales leader to build our growth engine and scale our impact.

What You'll Do (Key Responsibilities): Develop & Execute the Growth Strategy: Own the end-to-end sales strategy for both B2B (universities & K-12 schools) and B2C channels. Define targets, segment the market, and create a scalable roadmap for revenue growth. Lead & Mentor the Sales Team: Lead our existing team of Business Development Executives. Set clear KPIs, provide hands-on coaching, foster a high-performance culture, and be responsible for hiring and training new members as we scale. Drive B2B University Partnerships: Personally lead and oversee the entire B2B sales cycle—from initial outreach and online meetings to onsite presentations and contract negotiations with key university stakeholders (TPOs, Deans, VCs). Pioneer the K9-12 School Vertical: Adapt our value proposition to create and execute a winning GTM strategy for high schools, focusing on their unique pain points of brand differentiation and student outcomes. Build a Predictable Sales Funnel: Implement and manage a CRM to build a robust, predictable sales pipeline. Ensure rigorous tracking of all sales activities and provide accurate forecasting to the leadership. Refine the B2C Strategy: Collaborate closely with the marketing team to convert our workshop attendees into long-term bootcamp customers, optimizing the funnel for higher conversion rates. Be the Voice of the Customer: Act as the primary link between the market and our product/delivery teams, providing invaluable feedback to ensure our programs remain best-in-class and solve real-world problems.

Who You Are (Our Ideal Candidate): An Experienced EdTech Sales Leader: You have 8+ years of experience in sales, with at least 3 years in a leadership role within the EdTech or B2B SaaS space. A Master of B2B Institutional Sales: You have a proven, demonstrable track record of successfully selling high-value solutions or services to universities, engineering colleges, or K9-12 schools in India. You understand the ecosystem, the stakeholders, and the sales cycles. A Natural "Builder" and Leader: You have experience building sales teams and processes from a very early stage. You lead by example, are an exceptional mentor, and know how to motivate a team to punch above its weight. Strategic & Hands-On: You can devise a high-level sales strategy in the morning and jump on a discovery call with a TPO in the afternoon. No task is too big or too small. Exceptional Communicator: You possess outstanding presentation, negotiation, and relationship-building skills. You can command a room of senior academics and build rapport with students with equal ease. Passionate & Mission-Driven: You are genuinely excited by our mission to transform education and student careers through AI. You see this as an opportunity to create lasting impact.

What We Offer: A competitive salary and a highly lucrative performance-based incentive plan. A significant leadership role with the opportunity to shape the future of a high-growth startup. Meaningful ESOPs - as a foundational leader, you will have a real stake in the value you help create. A dynamic, collaborative, and fast-paced work environment. The opportunity to make a tangible difference in the lives of thousands of students. If you are ready to take on the most challenging and rewarding role of your career, we would love to talk to you.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Key Responsibilities: Design, develop, and deploy machine learning models for prediction, recommendation, anomaly detection, NLP, or image processing tasks. Work with large, complex datasets to extract insights and build scalable solutions. Collaborate with data engineers to create efficient data pipelines and feature engineering workflows. Evaluate model performance using appropriate metrics and improve models through iterative testing and tuning. Communicate findings, insights, and model outputs clearly to non-technical stakeholders. Stay up to date with the latest machine learning research, frameworks, and technologies.

Required Skills: Strong programming skills in Python (Pandas, NumPy, Scikit-learn, etc.). Hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM. Experience in building, deploying, and maintaining end-to-end ML models in production. Solid understanding of statistics, probability, and mathematical modeling. Proficiency with SQL and data manipulation in large-scale databases. Familiarity with version control (Git), CI/CD workflows, and model tracking tools (MLflow, DVC, etc.).

Preferred Skills: Experience with cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, Vertex AI). Knowledge of MLOps practices and tools for scalable ML deployments. Exposure to real-time data processing or streaming (Kafka, Spark). Experience with NLP, Computer Vision, or Time Series Forecasting.
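For illustration, a compact scikit-learn baseline of the kind described above: preprocessing and model bundled in a single Pipeline and evaluated with ROC AUC; the bundled dataset stands in for the posting's real data.

```python
# Preprocessing + model in one Pipeline, evaluated on a held-out split.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.3f}")
```

Keeping preprocessing inside the Pipeline ensures the exact same transformations are applied at training and inference time, which is what makes a model like this safe to deploy.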

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Hybrid

Role & responsibilities: SAP FICO Functional Consultant with a combination of Vertex skills. Should have strong E2E implementation experience in HANA. Note: if you have strong hands-on experience in SAP FICO + Vertex, do apply. Location: Bangalore, Mumbai, Gurugram. Exp: 5 to 15 years relevant. Company: BIG5. Type: FTE. If interested and matching, kindly share your updated CV to subashini@sigmacareers.in. Regards, Subashini

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Tata Consultancy Services is hiring Google Data Engineers!
Role: Google Data Engineer. Exp: 3-5 years. Location: Bangalore, Chennai, Hyderabad.

Experience: Experience level of 3 to 5 years in data engineering, data warehousing, or a related field. Experience with dashboarding tools like PLX dashboards and Looker Studio. Experience with building data pipelines, reports, best practices, and frameworks. Experience with design and development of scalable and actionable solutions (dashboards, automated collateral, web applications). Experience with code refactoring for optimal performance. Experience writing and maintaining ETLs which operate on a variety of structured and unstructured sources. Familiarity with non-relational data storage systems (NoSQL and distributed database management systems).

Skills: Strong proficiency in SQL, NoSQL, ETL tools, BigQuery, and at least one programming language (e.g., Python, Java). Strong understanding of data structures, algorithms, and software design principles. Experience with data modeling techniques and methodologies. Proficiency in troubleshooting and debugging complex data-related issues. Ability to work independently and as part of a team.

Responsibilities: Data Pipeline Development: Design, implement, and maintain robust and scalable data pipelines to extract, transform, and load data from various sources into our data warehouse or data lake. Data Modeling and Warehousing: Collaborate with data scientists and analysts to design and implement data models that optimize query performance and support complex analytical workloads. Cloud Infrastructure: Leverage Google Cloud and other internal storage platforms to build and manage scalable and cost-effective data storage and processing solutions. Data Quality Assurance: Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and consistency of data. Performance Optimization: Continuously monitor and optimize data pipelines and queries for performance and efficiency. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet their requirements.

Desirable Experience: Cloud Storage or equivalent cloud platforms. Knowledge of BigQuery ingress and egress patterns. Experience in writing Airflow DAGs. Knowledge of Pub/Sub, Dataflow, or any declarative data pipeline tools using batch and streaming ingestion. Other GCP services: Vertex AI.

Interested candidates can apply!
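A small sketch of what "writing Airflow DAGs" against BigQuery can look like, assuming Airflow 2.x and Application Default Credentials for GCP; the project, table, and SQL are hypothetical.

```python
# Daily DAG that runs one BigQuery aggregation via a Python task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_sales():
    client = bigquery.Client()
    sql = """
        SELECT order_date, SUM(amount) AS revenue
        FROM `my-project.sales.orders`   -- hypothetical table
        GROUP BY order_date
    """
    client.query(sql).result()  # wait for the query job to finish


with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_daily_sales", python_callable=load_daily_sales)
```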

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position: Solution Architect. Location: Chennai / Bangalore / Kuala Lumpur. Experience: 8+ years. Employment Type: Full-time.

Job Overview: Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX. With a focus on automating and enhancing media transactions, you’ll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising.

Why Join Us?
● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more.
● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions.
● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful.

What You’ll Do:
● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs.
● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs.
● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets.
● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping.
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance.

What You Bring:
● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics.
● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results.
● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments.
● Excellent problem-solving skills, creativity, and a customer-centric mindset.

Key Responsibilities
1. Solution Design:
○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack.
○ Translate business requirements into scalable and reliable technical solutions.
2. Agile POD-Based Execution:
○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions.
○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution.
○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals.
3. Collaboration and Stakeholder Management:
○ Work closely with product, engineering, and business teams to define technical requirements.
○ Lead technical discussions with internal and external stakeholders.
4. Technical Expertise:
○ Provide architectural guidance and best practices for system integrations, APIs, and microservices.
○ Ensure solutions meet non-functional requirements like scalability, reliability, and security.
5. Documentation:
○ Prepare and maintain architectural documentation, including solution blueprints and workflows.
○ Create technical roadmaps and detailed design documentation.
6. Mentorship:
○ Guide and mentor engineering teams during development and deployment phases.
○ Review code and provide technical insights to improve quality and performance.
7. Innovation and Optimization:
○ Identify areas for technical improvement and drive innovation in solutions.
○ Evaluate emerging technologies to recommend the best tools and frameworks.

Required Skills and Qualifications
● Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
● Proven experience as a Solution Architect or a similar role.
● Expertise in programming languages and frameworks: Java, Angular, Python, C++.
● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras.
● Experience in deploying AI models in production, including optimizing for performance and scalability.
● Understanding of deep learning, NLP, computer vision, or generative AI techniques.
● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization.
● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.).
● Expertise in distributed systems, microservices, and cloud-native architectures.
● Experience in API design, data pipelines, and integration of AI services within existing systems.
● Strong knowledge of databases: MongoDB, SQL, NoSQL.
● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines.
● Hands-on experience with CI/CD pipelines for AI development.
● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC.
● Proven track record of leading AI-driven projects from ideation to deployment.
● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions.
● Familiarity with Agile methodologies, especially POD-based execution models.
● Strong problem-solving skills and ability to design scalable solutions.
● Excellent communication skills to articulate technical solutions to stakeholders.

Preferred Qualifications
● Experience in e-commerce, AdTech or OOH (Out-of-Home) advertising technology.
● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban.
● Certification in cloud technologies (e.g., AWS Solutions Architect).

Tech Stack
● Programming Languages: Java, Python or C++.
● Frontend Framework: Angular.
● Database Technologies: MongoDB, SQL, NoSQL.
● Cloud Platform: AWS.
● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark).
● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI).
● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes.

Share your profile to kushpu@movingwalls.com
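Purely as an illustration of the data layer such a role might design, here is a small MongoDB sketch for OOH ad-inventory documents using pymongo; the connection string, collection, and field names are invented for the example and are not part of the posting.

```python
# Store and query hypothetical ad-inventory documents in MongoDB.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
inventory = client["adtech"]["inventory"]
inventory.create_index([("city", ASCENDING), ("format", ASCENDING)])

inventory.insert_one({
    "site_id": "BLR-0042",
    "city": "Bengaluru",
    "format": "digital_billboard",
    "weekly_impressions": 1_200_000,
    "rate_card_usd": 950,
})

# Find digital billboards in a city, cheapest first
for site in inventory.find({"city": "Bengaluru", "format": "digital_billboard"}).sort("rate_card_usd", 1):
    print(site["site_id"], site["rate_card_usd"])
```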

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

About Us: Evangelist Apps is a UK-based custom software development company specializing in full-stack web and mobile app development, CRM/ERP solutions, workflow automation, and AI-powered platforms. Trusted by global brands like British Airways, Third Bridge, Hästens Beds, and Duxiana, we help clients solve complex business problems with technology. We’re now expanding into AI-driven services and are looking for our first Junior AI Developer to join the team. This is an exciting opportunity to help lay the groundwork for our AI capabilities.

Role Overview: As our first Junior AI Developer, you’ll work closely with our senior engineers and product teams to research, prototype, and implement AI-powered features across client solutions. You’ll contribute to machine learning models, LLM integrations, and intelligent automation systems that enhance user experiences and internal workflows.

Key Responsibilities: Assist in building and fine-tuning ML models for tasks like classification, clustering, or NLP. Integrate AI services (e.g., OpenAI, Hugging Face, AWS, or Vertex AI) into applications. Develop proof-of-concept projects and deploy lightweight models into production. Preprocess datasets, annotate data, and evaluate model performance. Collaborate with product, frontend, and backend teams to deliver end-to-end solutions. Keep up to date with new trends in machine learning and generative AI.

Must-Have Skills: Solid understanding of Python and popular AI/ML libraries (e.g., scikit-learn, pandas, TensorFlow, or PyTorch). Familiarity with foundational ML concepts (e.g., supervised/unsupervised learning, overfitting, model evaluation). Experience with REST APIs and working with JSON-based data. Exposure to LLMs or prompt engineering is a plus. Strong problem-solving attitude and eagerness to learn. Good communication and documentation skills.

Nice-to-Haves (Good to Learn On the Job): Experience with cloud-based ML tools (AWS SageMaker, Google Vertex AI, or Azure ML). Basic knowledge of MLOps and deployment practices. Prior internship or personal projects involving AI or automation. Contributions to open-source or Kaggle competitions.

What We Offer: Mentorship from experienced engineers and a high-learning environment. Opportunity to work on real-world client projects from day one. Exposure to multiple industry domains including expert networks, fintech, healthtech, and e-commerce. Flexible working hours and a remote-friendly culture. Rapid growth potential as our AI practice scales.
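A starter-level example of the kind of AI integration described above: a Hugging Face pipeline wrapped in a plain Python function. The checkpoint shown is simply the library's common default sentiment model; any text-classification model would do.

```python
# Wrap a Hugging Face text-classification pipeline in a reusable helper.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def classify_feedback(texts):
    """Return (label, score) pairs for a batch of customer feedback strings."""
    return [(r["label"], round(r["score"], 3)) for r in classifier(texts)]

print(classify_feedback(["The booking flow was effortless.", "The app kept crashing."]))
```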

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 16 Lacs

Hyderābād

On-site

AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad). Department: Data Science & AI Innovation. Experience Level: Mid–Senior. Reports To: Director of AI / CTO. Employment Type: Full-time.

Job Summary: We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities: Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI. Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude. Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant). Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines. Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation. Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products. Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift. Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies:
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX, FastAPI, LangChain, LlamaIndex.
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere, Hugging Face Hub & Transformers, Google Vertex AI, AWS SageMaker, Azure ML.
Data & Deployment: MLflow, DVC, Apache Airflow, Ray, Docker, Kubernetes, RESTful APIs, GraphQL, Snowflake, BigQuery, Delta Lake.
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus.
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway, Whisper, CLIP, SAM (Segment Anything Model).

Qualifications: Bachelor’s or Master’s in Computer Science, AI, Data Science, or a related discipline. 3–7 years of experience in machine learning or applied AI. Hands-on experience deploying ML models to production environments. Familiarity with LLM prompt engineering and fine-tuning. Strong analytical thinking, problem-solving ability, and communication skills.

Preferred Qualifications: Contributions to open-source AI projects or academic publications. Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin). Knowledge of synthetic data generation and augmentation techniques.

Job Type: Permanent. Pay: ₹734,802.74 - ₹1,663,085.14 per year. Benefits: Health insurance, Provident Fund. Schedule: Day shift. Work Location: In person.
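To make the explainability requirement concrete, here is a small SHAP sketch on a tree model; the bundled diabetes dataset and random-forest regressor are stand-ins for a real production model, and feature rankings will of course differ in practice.

```python
# Compute SHAP values for a tree model and rank features by mean |contribution|.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-row, per-feature contributions

importance = np.abs(shap_values).mean(axis=0)   # global importance proxy
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.2f}")
```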

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

India

On-site

Primary responsibilities:

Parent Relationship Management: Ensure all parents are aptly welcomed and comfortably seated. Effectively address/resolve parents’ enquiries across mediums, i.e. in person, over the phone, by email, via the company website, etc. Escalate all unresolved grievances of the parents to the Principal and the Marketing Team at Vertex for prompt resolution ending in parent delight. Adroitly track all parents’ queries via the organizational query-tracking mechanism (CRM, etc.). Generate parent delight by ensuring high responsiveness and closing the loop with parents on all issues, keeping them updated/engaged during the process of resolution. Efficiently guide parents on school systems and processes and ensure that a repository of updated information is always available. Ensure an ambient and parent-friendly environment in the front office area with assistance from the admin department. Facilitate information on all elements pertaining to a child’s life cycle in the school as well as post-school activities, summer camps, etc.

Sales and Marketing - Pre-sales: Efficiently manage the pre-sales process, i.e. keeping track of all leads, whether from the web, telephone, or walk-ins, and participate in planning activities like society camps or mall activities, pre-school tie-ups, corporate tie-ups, and RWA and parent engagement activities. Handle the entire sales process effectively for potential parents, from first interface to closure, thus positively augmenting conversions from walk-ins to admissions. Contact potential parents, discuss their requirements, and present the VIBGYOR brand in a way that meets the parents’ needs. Should be an active team member in achieving the annual admission targets and objectives in line with the Organization Admission Target Plan.

Job Type: Full-time. Pay: ₹35,000.00 - ₹50,000.00 per month. Benefits: Health insurance, Leave encashment, Provident Fund. Schedule: Day shift. Work Location: In person. Speak with the employer: +91 8591889918

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer who also has Machine Learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often. The team you will be working on is focused on building the Mach1ML platform – an AI/ML enablement platform to democratize Machine Learning across the Ford enterprise (like OpenAI’s GPT, Facebook’s FBLearner, etc.) to deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success.

Responsibilities - What you'll be able to do: As a Software Engineer, you will work on developing features for the Mach1ML platform and support customers in model deployment using the Mach1ML platform on GCP and on-prem. You will follow Rally to manage your work. You will incorporate an understanding of product functionality and customer perspective for model deployment. You will work on cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc.

Position Responsibilities: Work closely with the Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using the Ford Agile Framework. Work with Data Scientists and ML engineers to tackle challenging AI problems. Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems. Help innovate by researching state-of-the-art deployment tools and share knowledge with the team. Lead by example in the use of Paired Programming for cross-training/upskilling, problem solving, and speed to delivery. Leverage the latest GCP, CI/CD, and ML technologies.

Critical Thinking: Able to influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit.
Modelling: Assess and evaluate impacts of missing/unusable data, design and select features, develop and implement statistical/predictive models using advanced algorithms on diverse sources of data, and test and validate models, such as forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc.
Analytics: Leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions and translate analytical findings and recommendations into business strategies or engineering decisions - with statistical confidence.
Data Engineering: Experience with crafting ETL processes to source and link data in preparation for model/algorithm development. This includes domain expertise of data sets in the environment, third-party data evaluations, and data quality.
Visualization: Build visualizations to connect disparate data, find patterns and tell engaging stories. This includes both scientific visualization as well as geographic visualization, using applications such as Seaborn, Qlik Sense/Power BI/Tableau/Looker Studio, etc.

Qualifications - Minimum Requirements we seek: Bachelor’s or master’s degree in computer science engineering or a related field, or a combination of education and equivalent experience. 3+ years of experience in full-stack software development. 3+ years’ experience in cloud technologies and services, preferably GCP. 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multi-variate analysis, neural networks, causal inference, Gaussian regression, etc. 3+ years’ experience with Python, SQL, BQ. Experience in SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc. Experience in training, building and deploying ML and DL models. Experience in HuggingFace, Chainlit, Streamlit, React. Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end. Ability to adapt quickly to open-source products and tools to integrate with ML platforms. Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.). Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Our Preferred Requirements: Master’s degree in computer science engineering or a related field, or a combination of education and equivalent experience. Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy. Proficiency in programming languages such as Python with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools. Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms. Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications. Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP) and ensure scalability, performance, and proactive monitoring. Implement monitoring solutions to track the performance, health, and security of systems and applications, utilizing tools such as Prometheus, Grafana, and other relevant monitoring tools. Conduct code reviews and provide constructive feedback to team members on machine learning-related projects. Knowledge and experience in agentic-workflow-based application development and DevOps. Stay up to date with the latest trends and advancements in machine learning and data science.
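As a small illustration of two of the statistical methods named above (principal component analysis and k-means clustering), the sketch below chains them in a scikit-learn Pipeline on a bundled dataset; the component and cluster counts are arbitrary choices for the example.

```python
# Standardize, project to 2 principal components, then cluster with k-means.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=2)),                          # dimensionality reduction
    ("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0)),
])
labels = pipeline.fit_predict(X)
print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])
print("Explained variance:", pipeline.named_steps["pca"].explained_variance_ratio_)
```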

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dear Job Seekers, Greetings from Voice Bay! We are currently hiring for a Machine Learning Engineer. If you are interested, please submit your application. Please find the JD below for your consideration.

Work Location – Hyderabad. Exp – 4 – 10 Years. Work Mode – 5 days work from office, mandatory.

Key Responsibilities:
- Design, develop, and implement end-to-end machine learning models, from initial data exploration and feature engineering to model deployment and monitoring in production environments.
- Build and optimize data pipelines for both structured and unstructured datasets, focusing on advanced data blending, transformation, and cleansing techniques to ensure data quality and readiness for modeling.
- Create, manage, and query complex databases, leveraging various data storage solutions to efficiently extract, transform, and load data for machine learning workflows.
- Collaborate closely with data scientists, software engineers, and product managers to translate business requirements into effective, scalable, and maintainable ML solutions.
- Implement and maintain robust MLOps practices, including version control, model monitoring, logging, and performance evaluation to ensure model reliability and drive continuous improvement.
- Research and experiment with new machine learning techniques, tools, and technologies to enhance our predictive capabilities and operational efficiency.

Required Skills & Experience:
- 5+ years of hands-on experience in building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering, adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred, R).
- Solid understanding of various machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques like Deep Learning, Natural Language Processing (NLP), or Computer Vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Educational Background:
- Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Engineering, Data Science, or a closely related quantitative field.
- Alternatively, a significant certification in Data Science, Machine Learning, or Cloud AI combined with relevant practical experience will be considered.
- A compelling combination of relevant education and professional experience will also be valued.

Interested candidates can share their resume to the email IDs below: tarunrai@voicebaysolutions.in, hr@voicebaysolutions.in
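A hedged sketch of deploying a trained model to a Vertex AI endpoint with the google-cloud-aiplatform SDK, one of the GCP workflows mentioned above; the project, region, artifact path, serving-container image, and sample instance are placeholders and will differ per environment.

```python
# Upload a trained model artifact and deploy it to a managed Vertex AI endpoint.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="asia-south1")

model = aiplatform.Model.upload(
    display_name="churn-classifier",
    artifact_uri="gs://my-bucket/models/churn/",   # exported model artifacts (placeholder)
    serving_container_image_uri=(
        # placeholder: use a prebuilt or custom serving container for your framework
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)
endpoint = model.deploy(machine_type="n1-standard-2")
print(endpoint.predict(instances=[[0.4, 12, 3, 1]]))   # toy feature vector
```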

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

This posting is for one of our international clients.

About the Role: We’re creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities: As the SME, you’ll partner with learning experience designers and content developers to: Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals. Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries. Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines. Ensure all content reflects current, accurate usage of Google’s multimodal tools and services. Be available during U.S. business hours to support project milestones, reviews, and content feedback. This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.

Essential Tools & Platforms: A successful SME in this role will demonstrate fluency and hands-on experience with the following.
Google Cloud Platform (GCP): Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment); Cloud Functions, Cloud Run (for inference endpoints); BigQuery and Cloud Storage (for handling large image-text datasets); AI Platform Notebooks or Colab Pro.
Google DeepMind Technologies: JAX and Haiku (for neural network modeling and research-grade experimentation); DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations); RLax or TF-Agents (for building and modifying RL pipelines).
AI/ML & Multimodal Tooling: Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting); TensorFlow 2.x and PyTorch (for model interoperability); Label Studio, Cloud Vision API (for annotation and image-text preprocessing).
Data Science & MLOps: DVC or MLflow (for dataset and model versioning); Apache Beam or Dataflow (for processing multimodal input streams); TensorBoard or Weights & Biases (for visualization).
Content Authoring & Collaboration: GitHub or Cloud Source Repositories; Google Docs, Sheets, Slides; screen recording tools like Loom or OBS Studio.

Required skills and experience: Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code. Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot). Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions. Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities. Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic. Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval. Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions. Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic. Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices. Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms. Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents. Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy. Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications. Experience addressing ethical challenges in the deployment and operation of advanced AI systems. Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers. Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects. 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI. Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development. Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field. Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way. Strong programming experience in Python and experience deploying machine learning pipelines. Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred: Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines. Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices. Prior contributions to open-source AI projects or technical community engagement.
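For a flavor of the image-text fusion the course targets, here is a minimal multimodal Gemini call through the Vertex AI SDK; the project, region, model name, and image URI are assumptions, and the exact SDK surface may differ across versions.

```python
# Send an image plus a text prompt to a Gemini model on Vertex AI.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")                       # assumed model name
image = Part.from_uri("gs://my-bucket/samples/field_photo.jpg",   # placeholder GCS URI
                      mime_type="image/jpeg")

response = model.generate_content([
    image,
    "Describe what is shown in this image and list any visible issues.",
])
print(response.text)
```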

Posted 1 month ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

SAP MM Job Description:
Position: SAP Senior MM Consultant

Required Qualifications:
Bachelor’s degree (or equivalent experience), preferably Engineering
Minimum two end-to-end implementation projects, along with experience in Support / Roll-out / Upgrade projects
6 to 9 years of relevant experience

Professional Mandatory Requirements:
Strong knowledge of: Business Processes, Implementation Methodology, Consumables Procurement Process, Imports Procurement, Source Determination, Demand Flow, STO, Automatic A/C Determination, Automatic PO Conversion, Pricing Procedure, Output Determination, Batch Management, Sub-Contracting, Third-Party Sub-Contracting, A/C Entries for Document Posting, Serialization, Consignment, Pipeline, Invoice Planning, Automatic PO Procedures, Evaluated Receipt Settlement, EDI associated with Order/Delivery/Confirmation/Invoice/Material, Master Data Migration with LSMW/BDC

Added Advantage:
Domain experience will be an added advantage.
Experience with taxation components like Vertex will be an added advantage.
Knowledge of ABAP debugging.
SAP MM certification will be an added advantage.
Knowledge of integration modules like WM / QM / PP / SD will be an added advantage.

Roles/Responsibilities:
Strong hands-on configuration experience in Material Management.
Integration with WM / QM / PP / SD modules and with external applications.
Responsible for planning and executing SAP implementation / development / support activities with regard to SAP Material Management, with the ability to lead the team.
Understand client requirements, provide solutions and functional specifications, and configure the system accordingly.
Ability to create presentation/workshop material for the Blueprint phase and present it to the client.
Ability to create process flows in Microsoft Visio for the client’s proposed business processes.
Ability to create Process Definition Document / Design Document (PDD) and Business Process Procedure (BPP) for the solutions provided.
Ability to configure SAP MM and deliver work products / packages conforming to the client’s standards and requirements.

General:
Should have good written & communication skills.
Should be able to handle the client independently.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

SAP MM Job Description:
Position: SAP Senior MM Consultant

Required Qualifications:
Bachelor’s degree (or equivalent experience), preferably Engineering
Minimum two end-to-end implementation projects, along with experience in Support / Roll-out / Upgrade projects
6 to 9 years of relevant experience

Professional Mandatory Requirements:
Strong knowledge of: Business Processes, Implementation Methodology, Consumables Procurement Process, Imports Procurement, Source Determination, Demand Flow, STO, Automatic A/C Determination, Automatic PO Conversion, Pricing Procedure, Output Determination, Batch Management, Sub-Contracting, Third-Party Sub-Contracting, A/C Entries for Document Posting, Serialization, Consignment, Pipeline, Invoice Planning, Automatic PO Procedures, Evaluated Receipt Settlement, EDI associated with Order/Delivery/Confirmation/Invoice/Material, Master Data Migration with LSMW/BDC

Added Advantage:
Domain experience will be an added advantage.
Experience with taxation components like Vertex will be an added advantage.
Knowledge of ABAP debugging.
SAP MM certification will be an added advantage.
Knowledge of integration modules like WM / QM / PP / SD will be an added advantage.

Roles/Responsibilities:
Strong hands-on configuration experience in Material Management.
Integration with WM / QM / PP / SD modules and with external applications.
Responsible for planning and executing SAP implementation / development / support activities with regard to SAP Material Management, with the ability to lead the team.
Understand client requirements, provide solutions and functional specifications, and configure the system accordingly.
Ability to create presentation/workshop material for the Blueprint phase and present it to the client.
Ability to create process flows in Microsoft Visio for the client’s proposed business processes.
Ability to create Process Definition Document / Design Document (PDD) and Business Process Procedure (BPP) for the solutions provided.
Ability to configure SAP MM and deliver work products / packages conforming to the client’s standards and requirements.

General:
Should have good written & communication skills.
Should be able to handle the client independently.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

8.0 - 13.0 years

14 - 24 Lacs

Hyderabad, Bengaluru

Work from Office

Indirect Tax role requiring Vertex/Workday experience, along with ERP and implementation experience. Looking for 8+ years of experience. Location: Hyderabad & Bangalore. Notice period: immediate to 20 days max.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role - Java Developer
Experience - 3-5 yrs
Location - Bangalore
Backend
● Bachelor’s/Master’s in Computer Science from a reputed institute/university
● 3-7 years of strong experience in building Java/Golang/Python-based server-side solutions
● Strong in data structures, algorithms and software design
● Experience in designing and building RESTful microservices (a minimal sketch follows this posting)
● Experience with server-side frameworks such as JPA (Hibernate/Spring Data), Spring, Vert.x, Spring Boot, Redis, Kafka, Lucene/Solr/ElasticSearch etc.
● Experience in data modeling and design, database query tuning
● Experience in MySQL and strong understanding of relational databases
● Comfortable with agile, iterative development practices
● Excellent communication (verbal & written), interpersonal and leadership skills
● Previous experience as part of a start-up or a product company
● Experience with AWS technologies would be a plus
● Experience with reactive programming frameworks would be a plus
● Contributions to open source are a plus
● Familiarity with deployment architecture principles and prior experience with container orchestration platforms, particularly Kubernetes, would be a significant advantage
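The posting lists server-side work in Java/Golang/Python and RESTful microservices; the sketch below is a minimal REST endpoint in Python using FastAPI with an in-memory store, purely illustrative and not tied to this employer’s Java-centric stack. The service name and fields are hypothetical.

```python
# Minimal sketch of a RESTful microservice endpoint (illustrative only).
# Requires `pip install fastapi uvicorn`; run with: uvicorn service:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name


class Order(BaseModel):
    id: int
    item: str
    quantity: int


_ORDERS: dict[int, Order] = {}  # in-memory store standing in for a relational database


@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    # Persist the order (here, just keep it in memory keyed by id).
    _ORDERS[order.id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    # Return the stored order or a 404 if the id is unknown.
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]
```

In a production microservice, the in-memory dictionary would be replaced by a database layer (for example MySQL behind an ORM) and the service would sit behind an API gateway or container orchestrator.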

Posted 1 month ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Opportunity – Sales Development Representative (Outbound) | Gurugram & Bangalore

About Spyne
At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea, using AI-powered visuals to help auto dealers sell faster online, has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we’re scaling at breakneck speed:
Launched industry-first AI-powered Image, Video & 360° solutions for automotive dealers
Launching a Gen AI powered Automotive Retail Suite to power Inventory, Marketing, and CRM for dealers
Onboarded 1500+ dealers across the US, EU and other key markets in the past 2 years of launch
Gearing up to onboard 10K+ dealers across a global market of 200K+ dealers
150+ member team with a near equal split between R&D and GTM
Learn more about our products: Spyne AI Products - StudioAI, RetailAI; Series A Announcement - CNBC-TV18, Yourstory

We’re coming to Bangalore – the heart of India’s B2B SaaS ecosystem! We’re building a high-impact team in Bangalore to be part of our next growth chapter. This is a chance to join a breakout SaaS company at the frontline of innovation, right from India’s fastest-growing tech hub.

What are we looking for?
We’re looking for energetic and driven SDRs to fuel our outbound engine for the US market. If you love prospecting, thrive on high-quality conversations, and want to make a mark in a hyper-growth AI SaaS startup, this is your calling.

📍 Location: Bangalore (Work from Office, 5 days a week)
🌎 Shift Timings: US Shift (6 PM – 3 AM IST)

🚀 Why this role?
Be part of the GTM team expanding into the US, a key growth market
Own the top-of-the-funnel motion and help shape our outreach strategy
Be among the first hires in Bangalore as we set up in the SaaS capital of India

📌 What will you do?
Conduct outbound outreach via LinkedIn, email, and phone
Identify and qualify decision-makers at car dealerships and auto retailers
Generate qualified leads and book meetings for the Sales team
Personalize outreach to maximize engagement
Collaborate with AEs for smooth hand-offs
Maintain CRM hygiene and track key metrics (connects, conversions, etc.)

🏆 What will make you successful in this role?
Prior experience in outbound lead generation or inside sales
Strong communication (written + verbal)
High energy, self-starter attitude
Familiarity with tools like LinkedIn Sales Nav, HubSpot/Salesforce, Apollo
Comfortable working in the US time zone

📊 What will a typical quarter at Spyne look like?
Book qualified meetings that convert to revenue
Execute creative outbound campaigns tailored for the US
Collaborate closely with Marketing and Sales
Exceed KPIs and gear up for AE or leadership roles

🔹 How will we set you up for success?
Hands-on onboarding and enablement on AI-powered SaaS products
Target ICP briefs, pitch reviews, and objection handling sessions
1:1 coaching and mentorship from Sales leaders

🎯 What you must have:
1–3 years of experience in outbound SDR or BDR roles (preferably SaaS)
Exposure to global clients, especially US-based, is a strong plus
A results-first, hustle-hard mindset with eagerness to grow

🚀 Why Spyne?
Strong Culture: High-ownership, zero-politics, fast-paced
Fast Growth: $5M to $20M ARR trajectory in motion
Upskill Fast: Learn from top GTM leaders, founders, and advisors
Career Path: Clear track to AE or Sales Manager
Competitive Comp: Base + performance incentives + growth bonuses

📢 If you want to thrive in a high-energy sales role, help us take GenAI to the global auto industry, and be part of Spyne’s Bangalore chapter, this is an opportunity you don’t want to miss.

Posted 1 month ago

Apply

6.0 - 11.0 years

7 - 15 Lacs

Bengaluru

Hybrid

Production support (L2 incident management, L3 problem management), enhancements, monitoring of jobs, and reporting. Coordination with onsite teams, the client, and other stakeholders. Vertex Platform (Tax).

Posted 1 month ago

Apply

4.0 - 13.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS is conducting a face-to-face interview on 21st June at the TCS Pune office for Oracle Fusion Finance Consultant.
Job Role: Oracle Fusion Finance Consultant
Job Experience: 4 to 13 Years
Interview Location: TCS Pune Office
Interview Mode: Face-to-Face Interview
Interview Date: 21st June 2024
Venue: Tata Consultancy Services Ltd, Sahyadri Park 1, Auditorium, Plot No. 2 & 3, Phase 3, Rajiv Gandhi Infotech Park, Maan, Hinjawadi, Pune, Maharashtra 411057

What we are looking for:
Good working knowledge of Fusion Tax, Vertex and GL.
A background in Oracle Financials, particularly in the development and implementation of tax configuration to support a solution which spans the companies and supporting commercial operations. This involves understanding both the Oracle Cloud solution and the consequences to business processes of adopting out-of-the-box solutions where possible, to ensure the correct recording and reporting of tax.
Implementation experience using Vertex is a must.
Good knowledge of Payables (AP), General Ledger (GL), and Fixed Assets (FA).
Knowledge of Enterprise Structure.
Knowledge of tax in different countries, including Latam.
Experience in implementation projects, working with large and diverse teams in multiple locations.
Qualification: 15 years of full-time education.

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies