
1305 Vertex Jobs - Page 35

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Req ID: 323767

We are currently seeking a Gen AI Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
- Exercise expertise in ideating and developing AI/ML applications covering prediction, recommendation, text analytics, computer vision, bots, and content intelligence.
- Apply statistical skills and advanced statistical techniques and concepts.
- Demonstrate deep knowledge of ML frameworks such as TensorFlow, PyTorch, Keras, spaCy, and scikit-learn.
- Leverage advanced knowledge of the Python open-source software stack such as Django or Flask, Django REST or FastAPI, etc.
- Deep knowledge of statistics and machine learning models, deep learning models, NLP, Generative Adversarial Networks (GANs), and other generative models.
- Experience working with RAG technologies and LLM frameworks (LangChain and LlamaIndex), LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases (FAISS, Milvus, etc.).
- Employ technical knowledge and hands-on experience with Azure OpenAI, Google Vertex Gen AI, and AWS LLM foundational models, BERT, Transformers, PaLM, Bard, etc.
- Display proficiency in programming languages such as Python and understanding of various Python packages; experience with TensorFlow, PyTorch, or Keras.
- Develop and implement GenAI solutions, collaborating with cross-functional teams and supporting the successful execution of AI projects for a diverse range of clients.
- Assist in the design and implementation of GenAI use cases, projects, and POCs across multiple industries.
- Contribute to the development of frameworks, capabilities, and features for NTT DATA's global GenAI platform and TechHub.
- Work on RAG models to enhance AI solutions by incorporating relevant information retrieval mechanisms.
- Create and maintain data infrastructure to ingest, normalize, and combine datasets for actionable insights.
- Collaborate with data science teams to build, tune, and iterate on machine learning models and prompts.
- Work closely with customers to understand their requirements and deliver customized AI solutions; interact at appropriate levels to ensure client satisfaction and project success.
- Communicate complex technical concepts clearly to non-technical audiences.
- Preferred: experience in Private AI and Smart Agent Solutions.

Minimum Skills Required:
- 2+ years of experience architecting high-impact GenAI solutions for diverse clients, preferably in Private AI and Smart Agentic Solutions.
- 5+ years of experience participating in projects focused on one or more of the following areas: Predictive Analytics, Data Design, Generative AI, AI/ML, ML Ops.
- 3+ years of experience using Python.
- Ability to travel at least 25%.
- Bachelor's degree required.
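The RAG stack called out above (embedding models plus a vector database such as FAISS) reduces to a small retrieval loop. Below is a minimal sketch, assuming the faiss-cpu and sentence-transformers packages; the document list, embedding model name, and chunking are placeholders, not part of the posting.

```python
# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# and fetch the passages most relevant to a user question.
# Assumes: pip install faiss-cpu sentence-transformers
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Vertex AI is Google Cloud's managed ML platform.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "LangChain and LlamaIndex are common LLM orchestration frameworks.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product ~ cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k most similar documents; these would be passed to an LLM as context."""
    q = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

print(retrieve("Which library handles vector similarity search?"))
```

In a production pipeline the retrieved passages would be injected into the LLM prompt, which is the "augmented generation" half of RAG.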

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Data Scientist
Total Experience: 4–8+ years, with 2–3 years of relevant experience.
Location: Chennai (Hybrid)

About the Role: We are seeking an experienced Senior Data Scientist to take part in the development of advanced AI solutions with a strong focus on Generative AI (GenAI) and LLMs. This role requires deep technical expertise in cutting-edge frameworks such as LangChain, CrewAI, and AutoGen, as well as experience in building full-stack AI systems through to production.

Key Responsibilities:
- Architect, design, and implement GenAI/LLM-based solutions using LangChain, CrewAI, and AutoGen.
- Develop advanced RAG pipelines, structured prompting (e.g., chain-of-thought), and implement knowledge graphs to enhance AI reasoning.
- Lead full-cycle AI product development, including model training, evaluation, deployment, and CI/CD integration for scalable delivery.
- Work with large and complex datasets, performing preprocessing, feature engineering, and web scraping where required.
- Build and maintain AI model pipelines and services integrated with vector DBs, graph DBs, and big data platforms.
- Utilize cloud-based AI offerings (Azure AI Services, AWS Bedrock, GCP Vertex AI) for deploying scalable and secure AI products.
- Collaborate cross-functionally with engineering, product, and domain experts to translate business needs into AI solutions.
- Stay updated on the latest AI technologies and best practices.

Must-Have Skills:
- Core AI capabilities: GenAI, LLMs (OpenAI, Cohere, Claude, etc.), agentic AI, prompting, chain-of-thought; LangChain, CrewAI, AutoGen, advanced RAG techniques; knowledge graph design and implementation; strong Python and SQL programming.
- AI product deployment: end-to-end AI product development lifecycle; model deployment, evaluation, and CI/CD pipeline automation; web scraping and classic ML/NLP model development and validation.
- Databases: vector DBs, graph DBs, big data platforms, etc.
- Cloud integration: Azure AI Services, AWS Bedrock, GCP AI models; cloud-native AI deployments using managed services.

Good-to-Have Skills:
- Model governance and benchmarking frameworks.
- Experience with Streamlit or similar tools for rapid prototyping.
- Knowledge of time-series forecasting and signal-based modeling.
- Domain expertise or project experience in healthcare, automotive, retail, or oil & gas.
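Structured prompting such as chain-of-thought, mentioned in the responsibilities above, is mostly prompt construction. A minimal sketch follows, assuming a generic complete(prompt) function standing in for whichever LLM API (OpenAI, Cohere, Claude) a project actually uses; the worked example and wording are illustrative only.

```python
# Chain-of-thought style prompt: show one worked example with explicit
# reasoning steps, then ask the model to reason the same way.
FEW_SHOT_EXAMPLE = """Q: A warehouse ships 40 crates per day. How many crates in 2 weeks?
Reasoning: 2 weeks = 14 days; 40 crates/day * 14 days = 560 crates.
A: 560"""

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot chain-of-thought prompt for an arbitrary question."""
    return (
        "Answer the question. Think step by step and show your reasoning "
        "before giving the final answer.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n\n"
        f"Q: {question}\nReasoning:"
    )

def complete(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Cohere, Anthropic, Vertex AI, ...)."""
    raise NotImplementedError("Wire this to the provider SDK used in the project")

prompt = build_cot_prompt("A RAG index holds 1,200 chunks and grows by 150 per week. How many after 4 weeks?")
print(prompt)  # inspect the prompt; pass it to complete(prompt) once a provider is wired in
```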

Posted 1 month ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Attentive.ai is a fast-growing vertical SaaS start-up, funded by Peak XV (Surge), InfoEdge, Vertex Ventures, and Tenacity Ventures, that provides innovative software solutions for the landscape, paving, and construction industries in the United States. Our mission is to help businesses in this space improve their operations and grow their revenue through our simple and easy-to-use software platforms.

Position Description: We are looking for a DevOps Engineer to join our engineering team and help us develop and expand our various internal pipelines and infrastructure. As a DevOps Engineer at Attentive, you will work closely with engineering, computer vision, testing, and product teams to improve and expand their workflows and cloud resources. We offer an inspiring environment full of ambitious young people, and you get the freedom to implement your own designs, solutions, and creativity.

Roles & Responsibilities:
- Build and set up new development tools and infrastructure.
- Set up uptime checks, resource health monitoring, and other monitoring tools (GCP Stackdriver, ELK).
- Manage and scale cloud-based infrastructure.
- Troubleshoot and resolve infrastructure issues.
- Develop and integrate solutions for the automation of SDLC processes such as automated code checks, tests, deployments, rollbacks, etc.
- Automate the build, test, and release process.
- Create and maintain documentation for infrastructure and processes.
- Follow established processes and best practices to ensure code quality and security.

Requirements:
- 2-3 years of work experience as a Cloud & DevOps engineer.
- Excellent understanding of Python, Groovy, and Bash scripting.
- Experience working on Linux-based infrastructure.
- Experience working with cloud services like GCP and AWS.
- Hands-on experience with CI/CD tools.
- Experience deploying containerized applications, static websites, etc.
- Working knowledge of deploying and maintaining tools like GitHub, JIRA, and Jenkins.
- Experience with IaC tools (Terraform, Ansible).

Good to Have:
- Experience with serverless application deployments.
- Experience with source code scanning and dependency management.
- Familiarity with data management and MLOps (machine learning) processes and tools.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees: 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also guide our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Role Overview: As the ideal AI/ML Engineer, you are someone who can do a deep technical dive and communicate effectively with others. You love wrangling messy data into an elegant solution and helping others understand the power of their data. This role is a chance to have a huge impact on how businesses operate and make decisions on a daily basis.

Responsibilities:
- Dive deep into a wide range of data (tabular, text, image, etc.) to identify pain points and deliver data-driven insights to our clients.
- Utilize AI/ML and analytical techniques to determine areas of opportunity to help meet business goals.
- Leverage GCP services to train and deploy off-the-shelf (Vertex AI/AutoML/BQML) or custom models to address a client's business problem.
- Drive successful delivery of AI/ML projects and contribute to key practice initiatives whenever needed.
- Create reports and presentations that showcase the value of your solution to our clients.

Qualifications:
- 3-6+ years of experience with data science and AI/ML.
- Python (TensorFlow, Keras, scikit-learn, PyTorch), SQL, and shell scripting experience.
- Data engineering experience: data cleansing, ETL/ELT pipelines, vector DBs, relational DBs, NoSQL DBs, warehouses.
- Generative AI experience: LLMs, prompt engineering, tuning, RAG, LangChain.
- Statistics and modeling experience: time series, clustering, regression, classification, recommendation systems, deep learning, ensemble modeling, reinforcement learning, EDA, data visualization, feature engineering, model evaluation, responsible AI.
- MLOps experience: Git, building CI/CD pipelines, API development, Docker, deployment, retraining pipelines, monitoring, model versioning.
- Google Cloud experience with the following tools: Vertex AI, Document AI, Cloud Run, Cloud Functions, BigQuery, Pub/Sub, Cloud Storage.
- Kubernetes, Looker, and graph data science experience is a plus.
- Ability to communicate complex, technical processes to non-technical business stakeholders.
- Ability to track changing business requirements and deliver quality solutions both independently and with teams of varying skill sets.
- Bachelor's degree in Data Science, Computer Science, or similar.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status, or other legally protected class.

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Cochin

On-site

We are seeking a dynamic and experienced AI Trainer with expertise in Machine Learning, Deep Learning, and Generative AI, including LLMs (Large Language Models). The candidate will train students and professionals in real-world applications of AI/ML as well as the latest trends in GenAI such as ChatGPT, LangChain, Hugging Face Transformers, prompt engineering, and RAG (Retrieval-Augmented Generation).

Key Responsibilities:
- Deliver hands-on training sessions in AI, ML, Deep Learning, and Generative AI.
- Teach the fundamentals and implementation of algorithms such as regression, classification, clustering, decision trees, neural networks, CNNs, and RNNs.
- Train students in LLMs (e.g., OpenAI GPT, Meta LLaMA, Google Gemini) and prompt engineering techniques, covering: LangChain; Hugging Face Transformers; LLM APIs (OpenAI, Cohere, Anthropic, Google Vertex AI); vector databases (FAISS, Pinecone, Weaviate); RAG pipelines.
- Design and evaluate practical labs and capstone projects (e.g., chatbot, image generator, smart assistants).
- Keep training materials updated with the latest industry developments and tools.
- Provide mentorship for student projects and support during hackathons or workshops.

Required Skills:
- AI/ML core: Python, NumPy, pandas, scikit-learn, Matplotlib, Jupyter; good knowledge of machine learning and deep learning algorithms.
- Deep learning: TensorFlow / Keras / PyTorch; OpenCV (for computer vision); NLTK/spaCy (for NLP).
- Generative AI & LLMs: prompt engineering (zero-shot, few-shot, chain-of-thought); LangChain and LlamaIndex (RAG frameworks); Hugging Face Transformers; OpenAI API, Cohere, Anthropic, Google Gemini, etc.; vector DBs like FAISS, ChromaDB, Pinecone, Weaviate; Streamlit, Gradio (for app prototyping).

Qualifications:
- B.E/B.Tech/M.Tech/M.Sc in AI, Data Science, Computer Science, or a related field.
- Practical experience in AI/ML, LLM, or GenAI projects.
- Previous experience as a developer/trainer/corporate instructor is a plus.

Salary / Remuneration: ₹30,000 – ₹75,000/month based on experience and engagement type
Job Type: Full-time
Pay: ₹30,000.00 - ₹75,000.00 per month
Schedule: Day shift
Application Question(s): How many years of experience do you have? Can you commute to Kakkanad, Kochi? What is your expected salary?
Work Location: In person
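For the fundamentals labs listed above (regression, classification, clustering), a short scikit-learn exercise is the usual starting point. A minimal sketch using the bundled iris dataset; the model choice and split sizes are illustrative, not prescribed by the posting.

```python
# Beginner classification lab: train a decision tree on iris and report accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)

print("Hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```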

Posted 1 month ago

Apply

0 years

3 - 10 Lacs

Hyderābād

Remote

Hyderabad, India | Job ID: R-1071996 | Apply prior to the end date: June 16th, 2025

When you join Verizon: You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What you'll be doing: As a Finance Functional Consultant in the IT Corporate Systems group, you will be part of the Finance Functional team supporting strategic and transformation initiatives such as 1ERP. 1ERP is a multi-year program to consolidate various ERP platforms into a single ERP platform to drive efficiencies. The primary function of this role is to take a broad-based view of business processes across core functional domains for both US and global business entities.

What we're looking for: The primary responsibility will be to leverage a deep understanding of all the processes of the Core Finance - Taxation module, along with a good understanding of General Ledger, Accounts Payable, Accounts Receivable, and Fixed Asset Accounting in SAP S/4HANA. You will deliver best-in-class, out-of-the-box capabilities through planning, analysis, design, and leading development teams to realize the business processes. These will be enterprise-wide business processes centered on the SAP S/4HANA ERP platform and spanning other SAP and non-SAP systems. You will interface with business partners, system integration leads, functional leads, and development leads in order to fulfill the stated primary goal.
- Requirements gathering, configuration, and preparation of test scenarios and test scripts.
- Preparing functional specifications, cutover strategies, and issue resolution post go-live.
- Creating and tracking SAP OSS notes and working with SAP to resolve issues.
- Analyzing business specification documents, developing test plans, test strategy, and test scope, and defining test cases and automating test scripts.
- Preparing reports and training materials, training personnel, and delivering presentations.
- Implementing SAP best-practice business processes and global templates, and configuring these best practices.
- Identifying as-is and to-be processes and mapping business processes in SAP S/4HANA and SAP Cloud systems.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

You'll need to have:
- Bachelor's degree or six or more years of experience.
- Six or more years of relevant work experience.
- Experience with a minimum of two full-cycle SAP S/4HANA implementations.
- Experience with tax and working with large, complex transformation projects.
- Knowledge of indirect tax analysis of supply chains.
- Experience with gathering tax business requirements and designing tax configuration/solutions.
- Experience implementing new SAP tax applications if required per the business requirement.
- Experience with SAP, both transactional processes and core tax components.
- Experience with Vertex: use/input taxation; Vertex VAT/VAT-exempt tax (tax codes, Tax Assist rules, jurisdiction codes, calculation procedures); Vertex Tax Accelerator mapping (tax driver mapping); Vertex custom user exit mapping; Vertex–SAP Tax Accelerator reports; Advanced Tax Return for Tax on Sales/Purchases; Vertex non-deductible tax/reverse charge; Vertex RFC connectivity and updates.
- Integration between FI-MM and FI-CA.
- Good understanding of Invoice-to-Pay processes like Accounts Payable, Travel & Expense, etc.

Even better if you have one or more of the following:
- Master's degree in Commerce/MBA Finance/Chartered Accountant and 8 or more years of work experience.
- Experience in S/4HANA implementation, and certification.
- Experience on the business side.
- Experience in integration points with other SAP modules and non-SAP systems, IDOC/XML, and other interfaces.
- Ability to deliver simple-to-complex design solutions for process enhancements in RICEFW.
- Knowledge of custom programs and the ability to troubleshoot by debugging whenever required.
- Knowledge of implementing ERP systems using project lifecycle processes, including design, testing, implementation, and support.
- Ability to perform functional and performance tests on the system to verify the changes implemented by developers.
- Strong written, verbal, and interpersonal communication skills with management, technical peers, and business stakeholders.
- Strong analytical skills.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Fractal is one of the most prominent players in the Artificial Intelligence space. Fractal's mission is to power every human decision in the enterprise, and it brings AI, engineering, and design to help the world's most admired Fortune 500® companies. Fractal's products include Qure.ai to assist radiologists in making better diagnostic decisions, Crux Intelligence to assist CEOs and senior executives in making better tactical and strategic decisions, Theremin.ai to improve investment decisions, Eugenie.ai to find anomalies in high-velocity data, Samya.ai to drive next-generation Enterprise Revenue Growth Management, and Senseforth.ai to automate customer interactions at scale to grow top line and bottom line; Analytics Vidhya is the largest analytics and data science community, offering industry-focused training programs. Fractal has more than 3,600 employees across 16 global locations, including the United States, UK, Ukraine, India, Singapore, and Australia. Fractal has consistently been rated among India's best companies to work for by The Great Place to Work® Institute, featured as a leader in the Customer Analytics Service Providers Wave™ 2021, Computer Vision Consultancies Wave™ 2020, and Specialized Insights Service Providers Wave™ 2020 by Forrester Research, a leader in the Analytics & AI Services Specialists Peak Matrix 2021 by Everest Group, and recognized as an "Honorable Vendor" in the 2022 Magic Quadrant™ for Data & Analytics by Gartner. For more information, visit fractal.ai.

Job Description: Senior Data Scientist – Generative AI

We're looking for a passionate Data Scientist – Generative AI who thrives at the intersection of AI research and real-world applications. This role is ideal for someone who is eager to build, experiment, and scale LLM-powered solutions in enterprise environments. The role blends hands-on problem solving, research, engineering, and collaboration across multidisciplinary teams, driving innovation across industries and domains.

Responsibilities:
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.

Primary Skills:
- Natural Language Processing (NLP): hands-on experience in use-case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation; and/or
- Computer vision and audio: hands-on experience in image classification, object detection, segmentation, image generation, and audio and video analysis.
- Generative AI: proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents); experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities; familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face; techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: hands-on experience with cloud platforms such as Azure, AWS, and GCP; cloud certification is preferred.
- Application development: proficiency in Python, Docker, FastAPI/Django/Flask, and Git.

Must-Have Skills:
- 5–10 years of experience in data science and NLP, with at least 2 years in GenAI/LLMs.
- Proficiency in Python, SQL, and ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
- Hands-on experience with GenAI tools like LangChain, LlamaIndex, RAG pipelines, prompt engineering, and vector databases (e.g., FAISS, ChromaDB).
- Strong understanding of NLP techniques including embeddings, topic modeling, text classification, semantic search, summarization, Q&A, chatbots, etc.
- Experience with cloud platforms (GCP, AWS, or Azure) and CI/CD pipelines.
- Experience integrating LLMs via Azure OpenAI, Google Vertex AI, or AWS Bedrock.
- Ability to work independently and drive projects end-to-end from development to production.
- Strong problem-solving, data storytelling, and communication skills.
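The "LLM fine-tuning using PEFT" skill mentioned above typically means wrapping a base model with low-rank adapters so only a small fraction of weights train. A minimal sketch, assuming the Hugging Face transformers and peft packages; the model name and target modules are placeholders and depend on the architecture actually used.

```python
# LoRA fine-tuning setup with PEFT: only the injected adapter weights are trainable.
# Assumes: pip install transformers peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "facebook/opt-125m"  # placeholder small model for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # used later to build the training dataset
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update matrices
    lora_alpha=16,            # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by model family
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
# From here the wrapped model plugs into a standard transformers Trainer loop.
```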

Posted 1 month ago

Apply

4.0 years

11 Lacs

Mohali

On-site

Skill Sets:
- Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow).
- Proficiency in Python, TensorFlow, PyTorch, scikit-learn, and Hugging Face models.
- Strong experience in NLP, fine-tuning transformer models, and dataset preparation.
- Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (SageMaker, Vertex AI).
- Experience in containerization (Docker, Kubernetes) and CI/CD pipelines.
- Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning).
- Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection.

Roles and Responsibilities:
- Design and implement end-to-end ML pipelines from data ingestion to production.
- Develop, fine-tune, and optimize ML models, ensuring high performance and scalability.
- Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.).
- Automate model retraining, monitoring, and drift detection.
- Collaborate with engineering teams for seamless ML integration.
- Mentor junior team members and enforce best practices.

Job Type: Full-time
Pay: Up to ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
Application Question(s): How soon can you join us?
Experience: Total: 4 years (Required); Data Science roles: 3 years (Required)
Work Location: In person
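Model evaluation and monitoring with MLflow, listed in the skill set above, starts with logging parameters, metrics, and the model artifact for every training run so later runs can be compared for drift or regression. A minimal sketch, assuming the mlflow and scikit-learn packages; the experiment name, synthetic data, and metrics are illustrative.

```python
# Log a training run to MLflow: parameters, evaluation metrics, and the model artifact.
# Assumes: pip install mlflow scikit-learn
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-model-tracking")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test)))
    mlflow.log_metric("auc_roc", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later registration
```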

Posted 1 month ago

Apply

2.0 - 3.0 years

6 - 8 Lacs

Noida

On-site

About Us: Attentive.ai is a fast-growing vertical SaaS start-up, funded by Peak XV (Surge), InfoEdge, Vertex Ventures, and Tenacity Ventures, that provides innovative software solutions for the landscape, paving, and construction industries in the United States. Our mission is to help businesses in this space improve their operations and grow their revenue through our simple and easy-to-use software platforms.

Position Description: We are looking for a DevOps Engineer to join our engineering team and help us develop and expand our various internal pipelines and infrastructure. As a DevOps Engineer at Attentive, you will work closely with engineering, computer vision, testing, and product teams to improve and expand their workflows and cloud resources. We offer an inspiring environment full of ambitious young people, and you get the freedom to implement your own designs, solutions, and creativity.

Roles & Responsibilities:
- Build and set up new development tools and infrastructure.
- Set up uptime checks, resource health monitoring, and other monitoring tools (GCP Stackdriver, ELK).
- Manage and scale cloud-based infrastructure.
- Troubleshoot and resolve infrastructure issues.
- Develop and integrate solutions for the automation of SDLC processes such as automated code checks, tests, deployments, rollbacks, etc.
- Automate the build, test, and release process.
- Create and maintain documentation for infrastructure and processes.
- Follow established processes and best practices to ensure code quality and security.

Requirements:
- 2-3 years of work experience as a Cloud & DevOps engineer.
- Excellent understanding of Python, Groovy, and Bash scripting.
- Experience working on Linux-based infrastructure.
- Experience working with cloud services like GCP and AWS.
- Hands-on experience with CI/CD tools.
- Experience deploying containerized applications, static websites, etc.
- Working knowledge of deploying and maintaining tools like GitHub, JIRA, and Jenkins.
- Experience with IaC tools (Terraform, Ansible).

Good to Have:
- Experience with serverless application deployments.
- Experience with source code scanning and dependency management.
- Familiarity with data management and MLOps (machine learning) processes and tools.

Posted 1 month ago

Apply

16.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: AVP – AI Agents & Autonomous Intelligence (Leadership Role)
Experience: 12–16 years
Location: [Insert Location] | Type: Full-time

Role Overview: We're seeking an AVP to lead our Agentic AI charter — someone who can envision, architect, and operationalize intelligent agent systems that solve complex business problems. You'll spearhead the evolution from traditional ML pipelines to AI systems with reasoning, memory, adaptability, and autonomy.

Key Responsibilities:
- Own the roadmap for building enterprise-grade AI agents — from task-specific agents to collaborative multi-agent ecosystems.
- Architect agent frameworks that support dynamic decision-making, tool integration, multi-turn workflows, and feedback loops.
- Lead solution design for use cases like intelligent contract parsing, policy analysis, conversational agents, knowledge assistants, and document triage agents.
- Partner with stakeholders to identify agentic opportunities, develop PoCs, and scale them into production.
- Guide the development of reusable agent templates, stateful memory systems, and orchestration strategies.
- Champion best practices around agent evaluation, reliability, observability, and human-in-the-loop controls.
- Build and nurture a specialized team of AI engineers and researchers with a deep interest in autonomous systems.
- Stay at the forefront of research in agentic LLMs, decision-making architectures, and cooperative AI.

Qualifications:
- 12–16 years of AI/ML experience with 4–5 years in GenAI, cognitive architectures, or NLP.
- Strong grasp of multi-agent design, prompt chaining, tool use, reasoning frameworks, and reflection-based learning.
- Hands-on familiarity with open-source agent toolkits like AutoGen, LangGraph, CrewAI, and Semantic Kernel.
- Experience deploying agentic systems on cloud-native infrastructure (Cloud Run, GKE, Vertex AI, Lambda).
- Demonstrated success in leading teams across experimentation, architecture, and delivery of AI systems.
- Thought leadership or contributions to the GenAI community (conferences, blogs, GitHub, etc.) is a plus.
- Deep curiosity about the future of AI as intelligent collaborators, not just predictive tools.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

JD for Google Pre-Sales Solution Architect (Data & AI):
- Lead the technical discovery process, assess customer requirements, and design scalable solutions leveraging a comprehensive suite of Data & AI services, including BigQuery, Dataflow, Generative AI solutions, and advanced AI/ML services such as Vertex AI, Gemini, and Agent Builder.
- Architect and demonstrate solutions leveraging generative AI, large language models (LLMs), AI agents, and agentic AI patterns to automate workflows, enhance decision-making, and create intelligent applications.
- Develop and deliver compelling product demonstrations, proofs of concept (POCs), and technical workshops that showcase the value and capabilities of Google Cloud.
- Strong understanding of data warehousing, data lakes, streaming analytics, and machine learning pipelines.
- Collaborate with sales to build strong client relationships, articulate the business value of Google Cloud solutions, and drive adoption.
- Lead and contribute technical content and architectural designs for RFI/RFP responses and technical proposals leveraging Google Cloud services.
- Stay informed of industry trends, competitive offerings, and new Google Cloud product releases, particularly in the infrastructure and data/AI domains.
- Extensive experience in architecting and designing solutions on Google Cloud Platform, with a strong focus on Data & AI services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI (MLOps, custom models, pre-trained APIs), and Generative AI (e.g., Gemini).
- Strong understanding of cloud architecture patterns, DevOps practices, and modern software development methodologies.
- Ability to work effectively in a cross-functional team environment with sales, product, and engineering teams.
- 5+ years of experience in pre-sales or solutions architecture, focused on cloud Data & AI platforms.
- Skilled in client engagements, technical presentations, and proposal development.
- Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.

Posted 1 month ago

Apply

2.0 - 3.0 years

0 Lacs

Andhra Pradesh, India

On-site

Experience Level: 8+ years, with at least 2 to 3 years in AI/ML/GenAI.
Primary Skills: Google Gemini, GCP, Vertex AI.

Key Responsibilities:
- Design and implement GenAI architectures leveraging Google Cloud and Gemini AI models.
- Lead solution architecture and integration of generative AI models into enterprise applications.
- Collaborate with data scientists, engineers, and business stakeholders to define AI use cases and technical strategy.
- Develop and optimize prompt engineering, model fine-tuning, and deployment pipelines.
- Design scalable data storage and retrieval layers using PostgreSQL, BigQuery, and vector databases (e.g., Vertex AI Search, Pinecone, or FAISS).
- Evaluate third-party GenAI APIs and tools for integration.
- Ensure compliance with data security, privacy, and responsible AI guidelines.
- Support performance tuning, monitoring, and optimization of AI solutions in production.
- Stay updated with evolving trends in GenAI and GCP offerings, especially related to Gemini and Vertex AI.

Required Skills and Qualifications:
- Proven experience architecting AI/ML or GenAI systems on Google Cloud Platform.
- Hands-on experience with Google Gemini, Vertex AI, and related GCP AI tools.
- Strong understanding of LLMs, prompt engineering, and text generation frameworks.
- Proficiency in PostgreSQL, including advanced SQL and performance tuning.
- Experience with MLOps, CI/CD pipelines, and AI model lifecycle management.
- Solid knowledge of Python, APIs, RESTful services, and cloud-native architecture.
- Familiarity with vector databases and semantic search concepts.
- Strong communication and stakeholder management skills.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud Architect, Professional Machine Learning Engineer).
- Experience in model fine-tuning and custom LLM training.
- Knowledge of LangChain and RAG (Retrieval-Augmented Generation) frameworks.
- Exposure to data privacy regulations (GDPR, HIPAA, etc.).
- Background in natural language processing (NLP) and deep learning.
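Calling a Gemini model through Vertex AI, as the responsibilities above describe, is a short SDK exercise once the GCP project is set up. A minimal sketch, assuming the google-cloud-aiplatform package; the project ID, region, prompt, and model version are placeholders.

```python
# Generate text with a Gemini model on Vertex AI.
# Assumes: pip install google-cloud-aiplatform, plus gcloud auth application-default login
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project/region

model = GenerativeModel("gemini-1.5-pro")  # placeholder model version
response = model.generate_content(
    "Summarize the trade-offs between prompt engineering and model fine-tuning in three bullet points."
)
print(response.text)
```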

Posted 1 month ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena's vision is to help students become the best version of themselves. Athena's transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx

Role Overview: We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You'll help scholars explore the frontiers of AI—from machine learning models to generative AI systems—while coaching them in best practices and applied engineering.

Key Responsibilities:
- Guide scholars through the full AI/ML development cycle—from problem definition, data exploration, and model selection to evaluation and deployment.
- Teach and assist in building: supervised and unsupervised machine learning models; deep learning networks (CNNs, RNNs, Transformers); NLP tasks such as classification, summarization, and Q&A systems.
- Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude; teach the principles of few-shot, zero-shot, and chain-of-thought prompting; experiment with fine-tuning and embeddings in LLM applications.
- Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows.
- Conduct internal training and code reviews, ensuring technical rigor in projects.
- Stay updated with the latest research, frameworks, and tools in the AI ecosystem.

Technical Requirements:
- Proficiency in Python and ML libraries: scikit-learn, XGBoost, pandas, NumPy.
- Experience with deep learning frameworks: TensorFlow, PyTorch, Keras.
- Strong command of machine learning theory, including bias-variance tradeoff, regularization, and model tuning; cross-validation, hyperparameter optimization, and ensemble techniques.
- Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).

Advanced AI & NLP:
- Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA).
- Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face.
- Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG).
- Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).

Prompt Engineering & GenAI:
- Proficiency in crafting effective prompts using instruction tuning, role-playing and system prompts, and prompt chaining tools like LangChain or LlamaIndex.
- Understanding of AI safety, bias mitigation, and interpretability.

Required Qualifications:
- Bachelor's degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field.
- 2-5 years of relevant experience in ML/AI roles.
- Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
- Passion for education, mentoring, and working with high school scholars.
- Excellent communication skills, with the ability to convey complex concepts to a diverse audience.

Preferred Qualifications:
- Prior experience in student mentorship, teaching, or edtech.
- Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
- Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Scientist
Location: Chennai
Experience: 5-12 years

Job Summary: We are seeking a highly analytical and results-driven Data Scientist with a strong background in statistics, machine learning, and data science, combined with domain knowledge in mechanical engineering and cost analysis. The ideal candidate will have experience working with Google Cloud Platform (GCP) and will play a key role in transforming engineering and operational data into actionable insights to drive business decisions.

Required Skills & Experience:
- Strong knowledge of statistics, machine learning, and data science principles.
- Hands-on experience with Google Cloud Platform (GCP), especially BigQuery, Vertex AI, and Cloud Functions.
- Proficiency in Python or R for data analysis and modeling.
- Solid understanding of mechanical engineering concepts and their application in data analysis.
- Experience with cost modeling, cost-benefit analysis, or operational performance analytics.
- Excellent problem-solving, analytical thinking, and communication skills.
- Ability to work with large datasets and create clear, actionable insights.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a seasoned and visionary Lead AI Engineer to drive the design, development, and delivery of high-impact AI/ML solutions. As a technical leader, you will guide a team of AI developers in executing large-scale AI/ML projects, mentor them to build expertise, and foster a culture of innovation and excellence. You will collaborate with sales teams during pre-sales calls to articulate technical solutions and work closely with leadership to translate strategic vision into actionable, production-ready AI systems.

Responsibilities:
- Architect and lead the end-to-end development of impactful AI/ML models and systems, ensuring scalability, reliability, and performance.
- Provide hands-on technical guidance to a team of AI/ML engineers, fostering skill development and promoting best practices in coding, model design, and experimentation.
- Collaborate with cross-functional teams, including data scientists, product managers, and software developers, to define AI product strategies and roadmaps.
- Partner with the sales team during pre-sales calls to understand client needs, propose AI-driven solutions, and communicate technical feasibility.
- Translate leadership's strategic vision into technical requirements and executable project plans.
- Design and implement scalable MLOps infrastructure for data ingestion, model training, evaluation, deployment, and monitoring.
- Lead research and experimentation in advanced AI domains such as NLP, computer vision, large language models (LLMs), or generative AI, tailoring solutions to business needs.
- Evaluate and integrate open-source or commercial AI frameworks/tools to accelerate development and ensure robust solutions.
- Monitor and optimize deployed models for performance, fairness, interpretability, and cost-efficiency, driving continuous improvement.
- Mentor and nurture new talent, building a high-performing AI team capable of delivering complex projects over time.

Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, or a related field.
- 5+ years of hands-on experience in machine learning or deep learning, with a proven track record of delivering large-scale AI/ML projects to production.
- Demonstrated ability to lead and mentor early-career engineers, fostering technical growth and team collaboration.
- Strong proficiency in Python and ML frameworks/libraries (e.g., TensorFlow, PyTorch, Hugging Face, scikit-learn).
- Extensive experience deploying AI models in production environments using tools like AWS SageMaker, Google Vertex AI, Docker, Kubernetes, or similar.
- Solid understanding of data pipelines, APIs, MLOps practices, and software engineering principles.
- Experience collaborating with non-technical stakeholders (e.g., sales, leadership) to align technical solutions with business objectives.
- Familiarity with advanced AI domains such as NLP, computer vision, LLMs, or generative AI is a plus.
- Excellent communication skills to articulate complex technical concepts to diverse audiences, including clients and executives.
- Strong problem-solving skills, with a proactive approach to driving innovation and overcoming challenges.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities, and the planet.

Job Title: Machine Learning Engineer
Location: Chennai, TN 600119
Duration: 24 Months
Work Type: Onsite

Position Description:
- Train, build, and deploy ML and DL models.
- Software development using Python; work with tech anchors, product managers, and the team internally and across other teams.
- Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
- Software development using a TDD approach.
- Experience using GCP products and services.
- Ability to adapt quickly with open-source products and tools to integrate with ML platforms.

Skills Required:
- 3+ years of experience in Python software development.
- 3+ years of experience in cloud technologies and services, preferably GCP.
- 3+ years of experience practicing statistical methods and their accurate application, e.g., ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years of experience with Python, SQL, and BQ.
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Skills Preferred: Good communication, presentation, and collaboration skills.
Experience Required: 2 to 5 years.
Experience Preferred: GCP products and services.
Education Required: BE, BTech, MCA, M.Sc, ME.

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
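Several of the statistical methods named above (principal component analysis, k-means clustering) have compact scikit-learn equivalents. A minimal sketch on synthetic data; the component count, cluster count, and random data are illustrative only.

```python
# Dimensionality reduction with PCA followed by k-means clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))  # stand-in for a real feature matrix

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X_scaled)
print("Explained variance ratio:", pca.explained_variance_ratio_)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_reduced)
print("Cluster sizes:", np.bincount(labels))
```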

Posted 1 month ago

Apply

18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: We are seeking a highly experienced and visionary Director of AI/ML Engineering to lead our AI/ML initiatives. The ideal candidate will have a minimum of 18+ years of experience in software engineering, with at least the last three years dedicated exclusively to AI/ML and Retrieval-Augmented Generation (RAG). This role requires a deep understanding of AI/ML technologies, a proven track record of leading engineering teams, and the ability to drive innovation and excellence in AI/ML projects within an AI-first organization.

Role and Responsibilities:
- Leadership and Strategy: Provide technical leadership and strategic direction for the AI/ML engineering team. Develop and execute a roadmap for AI/ML initiatives that aligns with the company's goals and objectives, transforming the organization into an AI-first entity.
- Team Management: Lead, mentor, and grow a high-performing team of AI/ML engineers, data scientists, and MLOps professionals. Foster a collaborative and innovative team culture, building an end-to-end AI agent force.
- AI/ML Development: Oversee the design, development, and deployment of AI/ML models and algorithms. Ensure the scalability, reliability, and performance of AI/ML solutions, integrating AI into the SDLC (Software Development Life Cycle).
- Project Management: Manage multiple AI/ML projects simultaneously, ensuring timely delivery and alignment with business objectives. Collaborate with cross-functional teams to integrate AI/ML solutions into products and services, driving significant business outcomes.
- Innovation and Research: Stay current with the latest advancements in AI/ML and RAG technologies. Drive innovation by exploring new techniques, tools, and methodologies to enhance AI/ML capabilities, including the development of a multi-agent platform.
- Stakeholder Collaboration: Work closely with stakeholders, including product managers, business leaders, and customers, to understand requirements and deliver AI/ML solutions that meet their needs.
- Quality and Compliance: Ensure that AI/ML solutions adhere to industry standards, best practices, and regulatory requirements. Implement robust testing, validation, and monitoring processes.

Skills Needed:
- Technical Expertise: Extensive experience in AI/ML, including deep learning, natural language processing (NLP), computer vision, and RAG. Proficiency in programming languages such as Python, R, and Java.
- Leadership and Management: Proven track record of leading and managing engineering teams. Strong leadership, mentoring, and team-building skills.
- Project Management: Excellent project management skills, with the ability to manage multiple projects and priorities. Strong organizational and time-management abilities.
- Innovation and Problem-Solving: Creative thinker with a passion for innovation. Ability to solve complex problems and drive continuous improvement.
- Communication and Collaboration: Strong communication and interpersonal skills. Ability to collaborate effectively with cross-functional teams and stakeholders.
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field. Advanced degrees (Master's or Ph.D.) in AI/ML or related disciplines are preferred.
- Industry Knowledge: Up-to-date knowledge of AI/ML trends, tools, and technologies. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and AI/ML frameworks (e.g., TensorFlow, PyTorch); preferably GCP, Gemini, and Vertex AI.

The candidate must have spent the last three years working on AI/ML programs, creating solutions and delivering value to the enterprise.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

b2bBuyer OPEN POSITION NOTIFICATION

Order Number: 34189
Competency Group: All Information Systems
The GRADE/BAND for this Order is: 1
Target Rate: 2375
Security Check Reqd: Y
Maximum number of resumes to be submitted per position: 2

Note: Appropriate Affirmative Action efforts should be in effect when recruiting for Ford. This includes but is not limited to all equal employment opportunity and affirmative action requirements set forth in 41 C.F.R. Sec. 60-1.4(a) and (c), which is incorporated herein by reference. In addition, all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, or national origin. Contract Personnel Agreement and Background Verification forms (if required) must be submitted to MSX International before the candidate may start.

Competency Center Specialist: MARY PRIYANKA
Phone No: 0000000000
Skills Based Assessment: Not Required
Broadcast Round No: 1
Number of Positions: 1
Target Date of next round of broadcast, if needed: 10-JUN-2025
Position No: 27617
Status: OPEN
Position Title: Architect Consultant
Target Start Date: 22-SEP-2025
Original Duration: 365 Days
Estimated Regular Hours: 40
Estimated Overtime Hours: 0
Overtime Required Flag: Y
Work Hours: Standard
Shift: DAY
Travel Required? N
Travel %: 0
Location: 7539 - Global Tech and Business Center
Location Address: ELCOT SEZ, Plot No. 13, 15 & 16, Sholinganallur, Chennai, 600119
GRASP Training Reqd: N
HAZCOM Training Reqd: N

Position Description: Provide technical leadership and architectural strategy for enterprise-scale data, analytics, and cloud initiatives. This role partners with business and product teams to design scalable, secure, and high-performing solutions that align with enterprise architecture standards and business goals. The Solution Architect will help GDIA teams with the architecture of new and existing applications using cloud architecture patterns and processes. Works with product teams to define, assemble, and integrate components based on Ford standards and business requirements. Supports the product team in the development of the technical design and documentation. Participates in proofs of concept and supports the product solution evaluation processes. Provides architecture guidance and technical design leadership. Demonstrated ability to work on multiple projects simultaneously.

Skills Required: GCP, Cloud Architecture, API, Enterprise Architecture, Solution Architecture, CI/CD, Data/Analytics
Skills Preferred: BigQuery, Java, React, Python, LLM, Angular, GCS, GCP Cloud Run, Vertex, Tekton, Terraform, Problem Solving

Experience Required:
- Proficiency and direct hands-on experience in Google Cloud Platform architecture.
- Strong understanding of enterprise integration patterns, security architecture, and DevOps practices.
- Demonstrated ability to lead complex technical initiatives and influence stakeholders across business and IT.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Remote

Job Title: Senior Machine Learning Engineer
Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Strong programming skills in Python and experience with ML frameworks.
- Proficiency in containerization (Docker) and orchestration (Kubernetes) technologies.
- Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI, GitHub Actions).
- Knowledge of data engineering concepts and experience building data pipelines.
- Strong understanding of compute, storage, and orchestration resources on cloud platforms.
- Deploying and managing ML models, especially on GCP services such as Cloud Run, Cloud Functions, and Vertex AI (the role is otherwise cloud-platform agnostic).
- Implementing MLOps best practices, including model version tracking, governance, and monitoring for performance degradation and drift.
- Creating and using benchmarks, metrics, and monitoring to measure and improve services.
- Collaborating with data scientists and engineers to integrate ML workflows from onboarding to decommissioning.
- Experience with MLOps tools like Kubeflow, MLflow, and Data Version Control (DVC).
- Managing ML models on any of the following: AWS (SageMaker), Azure (Machine Learning), and GCP (Vertex AI).

Tech Stack: AWS, GCP, or Azure experience (GCP preferred); PySpark is a must; Databricks is good to have; ML experience, Docker, and Kubernetes.
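Deploying and managing models on Vertex AI, as the role above describes, usually means uploading a serialized model to the model registry and attaching it to an endpoint. A minimal sketch, assuming the google-cloud-aiplatform SDK; the project, bucket path, display name, and serving container are placeholders (check the current list of Vertex prebuilt prediction containers before using one).

```python
# Upload a trained model to the Vertex AI Model Registry and deploy it to an endpoint.
# Assumes: pip install google-cloud-aiplatform, model artifacts already copied to GCS.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")  # placeholders

model = aiplatform.Model.upload(
    display_name="churn-classifier",                      # placeholder name
    artifact_uri="gs://my-bucket/models/churn/",          # placeholder GCS path with model files
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"  # example prebuilt sklearn image
    ),
)

endpoint = model.deploy(machine_type="n1-standard-2", min_replica_count=1)
print("Deployed endpoint:", endpoint.resource_name)
# endpoint.predict(instances=[...]) serves online predictions once deployment finishes.
```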

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai

Work from Office

- Experience in CI/CD pipelines and scripting languages, with a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), continuous integration/deployment tools (e.g., Jenkins), cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes, and Kafka; third-party integration is a plus.
- 4+ years of experience building production-grade ML pipelines.
- Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch.
- Experience with cloud build, deployment, and orchestration tools.
- Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc.
- Experience in statistical modeling, machine learning, data mining, and unstructured data analytics.
- Understanding of the ML lifecycle and MLOps, and hands-on experience productionizing ML models.
- Detail-oriented, with the ability to work both independently and collaboratively.
- Ability to work successfully with multi-functional teams, principals, and architects, across organizational boundaries and geographies.
- Equal comfort driving low-level technical implementation and high-level architecture evolution.
- Experience working with data engineering pipelines.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
- Bachelor's degree in Computer Science, Electrical Engineering, a related field, or equivalent practical experience.
- 2 years of experience with network equipment and network protocols testing and debugging.
- Experience in test automation, test methodologies, writing test plans, and creating test cases using Python, C++, or Golang.

Preferred qualifications:
- Master's degree or PhD in Computer Science or equivalent practical experience.
- 5 years of experience in software development and testing.
- Experience with network equipment, network protocols testing, debugging, large-network troubleshooting, and deployment.

About the job: Our computational challenges are so big and unique we can't just buy our hardware, we've got to make it ourselves. Our Platforms Team designs and builds the hardware, software, and networking technologies that power all of Google's services. As a Networking Test Engineer you make sure that our massive and growing network is operating at its peak potential. You have experience with complex networking equipment, a deep understanding of networking protocols, test design and implementation chops, and a background in IP network design. It's your job to make sure Google's cutting-edge technology can perform at scale.

The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers, and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities:
- Design, develop, and execute test plans and cases for Google Software Defined Networking (SDN) network, infrastructure, and services, and maintain lab test beds, test infrastructure, and existing test automation environments (hardware and software).
- Identify, document, and track network defects and performance issues; implement various simulation techniques to replicate and assess network behavior and performance.
- Collaborate with cross-functional teams to identify, troubleshoot, and resolve network problems; triage automated regression failures; provide failure analysis; and manage software releases to production.
- Utilize testing tools to assess network system performance and reliability, and analyze test results, generating detailed reports on network performance and reliability.
- Participate in the design and implementation of automated testing solutions; serve as a technical resource to junior team members for simple to intermediate technical problems (e.g., lab infrastructure or installations).

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
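
As context for the Python test-automation work this role describes, here is a minimal, illustrative pytest sketch for a TCP reachability check; the target hosts, ports, and timeout are hypothetical and not taken from the posting.

import socket

import pytest

# Hypothetical targets used only to illustrate a protocol-level test case
TARGETS = [("dns.google", 53), ("example.com", 443)]


@pytest.mark.parametrize("host,port", TARGETS)
def test_tcp_port_reachable(host, port):
    """Verify that a TCP handshake to the target completes within the timeout."""
    with socket.create_connection((host, port), timeout=5) as conn:
        # getpeername() raises OSError if the socket is not actually connected
        assert conn.getpeername() is not None

In practice, real network-equipment test suites drive device APIs or traffic generators rather than plain sockets, but the structure of parameterized, assert-based test cases is the same.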

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
Bachelor's degree or equivalent practical experience.
5 years of experience in product management or a related technical role.
2 years of experience taking technical products from conception to launch.

Preferred qualifications:
Experience in data centers and Cloud infrastructure.
Experience in workflow engines, ETL and data pipelines, or enterprise logging infrastructure.
Knowledge of designing and building large-scale distributed workflow and orchestration systems.

About The Job
At Google, we put our users first. The world is always changing, so we need Product Managers who are continuously adapting and excited to work on products that affect millions of people every day. In this role, you will work cross-functionally to guide products from conception to launch by connecting the technical and business worlds. You can break down complex problems into steps that drive product development.
One of the many reasons Google consistently brings innovative, world-changing products to market is because of the collaborative work we do in Product Management. Our team works closely with creative engineers, designers, marketers, etc. to help design and develop technologies that improve access to the world's information. We're responsible for guiding products throughout the execution cycle, focusing specifically on analyzing, positioning, packaging, promoting, and tailoring our solutions to our users.
In this role, you will lead the development and evolution of Data Center Automation Platforms, encompassing workflow engines, data warehouse, and logging infrastructure. You will own the strategy, roadmap, and delivery of cutting-edge solutions that empower the automation platform, improve operational efficiency, and enable data-driven decisions at scale. You will collaborate with cross-functional teams to define product requirements, align stakeholder priorities, and ensure seamless integration of platforms with the broader data center ecosystem.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities
Define and articulate the product goal and strategy for data center automation platforms and focused workflow orchestration systems that handle operational tasks such as machine state management.
Develop and maintain a comprehensive roadmap, balancing short-term deliverables with long-term strategic goals. Prioritize features and initiatives based on impact, feasibility, and stakeholder feedback.
Conduct customer research, Critical User Journey (CUJ) analysis, and pain-point synthesis to identify opportunities for automation and improved data visibility.
Collaborate with stakeholders (e.g., Engineering teams, Data Center operators, and business units) to define requirements and refine solutions.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities
Manage Data: Extract, clean, and structure both structured and unstructured data.
Coordinate Pipelines: Utilize tools such as Airflow, Step Functions, or Azure Data Factory to orchestrate data workflows.
Deploy Models: Develop, fine-tune, and deploy models using platforms like SageMaker, Azure ML, or Vertex AI.
Scale Solutions: Leverage Spark or Databricks to handle large-scale data processing tasks.
Automate Processes: Implement automation using tools like Docker, Kubernetes, CI/CD pipelines, MLflow, Seldon, and Kubeflow.
Collaborate Effectively: Work alongside engineers, architects, and business stakeholders to address and resolve real-world problems efficiently.

Qualifications
3+ years of hands-on experience in MLOps (4-5 years of overall software development experience).
Extensive experience with at least one major cloud provider (AWS, Azure, or GCP).
Proficiency in using Databricks, Spark, Python, SQL, TensorFlow, PyTorch, and scikit-learn.
Expertise in debugging Kubernetes and creating efficient Dockerfiles.
Experience in prototyping with open-source tools and scaling solutions effectively.
Strong analytical skills, humility, and a proactive approach to problem-solving.

Preferred Qualifications
Experience with SageMaker, Azure ML, or Vertex AI in a production environment.
Commitment to writing clean code, creating clear documentation, and maintaining concise pull requests.

Skills: SQL, Kubeflow, Spark, Docker, Databricks, ML, GCP, MLflow, Kubernetes, AWS, PyTorch, Azure, CI/CD, TensorFlow, scikit-learn, Seldon, Python, MLOps
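
As an illustration of the pipeline-orchestration work described above, here is a minimal sketch of an Airflow DAG with two dependent tasks; the DAG id, schedule, and task logic are hypothetical and chosen only for demonstration.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder extract step; a real task would pull data from a source system
    print("extracting raw data")


def transform():
    # Placeholder transform step; a real task would clean and structure the data
    print("transforming data")


with DAG(
    dag_id="demo_daily_pipeline",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # on Airflow versions before 2.4, use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds

The same dependency structure carries over to Step Functions or Azure Data Factory; only the definition language and operators change.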

Posted 1 month ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description
· Hands-on experience with data tools and technologies is a must.
· Partial design experience is acceptable, but the core focus should be on strong data skills.
· Will be supporting the pre-sales team from a hands-on technical perspective.
· GCP experience: Looker / BigQuery / Vertex – any of these with 6 months to 1 year of experience.

Requirements
On day one we'll expect you to...
Own the modules and take complete ownership of the project.
Understand the scope, design and business objective of the project and articulate it in the form of a design document.
Strong experience with Google Cloud Platform data services, including BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, GenAI (Gemini, Imagen, Veo).
Experience in implementing data governance on GCP.
Familiarity with integrating GCP services with other platforms like Snowflake; hands-on Snowflake project experience is a plus.
Experienced coder in Python, SQL, ETL and orchestration tools.
Experience with containerized solutions using Google Kubernetes Engine.
Good communication skills to interact with internal teams and customers.
Expertise in PySpark (both batch and real-time), Kafka, SQL, and data querying tools.
Experience working in a team, continuously monitoring, contributing hands-on as an individual contributor, and helping the team deliver their work as you deliver yours.
Experience working with large volumes of data in a distributed environment, keeping parallelism and concurrency in mind to ensure performant and resilient systems.
Optimize the deployment architecture to reduce job run-times and resource utilization.
Develop and optimize data warehouses given the schema design.
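
For orientation on the BigQuery work mentioned above, the following is a minimal sketch using the google-cloud-bigquery Python client to run a parameterized query; the project, dataset, and table names are hypothetical and assume application-default credentials are configured.

from google.cloud import bigquery
from google.cloud.bigquery import QueryJobConfig, ScalarQueryParameter

# Hypothetical project id
client = bigquery.Client(project="my-demo-project")

query = """
    SELECT status, COUNT(*) AS orders
    FROM `my-demo-project.sales.orders`      -- hypothetical dataset and table
    WHERE order_date >= @cutoff
    GROUP BY status
    ORDER BY orders DESC
"""

job_config = QueryJobConfig(
    query_parameters=[ScalarQueryParameter("cutoff", "DATE", "2024-01-01")]
)

# Run the query and iterate over the result rows
for row in client.query(query, job_config=job_config).result():
    print(row.status, row.orders)

Parameterized queries like this are also the natural unit to wrap in an orchestration task (Cloud Composer / Airflow) when the same aggregation must run on a schedule.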

Posted 1 month ago

Apply

5.0 years

4 - 7 Lacs

Thiruvananthapuram

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

What we are looking for in an ideal candidate:
Design and manage CI/CD pipelines for both software applications and machine learning workflows.
Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
Build robust monitoring, logging, and alerting systems for AI applications.
Manage containerized services with Docker and orchestration platforms like Kubernetes.
Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
Ensure system reliability, scalability, and security across all environments.

What skills do you need?
5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications:
Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
Exposure to data versioning, feature stores, and model registries.
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
Background in software engineering, data engineering, or ML research is a bonus.

What We Offer:
Work on cutting-edge AI platforms and infrastructure.
Cross-functional collaboration with top ML, research, and product teams.
Competitive compensation package – no constraints for the right candidate.
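
As a small illustration of the monitoring and alerting work this role involves, here is a minimal sketch that exposes Prometheus metrics around a model-inference function using the prometheus_client Python library; the metric names and the dummy predict function are hypothetical.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names for an inference service
PREDICTIONS = Counter("demo_predictions_total", "Total prediction requests served")
LATENCY = Histogram("demo_prediction_latency_seconds", "Prediction latency in seconds")


@LATENCY.time()          # records each call's duration in the histogram
def predict(features):
    PREDICTIONS.inc()    # count every request
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference
    return sum(features)


if __name__ == "__main__":
    start_http_server(8000)  # metrics become scrapeable at http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(4)])

A Prometheus server would scrape this endpoint, with Grafana dashboards and alert rules built on top of the exported counters and histograms.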

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies