Google Cloud AI Platforms and Infrastructure

  • Published Aug 22, 2024
  • In this session, we’ll explore how Vertex AI, Google Kubernetes Engine (GKE) and Google Cloud’s AI Infrastructure provide a robust platform for AI development, training and inference. We’ll discuss hardware choices for inference (CPUs, GPUs, TPUs), showcasing real-world examples. We’ll cover distributed training and inference with GPUs/TPUs and optimizing AI performance on GKE using tools like autoscaling and dynamic workload scheduling.
    Brandon Royal, product manager at Google Cloud, discusses deploying AI on Google Cloud’s AI infrastructure. The session focuses on how Google Cloud applies AI to solve customer problems and on broader trends in AI, particularly the platform shift toward generative AI. Brandon covers the AI infrastructure designed for generative AI, including inference, serving, training, and fine-tuning, and how each is applied in Google Cloud.
    Brandon explains the evolution of AI models, particularly open models, and their importance for flexibility in deployment and optimization. He highlights that many AI startups and unicorns choose Google Cloud for their AI infrastructure and platforms. He also introduces Gemma, a new open model released by Google DeepMind, which is lightweight, state-of-the-art, and built on the same technology as Google's Gemini model. Gemma is available with open weights on platforms like Hugging Face and Kaggle.
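    Because the weights are open, the model can be pulled straight from Hugging Face with the standard transformers API. A minimal sketch, assuming the 2B instruction-tuned checkpoint ("google/gemma-2b-it") and that you have accepted Gemma’s license and authenticated to Hugging Face:

```python
# Minimal sketch: loading open-weight Gemma from Hugging Face.
# Assumes `transformers` and `torch` are installed and the Gemma license
# has been accepted on Hugging Face (it is a gated repository).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # 2B instruction-tuned variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Why do open weights matter for deployment?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```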
    The session then shifts to a discussion about AI platforms and infrastructure, with a focus on Kubernetes and Google Kubernetes Engine (GKE) as the foundation for open models. Brandon emphasizes the importance of flexibility, performance, and efficiency in AI workloads and how Google provides a managed experience with GKE Autopilot.
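    To make the "GKE as a foundation" point concrete, here is a minimal sketch of deploying a model server onto a GKE Autopilot cluster with the official kubernetes Python client. The image name, GPU type, and resource figures are illustrative assumptions, not details from the talk:

```python
# Sketch: a GPU-backed model-server Deployment on GKE Autopilot via the
# `kubernetes` Python client. Names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl already targets the cluster

container = client.V1Container(
    name="model-server",
    image="us-docker.pkg.dev/example-project/serving/gemma-server:latest",  # hypothetical
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
        limits={"nvidia.com/gpu": "1"},
    ),
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="gemma-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "gemma-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "gemma-server"}),
            spec=client.V1PodSpec(
                containers=[container],
                # On Autopilot, this selector asks GKE to provision GPU nodes
                # automatically; no manual node-pool management is needed.
                node_selector={"cloud.google.com/gke-accelerator": "nvidia-l4"},
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

    The same manifest could be written in YAML and applied with kubectl; the Python client is used here only to keep all the examples in one language.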
    He also touches on the hardware choices for inference, including CPUs, GPUs, and TPUs, and how Google Cloud offers the largest selection of AI accelerators in the market. Brandon shares customer stories, such as Palo Alto Networks' use of CPUs for deep learning models in threat detection systems. He also discusses the deployment of models on GKE, including autoscaling and dynamic workload scheduling.
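    As a sketch of the autoscaling idea, the deployment above could be paired with a HorizontalPodAutoscaler. Real inference autoscaling often keys on custom metrics such as queue depth or accelerator duty cycle; plain CPU utilization is used here only to keep the example self-contained:

```python
# Sketch: CPU-based HorizontalPodAutoscaler for the model server, again via
# the `kubernetes` Python client. Thresholds and replica bounds are illustrative.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="gemma-server-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="gemma-server"
        ),
        min_replicas=1,
        max_replicas=8,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=60
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```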
    Finally, Brandon gives a live demo of deploying the Gemma model on GKE, showing how to use the model to generate responses and how it can be augmented with retrieval-augmented generation to produce more grounded answers. He also demonstrates a chat interface built with Gradio for interacting with the model, and discusses scaling and managing AI workloads on Google Cloud.
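    A toy sketch of the demo’s two pieces, retrieval-augmented generation and a Gradio chat UI, is below. The endpoint URL, request payload, and in-memory "document store" are assumptions; the actual demo used a real model server and retrieval stack:

```python
# Toy sketch: naive RAG in front of a served model, wrapped in a Gradio chat UI.
# MODEL_URL and the JSON payload shape are hypothetical.
import requests
import gradio as gr

MODEL_URL = "http://gemma-server.default.svc.cluster.local:8080/generate"  # hypothetical

# Stand-in knowledge base; the demo retrieved from real documents.
DOCS = [
    "GKE Autopilot manages nodes automatically, including GPU capacity.",
    "Gemma is a family of lightweight open models from Google DeepMind.",
]

def retrieve(query: str) -> str:
    """Pick the document with the most keyword overlap (in place of a vector DB)."""
    words = set(query.lower().split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def chat(message, history):
    # Ground the prompt in retrieved context, then ask the served model.
    context = retrieve(message)
    prompt = f"Context: {context}\n\nQuestion: {message}\nAnswer:"
    resp = requests.post(MODEL_URL, json={"prompt": prompt, "max_tokens": 256}, timeout=60)
    return resp.json().get("text", "")

# gr.ChatInterface handles chat history and the web UI around the function.
gr.ChatInterface(chat).launch()
```

    Injecting the retrieved context into the prompt is what makes the responses "more grounded" in the sense described above.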
    Recorded live at AI Field Day 4 in Santa Clara, California, on February 22, 2024. Watch the entire presentation at techfieldday.c... or visit cloud.google.c... or TechFieldDay.c... for more information.
