Accelerate PyTorch workloads with PyTorch/XLA

  • Published Sep 28, 2024
  • Google Cloud AI Accelerators (TPUs and GPUs) enable high-performance, cost-effective training and inference for leading AI/ML frameworks: PyTorch, JAX, and TensorFlow. In this session, learn about the collaboration between Google, Meta, and partners in the AI ecosystem. Join us to see how PyTorch/XLA uses the XLA compiler to accelerate AI workloads on Cloud AI Accelerators. Discover how PyTorch/XLA enables high-performance training and inference for LLaMA 2, a state-of-the-art large language model (LLM) from Meta. Learn how PyTorch Lightning helps customers quickly and easily fine-tune LLMs on Cloud TPUs.
    Speakers: Carlos Mocholi, Damien Sereni, Shauheen Zahirazami, Rachit Aggarwal
    Watch more:
    All sessions from Google Cloud Next → goo.gle/next23
    #GoogleCloudNext