Teaching LLMs to Use Tools at Scale - Shishir Patil | Stanford MLSys #98

  • Published 24 Aug 2024
  • Episode 98 of the Stanford MLSys Seminar Series!
    Teaching LLMs to Use Tools at Scale
    Speaker: Shishir Patil
    Bio:
    Shishir G. Patil is a CS PhD student at UC Berkeley, with the Sky Computing and Berkeley AI Research (BAIR) labs. He is interested in designing and building efficient machine-learning systems. Recently, his focus has been on teaching LLMs to use tools through API calls. His projects include Gorilla LLM, RAFT, OpenFunctions, the Berkeley Function Calling Leaderboard, Skyplane, and POET. Before starting his PhD, he was a Research Fellow at Microsoft Research.
    Abstract:
    In this talk, we will explore our approach to integrating Large Language Models (LLMs) with various tools via APIs. Bridging LLMs with APIs presents a significant challenge, primarily because of the models' struggles to generate precise input arguments and their propensity to hallucinate API calls. Gorilla LLM, trained with our novel Retriever-Aware Training (RAT), sets a new benchmark for LLMs on writing API calls. Gorilla also introduces a novel PL-inspired metric for measuring hallucination, a failure mode commonly encountered in LLMs. I will conclude by presenting GoEx, a runtime for executing actions generated by LLMs, such as code and API calls, in settings ranging from agents and workflows to LLM-powered microservices. Key to GoEx are "undo" and "damage confinement" abstractions for managing unintended actions and risks. Gorilla is an open-source project that has served hundreds of thousands of user requests, with enterprise adoption and an energetic community behind it.
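    For readers curious what a "PL-inspired" hallucination metric might look like in practice, here is a minimal Python sketch; it is an illustration under our own assumptions, not the speaker's implementation. It parses a generated call with Python's ast module and flags it as hallucinated if its function name and argument names match no API in a reference database (the `torch.hub.load` entries below are hypothetical examples).

    ```python
    # Minimal sketch of an AST-based hallucination check (assumption:
    # a call "matches" an API if the dotted function name is identical
    # and the generated keyword args are a subset of the API's args).
    import ast

    def parse_call(source: str) -> tuple[str, frozenset[str]] | None:
        """Return (dotted function name, keyword-arg names) for a single call."""
        try:
            node = ast.parse(source, mode="eval").body
        except SyntaxError:
            return None
        if not isinstance(node, ast.Call):
            return None
        # Reconstruct the dotted name, e.g. torch.hub.load
        parts = []
        func = node.func
        while isinstance(func, ast.Attribute):
            parts.append(func.attr)
            func = func.value
        if isinstance(func, ast.Name):
            parts.append(func.id)
        name = ".".join(reversed(parts))
        kwargs = frozenset(kw.arg for kw in node.keywords if kw.arg)
        return name, kwargs

    def is_hallucinated(generated: str, api_database: list[str]) -> bool:
        """A generated call counts as hallucinated if it matches no known API."""
        parsed = parse_call(generated)
        if parsed is None:
            return True
        name, kwargs = parsed
        for ref in api_database:
            ref_parsed = parse_call(ref)
            if ref_parsed and ref_parsed[0] == name and kwargs <= ref_parsed[1]:
                return False  # name and arguments exist in a real API
        return True

    # Example: the second call invents a nonexistent 'fast' argument.
    apis = ['torch.hub.load(repo_or_dir="pytorch/vision", model="resnet50")']
    print(is_hallucinated('torch.hub.load(repo_or_dir="x", model="resnet50")', apis))  # False
    print(is_hallucinated('torch.hub.load(model="resnet50", fast=True)', apis))        # True
    ```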
    --
    Stanford MLSys Seminar hosts: Avanika Narayan, Benjamin Spector, Michael Zhang
    Twitter:
    / avanika15
    / bfspector
    / mzhangio
    --
    Check out our website for the schedule: mlsys.stanford.edu
    Join our mailing list to get weekly updates: groups.google....
    #machinelearning #ai #artificialintelligence #systems #mlsys #computerscience #stanford
