Are Emergent Abilities of Large Language Models a Mirage? - IEEE @ Stanford University

  • Published 5 Oct 2023
  • Title:
    Are Emergent Abilities of Large Language Models a Mirage?
    paper: arxiv.org/abs/2304.15004
    Abstract: Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance. We present our alternative explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test, and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test, and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics to produce never-before-seen, seemingly emergent abilities in multiple vision tasks across diverse deep networks. Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models. (An illustrative sketch of the metric-choice argument follows the talk details below.)
    Speaker: Brando Miranda, brando90.github.io/brandomira...
    Date: 6 October 2023, 6:00 p.m.
    Location: Packard 101 (Electrical Engineering)
    Bio
    Brando Miranda is a Ph.D. student at Stanford University in the Department of Computer Science, supervised by Professor Sanmi Koyejo. Previously, he was a graduate student at the University of Illinois Urbana-Champaign, a Research Assistant at MIT's Center for Brains, Minds and Machines (CBMM), and a graduate student at the Massachusetts Institute of Technology (MIT). Miranda's research interests lie in data-centric machine learning for foundation models, meta-learning, machine learning for theorem proving, and human- and brain-inspired Artificial Intelligence (AI). He completed his Master of Engineering in Electrical Engineering and Computer Science under the supervision of Professor Tomaso Poggio, where he did research on Deep Learning Theory. Miranda has received several awards, including a Most Cited Paper Certificate from the International Journal of Automation & Computing (IJAC), two Honorable Mentions from the Ford Foundation Fellowship, the Computer Science Excellence Saburo Muroga Endowed Fellowship, and a Stanford School of Engineering fellowship, and he is currently an EDGE Scholar at Stanford University.
    website: brando90.github.io/brandomira...
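
    A minimal sketch of the metric-choice argument (the sizes, constants, and power-law form here are illustrative assumptions, not values from the paper): if per-token loss falls smoothly with model size, an all-or-nothing metric like exact match over a multi-token answer appears to jump, while a per-token metric improves gradually.

        import numpy as np

        # Illustrative, not from the paper: assume per-token cross-entropy falls
        # smoothly with model size N via a power law, then score the same smooth
        # improvement with two different metrics.
        N = np.logspace(7, 11, 5)            # model sizes in parameters (assumed)
        loss = (N / 1e6) ** -0.35            # smooth power-law per-token loss (assumed)
        p_token = np.exp(-loss)              # per-token probability of the correct token

        seq_len = 10                         # e.g. a 10-token arithmetic answer
        exact_match = p_token ** seq_len     # nonlinear: every token must be correct
        token_acc = p_token                  # ~linear: credit for each correct token

        for n, em, ta in zip(N, exact_match, token_acc):
            print(f"N={n:9.1e}  exact_match={em:.3f}  token_accuracy={ta:.3f}")

    Both curves are computed from the same smoothly improving p_token; plotted against log N, exact_match traces a sharp "emergent" jump while token_accuracy rises gradually, which is the paper's point about metric choice.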
  • Entertainment

COMMENTS • 7

  • @wide_student
    @wide_student 4 months ago +1

    Hey, was there any response to Jason Wei's "Common Arguments against Emergent Abilities"? I just read it and thought it made really good points, and I wanted to know more about this! Please let me know!

  • @davidm6624
    @davidm6624 4 months ago

    Hey there, thanks for the research and presentation! Have you or anyone affiliated gotten any feedback from, let's say, OAI researchers? :)

    • @brandomiranda6703
      @brandomiranda6703  4 months ago +1

      No. But it's evident they already knew this, or something like it. E.g., see their GPT-4 technical report, the section on extrapolating from small to large models (rough sketch of the idea below).
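
      A hedged, minimal sketch of that kind of extrapolation, with entirely made-up numbers: fit a power law to the final losses of small training runs, then predict the loss of a much larger run.

        import numpy as np

        # All numbers are fabricated for illustration; only the method (a
        # log-log linear fit, i.e. a power law loss = a * compute**b) matters.
        compute = np.array([1e18, 1e19, 1e20, 1e21])  # small-run training compute (FLOPs)
        loss = np.array([3.2, 2.7, 2.3, 2.0])         # observed final losses (made up)

        b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)  # fit in log-log space
        predicted = np.exp(log_a) * (1e24) ** b       # extrapolate to a much larger run
        print(f"predicted loss at 1e24 FLOPs: {predicted:.2f}")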

    • @brandomiranda6703
      @brandomiranda6703  4 months ago +1

      Also, one of my final slides references these conjectures, which are almost certainly true.

    • @davidm6624
      @davidm6624 4 months ago

      @brandomiranda6703 Thanks! On the one hand, it is nice to have a sufficient explanation for emergence; on the other, it seems like the closed-source approach of, e.g., OAI does have a decelerating effect on public research/knowledge. Merry Christmas!

  • @championx9
    @championx9 4 months ago

    ☠ It looked like people had found some secret sauce, but it turned out to be a whole lot of nothing. (Still impressive what they can do regardless.)