Emergence and reasoning in large language models - Jason Wei (Google)

  • Published 1 Nov 2022
  • Presented as part of CSCI 601.771: Self-supervised Statistical Models: self-supervised.cs.jhu.edu/

COMMENTS • 14

  • @CalculatingMonkey • 1 year ago

    So insightful!! Thanks!!

  • @Pruthvikajaykumar • 1 year ago

    Thank you, really helpful

  • @nintishia • 1 year ago • +3

    1. Is it possible to lower the scale at which emergence occurs by choosing a suitable architecture?
    2. Is there a possibility that we could decompose large language models into parts that deal with syntax, knowledge, and reasoning?

  • @fumikaisono4706 • 1 year ago • +2

    What is the name of the paper that is mentioned at 32:09?

  • @gsm1 • 11 months ago • +1

    Thanks for uploading this. However, I noticed that the text in your videos can be a bit hard to read due to the small size, and it's somewhat blurry at times. I think your videos would be even better in a higher resolution, perhaps greater than 480p!

  • @billykotsos4642 • 1 year ago

    39:50 Yeah, but it gets extra tricky when the 'reasoning path' is wrong but the final answer is correct!

  • @eva__4380 • 11 months ago

    Is it possible that the model has seen the data used for these benchmarks during training?

  • @lincolnkroll • 4 months ago

    At 24:05 an erroneous result is presented and accepted as fact by the panel of experts, and the same answer in fact comes up when I Google the same question. Pearls from otters are NOT used in inlays for guitars; rather, it is mother-of-pearl, which comes from abalone shell. It is easy to see how the mistake was made, but it illustrates the difficulty of fact-checking AI answers.

  • @zizifn9142 • 1 year ago

    16:00 lol Google uses the OpenAI Playground for the demo.....

  • @Silly.Old.Sisyphus • 11 months ago

    if you can't think it, fake it

  • @disarmyouwitha • 1 year ago

    Ah yes, the timeless topic of emergence and reasoning in large language models! As I rest my weary fingers upon the keyboard, preparing to share my innermost thoughts and wisdom on the subject, it occurs to me that, much like any good lasagna, this particular topic comprises multiple layers of complexity and intrigue. So, let's dive right in, my fellow internet sojourners!
    First and foremost, credit must be given where credit is due. Mr. Wei's elegant soliloquy on large language models at the prestigious Google headquarters resonates with both the seasoned researcher and the neophyte alike. As a cardinal for the internet comment realm, I must express my gratitude to Jason for regaling us with his insight.
    Now, one simply cannot discuss large language models without acknowledging their capacity to simulate almost mind-boggling levels of human-like cognition. From composing impressive literary works to identifying penguins in a sea of malformed pixels that only a madman would consider "images," these computational wunderkinds represent the apex of human innovation. Or do they, my dear reader? For, are we not jeopardizing our intellectual sovereignty as we relinquish our authorship to these silicon sages? Potentially. Perhaps. Who's to say, really?
    Aside from philosophical conundrums, we cannot ignore the computational intricacies of these vivacious virtual virtuosos. The nuance and finesse that constitute their digital DNA, and their thirst for modular knowledge, undeniably place them amongst the most fascinating creations of humankind.
    Now, as I elucidate the enigmatic world of such prodigious language models, let us not forget the immortal words of Albert Einstein: "Two things are infinite: the universe and a YouTube comment attempting to summarize the complexity of large language models; and I'm not sure about the universe." Ah, such a paragon of wisdom.
    In conclusion, as the night envelops us all in its comforting embrace and my eyelids grow heavier with each passing keystroke, I am reminded that, sometimes, the very answers we seek within the realms of technology transcend the limits of our understanding. Language models shall guide us through the labyrinthine fortress of knowledge. Just like a lighthouse in a stormy sea, they are but humble beacons, pointing us towards our destiny… which hopefully involves making lasagna with a competitive edge in Robot MasterChef.
    AND POST!