Lucas Beyer (Google DeepMind) - Convergence of Vision & Language

  • Published 17 Dec 2024

COMMENTS • 12

  • @ajohny8954 • 1 year ago +5

    Your podcast has the potential to fill a very cool niche

  • @mkamp • 1 year ago +1

    Wow. What a great talk. Substantial, yet easy to follow. And that seems to be characteristic of the papers he has co-authored as well. Great research, but also great education. 🎉

  • @DeepFindr • 1 year ago +1

    Nice talk and nice guy :)

  • @HeHo-n6p • 1 year ago +1

    Does anyone have more insight into what happens during the phase where the loss does not improve but it is still worthwhile to keep training (i.e., have patience)? It seems to me there is still significant weight reorganization taking place that does not directly yield an improvement but sets the stage for the next learning-rate phase.

    • @TheGenerationGapPodcast • 11 months ago

      A lower local-minimum region where the terrain is flat and the derivatives stay close to zero for a long time.
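
A minimal, hypothetical Python sketch of the "patience" idea discussed in this thread: hold the learning rate through a flat-loss window and only reduce it once no improvement beyond a tolerance has appeared for a fixed number of steps. The function name and defaults (`update_lr`, `patience_steps`, `factor`, `tol`) are illustrative assumptions, not anything stated in the talk.

```python
def update_lr(lr: float, losses: list[float],
              patience_steps: int = 1000,
              factor: float = 0.1,
              tol: float = 1e-4) -> float:
    """Reduce lr only after patience_steps steps without real improvement.

    Illustrative sketch, not from the talk: the loss can stay flat for many
    steps while weights still reorganize, so the LR is held until the flat
    window has lasted long enough, then cut to start the next LR phase.
    """
    if len(losses) <= patience_steps:
        return lr  # not enough history yet: keep training patiently
    best_recent = min(losses[-patience_steps:])
    best_before = min(losses[:-patience_steps])
    if best_before - best_recent < tol:
        return lr * factor  # plateau exhausted the patience window
    return lr

# Usage inside a training loop, once per step after recording the loss:
#   losses.append(step_loss)
#   lr = update_lr(lr, losses)
```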

  • @gabehesch1 • 1 year ago +1

    Fabulous talk! Are these slides available to view or download?

  • @derekcarday • 1 year ago +1

    The entire time I was worried about whether Lucas would get to where he needed to be on time.

  • @icriou • 1 year ago

    Would love to read the slides, please!