Stanford CS224N NLP with Deep Learning | 2023 | Lecture 11 - Natural Language Generation

  • Published 22 Nov 2024

COMMENTS • 10

  • @uraskarg710
    @uraskarg710 1 year ago +4

    Great lecture! Thanks!

  • @mshonle
    @mshonle 1 year ago +1

    Ah, interesting… I had wondered about the distinction between NLU and NLP and now it makes sense! Cheers!

  • @robertokalinovskyy7347
    @robertokalinovskyy7347 10 months ago +1

    Great lecture!

  • @420_gunna
    @420_gunna 7 months ago +1

    Great lecture! :)

  • @l501l501l
    @l501l501l 1 year ago +4

    Hi there, based on the schedule on your official course website, maybe this video should be Lecture 10, and "Prompting, Reinforcement Learning from Human Feedback" (by Jesse Mu) should be Lecture 11?

    • @banruo-tz7tx
      @banruo-tz7tx 2 months ago

      Yeah, it seems there is a mistake.

  • @p4r7h-v
    @p4r7h-v 5 months ago

    thanks!!

  • @mshonle
    @mshonle 1 year ago +2

    Can using dropout during inference be another way to set the temperature and perform sampling? E.g., if training had a 10% dropout rate, why not apply a similar random dropout during inference? The neurons which get zeroed out could depend on some distribution, such as selecting neurons evenly or favoring the earlier layers or targeting attention heads at specific layers. One might expect the token distributions would be more varied than what beam search alone could find.
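
    A minimal sketch of what this idea might look like in PyTorch (essentially "Monte Carlo dropout" applied at decode time). The model interface (returning next-token logits of shape [batch, vocab]) and the greedy loop are assumptions for illustration, not code from the lecture:

    ```python
    import torch

    def enable_mc_dropout(model: torch.nn.Module) -> None:
        # Put only the Dropout modules in train mode; everything else stays
        # in eval mode so layers like LayerNorm/BatchNorm behave normally.
        model.eval()
        for m in model.modules():
            if isinstance(m, torch.nn.Dropout):
                m.train()

    @torch.no_grad()
    def mc_dropout_decode(model, input_ids, steps=20):
        # Greedy decoding, made stochastic purely by dropout noise: each
        # call can return a different continuation even though we always
        # take the argmax token. Assumes model(tokens) -> (batch, vocab).
        enable_mc_dropout(model)
        tokens = input_ids
        for _ in range(steps):
            logits = model(tokens)
            next_tok = logits.argmax(dim=-1, keepdim=True)
            tokens = torch.cat([tokens, next_tok], dim=-1)
        return tokens
    ```

    Running the decode loop several times and comparing the outputs would show whether the dropout noise actually diversifies the token distributions, as the comment speculates.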

  • @JQ0004
    @JQ0004 1 year ago +1

    The TA seems to have attended Ng's classes a lot. He seems to imitate "ok cool" a lot. 😀

  • @annawilson3824
    @annawilson3824 4 months ago

    1:02:05