Learning to summarize from human feedback (Paper Explained)

  • Published 20 May 2024
  • #summarization #gpt3 #openai
    Text summarization is a hard task, both in training and evaluation. Training is usually done by maximizing the log-likelihood of a human-generated reference summary, while evaluation is performed using overlap-based metrics like ROUGE. Both significantly undervalue the breadth and intricacies of language and the nature of the information contained in text summaries. This paper by OpenAI incorporates direct human feedback both in evaluation and - via reward model proxies - in training. The final model even outperforms single humans when judged by other humans, and it is an interesting application of reinforcement learning with humans in the loop.
    OUTLINE:
    0:00 - Intro & Overview
    5:35 - Summarization as a Task
    7:30 - Problems with the ROUGE Metric
    10:10 - Training Supervised Models
    12:30 - Main Results
    16:40 - Including Human Feedback with Reward Models & RL
    26:05 - The Unknown Effect of Better Data
    28:30 - KL Constraint & Connection to Adversarial Examples
    37:15 - More Results
    39:30 - Understanding the Reward Model
    41:50 - Limitations & Broader Impact
    Paper: arxiv.org/abs/2009.01325
    Blog: openai.com/blog/learning-to-s...
    Code: github.com/openai/summarize-f...
    Samples: openaipublic.blob.core.window...
    My Video on GPT-3: • GPT-3: Language Models...
    My Video on GPT-2: • GPT-2: Language Models...
    Abstract:
    As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
    Authors: Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano
    Links:
    YouTube: / yannickilcher
    Twitter: / ykilcher
    Discord: / discord
    BitChute: www.bitchute.com/channel/yann...
    Minds: www.minds.com/ykilcher
    Parler: parler.com/profile/YannicKilcher
    LinkedIn: / yannic-kilcher-488534136
    If you want to support me, the best thing to do is to share out the content :)
    If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
    SubscribeStar: www.subscribestar.com/yannick...
    Patreon: / yannickilcher
    Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
    Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
    Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
    Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
  • Science & Technology

COMMENTS • 50

  • @wenhanzhou5826
    @wenhanzhou5826 1 year ago +9

    Cool to see how they managed to integrate this functionality two years later in ChatGPT.

  • @herp_derpingson
    @herp_derpingson 3 years ago +5

    32:58 A paper that honestly describes its failure modes. That's rare.
    33:11 "Want change this dumbass shitty ass policy pls" Oh no. I think it is getting self-aware :X
    41:30 That's a good idea. I think we could do some data augmentation by replacing words with synonyms as positive samples, or by feeding completely random text as negative samples.

  • @wernerbogula6491
    @wernerbogula6491 3 years ago +3

    Love the humorous way you explain the papers. Fun and insights in one go :-)

  • @dmitrysamoylenko6775
    @dmitrysamoylenko6775 3 years ago +72

    "Nobody plays DotA for just 3 hours" 😂

  • @trevormartin1944
    @trevormartin1944 3 years ago +3

    Hopefully someone creates some compression algorithm or NN connection pruning algorithm to reduce the complexity of NNs so that they are less expensive to train, esp. for NLP.

  • @yeyaozhang4930
    @yeyaozhang4930 3 years ago +3

    Hahaha, very interesting paper. And nice drawing of the dataset symbol ;)

  • @t.swaggit629
    @t.swaggit629 3 years ago +2

    I think one possible interpretation is that networks, like humans, don't need to know HOW to do something to decide if it is good or bad. Having a reward model means you're making that assumption.

  • @mns4183
    @mns4183 3 years ago +5

    Hi Yannic, can you do a video on how OpenAI trained the Hide and Seek models?

  • @chaower6958
    @chaower6958 3 years ago

    Curious about 20:41, Step 2 - Train Reward Model: at that point a post with two summaries judged by a human is fed to the reward model. Doesn't that still involve humans, and isn't it still as costly as the last step of Step 1?

  • @trevormartin1944
    @trevormartin1944 3 years ago +1

    Pretty cool.

  • @pvlr1788
    @pvlr1788 1 year ago

    What is the advantage of using PPO instead of regular supervised learning? You can define the reward model and the KL term as a "loss function" and train in a supervised manner. So why RL?

  • @haditime1665
    @haditime1665 3 years ago +12

    Only 3 hours? :(

  • @xuelinli8348
    @xuelinli8348 3 years ago +5

    Hi there, nice summarization of a summarization paper. Can I ask what software you are using that combines the paper and a whiteboard? I'm teaching online classes this semester, find your illustrations very clear, and would like to learn.

    • @herp_derpingson
      @herp_derpingson 3 years ago +4

      He explains it here
      ua-cam.com/video/H3Bhlan0mE0/v-deo.html

    • @xuelinli8348
      @xuelinli8348 3 years ago +1

      @@herp_derpingson Thx a lot!

  • @nishadlost1531
    @nishadlost1531 3 years ago +5

    Been following your updates for quite some time now, gotta say I'm highly fascinated..
    Any other YouTube channel or website like yours that reviews CS papers regularly?
    Would be a great help.. thanks in advance

    • @AvastarBin
      @AvastarBin 3 years ago +2

      I'm actually interested in other channels like this one, or even blogs

    • @AvastarBin
      @AvastarBin 3 years ago +2

      There's Two Minute Papers, although it's not as thorough as this channel and he doesn't review papers as regularly

    • @YannicKilcher
      @YannicKilcher  3 years ago +2

      Most similar is Henry AI Labs, check it out

    • @AvastarBin
      @AvastarBin 3 years ago

      @@YannicKilcher Thanks! I'll check it out!

  • @rahulpurohit3004
    @rahulpurohit3004 3 years ago +1

    Cool

  • @tinyentropy
    @tinyentropy 3 years ago

    Thx :)

  • @Kerrosene
    @Kerrosene 3 years ago +1

    Can a similar KL term generally help against adversarial attacks in other models as well?

    • @YannicKilcher
      @YannicKilcher  3 years ago

      I'm sure it will help against some, but the field is in disagreement about those things
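      For context, the KL term discussed here is the one the paper adds to the reward during RL fine-tuning: it keeps the policy close to the supervised fine-tuned model so the policy cannot drift into outputs that merely exploit the reward model, which is the connection to adversarial examples made in the video. A minimal sketch, assuming per-token log-probs are already available and using an illustrative beta (names are mine, not from the released code):

      ```python
      import torch

      def kl_shaped_reward(rm_score, logp_policy, logp_sft, beta=0.05):
          # rm_score:    (batch,)   reward-model score for each sampled summary
          # logp_policy: (batch, T) per-token log-probs under the RL policy
          # logp_sft:    (batch, T) per-token log-probs under the supervised model
          #
          # Implements R(x, y) = r(x, y) - beta * log[ pi_RL(y|x) / pi_SFT(y|x) ];
          # summing the per-token log-ratios gives the sequence-level log-ratio.
          log_ratio = (logp_policy - logp_sft).sum(dim=1)
          return rm_score - beta * log_ratio

      # Toy usage with made-up numbers.
      rm_score = torch.tensor([1.3])
      logp_rl  = torch.tensor([[-0.5, -0.7, -0.3]])
      logp_sft = torch.tensor([[-0.6, -0.6, -0.4]])
      print(kl_shaped_reward(rm_score, logp_rl, logp_sft))  # slightly below 1.3
      ```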

  • @paulowiz
    @paulowiz 3 years ago

    I would like to see how to do the code =( In every video about this, people say that it evaluates like a human, but I would like to know how to implement it in code

  • @priancho
    @priancho 7 months ago

  • @LouisChiaki
    @LouisChiaki 3 years ago

    I wonder how this method compares to Snorkel (www.snorkel.org/)?

  • @paveltikhonov8780
    @paveltikhonov8780 3 years ago +2

    How do they get "Actual preference" at 39:46? Did another group of real people evaluate the results?

  • @matthewtang1489
    @matthewtang1489 3 years ago +1

    Hahaha! Didn't think inverse reinforcement learning could be used like this... Feels like everything depends on how it's framed. A different framing and you get adversarial examples...

  • @pitbbe
    @pitbbe 3 years ago +2

    I find the reinforcement learning part very confusing. If the reward is one number for the final generated summary, how is the policy formed? The GPT model predicts many tokens, thereby producing many probabilities. I'm confused how many probabilities turn into one action...

    • @YannicKilcher
      @YannicKilcher  3 years ago

      I think it's just applying the REINFORCE loss instead of a supervised loss, the rest is the same

    • @pitbbe
      @pitbbe 3 years ago

      Yannic Kilcher thanks for the response! So are there thousands of potential actions for each token generated? Or is the act of writing the summary one action? To make it more concrete: the REINFORCE loss, as I understand it, depends on each action at every time step. This is conceivable for a smaller number of actions, but in NLP there could be millions of potential actions if actions are tokens, which to my knowledge is hard for an RL algorithm to learn. I guess that's what is most confusing here.
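      A minimal sketch of the REINFORCE view mentioned above (the paper itself uses PPO, which additionally clips the policy update): each generated token is one action over the full vocabulary, but the single sequence-level reward from the reward model simply scales the log-probability of every sampled token, using the same softmax machinery as ordinary language-model training. Names and numbers here are illustrative, not from the paper's code:

      ```python
      import torch

      def reinforce_loss(token_logprobs, reward, baseline=0.0):
          # token_logprobs: (T,) log-probs of the tokens the policy actually sampled
          # reward:         scalar reward-model score for the whole summary
          # baseline:       optional value estimate used to reduce variance
          #
          # Every sampled token shares the same sequence-level reward: the
          # gradient scales each token's log-prob by (reward - baseline).
          return -(reward - baseline) * token_logprobs.sum()

      # Toy usage: one scalar reward still produces a gradient for every token.
      logps = torch.tensor([-0.9, -0.4, -1.6], requires_grad=True)
      loss = reinforce_loss(logps, reward=0.8)
      loss.backward()
      ```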

  • @nolan8377
    @nolan8377 3 years ago +2

    I'm confused about how the human feedback is incorporated into the reward loss function. It seems like the loss function doesn't incorporate the human feedback?

    • @herp_derpingson
      @herp_derpingson 3 years ago +1

      No, the human feedback is used to generate the dataset. We then train a neural network that tries to mimic human behaviour. So, human feedback is never directly used as reward in the loss function. We are only interested in relative values.
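      To make the "relative values" point concrete: the reward model in the paper is trained on pairs of summaries for the same post, together with a label saying which one the labeler preferred, and is pushed to score the preferred summary higher. A minimal sketch of that comparison loss, assuming a reward model that already outputs one scalar per summary (function and variable names are mine, not from the released repo):

      ```python
      import torch
      import torch.nn.functional as F

      def comparison_loss(score_preferred, score_rejected):
          # score_*: (batch,) scalar reward-model outputs for the summary the
          # labeler picked and for the one they rejected. Only the difference
          # between the two scores matters, which is why only relative reward
          # values are meaningful.
          return -F.logsigmoid(score_preferred - score_rejected).mean()

      # Toy usage: the loss shrinks as the preferred summary is scored higher.
      good = torch.tensor([2.0])
      bad = torch.tensor([0.5])
      print(comparison_loss(good, bad))  # ~0.20
      ```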

  • @johnstifter
    @johnstifter 3 years ago +1

    Comprehension is compression

  • @dimitriognibene8945
    @dimitriognibene8945 3 years ago +1

    19:20 homer

  • @jonatan01i
    @jonatan01i 3 years ago

    I clicked on the video and also opened a new tab to search for something. The video started playing though, and I got confused real quick...

  • @tech4028
    @tech4028 3 years ago +1

    Where can I try the code, and do I need a GPU? Should I "rent" one?

    • @herp_derpingson
      @herp_derpingson 3 years ago +2

      Use Google Colab. If you are lucky you might get a GPU for free for a few hours.

  • @C0R0V008
    @C0R0V008 1 year ago

    Summarize

  • @chilliking3424
    @chilliking3424 3 years ago +1

    3 hours for DotA? Those are some rookie numbers

  • @zhenjing94
    @zhenjing94 3 years ago

    3 hours.. must be a Herald rank player..

  • @konghong3885
    @konghong3885 3 years ago +3

    "help, my boy friend kept screaming and swearing to the computer screen, what should I do"

  • @pepe_reeze9320
    @pepe_reeze9320 3 years ago +8

    Thanks for revealing the hypocrisy of broader-impact statements. Great paper, but these sections are just so politically correct. Maybe they hired students from gender studies.