Learning to summarize from human feedback (Paper Explained)
- Published May 20, 2024
- #summarization #gpt3 #openai
Text summarization is a hard task, both in training and evaluation. Training is usually done by maximizing the log-likelihood of a human-generated reference summary, while evaluation is performed using overlap-based metrics like ROUGE. Both significantly undervalue the breadth and intricacies of language and the nature of the information contained in text summaries. This paper by OpenAI includes direct human feedback both in evaluation and - via reward model proxies - in training. The final model even outperforms single humans when judged by other humans, and it is an interesting application of reinforcement learning with humans in the loop.
OUTLINE:
0:00 - Intro & Overview
5:35 - Summarization as a Task
7:30 - Problems with the ROUGE Metric
10:10 - Training Supervised Models
12:30 - Main Results
16:40 - Including Human Feedback with Reward Models & RL
26:05 - The Unknown Effect of Better Data
28:30 - KL Constraint & Connection to Adversarial Examples
37:15 - More Results
39:30 - Understanding the Reward Model
41:50 - Limitations & Broader Impact
Paper: arxiv.org/abs/2009.01325
Blog: openai.com/blog/learning-to-s...
Code: github.com/openai/summarize-f...
Samples: openaipublic.blob.core.window...
My Video on GPT-3: • GPT-3: Language Models...
My Video on GPT-2: • GPT-2: Language Models...
Abstract:
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
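The reward-model step described in the abstract - train a model to predict which of two summaries a human preferred - boils down to a pairwise (Bradley-Terry-style) loss. A minimal sketch with made-up scores; the function name and toy numbers are illustrative, not the paper's code:

```python
import math

def reward_model_loss(score_preferred: float, score_rejected: float) -> float:
    """Pairwise preference loss: push the reward model to score the
    human-preferred summary above the rejected one.
    loss = -log(sigmoid(r_preferred - r_rejected))"""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy scores from a hypothetical reward model r(post, summary):
print(round(reward_model_loss(2.0, 0.5), 4))  # small loss: model already agrees with the human
print(round(reward_model_loss(0.5, 2.0), 4))  # large loss: model disagrees with the human
```

Only the difference between the two scores matters, which is why the reply further down the thread notes that "we are only interested in relative values."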
Authors: Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano
Links:
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
Parler: parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Cool to see how they managed to integrate this functionality two years later in ChatGPT.
32:58 A paper that honestly describes its failure modes. That's rare.
33:11 "Want change this dumbass shitty ass policy pls" Oh no. I think it's getting self-aware :X
41:30 That's a good idea. I think we can do some data augmentation by replacing words with synonyms as positive samples, or by feeding completely random text as negative samples.
Love the humorous way you explain the papers. Fun and insights in one go :-)
"Nobody plays DotA for just 3 hours" 😂
Unknown TOM secret likes this
it's true
Hopefully someone creates some compression algorithm or NN connection pruning algorithm to reduce the complexity of NNs so that they are less expensive to train, esp. for NLP.
hahaha very interesting paper. And, well drawing of the dataset symbol ;)
I think one possible interpretation is that networks, like humans, don't need to know HOW to do something to decide if it is good or bad. Having a reward model means you're making that assumption.
Hi Yannic, can you do a video on how OpenAI trained the Hide and Seek models?
Curious about 20:41, Step 2 - Train Reward Model: a post with two summaries judged by a human is fed to the reward model. Isn't that still human-involved, and just as costly as Step 1?
Pretty cool.
What is the advantage of using PPO instead of regular supervised learning? You can define the reward model and the KL term as a "loss function" and train in a supervised manner. So why RL?
Only 3 hours? :(
Hi there, nice summarization of a summarization paper. Can I ask what software you use to combine the paper and a whiteboard? I'm teaching online classes this semester, find your illustrations very clear, and would like to learn.
He explains it here
ua-cam.com/video/H3Bhlan0mE0/v-deo.html
@@herp_derpingson Thx a lot!
Been following your updates for quite some time now, gotta say I'm highly fascinated.
Any other YouTube channel or website like yours that reviews CS papers regularly?
It would be a great help. Thanks in advance.
i'm actually interested in other channels like this one or even blogs
There's Two Minute Papers, although it's not as thorough as this channel and he doesn't review papers as regularly.
Most similar is Henry AI Labs, check it out.
@@YannicKilcher Thanks! I'll check it out!
Cool
Thx :)
Can a similar KL term generally help against adversarial attacks in other models as well?
I'm sure it will help against some, but the field is in disagreement about those things
I would like to see the code =( In every video about this, people say it evaluates like a human, but I would like to know how to implement it in code.
❤
I wonder how this method compares to Snorkel (www.snorkel.org/)?
How do they get "Actual preference" at 39:46? Another group of real people evaluated the results?
At 13:55, they evaluate with humans as well
Hahaha! Didn't think inverse reinforcement learning could be used like this... Feels like everything depends on how it's framed. A different frame and you get adversarial examples...
I find the reinforcement learning part very confusing. If the reward is one number from the final generated summary, how is the policy formed? The GPT model is predicting many tokens, thereby producing many probabilities. I'm confused how many probabilities turn into one action...
I think it's just applying the REINFORCE loss instead of a supervised loss, the rest is the same
Yannic Kilcher thanks for the response! So are there thousands of potential actions for each possible token generated, or is the act of writing the summary one action? To make it more concrete: the REINFORCE loss, to my understanding, depends on each action at every time step. This is conceivable for a small number of actions, but in NLP there could be millions of potential actions if actions are tokens, which to my knowledge is hard for an RL algorithm to learn. I guess that's what is most confusing here.
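One way to make the "one scalar reward, many tokens" picture concrete: in REINFORCE, the whole sampled summary is the action sequence, it gets a single scalar reward, and that same scalar scales the log-probability of every token that was sampled. A minimal sketch (toy probabilities, not the paper's implementation):

```python
import math

def reinforce_loss(token_probs, reward):
    """REINFORCE surrogate loss for one sampled summary.
    token_probs: probability the policy assigned to each token it sampled.
    There is no per-token reward: the single sequence-level reward
    multiplies the total log-probability, so minimizing this loss
    increases the likelihood of the whole summary when reward > 0."""
    log_prob_sequence = sum(math.log(p) for p in token_probs)
    return -reward * log_prob_sequence

# Toy example: a 4-token summary the policy sampled, scored by the reward model.
probs = [0.9, 0.5, 0.7, 0.8]
print(round(reinforce_loss(probs, reward=1.5), 4))
```

The vocabulary can be huge, but at each step only the token actually sampled contributes its log-probability, so the credit assignment stays tractable.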
I'm confused on how the human feedback is incorporated into the reward loss function. It seems like the loss function doesn't incorporate the human feedback?
No, the human feedback is used to generate the dataset. We then train a neural network that tries to mimic human behaviour. So, human feedback is never directly used as reward in the loss function. We are only interested in relative values.
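Once the reward model stands in for the human, the KL constraint discussed in the video (28:30) modifies the reward the policy is actually trained on: the reward-model score minus a penalty for drifting away from the supervised model. A hedged, sequence-level sketch with made-up numbers (the paper applies the penalty per token; the function and values here are illustrative):

```python
def penalized_reward(rm_score, logprob_policy, logprob_supervised, beta=0.05):
    """KL-penalized reward, RLHF-style:
    R = r_RM(x, y) - beta * (log pi_RL(y|x) - log pi_SFT(y|x)).
    The penalty keeps the RL policy close to the supervised policy;
    without it, the policy can drift into degenerate text that the
    reward model mis-scores (the adversarial-examples connection)."""
    return rm_score - beta * (logprob_policy - logprob_supervised)

# Toy numbers: the RL policy assigns its output much higher likelihood
# than the supervised model does, so the reward is docked.
print(penalized_reward(rm_score=2.0, logprob_policy=-10.0, logprob_supervised=-14.0))  # 1.8
```

Larger beta means a tighter leash on the policy; beta = 0 recovers pure reward-model optimization.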
Comprehension is compression
19:20 homer
I clicked on the video and also opened a new tab to search something. The video started playing though, so I got confused real quick...
Where can I try the code and do I need a GPU? should I "rent" one?
Use Google Colab. If you are lucky you might get a GPU for free for a few hours.
Summarize
3 hours for DotA? Those are some rookie numbers.
3 hours... must be a Herald rank player...
"Help, my boyfriend keeps screaming and swearing at the computer screen, what should I do?"
tell him to first pick invoker mid
Thanks for revealing the hypocrisy of broader-impact statements. Great paper, but these sections are just so politically correct. Maybe they hired students from gender studies.