Probably the best explanation of PPO ever.
Agreed!
Probably? THIS IS THE BEST (personally, at least).
Thank you! Your explanation of PPO is SO explicit.
I love your clear examples and how you reduce them to the essentials.
The most spectacular explanation of PPO I have ever seen. Really, really liked it!
Your effort to explain this complicated concept in an easy and very clear way from scratch, with visual examples, is just beautiful! Thanks for sharing your knowledge.
This is the best explanation for PPO I have seen; it's very intuitive.
Sorry bae, can't talk right now, Luis dropped another masterpiece and I had to watch it first... :)
🤣 LOL! that’s a good one
Loved it ❤ Need to rewatch it a few more times now, but it's getting much, much clearer thanks to you!
Thank you very much for this simple, understandable and at the same time elegantly narrated video... Great work!
The explanation of the surrogate function was so vivid.
The best and simplest explanation of RL and PPO 👏👏👏
Reading the book is really hard. It's hard to control the studying environment, and it's hard to sustain enough focus with the necessary momentum to learn a concept. There are so many interruptions that can interfere with the process, but this concise and powerful video saved a lot of time in learning. Thanks. ❤
Thank you so much! I'm currently writing my master's dissertation proposal and have just been exposed to PPO. After reading the OpenAI white paper, the images and explanations in this video bridged the gap with Actor-Critic, and now I completely understand how it's all related! It's so satisfying to be able to understand this. Thank you so much!
Would you be kind enough to offer some insights into what direction you are taking in your dissertation? I'm also in the same boat and just at a loss as to where, how, or what to specifically focus on.
Love your explanation. It's the best PPO explanation I've found so far.
You are a genius, man... The way you explain things in an easy-to-understand way is mind-blowing. Love you a lot :)
Thank you very, very much. I could finally understand. I owe you a lot, not just for this video but for all of them. Wonderful.
Gemini: This video is about Proximal Policy Optimization (PPO) and its applications in training large language models. The speaker, Luis Serrano, starts the video by explaining what PPO is and why it is important in reinforcement learning. Then, he dives into the details of PPO with a grid world example.
Here are the key points of the video:
* Proximal Policy Optimization (PPO) is a method commonly used in reinforcement learning. [1]
* It is especially important for training large language models. [1]
* In reinforcement learning, an agent learns through trial and error in an environment. The agent receives rewards for good actions and penalties for bad actions. [1]
* The goal is to train the agent to take actions that maximize the total reward it receives. [1]
* PPO uses two neural networks: a value network and a policy network. [2]
* The value network estimates the long-term value of being in a particular state. [2]
* The policy network determines the action the agent should take in a given state. [2]
* PPO trains the value network and policy network simultaneously. [2]
* The speaker uses a grid world example to illustrate the concepts of states, actions, values, and policy. [2,3,4,5]
* In the grid world example, the agent is a small orange ball that moves around a grid. [2]
* The goal of the agent is to get as many points as possible. [2]
* The agent earns points by landing on squares with money and must avoid squares with dragons. [2]
* The speaker explains how to calculate the value of each state in the grid world. [4]
* The value of a state is the maximum expected reward the agent can get from that state. [4]
* The speaker also explains how to determine the best policy (i.e., the best action to take) for each state in the grid world. [5]
* Once the value and policy are determined for all states, the agent can start acting in the environment. [5]
* PPO uses a clipped surrogate objective function to train the policy network. [8,9]
* This function helps to ensure that the policy updates are stable and do not diverge too much. [8,9]
Overall, this video provides a clear and concise explanation of Proximal Policy Optimization (PPO) with a focus on its application in training large language models.
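To make the two-network idea in the summary concrete, here is a minimal sketch of a value network and a policy network for a small grid world, assuming a PyTorch-style setup; the layer sizes and names are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical grid-world sizes: each state is a one-hot vector over the grid cells,
# and the policy chooses among 4 moves (up, down, left, right).
NUM_STATES, NUM_ACTIONS = 25, 4

# Value network: estimates the long-term value of being in a state.
value_net = nn.Sequential(
    nn.Linear(NUM_STATES, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Policy network: outputs a probability distribution over actions for a state.
policy_net = nn.Sequential(
    nn.Linear(NUM_STATES, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS), nn.Softmax(dim=-1),
)

state = torch.zeros(NUM_STATES)
state[12] = 1.0                      # one-hot encoding of the current cell
print(value_net(state))              # estimated value of this state
print(policy_net(state))             # probabilities of the 4 actions
```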
I love your clear teaching, which is both easy to understand and in-depth. I'll recommend it to friends, and I'm hoping for the next RLHF video!
Really good explanation! I immediately understood PPO after watching.
Looking forward to this.
Sincerely impressed, your explanation was amazing.
Hi Luis, thanks for this great video. However, @33:10 I believe the objective here is to maximize the surrogate objective function, NOT to "make it as small as possible" as you said. When the advantage (At) is positive we need to increase the probability of the current action (by maximizing the surrogate objective function), and when the advantage (At) is negative we need to decrease the probability of the current action, but also through maximizing the surrogate objective function.
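For reference, the clipped surrogate objective as written in the PPO paper is indeed maximized, with $r_t(\theta)$ the probability ratio and $\hat{A}_t$ the advantage estimate:

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\;\operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$$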
Thank you for your time and effort to prepare this useful video and explain it.
Great courses!!! I'm working on o1-style training, and this helps a lot.
You're the best!!! Absolutely love all your ML vids!
I would like to say thank you for the wonderful video. I want to learn reinforcement learning for my future studies in the field of robotics. I have seen that you only have 4 videos about RL, and I am hungry for more. I find your videos easier to understand because you explain things well. Please add more RL videos. Thank you 🙏
Thank you for the suggestion! Definitely! Any ideas on what topics in RL to cover?
@SerranoAcademy More videos in the field of robotics, please. Thank you. Could you also guide me on how to approach the study of reinforcement learning?
Extraordinarily lucid. Thanks!
Thank you for your video, which provided a great explanation of PPO.❤
Excellent explanation, professor!!
Serrano, brother... God bless you...
Crystal clear!
great example and clear explanation!
Since I am familiar with RL concepts, it was boring at the beginning, but it finished awesome. Thanks!
I think that in the case of the policy loss you want to maximize it instead of minimizing it, since a positive gain means you need to increase the weights that contribute to increasing the probability of the considered action, and therefore you should do gradient ascent on the weights.
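In practice the two views are reconciled by minimizing the negative of the surrogate objective, which is exactly gradient ascent on the objective itself. A minimal sketch, assuming PyTorch, with made-up toy numbers for the ratio and advantage:

```python
import torch

# Toy, made-up values for a few timesteps.
ratio = torch.tensor([1.1, 0.9, 1.4], requires_grad=True)   # pi_new / pi_old
advantage = torch.tensor([2.0, -1.0, 0.5])
eps = 0.2

# Clipped surrogate objective per timestep, then averaged.
surrogate = torch.min(ratio * advantage,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * advantage)

# Optimizers minimize, so we minimize the NEGATIVE of the objective:
# this performs gradient ascent on the surrogate itself.
loss = -surrogate.mean()
loss.backward()
print(loss.item(), ratio.grad)
```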
Appreciate the great explanation. I have a question regarding the clipping formula at 36:42. You have used the "min" function. For example, if the ratio is 0.4 and epsilon is 0.3, we would expect to get 0.7 in this scenario. However, the formula you introduced here returns 0.4. Shouldn't the formula be clipped_f(x) = max(1 - epsilon, min(f(x), 1 + epsilon))? Am I missing anything?
I was thinking the same thing before. But when I looked up the more detailed explanation in the paper, it says that restricting the value to that range only happens inside the clip function. At the end, we again take the minimum of the unclipped term (on the left side) and the clipped result.
The paper puts it like this: "Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse"
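A quick numeric check of that paragraph, using the numbers from the question above (ratio 0.4, epsilon 0.3; the advantage values are made up):

```python
def ppo_term(ratio, advantage, eps=0.3):
    """min of the unclipped and clipped surrogate terms, as described in the paper."""
    clipped_ratio = max(1 - eps, min(ratio, 1 + eps))    # clip(ratio, 1 - eps, 1 + eps)
    return min(ratio * advantage, clipped_ratio * advantage)

# ratio = 0.4, so the clipped ratio is 0.7, but the final min can still pick 0.4:
print(ppo_term(0.4, advantage=1.0))    # 0.4  -> unclipped term wins (pessimistic bound)
print(ppo_term(0.4, advantage=-1.0))   # -0.7 -> clipped term wins (objective made worse)
```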
You are a genius!!
Am I correct in pointing out that the loss is the negative of this expectation? A loss is always something we want to decrease, so this is the gain without the minus sign?
Excellent session, Luis. Can we have a similar one for DPO as well?
Thank you! Yes, after this is RLHF and then DPO
Can't wait...
Can I ask: when training the value NN, why not optimize the prediction for each state separately, instead of optimizing the prediction over each path?
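For what it's worth, in standard implementations the two views coincide: the path is only used to build a return target for each visited state, and the value loss is then a per-state squared error averaged over the states collected along the path. A minimal sketch with made-up numbers:

```python
# Rewards observed along one path (made-up numbers) and a discount factor.
rewards = [0.0, 0.0, 5.0, -1.0]
gamma = 0.9

# Discounted return target for each state on the path, computed backwards.
returns, running = [], 0.0
for r in reversed(rewards):
    running = r + gamma * running
    returns.insert(0, running)

# Hypothetical current value-network outputs for those same states.
values = [2.0, 3.0, 4.0, 0.5]

# Value loss: an ordinary per-state squared error, averaged over the path.
value_loss = sum((v - g) ** 2 for v, g in zip(values, returns)) / len(values)
print(returns)      # [3.321, 3.69, 4.1, -1.0]
print(value_loss)
```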
I don't get it: you increase the probability just because you underestimated the gain by 10? Then what if in other directions you underestimated by 100, 1,000, or 10,000? Should you decrease the probability?
Might be a novel approach for robotics: represent paths as Gaussian splats and use spherical harmonics as the "recommended" directions within those splats to reach a goal/endpoint.
Thank you, Luis! One thing I don't catch, though: why decrease the policy probability when the value needs to go down, and increase it when the value goes up? I can't see the coupling between the trend of the value and that of the policy.
Great question! Yeah, I also found that part a bit mysterious. My guess is that since we're training both the value and policy NNs at the same time, they kind of capture similar information. So if the value NN underestimated the value of a state, then it's likely that the policy NN also underestimates the probabilities of getting to that state. So as we increase the value estimate, we should also increase the probability estimate.
But if you have any other thoughts lemme know, I’m still trying to wrap my head around it…
Luis, thank you for your incredible work popularizing complex things. But why on earth does the value function share the same parameters theta as the policy function?! Can you confirm that? And if that is the case, why?
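For context, one common (though not universal) design is a single network body with two heads, so the policy and value really do share parameters theta; the PPO paper's combined objective adds a value-error term precisely for that shared-parameter case. A minimal sketch, assuming PyTorch, with made-up names and sizes:

```python
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    """One trunk with two heads, so the policy and value share parameters theta."""
    def __init__(self, num_states=25, num_actions=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(num_states, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_actions)   # action probabilities
        self.value_head = nn.Linear(hidden, 1)               # state value

    def forward(self, state):
        h = self.trunk(state)
        return torch.softmax(self.policy_head(h), dim=-1), self.value_head(h)

model = SharedActorCritic()
probs, value = model(torch.zeros(25))
print(probs.shape, value.shape)   # torch.Size([4]) torch.Size([1])
```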
L_policy^CLIP seems to be incorrect. What is rho? The min of clip() is always the lower bound. Can you give a reference?
Let's goooo!
Yayyy!!! :)
I am wondering what level of expertise and knowledge one must have to be able to notice that impulse must be taken into account when the probability is adjusted, omg :0 Even if I lived 2 lives spanning 200 years, I would never realize that impulse must be taken into account.
THE BEST on PPO.
It was just so clear. 😃
When I'm training the policy network, how do I know what the value for the scenario was in a previous iteration? My network has weights and biases that calculate the probability of a given action in a state. How do I get the probability for the same action in the same state from a previous iteration? I would have to use the weights and biases from before I updated them.
Also, how would this work for iteration 1? I initialize random weights and biases; since it's the first iteration, what is the previous iteration's result?
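In typical PPO implementations this is handled by storing the probabilities (or log-probabilities) of the taken actions at rollout time, before any update, so the old weights themselves are never needed; on the very first iteration the "old" policy is just the freshly initialized network, so the ratio starts out at 1. A minimal sketch, assuming PyTorch, with made-up names and sizes:

```python
import torch
import torch.nn as nn

# A stand-in policy network (made-up sizes) that outputs action probabilities.
policy_net = nn.Sequential(nn.Linear(25, 64), nn.ReLU(),
                           nn.Linear(64, 4), nn.Softmax(dim=-1))

def collect_old_probs(states, actions):
    """Record the probabilities the CURRENT weights assign to the taken actions."""
    with torch.no_grad():                                  # frozen "old" probabilities
        probs = policy_net(states)                         # shape: [batch, num_actions]
        return probs.gather(1, actions.unsqueeze(1)).squeeze(1)

states = torch.rand(3, 25)                 # 3 visited states (made-up encoding)
actions = torch.tensor([0, 2, 1])          # actions taken in those states
old_probs = collect_old_probs(states, actions)

# Later, during the update epochs, recompute with the (now changing) weights:
new_probs = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
ratio = new_probs / old_probs              # equals 1.0 before any weight update
print(ratio)
```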
When explaining the formula with mathematical notation, where exactly is the notation for summing the values over each step, just as you summed them a little earlier, before explaining the math formula?
Great question! You mean in the surrogate objective function? Yes, I skimmed over that part, but at the end, when you see the expected value sign, it means we're looking at the average of the function over different actions (those taken along a path).
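In symbols, the hat over the expectation stands for an empirical average over the timesteps collected along the path(s), roughly:

$$\hat{\mathbb{E}}_t\big[f(s_t, a_t)\big] \;\approx\; \frac{1}{T}\sum_{t=0}^{T-1} f(s_t, a_t)$$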
This was very helpful
loved it! :)
Wow, this video is so awesome! Your book link doesn't seem to work in the description :)
Thank you, and thanks so much for pointing it out! Just fixed it.
Is clipping done to avoid vanishing/exploding gradients?
Great question, yes absolutely! If the gradient is too big or too small, then that messes up the training, and that's why we clip it to something in the middle.
@SerranoAcademy It seems to me that the lower bound of the probability ratio is not determined by the clipping function, since the min function will take the minimum of the probability ratio and the result of the clipping function. So if epsilon is 0.3 and the probability ratio is 0.2, the lower bound of the clipping function will be 1 - 0.3 = 0.7, and min(0.2, 0.7) = 0.2.
THE BEST !!!!!!!!!
The musical inserts between concepts are too loud and too long in this ever-decreasing attention span world. The presentation and material are amazing. Thank you.
Weird comment