Very underrated video, literally the best explanation of actor/critic that I've seen. Good job, and thanks!
Hey, thanks for watching! These are fun to make and I learn a lot. I think my understanding has come a long way since I made this video, so I'll have to make another eventually.
This is rocket science to me lol, but I get value from your videos anyway. I learn critical thinking from you. I think I understand and like the general idea. I hope you won't invent Skynet or something in the future. :D
Haha, thanks! If I ever invent skynet, I hope it's a NICE skynet.
Hey, awesome video!! I had a question regarding how the model is choosing the averages and standard deviations. It is supposed to be continuous, so how is the model choosing a continuous output for the two?
Thanks for watching! I'm not sure I understand the question, but I think it's actually easier to make a neural network which outputs in a continuous range instead of a discrete range (like categorization). After the actor produces the mean "mu" and standard deviation "sigma", it samples "epsilon" from a standard normal distribution and computes mu + sigma * epsilon; this is called the "reparameterization trick." sassafras13.github.io/images/2020-05-25-ReparamTrick-eqn2.png
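A minimal NumPy sketch of the reparameterization trick described above (the values of mu and sigma are made-up placeholders standing in for what an actor network would output, not taken from any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical actor outputs for one continuous action dimension.
mu = 0.5      # mean predicted by the network
sigma = 0.2   # standard deviation predicted by the network

# Reparameterization trick: draw noise from a fixed standard normal,
# then shift and scale it. Because epsilon does not depend on the
# network's parameters, gradients can flow through mu and sigma.
epsilon = rng.standard_normal()
action = mu + sigma * epsilon
```

Averaged over many samples, the actions cluster around mu with spread sigma, so the network controls the distribution even though the randomness comes from outside it.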
Thanks! I think that answers my question. So you essentially take the continuous outputs of your network as the action itself, I presume, instead of, like in categorisation, choosing the option with the highest probability?
@aprameyandesikan3648 Yes, exactly!
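The contrast in this exchange can be sketched side by side (the probability vector and the mu/sigma values are made-up illustrations, not outputs of a real network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete case: the network outputs one probability per action,
# and the action with the highest probability is chosen.
probs = np.array([0.1, 0.7, 0.2])  # hypothetical action probabilities
discrete_action = int(np.argmax(probs))  # -> index 1

# Continuous case: the network outputs mu and sigma, and the action
# is a real number sampled from the resulting normal distribution.
mu, sigma = 0.5, 0.2               # hypothetical network outputs
continuous_action = mu + sigma * rng.standard_normal()
```

So in the continuous setting there is no argmax step at all; the sampled real number is used directly as the action.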
Awesome, thanks for taking your time to answer my questions! Keep up with the videos!