Monte Carlo in Reinforcement Learning
- Published 19 Nov 2023
- Let's talk about how Monte Carlo methods can be used in reinforcement learning
RESOURCES
[1] Other Monte Carlo Video: • Running Simulations as...
PLAYLISTS FROM MY CHANNEL
⭕ Reinforcement Learning: • Reinforcement Learning...
Natural Language Processing: • Natural Language Proce...
⭕ Transformers from Scratch: • Natural Language Proce...
⭕ ChatGPT Playlist: • ChatGPT
⭕ Convolutional Neural Networks: • Convolution Neural Net...
⭕ The Math You Should Know: • The Math You Should Know
⭕ Probability Theory for Machine Learning: • Probability Theory for...
⭕ Coding Machine Learning: • Code Machine Learning
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStati...
📕 Bayesian Statistics: imp.i384100.net/BayesianStati...
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow
One important reason to use MC methods is for cases where we do not have access to the Markov decision process (MDP). The example in this video does have a known MDP, so it could also be solved using the Bellman equations.
Loved the way decision making of a robot using Q table was explained in this video.
Glad the explanation was good. Thanks for the comment :)
For Quiz Time 1 at 3:47, shouldn't the answer be B: 0.5 sq units?
I think the entire premise is that you know the area of one region, you know the ratio of balls dropped in the two regions, and the ratio of balls dropped equals the ratio of areas. Therefore you can use this information to determine the unknown area.
In S1 (8:08) the greedy action is to go up, actually...
There is no cell above s1 where it starts; there are only two options, right and down.
@@AshmaBhagad Then why is there a payoff value for up?
I would use Monte Carlo to predict whether there will be food at the office tomorrow, because it's so unpredictable when I have to bring in food lol
Answer for Quiz 2: Option B, Frank was updating Q-values based on observed rewards from simulated episodes.
0.5 sq units.
The area of the square = 1 × 1 = 1 sq unit.
Half of the balls dropped fell into the diamond, which means the diamond occupies half the area of the square (Area of diamond = (1/2) * 1 sq unit = 0.5 sq unit).
Ding ding ding. That’s correct :)
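For anyone who wants to try the ball-dropping estimate themselves, here is a minimal Python sketch. I'm assuming the shape from the video is the diamond |x − 0.5| + |y − 0.5| ≤ 0.5 inscribed in the unit square (true area 0.5):

```python
import random

def estimate_diamond_area(n_drops=100_000, seed=0):
    """Estimate the area of the diamond inscribed in the unit square
    by dropping balls uniformly at random and counting hits."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_drops):
        x, y = rng.random(), rng.random()
        # the inscribed diamond: |x - 0.5| + |y - 0.5| <= 0.5
        if abs(x - 0.5) + abs(y - 0.5) <= 0.5:
            inside += 1
    # area of the square (1) times the fraction of balls inside the diamond
    return inside / n_drops

print(estimate_diamond_area())  # close to 0.5
```

The more balls you drop, the closer the fraction gets to the true area ratio; that convergence is the whole idea behind Monte Carlo estimation.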
@@CodeEmporium
Question 2)
B. Frank was updating Q-values based on observed rewards from simulation.
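For anyone curious what "updating Q-values based on observed rewards from simulated episodes" looks like concretely, here is a minimal first-visit Monte Carlo sketch. The episode format and state/action names are hypothetical, not taken from the video:

```python
from collections import defaultdict

def mc_q_update(episodes, gamma=0.9):
    """First-visit Monte Carlo estimation of Q(s, a).
    Each episode is a list of (state, action, reward) tuples."""
    q = defaultdict(float)
    counts = defaultdict(int)
    for episode in episodes:
        # walk backwards to accumulate discounted returns
        g = 0.0
        returns = []
        for state, action, reward in reversed(episode):
            g = reward + gamma * g
            returns.append((state, action, g))
        # update each (state, action) on its first visit only
        seen = set()
        for state, action, g in reversed(returns):
            if (state, action) not in seen:
                seen.add((state, action))
                counts[(state, action)] += 1
                # incremental mean of the observed returns
                q[(state, action)] += (g - q[(state, action)]) / counts[(state, action)]
    return dict(q)

# one simulated episode: move right (reward 0), then down (reward 1)
q = mc_q_update([[("s1", "right", 0.0), ("s2", "down", 1.0)]])
print(q)  # {("s1", "right"): 0.9, ("s2", "down"): 1.0}
```

Averaging returns over many simulated episodes like this is exactly how the Q-table entries settle toward their true values, with no knowledge of the MDP's transition probabilities needed.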
Where does the number of states come from? Where is state 17??
Are you from Bharat 🇮🇳?
I think you should include the answers to the quizzes in the video at some point. Also at 8:00 you said the highest is 1.5, but it is 2.1.
Most importantly, I think these moments with Frank were cringe and distracted me from focusing. The target audience is most likely not kids (at least I think so), so they would consider it cringe too. No offense.
found it funny, not a kid, but helped me concentrate more😂
Didn't find it funny, am a kid, but appreciate the light humor and effort put into these videos. Didn't really distract me.
Stop being such a hater. The reason 1.5 is the highest is that the action with assumed reward 2.1 is illegal in state 1 (you can't move up because of the wall).
p.s. using the word cringe is cringe
8:09 I stopped watching when he thinks 1.5 is greater than 2.1 lmao
In state s1 the agent didn't actually have the option to go up. So maybe that's why 2.1 doesn't matter: the agent can only select the best action among those available in its current state.
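That "best legal action" selection can be sketched in a couple of lines. The 2.1 and 1.5 are the values discussed above; the values for down and left are made up for illustration:

```python
def greedy_action(q_values, legal_actions):
    """Pick the highest-valued action among those legal in the current state.
    q_values: dict mapping action name -> estimated Q-value."""
    return max(legal_actions, key=lambda a: q_values[a])

# In s1 the wall blocks "up", so its 2.1 is never considered:
q_s1 = {"up": 2.1, "right": 1.5, "down": 1.0, "left": 0.3}
print(greedy_action(q_s1, ["right", "down"]))  # right
```

Masking out illegal actions before taking the argmax is why 1.5, not 2.1, is the greedy choice in s1.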
At the start he clearly said that the environment has 9 cells (states).
this is the difficult way to teach Monte Carlo 😂
difficult for absolute beginners I think, otherwise the video was easy to follow for me.
@@swphsil3675 Do you think Monte Carlo should be taught starting from how randomness is governed by probability? For example, a coin has two possible outcomes: no one knows the result of a single flip, but over many flips each outcome comes up about 50% of the time.