Best Multi-Armed Bandit Strategy? (feat: UCB Method)
- Published 24 Jun 2024
- Which is the best strategy for multi-armed bandit? Also includes the Upper Confidence Bound (UCB Method)
Link to intro multi-armed bandit video: • Multi-Armed Bandit : D...
Link to code used in this video: github.com/ritvikmath/Time-Se...
Link to Hoeffding's Inequality: lilianweng.github.io/lil-log/...
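For anyone who wants to try the method from the video, the UCB1 rule can be sketched as below. This is a minimal sketch, not the video's actual code: the `pull(arm)` reward callback and the Bernoulli "restaurant" example are my own hypothetical stand-ins, assuming rewards bounded in [0, 1].

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Each step, pick the arm maximizing sample mean + sqrt(2*ln(t)/n_i)."""
    counts = [0] * n_arms   # pulls per arm
    sums = [0.0] * n_arms   # total reward per arm
    rewards = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialize: pull each arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        rewards.append(r)
    return rewards

# Toy example: 3 "restaurants" giving Bernoulli happiness in {0, 1}
random.seed(0)
probs = [0.3, 0.5, 0.8]
rewards = ucb1(lambda i: float(random.random() < probs[i]), 3, 300)
```

Over enough days, most pulls concentrate on the best arm while the shrinking bonus term still forces occasional checks on the others.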
This is called teaching with the highest standards
You're a good teacher, man! Too bad only very few academics can explain things with the clarity and simplicity you do.
I appreciate that!
I've watched loads of your videos and it's given me so much clarity with so many different data science concepts. You're a really great teacher, hope you keep posting videos and hope your channel keeps growing!
Love this video man. Just the simple message the viewer gets that you're here to help them and break down higher, abstract concepts into simpler terms they can grasp is incredibly reassuring. Even if I failed to understand any given part as a student I'd go back over and over with the confidence you're willing and able to help me get there eventually. Even if this channel isn't around forever never stop sharing your knowledge.
this is too good Ritvik. Congrats you made learning UCB easier
This is very, very well explained. Concise, yet conversational. Excellent stuff.
You simply rock 👍Your teaching style, way of explaining complex things in such a simpler fashion makes learning much easier and faster. Wonderful.
So great, clear my doubt completely. Please keep doing this!!
I really appreciate your videos. i’m taking a course on machine learning and a/b testing and after every lesson I come watch your videos to actually understand what I just learned.
One of the best video explanations I have seen on Data science so far. Please keep up the good work Ritvik. Thanks a lot!!
Most welcome!
Thank you, brother! You are very good at explaining and giving the right information. Respect!
Your videos have cleared up my concepts over the years. Please make a playlist on Reinforcement Learning.
Choosing a place for dinner will never be the same again...your videos are fantastic, man! I was so frustrated earlier today because I simply couldn't get a grip on the UCB algorithm. Now, I am more than happy not only because I finally understood it (at least the intuition behind it), but also because I have a name for one of the dominating stories of my life (exploration - exploitation - dilemma). You, sir, are one of the most amazing teachers I ever experienced!
This is such a great explanation! Thank you!
Thank you, you're a talented teacher. You explained it very well and clearly.
BEST EXPLANATION EVER.
Thank you so much, Ritvik!
love the way you explain by examples!
The best video explanation I have seen so far. Could not stop paying attention. Thank you!
Glad it was helpful!
I was stuck on bandit algorithm for a day before I found your video. Excellent work!
Thanks!
thanks a lot for these multi-bandit videos..........
spent ages trying to figure this stuff out, your explanations have helped a lot
Thank you :-)
Your ability to communicate difficult concepts using story telling is unparalleled.
Made it easy. Thanks for teaching this and being clear as day.
I like your videos dude. Thank you for creating them!
Glad you like them!
better than my professor thank god i found your video, thank you very much!!
The best math-computer-science instructor online. Much appreciated
Always the best 👌 I hope you design a RL course one day. It will definitely be one of the best🌝
Your videos are getting only better! Thank you very much. Is the restaurant's happiness score equivalent to the rewards delivered?
This is great. You should definitely continue with reinforcement learning applications!!!
Thank you so much for also providing the link to the Hoeffding's inequality! Most other sources for this just skip the theory which I dislike since I would like to understand this algorithm.
Warning, everybody... very addictive videos... I just can't stop watching one after another. Fantastic job!!!
That's an amazing explanation!
This is such a good explanation. Brilliant.
Glad you think so!
The explanation makes the concepts very clear.
thanks!
Hey, thanks a lot for the explanations! Maybe you can make a third video about random and directed exploration. There are a lot more models besides UCB :)
You are a great teacher indeed.
great explanation bro!
Thank you so much you explained that very well
Best explanation. P.S. it would be nice to see results for 300+ days in this competition of UCB vs. exploitation.
This was an excellent video. Thanks.
Glad it was helpful!
Awesome explanation. Thanks a lot
Ritvik you are a pedagogical GOD
Came here to get better at picking restaurants but stayed for the data science teaching!
Woo!
Thank you very much..you made it very easy to understand
You are welcome!
Amazing! Thanks a lott!!
Nicely explained, Thanks.
Wonderful explanation
Glad you think so!
This is super cool! Thanks :)
Keep up the good work !!
The thing I love the most about your videos is the perfect balance between intuition, theory and matching them to results. Keep going!
If you have a Patreon or equivalent account, I'd be honored to support you in this terrific journey of yours.
I appreciate that!
Agree!
Very helpful!! Just want to know: if we don't have any prior info about the happiness distribution of each restaurant, how do we use this UCB algorithm? In a total cold-start problem, what parameters will help decide the happiness distribution of a restaurant in the city?
Hi, first of all, very well put together video!
One question: in the exploitation approach in the example, we visited each restaurant once (n visits in total) and then continued with the best observed one for the remaining 300 − n days, right?
Also, I find it quite surprising that exploitation-only outperforms UCB1 for larger n; intuitively, the exploitation-only approach seems less stable and more up to chance (it may perform worse than even exploration-only). I guess the second term based on Hoeffding's inequality really punishes UCB1 in this example 🤔
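If I'm reading the comment right, that exploit-only baseline can be sketched as below. This is my own sketch under the commenter's description, with a hypothetical `pull(arm)` reward callback, not the video's code:

```python
import random

def exploit_only(pull, n_arms, horizon):
    """Visit each arm once, then commit to the best single observation."""
    first = [pull(i) for i in range(n_arms)]            # one visit each
    best = max(range(n_arms), key=lambda i: first[i])   # best observed arm
    return first + [pull(best) for _ in range(horizon - n_arms)]

# Same toy setup: 3 "restaurants" with Bernoulli happiness in {0, 1}
random.seed(1)
probs = [0.3, 0.5, 0.8]
out = exploit_only(lambda i: float(random.random() < probs[i]), 3, 300)
```

Because the commitment rests on a single noisy sample per arm, individual runs vary a lot, which matches the "more up to chance" intuition above.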
Clear explanation
Mindblowing!
great tutorial brother, can you make a lecture on the UCB1 derivation?
Are there any models that factor in staleness? I would imagine going to the same restaurant 297 days in a row would be pretty boring, so the optimal strategy should include the other restaurants every once in a while.
That's probably Hoeffding's inequality. Maybe the name sounds strange, but nevertheless deserves to be spelled correctly!
Kindly, also upload a video about Thompson Sampling as well! Exam in 4 days
very nice lecture
Another very good video.
Glad you enjoyed it
while watching this vid, i unconsciously started nodding!!!
First, i love your channel!
Hi. I have just watched a couple of your videos and couldn't resist the temptation to subscribe and binge on all the materials. Very impressed by the intuitiveness of your approach. May I ask if you have or recommend any materials to intuitively understand epsilon automata machines and CSSR algorithm. Utterly grateful for your reply.
Hi, MAB seems to be inefficient when there are lots of arms. One way to calculate the q-value for multiple arms using a single model is contextual bandits; could you explain how a contextual bandit does this? I can't understand how one model outputs q-values for multiple arms.
Nice!! thank you
No problem!
thank you!
PERFECT !!!
You’re the goat
perfect!
Wouldn't the averages have to be within a specific range (e.g. [0,1])? Considering the explanation in the video, if the means are on the order of thousands, the bound would have practically no effect on the decision. Please correct me if I'm wrong. Thanks!
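A quick back-of-the-envelope check of this point: the UCB1 bonus term stays well below 1 for any reasonable play counts, so next to means in the thousands it is negligible. Hoeffding's inequality assumes rewards bounded in a known range, so rewards should be rescaled to [0, 1] first (the specific numbers below are illustrative, not from the video):

```python
import math

# UCB1 bonus after t = 300 total plays, with n = 100 pulls of one arm
bonus = math.sqrt(2 * math.log(300) / 100)
print(round(bonus, 3))  # about 0.338

# Next to hypothetical means like 5000 vs 5200, a bonus of ~0.34 changes
# nothing, so rewards in [0, b] should be rescaled to [0, 1] (or the
# bonus scaled by b) before applying the bound.
```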
Can you make a video on Contextual Bandit
The last option (n=100) is akin to real life. There are so many things to do and choose from in a short time. Exploiting is a better strategy to reduce regret: make the most of what you've got!
thanks
nice :)
One additional question: can this be solved through an optimization problem's solution?
In real-world problems, the state space will be very big and we will not get enough time to explore all possible states. In such cases, UCB1 should perform better than exploitation.
cannot believe
To use Hoeffding's inequality, the rewards need to be bounded. Why do we assume that here?
after seeing this video I decide not to continue exploration
You explained everything in a hurry; by the time I reached the end of the video, I had already forgotten what you said at the start.
And watching it again and again is not helping either.
Please put the other formulas on the whiteboard as well and work through a calculation manually, so the ideas and concepts have time to sink in.
Rushing to the end won't help the learners.