Reinforcement Learning with sparse rewards

  • Published 3 Jun 2024
  • In this video I dive into three advanced papers that address the problem of the sparse reward setting in Deep Reinforcement Learning and pose interesting research directions for mastering unsupervised learning in autonomous agents.
    Papers discussed:
    Reinforcement Learning with Unsupervised Auxiliary Tasks - DeepMind:
    arxiv.org/abs/1611.05397
    Curiosity Driven Exploration - UC Berkeley:
    arxiv.org/abs/1705.05363
    Hindsight Experience Replay - OpenAI:
    arxiv.org/abs/1707.01495
    If you want to support this channel, here is my patreon link:
    / arxivinsights
    You are amazing!! ;)
    If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbr...
  • Science & Technology

COMMENTS • 93

  • @michaelc2406
    @michaelc2406 6 years ago +4

    I've just been reading these papers for the openai retro competition. Your video went into a lot of depth, which is really hard to do with complex ideas, bravo!

  • @AnkitBindal97
    @AnkitBindal97 6 years ago +37

    Your teaching style is incredible! Can you please do a video on Capsule Networks?

  • @DeaMikan
    @DeaMikan 3 years ago +2

    Seriously great, I'd love to see an updated video with the newest research!

  • @thomasbao4477
    @thomasbao4477 4 years ago +1

    AMAZING! The prediction-reward algorithm in the first mentioned paper is very similar to how humans learn, at least based on a computational neurobiology course I took in college.

  • @adrienforbu5165
    @adrienforbu5165 3 years ago +1

    It's always interesting to see how ideas around curiosity have taken off in reinforcement learning (I'm thinking of the "Never Give Up" paper and Atari57).

  • @mohammadhatoum
    @mohammadhatoum 5 years ago

    Always impressive, and I never get bored watching your videos. Good job and keep it up 👍

  • @henning256yt
    @henning256yt 2 years ago

    Love your passion for what you are talking about!

  • @pasdavoine
    @pasdavoine 6 years ago +1

    Fantastic video! Saving me time, and in an enjoyable way.
    Many thanks

  • @timonix2
    @timonix2 2 years ago

    Holy shit. I have been working on this problem for months, and seeing that professionals are getting almost exactly the same answers as me is pretty cool. There are a whole bunch of ideas in here I have not tried yet as well. Super useful.

  • @glorytoarstotzka330
    @glorytoarstotzka330 5 years ago +27

    No clickbait, good video quality, good sound, relatively nice topics for some people, but 16k subs? Excuse me, wtf

    • @sebastianjost
      @sebastianjost 3 years ago

      The video quality is great, but the topics are just not interesting to many people. And of course, few subs make it hard to find this channel.
      I'm glad I did though. This is a great overview.

  • @robosergTV
    @robosergTV 6 years ago +13

    need more deep RL stuff ^^

  • @Jabrils
    @Jabrils 5 years ago +21

    fantastic content lad!

  • @armorsmith43
    @armorsmith43 3 years ago

    This is a very effective strategy for personal productivity as a programmer with ADHD.
    I augment my unreliable reward-signaling system with Test-Driven Development.

  • @minos99
    @minos99 2 years ago +1

    I was really touched by the ending of the video. We need research on models and the socio-economic consequences of AI models... and I don't mean that Terminator, Butlerian Jihad crap. I mean the human side: job losses, bias, morality, misuse, etc.

  • @skyheart_dev
    @skyheart_dev 1 year ago

    Man, it is such an interesting and good video. I come from a completely different area - game development - and I wanted to understand some basics of AI because I really want to dive deep into this, to eventually teach, for example, a rocket to fly, Flappy Bird to jump, or Snake to play efficiently.
    Reading papers is really difficult without knowledge of some basics, and the way you explained all these things is so good. I still don't understand the terminology and all these formulas, but at least I got one step closer :)
    Thank you for this brilliant video :)

  • @LatinDanceVideos
    @LatinDanceVideos 5 years ago

    Great channel. Thanks for this and other videos.

  • @Frankthegravelrider
    @Frankthegravelrider 5 years ago

    Ah dude, just discovered your videos!! Just what I needed. Can't believe I have a 6-year degree in engineering, work in AI, and can still learn from YouTube. Mad when you think about it. It's a new paradigm of education.

    • @ArxivInsights
      @ArxivInsights  5 years ago

      Haha, glad to hear that! You're welcome :)

  • @satyaprakashdash8203
    @satyaprakashdash8203 4 years ago

    I would like to see a video on meta reinforcement learning. It's an exciting field now!

  • @QNZE5
    @QNZE5 6 years ago

    Hey, very nice video :)
    What is the source for that video containing the boat in a behavioural circuit?

  • @aliamiri4524
    @aliamiri4524 3 years ago

    Amazing content, you are a very good teacher.

  • @lukaslorenc4816
    @lukaslorenc4816 5 years ago +2

    I recommend reading "Curiosity-driven Exploration by Self-supervised Prediction";
    it's a really awesome paper.

  • @Leibniz_28
    @Leibniz_28 4 years ago

    Really happy to find your channel, really sad to find so few videos on it.

  • @emademad4
    @emademad4 5 years ago +1

    Great content, great purpose. Please do more videos ASAP. I'm studying in the same field; would you suggest some links to good, up-to-date articles?

  • @mashpysays
    @mashpysays 6 years ago

    Thanks for the nice explanation.

  • @ianprado1488
    @ianprado1488 6 years ago

    You make high-quality videos. A+

  • @CalvinJKu
    @CalvinJKu 6 years ago

    Awesome video as usual!

  • @adityaojha627
    @adityaojha627 3 years ago

    Nice video. Question: Is DDQN efficient at solving sparse reward environments? Say I only give an agent a reward at the end of an episode.

  • @Matthew8473
    @Matthew8473 3 months ago

    This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn

  • @ItalianPizza64
    @ItalianPizza64 6 years ago

    Amazing video again! Clear and concise as always, which is anything but trivial with this kind of topic.
    I am very curious to see what you will be focusing on next!

  • @bonob0123
    @bonob0123 5 years ago

    great stuff well done man

  • @andrestorres2836
    @andrestorres2836 5 years ago +3

    Your videos are awesome!! I'm going to tell all my friends about you.

  • @nikoskostagiolas
    @nikoskostagiolas 6 years ago +1

    Hey dude, awesome video as always! Could you do one for the Relational Deep Reinforcement Learning paper of Zambaldi et al. ?

  • @miriamramstudio3982
    @miriamramstudio3982 3 years ago

    Excellent video! Thx.

  • @inspiredbynature8970
    @inspiredbynature8970 2 years ago

    you are doing great, keep it up

  • @DjChronokun
    @DjChronokun 5 years ago +5

    If it wasn't for this channel, I'd never have known it isn't pronounced 'ark-ziv'.

    • @ritajitdey7567
      @ritajitdey7567 5 years ago +2

      Same here, at least we got it corrected without embarrassing ourselves IRL

    • @wahabfiles6260
      @wahabfiles6260 4 years ago

      @@ritajitdey7567 INR

  • @cyrilfurtado
    @cyrilfurtado 6 years ago

    Great video, now I can go read the papers. It would be great to post the links to the papers here.

    • @ArxivInsights
      @ArxivInsights  6 years ago

      All links are in the video description! :)

  • @hassanbelarbi5185
    @hassanbelarbi5185 4 years ago

    If someone wants to contact you directly, is there any way? I have some questions related to my thesis topic. Thanks in advance for your efforts.

  • @ycjoelin000
    @ycjoelin000 5 years ago

    What's the website you used at 2:23?

  • @vadrif-draco
    @vadrif-draco 10 months ago

    So "HER" basically starts off as "if I do this action, I can get to this goal", and then gradually learns how to flip the statement to "if I want to get to this goal, I need to do this action". Pretty nice.

  • @sunegocioexitoso
    @sunegocioexitoso 5 years ago

    Awesome video

  • @markusdegen6036
    @markusdegen6036 5 years ago

    Hi, I am completely new to the topic of machine learning itself... just a thought: with these sparse rewards, would it be possible to have each reward as somehow a forced version and a free-will version, and then enforce not having forced ones? It sounds a bit abstract right now... when I get a better grasp of things, maybe later I can rephrase that.

    • @ArxivInsights
      @ArxivInsights  5 years ago +1

      Markus Degen A bit abstract indeed. In general, the current paradigm is as follows: we want to give the algorithm sparse extrinsic rewards because those are usually easy to define and relatively unambiguous: 'win the game', 'stack object A on top of B', ... However, many people are working on algorithms that create their own derivative intrinsic reward signals. In human terms, those are things like motivation, passion, curiosity, ... which might not be directly linked to extrinsic rewards (paychecks, eating food, sex, ...), but seemingly evolution has shaped those drives to overcome problems similar to the ones Deep RL is facing right now!
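
      A minimal sketch of that paradigm, using forward-model prediction error as the intrinsic bonus (a strong simplification of the curiosity module in arxiv.org/abs/1705.05363, which predicts in a learned feature space; the layer sizes and the beta weight are illustrative assumptions):

          import torch
          import torch.nn as nn

          class ForwardModel(nn.Module):
              # Predicts the next state from the current state and action; its
              # prediction error serves as an intrinsic 'curiosity' reward.
              def __init__(self, state_dim, action_dim, hidden=64):
                  super().__init__()
                  self.net = nn.Sequential(
                      nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                      nn.Linear(hidden, state_dim),
                  )

              def forward(self, state, action):
                  return self.net(torch.cat([state, action], dim=-1))

          def shaped_reward(extrinsic, state, action, next_state, model, beta=0.01):
              # Sparse extrinsic reward plus a dense intrinsic bonus: large where
              # the agent's world model is still surprised, small where it is not.
              with torch.no_grad():
                  error = (model(state, action) - next_state).pow(2).mean()
              return extrinsic + beta * error.item()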

  • @arjunbemarkar7414
    @arjunbemarkar7414 5 years ago +2

    Can you tell me where you find these articles?

    • @areallyboredindividual8766
      @areallyboredindividual8766 3 years ago

      Website appears to be Arxiv. Searching for DeepMind and OpenAI papers will yield results too

  • @420_gunna
    @420_gunna 6 years ago +7

    Great vid! re: the ending of the video, what do you think about creating something on AI safety or ethics?

    • @ArxivInsights
      @ArxivInsights  6 years ago +8

      Actually, that's a really good suggestion! Added to my pipeline :)

    • @AnonymousAnonymous-ht4cm
      @AnonymousAnonymous-ht4cm 5 years ago +1

      Have you seen Robert Miles' channel? He has some good stuff on AI safety, but posts rather infrequently.

  • @mountain_bouy
    @mountain_bouy 5 years ago

    you are amazing

  • @Vladeeer
    @Vladeeer 6 years ago

    Can you do an example for RL?

  • @saikat93ify
    @saikat93ify 5 years ago +1

    This channel is a really amazing initiative, as I've always found ArXiv extremely interesting but don't have the time to read all the papers. :)
    This question may sound very silly, but - how do programs play games like Mario and Reversi? What I mean is, don't we need some kind of hardware like a keyboard or joystick to play these games? How do software agents play them?
    I have always been curious about this. Please explain if anyone has the answer. :)

    • @ArxivInsights
      @ArxivInsights  5 years ago

      It's not that hard to hack the game engine so that an RL agent controls the game inputs via an API (so you can do that from e.g. Python) instead of via a controller/joystick. In most gym games there's even an option to train your agent on the raw game state instead of the rendered pixel version!
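
      For example, with the classic gym API (a sketch; newer gymnasium releases return an extra value from reset() and step()):

          import gym  # pip install gym

          # No keyboard or joystick: the environment exposes observations and
          # accepts actions as plain integers through step().
          env = gym.make("CartPole-v1")
          obs = env.reset()
          done, episode_return = False, 0.0
          while not done:
              action = env.action_space.sample()  # a trained policy would choose here
              obs, reward, done, info = env.step(action)
              episode_return += reward
          print("episode return:", episode_return)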

  • @aytunch
    @aytunch 4 years ago

    Great videos and channel. Why don't you make any more videos? :(

  • @samanthaqiu3416
    @samanthaqiu3416 4 years ago

    Make a video on the MuZero paper

  • @artman40
    @artman40 6 years ago +1

    What about delayed rewards?

  • @jeffreylim5920
    @jeffreylim5920 4 years ago

    7:56 where the main point starts

  • @TheAcujlGamer
    @TheAcujlGamer 3 years ago

    This is so cool, especially the "HER" method. Wow!

  • @ThibaultNeveu
    @ThibaultNeveu 6 years ago

    Very nice video. Thank you :)

  • @codyheiner3636
    @codyheiner3636 5 years ago

    Hi Xander, I made a Patreon account just for you! Keep it up!

    • @ArxivInsights
      @ArxivInsights  5 years ago +1

      Thx a lot Cody!! Getting this kind of support from people I've never met is such a great motivation to keep going! Many thanks :)

  • @DistortedV12
    @DistortedV12 6 years ago

    Smart guy

  • @viralblog007
    @viralblog007 5 years ago

    Can you suggest a link to a research paper on reinforcement learning?

  • @herrizaax
    @herrizaax 4 years ago +1

    Nice video.
    I didn't get the last part: how does it learn faster if it sets virtual goals? If it gets the same reward for a virtual goal as for the real goal, then it will just learn that it can shoot at any point that is made a goal, but the real goal will never be found. If it gets a lower reward, then it learns that shooting at goals gives a reward, but that tells nothing about the proximity to the real goal. I'm obviously missing something here and I'm really curious what it is. Thank you :)
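
    The detail this question hinges on, per the HER paper, is that the policy and value function are goal-conditioned: the goal is an input, so experience relabeled with virtual goals trains the same network that is later queried with the real goal. A minimal sketch, with all layer sizes being illustrative assumptions:

        import torch
        import torch.nn as nn

        class GoalConditionedQ(nn.Module):
            # Q(s, g, a): because the goal g is part of the input, transitions
            # relabeled with virtual goals still improve value estimates that
            # generalize to the real goal.
            def __init__(self, state_dim, goal_dim, action_dim, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim + goal_dim + action_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, state, goal, action):
                return self.net(torch.cat([state, goal, action], dim=-1))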

  • @shivajbd
    @shivajbd 5 years ago +1

    15:29 Modi

  • @vigneshamudha821
    @vigneshamudha821 5 years ago

    Brother, please explain capsule networks.

    • @ArxivInsights
      @ArxivInsights  5 years ago

      Aurelien Geron has a great video on CapsNets; no need to redo his video, it's already perfect! ua-cam.com/video/pPN8d0E3900/v-deo.html

    • @vigneshamudha821
      @vigneshamudha821 5 years ago

      +Arxiv Insights thanks bro

  • @MasterofPlay7
    @MasterofPlay7 4 years ago

    any coding videos?

  • @StevenSmith68828
    @StevenSmith68828 5 years ago +1

    I really like machine learning because it feels like training a Pokémon. Sure, it sometimes takes a very long time to get set up, but yeah...

  • @dripdrops3310
    @dripdrops3310 5 years ago

    The number of views of your videos is not proportional to their quality. Looking forward to new content!

  • @planktonfun1
    @planktonfun1 4 years ago

    big brain filter

  • @wahabfiles6260
    @wahabfiles6260 4 years ago

    Why is his head bigger than the body? Alien?

  • @elarrayhesohit4479
    @elarrayhesohit4479 4 years ago

    I just want my computer to grind levels. Not take my job.

  • @MD-pg1fh
    @MD-pg1fh 6 years ago +2

    Her?

  • @WerexZenok
    @WerexZenok 6 years ago

    I don't see any social problem automation can cause.
    If you leave the market free, it will adjust itself as it always has.

    • @egparker5
      @egparker5 6 years ago

      I sort of feel the same way. We shouldn't make any public policy decisions until we see actual damage happening, and not just overexcited predictions. So far it seems DL/ML is creating net additional jobs and increasing average salaries. If that changes, then maybe it is time to think about new public policies. In the meantime, I would recommend retargeting the time spent worrying about AI into time spent learning about AI to increase your human capital.
      www.wsj.com/articles/workers-fear-not-the-robot-apocalypse-1504631505
      www.forbes.com/sites/bernardmarr/2017/10/12/instead-of-destroying-jobs-artificial-intelligence-ai-is-creating-new-jobs-in-4-out-of-5-companies

    • @WerexZenok
      @WerexZenok 6 years ago

      Agreed.
      And even imagining the worst scenario, where AI replaces all jobs, we will still be capable of owning bots and renting them out.
      We will live like gods on earth.

    • @NegatioNZor
      @NegatioNZor 6 years ago +1

      The question here though, is WHO will be owning these robots, and how will these jobs be distributed? For highly educated and resourceful people, this will probably not be a huge issue. But there are something like N million truck drivers in the US, which will have a much harder time adjusting. Going from blue-collar to white-collar is probably not as easy.

  • @Rowing-li6jt
    @Rowing-li6jt 5 years ago

    louder pls

  • @tsunamio7750
    @tsunamio7750 4 years ago

    VOLUME TOO LOW!!!

  • @creativeuser9086
    @creativeuser9086 1 year ago

    what happened to this channel..

  • @loopuleasa
    @loopuleasa 6 years ago

    Some feedback on your video: trim your content and be more entertaining.
    Watch how Siraj does it.
    From my point of view, I dozed off a couple of times, even though the accuracy of the content is high.
    Basically, use fewer words, fewer images, less intro, less buildup, and focus more on the crux, while going faster to keep your audience on edge and curious.
    Hope my view is productive for you. Good luck.

    • @loopuleasa
      @loopuleasa 6 years ago

      Do it like an AI optimizer does: minimize, and use simplicity as much as possible until you reach the goal. Communicate the idea you want to convey in as little time and with as few actions as possible.