Reinforcement Learning with sparse rewards

  • Published Nov 24, 2024

COMMENTS • 91

  • @AnkitBindal97
    @AnkitBindal97 6 years ago +38

    Your teaching style is incredible! Can you please do a video on Capsule Networks?

  • @michaelc2406
    @michaelc2406 6 years ago +4

    I've just been reading these papers for the openai retro competition. Your video went into a lot of depth, which is really hard to do with complex ideas, bravo!

  • @glorytoarstotzka330
    @glorytoarstotzka330 6 years ago +27

    No clickbait, good video quality, good sound, relatively nice topics for some people, but 16k subs? Excuse me, wtf.

    • @sebastianjost
      @sebastianjost 4 years ago

      The video quality is great but the topics are just not interesting for many people. And of course few subs makes it hard to find this channel.
      I'm glad I did though. This is a great overview.

  • @DeaMikan
    @DeaMikan 3 years ago +2

    Seriously great, I'd love to see an updated video with the newest research!

  • @armorsmith43
    @armorsmith43 4 years ago

    This is a very effective strategy for personal productivity as a programmer with ADHD.
    I augment my unreliable reward-signaling system with Test-Driven Development.

  • @timonix2
    @timonix2 3 years ago

    Holy shit. I have been working on this problem for months, and seeing that professionals are getting almost exactly the same answers as me is pretty cool. There are a whole bunch of ideas in here I have not tried yet as well. Super useful.

  • @Frankthegravelrider
    @Frankthegravelrider 6 years ago +1

    Ah dude, just discovered your videos!! Just what I needed. Can't believe I have a 6-year degree in engineering, work in AI, and can still learn from YouTube. Mad when you think about it. It's a new paradigm of education.

    • @ArxivInsights
      @ArxivInsights  6 years ago

      Haha, glad to hear that! You're welcome :)

  • @minos99
    @minos99 3 years ago +1

    I was really touched by the ending of the video. We need research on models and the socio-economic consequences of AI models... and I don't mean that Terminator, Butlerian-jihad crap. I mean the human side: job losses, bias, morality, misuse... etc.

  • @Jabrils
    @Jabrils 6 years ago +21

    fantastic content lad!

  • @adrienforbu5165
    @adrienforbu5165 3 years ago +1

    It's always interesting to see how ideas around curiosity have taken off in reinforcement learning (I'm thinking of the "Never Give Up" paper and Atari57).

  • @thomasbao4477
    @thomasbao4477 4 years ago +1

    AMAZING! The prediction-reward algorithm in the first mentioned paper is very similar to how humans learn, at least based on a computational neurobiology course I took in college.

  • @henning256yt
    @henning256yt 3 years ago

    Love your passion for what you are talking about!

  • @robosergTV
    @robosergTV 6 years ago +13

    need more deep RL stuff ^^

  • @pasdavoine
    @pasdavoine 6 years ago +1

    Fantastic video! Saving me time, and in an enjoyable way.
    Many thanks

  • @satyaprakashdash8203
    @satyaprakashdash8203 5 years ago

    I would like to see a video on meta reinforcement learning. It's an exciting field now!

  • @DjChronokun
    @DjChronokun 6 years ago +5

    if it wasn't for this channel I'd never have known it wasn't pronounced 'ark-ziv'

    • @ritajitdey7567
      @ritajitdey7567 6 years ago +2

      Same here, at least we got it corrected without embarrassing ourselves IRL

    • @wahabfiles6260
      @wahabfiles6260 4 years ago

      @@ritajitdey7567 INR

  • @Matthew8473
    @Matthew8473 9 months ago

    This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn

  • @mohammadhatoum
    @mohammadhatoum 6 years ago

    Always impressive, and I never get bored watching your videos. Good job and keep it up 👍

  • @lukaslorenc4816
    @lukaslorenc4816 5 years ago +2

    I recommend reading "Curiosity-driven Exploration by Self-supervised Prediction";
    it's a really awesome paper.

  • @Leibniz_28
    @Leibniz_28 5 years ago

    Really happy to find your channel; really sad to find only a few videos on it.

  • @dzima-create
    @dzima-create 1 year ago

    Man, it is so damn interesting and such a good video. I come from a completely different area: game development. And I wanted to understand some basics of AI, because I really want to dive deep into this to eventually teach, for example, a rocket to fly, a flappy bird to jump, or a snake to play efficiently.
    Reading papers is really difficult without knowledge of some basics, and the way you explained all these things is so good. I still don't understand the terminology and all these formulas, but at least I got one step closer :)
    Thank you for this brilliant video :)

  • @vadrif-draco
    @vadrif-draco 1 year ago

    So "HER" basically starts off as "if I do this action, I can get to this goal", and then gradually learns how to flip the statement to "if I want to get to this goal, I need to do this action". Pretty nice.
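    The "flip" this comment describes can be sketched in a few lines. A minimal toy illustration of the Hindsight Experience Replay relabeling trick (my own sketch, not the paper's code; the function name and tuple layout are invented for illustration): a failed episode is stored a second time as if the state it actually reached had been the goal all along, so "I did this and ended up at S" becomes a positive example of "to reach S, do this".

    ```python
    def her_relabel(episode, new_goal):
        # episode: list of (state, action, next_state) transitions from a rollout
        # that may have failed to reach its original goal.
        # Relabel each transition as if `new_goal` (e.g. the state actually
        # reached at the end of the episode) had been the target, so the final
        # step now earns the sparse goal-reaching reward.
        return [
            (state, action, next_state, new_goal,
             1.0 if next_state == new_goal else 0.0)
            for state, action, next_state in episode
        ]

    # A "failed" episode: the agent aimed for state "G" but only reached "C".
    episode = [("A", "right", "B"), ("B", "right", "C")]
    hindsight = her_relabel(episode, new_goal="C")
    # The last relabeled transition now carries reward 1.0: a learning signal
    # recovered from a failure.
    ```

    In the real algorithm these relabeled transitions are simply added to the replay buffer alongside the originals, which is why HER helps most when the intended goal is almost never reached by chance.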

  • @ItalianPizza64
    @ItalianPizza64 6 years ago

    Amazing video again! Clear and concise as always, which is all but trivial with these kinds of topics.
    I am very curious to see what you will be focusing on next!

  • @cyrilfurtado
    @cyrilfurtado 6 years ago

    Great video; now I can go read the papers. It would be great to post links to the papers here.

    • @ArxivInsights
      @ArxivInsights  6 years ago

      All links are in the video description! :)

  • @andrestorres2836
    @andrestorres2836 5 years ago +3

    Your videos are awesome!! I'm going to tell all my friends about you.

  • @emademad4
    @emademad4 6 years ago +1

    Great content, great purpose. Please do more videos ASAP. I'm studying in the same field; would you suggest some links to good up-to-date articles?

  • @aliamiri4524
    @aliamiri4524 4 years ago

    Amazing content; you are a very good teacher.

  • @LatinDanceVideos
    @LatinDanceVideos 6 years ago

    Great channel. Thanks for this and other videos.

  • @TheAcujlGamer
    @TheAcujlGamer 3 years ago

    This is so cool, especially the "HER" method. Wow!

  • @miriamramstudio3982
    @miriamramstudio3982 4 years ago

    Excellent video! Thx.

  • @inspiredbynature8970
    @inspiredbynature8970 2 years ago

    you are doing great, keep it up

  • @adityaojha627
    @adityaojha627 4 years ago

    Nice video. Question: Is DDQN efficient at solving sparse reward environments? Say I only give an agent a reward at the end of an episode.

  • @nikoskostagiolas
    @nikoskostagiolas 6 years ago +1

    Hey dude, awesome video as always! Could you do one for the Relational Deep Reinforcement Learning paper of Zambaldi et al. ?

  • @CalvinJKu
    @CalvinJKu 6 years ago

    Awesome video as usual!

  • @saikat93ify
    @saikat93ify 6 years ago +1

    This channel is a really amazing initiative, as I've always found arXiv extremely interesting but don't have the time to read all the papers. :)
    This question may sound very silly, but how do programs play games like Mario and Reversi? What I mean is, don't we need some kind of hardware like a keyboard or joystick to play these games? How do software agents play them?
    I have always been curious about this. Please clear up my doubt if anyone has the answer. :)

    • @ArxivInsights
      @ArxivInsights  6 years ago

      It's not that hard to hack the game engine so that an RL agent controls the game inputs via an API (so you can do that from e.g. Python) instead of via a controller/joystick. In most gym games there's even an option to train your agent from the raw game state instead of the rendered pixel version!
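      The reply above can be made concrete with a toy sketch. This is not actual OpenAI Gym code (the `TinyLineEnv` class and its goal are invented for illustration), but it mimics the gym-style `reset()`/`step()` interface the reply refers to: the "controller" is just a method call that takes an action and returns the raw game state, a reward, and a done flag — no keyboard or joystick involved. It also happens to show a sparse reward: zero everywhere except the goal.

      ```python
      import random

      class TinyLineEnv:
          """Toy 1-D 'game' with a gym-style API: the agent sits on a line
          and must walk right to a goal square. A sketch, not real Gym code."""
          def __init__(self, goal=5):
              self.goal = goal

          def reset(self):
              self.pos = 0
              return self.pos                        # raw game state, no pixels

          def step(self, action):                    # action: 0 = left, 1 = right
              self.pos = max(0, self.pos + (1 if action == 1 else -1))
              done = self.pos == self.goal
              reward = 1.0 if done else 0.0          # sparse: only at the goal
              return self.pos, reward, done, {}

      env = TinyLineEnv()
      obs, done, total = env.reset(), False, 0.0
      while not done:                                # random policy "presses buttons"
          obs, reward, done, _ = env.step(random.choice([0, 1]))
          total += reward
      ```

      A real setup swaps `TinyLineEnv` for e.g. `gym.make("CartPole-v1")` and the random choice for a learned policy, but the control loop is the same shape.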

  • @ianprado1488
    @ianprado1488 6 years ago

    You make high quality videos A+

  • @bonob0123
    @bonob0123 5 years ago

    great stuff well done man

  • @mashpysays
    @mashpysays 6 years ago

    Thanks for the nice explanation.

  • @420_gunna
    @420_gunna 6 years ago +7

    Great vid! re: the ending of the video, what do you think about creating something on AI safety or ethics?

    • @ArxivInsights
      @ArxivInsights  6 years ago +8

      Actually, that's a really good suggestion! Added to my pipeline :)

    • @AnonymousAnonymous-ht4cm
      @AnonymousAnonymous-ht4cm 5 years ago +1

      Have you seen Robert Miles' channel? He has some good stuff on AI safety, but posts rather infrequently.

  • @aytunch
    @aytunch 5 years ago

    Great videos and channel. Why don't you make any more videos? :(

  • @QNZE5
    @QNZE5 6 years ago

    Hey, very nice video :)
    What is the source for that video containing the boat in a behavioural circuit?

  • @sunegocioexitoso
    @sunegocioexitoso 6 years ago

    Awesome video

  • @hassanbelarbi5185
    @hassanbelarbi5185 5 years ago

    If someone wants to contact you directly, is there any way? I have some questions related to my thesis topic. Thanks in advance for your efforts.

  • @arjunbemarkar7414
    @arjunbemarkar7414 5 years ago +2

    Can you tell me where you find these articles?

    • @areallyboredindividual8766
      @areallyboredindividual8766 4 years ago

      Website appears to be Arxiv. Searching for DeepMind and OpenAI papers will yield results too

  • @samanthaqiu3416
    @samanthaqiu3416 5 years ago

    Make a video on the MuZero paper

  • @mountain_bouy
    @mountain_bouy 5 years ago

    you are amazing

  • @artman40
    @artman40 6 years ago +1

    What about delayed rewards?

  • @ThibaultNeveu
    @ThibaultNeveu 6 years ago

    Very nice video. Thank you :)

  • @herrizaax
    @herrizaax 5 years ago +1

    Nice video.
    I didn't get the last part: how does it learn faster if it sets virtual goals? If it gets the same reward for a virtual goal as for the real goal, then it will just learn it can shoot at any point which is made a goal but the real goal will never be found. If it gets a lower reward then it learns that shooting at goals gives a reward but it tells nothing about the proximity to the real goal. I'm obviously missing something here and I'm really curious what it is. Thank you :)

  • @Vladeeer
    @Vladeeer 6 years ago

    Can you do an example for RL?

  • @ycjoelin000
    @ycjoelin000 6 years ago

    What's the website you used at 2:23?

  • @jeffreylim5920
    @jeffreylim5920 5 years ago

    7:56 where the main point starts

  • @codyheiner3636
    @codyheiner3636 6 years ago

    Hi Xander, I made a Patreon account just for you! Keep it up!

    • @ArxivInsights
      @ArxivInsights  6 years ago +1

      Thx a lot Cody!! Getting this kind of support from people I've never met is such a great motivation to keep going! Many thanks :)

  • @vigneshamudha821
    @vigneshamudha821 6 years ago

    Brother, please explain capsule networks.

    • @ArxivInsights
      @ArxivInsights  6 years ago

      Aurélien Géron has a great video on CapsNets; no need to redo his video, it's already perfect! ua-cam.com/video/pPN8d0E3900/v-deo.html

    • @vigneshamudha821
      @vigneshamudha821 6 years ago

      +Arxiv Insights thanks bro

  • @dripdrops3310
    @dripdrops3310 6 years ago

    The number of views of your videos is not proportional to their quality. Looking forward to new content!

  • @viralblog007
    @viralblog007 6 years ago

    Can you suggest a link to a research paper on reinforcement learning?

  • @StevenSmith68828
    @StevenSmith68828 5 years ago +1

    I really like machine learning because it feels like training a Pokémon. Sure, it sometimes takes a very long time to get it set up, but yeah...

  • @MasterofPlay7
    @MasterofPlay7 4 years ago

    any coding videos?

  • @DistortedV12
    @DistortedV12 6 years ago

    Smart guy

  • @shivajbd
    @shivajbd 5 years ago +1

    15:29 Modi

  • @hesohit
    @hesohit 4 years ago

    I just want my computer to grind levels. Not take my job.

  • @WerexZenok
    @WerexZenok 6 years ago

    I don't see any social problems automation can cause.
    If you let the market be free, it will adjust itself as it always has.

    • @egparker5
      @egparker5 6 years ago

      I sort of feel the same way. We shouldn't make any public policy decisions until we see actual damage happening, and not just overexcited predictions. So far it seems DL/ML is creating net additional jobs and increasing average salaries. If that changes, then maybe it is time to think about new public policies. In the meantime, I would recommend retargeting the time spent worrying about AI into time spent learning about AI to increase your human capital.
      www.wsj.com/articles/workers-fear-not-the-robot-apocalypse-1504631505
      www.forbes.com/sites/bernardmarr/2017/10/12/instead-of-destroying-jobs-artificial-intelligence-ai-is-creating-new-jobs-in-4-out-of-5-companies

    • @WerexZenok
      @WerexZenok 6 years ago

      Agreed.
      And even imagining the worst scenario, where AI replaces all jobs, we will still be capable of owning bots and renting them out.
      We will live like gods on earth.

    • @NegatioNZor
      @NegatioNZor 6 years ago +1

      The question here though, is WHO will be owning these robots, and how will these jobs be distributed? For highly educated and resourceful people, this will probably not be a huge issue. But there are something like N million truck drivers in the US, which will have a much harder time adjusting. Going from blue-collar to white-collar is probably not as easy.

  • @wahabfiles6260
    @wahabfiles6260 4 years ago

    Why is his head bigger than the body? Alien?

  • @planktonfun1
    @planktonfun1 5 years ago

    big brain filter

  • @MD-pg1fh
    @MD-pg1fh 6 years ago +2

    Her?

  • @tsunamio7750
    @tsunamio7750 4 years ago

    VOLUME TOO LOW!!!

  • @Rowing-li6jt
    @Rowing-li6jt 5 years ago

    louder pls

  • @creativeuser9086
    @creativeuser9086 1 year ago

    what happened to this channel..

  • @loopuleasa
    @loopuleasa 6 years ago

    Some feedback on your video: trim your content and be more entertaining in the videos.
    Watch how Siraj does it.
    From my point of view, I dozed off a couple of times, even though the accuracy of the content is high.
    Basically, use fewer words, fewer images, less intro, less buildup, and focus more on the crux, while going faster to keep your audience on edge and curious.
    Hope my view is productive for you. Good luck.

    • @loopuleasa
      @loopuleasa 6 years ago

      Do it like an AI optimizer does it. Minimize and use simplicity as much as possible until you reach the goal: Communicate the idea you want to convey, in as little time and actions as possible.