Superhuman AI Cracked An Impossible Game! | DeepNash, Explained

  • Published 21 Dec 2022
  • An explanation of DeepMind's DeepNash and what it means for us.
    🔔 Subscribe for more stories: www.youtube.com/@underfitted?...
    📚 My 3 favorite Machine Learning books:
    • Deep Learning With Python, Second Edition - amzn.to/3xA3bVI
    • Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - amzn.to/3BOX3LP
    • Machine Learning with PyTorch and Scikit-Learn - amzn.to/3f7dAC8
    Twitter: / svpino
    Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
  • Science & Technology

COMMENTS • 40

  • @PedroAChagas
    @PedroAChagas 1 year ago +4

    Man, your channel is amazing!! Keep up the good work and I'm sure it'll be HUGE. It baffles me how it isn't yet.

  • @yantran-forfuture373
    @yantran-forfuture373 1 year ago +5

    I wasn't able to learn more about AI today, but I watched the whole video with great attention. That's why I subscribed to you ♥♥

  • @CyrilleC
    @CyrilleC 1 year ago +1

    You deep nailed it again.

  • @juneddavada
    @juneddavada 1 year ago +2

    You are really a great person ❤; you are spreading very good knowledge about what is going on in the AI world.
    I just want to ask: how are you able to collect all this information and stay up to date at the same time?

  • @prajwalsyallur712
    @prajwalsyallur712 1 year ago +1

    Wow! Great update. Thanks!

  • @aaronprindle385
    @aaronprindle385 1 year ago +2

    Thanks for this, amazing job. +1 for making a video on Nash Equilibrium

  • @viddeshk8020
    @viddeshk8020 1 year ago +1

    Yes, game theory and the Nash equilibrium are indeed good for reinforcement learning.

  • @juwonkim1782
    @juwonkim1782 1 year ago +1

    How is it different from Counterfactual Regret Minimization (CFR)? CFR combined with deep learning is a well-known solution for imperfect-information games such as poker, and it also guarantees a Nash equilibrium in two-player zero-sum imperfect-information games. Is DeepNash a different approach, or just a case study of CFR applied to Stratego?

  • @dimasveliz6745
    @dimasveliz6745 1 year ago +1

    Lovely!! I'd love to see its benchmarks. Are you aware of any paper with those?

    • @underfitted
      @underfitted  1 year ago +2

      Yeah, check out the DeepNash paper (Google it) in Science.

  • @chavdadeep9165
    @chavdadeep9165 1 year ago +1

    Really good explanation of DeepNash 👍🏻

  • @rorodog27
    @rorodog27 1 year ago

    Wow! This is amazing stuff

  • @kambizazimi4898
    @kambizazimi4898 3 months ago

    How could we access DeepNash's games, or even play against it?!

  • @AshishSharma-bc4ut
    @AshishSharma-bc4ut 1 year ago

    You've mentioned that we can use this model to solve real-life problems like traffic prediction. But how exactly can we use its algorithm? I mean, since it's newly released, how would we implement code to apply it to a real-life dataset?

  • @sharmaanuj334
    @sharmaanuj334 1 year ago +1

    Can you do a video on how contrastive learning works?

  • @BrograckgroMalog
    @BrograckgroMalog 1 year ago

    I could see this being useful in wargaming

  • @blj9793
    @blj9793 1 year ago +2

    Would it be a problem if DeepNash learned from itself but then ended up playing a worse opponent and fared poorly because it assumed the opponent would make optimal decisions?

    • @underfitted
      @underfitted  1 year ago +2

      DeepNash tries to optimize for a Nash equilibrium regardless of what the other player does. It's not trying to "copy" the other player's strategies.
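For intuition, the simplest learning dynamics with this property can be sketched in a toy game. The snippet below runs fictitious play on matching pennies, a 2x2 zero-sum game; each player repeatedly best-responds to the opponent's empirical average strategy, and the average strategies converge to the Nash equilibrium (0.5, 0.5). This is only an illustrative sketch, not DeepNash's actual algorithm (the paper uses a different method, Regularized Nash Dynamics).

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum):
# rows = row player's actions {heads, tails}, cols = column player's actions.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Empirical action counts, seeded with one arbitrary action each.
row_counts = np.array([1.0, 0.0])
col_counts = np.array([0.0, 1.0])

for _ in range(100_000):
    # Row player best-responds to the column player's average strategy.
    col_strategy = col_counts / col_counts.sum()
    row_counts[np.argmax(A @ col_strategy)] += 1

    # Column player minimizes the row player's expected payoff.
    row_strategy = row_counts / row_counts.sum()
    col_counts[np.argmin(row_strategy @ A)] += 1

print(row_counts / row_counts.sum())  # ≈ [0.5 0.5]
print(col_counts / col_counts.sum())  # ≈ [0.5 0.5]
```

In two-player zero-sum games these time-averaged strategies are guaranteed to converge to a Nash equilibrium, which is why neither player can be exploited in the long run, regardless of what the opponent does.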

  • @learningwills8621
    @learningwills8621 1 year ago

    Yo, good job, keep at it! Small suggestion: don't put stress on every sentence, as it can be a bit much to listen to at times.

  • @iamlegend3927
    @iamlegend3927 1 year ago

    not superhuman but definitely impressive

  • @jesussaeta8383
    @jesussaeta8383 1 year ago

    Awww man truly beautiful, and you did ok too dad….

  • @DeeJayCzy
    @DeeJayCzy 1 year ago

    Awesome channel...
    But... Deep Blue won with simple brute force. I don't think it should be considered an algorithm that played chess. It only showed high computing power.

    • @underfitted
      @underfitted  1 year ago

      Well, computing alone doesn’t win games. You need an algorithm, regardless of how much brute force it uses.

    • @DeeJayCzy
      @DeeJayCzy 1 year ago

      @@underfitted After each move, the remaining number of possible game scenarios decreased significantly. The program always made a move by selecting only the scenarios that led to its victory. It played completely "mindlessly". That is the point...

    • @sayamqazi
      @sayamqazi 1 year ago

      @@DeeJayCzy Well, it was only almost brute force, because the solution space was too large, and it still is too large even for the fastest and biggest supercomputers.

  • @VivaPodemos
    @VivaPodemos 1 year ago

    If your son is like his father, we will have double the chance of a better world! ;)

  • @TheNettforce
    @TheNettforce 1 year ago

    Yes please on Nash Equilibrium

  • @vishalteotia1384
    @vishalteotia1384 1 year ago

    6:00

  • @jespermikkelsen7553
    @jespermikkelsen7553 1 year ago +1

    A bit scary. Could we send a robot with DeepNash inside to Moscow? Goal: find Putin and put him on the train to the war crimes tribunal in The Hague. That's a tough one.

  • @j21m
    @j21m 1 year ago

    Good video, but I hate your conclusion.
    This looks like a central planner's utopia, but at the same time like a citizen's dystopia and an Orwellian nightmare.