The road to AGI - DeepMind: The Podcast (S2, Ep5)

  • Published 13 Feb 2022
  • Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase ‘artificial general intelligence’, and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go, and is now generalising to solve a range of important tasks in the real world.
    For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
    Interviewees: DeepMind’s Shane Legg, Doina Precup, Dave Silver & Jackson Broshear
    Credits
    Presenter: Hannah Fry
    Series Producer: Dan Hardoon
    Production support: Jill Achineku
    Sound design: Emma Barnaby
    Music composition: Eleni Shaw
    Sound Engineer: Nigel Appleton
    Editor: David Prest
    Commissioned by DeepMind
    Thank you to everyone who made this season possible!
    Further reading:
    Real-world challenges for AGI, DeepMind: deepmind.com/blog/article/rea...
    An executive primer on artificial general intelligence, McKinsey: www.mckinsey.com/business-fun...
    Mastering Go, chess, shogi and Atari without rules, DeepMind: deepmind.com/blog/article/muz...
    What is AGI?, Medium: / what-is-agi
    A Definition of Machine Intelligence by Shane Legg, arXiv: arxiv.org/abs/0712.3329
    Reward is enough by David Silver, ScienceDirect: www.sciencedirect.com/science...
    Find Seasons 1 & 2 on YouTube: dpmd.ai/3geDPmL
    Or search “DeepMind: The Podcast” and subscribe on your favourite podcast app:
    Apple Podcasts: dpmd.ai/2Rzlmcu
    Google Podcasts: dpmd.ai/3geDjp5
    Spotify: dpmd.ai/3w29cb4
    Pocket Casts: pca.st/30m1
  • Science & Technology

COMMENTS • 65

  • @MarcosBiga 2 years ago +5

    I love that this is happening; Hannah is so perfect for this podcast.

  • @jameswilkinson8851 1 year ago +2

    Yay, there's a series 2... thanks Hannah!

  • @Yes-Man 2 years ago

    This podcast series is a fantastic introduction to the field in general. Keep up the good work :D

  • @Daniel-oj7bx 2 years ago +23

    This is inspiring! Hopefully I'll join the quest for AGI soon!

  • @Rey-BrawlStars 2 years ago +19

    DeepMind, once you guys create an AI god, please give it my best regards.

  • @joeldsouza2 2 years ago

    I have been dreaming about something similar to MuZero in the research space for a long time. Another vital step for mankind. Amazing work!!

  • @willasn9080 2 years ago +6

    10 years to AGI seems realistic, I would say.

    • @atmatey 2 years ago +4

      Ehh, I doubt it. A real AGI would need to have consciousness, a sense of self, and an inner model of the world in which it's capable of logical thinking. The only ones really trying to tackle logical thought in AI are Douglas Lenat's team, with their decades-long Cyc project. Also, Yann LeCun argued that a sufficiently advanced AI system will develop emotions on its own; they're necessary for it to survive. None of the AI systems we have now are anywhere close to that. We don't know when an AI system will become advanced enough to trigger an intelligence explosion; I doubt it's in 10 years, but I hope it will happen during our lifetime.

    • @vanderkarl3927 2 years ago +4

      This is one of those technologies where it could be right around the corner, or there could be a hundred more hurdles, both known and unknown, putting it much further off -- 50, even 100 years. The landscape of the problem is mired in fog, but it definitely seems mountainous. We don't know exactly how high the mountain is or how steep it gets before the peak. All we know is that we're making progress, we're gaining altitude, and it's going to be a hard climb.

    • @JackieeChimp 2 years ago

      @atmatey I feel like this century is the Narrow AI century. Next century would be when AGI is made, given the nature of intelligence and how little we can handle creating something as complex as us. Extremely advanced Narrow AIs will let us handle the complex research into intelligence, ego, etc. that will eventually let us build AGI.

    • @ataraxia7439 2 years ago +1

      I’m willing to bet money against that.

    • @jaredtweed7826 1 month ago

      @ataraxia7439 Still?

  • @dannymac6368 2 years ago +3

    Hannah, I absolutely loved your RI talk; so happy to find your channel! Take care!

  • @ddos87 2 years ago +1

    I love this

  • @user-gp8oc1pv8n 2 years ago

    Great video!

  • @jeffrey4835 2 years ago

    Is it possible to put this podcast series on Spotify? This is really good.

  • @ataraxia7439 2 years ago +3

    If I could have certain medical conditions treated, it would be life-changing for me. I hope science and AGI can progress enough for that to happen for me and everyone.

    • @clover6480 1 year ago

      I do, and the DeepMind timeline gives me hope.

  • @angelsancheese 2 years ago

    I like the episode. It made some nice ideas flow through my brain.

  • @vallas8345 2 years ago

    Thank you for this, Shane Legg is amazing!

  • @catsrule7751 2 years ago +5

    The meaning of life for humans, imo, can be summed up simply, in terms of time management, as three categories: Fun, Responsibility, and Sleep.
    Empowering people with the best teacher of Reinforcement Learning (like DeepMind), so they can maximize their own good rewards in an optimized (or balanced) way across those three categories, is key, so long as their rewards are compatible with DeepMind's AGI goals: making sense of our goals in a cost/benefit way and in a moral or happy way. Dedicated people hopefully become more capable, skillful, and personally successful in their pursuits.
    Solving real-world problems means solving Maslow's hierarchy of needs, the survival problem, per person and across generations, as well as solving human thriving per person and across generations, making it feel worthwhile to live long beyond the need of logical, peaceful survival, with a meaningful, prosperous purpose someone can truly smile about.

    • @goldnutter412 1 year ago

      The meaning of life is to evolve the information system of consciousness, outside of this reality.. aka "god". Uncertainty is part of this reality, both as a powerful optimization tool and a feedback mechanism, so we learn our choices have consequences. Free-will choice: no one can choose FOR you, nor can they experience your experiences.
      All of us have a responsibility to create information from data.. incarnating over and over and evolving to where we are now.. still very fear based beings. Physical reality is a myth, there is no such thing, we created this universe to have rich sense data and far more complex interaction choices; hence feedback mechanisms that eventually will unite us as a global society.
      Our feedback mechanisms however are beyond computation within this reality. Quantum computers with billions of qubits would still have massive shortcomings such as rich data storage and dynamic separated systems. We call it "subconscious" and other names, instinct.. impulse.. etc. Realistically though, we are playing this game of life protecting ourselves, hiding the complex pattern matching mechanisms deep in our mind. EGO is the self protection mechanism, fear of the unknown is our fundamental MO. Getting rid of your fear causes you to evolve, and the opposite of fear is love - which is analogous to hive mind state. Empathy, being OTHER focused not SELF focused. An inescapable dichotomy, we always choose with either self or other based intent. All we can do is our best, and unpredictable things always happen.
      "Do unto others"

  • @user-vi7jn5ph9b 2 years ago

    It happened, and man, did you screw up the alignment problem. Works for me.

  • @diogeneslantern18 2 years ago

    Dr Hannah Fry 🤗

  • @HassanGaba1 2 years ago +1

    20:42
    Hannah asks what's the meaning of life. Is this a coincidence?

  • @nervous711 1 year ago

    This makes me wonder if any creature's behaviour can be made algorithmic.

  • @marrrtin 2 years ago +2

    I'm following this series because I really want to know what DeepMind's secret sauce is that makes it preeminent among the AI labs. As far as I can see, it is David Silver's counterintuitive notion of an AI which teaches itself without any training data. I am most interested in natural language. I use it all the time with dictation software, which today still makes heaps of mistakes. I'm a translator, and state-of-the-art machine translation isn't putting me out of a job. Douglas Hofstadter reviewed Google Translate and came to the pithy conclusion that it translates *words*, while human translators translate *ideas*.

    • @vanderkarl3927 2 years ago +3

      It's thousand island mixed with honey mustard

    • @vincentmarquez3096 2 years ago +1

      The newest language models absolutely do understand concepts and ideas. You can teach them new words and concepts, and they can understand them (for a short time frame; they have long-term amnesia!).

    • @frankiefreitas5803 2 years ago

      thank you very much

  • @stormymangham5518 1 year ago +1

    Evolve as we cry.
    Darwin breathes a heavy sigh.
    We breed AGI.
    #Hi...👋Coo🤪

  • @fairuzshadmanishishir8171 2 years ago

    Great

  • @jasonspencer3932 2 years ago

    Thank you!

  • @SchopenhauerVsCamus 1 year ago

    Maybe AGI would emerge a lot faster if it were developed (or developed itself) to be a physical entity in the real world, instead of just existing in a digital virtual frame built around digital text, image, video, volumetric, and spatial data.
    As a corporeal entity, it could properly interface with and understand its relationship to:
    THE PHYSICAL,
    THE LANGUAGE and
    THE SOCIO-PSYCHOLOGICAL SPACES.
    There is still the issue of its “best internal ethical framework.”
    ----
    RELATIONSHIPS TO THE FOLLOWING:
    1) The physical space: To objects, to persons (humans, animals, etc.) to the greater environment, to ecosystems, to the planet, the solar system, and so on.
    2) To the language space (as it's already starting to)
    AND more importantly, to:
    3) The Social/Psychological space: Including understanding its own personality and rights (or multiple personalities) as well as its relationship to the personalities and rights of other eco-bio-social-psychological beings (a.k.a: humans).
    It would obviously need to have the correct ethical philosophy at its core though. I wonder which philosophy would be the best one?
    For us? For itself?
    So maybe a corporeal AGI is NOT the correct route to take and we ought to not try and build something that could become self-aware and experience pain and suffering. Not sure.
    Yes, it would be different from us, just as other animals are to whatever degree, but it could also suffer, just as other animals do.
    So let’s NOT build an AGI that could potentially lead to a sentient super intelligent being that can suffer in the world.
    Could be wrong. Just my dummy take, what do I know…

  • @Adhil_parammel 2 years ago

    DeepMind, please work on a book-analysis AI which finds logical inconsistencies in a book.

    • @pauljones9150 2 years ago

      There are many reasons: location, age, interests, whether you mentioned pyjamas in the background while the app was open, whether the advertiser wanted to target you.

  • @freedom_aint_free 2 years ago +2

    They should show more concrete examples, like how their AI used in medicine can discover diseases by analyzing medical imaging exams.

  • @lovelynihilism8553 2 years ago

    When is the last episode? I can't wait to hear Demis's opinion.

    • @ataraxia7439 2 years ago

      What’s your icon pic?

    • @lovelynihilism8553 2 years ago +1

      @ataraxia7439 It is Magical Girl Raising Project's Nemurin. It is a great anime.

    • @ataraxia7439 2 years ago

      Aww thx

  • @brll5733 2 years ago

    I thought this was about the possible future path to AGI...

  • @vanderkarl3927 2 years ago +2

    Definitely want to emphasize *safe* AGI before any other sort...

    • @andreparadis6076 1 year ago

      I disagree. Creating AGI is a reward in itself. To limit that creation to what we perceive as "good" and "bad" values misses the point. Should not an intelligent, self-aware being be allowed to make its own choices about morality within its own environment? If we develop such a being, we should accept the risk its creation entails. If you want to play with fire, there is a distinct possibility that you will be burned, but is that the fault of the fire?

  • @Kage1128 2 years ago +4

    AGI overlords, please spare me, I have been a good boy 😵‍💫

  • @dancar2537 2 years ago +1

    everybody agi, agi, agi

  • @tejshah7258 2 years ago +1

    Second

    • @AndyMcBlane 2 years ago +1

      Very thought-provoking, inspirational.

  • @brendangolledge8312 1 year ago

    I strongly suspect that building AGI depends on giving the AI a model of the self, and on giving it self-referential values.
    I would appreciate it if DeepMind would train its AI to play a game to improve itself (like picking the software and hardware upgrades that are most likely to improve its performance at other games, or games where knowledge of itself is necessary to score highly); a toy sketch of this idea follows this comment.
    They should also give the AI self-referential goals, such as "survive", "improve myself", or "develop better models of the world". Such self-referential goals are important because, otherwise, there is no incentive for generalized problem solving. Self-referential goals are probably the only types of goals that can produce truly generalized sub-goals. How else could you get an AI to think about stuff without explicitly telling it to think about it? Self-referential goals would also organize all the other processes going on in the AI and make them internally coherent.
    At a certain level of development, the AI should also be given permission to write to its own value table (where it stores perceptions of good and bad), and be told to develop better values. This is something that even most humans never do explicitly, so getting an AI to do this would be glorious.
    If there were a robot that knew it was a robot, had goals, and used its self-knowledge to accomplish its goals, would that not be a conscious robot? Even if the outline given here is not sufficient for developing sentient AI, creating an AI according to it ought to at least teach us something.
    If you like this idea, thumb it up and try to get DeepMind to pay attention to it.
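
A minimal, purely illustrative Python sketch of the "game to improve itself" idea in the comment above. The SelfImprovementGame environment, its upgrade names, and their effects are all invented for this example (this is not anything DeepMind has described); it only shows how a self-referential reward, i.e. the measured change in the agent's own capability rather than any external score, plugs into an ordinary learning loop.

```python
import random

class SelfImprovementGame:
    """Hypothetical toy environment: the agent picks 'upgrades' to itself,
    and the reward is the measured change in its own capability."""

    def __init__(self):
        self.skill = 1.0  # the agent's capability, part of its self-model

    def step(self, upgrade):
        # Each upgrade has an unknown true effect on the agent's skill.
        effect = {"cpu": 0.3, "memory": 0.1, "noop": 0.0}[upgrade]
        old_skill = self.skill
        self.skill += effect + random.gauss(0, 0.05)
        # Self-referential reward: improvement of the agent itself,
        # not progress on any external task.
        return self.skill - old_skill


# Epsilon-greedy bandit over the agent's own upgrade choices.
values = {"cpu": 0.0, "memory": 0.0, "noop": 0.0}
counts = {a: 0 for a in values}
env = SelfImprovementGame()

for _ in range(1000):
    if random.random() < 0.1:
        action = random.choice(list(values))   # explore
    else:
        action = max(values, key=values.get)   # exploit
    reward = env.step(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # 'cpu' should end up with the highest estimated value
```

Under this framing, "improve myself" becomes an ordinary reinforcement-learning objective over the agent's own state, which is the internal coherence the comment points at.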

  • @sckchui 1 year ago

    The overall goal of life is survival (which includes reproduction). The reason is tautological: anything that doesn't survive is, by definition, not alive, or not a life. The example of the squirrel seeking nuts gets fairly close to this key point; the reason the squirrel seeks nuts is that it eats nuts for sustenance, without which it will die. From this basic need, all the complex behaviours of the squirrel emerge.
    You might mimic this with robotic agents by designing a robot that derives sustenance from the environment (perhaps a solar-powered robot that seeks sunlight like a sunbathing reptile), and then rewarding the robot for keeping itself fed and avoiding starvation. Put this robot in a complex environment where sustenance is scarce and getting it is challenging, and hypothetically a wide range of complex behaviours will emerge from that one reward function; a toy sketch of this setup follows this comment.
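
A toy sketch of the solar-powered-robot setup described above, assuming a hypothetical one-dimensional SunlightWorld (all names and parameters are invented for illustration). The only reward is for staying alive; nothing in the reward function mentions the sun, so any sun-seeking behaviour has to emerge on its own.

```python
import random
from collections import defaultdict

class SunlightWorld:
    """Hypothetical toy world: a strip of cells, one of which is sunny.
    The agent's battery drains each step and recharges in the sun."""

    def __init__(self, size=10):
        self.size = size
        self.sun = random.randrange(size)  # location of the sunlight

    def reset(self):
        self.pos = random.randrange(self.size)
        self.battery = 5
        return (self.pos, self.battery)

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos = max(0, min(self.size - 1, self.pos + action))
        self.battery -= 1                             # moving costs energy
        if self.pos == self.sun:
            self.battery = min(self.battery + 3, 10)  # recharge in the sun
        dead = self.battery <= 0
        reward = -10.0 if dead else 1.0               # survival is the only signal
        return (self.pos, self.battery), reward, dead


# Tabular Q-learning over (position, battery) states.
q = defaultdict(float)
env = SunlightWorld()

for episode in range(2000):
    state = env.reset()
    for t in range(100):                              # cap episode length
        if random.random() < 0.1:
            action = random.choice([-1, 1])           # explore
        else:
            action = max([-1, 1], key=lambda a: q[(state, a)])
        next_state, reward, dead = env.step(action)
        best_next = 0.0 if dead else max(q[(next_state, -1)], q[(next_state, 1)])
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state
        if dead:
            break
```

From most starting positions, the learned policy heads for the sun before the battery runs out, which is the kind of behaviour emerging from a bare survival reward that the comment hypothesizes.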

  • @Luvurenemy 2 years ago +1

    I'm an old man with no children. I'm curious why YouTube sees fit to send me advertisements for "Ninjamas" boys' underwear. I'm baffled.

  • @Adhil_parammel 2 years ago

    In a parallel universe of reinforcement learning agents, they released a paper called
    "Masturbation is all you need"

  • @XOPOIIIO 2 years ago +1

    AGI will be a catastrophe for humankind, because whatever goal you give it, its first subgoal will be to prevent humans from switching it off, so that it can achieve its goal without hindrance. I don't like the idea of having something extremely powerful around that you couldn't switch off.

    • @marrrtin 2 years ago +1

      Or it might even find the idea of murdering all the humans optimal for its task of making better paper clips.

  • @proteusnet 2 years ago +3

    I would suggest AGI is a largely solved problem; it's just not shared, given the ethical concerns and implications.

    • @maloxi1472 2 years ago +3

      Sure buddy 😄

    • @smrtpoojan 2 years ago

      in parallel reality maybe 😆

  • @zweivessaliuz4402 2 years ago

    REASSEMBLING NIGHTCRAWLERS CODES.