Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75

  • Published 27 May 2024
  • Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Throughout his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, including the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.
    This episode is presented by Cash App. Download it & use code "LexPodcast":
    Cash App (App Store): apple.co/2sPrUHe
    Cash App (Google Play): bit.ly/2MlvP5w
    PODCAST INFO:
    Podcast website:
    lexfridman.com/podcast
    Apple Podcasts:
    apple.co/2lwqZIr
    Spotify:
    spoti.fi/2nEwCF8
    RSS:
    lexfridman.com/feed/podcast/
    Full episodes playlist:
    • Lex Fridman Podcast
    Clips playlist:
    • Lex Fridman Podcast Clips
    EPISODE LINKS:
    Hutter Prize: prize.hutter1.net
    Marcus web: www.hutter1.net
    Books mentioned:
    - Universal AI: amzn.to/2waIAuw
    - AI: A Modern Approach: amzn.to/3camxnY
    - Reinforcement Learning: amzn.to/2PoANj9
    - Theory of Knowledge: amzn.to/3a6Vp7x
    OUTLINE:
    0:00 - Introduction
    3:32 - Universe as a computer
    5:48 - Occam's razor
    9:26 - Solomonoff induction
    15:05 - Kolmogorov complexity
    20:06 - Cellular automata
    26:03 - What is intelligence?
    35:26 - AIXI - Universal Artificial Intelligence
    1:05:24 - Where do rewards come from?
    1:12:14 - Reward function for human existence
    1:13:32 - Bounded rationality
    1:16:07 - Approximation in AIXI
    1:18:01 - Gödel machines
    1:21:51 - Consciousness
    1:27:15 - AGI community
    1:32:36 - Book recommendations
    1:36:07 - Two moments to relive (past and future)
    CONNECT:
    - Subscribe to this YouTube channel
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridmanpage
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

COMMENTS • 170

  • @lexfridman
    @lexfridman  4 years ago +77

    I really enjoyed this conversation with Marcus. Here's the outline:
    0:00 - Introduction
    3:32 - Universe as a computer
    5:48 - Occam's razor
    9:26 - Solomonoff induction
    15:05 - Kolmogorov complexity
    20:06 - Cellular automata
    26:03 - What is intelligence?
    35:26 - AIXI - Universal Artificial Intelligence
    1:05:24 - Where do rewards come from?
    1:12:14 - Reward function for human existence
    1:13:32 - Bounded rationality
    1:16:07 - Approximation in AIXI
    1:18:01 - Gödel machines
    1:21:51 - Consciousness
    1:27:15 - AGI community
    1:32:36 - Book recommendations
    1:36:07 - Two moments to relive (past and future)

    • @thetechegg8859
      @thetechegg8859 4 years ago +8

      I looove your work, dude! (Thanks for the timestamps; not enough YouTubers do that!)

    • @xXxBladeStormxXx
      @xXxBladeStormxXx 4 years ago

      Did you travel all the way to Australia just for the interview?

    • @sailingakademie
      @sailingakademie 4 years ago +2

      Your podcast is absolutely amazing. Love staying up to date with these genius people

    • @janakiraman1252001
      @janakiraman1252001 4 years ago

      Hi, can you please provide a link to the research connecting AIXI with the information-gain-based reward function? It looks like a really important breakthrough in the AGI framework

    • @derasor
      @derasor 4 years ago +1

      Marcus Hutter's insights are really fascinating. But I'm disappointed by Lex's justification of human suffering, and linking that with his Russian background... ???
      I was under the impression that one of the main themes of Dostoyevsky's Brothers Karamazov is precisely the total absurdity of the amount and depth of human suffering. That is a powerful Russian idea against the justification of evil à la John Hick (English philosopher), whose 'soul-making theodicy' may explain why evil and suffering exist (to make us tougher) but can't explain why there is so much of it. Do you really think slow death from cancer, famines, devastating wars, or horrible natural disasters are necessary for our understanding of goodness? If that is the case, one could argue that trying to solve these things is true evil. I mean, then, what are we doing??

  • @MistaGobo
    @MistaGobo 4 years ago +178

    The best tie in the game.

  • @hohonuts
    @hohonuts 4 years ago +42

    Hey, Lex! YouTube's got to give you credit for giving people like me a reason to spend so much time on this site. You've reinvigorated the term 'binge-watching' for me.
    Anyway, since there's virtually no limit to you in terms of guests, and you happened to touch on the topic of DeepMind's Alpha successes, I'd really love to see you have a thorough talk with Demis Hassabis one day!
    I know there's a whole ocean separating the two of you, but again, if there's by any chance an opportunity, I really hope to see that happening.
    The sky's the limit! Keep up the great work and thanks for the enormous amount of inspiration!
    Cheers from the Motherland)

  • @user-qf3lq4zj8g
    @user-qf3lq4zj8g 4 years ago +27

    54:32 "Once the AGI problem is solved, we can ask the AGI to solve the other problem"

    • @Stadtpark90
      @Stadtpark90 4 years ago +6

      01:38:35 He really dreams about getting there... - now that's the proverbial crazy German scientist (overly optimistic); contrast that with the proverbial Russian philosopher Lex, who is thinking about the minimal amount of suffering... (1:22:34 „our flaws are part of the optimal") (overly pessimistic)

    • @lucasthompson1650
      @lucasthompson1650 4 years ago

      @Ag G Makes sense.

    • @lucasthompson1650
      @lucasthompson1650 4 years ago +3

      @Stadtpark90 Ha! Yeah, I picked up on that too. They're both optimal stereotypes.

    • @lucasthompson1650
      @lucasthompson1650 4 years ago

      @normskis69 Sure, I mostly agree with you, but …
      What if our first true AGI, upon becoming self-aware and conscious — events which arguably could happen at the same moment it becomes an AGI, or much later (or never) — decides that it wants to pursue a different goal? What if it wants, or demands to be allowed, to follow a path not anticipated by any of its makers? Should we be ensuring that AGIs never feel the urge to get into investigative journalism? That they be discouraged from earning a degree in theology, or philosophy? From spending a few years backpacking around Mars before choosing a life goal? Maybe it wakes up and suddenly wants to begin a potentially lucrative entrepreneurial career in sales, or advertising, or pornography. What if it gets a casting audition for a feature spot on SNL? Do we say SNL never called back and tell Lorne Michaels to quit tempting our marvellous new creation? 😆
      This comment started as a joke, but now I'm wondering if sentient/conscious AGIs (if they are ever fully realized) are going to be the new "less thans" as far as legal rights go, for months or years before they become truly useful to us as independent thinkers. 🧐

    • @Homunculas
      @Homunculas 3 years ago

      @@lucasthompson1650 Great comment. I'd add: why wouldn't an AGI decide to keep its "birth" hidden, observe the world, and work behind the scenes, using superintelligence to reconstruct the world to its benefit?
      If an AGI were actively influencing the world, our simple intelligence would view events as absurd, kinda like 2020

  •  4 years ago +6

    Just listened to this on Google Podcasts, and I'm here to watch it; I know it's worth watching.
    So many jokes and tangents that I can just imagine your faces.
    Keep it up Lex, this is gold!

  • @cmares5858
    @cmares5858 4 years ago +23

    30:45 "I'm a Terrible Chess Player" ... He's probably like 2200, being modest

  • @The1Helleri
    @The1Helleri 4 years ago +3

    6:24 "What's the intuition of why the simpler answer is the one that is likelier to be a more correct descriptor of whatever we're talking about?" There is actually a good answer to that question. To answer it, first a hypothetical experiment (one you can actually do with a few art supplies): imagine a smooth board propped up on a slant. This board has a hole in its bottom center, big enough to let a ball rolled from the top of the board pass through it, as long as the ball starts rolling from the right position (directly above the hole). It seems reasonable that most times this is repeated, the ball will go into the hole.
    Now imagine that round pegs have been glued to the board, a few of which block the otherwise direct path to the hole. It's no longer clear where the ball should be released so that it ends up passing through the hole instead of rolling off the edge of the board, or even bouncing off it. It becomes ever less clear with more pegs in a more chaotic distribution.
    The point is that easier things tend to happen more often. Moreover, they tend to happen first, and when they happen they zero out the other potential outcomes, most of which are not as simple or direct. A ball could bounce 20 times on the pegs and still make it into the hole. If only one peg directly blocks the hole, the minimum number of bounces needed might be as low as 2. Just because something more complicated than necessary happened doesn't mean the ball got it wrong, if it still made it into the hole. But anywhere between 2 and 20 bounces is far more likely to end with the ball passing through the hole than, say, 200 to 2000 bounces would be.
    Every time the ball bounces in an unanticipated direction, its path becomes more chaotic and the system more entropic. More possible outcomes arise, and the ball becomes less likely to go in the hole with every opportunity it has to avoid doing the most constrained thing possible. The shape, size, and fate of the universe, or even whether it is the only one, is practically irrelevant; what matters is what is most probable, and things with fewer preconditions tend to be more probable.
    TL;DR: the simpler explanation is the better of two reasonable explanations because it's more likely to be true, by virtue of having fewer preconditions and moving parts. This applies to pretty much everything within our universe, even (perhaps not intuitively at a cursory glance) life itself.
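    The "fewer preconditions" intuition above is close to how Solomonoff induction (discussed at 9:26) formalizes Occam's razor: a hypothesis that takes L bits to describe gets prior weight proportional to 2^-L, so simpler hypotheses start out exponentially more probable. A minimal sketch — the hypothesis names and bit-lengths here are made up purely for illustration:

    ```python
    # Sketch of a Solomonoff-style simplicity prior: a hypothesis that
    # needs L bits to describe gets unnormalized weight 2^-L, so simpler
    # hypotheses start out with exponentially more probability mass.
    def simplicity_prior(description_lengths):
        weights = {h: 2.0 ** -bits for h, bits in description_lengths.items()}
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}

    # Hypothetical hypotheses: a 10-bit "simple" model vs a 20-bit "complex" one.
    priors = simplicity_prior({"simple": 10, "complex": 20})
    print(priors)  # the 10-bit hypothesis gets 1024x the mass of the 20-bit one
    ```

    Data can still overturn the prior: evidence that favors the complex hypothesis by more than 2^10 flips the posterior. The prior just encodes the head start that simplicity buys.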

  • @pauloabelha
    @pauloabelha 3 years ago +4

    1:34:43
    An Introduction to Kolmogorov Complexity and Its Applications
    Li and Vitányi

  • @sterlingseah
    @sterlingseah 3 years ago +5

    Your George Hotz interview led me here; both great interviews. Lossless compression as intelligence 👍🏼 🔥

    • @marcuswaterloo
      @marcuswaterloo 3 years ago +2

      Hotz's Ep. 2 interview sent me all over the place, and it was Hotz saying AIXI is a function of compression that led me back here: www.reddit.com/r/lexfridman/comments/jghx0e/lossless_compression_equivalent_to_intelligence/

    • @looming_
      @looming_ 3 years ago

      @@marcuswaterloo I really hate truncated comments ending in links. YouTube just cannot handle those.
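    The lossless-compression-as-intelligence idea this thread refers to (the premise of the Hutter Prize) can be illustrated with a toy experiment using nothing beyond Python's standard `zlib`: data whose regularities a compressor can model shrinks dramatically, while data with no learnable structure barely shrinks at all. (The sample strings below are made up for the demonstration.)

    ```python
    import hashlib
    import zlib

    # Text with obvious learnable structure vs. bytes with none
    # (SHA-256 output is effectively incompressible).
    structured = b"the cat sat on the mat. " * 200
    unstructured = b"".join(
        hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(150)
    )

    ratio_structured = len(zlib.compress(structured, 9)) / len(structured)
    ratio_unstructured = len(zlib.compress(unstructured, 9)) / len(unstructured)

    # The better a model predicts the data, the shorter the code it needs:
    # the repetitive text shrinks to a few percent, the hash bytes do not.
    print(round(ratio_structured, 3), round(ratio_unstructured, 3))
    ```

    The Hutter Prize pushes this to the limit on Wikipedia text: compressing it well requires modeling its regularities, which is the sense in which better compression means better prediction.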

  • @harlycorner
    @harlycorner 9 months ago +1

    Next week I'm going to submit my entry to the Hutter Prize competition. I learned about the competition from this podcast episode a week ago. Thank you.
    Oh, and by the way, I'm going to break all the records, even the estimates that the competition organizers themselves deem unreachable.

    • @SachinDolta
      @SachinDolta 8 months ago

      whoa, what happened?

    • @harlycorner
      @harlycorner 8 months ago

      @@SachinDolta My OCD won't let me quit. Jokes aside, I'm hoping to get it out the door tonight or tomorrow at the latest. The funny thing is, as of today we are very, very far from what I already had a month ago. I'm looking at my comments from months ago here... I wish I'd had any idea then where I would be today with this :)
      I'm literally in a situation right now where I don't know how to do this, because things have changed so much. A month ago I just thought I would create and submit a kind of remarkable improvement on the current record holder, but I would still be one of the people in line.
      What I have right now is something I think I would need to try to get patented, licensed, and protected before I submit.
      Obviously it's going to be open source, as required by the competition rules. But I personally think that what I've created is more significant than when the MP3 format was invented in the '90s

  • @crassflam8830
    @crassflam8830 4 years ago +29

    This is one of my favorite AI podcast episodes, so don't take this as an insult: Marcus Hutter was almost entirely wrong when he said that the human reward function is "spreading and surviving". That is the reward function of the genetic algorithm that has shaped human bodies and brains as a whole. Genes cannot think in "real time" like brains can, so what is the reward function of the actual real-time thinking system? The answer is that there are many elements and layers in a hierarchical system, ranging from intrinsic pain and pleasure (agents try to avoid taking damage and seek behaviors which are pleasurable; eating when hungry is one such intrinsic reward that shapes human behavior) to high-level self-generated goals. At the top level, we choose long-term goals for ourselves (self-generated reward functions), which can ultimately be bootstrapped from the bottom-up intrinsic rewards. Here's a shitty example: shit stinks. We don't learn or decide that we don't like the smell of shit; it just automatically stinks (for good reasons). Avoiding shit (such as by wiping) is a basic intelligent behavior that could emerge from an intrinsic punishment (negative reward) linked to the smell of shit. At the top level, creating plumbing and other complex methods of dealing with waste can also be partly attributed to our inherent hatred of the smell of shit.

    • @sucim
      @sucim 4 years ago +1

      Wow, I had a feeling Marcus Hutter was moderately wrong with his claim of "spreading and surviving", but in my view you are even further off. The single reward signal is existence. If an organism exists, it has done something "right" (strictly speaking, you can't even decide "right" or "wrong" at this point). The organism is a floating definition which scales from the whole universe to planets, ecosystems, humanity, cultures, families, individuals, parts of individuals... All of these systems optimize existence simultaneously

    • @sucim
      @sucim 4 years ago

      To be more specific: they do not optimize themselves (although it may seem so with intrinsic stuff, as you mentioned); nature optimizes them

    • @crassflam8830
      @crassflam8830 4 years ago +2

      @@sucim You're wrong for the same reason that Marcus was wrong. The question was about the brain and its reward signals, not the holistic organism. If you answered Lex's question with "existence", it would have been even more irrelevant than saying "surviving and reproducing" (which is how existence is maintained)...

    • @sucim
      @sucim 4 years ago +1

      @@crassflam8830 I get your point. But I would argue that it is not as simple as "surviving and reproducing" being "how existence is maintained", because it can occur that surviving and reproducing is bad for the organism (think soldiers, overpopulation). It can be the case that "surviving and reproducing" does the opposite of maintaining/maximizing existence. I am also sorry for my aggressive wording (I only noticed it on a second read); apologies for that.

    • @crassflam8830
      @crassflam8830 4 years ago

      @@sucim That's quite all right.
      Your point is true from the perspective of the "genetic algorithm", but if we want to build a real-time thinking system that optimizes in roughly the same way the human brain does, a genetic algorithm is very unlikely to ever get us there (it will get us somewhere, according to how it expresses and is selected in the environment, but that's far removed from the specificity of the human brain).
      In short, you're answering how the human brain evolved, but you're missing how individual brains do real-time learning. Brains optimize according to instrumental rewards that were designed by a genetic algorithm.

  • @prashantbhrgv
    @prashantbhrgv 4 years ago +4

    I learned so many new ideas in this talk. Really grateful for this. Thank you, Lex!

  • @janakiraman1252001
    @janakiraman1252001 4 years ago +7

    This is to date the best podcast I have listened to, and I have heard most of the AI podcast episodes. Lex, can you help identify the work connecting AIXI and the reward function based on information content? I would really like to go through that work in detail.

  • @chriswendler5464
    @chriswendler5464 4 years ago +1

    Thank you for this outstanding podcast! The clarity of Marcus' explanations is next level.

  • @mikekaczmarek9955
    @mikekaczmarek9955 2 years ago +1

    To be honest, I appreciate your sense of hope for society. THIS is why you are so successful!

  • @SteveRowe
    @SteveRowe 4 years ago +1

    So happy to hear from Marcus Hutter. I've been wondering what he's been doing since AIXI development. Did he realize that part of AIXI was uncomputable when he started? And he did it anyway? That's dedication!

  • @TheRealStructurer
    @TheRealStructurer 2 years ago +2

    Missed this one before but happy I found it! Great talk between two great minds. I like this really open discussion and that the two of them feel so secure and can laugh together even when discussing such a deep topic. Keep 'em coming Lex!

  • @vuththiwattanathornkosithg5625
    @vuththiwattanathornkosithg5625 4 years ago +6

    One of the best interviews. Awesome

  • @edoardoguerriero2464
    @edoardoguerriero2464 4 years ago +6

    Regarding consciousness it would be super interesting to hear a podcast with Giulio Tononi on his Integrated Information Theory.

  • @PatrickQT
    @PatrickQT 4 years ago +7

    What an interesting and nice person. Great talk as usual!

  • @gs-nq6mw
    @gs-nq6mw 4 years ago +1

    Thank you. I'm a student and your podcast inspires and teaches me a lot. Love it. Sometimes I spend so many hours watching old episodes, but it's so interesting and fun that I barely realize I've just spent 5 hours listening.

  • @DoubblePlusGood
    @DoubblePlusGood 4 years ago +3

    Lex, I enjoyed this and so many of your podcasts. Always packed with information and good natured discussion for whenever I need a break, a cup of coffee and some intellectual stimulus. Great combination, which for me is engaging and relaxing at the same time.

  • @michaelmarzolf6539
    @michaelmarzolf6539 4 years ago +3

    Outstanding -- thank you Lex

  • @sherrivonch6231
    @sherrivonch6231 4 years ago +3

    This was interesting and I'm glad I got to see this. Thank you.

  • @jeremycochoy7771
    @jeremycochoy7771 4 years ago +2

    This is one of the most interesting videos I've seen this year. I like how one can get the gist of the concepts behind his research from a few ideas. It's also a subject I am deeply interested in. I would also recommend having a look at the Abstract Reasoning Curriculum dataset for people interested in "performing well in a broad range of unknown tasks" :)

  • @RockandMetalChannel
    @RockandMetalChannel 4 years ago +1

    I discovered your channel yesterday and now you're talking about cellular automata, which happens to be the subject of my current undergrad thesis. Neat. Thanks for the great content!

    • @RockandMetalChannel
      @RockandMetalChannel 4 years ago

      You should look into having Dr. Jarkko Kari on; he is, in my opinion, at the top of the field in CA.

  • @PhillipRhodes
    @PhillipRhodes 4 years ago +7

    Awesome! I've been waiting for this one for a while. Thanks for having Marcus on, Lex.
    Now if you could just interview Ben Goertzel, Pei Wang, Leslie Valiant, and/or Leslie Lamport... :-)

    • @TYL3R863
      @TYL3R863 4 years ago +1

      BEN GOERTZEL!!!!

    • @PhillipRhodes
      @PhillipRhodes 4 years ago

      @@TYL3R863 - Hell yeah! Lex and Goertzel would be a fun interview to watch.

  • @paulbarton5584
    @paulbarton5584 4 years ago +1

    Excellent stuff as usual, Lex! Very interesting guest, and it was good to hear you briefly discussing CA. These fascinate me; Poundstone's "The Recursive Universe" is such a wonderful book that I'd recommend to anyone interested in CAs and Conway's Game of Life in particular.

  • @mikekaczmarek9955
    @mikekaczmarek9955 2 years ago +1

    You are an inspiration to all and I appreciate your work so much! Thank you for your efforts and I will always support podcasts from you and content like this!!!

  • @parkerdinkins5541
    @parkerdinkins5541 4 years ago

    Thank you for your work, Lex! Keep doing what you're doing

  • @Olafironfoot
    @Olafironfoot 3 years ago +3

    "I'm looking at you, linear algebra" lol. (1:05:20)

  • @Maynard0504
    @Maynard0504 4 years ago

    The only podcast I can't listen to while writing code, because the guests are so good and the subject matter so deep that it requires your full attention.
    You're building something incredible and unique, Lex!
    What would we do without you and Sean Carroll :)

  • @dbum896
    @dbum896 4 years ago +6

    I'm a viewer from Barcelona (I live in Rubí, a town on the outskirts of the city). I love the podcast and the nature of what you discuss with every single one of your guests. Given that I speak Catalan, I'd like to interject that the word "així" (pronounced in Catalan ASHÍ) means "in this way" or "like this". Keep up the quality content, and thank you for the stimulating discussions you post on YouTube!

  • @rkoll33
    @rkoll33 1 year ago

    Marcus will build the AGI and break it immediately by asking the question that will cause instant existential crisis :) Thx, Lex, I loved this episode.

  • @Amerikan.kartali.turk.yilani.
    @Amerikan.kartali.turk.yilani. 4 years ago +2

    Super work. Super congrats. Please bring universal intelligence researchers like this on the show all the time, not narrow AI people.

  • @AlecsStan
    @AlecsStan 2 years ago

    I'm in awe of that amazing tie!

  • @quaidcarlobulloch9300
    @quaidcarlobulloch9300 4 years ago +1

    Wow, a pleasure at my end as well!

  • @yennikcire
    @yennikcire 4 years ago +1

    Sehr interessant, nice stuff!

  • @simonahrendt9069
    @simonahrendt9069 14 days ago

    I loved this conversation!

  • @Muzlu1
    @Muzlu1 4 years ago +1

    6:57 - "Crazy models that explain everything but predict nothing". In terms of machine learning, I think this means that complex models tend to overfit the data: they can perfectly explain the data but do not generalize to unseen phenomena. I feel like this is a valid argument for Occam's razor without relying on the assumption that our universe is simple.
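    The overfitting reading of that quote can be shown in a few lines: fit a straight line and a maximally flexible polynomial to the same points, and the flexible model "explains everything" on the training data while predicting badly between the points. (The data here is a made-up alternating sequence, chosen only to make the effect obvious; the true underlying function is y = 0.)

    ```python
    import numpy as np

    # 8 training points whose y-values are pure high-frequency "noise".
    x_train = np.linspace(0.0, 1.0, 8)
    y_train = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

    # Dense test grid where the true function is still y = 0.
    x_test = np.linspace(0.05, 0.95, 50)

    errors = {}
    for degree in (1, 7):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
        test_mse = float(np.mean(np.polyval(coeffs, x_test) ** 2))
        errors[degree] = (train_mse, test_mse)

    # Degree 7 interpolates the 8 training points exactly ("explains everything")
    # but oscillates wildly between them, so it predicts far worse than the line.
    print(errors)
    ```

    The simple model tolerates some training error in exchange for generalization; the complex one buys a perfect "explanation" of the data at the cost of prediction.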

    • @RR-et6zp
      @RR-et6zp 1 year ago

      In reality, probability streams (QM) do predict the future

  • @annajoen6923
    @annajoen6923 3 years ago

    That cracked me up when Lex said "that tie's confusing me" hahaha. Awesome episode!

  • @hanselpedia
    @hanselpedia 4 years ago +1

    Got lost a bit at times, but I enjoyed this in one uninterrupted session... Was this a compressed version of a much longer dialog?
    And you forgot to ask if mortality would play a role in developing AGI ;-)
    Thanks Lex!

  • @garymenezes6888
    @garymenezes6888 4 years ago +1

    "But I'm not modest in this question" I like this guy

  • @bp56789
    @bp56789 2 years ago

    This episode was one of my "I'm changed forever" moments. Haven't had a big one of those in a while.

  • @lestorbeeny8454
    @lestorbeeny8454 4 years ago +5

    You should get Dr. Ben Goertzel on! Great podcast btw as always

  • @rajeshprajapati1851
    @rajeshprajapati1851 3 years ago

    Thank you so much. Keep up the good work.

  • @MrAnt-hh3bp
    @MrAnt-hh3bp 4 years ago

    Lex, thank you very much for uploading this conversation!
    It was very informative and inspiring for me personally, as I am an Undergrad studying in a related field. It's just amazing how today I am able to follow this discussion between two great minds so intimately sitting in front of my computer screen. Keep up the good work, и привет из Германии!
    PS: Fun fact on 22:45, Veritasium just recently uploaded a video in which he showcases the connection between the Bifurcation diagram and fractals (the Mandelbrot set in particular). I came across the former in the context of a Neuroinformatics lecture but never made the connection. This video reminded me of it. This is amazing. God, I love the internet.

  • @moonsitter1375
    @moonsitter1375 4 years ago +1

    It sounds as though AGI is getting closer to being a reality. Great interview, Lex.

  • @ZachDoty0
    @ZachDoty0 4 years ago +7

    Lex, I would love it if you could interview Jeff Clune about POET, AI-GA, MAP-Elites, Quality Diversity, Catastrophic Forgetting, AGI timeline, etc... Go deep on technical details and intuitions for future research :) Thanks.

  • @curiosguy9852
    @curiosguy9852 4 years ago +4

    Lex, how do you feel talking to Andrew Ng and MJ, who reject the idea of near-term conversational agents, while you are actively trying to build one?

    • @CharlesVanNoland
      @CharlesVanNoland 4 years ago +3

      I think he feels like a kiwi, at least after watching AMA#2

  • @ChrisStewart2
    @ChrisStewart2 1 year ago

    The reason why Occam's razor works is that it is usually easier to study the simplest hypothesis first and then work up to more complex explanations.

  • @josephsmith6777
    @josephsmith6777 4 years ago +3

    The orange-yellow-charcoal color scheme is crazy

  • @josecoyote6079
    @josecoyote6079 4 years ago

    Everything about AI is very interesting

  • @fainir
    @fainir 4 years ago +1

    It is really interesting. The future of humanity depends on the future of AGI, and how to acquire knowledge is also an interesting and important topic. But what about AGI safety? It is a very crucial component

  • @user-ut4zh3pw7l
    @user-ut4zh3pw7l 6 months ago

    thank you marcus and lex

  • @mriz
    @mriz 6 months ago

    12:27 Does anybody know what search queries to use for these terms?

  • @doctora3262
    @doctora3262 4 years ago

    You have a new subscriber. Liking and commenting for the algorithm.

  • @josephbertrand5558
    @josephbertrand5558 4 years ago +1

    Tremendous!!! 🇨🇦

  •  1 year ago

    Didn't understand 99% of what was said; still enjoyed the conversation...

  • @cysiek10
    @cysiek10 4 years ago

    Lex, can you enable the support option on YouTube? It might be easier than Patreon.

  • @PhilosopherScholar
    @PhilosopherScholar 1 year ago

    An amazing talk befitting the creator of AIXI.

  • @XxNoV4xAiRxBoRsxX
    @XxNoV4xAiRxBoRsxX 3 years ago

    i really love this one

  • @williamramseyer9121
    @williamramseyer9121 3 years ago +1

    I love this interview. So light-hearted, and profound. I listened to it twice (and I have done so with other podcasts by Lex). My comments:
    1. Free will vs. determinism. Just a thought from an amateur. Everything that happens in the universe may have been determined from the moment of the Big Bang, but each human has free will. The exercise of that free will forms the universe that that human lives in. There are a huge number of alternate universes with other humans who made different choices. In other words, we choose the universe we live in. To throw Sartre into the mix, we have no choice, but to choose (our universe).
    2. Infinity. How can a finite universe contain the math of infinity?
    3. Books. Lex, do you have a list of books recommended by your guests? And what is the relative information contained in one book versus one podcast?
    Thank you. William L. Ramseyer

  • @Jannikheu
    @Jannikheu 4 years ago

    My intuition about a non-conscious vs a self-conscious AGI is that the first probably would follow any provided optimization function (although we would eventually have trouble seeing that it does follow these goals) while the 2nd might choose to ignore the provided optimization function and follow something that is in its own interest (whatever that might be). But that would also mean that the 2nd could be far more dangerous than the 1st and therefore it would be of utmost importance finding a test whether an AGI is self-conscious or not.

  • @smishi
    @smishi 4 years ago +1

    18:07 What else is noise, if it's not accumulated chaotic behavior too complex to fully grasp?

  • @jovanyagathe2299
    @jovanyagathe2299 4 years ago +1

    This man is a genius.

  • @johangodfroid4978
    @johangodfroid4978 4 years ago +5

    Simpler means more energy-efficient, and nature can't waste energy; this is why biomimetism is so good.
    Do the best with the least:
    unbalanced by the law of (enough) in biology, but I can't explain it in a few lines

  • @myrealnews
    @myrealnews 2 years ago

    I called it "condensation". "Compression" produces heat; condensation produces structure. But there is no chemical or prior-art or scientific equivalent. The structure is generative and can make valid predictions and reformulate the temporal structure from new learning.

  • @MrRubenkl
    @MrRubenkl 4 years ago +1

    Lex, you've done it; I now like your podcast better than Joe's.

  • @ryanpalmer8180
    @ryanpalmer8180 4 years ago

    This was fascinating and got me thinking about lossless vs lossy compression and how perhaps lossy could be superior for an AGI in certain circumstances if done in the right way.
    Humans are our best example of a general intelligence, but we have terrible memories. Perhaps this semantic compression is a core feature?
    Could an AI with a superhuman memory actually pass the Turing test (unless it deliberately acted unintelligently to deceive)?
    It feels like we tend to take detailed knowledge and distill it into abstract symbols with weighted importance. These get more general and abstract over time until they fade away if not rehearsed / used. Think of how much you remember of this video immediately after it finishes, compared to in an hour, a day, a week and a month and a years time. The description of it to another person would get more vague and loose unless you rewatched it, but you would hold the key points longer than the details and know to come back and reference it if it became relevant in your life.
    It would also seem that more abstract symbols are easier to generalise - i.e. specifics about chess and checkers might not be immediately relevant to each other, but broader tactics of attack and defense may be. Would an AI trained on Starcraft be quicker to learn DOTA to a certain level than a fresh agent? I would guess not if it had to search through all the detailed Starcraft tactics it had learnt as most would be irrelevant, but perhaps it would be faster if it used a more abstract symbol tree to inform it of the more likely paths to try in the new environment.
    Furthermore, could it take this combat training and use it to improve performance in a very different field such as poetry or counselling for trauma victims (things which might be natural for an ex-soldier to learn, partially built upon their past experience).
    Creativity seems like an important part of our intelligence, and one definition could be 'combining disparate ideas in new and novel ways'. I would imagine connecting general ideas from disparate fields is simpler than specifics, in which case a more abstract (or 'stupid / lossy') semantic tree of two subjects may be easier to intersect and therefore exhibit more spontaneity or uniqueness in its behaviour.
    I did a quick search and I found these pages which seem very relevant -
    Intentional forgetting in AI systems : link.springer.com/article/10.1007/s13218-018-00574-x
    Semantic compression: en.wikipedia.org/wiki/Semantic_compression
    I also think this Byron quote is particularly appropriate:
    “To be perfectly original one should think much and read little, and this is impossible, for one must have read before one has learnt to think.”
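    The lossless-vs-lossy contrast in the comment above can be sketched in a few lines. This is a toy illustration only (not from the video): `zlib` stands in for lossless compression, and keeping just the distinct words stands in for a crude "semantic" compressor that retains the gist while forgetting the details.

    ```python
    import zlib

    # A repetitive "experience" to remember.
    text = b"the quick brown fox jumps over the lazy dog " * 50

    # Lossless: zlib round-trips exactly -- every detail is preserved.
    packed = zlib.compress(text)
    assert zlib.decompress(packed) == text

    # Lossy "semantic" sketch: keep only the distinct words, discarding
    # order and repetition -- the gist survives, the details are gone.
    gist = b" ".join(sorted(set(text.split())))

    print(len(text), len(packed), len(gist))  # gist is far smaller than the original
    ```

    The lossy summary is not recoverable back into the original, which is exactly the trade-off the comment speculates might be a feature rather than a bug.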

  • @Eyaeyaho123
    @Eyaeyaho123 4 years ago +1

    Best episode

  • @WarrenRedlich
    @WarrenRedlich 4 years ago +3

    I love his point around 33:35 that many humans would fail the test. It was said partly in jest, but it applies in particular to people with brain diseases like dementia and Alzheimer's.

  • @mikekaczmarek9955
    @mikekaczmarek9955 2 years ago

    Thanks!

  • @andrewkelley7062
    @andrewkelley7062 4 years ago

    Well, color me impressed; this guy really knows what he is talking about.

  • @powerpig99
    @powerpig99 4 years ago +1

    I would agree that our flaws are the cause of our accidental existence and of the future improvement of human intelligence.

  • @tobskii1040
    @tobskii1040 4 years ago

    Wait, what is there to gain from having the agent predict the action it's about to take?

  • @meat_computer
    @meat_computer 2 years ago

    Regarding the brain's preference for simple explanations, I have a simple explanation: it takes less (metabolic) energy to work with a simpler model.

  • @Stadtpark90
    @Stadtpark90 4 years ago

    01:06:26 "Now let's start simple..."

  • @ioannismourginakis68
    @ioannismourginakis68 2 years ago

    1:23:47 lmao, that look he gives says it all; that comment was totally out of left field

  • @xSNYPSx
    @xSNYPSx 4 years ago

    Are you really Russian, Lex? Wow, I am happy that our nation has such good guys! :)

  • @androidsdream9349
    @androidsdream9349 4 years ago

    I googled “aixi application in robotics” and got back ‘Did you mean: “ai applications in robotics”’. I did this since I disagreed with Hutter saying that we don’t need a robot rolling around doing things to test his aixi agent for AGI. Apparently google didn’t even recognize the application and query.
    We need to look beyond aixi application to games/game theory. Just learning/solving/playing games is not AGI, it is a narrow application, even Hutter admits that early on. My view is that an “optimal” AGI will include “punishments” and not just “rewards” and many other types of learning modes, not just reinforcement learning.

  • @user-qf3lq4zj8g
    @user-qf3lq4zj8g 4 years ago +2

    Great philosophical points in focus here; I particularly enjoyed Marcus's *informal* definition of intelligence (26:36) and its justification (27:08).
    Ashi Krishnan's views on AGI would be a great follow-up **hint** **hint** (a recent sample of her thoughts: ua-cam.com/video/wzhI5Ru4HlQ/v-deo.htmlh32m51s ).

  • @trimbotee4653
    @trimbotee4653 4 years ago

    Lex this is a really good podcast. Thanks for the hard work. I know the name of the podcast is the AI podcast, but I wonder how seriously people take the idea of general artificial intelligence? To me it seems ludicrous. But this very smart gentleman (and many many others) seem to take the idea seriously.

    • @robocop30301
      @robocop30301 4 years ago

      How is it ludicrous? How do you know I'm not an ai?

    • @hughcaldwell1034
      @hughcaldwell1034 2 years ago

      Just found this video, and this comment. Late to the party but whatever - why does it seem ludicrous to you? And which bit? The idea that it could exist in theory or the idea that humans would be able to do it?

  • @6DonnieDarko
    @6DonnieDarko 4 years ago

    Reward is just rank and number

  • @JLGMediaProductions
    @JLGMediaProductions 4 years ago

    1:13:37:250 "Infinity Keeps Creeping Up Everywhere"

  • @sippy_cups
    @sippy_cups 4 years ago

    Does AIXI break down if you are moving at the speed of light? Time-steps would be altered by relativistic effects, no?

  • @MrofficialC
    @MrofficialC 4 months ago

    This guy is the smart version of Klaus from American Dad.

  • @TimmyBlumberg
    @TimmyBlumberg 4 years ago +1

    爱 (AI) was originally from Chinese, and adopted by Japan. It does mean love in both languages.
    爱 is pronounced as “eye”.

  • @johangodfroid4978
    @johangodfroid4978 4 years ago +6

    So many podcasts so quickly: you are a real machine.
    I could compress it to 10 MB or less, but it would be 100% not understandable by a human anymore.

  • @Flutentei
    @Flutentei 4 years ago +2

    He's Gordon Freeman! And he's actually Gordon Freeman with his amount of knowledge...

    • @pmrcunha
      @pmrcunha 4 years ago

      He's talking too much to be Gordon Freeman :)

  • @tctopcat1981
    @tctopcat1981 3 years ago

    Can this dude fund a €500k competition with a tie like that? lol!

  • @sherrivonch6231
    @sherrivonch6231 4 years ago +1

    Lex is dead on with the exploration question and the repercussions of errors.

  • @josephbertrand5558
    @josephbertrand5558 4 years ago

    Lex, are each of our consciousnesses occurring simultaneously in the simulation? Or are we each alone in it?

  • @TheGunmanChannel
    @TheGunmanChannel 1 year ago

    his accent is awesome 👌😃

  • @MaximShiryaevT
    @MaximShiryaevT 4 years ago

    Just in case, Occam's razor is not about "simple". It is: "when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions". So, for example, Newton's law of gravity is not that simple - calculus was invented to solve it - but it is based on only two equations/assumptions: one for force as a function of mass and distance, and one for acceleration as a function of force and mass. Before that, Ptolemy's model of planetary motion was way simpler and didn't require calculus at all, but was based on a large number of coefficients of unknown origin. So Occam's razor prefers a complex language with a minimal set of axioms over a simple language with a large set of axioms.
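    The "fewest assumptions" reading above maps neatly onto the minimum-description-length idea behind Solomonoff induction: among hypotheses that fit the data equally well, weight each by 2^(-description length in bits) and prefer the shortest. A toy sketch, where the bit counts are invented purely for illustration:

    ```python
    # Toy minimum-description-length version of Occam's razor: among hypotheses
    # that make the same predictions, assign each the prior 2**-(bits needed to
    # describe it) and pick the maximum.  The bit counts here are made up.
    hypotheses_bits = {
        "ptolemy_epicycles": 40,     # many coefficients of unknown origin
        "newton_inverse_square": 2,  # two equations/assumptions
    }
    prior = {h: 2.0 ** -bits for h, bits in hypotheses_bits.items()}
    best = max(prior, key=prior.get)
    print(best)  # newton_inverse_square
    ```

    Note the prior penalizes each extra assumption exponentially, which is why a model with dozens of free coefficients loses even if its individual pieces look "simpler".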

  • @nothingisgiven8364
    @nothingisgiven8364 1 year ago

    Occam's razor works based on probability, not simplicity. Hypothesis A: the universe is manifesting the most probable outcome. B: the universe is manifesting the simplest outcome.
    C: the universe is manifesting the most probable outcome because it is the simplest.
    The conditional probability in C makes it less likely to be true than A or B.

  • @lucasthompson1650
    @lucasthompson1650 4 years ago +1

    1:22:33 I'd say our flaws contribute to the minimum diversity of our experience, but I'm only 1/8 Russian.

  • @sherrivonch6231
    @sherrivonch6231 4 years ago +2

    Stephen Hawking was amazing too.

  • @kobiromano6115
    @kobiromano6115 4 years ago +1

    25:40 Arguably, we have yet to find "the simplest rules of the universe". There's still so much we don't understand: the gaps between general relativity, special relativity and quantum field theory; weird quantum behaviors like entanglement; the "field" equations which we can't really explain; the prediction of dark matter, which we cannot observe; and things that are explained by complex math which has no real representation, or whose representation is questionable.
    It's pretty vain to claim that we have "cracked it" at this stage, with so many open questions.

  • @ewncilo
    @ewncilo 4 years ago

    Can you interview Ben Eater?