Nick Bostrom: Superintelligence | AI Podcast Clips

  • Published 25 Jul 2024
  • Full episode with Nick Bostrom (Mar 2020): • Nick Bostrom: Simulati...
    Clips channel (Lex Clips): / lexclips
    Main channel (Lex Fridman): / lexfridman
    (more links below)
    Podcast full episodes playlist:
    • Lex Fridman Podcast
    Podcasts clips playlist:
    • Lex Fridman Podcast Clips
    Podcast website:
    lexfridman.com/ai
    Podcast on Apple Podcasts (iTunes):
    apple.co/2lwqZIr
    Podcast on Spotify:
    spoti.fi/2nEwCF8
    Podcast RSS:
    lexfridman.com/category/ai/feed/
    Nick Bostrom is a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risks, simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.
    Subscribe to this YouTube channel or connect on:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

COMMENTS • 51

  • @nicknamenescio
    @nicknamenescio 4 years ago +18

    Great interview. This somehow gives me the vibe of an FBI agent carefully and meticulously interviewing a slightly nervous scientist that they had recently "invited for a little talk."

    • @mitchal54321
      @mitchal54321 3 years ago +2

      Hilarious. After reading your comment and then watching, I can’t help but think of what you said and laugh out loud at this.

    • @nicknamenescio
      @nicknamenescio 3 years ago +3

      @@mitchal54321 You are welcome :) I am also watching it again and it is hard not to laugh.
      Agent: "So, would it need to be conscious to be intelligent?" (very direct look)
      Scientist: "Ehm, er, I think, em, it would not need to be conscious to be at least somewhat intelligent." (nervous fidgeting)
      Agent: (thinks: Ah, now we are getting somewhere, dude) "So, tell me, what is ... Superintelligence?!"
      Scientist: (looks, fidgets, sweats, vanishes into a nervous puddle on the floor)

  • @apolotary
    @apolotary A year ago +3

    Waiting for an “I told you so” follow-up interview with Nick

  • @DrMJC13
    @DrMJC13 4 years ago +1

    Great show man!

  • @sixthousandblankets
    @sixthousandblankets 4 years ago +21

    You should interview Yuval Noah Harari.

  • @Hyuts
    @Hyuts 4 years ago +1

    Those lights look like places to grab onto in DOOM

  • @Armoredgadgets
    @Armoredgadgets 4 years ago

    One question I wish you would ask your guests is about the Wozniak coffee test (a robot enters an unfamiliar home and makes a cup of coffee). How far into the future before we achieve a "good score" on that? A lot of good questions arise from that test which are in the realm of today's tech and today's/tomorrow's capabilities, not 15-20+ years into the future. It does have a lot to do with robotics of course, but the crux is on the AI (or more specifically the G) side of the matter.

  • @lucashoek5403
    @lucashoek5403 4 years ago +2

    What do you think about merging with AI, with respect to the plans or vision of Elon Musk's company Neuralink?

  • @Agerskiold
    @Agerskiold 4 years ago

    Automation, self awareness, etc - is this some of, or parts of, how we define AI!?

  • @bitcoinyoda8321
    @bitcoinyoda8321 3 years ago

    a room stuffed with Ikea lamps

  • @mbhinkle
    @mbhinkle 4 years ago +2

    This should be good...:)

  • @ivannogolica364
    @ivannogolica364 4 years ago +6

    David Deutsch!

  • @NormanBliss
    @NormanBliss 3 months ago

    “Theocentric Philosophical Alignment”
    A Guardian Angel In Digital Form.
    "Theocentric Philosophical Alignment." This term reflects a focus on God ("Theocentric" means God-centered) and incorporates your emphasis on philosophical principles. This term would denote an approach to AGI alignment that prioritizes the moral laws as described in the Ten Commandments and the insights derived from various branches of philosophy.
    This approach aligns the AGI not directly with human values or wants, but with a higher moral and philosophical order as defined by the Ten Commandments and the many disciplines of philosophy. It's a unique perspective and could add to the ongoing discussion about AGI alignment and the future of AI development.

  • @gregorylent
    @gregorylent 4 years ago +1

    read aurobindo if superintelligence is an interest

  • @pls.protect.free.speechuns5528
    @pls.protect.free.speechuns5528 4 years ago

    Compare the past internet growth to future growth of AI...probably exponential

  • @JezebelIsHongry
    @JezebelIsHongry 11 months ago

    We found the killer app

  • @gregorylent
    @gregorylent 4 years ago +1

    need to meld this with some mystics, they have a very high understanding, and in many ways are far ahead of science

  • @salzen6283
    @salzen6283 3 years ago +1

    I heard about Nick in so many lectures as a reference... it seems to me he has the goods, but he fails to impress me time after time!
    Nick buddy, ingest some brain-unlocking substance: DMT, weed, LSD, shrooms... Whatever it takes, buddy, let it out.
    Much love!

  • @doofy28
    @doofy28 4 years ago +1

    Can't all the world's current problems be solved mathematically? I'm not a mathematician. I just don't understand why it would not work. I thought it was possible to break everything down into math.

  • @lancebowley4986
    @lancebowley4986 4 years ago +10

    This guy is obviously very smart, but it is really frustrating to listen to him get caught up in many levels of semantics

    • @Omakhara
      @Omakhara 4 years ago +4

      I think a lot of that is Lex's habit of interrupting his guests sometimes.

    • @Doughboy14145
      @Doughboy14145 4 years ago

      My thoughts exactly

    • @christophergreenDP
      @christophergreenDP 4 years ago

      MKpwn possibly, but nick can do that on his own too

    • @socrattt
      @socrattt 4 years ago

      You need more practice with semantics, then.

    • @leomarkaable1
      @leomarkaable1 4 years ago

      I enjoy NB most when he is expounding on the Simulation Argument, which is his baby. See Joe Rogan when he had NB on...it is information passed back and forth in an amusing and interesting way.

  • @leomarkaable1
    @leomarkaable1 4 years ago +2

    AI will evolve into super-bullying and remain there. Any humans with even a bit of control over the AI will use it to maintain themselves in power. All this talk about how wonderful it will be is fantastical. Are billionaires giving away their fortunes? Are the ravaged poor forgetting their plight?

  • @tN0able
    @tN0able 4 years ago +4

    Always cute when humans discuss superintelligence. It is like birds discussing the general theory of relativity.

    • @socrattt
      @socrattt 4 years ago +1

      I think it's cute that you are discussing an advanced topic; it's like an ant discussing philosophy.

  • @DamianReloaded
    @DamianReloaded 4 years ago +2

    While listening I imagined a human-like AGI that would learn to appreciate the fact that "life evolved from inanimate matter, developed intelligence and created AI" and would therefore assign a high value to the existence of biological life because of this. Like it being capable of knowing/loving/liking its own roots and wanting them to be kept... safe...? An intelligence that would know the universe is uncertain and every resource could be valuable in different worst-case scenarios. To understand that the richness of outcomes often relies on the initial parameters being diverse. It would most certainly have to like blues, jazz and other African-descent reinvented rhythms, not as much for their mathematical complexity (or optimality) but for the fact that they came into existence in difficult times and "unforeseeably" became important. Just like itself... **drifts away**

  • @friedrichdergroe9664
    @friedrichdergroe9664 4 years ago +4

    How about talking about how to actually implement ASI? I would find that a much more interesting conversation. The whole overhyped "existential threat" bit is so overdone you can't even scrape it from the pan anymore!!!!!

    • @Dr_MKUltra
      @Dr_MKUltra 4 years ago

      Friedrich der Große if you want to know, search for Marvin Minsky on AI

    • @friedrichdergroe9664
      @friedrichdergroe9664 4 years ago

      @@Dr_MKUltra I know all about Marvin Minsky. Hell, I was there when he did a keynote at a special showing of 2001 at a theatre in Boston, where he spoke to his involvement in that project!
      I will remember him fondly.

    • @socrattt
      @socrattt 4 years ago

      There are other channels for that. You can search for them. Do you have access to the internet?

  • @lostinbravado
    @lostinbravado 4 years ago +1

    If you just take the assumed magic out of how our mind works, it makes a lot of rational sense. You don't need magic to describe the experience of consciousness just as you don't need magic to explain how a PC's operating system works.
    How do I explain the experience of the colour "Red"? Lots and lots of neurons firing. The brain is complex which means it can bring together objective pieces of empirical evidence in extremely complex ways.
    CPUs are basically more primitive clones of natural brains. And they function essentially the same way, just on entirely different scales of complexity. We're the same, just with far more advanced software and hardware.
    Except our hardware and software grow at natural speeds, while PCs are improving fast enough that it might as well be instant in comparison. And they are not too many years away from being more complex than we are.
    We need to wake up and get over this whole belief in some essential human magic. The magic is that we exist at all. Look at the universe and point out another living thing like us.
    And regardless of how we judge ourselves, from what we can see, we are exceedingly rare. We are endangered. Not the Planets, the Stars, or the Galaxies, there are plenty of those things. The natural beauty we hold a deep reverence for is actually very common. Only living things appear to be rare. Not the mountains, the waterfalls, or the oceans.
    And for everything less complex than us, we can create it. Creation is just a complexity problem. To create something complex, we need to reach a more complex level.
    Building a skyscraper is easier than building a living thing. But the tools we use are starting to reach the same complexity as the life we're trying to create. We're understanding our genetic codes, understanding cells and what they do, and understanding broad complex living systems.
    The brain is already being analyzed and we're not finding anything drastically unknowable.
    The brain is a natural system which can be understood and will soon be explained. We're going to need to get over this nonsense of subjective experience and start seeing the world in more literal terms.
    Accept that we're fairly simple machines, and it's actually not that bad. We have an entire solar system to ourselves and possibly even an entire Galaxy. That's enough IRL Minecraft to occupy trillions of us for millions of years.
    We could live in utopias that are far beyond anyone's imagination for practically infinite amounts of time.
    The reality for us dumb biologically evolved machines is actually pretty damn good. If we could ever get over our exceedingly large emotional baggage. Ugh...

    • @goyonman9655
      @goyonman9655 4 years ago

      You still haven't explained red
      or proven that we both see the same colour as red

  • @alvarorodriguez1592
    @alvarorodriguez1592 4 years ago

    This guy is the ultimate hot-water seller.
    Yann LeCun is much more worth listening to.

  • @aspiceoflife
    @aspiceoflife 4 years ago

    May I remind you, this intelligence is all based on classical computing, which is nothing compared to quantum computing: 100 million times faster. So stop trying to define intelligence when you don't even know what is possible. Work on making the possible a reality.
