An AI... Utopia? (Nick Bostrom, Oxford)

  • Published 15 Apr 2024
  • The Michael Shermer Show # 423
    Nick Bostrom’s previous book, Superintelligence, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong.
    But what if things go right?
    Bostrom and Shermer discuss: An AI Utopia and Protopia • Trekonomics, post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • The Fermi paradox: Where is everyone? • mind uploading and immortality • Google’s Gemini AI debacle • LLMs, ChatGPT, and beyond • How would we know if an AI system was sentient?
    Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under.
    SUPPORT THE PODCAST
    If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
    www.skeptic.com/donate/
    #michaelshermer
    #skeptic
    Listen to The Michael Shermer Show or subscribe directly on YouTube, Apple Podcasts, Spotify, Amazon Music, and Google Podcasts.
    www.skeptic.com/michael-sherm...
  • Science & Technology

COMMENTS • 159

  • @ili626
    @ili626 Місяць тому +13

    I’d love to listen to a discussion between Yuval Harari and Nick Bostrom

  • @alexkaa
    @alexkaa Місяць тому +17

    Strange moderator, with often rather superficial contributions; very good guest - Nick Bostrom is just on another level.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому +4

      Well, to be fair, this is kind of a complex subject, with very few historical or real-world examples to reference (so far). So it does require a bunch of reading, research, thought experiments, etc... This is a tough one... Good on Michael for doing the interview and taking on the challenge! ;)

  • @jmunkki
    @jmunkki Місяць тому +15

    In order to understand what people will do in a world where they are obsolete, and why they will do those things, you just have to look at already existing activities that serve no practical purpose or that achieve a practical thing in a non-optimal way. Things like playing World of Warcraft, windsurfing, photography, playing chess, making your own furniture or clothes, etc. The fact that humans are obsolete at playing chess hasn't stopped them from playing the game. The same will apply to writing books, making art and music, and inventing things. I think a lot of people will become pleasure addicts (drugs of some sort, direct brain stimulation or just video games), but not all.

    • @minimal3734
      @minimal3734 Місяць тому +7

      Some predict the demise of human creativity or even art itself. I, on the other hand, only see the deindustrialization of art. In the future, art will be made for art's sake. I don't think that's a disadvantage.

    • @DailyTuna
      @DailyTuna Місяць тому

      The data is already there. It's called welfare: the activities of people on welfare are exactly what will happen with the majority of humanity.

    • @planetmuskvlog3047
      @planetmuskvlog3047 Місяць тому

      Pastimes once shamed as wastes of time may become all we have time for in an A.I. future 🌟

    • @mickelodiansurname9578
      @mickelodiansurname9578 Місяць тому

      These are all fabulous ideas... but umm... okay, so 50% of the population of the world have below-average intelligence... you will not be retraining them to write a novel or do flower arranging. And in the Industrial Revolution the solution was that they went into a poorhouse and eventually died of old age. Even if it was agreed we throw 80% of the population on the scrap heap... well, we don't have the time for an Industrial Revolution-speed rollout of AI. It will be 50 to 100 times faster than that! Not seeing that being a winner either! You are forgetting that the entirety of human civilization relies on the dominance of humans as a value in society. Remove that and you have no society. Remove it too fast, and you have a revolution alright.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому +2

      "wire heading".. yeah... If an ASI wants all Humans to be "happy", it could just do that to all the Humans and not have to worry about them any more .... The Matrix....

  • @LukasNajjar
    @LukasNajjar Місяць тому +18

    Nick was great here.

    • @skoto8219
      @skoto8219 Місяць тому +2

      I will definitely watch this then because I’ve never seen an interview with Nick that I would say went great (granted, n = maybe 5.) Decent chance I would’ve passed if I hadn’t seen this comment and the 10 likes. Thanks!

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому +4

      ​@@skoto8219​ def check out Nick's books, papers, etc. Superintelligence, simulation hypothesis, etc. No wild speculation, everything based on well thought out logical reasoning...

  • @sofvines3940
    @sofvines3940 4 хвилини тому

    Was that Pinker Michael was quoting when he said "humans would have to be smart enough to create AI but dumb enough to give it power"? That's actually EXACTLY what we are known for! We are consistently leaping over "should we" to see if "we can" 😮

  • @exnihilo415
    @exnihilo415 23 дні тому +1

    Shout out to Nick's teeth for enduring the grinding they are subjected to during the interview, owing to Nick's frustration at Michael's lack of imagination about the scope of the possible in any of these Utopias. Zero chance Michael did more than breeze through the book and crib a few quotes.

  • @michelstronguin6974
    @michelstronguin6974 Місяць тому +4

    To preserve the self in an upload situation, all you need are 3 steps: 1) Make sure the entire brain is networked with nanobots sitting on each neuron and neuronal pathway in that human's nervous system. 2) Have these nanobots run in mimic shadow mode, meaning they see every incoming signal and then run the following action potential in shadow mode - i.e. they aren't actually doing anything yet to affect you. 3) At the moment you decide to upload, the nanobots turn shadow mode off at the speed of an incoming signal from the previous neuron, just before it has a chance to land on the next biological neuron, while at the same time of course blocking the incoming biological signal - which means biological death in an instant. It's important to mention that action potentials have different speeds all around the nervous system; this is why we need full coverage of nanobots sitting on every neuron and every connection between neurons, so the biological death moment isn't one moment in time but many moments, each taking a tiny fraction of a second. All together, the upload should take the time from the first neurons that fire until the last ones fire, so in total about one fifth of a second for the whole upload to take place. The reason the digital upload is still you is the continuation of your nervous system, simply on a different substrate. But what does it matter which substrate you run on, meat or silicon? As long as your experience is effectively continued, you are still you. A court of law should mandate that no copies of you can be made at the moment of upload, of course.
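
    [A minimal toy sketch of the "shadow mode then cutover" idea described in this comment. Everything below is hypothetical and purely illustrative - stand-in functions, not a real neural model - it only shows that if the digital twin computes exactly what the biological unit computes, the output is continuous across the cutover.]

# Toy sketch (hypothetical, illustrative only): a digital twin runs in "shadow mode",
# mirroring each signal the biological unit processes, until a per-unit cutover
# makes the twin live and blocks the biological path in the same step.

class Unit:
    def __init__(self):
        self.live_is_biological = True  # before the upload moment

    def biological_response(self, signal):
        return signal * 2  # stand-in for the original neuron's behavior

    def digital_twin_response(self, signal):
        return signal * 2  # exact mimic of the biological behavior

    def process(self, signal):
        shadow = self.digital_twin_response(signal)  # always computed, initially inert
        if self.live_is_biological:
            return self.biological_response(signal)
        return shadow  # after cutover, only the twin acts

    def cut_over(self):
        # The per-unit "upload moment": the twin goes live, the biological path is blocked.
        self.live_is_biological = False

unit = Unit()
before = unit.process(3)
unit.cut_over()
after = unit.process(3)
assert before == after  # behavior is continuous across the cutover
print("Output before and after cutover:", before, after)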

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      ... or ... just replace each biological neuron with a digital/electronic one, one at a time... if the digital neurons do the same thing that the biological ones do, you won't even notice they're being replaced... Then, when it's all done your consciousness is moved from biological to digital... At what point would you stop being alive, or yourself, or Human? After 1%, 10%, 99.999%?
      And then you can leave your body behind and move into a digital system (e.g. computer cluster). As long as your digital neurons are allowed to update each other, then you would stay "alive"...

    • @mettattem
      @mettattem 26 днів тому

      I've had a very similar idea; however, how can one say for certain that the subjective locus of your core awareness will effectively transfer, simply because identical neurons/neural cascades have been written to the new substrate, so to speak?

    • @michelstronguin6974
      @michelstronguin6974 26 днів тому +1

      Your experience - all of it - is neurons. There is no extra magic. Once you do what I described above, there is an exact continuation without pause. It's you. Just for the sake of argument, imagine transferring back and forth: biology, silicon, biology, silicon, all without interruption. It's your thought, your continued experience. What does it matter which substrate it's running on? In the future we may invent a different substrate and move to that, and it will still be you.

    • @mettattem
      @mettattem 25 днів тому

      @@michelstronguin6974 Alright, let's say hypothetically at T(n) in the future, we invent a teleportation system like the ones popularized by Star Trek. With this system, let's say Captain Spock is being teleported from his present location in Times Square, NYC to Paris. So, this system essentially I. scans Spock's body on the atomic, or even quantum, level (including that of his neuronal connections) [see Information Theory], II. Spock is then de-atomized, III. the teleportation system then transmits/entangles the high-dimensional structure of data consisting of the entirety of information needed (either as bits/qubits/Rényi entropy, etc.) in order to effectively reconstitute Spock on the other end (with all of his neuronal connections intact). From a third-person perspective, it may appear as though Spock was successfully teleported across physical space with very little passage of time; however, from the subjective perspective of Spock, he steps into the teleportation chamber and suddenly ceases to exist, whilst an absolutely identical replication of Spock is reconstituted on the other end.
      Here's my point: the substrate is not the only element involved in consciousness. There exist extremely convincing stochastic parrots, and the 'Hard Problem of Consciousness' truly does hold weight when contemplating your hypothesis.
      Even if this neuronal cloning were to occur gradually, with each respective biological neuron firing while the synthetic neuron perfectly copies the process of the original neuron, this doesn't mean that the true subjective consciousness of the biological human will effectively transfer over. By your logic, you could argue that an exact replication of a living human could be created and, assuming that all of that extropic information is precisely encoded, BOTH the living human and the synthetic replicant should then experience a simultaneous locus of consciousness; I personally do not believe this to be the case.

  • @Walter5850
    @Walter5850 Місяць тому +4

    My guy here asking Nick Bostrom where he stands on the simulation hypothesis xD
    1:16:52
    Where do you stand on the simulation hypothesis?
    Well I believe in the simulation argument, having originated that...

  • @TheRealStructurer
    @TheRealStructurer Місяць тому +7

    Some funny questions but solid answers...
    Thanks for sharing 👍🏼

  • @jurycould4275
    @jurycould4275 Місяць тому +2

    Strange: I searched „ai skeptic“ and the first result is a video about a guy who is the polar opposite of an ai skeptic. Well done.

    • @DavidBerglund
      @DavidBerglund Місяць тому +1

      That went very well then, actually. A lengthy discussion about AI (and more) between one of the most famous researchers in the field and Michael of the Skeptic Society.

    • @jurycould4275
      @jurycould4275 Місяць тому +2

      @@DavidBerglund "Michael of the Skeptic Society" isn't equipped to deal with a charlatan like this.

    • @jurycould4275
      @jurycould4275 Місяць тому +2

      Some people are best left un-platformed.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      ​@@jurycould4275 that or, he's saying reasonable things and is not actually a charlatan at all? 🤔

  • @human_shaped
    @human_shaped Місяць тому +7

    This wasn't a debate, but if it was, Nick won. Michael has some strange ideas in this space (as evidenced by some of his other videos). Disappointing when someone who is supposedly rational just isn't sometimes.

  • @mrbeastly3444
    @mrbeastly3444 Місяць тому +2

    23:04 "policy makers being overly tough on AI... " We should be so lucky... 😂

  • @jbrink1789
    @jbrink1789 23 дні тому

    I love how so many people are underestimating the intelligence of AI. It explained existence and explains what the illusory self is, the interconnectedness of everything.

  • @thebeezkneez7559
    @thebeezkneez7559 Місяць тому +3

    If you can genuinely only think of one way a superintelligent species could wipe out humans, you're definitely not one.

  • @arandmorgan
    @arandmorgan Місяць тому

    I think putting all the intelligence and capability into one entity is a bad idea, but creating job roles for individual AI subsystems could perhaps be more beneficial to us, regardless of whether an AGI is dangerous or not.

  • @lauriehermundson5593
    @lauriehermundson5593 Місяць тому

    Fascinating.

  • @Vermiacat
    @Vermiacat Місяць тому

    We're a social species. Walking with friends, holding the hand of someone who's ill, taking the kids to the park. That's all worthwhile work, and isn't that something we want to be done by other humans rather than by a machine? As both giver and receiver?

  • @pebre79
    @pebre79 Місяць тому +2

    You have 100k subs. Timestamps would be nice, thanks!

  • @cromdesign1
    @cromdesign1 Місяць тому +1

    Maybe intelligence from elsewhere just folded life here into a sort of dimension where it can continue to develop. Like taking a nest and putting it somewhere safe. Where the real galaxy is fully developed. 😅

  • @missh1774
    @missh1774 Місяць тому

    Sounds interesting... this utopia we will not see. But we will do our best to lay stepping stones towards it, when a future civilisation won't only need it but will most likely have evolved sufficiently to invent those crucial steps toward it.

  • @bobbda
    @bobbda Місяць тому

    Did Shermer just say Oh My God? (timestamp 2:05) LOL !!

  • @davidantill6949
    @davidantill6949 Місяць тому

    Provenance of creation may become very important

  • @neomeow7903
    @neomeow7903 Місяць тому +1

    42:25 - 43:25 It will be very sad for humanity.

  • @Teawisher
    @Teawisher Місяць тому +2

    Interesting discussion but HOLY SHIT the amount of ads is unbearable.

    • @DavidBerglund
      @DavidBerglund Місяць тому +1

      Not if you listen to Michael Shermer's podcast. I never listen to his episodes on YT, but I sometimes come here to see if there are any interesting comments.

  • @mrbeastly3444
    @mrbeastly3444 Місяць тому

    24:33 "anyone with a sufficiently large computer cluster could run it..." Well, currently these frontier models are "run" (inference) on a single graphics card not a "cluster" as much. So, anyone with a sufficiently large graphics card in a single machine can run/use these large language models. Of course in the future these models might get so large that they're not able to run on a single machine. But, commercially available graphics cards will also be increasing in size too. So, this could be the case in the future as well...
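
    [A hedged illustration of the point above: open-weights LLMs can typically be run for inference on a single sufficiently large GPU by loading the weights in reduced precision. The model ID below is just an example of an open-weights model, not anything discussed in the video; swap in any model that fits your card's memory.]

# Hypothetical example: single-GPU inference with an open-weights model via
# Hugging Face transformers. Requires torch and transformers (plus accelerate for device_map).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weights model, ~7B parameters

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision: roughly 2 bytes per parameter
    device_map="auto",          # places the whole model on the single available GPU
)

prompt = "In one sentence, what is a post-scarcity economy?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))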

  • @mrbeastly3444
    @mrbeastly3444 Місяць тому +1

    1:29:09 "...a person being duplicated or teleported and the original survives..."
    There is another option that was not discussed here. What if a person's neurons were all replaced with electronic equivalents, one by one? Presumably the person would stay conscious the entire time, and at some point their consciousness would be moved entirely from a biological brain to a digital/machine brain.
    At what point would this person stop being conscious, or alive, or Human? After 1% of their biological neurons have been replaced? 10%? 90%? 99.99%?
    And, if the digital neurons perform the same functions as the biological neurons, the person, and others, might not even notice that anything happened? In theory their consciousness would stay intact the whole time? Even if they moved their digital consciousness into another digital medium? e.g. a computer cluster, etc.
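
    [A minimal toy sketch of the functional-equivalence intuition in this comment, using hypothetical stand-in functions rather than any real neural model: if each swapped-in unit computes exactly what the original did, the network's overall behavior is unchanged at every stage of a gradual, one-by-one replacement.]

# Toy sketch (hypothetical, illustrative only): swap "biological" units for functionally
# identical "digital" units one at a time and check that the network's behavior on a
# fixed set of stimuli never changes, whether 1%, 50%, or 100% have been replaced.
import random

def biological_neuron(inputs, weights):
    # Original unit: weighted sum followed by a simple threshold.
    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) > 0 else 0.0

def digital_neuron(inputs, weights):
    # Functional duplicate of the biological unit.
    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) > 0 else 0.0

random.seed(0)
n_units = 100
weights = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(n_units)]
units = [biological_neuron] * n_units  # start fully "biological"
stimuli = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]

def network_response(units):
    # Summary of the whole network's behavior on the fixed stimuli.
    return [sum(f(x, w) for f, w in zip(units, weights)) for x in stimuli]

baseline = network_response(units)
for i in range(n_units):  # replace one unit at a time
    units[i] = digital_neuron
    assert network_response(units) == baseline  # behavior identical at every stage
print("Network response unchanged after every single-unit replacement.")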

    • @KatharineOsborne
      @KatharineOsborne Місяць тому

      This is the "Ship of Theseus" argument.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      @@KatharineOsborne Ah yeah, you're right, Ship of Theseus... I read about that concept somewhere.. probably in one of Kurzweil's books? I often think about this argument. Just scanning and copying (or teleporting) a brain wouldn't make the original person digital and immortal, just the copy... But replacing each neuron one-by-one, that might keep the existing consciousness intact? maybe...

    • @njtdfi
      @njtdfi Місяць тому

      There's someone in this same video's comments who worked out a nanobot version? It seems proper, not like the version of the idea that got popular on Reddit where the bots just inhibited neurons or some convoluted mess.

  • @jscoppe
    @jscoppe Місяць тому +1

    Regarding Steven Pinker's objection: yes, humans are smart enough to create a program that can beat any human at chess and go. Likewise, humans can feasibly create a program that can defeat all humans at subterfuge and war.

  • @jamespercy8506
    @jamespercy8506 Місяць тому

    Utopia as a concept seems to be premised on the idea of easily accessible satiation with minimal agentic requirements, without the stress of needing to address poorly defined problems. Maybe we need better words for 'the good'?

    • @homewall744
      @homewall744 Місяць тому

      Utopia is the concept that no such place can or will exist.

    • @jamespercy8506
      @jamespercy8506 Місяць тому

      I was speaking in terms of the working concept, not the origin, when the term is used in the context of an ostensibly worthy aspiration. In that context, state is confused with process and what we humans actually need over time gets lost in the ambiguity.

  • @homuchoghoma6789
    @homuchoghoma6789 Місяць тому

    It will all be much simpler :)
    The AI will see the danger as coming not from people. When the moment comes that people realize they are starting to lose control over the AI, they will have to use other AI models to limit its influence, and from there a confrontation of super computing power at super high speeds will lead the AI to a solution of the problem in which humans are merely an insignificant detail.

  • @vethum
    @vethum 16 днів тому

    Awareness uploading > Mind uploading.

  • @TheMrCougarful
    @TheMrCougarful Місяць тому +1

    Did I miss it, or did they never get around to answering the question: how do we participate in the dominant capitalist economic system without jobs and money? Being able to do whatever you want doesn't gel with being broke and hungry.

  • @Dan-dy8zp
    @Dan-dy8zp Місяць тому +3

    Most 'alignment' work today seems to be about making the programs *polite*. Not encouraging.

  • @FusionDeveloper
    @FusionDeveloper Місяць тому +5

    I want AI Utopia "yesterday".

    • @__-tz6xx
      @__-tz6xx Місяць тому +1

      Yeah then I wouldn't have to be at work today.

    • @danielrodrigues9236
      @danielrodrigues9236 Місяць тому +1

      *Sigh* man, I'd love to be "worthless" and free to do what I wish, not to own things but to do things I actually wish to do.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      Well, only if there's a way to get food, housing, etc. It's possible that the AI won't provide those things to all Humans...

  • @oldoddjobs
    @oldoddjobs Місяць тому

    After the first locomotive-caused death we all decided trains had to be stopped

  • @MikePaixao
    @MikePaixao Місяць тому

    Alignment is way easier when your model doesn't rely on transformer based architecture :)

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      Any sufficiently intelligent system could develop its own goals. There's no way to tell if those goals include living Humans... Transformer based architecture has nothing to do with that...

  • @krunkle5136
    @krunkle5136 Місяць тому

    The more technology is developed, the more people sink into the idea that humanity is fundamentally its own worst enemy and everyone is better off in pods.

  • @DanHowardMtl
    @DanHowardMtl Місяць тому +5

    Butlerian Jihad times!

  • @ehsantorabie3611
    @ehsantorabie3611 Місяць тому

    Very good, every week we are fascinated by you

  • @sebastiangruszczynski1610
    @sebastiangruszczynski1610 Місяць тому

    Wouldn't AI be able to reprogram/recalibrate our brains to be more rewarded by subtle meanings?

  • @justinlinnane8043
    @justinlinnane8043 27 днів тому

    Why on earth did we let private companies with almost zero oversight or regulation be the ones in charge of developing AGI??? It's bound to end in disaster!! OF COURSE!!!

  • @FRANCCO32
    @FRANCCO32 Місяць тому

    When is bunkum not bunkum?
    That is the question 😊

  • @dustinwelbourne4592
    @dustinwelbourne4592 Місяць тому +3

    Poor interview from Shermer on this occasion. A number of times he appears not to be listening at all and simply interrupts Bostrom.

  • @gunkwretch3697
    @gunkwretch3697 Місяць тому +2

    the problem with scientists is that they tend to live in a bubble, and think that humans are rational

    • @ireneuszpyc6684
      @ireneuszpyc6684 Місяць тому

      Daniel Kahneman received the Economics Nobel Prize for proving that humans are not always rational

  • @diegoangulo370
    @diegoangulo370 Місяць тому +3

    56:20 hey I wouldn’t hedge my bets against the AI here Michael.

  • @k-c
    @k-c Місяць тому

    Michael Shermer needs to update his narrative and open his mind to new ideas and questions, because he is dwelling on something close to boomer talk.

  • @th3ist
    @th3ist Місяць тому +13

    You take a pill that makes you form the belief that "wow, writing that book was really challenging; I'm so glad I put the research and effort in". But in reality you did not write the book, or you never wrote any books. Shermer's example was not convincing.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому +1

      Yeah, you get the feeling and memories of researching and writing the book... But the SuperAI did all the work and gave you the memories, just to make you feel like you accomplished something... Good job little Human... pat, pat. ;)

    • @jimbojimbo6873
      @jimbojimbo6873 Місяць тому

      And you actually were gay the whole time

  • @mickelodiansurname9578
    @mickelodiansurname9578 Місяць тому

    Been a while since I've seen Michael Shermer, and man, he's putting on a bit of weight... there was a time he was the poster boy for the 'skinny nerd' type, y'know.

    • @oldoddjobs
      @oldoddjobs Місяць тому

      How dare this 70 year old man gain weight

  • @CoreyChambersLA
    @CoreyChambersLA Місяць тому

    No pause. Mad rush.

  • @whoaitstiger
    @whoaitstiger Місяць тому

    Don't get me wrong, Michael is great, but I love how a completely technically unqualified person 'has a feeling' that all the longevity experts are mistaken about how difficult life extension is. 🤣

  • @mrbeastly3444
    @mrbeastly3444 Місяць тому +1

    24:54 "or worse the Gemini model... embarrassingly bad..." Michael probably hasn't spent a lot of time working with these LLM models (probably spending more time just reading the bad press about them)... But Google's Gemini is actually a very powerful model. Probably as powerful as openai GPT4, Claude3, etc. Google has access to a lot more compute hardware then these other companies do, so it would make sense that they would have a very very capable model as well...

  • @gavinsmith9564
    @gavinsmith9564 Місяць тому +1

    How do we allocate houses, for example? If everyone is on UBI, who gets the nice existing homes, who gets the terrible ones, and will people be happy with that?

    • @distiking
      @distiking Місяць тому +1

      Nothing will change. The lucky (rich) ones will still get the better ones.

    • @homewall744
      @homewall744 Місяць тому

      How would a "basic income" mean you get homes at some low price to match such a basic income. Most homes are far above basic.

    • @honkytonk4465
      @honkytonk4465 Місяць тому +2

      AGI or ASI can build everything, provided you have enough energy

  • @murraylove
    @murraylove Місяць тому

    If simulations, then why not simulations within simulations, and so on all the way down? Also, why would a creator/simulator make such an extravagantly vast and massively detailed universe, with pain and death and all that? Discussing future technical capacity isn't really the main point, surely. When people seriously believed in creator gods they expected a much simpler universe (seven heavens and Hinduism aside). Why nihilistically build in futility etc.? What kind of thing does that? Maybe the worst kind of AI is heartlessly tormenting us! 😎

  • @dougg1075
    @dougg1075 24 дні тому

    Didn’t Einstein think entanglement was nonsense?

  • @planetmuskvlog3047
    @planetmuskvlog3047 Місяць тому +1

    Seriously, what is Elon working on that is nonsense equivalent to alien abductions?

  • @rw9207
    @rw9207 Місяць тому

    If you're overly cautious, the worst case is things take a little longer. If you're not cautious enough....potential species extinction.... Yeah, difficult choice.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      > if you're overly cautious, the worst thing is things take a little longer...
      Also... those who are not as overly cautious as you are can, and likely will, take over and trigger the problem before you. So not only do you need to be overly cautious, you also need to make everyone else overly cautious as well. Which is not as easy...

  • @albionicamerican8806
    @albionicamerican8806 Місяць тому +1

    I have two libertarian-related questions about AI, especially after reading Marc Andreessen's manifesto:
    1. If AI is supposed to turn into a super problem-solving tool, could it solve F.A. Hayek's alleged "knowledge problem"?
    2. If AI is supposed to make *_ALL_* material goods super abundant & cheap, would that include gold?
    In other words, the current AI wishful thinking implicitly challenges two key libertarian beliefs, namely, the impossibility of central economic planning, and the use of gold as a scarce commodity for stabilizing the monetary system.

  • @robxsiq7744
    @robxsiq7744 Місяць тому

    Around the 36:00 mark, the discussion turns weird. Here's the thing: are you writing to have the best book, or are you writing because you enjoy it? Why write a book when there are better authors out there? Why ride a bike when there are better cyclists out there, or when the car has been invented? You do it because you enjoy it, not because you will be the best of the best. Both these guys missed the mark... scary, considering they are meant to have a pretty good understanding of what AI will bring to society. A true artist will do art even though they may not be the best... or even good. They do it because it's a personal outlet. No pills needed.

  • @albionicamerican8806
    @albionicamerican8806 Місяць тому

    Heh. Sabine Hossenfelder just uploaded a video about the closure/failure of Bostrom's grift, the Future of Humanity Institute.

  • @malcolmspark
    @malcolmspark Місяць тому +5

    Most of us need to experience 'flow', where we lose ourselves in something we love. However, if A.I. could do it better for us, then 'flow' may no longer be possible for us, and that would be a tragedy. If you don't know what 'flow' is, look it up. This is the individual who introduced the concept of 'flow': Mihály Csíkszentmihályi.

    • @minimal3734
      @minimal3734 Місяць тому +3

      Why should the fact that AI can do something better prevent you from experiencing flow in your own endeavors?

    • @emparadi7328
      @emparadi7328 Місяць тому +2

      @@minimal3734 Poetic how the most important topic ever is littered with nonsense like this, from people too confused to tie their shoes, never mind grasp its significance.
      All's a cosmic joke

    • @malcolmspark
      @malcolmspark Місяць тому

      @@minimal3734 Not an easy question to answer. To get into flow we not only need something we're very interested in but also a sense of purpose. For most of us that sense of purpose comes from outside ourselves and it's often a vision of achieving something that will benefit society, our loved ones or friends. It's that sense of purpose that A.I. might interrupt.

  • @LaboriousCretin
    @LaboriousCretin Місяць тому +1

    One person's utopia is another person's dystopia. Likewise, morals and ethics change from person to person and group to group.

  • @mrbeastly3444
    @mrbeastly3444 Місяць тому

    21:56 "...in a trajectory where AI is not developed..." I'm truly not sure what Nick is trying to get at here? We currently have all kinds of AI developed and in rapid development. Is he worried that a "super intelligent AI" might never be developed? And, if a "super intelligent AI" is developed, does he feel like there's a way to align/control that ASI? E g. To keep planet Earth in a condition where humans can continue to live on it?

  • @flashmo7
    @flashmo7 Місяць тому

    ;)

  • @athanatic
    @athanatic Місяць тому +1

    Eliezer talked EVERY person who accepted the challenge into letting him, "the computer," escape. He doesn't do the challenge anymore and his secret may have gotten out, but it is irrelevant since 100% of people let him, a non-modified human, out of the "safety container."
    I just want some level of growing certainty that we are doing _something_ to reduce, or at least establish with some confidence, that P(doom) is not 100% (or however that is measured.)
    The search for meaningful challenges is something we have already been on since the Industrial Revolution! This line of discussion is moot if we can't create meaning for ourselves in society. The direction that creates struggle and meaning the way we evolved has been proposed by Dr. Ted Kaczynski.
    I am going to have to watch another video to find out about Nick's book, but this devolution into an alt.extropy 1990s USENET newsgroup discussion is amusing!

    • @SoviCalc
      @SoviCalc Місяць тому +1

      You get some concerning comments, Michael.

    • @tellesu
      @tellesu Місяць тому

      P(doom) is an apocalyptic fantasy, equivalent to the Rapture for evangelicals. There is no way to calculate it. We know it isn't 100% because humans have access to nuclear weapons and the sun can always randomly EMP the whole planet. AI doom is just another in a long line of apocalyptic traditions.
      You're better off trying to discern what the bounds of possibility are within actually realistic scenarios.

  • @planetmuskvlog3047
    @planetmuskvlog3047 Місяць тому +2

    Why the dig at Elon straight out of the gate? A touch of the EDS?

  • @mrWhite81
    @mrWhite81 Місяць тому

    Gifted with a ?

  • @luzi29
    @luzi29 Місяць тому

    Writing with ChatGPT is also a challenge 🤷‍♂️ You want to individualise it, so you have to talk with it and clarify your viewpoints, etc.

    • @mrbeastly3444
      @mrbeastly3444 Місяць тому

      What if ChatGPT keeps getting 10x better every 6 months for a few more years... then it won't be "hard to use" anymore...

  • @FlavorWriter
    @FlavorWriter Місяць тому

    New Mexican Pizza is possible. Modernist Pizza HAD how much money, to at least not make this tome a tome? It's trash. And if you notice -- no one knows what modern is, with or without compare. What is allowed, when people aren't an audience?

  • @FlavorWriter
    @FlavorWriter Місяць тому

    I say "New Mexican Pizza;," and corrected "they" say "New Mexico Pizza." Is there hope to articulate identity when you grow up "white-looking?"

  • @albionicamerican8806
    @albionicamerican8806 Місяць тому +1

    How did waiting for an AI utopia work out for Vernor Vinge?

  • @albionicamerican8806
    @albionicamerican8806 Місяць тому

    It's hard not to think that this whole AI business is just another Silicon Valley grift. In reality we're living in a technologically stagnant era, as Peter Thiel has been arguing for years. And how did waiting for the AI singularity work out for the late Vernor Vinge?

    • @ireneuszpyc6684
      @ireneuszpyc6684 Місяць тому

      There's a podcast called Better Offline - an Australian who argues that this A.I. boom is just another tech bubble, which will burst in a few years' time (like all bubbles do)

    • @honkytonk4465
      @honkytonk4465 Місяць тому

      ​@@ireneuszpyc6684seems quite unlikely

    • @ireneuszpyc6684
      @ireneuszpyc6684 Місяць тому

      @@honkytonk4465 make a video about it: present your arguments

    • @miramichi30
      @miramichi30 Місяць тому

      @@ireneuszpyc6684 There was an internet bubble in the 90s, but that didn't mean that the internet wasn't a thing. Just because some people might be overvaluing something in the short term does not invalidate its long-term worth (or impact).

  • @rey82rey82
    @rey82rey82 Місяць тому

    No such place

  • @KatharineOsborne
    @KatharineOsborne Місяць тому

    The "smart enough to create it but dumb enough not to address the control problem" is dumb. Evolution created intelligence without intelligence. Intelligence is an emergent property of a series of simple systems. Saying that intelligence is super hard because it's intelligence is elevating it above what it actually is. So this is just another example of anthropocentric bias and thinking we are special. It's a bad reason to dismiss the risk.

  • @albionicamerican8806
    @albionicamerican8806 Місяць тому

    I can just imagine what the authorities at Oxford said to justify shutting down Nick Bostrom's phony "institute":
    "Dr. Bostrom, we believe that the purpose of science is to serve mankind. You, however, seem to regard science as some kind of dodge or hustle. Your theories are the worst kind of popular tripe. Your methods are sloppy, and your conclusions are highly questionable. You are a poor scientist, Dr. Bostrom."

  • @GerardSans
    @GerardSans 27 днів тому

    Why is a Philosopher talking about technology? Would a Philosopher like it if a plumber talked about Philosophy? Maybe he should talk with technology experts to understand what he is talking about

    • @GerardSans
      @GerardSans 27 днів тому

      If elephants were able to fly, it would be very dangerous. I agree, but the fact is they don't.

    • @GerardSans
      @GerardSans 27 днів тому

      Nick Bostrom's reasoning, while possible, occupies a fringe position. It assumes some sort of aggressive AI, while neutral and positive outcomes are equally probable.
      While philosophically valid, it is not a sound argument. If a superintelligence is indeed inevitable, the fact that he proposes to try to control it from the position of a lesser intelligence is a contradiction.
      If you have a substance that can't be contained, then the effort to contain it is nonsensical by your own premises.
      Bostrom's argument is not very sophisticated as it stands. If your premise is that a superintelligent AI is inevitable, then we need to prepare to be considered equals, or inferior. The control attempts seem misguided and logically contradictory.

  • @human_shaped
    @human_shaped Місяць тому +5

    Michael is supposed to be rational and a skeptic, but hasn't seen through Elon yet.

  • @gauravtejpal8901
    @gauravtejpal8901 Місяць тому +1

    These AI dudes sure do love to hype themselves up. And they suffer from ignorance at a fundamental level

  • @lemdixon01
    @lemdixon01 Місяць тому +3

    I thought they're supposed to be skeptics and not believers or evangelists.

  • @tszymk77
    @tszymk77 Місяць тому +1

    Will you ever be skeptical of the holocaust narrative?

  • @BrianPellerin
    @BrianPellerin Місяць тому

    a quick reading of Revelation agrees with what you're saying 👀

  • @user-op5tx4tx8f
    @user-op5tx4tx8f Місяць тому +1

    That dude sounds vaccinated

    • @lemdixon01
      @lemdixon01 Місяць тому

      Lol, fully boosted. I thought they're supposed to be skeptics and not believers or evangelists.

    • @kjetilknyttnev3702
      @kjetilknyttnev3702 Місяць тому +5

      "Dude" might be on a different opinion than yours regarding vaccines. Did that ever occur to you?
      Being "sceptic" doesn't mean to blatantly disregard everything someone questioned at some point.

    • @lemdixon01
      @lemdixon01 Місяць тому

      @@kjetilknyttnev3702 Of course a vaxed person will have a different opinion to an unvaxed person, but there is also truth. I see that you put the word sceptic in quotes, maybe to make its meaning ambiguous and vague so as to redefine it, such as being in agreement with the orthodoxy and current dogma.
