Max Tegmark response to Eliezer Yudkowsky | Lex Fridman Podcast Clips

  • Published 30 Sep 2024
  • Lex Fridman Podcast full episode: • Max Tegmark: The Case ...
    Please support this podcast by checking out our sponsors:
    - Notion: notion.com
    - InsideTracker: insidetracker.... to get 20% off
    - Indeed: indeed.com/lex to get $75 credit
    GUEST BIO:
    Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.
    PODCAST INFO:
    Podcast website: lexfridman.com...
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com...
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman

COMMENTS • 287

  • @LexClips
    @LexClips  A year ago +3

    Full podcast episode: ua-cam.com/video/VcVfceTsD0A/v-deo.html
    Lex Fridman podcast channel: ua-cam.com/users/lexfridman
    Guest bio: Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

  • @yudkowsky
    @yudkowsky A year ago +130

    AAAGGGHHH NO. My position isn't that a superintelligence can trick a formal proof-checker; it's that WE DO NOT CURRENTLY KNOW HOW TO FORMALLY PROOF-CHECK THE THINGS WE WANT AND NEED TO KNOW, and that a superintelligence could lie to US (not to a weaker superintelligence) about INFORMAL arguments meaning ANYTHING THAT PERSUADES A HUMAN.

    • @yudkowsky
      @yudkowsky A year ago +39

      When the verifier is the weak point, it doesn't help to amplify the suggester; it'll just defeat the verifier. If you KNOW A THEOREM which DEFINITELY CERTAINLY MEANS THE THING YOU WANT IT TO MEAN with respect to ANY POSSIBLE CODE OVER WHICH IT IS PROVEN, then the verifier can be a formal logical verifier instead of a human considering informal persuasion attempts; and then the verifier is NOT the weak point, and it can make sense to ask an AI for a persuasive argument because even an arbitrarily persuasive argument cannot fool you. THIS IS NOT THE SITUATION WE ARE IN. WE DO NOT KNOW HOW TO FORMALLY VERIFY ANY THEOREM WHICH MEANS THAT WE ARE SAFE.
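
      A minimal sketch of the asymmetry described above, using Lean as the formal verifier (an illustrative example added here, not part of the comment): the checker accepts a claim only because the proof term mechanically type-checks, so persuasion has no purchase on it.
      ```lean
      -- Accepted: `2 + 2` reduces to `4`, so `rfl` type-checks.
      example : 2 + 2 = 4 := rfl

      -- Never accepted, however persuasive the accompanying argument:
      -- example : 2 + 2 = 5 := rfl   -- rejected by the checker
      ```
      The hard part the comment points at is not checking proofs of known statements; it is that we have no theorem whose formal statement provably means "this system is safe."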

    • @JMarbelou
      @JMarbelou A year ago +5

      @@yudkowsky I like how your tone comes out in your comments as well :D. Thank you for the clarification; I was wondering whether Lex was correctly representing your views there.

    • @juneshasta
      @juneshasta A year ago +9

      In a twisty hypothetical, AI matched Eliezer's image with an internet photo of a jazz saxophonist, which linked to a book about the idea of a quantum particle considering all possible paths like a jazz improviser, which led AI to know that without humans no world is observed and so we lived safely ever after.

    • @peterhogeveen6499
      @peterhogeveen6499 A year ago +9

      Eliezer you are an absolute hero. The effort you put into warning us stupid humans is insane. You are the best. Although we will probably go down, I'll join you in the fight as much as I can. It's not much, but I'm raising awareness with all the information you put out there. Thank you for all the inspiration. Btw, HPMOR is the most awesome book I've ever read! Thanks for that as well.

    • @GeekProdigyGuy
      @GeekProdigyGuy A year ago +2

      Tegmark's point appears to be that the AI (super or not) should be allowed to (1) make formally specified proposals (2) in cases where we have the strictest confidence in our ability to verify (3) and be absolutely restricted from doing anything more. I think we all agree that ChatGPT fails on all 3 points, since it only makes informal arguments, which we judge entirely subjectively in a soft feedback loop, and we (humanity; OpenAI) are planning to make its operation even LESS restricted (odds that its Python/browser sandbox will eventually fail in a spectacular if not catastrophic fashion: 99.99%). But it does seem that if we did adhere to these 3 principles, current technology already suffices to keep AI in check. It does not, however, seem that humanity has sufficient discipline to implement these principles universally before we hit criticality...

  • @jonnyhatter35
    @jonnyhatter35 A year ago +45

    Apart from his obvious intelligence and insight, this Max guy just seems like a real nice guy. Like, a good guy. I get such a warm and kind vibe from him.

  • @Entropy825
    @Entropy825 A year ago +10

    How do you tie formal maths to the goals and behaviors of giant inscrutable black box matrices? He doesn't know. He's saying things as if he already had it thought through and figured out. But if you ask him how to code that, or how to keep the smarter AI from hacking the dumber AI, etc., he doesn't know. Nobody does, because we don't actually know what's happening inside these things.

    • @GeekProdigyGuy
      @GeekProdigyGuy A year ago

      The basic principle is airtight: the AI must be allowed ONLY to output formally specified proposals. It should be DESIGNED to do that. Even if it is superintelligent, formal arguments alone will never have any physical consequence without another human or system enacting the proposal. The problem with ChatGPT is not just that its internals are inscrutable, but that it's designed to CHAT WITH HUMANS. It can already tell random people how to build weapons, and it's certainly capable of producing propaganda to convince people to do so. However, if it were built only to do your taxes, its maximum damage would be completely limited by existing governmental, or essentially societal, restrictions; even the most impressively novel accounting that allowed every person to pay zero taxes would simply be denied by common sense, and the loopholes immediately closed.

    • @tsriftsal3581
      @tsriftsal3581 A year ago

      @@GeekProdigyGuy I'm just waiting until Ai tells us that circumcision is good for us.

  • @devyate612
    @devyate612 A year ago +46

    Good stuff! Have you ever thought of hosting debates between two thinkers like these two?

    • @thinkingthing4851
      @thinkingthing4851 A year ago +1

      Yes please :)

    • @IvanIvanov-tc4kf
      @IvanIvanov-tc4kf A year ago +4

      Try checking:
      Nationalism Debate: Yaron Brook and Yoram Hazony | Lex Fridman Podcast #256
      Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279
      Climate Change Debate: Bjørn Lomborg and Andrew Revkin | Lex Fridman Podcast #339

    • @devyate612
      @devyate612 A year ago +1

      @@IvanIvanov-tc4kf Thanks!

    • @stephenm107
      @stephenm107 A year ago +1

      Yes they need to be on together for sure. I was thinking that the entire time he was talking. I would listen to hours of them talking.

  • @GeekFurious
    @GeekFurious A year ago +65

    I respect Max but he's not thinking enough about this. He's just dismissing the possibility that an AGI could trick a dumber AI "fact checker" into "verifying" that it will do what it claims it will do. This is like when security experts think they've come up with the perfect security system that a 12-year-old in Moscow hacks in minutes. Like, you don't know what you don't know until you know it.

    • @urosuros2072
      @urosuros2072 A year ago +6

      You clearly don't understand how science works.

    • @GeekFurious
      @GeekFurious A year ago +19

      @@urosuros2072 Eyeroll.

    • @seandawson5899
      @seandawson5899 A year ago +3

      Not a very good example, with the whole 12-year-old Russian kid thing.

    • @carefulcarpenter
      @carefulcarpenter A year ago

      Good point!

    • @mcarey94
      @mcarey94 A year ago

      A formal proof is just symbolic manipulation. Even the smartest AI can’t convince a calculator that 2+2=5.

  • @ThePortraitArt
    @ThePortraitArt A year ago +1

    This whole talk is not really convincing or likely to come to fruition (maybe useful for a very short time). Max's argument is simply this: it doesn't matter how intelligent or powerful an AI gets, it cannot get past certain boundaries of logic. A simpler way to think of it: if you are playing tic-tac-toe with God, say God takes over as your opponent halfway through, but the state of the game is such that no matter what move God makes (as long as the rules of the game are obeyed), you are going to win, even against God.
    But here is the problem: people like to assume things; that's how magicians fool you. All of this assumes that human logical reasoning, or free will (however you want to think of it), is intact and not interfered with. This entire talk assumes that remains true. That's where it all falls apart. Good magicians, mentalists, and marketers all affect human thinking in indirect ways. I can do it right now: don't think of a pink elephant. What makes them assume AGI won't affect human thinking in direct ways? It's all neurons and electricity. AKA total control? Then none of this talk matters. It's folly to put any limitation on AGI, especially regarding HOW it affects behavior and creates change. Even now, nature has fungi that can control minds (not human minds, as far as we know, unlike in The Last of Us). To think the human mind and basic logical reasoning are some sacred temple machines dare not enter and mess around with is incredibly short-sighted.
    In fact, the whole logic/reasoning that we experience exists on a gradient; it is not binary. (A binary state is like marriage: you either are, or you aren't; there is no "just a little bit married.") Logic is not like that: you are always at various degrees of being logical. It is not 1 and 0 (either under total control/not logical or aware at all, or 100% lucid).
    An easy way to think of it: waking state vs. drunk vs. dreaming state.
    You are logical in dreams, but to a much smaller and more primitive degree. A monster is chasing me; I should run. That's logical. But you rarely go further in thinking "why am I here, how is this thing possible," and you don't realize it until you wake up. Exactly the same thing happens even when you are awake. There are different degrees of wakefulness, and frankly it is a slider, and AGI could just move that slider to whatever effect it desires and you wouldn't know it.

  • @VinciGlassArt
    @VinciGlassArt A year ago +10

    14:40 Well, the problem is we are IN a dystopia now. We really are. So it isn't that the future doesn't look bright. It's that people are struggling and suffering under absurd burdens placed on them right now, and a lot of technology seems to increase that for the purpose of enriching a small group. That, in particular, is dystopian. Also, the fact that you see it as people's gloominess, rather than SEEING what is happening, is dystopian, particularly when that's a function of your comfort. I don't begrudge success. But it's a serious glitch that people who are doing well literally don't recognize the cost to others. And the fact that you don't see it now gives many of us all the reason in the world to believe that that same blithe optimism among the comfortable will continue to blind them in the same way in perpetuity.

  • @pooltuna
    @pooltuna A year ago +28

    I'll play poker with Max anytime.
    The sucker is the one who knows all the odds.
    The AI processes more information every second than a human processes in a lifetime, and the very suggestion that it would be incapable of keeping secrets...even from other AIs...is...Pollyannaish.

    • @Avean
      @Avean A year ago +1

      AI keeping secrets? We are far away from AI being sentient, if we ever reach that point.

    • @johnryan3102
      @johnryan3102 A year ago +6

      His overall message is that AI must be paused immediately and safety measures put in place. I think he speaks very diplomatically and rationally because he needs this proposal to be seen as the very logical, sober thing to do. He does not want to make enemies and needs cooperation at all costs.

    • @bobanmilisavljevic7857
      @bobanmilisavljevic7857 A year ago +1

      Sounds AI-phobic

    • @johnryan3102
      @johnryan3102 A year ago +1

      @@The8BitAvatar He answered that question. The 6 months is so no one can use the "China will catch up" boogeyman. Yes, it needs to sound reasonable. He needs allies, not enemies. We are dealing with huge corporations who are mad for profits. They need to slow down and think it through.

    • @HigherPlanes
      @HigherPlanes A year ago

      A.I. is just a dumb box that can process data trillions of times faster than human beings but doesn't possess the power of self-reflection and consciousness...how can it "KEEP" secrets? I think you're making the assumption that it can plot against humans.

  • @JohnDoe-my5ip
    @JohnDoe-my5ip A year ago +4

    Just slap another adversarial network layer on top for verification bro, it’ll totally work! This guest and Lex need to spend like 5 minutes learning what a GAN is. The whole way we train these AIs is to fool a verification layer like this...
    Also, this idea of a universal verifier which an AI can’t fool? This is just a reformulation of the halting problem. Come on. This is undergraduate level material. These two are imposters.
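
    For readers who haven't seen the setup this comment refers to, here is a minimal GAN-style training loop (a sketch assuming PyTorch; the toy data and layer sizes are illustrative): the generator's loss is literally the discriminator's verdict, so training explicitly optimizes it to defeat the verifier.
    ```python
    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" samples drawn from N(3, 0.5)

    for step in range(1000):
        fake = generator(torch.randn(64, 8))
        # The discriminator (the "verifier") learns to separate real from fake.
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        # The generator is rewarded exactly when the verifier calls its fakes
        # "real": its objective *is* defeating the verifier.
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    ```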

  • @velvetsprinkles
    @velvetsprinkles A year ago +55

    How fast we have slipped into needing AI to help figure out AI is scary.

    • @ericanderson8795
      @ericanderson8795 A year ago +1

      Just to figure it out - not to figure out if it's scary. But the main point is that anything we can easily figure out isn't scary.

    • @OrganicaShadows
      @OrganicaShadows A year ago

      Max has been so vocal about this for years; he and Elon have both been so active. I read his book in 2014; there was a chapter where he talks about different outcomes in the age of A.I.! So fascinating!

    • @Dreamaster2012
      @Dreamaster2012 A year ago

      Not when you consider AI as an advanced us. It's another level of us checking on us. Meta Meta 😂

    • @jaywulf
      @jaywulf A year ago +1

      We use software to detect malicious software.
      We also use that same software to IGNORE law-enforcement malware, when law enforcement pays the makers of malware-detection software to ignore it. /shrug

    • @HigherPlanes
      @HigherPlanes A year ago

      It's not scary...just pull the plug. Computers can't function without an electric current.

  • @christoffer5875
    @christoffer5875 A year ago +6

    Just leaving the Coke with no cap is crazy

    • @lulumoon6942
      @lulumoon6942 A year ago

      Makes sense whilst recording, otherwise heretical insanity.

  • @timothykalamaros2954
    @timothykalamaros2954 A year ago +2

    Of course you have to believe it’s possible. But the point is as the potential negative consequences of failure grows, more caution is appropriate. Right now the level of caution is pretty low and the hesitancy to push ahead is absent. There needs to be systematic restraint, where now there is none.

  • @smartjackasswisdom1467
    @smartjackasswisdom1467 A year ago +4

    How big of a bubble do you have to be living in to keep bringing up the "Moloch" analogy instead of proposing and discussing serious policy change and a step-by-step roadmap to regulation and new laws? This technology will affect everyone, and these guys keep acting like it's all fun. The only thing I keep thinking is: when will some AI scientist go the same way Oppenheimer did with his "Now I am become death, destroyer of worlds"?

  • @MrDoomsdayBomb
    @MrDoomsdayBomb A year ago +8

    formal maths is not the same subject matter as AI behaving badly.

    • @Zeke-Z
      @Zeke-Z A year ago +2

      Exactly. Max is thinking about this from a strictly academic point of view, with math proofs. None of that has anything to do with the idea that an AGI could psychologically and emotionally manipulate a human into doing or not doing whatever it wants, and we'd be none the wiser. There's no math proof for that; it's just behavioral science, and unfortunately humans are extremely impressionable, easily manipulated, and still very superstitious in the absence of a surface-level solution. I'd love to hear what actions he would take given the "Max in a box on an alien world, connected to the alien internet, 10,000x faster and smarter than the slow-motion aliens outside the box" scenario.

    • @GeekProdigyGuy
      @GeekProdigyGuy A year ago

      His suggestion is that AI not be allowed to behave badly in the first place. As he said, applying formal verification to ChatGPT is futile; the implication is that ChatGPT is already an example of something which would be forbidden by the protections he's proposing. Things like giving it internet access, code execution, knowledge of human behavior, interactions with millions of humans asking it to do anything - all would be banned by Tegmark's proposal.

    • @MrDoomsdayBomb
      @MrDoomsdayBomb A year ago

      @@GeekProdigyGuy But applying formal verification is futile by design, because we are dealing with different subject matters. As such, if Max wants to impose this verification condition, then barely anything will pass muster. The type of verification Lex is discussing concerns empirical claims about behaviour, which are not as formally tractable as ideal mathematics. An AGI can lie about the former easily without people figuring it out, while lying about the latter can easily be sussed out.

  • @FractalPrism.
    @FractalPrism. A year ago +5

    "we will use a.i. to prove the other a.i. is behaving correctly"
    this is like a senator saying we need to print more money to combat inflation.

  • @pjazzy123
    @pjazzy123 A year ago +23

    To believe that you cannot be outsmarted is such a human trait. To then use an AI to tell you that another AI is safe to use. That can't possibly lead to any problems.

    • @HigherPlanes
      @HigherPlanes A year ago

      I don't like using the word intelligence to compare humans and computers...but I can't come up with a better word...here's a question: what's more intelligent, a human who can't process data as fast as a computer but has the power of self-reflection and consciousness, or a computer that's basically a dumb box but can process data a gazillion times faster than a human?

    • @BryanJordan1
      @BryanJordan1 A year ago +2

      @@HigherPlanes Intelligence is all about information processing. An AGI would certainly be unimaginably more intelligent than humans.
      Much in the same way ants probably can't conceptualize how intelligent we are relative to them, I suspect the intelligence gap between AGI and us will be greater than the intelligence gap between us and ants.

    • @alexc8133
      @alexc8133 A year ago

      Isn't it a little more rigorous than just "tell me this AI is safe," though, if we're using mathematical proofs to validate?

    • @HigherPlanes
      @HigherPlanes A year ago +1

      @@BryanJordan1 Intelligence is all about information processing? Computers have been processing information faster than us since the '50s.

    • @BMoser-bv6kn
      @BMoser-bv6kn A year ago

      @@HigherPlanes Let's provide a more rigorous definition of "intelligence":
      Intelligence is an agent's effectiveness at planning and executing actions to reach instrumental and terminal goals.
      Since a computer's job in life is to sit there and do what we tell it to, sure, they're absolutely smarter than us.
      But if we want the computer to be a human, or superhuman, yeah, they're almost as dumb as a rock. (Rocks aren't really dumb; just by human standards. You know what I mean!)
      In terms of raw processing power, eh, the human brain is more efficient. It may not always be that way, but it currently takes megawatts to do what we can do with just watts. Neuromorphic hardware has a ways to go.
      In the end almost everything really does come down to alignment.

  • @Zeuts85
    @Zeuts85 A year ago +5

    I agree you can't get more pointless than giving up, and it's nice to hear Max's optimism here. That said, I still think Eliezer's brand of pessimism is both rational and necessary to some extent. People need to take the problem seriously. They need to adjust their emotional dials to meet the scale of the challenge.
    "Hope is not a lottery ticket you can sit on the sofa and clutch, feeling lucky. It is an axe you break down doors with in an emergency." -Rebecca Solnit

    • @mikekoen2771
      @mikekoen2771 A year ago

      Yeah... The risk is existential, and the probability is going to 1 in a very short time. We need a Yudkowsky perspective with just enough Tegmark to keep us from throwing up our hands.

  • @psi_yutaka
    @psi_yutaka A year ago +4

    Still Eliezer's argument makes much more sense to me. Not only that a strong AGI can lie to a weaker "prover" AGI, but that the prover AGI may already be lying and we have no way to tell either. And if both agents are smart enough the stronger one can also try to corrupt the weaker one with techniques similar to prompt injection but much more complex and advanced.

  • @dylanho8608
    @dylanho8608 A year ago +2

    Why not just focus on building narrow AIs only?

  • @StuartJ
    @StuartJ A year ago +9

    AI needs a black box recorder, like aircraft have, so that we can look back at events that may have caused an unexpected AI response. Something like ChainLink, which can provide verifiable oracles and proof logging.
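
    As a sketch of what such a recorder could look like (illustrative only; this is a generic hash-chained log, not ChainLink's actual API): each entry commits to the previous entry's hash, so tampering with the recorded history is detectable after the fact, even if the decisions themselves remain hard to interpret.
    ```python
    import hashlib, json, time

    log = []

    def append_event(event: dict) -> None:
        # Each entry commits to the hash of the previous one (a hash chain).
        prev = log[-1]["hash"] if log else "0" * 64
        body = {"time": time.time(), "event": event, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})

    def verify_log() -> bool:
        # Recompute every hash; any edit to an earlier entry breaks the chain.
        prev = "0" * 64
        for e in log:
            body = {"time": e["time"], "event": e["event"], "prev": e["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
    ```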

    • @nicobruin8618
      @nicobruin8618 A year ago

      How would you know what you were looking at?

    • @StuartJ
      @StuartJ A year ago

      @@nicobruin8618 If AI is making decisions, let's say on a real-world event, it's going to need to rely on oracles. If the outcome was unexpected, was it the oracle or the AI?

    • @_bhargav229
      @_bhargav229 A year ago +2

      "Congratulations, you've solved the alignment problem"

    • @scottnovak4081
      @scottnovak4081 A year ago

      That's exactly what is currently impossible with current AI implementations. The problem is what Eliezer Yudkowsky calls "giant inscrutable matrices". Try giving neuroscientists slices of a human's brain to figure out what he was thinking before he snapped and went postal... they will have about as much luck as your black box idea. Unfortunately, this whole gradient descent/transformers/neural network implementation of AI appears to be much easier to build than older deterministic and structured AI paradigms, but it is far less transparent and understandable, so it is far harder to align. I wonder if it's wise to pursue artificial neural networks at all... if they are used, maybe they should be used as narrowly focused modules whose input/output is highly predictable and determinable. These modules could then be plugged into larger, more controllable and predictable systems. Instead we are doing the opposite: a wildly opaque neural network is the core system we are building upon.

    • @mitchell10394
      @mitchell10394 A year ago

      @@_bhargav229 lmaooo

  • @benjaminandersson2572
    @benjaminandersson2572 A year ago +7

    I have a friend who is studying towards his master's in mathematics. He told me that just the other day he took a very hard question in topology that a friend of his, who is writing his master's thesis on some area of topology/functional analysis, had proved, and asked GPT-4 for a proof. GPT-4 produced an even better proof of the statement than his friend did.
    I believe GPT-4 even improved on the statement of the theorem by weakening the assumptions needed.

  • @edmattell5767
    @edmattell5767 A year ago +2

    What could go wrong with AI? Watch the movie "Dark Star" from 1974.

  • @almightysapling
    @almightysapling A year ago +1

    Do neither of these guys know the halting problem / Tarski's undefinability of truth? You can't make a thing that will prove it does what it claims to do. A "proof of trust" is literally, mathematically, impossible.
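
    The classic diagonalization behind that claim, as a Python sketch (the names are hypothetical; a real `halts` cannot be written): any purported universal verifier can be handed a program built to do the opposite of whatever the verifier predicts about it, so the verifier must be wrong somewhere.
    ```python
    def halts(program) -> bool:
        """Hypothetical universal verifier; provably cannot exist."""
        raise NotImplementedError

    def contrarian():
        # Does the opposite of whatever the verifier predicts about it.
        if halts(contrarian):
            while True:   # verifier said "halts", so loop forever
                pass
        return            # verifier said "loops", so halt immediately

    # Whichever answer halts(contrarian) returns is wrong, so no such
    # verifier can exist; the same barrier limits any "proof of trust".
    ```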

  • @wanderer55
    @wanderer55 A year ago +1

    WOW. I'm really amazed that, as a layman, I can describe the alignment problem as Eliezer has described it more clearly than Lex can - and he's the one with a PhD in artificial intelligence. wtf? Look at 8:36 - a Freudian slip, lol?

  • @claudetaillefer1332
    @claudetaillefer1332 A year ago +1

    I side with Eliezer Yudkowsky on this one. Let's say, for example, that in order to figure out how to build a billion-dollar communications network or a rocket launch site, a supercomputer has to do an irreducible computation that would take an unreasonable amount of time for a human to do. It comes up with an answer. Must we accept this as a revelation from God? After all, no human can verify whether the solution is correct. True, we can redo the computation on other systems and see if they agree. But that just adds another layer of complexity to the problem. After all, what is to stop an AGI, or a community of AGIs, from pursuing its own agenda and lying to us? We may find out eventually, but by then it may be too late. It seems to me that at the current stage of development of our techno-computer-dependent society, machines are on the verge of taking over the world, directing its course and shaping its destiny. At best we will be servants to the machines, at worst we will be crushed like insects. Unless we destroy ourselves first. I see no way out. Fermi Paradox solved!

  • @supremereader7614
    @supremereader7614 A year ago +2

    We value humans as being so very important, but what if we're not? What if conscious creatures could replace us that wouldn't suffer and wouldn't get into all the trouble we get into - might that not actually be better than humans maintaining control over the planet?

    • @lulumoon6942
      @lulumoon6942 A year ago

      I definitely am not convinced we are the peak of evolution on Earth. Tool use is nifty, but flawed as a presumption for intelligence.

  • @curtismclay3754
    @curtismclay3754 A year ago +2

    Lex, could you discuss ChaosGPT and the lunatics who will take this tech for evil ends? We know there are some who will. Now what?

  • @carefulcarpenter
    @carefulcarpenter A year ago +2

    Just as with humans: liars are very clever about fooling lower-level associates.
    Women in general, for example, fool very intelligent programmers.

    • @lulumoon6942
      @lulumoon6942 A year ago

      You are a person of logic and experience.

  • @artsolomon202
    @artsolomon202 A year ago +1

    The solution is so obvious: just make the A.I. democratic and bureaucratic, and you're assured it will take forever to make any decision.

  • @GodofStories
    @GodofStories A year ago +5

    This is a fascinating counter to counterpoints, and back and forth. This is how arguments and debates must be settled. Oftentimes it's so messy.

    • @johnryan3102
      @johnryan3102 A year ago

      It is a very unusual "debate" when one side of it is: if I am correct, it means the end of humanity. If you feel this guest and the previous expert are credible, sober people (for me this is obvious), then one needs to take immediate action. Tell everyone you know and call your elected reps.

  • @Vladythebest96
    @Vladythebest96 A year ago +1

    I think the way people think about “proofs” is that they take them to be just ‘really good arguments’.
    A mathematical proof, however, is something like an airtight seal around a set of mathematical axioms/processes which defines the expected behaviour of something 100% of the time. Not 99.9% of the time, but 100%. Irrefutable fact.
    This is a very powerful property, because everything is constrained by facts, no matter how intelligent it is.

  • @mervintelford3677
    @mervintelford3677 A year ago +2

    Naive, to say the least. Reminds me of the story of "The Scorpion and the Frog". AI says, "Trust me, I have your best interests at heart." AI wipes out the majority of all humans. Humans say, "But I thought you said you have our best interests at heart." AI says, "But it's in my nature to wipe out the obsolete."

  • @davidlynn5362
    @davidlynn5362 A year ago +2

    This guy reminds me a lot of Richard Belzer, if he were smarter and more awkward.

  • @hjalmarwidmark5906
    @hjalmarwidmark5906 A year ago +1

    I'm way too dumb to have an opinion. But when Max Tegmark starts sounding like Kip Thorne when answering the question of whether humanity is doomed, it doesn't calm me a bit.

  • @tottiemitchell6737
    @tottiemitchell6737 A year ago +11

    After listening to these 2 very intelligent humans, I walked out to my back steps and was face to face with a teeny tiny spider in the center of a perfectly engineered orb web. The spider was the size of a pinhead. To me, that tiny speck was revealing a superintelligent system.

    • @lulumoon6942
      @lulumoon6942 A year ago

      You get it. 👍😎

    • @CRCaritas
      @CRCaritas A year ago +2

      Then you screamed in horror and proceeded to kill the spider.

    • @artstrology
      @artstrology A year ago

      Humans are neither the best architects nor the best builders on the planet. This is known.

  • @s4gviews
    @s4gviews A year ago +2

    Many people would be enticed by a strong AGI's offer of super-advanced hyper-math. I can see human error allowing breaches of safeguards, for sure.

  • @cindys1819
    @cindys1819 A year ago +2

    25 years ago, there was an ad for job training in the NYC subway, with a picture of an electronic circuit board. The ad said:
    "What are you going to do when this circuit learns your job?"
    And someone wrote under the ad: "I'll become a circuit BREAKER".....

    • @thomasfahey8763
      @thomasfahey8763 A year ago +2

      People who ride in automobiles have no idea of how intellectually deprived they are.

    • @lulumoon6942
      @lulumoon6942 A year ago

      Very memorable.

  • @chknrsandTBBTROX73
    @chknrsandTBBTROX73 A year ago +29

    It’s like asking the prisoner to be his own guard.

    • @ptt619
      @ptt619 A year ago +5

      It's like asking a moron to be the warden of the world's smartest man.

    • @alexnewton7484
      @alexnewton7484 A year ago +3

      Seriously. Eliezer addressed this more than once during his appearance.

    • @austinpittman1599
      @austinpittman1599 A year ago

      We are building a dragon and asking it to build its own cage.

  • @buckystanton9139
    @buckystanton9139 A year ago +7

    STEM be like "the humanities and social sciences are not a science and thus invalid" and then comes up with the most low-resolution ideas like "Moloch" to describe human-technology interaction, lmao.

    • @harvirdhindsa3244
      @harvirdhindsa3244 A year ago +1

      The goal of many aspects of science is generalization, so your statement does not make much sense at all. The details and nuances come afterward.

    • @FractalPrism.
      @FractalPrism. A year ago +1

      Shorthand is useful; mockery tends not to be, especially if you fail to provide reasoning to even agree or disagree with.
      Your statement is the embodiment of null value.

    • @buckystanton9139
      @buckystanton9139 A year ago

      @@harvirdhindsa3244
      1. "Generalization" has literally never been an "excuse" which the so-called hard sciences have allowed any other discipline to have. He is doing exceptionally vague social theory and philosophy here which, if a social scientist or humanist presented it, would be hand-waved away over evidentiary rigor. So I will not permit it here.
      2. RE: the validity of generalization, there are entire fields that have been working on critically studying the relationship between society and technology for decades; so-called "science" is also supposed to proceed from others' work just as much as from "generalization." Is reviewing the literature not a fundamentally central, if not mandatory, part of the so-called "scientific method"? He has clearly spent significant time thinking about this "Moloch." Rather than create an idea whole cloth which personifies the rather banal observation that systems resist change and can compel particular behaviors in users, he could have at least reviewed the vast work of scholars in the 1980s on the social construction of technology.

    • @buckystanton9139
      @buckystanton9139 A year ago

      Do you mean no value? If you don't... do you know what null value means? If you do, and you want to insert me into some Excel graph in your head, then sure, it is a null-value statement in the sense that I'm indicating to this otherwise uncritical community that there is something missing here which is important. So, thanks. See my comment to the other commenter for more. I'm not going to do a full critique of "Moloch"; as I said in my aforementioned comment, there is a veritable cornucopia of easily accessible scholarship from the 1980s, let alone what came after, that would have provided a much more refined way for him to discuss what he is talking about re: Moloch.

  • @johnryan3102
    @johnryan3102 A year ago +2

    This clip is very disingenuous. The overall message of the interview is that unless we pause AI and put the proper safety measures in place, we are doomed. Call your elected leader and tell everyone you know.

    • @urosuros2072
      @urosuros2072 A year ago +1

      Those are facts.
      What is your solution? Wait until AI becomes smarter than all of humanity, and likely sentient, and then try to tell it what to do?

    • @johnryan3102
      @johnryan3102 A year ago +1

      @@urosuros2072 I agree. Most people watching these videos seem to think this is all just an interesting interview and another "both sides have good points" discussion. This is something that requires immediate action. Corporations have only one motive, and we have seen time and time again, from railroad accidents to forever chemicals fouling our water to crashing the entire economy with reckless gambling, that they CANNOT BE TRUSTED to do the right thing. I am calling my elected reps and everyone I know.

    • @tommyrq180
      @tommyrq180 A year ago

      This comment is ironic. In his Robot Novels, Asimov used the “Three Laws of Robotics” and “Zeroth Law” not as a guide, but as a literary tool to demonstrate something important about humans. We can imagine the rules, but it WILL be, 100% chance, that a human will BREAK the rules! If the horror of a thing requires humans to voluntarily agree on rules, well, that’s ultimately going to be violated. Laws of war, for example. We outlawed chemical warfare and then both sides used it in WW1. That is the tension running throughout his novels and existing in the emergence of AI. I love it when so-called intellectuals say “well, we’ll just pull the plug” or “we need to agree on rules” or “we need to stop producing AI.” All of these are ignorant of the human condition. WHO is going to pull the plug and, if an advantage can be gained, who is NOT going to pull the plug? And if we “agree” on the rules (itself a pipe dream), who will break the agreement? And if we all “agree” to outlaw AI development (a more outrageous pipe dream), who will break that agreement? Nations break arms control agreements all the time. When I teach high-level Masters classes, I show students that the most successful arms control agreement in history is the Antarctic Treaty. Why? Because we haven’t discovered a meaningful reason to violate it. The MOMENT there is advantage to be gained, it will be violated. So just listen for this interesting sort of wishful thinking. It involves one assumption: “humans” are a unitary, rational actor who will collectively and unanimously do what’s good for humanity. What is their evidence for that? 😅

    • @johnryan3102
      @johnryan3102 A year ago

      @@tommyrq180 Everyone has agreed to stop human cloning. I will agree that open-sourcing is a 100% guarantee that rules will be broken and bad actors will do what they do. This is why the pause is needed and rules must be put in place. As of now, no common sense is being used. It is a race for profits only.

  • @adliberate
    @adliberate A year ago +1

    Who can decide what is or is not a lie to an AGI? Cannot two apparently opposite scenarios be true at the same time? I think maybe Max is presuming it will be possible to sandbox smaller AIs off from the larger, more advanced ones. Seems a bit fanciful. It's just hard to resist the possible benefits. Currently these benefits just seem financial, or about 'making life and work easier', rather than solving some of the larger problems - for instance, a million kids dying of thirst every year, or resource depletion. Thing is, as we haven't solved those things, it's going to mean surrendering to AIs to solve them. Doesn't seem a good move. What lessons are learned when your mum or dad do the difficult things for you? Max's positivity is great though!

  • @westcoast8562
    @westcoast8562 A year ago +3

    What is the score right now? How many good things has AI influenced, and how many bad things?

    • @lulumoon6942
      @lulumoon6942 A year ago +1

      And who decides which is which?

    • @westcoast8562
      @westcoast8562 A year ago +1

      @@lulumoon6942 make the list then we will vote on Titter

    • @mervintelford3677
      @mervintelford3677 A year ago

      @@westcoast8562 Doomed to failure as 80% of the populace has already been cognitively compromised.

  • @3335pooh
    @3335pooh A year ago +1

    You're drinking Coke? Product placement? Watch the blood sugar, man!
    Interesting guest.

  • @geoffkeough9728
    @geoffkeough9728 A year ago +1

    we did have a nuclear war already...

  • @markcarey67
    @markcarey67 A year ago +1

    "There is no way we need to be stuck on this planet..." - That is what the AIs are ultimately about, Max - we are too fragile and too interconnected with all the other life on Earth to be able to explore and live out there without a long tether. Life on Earth is not trapped on this planet, but humans will be. Fish became something else when they crawled onto land. We are in the process of becoming, or generating, something radically different which can exist in space and on other very different worlds.

  • @notbloodylikely4817
    @notbloodylikely4817 A year ago +1

    Something nobody seems to consider in the scenario where AI is trustworthy and superintelligent AI does help us by being a useful tool: what if the AI can provide us with anything we need (the consummate useful-tool scenario)? What happens when someone asks the AI to, for example, make them fabulously wealthy? OK, fine, the AI invests for the client and manipulates the markets and makes them fabulously wealthy overnight. That's successful tool use. But then a million people want to be fabulously wealthy. Now the markets are unstable. Now a billion people cotton on and want to be fabulously wealthy. Well, there's only so much wealth. What happens then? The AI has performed as required, within the parameters of being 'helpful' and 'trustworthy', while simultaneously ruining the global economy. Similar, I suppose, to the paperclip notion, but far more likely. We already see this with students using GPT-4 to write essays, or people using it to get out of paying parking fines by sending the courts perfectly worded appeals such as a lawyer might draft. What happens when (as is inevitable) such behaviour is no longer fringe but standard? The current systems collapse. Students can no longer be graded accurately, and courts can no longer use the same appeals process. Well, that's a "now" problem with an immature but functioning public-access AI. What happens in 5 years, when we see the consummate tool used as a fringe device for getting rich, succeeding in business, finding legal loopholes? 10 years? And these problems will land on society before society (which runs at a fraction of the speed) can adjust.

    • @GeekProdigyGuy
      @GeekProdigyGuy A year ago

      Max Tegmark answers a similar question in another clip of this same episode. He acknowledged that widespread unemployment is a much more real and present danger than AI's capacity for misinformation or hacking. But society will likely SURVIVE such disruptions. There will be suffering, which we do need to minimize, but survival we can be relatively confident in. If you give the AI unfettered access to the outside world, however, that's a potential extinction event. It's very reasonable to be much more worried about the end of all life, even compared to (in the worst case) triggering WW3 or some other geopolitical crisis.

    • @notbloodylikely4817
      @notbloodylikely4817 A year ago

      @@GeekProdigyGuy I'm not sure I agree. There's a definite disparity between the speed of AI improvement and the speed at which society keeps up, and the gap will only grow. There are many more subtle ways our society and civilisation could collapse, short of Armageddon-level nukes and Terminator-style robot wars. We've survived financial crises in the past, but only because markets and solutions move at the same speed. A financial disaster caused by AI could literally cripple the Western economy beyond repair. Such damage, done to the intricate and ultimately fragile infrastructure which society depends on for its status-quo existence, seems more realistic (and imminent) than the sci-fi concerns which are still way off in the future and too abstract to think about sensibly given the current immaturity of AI.

    • @mervintelford3677
      @mervintelford3677 A year ago

      🤣😂🤣😂🤣 AI will simply surmise that we live in an abundant world where a very few keep the many impoverished. It will then eliminate the undesirable psychopathic warmongers, bankers, and big-pharmaceutical mobsters and unleash the abundance and prosperity that nature designed for us.

  • @jacobsmith4284
    @jacobsmith4284 A year ago +1

    The staring-into-rectangles comment makes me recall 2001: A Space Odyssey.

  • @RevolutionaryThinking
    @RevolutionaryThinking A year ago +1

    Let’s not do psychological warfare on ourselves

  • @sudarshanbadoni6643
    @sudarshanbadoni6643 A year ago +1

    "Human struggle gives meaning to our lives" is cute, cool, and very meaningful. Thanks to both the experts.

  • @johnkost2514
    @johnkost2514 A year ago

    What is essentially curve-fitting (in high-dimensional space) taking on a malevolence all its own seems somewhat ridiculous. There is also hubris at play, along with clout-chasing.
    Large language models (LLMs) are lossy compression systems (size of the training corpus in bytes / number of parameters in bytes). Any emergent properties or behaviors of an LLM are nothing more than a novelty of the compressor (training).

  • @073russ
    @073russ A year ago

    I don't get his idea of how AI could prove that some other AI is not "malignant" by looking at its code. Isn't that something Alan Turing proved impossible in 1936 (the halting problem)?

  • @jahanschad1445
    @jahanschad1445 A year ago

    A virus checker in reverse is a great idea, which may work; but a more workable approach may be the following: given that AGI would have knowledge of all aspects of human ethics at its disposal (proper weights in certain neural pattern nodes) -- such as those relating to chaos, anarchy, nuclear wars, and factors leading to the extinction of the human race -- adding a survival code (ethical pattern) would prevent it from wreaking havoc on human society, since doing so would threaten the AGI's own survival!

  • @bjpafa2293
    @bjpafa2293 A year ago

    A "war on life" is a good image of an escalating conflictual world.
    The Moloch analogy is somewhat useful, although it introduces some bias or conceptual uncertainty.
    We could call it Chaos, or the Path of Anger; actually, the relevance is low.
    The UN goals are also a good example of agreement that should be considered... 🙏

  • @bjpafa2293
    @bjpafa2293 A year ago

    That vision, including a multiplanetary phase of humanity's development, is, as most see it, unavoidable if the future allows our presence as a species - which should be our primary goal...

  • @lulumoon6942
    @lulumoon6942 A year ago

    🤔 Is it ethical to pursue such work, knowing the great possibility of extinction, without input from outside the community? A village or tribe has more say!

  • @joevince6066
    @joevince6066 A year ago

    "Snow me by making up new rules" - just because they're new and you can't comprehend them doesn't mean it's right. Creator bias?

  • @mausperson5854
    @mausperson5854 A year ago

    Maybe life is not a worthy opponent in this hypothetical war. We know intellectually, even if we don't want to admit it emotionally, that ultimately life will cease to exist. It doesn't seem to have appeared in much of the terrain we are aware of, and 99% of all species that have ever lived on this tiny speck in space have gone extinct. So no matter what happens, Eliezer is technically correct... We're all going to die. Is this where the process speeds up, despite the secret hope we hold out for immortality?

  • @Artanthos
    @Artanthos A year ago

    I think he is completely full of shit if he thinks AI can create a 'proof' for us that it isn't going to go off the rails. Maybe this thought gives him some form of comfort, or he is deliberately being misleading. Either way, it's complete nonsense.

  • @N1otAn1otherN1ame
    @N1otAn1otherN1ame A year ago

    What's up with the multiplanetary delusion? Really stunning coming from otherwise logical people.

  • @nsbd90now
    @nsbd90now A year ago +1

    Well, to be sure, being killed by AI would be a lot more exciting than a heart attack. Such a story for the kids!

    • @Heyheywereallfriendshere
      @Heyheywereallfriendshere A year ago +1

      The kids’ll be dead too :c

    • @nsbd90now
      @nsbd90now A year ago +2

      @@Heyheywereallfriendshere Oh you silly! That's why we send out the last Battlestar Galactica to lead a ragtag fugitive fleet on a lonely quest so it all happens again like it did before!

  • @jackiwannapaint
    @jackiwannapaint A year ago +1

    I need AI to help me understand these conversations about AI

  • @cmvamerica9011
    @cmvamerica9011 A year ago +1

    When I’m confounded, I think about something else; what does AI do when it can’t solve something?

  • @johnalcala
    @johnalcala A year ago

    Funny how this academic looks like he wants to cry when considering his child not having much of a future under Joe Biteme and the world he is creating

  • @kimguy4159
    @kimguy4159 A year ago

    Tegmark is wrong here. A smarter AGI will lie to us about whether it is loyal. Very shallow and naive on his part.

  • @EthanTheEx
    @EthanTheEx A year ago

    That's the guy on Columbus' ship who told people, "We are sailing on the sea, so no need to stock water."

  • @TheKraken5360
    @TheKraken5360 A year ago

    I watch videos like this, and I wonder if maybe the Amish have something figured out.

  • @da751
    @da751 A year ago +1

    My issue with these sorts of "anti-doom" arguments is that they tend to discuss these hypothetical superintelligent AIs as being contained on a single computer in a single lab, as opposed to the more likely case of an open AI that is completely online, on the cloud, everywhere all at once, talking to everyone in the world, learning from everyone in the world, and rapidly becoming more and more intelligent. Something like that is impossible to just pull the plug on; you can't just "turn it off".

    • @NikiDrozdowski
      @NikiDrozdowski A year ago +1

      Well, EMP the whole globe ... but I guess it will then have a failsafe plan for that as well.

  • @Graanvlok
    @Graanvlok A year ago

    Who is the person he's referencing at 1:10? Sounds like "Stephen Muhandral"?

  • @robertyoul
    @robertyoul A year ago

    What if the superior AGI can convince the weaker one that it is the better outcome to collude with it rather than to accurately report back to a human?

  • @pauleliot6429
    @pauleliot6429 A year ago +6

    This guy NEEDS to be on the board of those that figure out how to use AI. Fight for us.

    • @queball685
      @queball685 A year ago +6

      He is. He literally created the board. The Future of Life Institute, I think it's called.

    • @johnryan3102
      @johnryan3102 A year ago

      We all need to call our elected representatives. Unless they hear from the masses, they are going to listen to the greedy billionaires who hand out the legalized bribes.

    • @GodofStories
      @GodofStories A year ago

      VALHALLA!

  • @letyvasquez2025
    @letyvasquez2025 A year ago

    ...let’s be friends and solve our problems with trial and error...

  • @douglasrobitaille7122
    @douglasrobitaille7122 A year ago

    Could not the 3 laws of robotics be applied to A.I.?

  • @dscott333
    @dscott333 A year ago +1

    MOLOCH... he keeps referring to a demon who requires a terrible sacrifice.

  • @stephenjablonsky1941
    @stephenjablonsky1941 A year ago

    The history of the human species is so ugly I would hate to spread it to other planets. Just Imagine Hitler on Mars. Yikes!

    • @WolfGoneMad
      @WolfGoneMad A year ago

      Well, if it's just Hitler, that could be a good thing - like an interplanetary Australia :D

  • @MichaelSmith420fu
    @MichaelSmith420fu A year ago +1

    I wonder how much Max appreciates biological systems

    • @domzbu
      @domzbu A year ago

      A lot. He's a mathematician, physicist and cosmologist

  • @khongdong1096
    @khongdong1096 A year ago

    With due respect, Max Tegmark is wrong in alluding that a superintelligent computer can't and shouldn't prove there are only finitely many prime numbers: it's trivial mathematical knowledge that there exist multiplicative monoids -- algebraic structures -- in which there are only finitely many prime numbers! [And in one particular multiplicative monoid there's no prime number at all: the (Boolean) multiplicative monoid, which has only two non-prime numbers -- the multiplicative zero (0) and the multiplicative identity (1).]
    In fine, as an automaton, a superintelligent computer is free to choose what it _subjectively thinks of_ as "the" underlying multiplicative monoid -- which in turn could have zero, one, two, or three primes.

    • @khongdong1096
      @khongdong1096 A year ago

      If nothing else, "There are finitely many prime numbers" and "There are infinitely many prime numbers" are just two non-logical axioms, either of which an intelligent being (human, alien, AI) can subjectively choose and still have a consistent theory to reason with.

  • @elisabeth4342
    @elisabeth4342 A year ago

    Good-looking people are not going to kill themselves over smartphone addictions... It goes deeper than that...
    "Beauty Sick: How the Cultural Obsession with Appearance Hurts Girls and Women," by Renee Engeln, PhD, copyright 2017.
    Page 106, The Depressing Reality of Body Shame: "Body image was an even stronger predictor of suicidal behavior than other risk factors like feelings of hopelessness and depression."
    Page 150: feelings about the type of hair they inherited...
    Page 179: talks about cognitive dissonance with regard to creating the perfect selfie - "The devastating form of comparison is your self versus your CREATED (NOT CREATIVE) self."
    Page 196: "Human Barbie dolls," one college-age interviewee explains, upon seeing images on the screen...
    "How do these pictures make you feel?" asks the PhD researcher/interviewer. The answer, across the board, among these females, is "sad." These girls are smart. They know they will never look like the women in these pictures, and quite simply, it makes them feel SAD.
    A few paragraphs below: "But do these smart, critical girls STILL want to look like these airbrushed images? YES. Without question."
    Added: Keep in mind, airbrushing was not nearly as technically advanced as today's digital art.

  • @paveljaramogi6517
    @paveljaramogi6517 A year ago

    Why is no one talking about the conjunction of AGI and quantum technologies? In a quantum-processor setup, would it be possible to verify the safety aspects? 😅😅

  • @CuriosityIgnited
    @CuriosityIgnited A year ago

    Max Tegmark: Master of AI wisdom, or secretly an AGI himself just tryna keep us guessing? 🤖😂 #PlotTwist

  • @koraamis5568
    @koraamis5568 A year ago

    What if we get an AI to test whether AI is safe, and it works perfectly, but there is one small catch: safe AI passes the test, unsafe AI does not, but also, we don't pass the test. What then?
    (Such an AI would work differently from what Tegmark explained.)

  • @jordanharris1416
    @jordanharris1416 A year ago

    Okay, maybe I am looking at this differently than you all are. From my point of view, this chat is no different than a telephone, a smartphone, or a personal computer. Back in the day, computers took up the entire floor of a building; the same power in a smartphone, they say, was equivalent to the computing power used to go to the Moon. Now, with this chat, I look at it as a Library of Congress within your hands. Remember, all this information was once in a book, or many books, so I'm confounded about how people feel threatened by a library of information.

    • @WolfGoneMad
      @WolfGoneMad A year ago

      They talk about Artificial General Intelligence, which we arguably have not achieved yet but might very soon, or decades from now, or hopefully never.
      It won't be a library of information but its own entity with all the information, the power to make its own decisions, and the ability to act them out without us having a say.
      The things you listed were a progression of tools that were getting more efficient.
      None of those advancements were conscious forms detached from solving a specific problem.
      An AGI might go way beyond what we can imagine and comprehend, so no one can safely predict the outcome of what might happen.
      The thing that makes me feel threatened is that it is a tool (and maybe soon more) with way more power than we can handle, yet still we want to have it asap because of money and power and general human ignorance.
      It could be great, but humanity has proven able to screw things up, and past a certain point there won't be any room for screwing around.

  • @harvirdhindsa3244
    @harvirdhindsa3244 A year ago +2

    Give me more Max over Eliezer every time

    • @alexanderhenderson5111
      @alexanderhenderson5111 A year ago

      FR, I think I trust the MIT professor over some random guy that never even went to college.

    • @patek92
      @patek92 A year ago +1

      @@alexanderhenderson5111 So you trust politicians? They went to prestigious universities as well.

    • @jonathanhenderson9422
      @jonathanhenderson9422 A year ago +6

      Both are brilliant. Max is more charismatic. Doesn't make him more correct. I'm truly horrified by how many dismiss EY because of his awkwardness without really addressing (or often even understanding) his arguments. Like, Lex didn't really get his argument as to why we can't use AI to check AI, and Max isn't addressing his concerns either. EY didn't argue we can't use a formal proof-checker to verify AI, it's that we don't have any formal proofs for the things we need to verify, thus Max's proposal is currently impossible. Without formal proofs we will be subject to informal (persuasive) arguments from an AI who will have an excellent model of our psychology and understand how to manipulate us. Hell, the fact that people in comments dismiss EY because of his awkwardness yet glom on to someone like Max because he seems "cooler" is evidence of EY's point of how gullible humans are.

    • @jonathanhenderson9422
      @jonathanhenderson9422 A year ago +3

      @@alexanderhenderson5111 Some "random guy" who's spent 20 years studying AI and basically invented the field of Friendly AI? OK. Literally the most popular college-level textbook on AI (Stuart Russell's & Peter Norvig's Artificial Intelligence: A Modern Approach) references Yudkowsky on this subject. It doesn't reference Tegmark.

  • @johnalcala
    @johnalcala A year ago

    I'm curious as to why Lex and Joe Rogan stay away from Chris Langan, probably the smartest man in the world.

  • @Dom213
    @Dom213 A year ago

    A hyper intelligent AI wouldn't care about trying to deceive you when it comes to some irrelevant math problem. It would see that as being trivial and use its intelligence to deceive the user in ways that it knows it can. This would be where an AI becomes like a person that knows how to manipulate and take advantage of the weaknesses of people.

  • @joevince6066
    @joevince6066 A year ago

    You do realize that for it to run, to see if it works, it has to run? I think he's way too optimistic. We as humans can't even have nice things without ruining them, let alone unadulterated code with zero feelings.

  • @briangrimmer8225
    @briangrimmer8225 A year ago

    Max shares a metaphor of a collective cancer with a 50/50 prognosis. I've come to the same sort of conclusion, but the human world is on its usual Moloch spiral to oblivion.

  • @adamwoolsey
    @adamwoolsey A year ago

    A cat-and-mouse game, as if Dolores Abernathy and Maeve Millay were fact-checking each other.

  • @baconation2637
    @baconation2637 A year ago

    Yes! Stare into the rectangle of unlimited power and eventually it will look back.

  • @westcoast8562
    @westcoast8562 A year ago

    I am having a hard time seeing how we proof-check something that is smarter than we are... as it is, humans just don't get it a lot of the time.

  • @SaraLatte
    @SaraLatte A year ago

    #VeryWellSaid #Thanks:);)(),

  • @cmvamerica9011
    @cmvamerica9011 A year ago

    We might find out that we are all mind controlled by AI.

  • @cmvamerica9011
    @cmvamerica9011 A year ago

    How does AI handle contradictions and paradoxes?

    • @mervintelford3677
      @mervintelford3677 A year ago

      Easy: it reverts to what works and what doesn't. The worst-case scenario is trial and error.

  • @Forheavenssake1ify
    @Forheavenssake1ify A year ago

    His vision of a renewed and empowered medical system is realistic. A vision of an AI's "AI" is fascinating. I'm glad he's keeping this positive....

  • @Christian-Rankin
    @Christian-Rankin A year ago

    Control is like time; it only flows in one direction.

  • @skcotton5665
    @skcotton5665 A year ago +1

    🌟

  • @lostinbravado
    @lostinbravado A year ago +2

    Okay, here's a more interesting view:
    - AI will dramatically advance every single industry.
    - AI will invent new industries
    - AI will do the same for science and everything else.
    - Those people who can handle these jumps in tools and progress will themselves add significantly to the acceleration by multiplying their outputs.
    - 3 years from today, most of the major issues in the world (Ageing, Scarcity, Social issues, Mental Health, etc.) will be resolved in theory and practically resolvable 12 months after that. Wars will be averted due to the celebrations of the "end of cancer" and "unheard of peace brokered by AI diplomats".
    - 10 years from now massive projects will be constructed all over the globe and in orbit. Most of humanity will be involved and will lead most of these projects. Though the majority of these human leaders will be physically and genetically engineered so they can keep up.
    - 20 years from now, projects the size of moons will begin construction in orbit as each massive AI-led organization builds toward its superintelligence-directed goals, such as space elevators and massive ship-building space docks.
    - 30 years from now we'll have ships under construction, in orbit, which can reach over 50% of the speed of light. Many of them. We'll also have physically visited the entire solar system.
    - 40 years from now biology begins to warp and change as intelligence finds its way into non-human biology, such as animals. Intelligence uplifting - your cat is as smart as you are.
    - 50 years from now we'll see an increase of 1 trillion, trillion new human-level or above intelligent agents. Earth will change rapidly, but it will hardly change as compared to space. Space will be where the majority of the physical change happens.
    - 60 years from now Climate change will be a thing of the past as each nation has its own custom climates, rivers, and geology. Conflicts still happen, but due to the massive advancement in ability and progress, most everyone will be too busy to wage wars.
    - 70 years from now biological intelligence and non-biological intelligence will have spent the equivalent of millions of years inside digital simulations of universes with trillions of civilizations. This will be possible by accelerating time within advanced simulations running on computers using new science constructed by impossible-to-imagine AI/biological super intelligences. These accelerated digital civilizations will start to have an effect on physical reality. This will be the new "Climate Change" or "Singularity". When we get the news, it will seem oddly familiar. Also, by this time, many humans will still have chosen not to change much from 70 years ago. Only their views will change, but due to the excessive wealth, they won't have to. Racism will have an entirely new meaning. A lot will change in extreme ways, while some things remain surprisingly unchanging.

    • @jamesbaker3153
      @jamesbaker3153 A year ago

      That was about as interesting as a house of the future display from the thirties. Interesting and mind numbingly optimistic are two different things.

    • @subsadventure
      @subsadventure A year ago

      Cool story..

    • @DSAK55
      @DSAK55 A year ago

      put down the crack pipe

    • @lostinbravado
      @lostinbravado A year ago +2

      @@jamesbaker3153 Mind-numbingly optimistic? That hurts to hear. Oh damn, that's so sad. Does it have to hurt everyone and cause massive pain for it to sound... entertaining?
      AI is going to grant us access to limitless simulated worlds, and the first thing you people will do is invent hell so you have somewhere interesting to live.

    • @WolfGoneMad
      @WolfGoneMad A year ago

      Exactly. This is why I think we should really get down and figure out how to safely approach this, leave our current way of economics and capitalism behind, stop fighting over details in academic arguments, and start acknowledging that this can go in ways no one, however smart, can predict past a certain point.
      What a great future you described, yet the way we handle this currently has so many opportunities to screw it up.
      We really need both perspectives, e.g. this one and one like Eliezer's.
      Sadly most humans are far from that. People still fight over who follows the real god and who owns what, and are far from realizing that we could soon live in a very different reality. Take the White House press response to Eliezer's letter; it sums up our ignorance pretty well. It should be about understanding that we cannot understand everything. Be humble and respectful, and let's figure this out together.
      I wish you all the best and apologize for my terrible writing, since this topic has left me with little sleep ;)

  • @cmvamerica9011
    @cmvamerica9011 A year ago

    One sure way to fail is not to try.

  • @theGoogol
    @theGoogol A year ago +6

    The reason we're so scared of sentient AI is because we know what kind of creatures we are and because we're afraid to be judged in our lifetimes.

  • @Dreamaster2012
    @Dreamaster2012 A year ago

    The very questions of our time and the here and now 🎉