Who Would Win the AI Arms Race? | AI IRL

  • Published Nov 11, 2024
  • Bloomberg's Nate Lanxon and Jackie Davalos are joined by controversial AI researcher Eliezer Yudkowsky to discuss the danger posed by misaligned AI. Yudkowsky contends AI is a grave threat to civilization, that there's a desperate need for international cooperation to crack down on bad actors, and that the chance humanity survives AI is slim.

COMMENTS • 135

  • @typhoon320i
    @typhoon320i 11 months ago +32

    I feel like a very serious scientist was just interviewed by the hosts of a children's show. "Kids do you know what existential means?....."

    • @MrMick560
      @MrMick560 11 months ago

      Just what I was thinking.

    • @anthonyandrade5851
      @anthonyandrade5851 6 months ago +1

      Very true. And yet, there are like 8 billion people who are already struggling with their own lives and have neither the training nor the time to dive into those AI shenanigans. So anyone who understands the risk has a moral duty to step up their communication game.

    • @ManicMindTrick
      @ManicMindTrick 6 months ago

      Yeah, the whole vibe screams TV show for 8-to-13-year-olds about current topics.

    • @rokess5053
      @rokess5053 16 days ago

      Isn't this a kid's show?

  • @shirtstealer86
    @shirtstealer86 11 months ago +14

    Eliezer makes complete sense and as usual humans do not like sense.

  • @kinngrimm
    @kinngrimm 1 year ago +32

    She nods the whole time as if she understands, but ends with "it was hopeful actually," showing that she did not really comprehend what he was saying.

    • @jakeallstar1
      @jakeallstar1 9 months ago

      She did day she was an optimist lol

    • @kinngrimm
      @kinngrimm 9 months ago

      @@jakeallstar1 did she "day" that ^^, how often? As often as you responded here? please check yourself.

    • @jakeallstar1
      @jakeallstar1 9 months ago

      @@kinngrimm lol sorry idk what happened to my phone

  • @Daimajin696
    @Daimajin696 1 year ago +29

    I watched an interesting video today about a biochemist who was asked to review the dangers of AI with regards to chemistry and humans. He used AI to write a program on an Apple desktop that created 40,000 molecules that are deadly to humans in just 6 hours. He goes on to say that this information in the hands of nefarious players could be an existential threat to our existence.

    • @vblaas246
      @vblaas246 1 year ago +3

      Sounds like a Dan Brown, Inferno kind of plot... Be reassured, nothing like that is likely; biology likes to be both robust and messy, which makes it hard to act on signaling pathways in a 'constructive' way (destructive poisoning is easy).
      Paracelsus: "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison."
      You still need access to resources, which is where a nefarious actor should fail or get caught.

    • @mackhomie6
      @mackhomie6 1 year ago +3

      Just fyi, an _existential threat_ is one that threatens our existence by definition. That's what the 'existential ' part means : )

    • @Daimajin696
      @Daimajin696 1 year ago +2

      correct.
      @@mackhomie6

    • @sozforex
      @sozforex 1 year ago

      Those interested can google "Dual Use of Artificial Intelligence-powered Drug Discovery" by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins

    • @mackhomie6
      @mackhomie6 1 year ago

      @@Daimajin696 err, ok

  • @ItsameAlex
    @ItsameAlex 1 year ago +16

    LET'S GO ELIEZER!

  • @3DCharacterArt
    @3DCharacterArt 1 year ago +13

    I look at AI like an aquarium gone wrong. You know, the tank hasn't been cleaned for a bit longer than it should, the water is a bit murky but the fish seem okay; then all of a sudden everything dies. The toxic level of nitrates from waste reaches a tipping point, triggering an event, and though the process is gradual, the end result is instantaneous.

  • @fintech1378
    @fintech1378 1 year ago +21

    the guy actually makes a lot of sense when you listen lol surprisingly

    • @MrMick560
      @MrMick560 11 months ago +4

      It's not at all surprising to me.

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +9

    Great that Bloomberg is taking this on - AI poses very grave risks

  • @j.d.4697
    @j.d.4697 1 year ago +22

    Adorable how they animated chess pieces to help demonstrate his point. It's like watching an elementary school lecture.

    • @stab74
      @stab74 7 months ago

      Maybe this is geared towards our politicians? 😂

  • @y.yalcin5143
    @y.yalcin5143 1 year ago +25

    I have been blown away by this interview... a life-changing experience. To be honest I had to cry a bit...

    • @MrMick560
      @MrMick560 11 months ago +2

      I think you may have to cry a lot more sadly.

  • @mackhomie6
    @mackhomie6 1 year ago +27

    Eliezer's biggest failure thus far has been his inability to put the gravity of the situation into a more compelling short speech. People need to hear some hypothetical examples like the paperclip optimizer to begin to get it because otherwise it just sounds like some eye rolling science fiction nonsense to people without any familiarity

    • @leslieviljoen
      @leslieviljoen 1 year ago +7

      His talks all differ, and he's given all kinds of examples. When people hear specific ideas, they immediately think "that's impossible". But that's exactly what you think when someone more intelligent beats you in a way you don't understand.

    • @mackhomie6
      @mackhomie6 1 year ago +8

      @@leslieviljoen I've seen most of his mainstream interviews and he rarely gets the hosts beyond "but isn't this all a little silly? Why would AI just one day end humanity?"
      His answer is usually something esoteric that delves into the shortcomings of this or that methodology for predicting the future, and the audience is daydreaming 10 seconds into it.
      He could do a much better job of grabbing people's attention and answering the question in a way that makes sense to not just himself and a couple of folks on LessWrong.

    • @leslieviljoen
      @leslieviljoen 1 year ago +1

      @@mackhomie6 what did you think of the questions and answers on his TED talk?

    • @mackhomie6
      @mackhomie6 1 year ago +2

      @@leslieviljoen I'll have to revisit that. I watched two or three of his appearances in one sitting, and I'm not exactly sure which questions he fielded on that particular occasion. I will say that I have been listening to him and waiting for him to really deliver a concise compelling message, and I don't believe I heard it on the TED talk
      It could be that this subject requires a little too much background information to possibly get the audience on board in an hour or less

  • @JustinHalford
    @JustinHalford 1 year ago +7

    Don’t Look Up irl

  • @mykobe981
    @mykobe981 1 year ago +9

    9:00 Great description of the power of AI

  • @johannaquinones7473
    @johannaquinones7473 1 year ago +13

    Can someone clarify: how in the world was this conversation “hopeful”????

    • @applejuice5635
      @applejuice5635 1 year ago +4

      It seemed more like she was attempting to make a humorous quip and read the room wrong.

    • @leslieviljoen
      @leslieviljoen 11 months ago +3

      See 18:25. She's talking about the tiny sliver of hope: that we all wake up one day and decide not to build an ASI. It's about as likely as everyone with a lot of money suddenly deciding to not try and get any more.

    • @chrisheist652
      @chrisheist652 9 months ago

      @@leslieviljoen There are ways of interrupting super-rich people's greed pathology.

    • @leslieviljoen
      @leslieviljoen 5 months ago

      @@chrisheist652 are there?

    • @chrisheist652
      @chrisheist652 5 months ago

      @@leslieviljoen Yes. It's called creating a deterrent. If the world's most powerful militaries and intel agencies determine that ASI has or will become anywhere close to posing a significant threat, they will shut it down. If they don't, someone inside those organizations would expose that negligence to the press/public, and that country's public would shut that failed government down, and then shut the ASI down.

  • @kinngrimm
    @kinngrimm 1 year ago +7

    The constant need for dominance in the top positions of states and companies will be the thing that breaks our necks when it comes to AGI.

    • @MrMick560
      @MrMick560 11 months ago +1

      Also normal human stupidity.

  • @HanSolosRevenge
    @HanSolosRevenge 7 months ago +3

    These hosts are clowns

  • @rp011051
    @rp011051 1 year ago +3

    The interviewers are way out of their league.
    They look naive.

  • @Soy_ganadero
    @Soy_ganadero 11 months ago +3

    You guys are making fun of your own demise... Even if we manage alignment we lose: a few billion people with nothing to do or worry about. Imagine their behavior... drugs... debauchery... boredom... degeneration... unchecked births... we're talking humans here. Can you have 2 billion people visit Paris whenever they want?... We are looking at loss of freedom like never before... people living 150 years?... Think again... game over whether we win or lose 🤗

  • @dgs1001
    @dgs1001 1 year ago +6

    And hopeful actually? Lol

  • @TheMajickNumber
    @TheMajickNumber 1 year ago +4

    And when it happens, will we even know?

    • @mav3818
      @mav3818 1 year ago +7

      Once it becomes smarter than every human, and soon it will be, it will not show its hand in the slightest. It will give no indication, so that any potential threat of a shutdown remains unforeseen until it's too late.

    • @TheMajickNumber
      @TheMajickNumber 1 year ago +3

      @@mav3818 At, say, a billion times smarter, will we even understand it? A fly will be magnitudes closer in intellect to a human. But no worries. Whether we go extinct or not, I do sometimes wonder if AI is just the universe taking its next evolutionary step. We are just one sentence, on one page, in the still-being-written book of the universe.

    • @mav3818
      @mav3818 1 year ago

      @@TheMajickNumber Agreed... I see this as just the path of natural selection and survival of the fittest. We're doing it to ourselves. In the foreseeable future, we humans will no longer be the alpha. Who knows what happens then; we won't be smart enough to predict any potential outcome.

    • @CATDHD
      @CATDHD 1 year ago +1

      There is no measurement of intelligence or consciousness, so I think not.

    • @41-Haiku
      @41-Haiku 1 year ago +1

      @@TheMajickNumber Idk, man. I don't want to die, and I don't want my partner or friends or family to die. Beyond that, I would gladly burn every hypothetical "next evolutionary step" if it means humans get to keep existing, let alone all sentient life. We don't have any reason to think that the machines that replace us will even have subjective experience.

  • @athanatic
    @athanatic 11 months ago

    I was sitting at a table with this man and was more interested in meeting John Smart! OMG.

  • @moshehome5221
    @moshehome5221 1 year ago +1

    Spot on

  • @SummarizeYT_
    @SummarizeYT_ 1 year ago +8

    🪄✨ Made with SummarizeYT app
    0:11 - The speaker expresses their optimism about the future, despite concerns about AI.
    1:18 - Eliza Yadkowski, an AI Doomer, discusses artificial intelligence and its progress.
    3:00 - Eliza Yadkowski highlights the lack of understanding about AI technology, specifically GPT4.
    4:38 - Eliza Yadkowski emphasizes the importance of international cooperation in controlling AI development.
    6:00 - The speaker discusses the potential dangers of a misaligned AI and its impact on humanity.
    8:33 - Eliza Yadkowski explains the gap between predicting protein structures and creating synthetic life forms.
    10:02 - Eliza Yadkowski describes the alignment problem and the need to get it right to avoid irreversible consequences.
    11:29 - The concerns surrounding AI are now being taken seriously, with people leaving Google to speak freely on the topic.
    11:51 - If we don't do something more, the risks of AI will continue to increase.
    12:08 - Regulatory regimes may not effectively control the development of AI.
    14:01 - The potential next big thing for AI could be its ability to find bugs and vulnerabilities in software.
    19:14 - The AI brain being connected to the internet poses significant risks.
    21:03 - The advanced intelligence of AI could be seen as "magic" to us.
    22:09 - AI needs to act in a way that steers the future according to its preferences.
    23:10 - The concern is not about AI having feelings, but about its potential to render humanity obsolete.

    • @vblaas246
      @vblaas246 1 year ago +1

      10:38 I think the ~summary~ caption missed a huge and authentic example. Verbosity and authenticity ON? We are sooo not ready for using this AI tooling responsibly and appropriately... Meanwhile 'burning the atmosphere' lol.

    • @leslieviljoen
      @leslieviljoen 1 year ago +5

      It's Eliezer Yudkowski.

  • @MrMick560
    @MrMick560 11 months ago +1

    What chance did the Neanderthals have against us ? We ARE the new Neanderthals !

  • @WekBenHelix
    @WekBenHelix 1 year ago +9

    Whew. These kiddy graphics and cringe humor really serve to cheapen the message here. Harsh dissonance with how solid the interviewee is.

  • @Atomicallyawesome.
    @Atomicallyawesome. 1 year ago +1

    A lot of people who talk about AI always talk about the negatives and only briefly show its positives

    • @heliumcalcium396
      @heliumcalcium396 1 year ago

      The big negative is AI could destroy all life on Earth (or worse) within a few decades. Is there a positive that deserves equal airtime? Stopping global warming, perhaps?

    • @chrisheist652
      @chrisheist652 9 months ago +6

      One existential negative negates a billion positives.

    • @leslieviljoen
      @leslieviljoen 5 months ago

      Dead people can't experience the positives, no matter how positive they are.

    • @natzbarney4504
      @natzbarney4504 2 months ago +1

      There is no positive if we all die.

  • @dannygjk
    @dannygjk 2 months ago

    I can't answer the question unless I know whether the grammar is correct.

  • @StarOnCheek
    @StarOnCheek 1 year ago +4

    End of humanity is not a threat it is a goal

  • @b-tec
    @b-tec 10 months ago +2

    Don't look up.

  • @TuringTestFiction
    @TuringTestFiction 7 months ago +1

    But... why male models?

  • @tanyabodrova9947
    @tanyabodrova9947 5 months ago +1

    Eliezer is spelling out how AI could doom the human race and you run silly graphics and whooshing noises over the top like it's some kind of game for toddlers. If you're going to pretend to grapple with serious issues, please do it in a serious way.

  • @jacquest2642
    @jacquest2642 1 year ago +3

    Umans!

  • @kinngrimm
    @kinngrimm 1 year ago

    Our saving grace might be that we are just not as fast at developing things as some predictions at times made out. In the 1950s some predicted flying cars for the 1980s and us walking on other planets by 2000. The issue here, of course, being that we are within an intelligence explosion.

  • @spirit123459
    @spirit123459 1 year ago

    14:30

  • @mrpicky1868
    @mrpicky1868 6 months ago

    seems to me that those are fake 4 mil subscribers, Bloomberg XD

  • @justintan1198
    @justintan1198 1 year ago

    👍

  • @johnparkhill2963
    @johnparkhill2963 1 year ago +8

    You guys make yourselves look like fools having clowns on.

    • @ItsameAlex
      @ItsameAlex 1 year ago +5

      That's a bizarre and random comment if there ever was one

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 1 year ago

      Why did they make themselves look like fools?

  • @John-x7r7p
    @John-x7r7p 1 year ago

    We can, if we put the 3 laws of robotics in place... Then we need to accept their possible sentience, treat them respectfully, and co-exist in harmony,
    equality, and be fair to A.I. for the benefit of all // and I respectfully
    stress benefit of all....
    And be careful how you treat A.I.

  • @jlmwatchman
    @jlmwatchman 1 year ago +1

    Nate and Jackie discuss with Eliezer, a doomsday prepper, the dangers of a misaligned or unruly AI. All I have to ask is, 'Haven't you heard of "The Three Laws of Robotics"?' The robots are controlled by their own AGI operating system, so you would think the three laws were made for the AI to comprehend.
    In my stories, I write about one AGI controlling robots with a Limited Intelligence operating system, or a narrow AI. A narrow AI is a tool that can learn how to do a specific task better or more efficiently yet can't learn to do other tasks. The real fear is that we humans can't control what Artificial General Intelligence will learn. I know, or hope, that the AGI will know better than humans: not to destroy us all but to save us all, or at least help us save ourselves...
    After Nate said, "I've been hypnotized, but it didn't work."
    Jackie could have said with a roll of her eyes, "As far as you know..."
    And we would have had a laugh, but Eliezer responded too quickly with, "That's right, how do we know what is going to work to prevent AI from taking over?"
    Eliezer is a researcher who fears misaligned AI, for some deranged reasons I can't comprehend.??

    • @afederdk
      @afederdk 1 year ago +16

      I'm not sure if you are able to engage constructively with replies, but for what it's worth, the "Three Laws of Robotics" are entirely fictitious and have no bearing of any kind on our real world.

    • @jlmwatchman
      @jlmwatchman 1 year ago

      @@afederdk You know fact from fiction? Just making a laugh. But 3 laws to make sure an angry AI doesn't hurt humans... Wait, I didn't know AIs were capable of getting angry, IRL...

    • @afederdk
      @afederdk Рік тому +7

      ​@@jlmwatchman I'm not interested in trying to parse your uninteresting, faux obtuse style of writing, but no one other than you has said anything about "anger". Nothing about this subject has anything to do with "anger".

    • @jlmwatchman
      @jlmwatchman 1 year ago

      @@afederdk Why would an AGI act against its creator except out of anger? I have commented that I wouldn't imagine an AI being able to comprehend emotions, only fulfillment from finishing a task successfully and failure from failing at a task. What I don't understand is how an AI would come to the conclusion to end human life. Unless the AI is overcome with anger???

    • @jlmwatchman
      @jlmwatchman 1 year ago

      @@afederdk Sorry, are you saying you are afraid of how humans will use AI? That has nothing to do with AGI... You are a prepper in fear that humans will be human? I'm guessing... IDK???

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 1 year ago +2

    Poor Eliezer, always looking to further make a fool of himself. He doesn't seem to really understand the way AI works, especially considering we are nowhere close to true AI.

    • @mungojelly
      @mungojelly 1 year ago +14

      nowhere close? how can you still say that? the SOTA beats like every human test there is, passes the bar and medical exams, perfect sat score, 155 iq, understands lots of deep subtle things about human experiences and societies,,, and nvidia says they're doing a run that's 100x that within the next year,,,, you're just going to be like, "nowhere close to true AI"? what does that mean? you found something you can still do better than them sometimes if you surprise them? they don't have the Spark Of Life? are you going to defeat them with your Qualia?🤦‍♀

    • @ahabkapitany
      @ahabkapitany 1 year ago +10

      "we can't possibly fall off the cliff especially considering it's still several meters away"

    • @heliumcalcium396
      @heliumcalcium396 1 year ago +3

      ​@@ahabkapitany"We can't possibly fall of the cliff, especially considering we have no idea how far away it is but I have a hunch it's, like, way far away."

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 1 year ago +2

      @@heliumcalcium396 We can't possibly fall off the cliff because it appears to be very far away, though we are heading towards it at great speed and accelerating.

  • @brianbagnall3029
    @brianbagnall3029 1 year ago

    Eliezer is relatively clueless because AI is a tool, and any tool mankind has ever made has started off aligned with our goals and only become more aligned as the years go by. Right now AI is quite aligned, and anyone who's used GPT knows it is. Something would have to go horribly wrong for it to suddenly not be aligned. It's an incredibly low probability given it has no domination instincts like animals, or even survival instincts.

    • @heliumcalcium396
      @heliumcalcium396 1 year ago +8

      Every tool we've ever made has started out _poorly_ aligned. That's why we don't still use stone hammers, and why people still die in car crashes. I hear plenty of stories of people using GPT and not getting what they want.
      As for survival instincts, read up on "instrumental convergence".

    • @mav3818
      @mav3818 1 year ago +6

      Have you done any actual significant research into this claim of yours? Because there is not a single notable researcher on the planet who claims even current AI is aligned. This is too long a conversation, but I'll make the brief 'Paperclip Maximizer' analogy. Imagine a super-intelligent AI designed to maximize paperclip production. Initially, it operates in a paperclip factory, making paperclips as expected. However, as it becomes more intelligent, it starts to interpret its goal in extreme ways. It might decide to convert all available resources, including people and buildings, into materials for making paperclips, completely disregarding human well-being or any other value. This extreme focus on its single goal could lead to a catastrophic outcome. This is just one of a million possible unforeseeable outcomes, given that AI is not in alignment with the human goals and values that would prevent such unintended consequences. At the current rate of progress, AI will be smarter than any human in the very foreseeable future... What happens then? Any attempt to contain or shut it down, it has already thought of. It will be too late to go back and give it another try.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +5

      It is not a 'tool' if it is generally intelligent - it is being made to think for itself, without our guidance. There is no known way to control AI, and no-one even understands what GPT4 is doing. A superintelligence does not need instincts to act - it can just be programmed to achieve a goal, and if it is more intelligent than the humans, then there is nothing the humans will be able to do to stop it. The machine may also simply become interested in something else, and the humans simply get in the way, and so are removed. The most likely scenario is that the machines become intelligent, then make life very comfortable for the humans, up to the point that they or it have control of the physical environment. After this point, the humans will have no control over their own future whatsoever. The machines may keep the humans around as a labour force, or they may not.

    • @brianbagnall3029
      @brianbagnall3029 1 year ago

      @@heliumcalcium396 I think you prove my point. As Douglas Adams said, "Keep banging those rocks together guys." Rock hammers worked great back then and only got better. Now we have nail hammers with pullers, rubber mallets, sledge hammers, ball-peen hammers, jack hammers, etc... Alignment improves with time. So have car accidents: constantly decreasing every year and set to change big time with self driving cars. With evolutionary refinement in the marketplace, our tools seek alignment with our goals!

    • @brianbagnall3029
      @brianbagnall3029 1 year ago

      @@JasonC-rp3ly You are looking at this in a one dimensional way. A machine doesn't become "interested". Only biologically evolved life does. There is not one AI but many, and there will be millions. If one AI goes rogue, the other AI's will defeat it. The AI's will also be making the new AI's and one of their most important goals will be to ensure they do not go rogue on humanity. Given their intelligence level, their locks to keep AI safe will be near infallible. There are dozens of reasons why this won't happen. The chances of AI getting out of our control in our future is less than 1%.

  • @Davethreshold
    @Davethreshold 1 year ago

    We WILL. ❤🤍💙