Who Would Win the AI Arms Race? | AI IRL
- Published Nov 11, 2024
- Bloomberg's Nate Lanxon and Jackie Davalos are joined by controversial AI researcher Eliezer Yudkowsky to discuss the danger posed by misaligned AI. Yudkowsky contends AI is a grave threat to civilization, that there's a desperate need for international cooperation to crack down on bad actors, and that the chance humanity survives AI is slim.
I feel like a very serious scientist was just interviewed by the hosts of a children's show. "Kids do you know what existential means?....."
Just what I was thinking.
Very true. And yet, there are like 8 billion people who are already struggling with their own lives and have neither the training nor the time to dive into those AI shenanigans. So anyone who understands the risk has a moral duty to step up their communication game.
Yeah, the whole vibe screams TV show for 8-13 year olds about current topics.
Isn't this a kid's show?
Eliezer makes complete sense and as usual humans do not like sense.
She nods the whole time as if she understood, but ends with "it was hopeful actually", showing that she did not really comprehend what he was saying.
She did day she was an optimist lol
@@jakeallstar1 did she "day" that ^^, how often? As often as you responded here? please check yourself.
@@kinngrimm lol sorry idk what happened to my phone
I watched an interesting video today about a biochemist who was asked to review the dangers of AI with regards to chemistry and humans. He used AI to write a program on an Apple desktop that created 40,000 molecules that are deadly to humans in just 6 hours. He goes on to say that this information in the hands of nefarious players could be an existential threat to our existence.
Sounds like a Dan Brown, Inferno kind of plot... Be reassured, nothing like that is likely; biology likes to be both robust and messy, which makes it hard to act on signaling pathways in a 'constructive' way (the destructive way, poisoning, is easy).
Paracelsus: "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison."
You still need access to resources, which is where a nefarious actor should fail or get caught.
Just fyi, an _existential threat_ is one that threatens our existence by definition. That's what the 'existential' part means : )
correct.
Those interested can google "Dual Use of Artificial Intelligence-powered Drug Discovery" by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins
@@Daimajin696 err, ok
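For anyone curious how that worked: the trick reported in that paper was essentially flipping the sign of a toxicity penalty in a generative model's scoring function. Here's a minimal, purely numeric sketch of that idea; every name and number below is invented for illustration, and there is deliberately no chemistry in it.

import random

random.seed(0)

def make_candidate():
    # Hypothetical stand-in for a generated molecule: two predicted scores.
    return {"efficacy": random.random(), "toxicity": random.random()}

def best_of(n, toxicity_weight):
    # Keep the candidate maximizing: efficacy - toxicity_weight * toxicity
    pool = [make_candidate() for _ in range(n)]
    return max(pool, key=lambda c: c["efficacy"] - toxicity_weight * c["toxicity"])

safe = best_of(10_000, toxicity_weight=+1.0)   # drug mode: penalize toxicity
dual = best_of(10_000, toxicity_weight=-1.0)   # one sign flip: reward toxicity
print("toxicity when penalized:", round(safe["toxicity"], 2))  # low
print("toxicity when rewarded: ", round(dual["toxicity"], 2))  # high

With the penalty positive, the search avoids toxic candidates; with one sign flipped, the same machinery hunts for them. That asymmetry is the whole dual-use worry.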
LET'S GO ELIEZER!
I look at AI like an aquarium gone wrong. You know, the tank isn't cleaned for a bit longer than it should be; the water is a bit murky but the fish seem OK. Then all of a sudden everything dies: the toxic level of nitrates from waste reaches a tipping point, triggering an event. Though the process is gradual, the end result is instantaneous.
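That analogy is basically a threshold model: a quantity creeps up steadily while the visible state stays fine, then everything flips at once. A toy sketch, with arbitrary made-up numbers:

# Nitrates accumulate gradually; the visible state flips past a tipping point.
nitrates = 0.0
for day in range(1, 31):
    nitrates += 0.5                      # slow, steady accumulation
    state = "fish seem ok" if nitrates < 12.0 else "everything dies"
    if day % 10 == 0 or state == "everything dies":
        print(f"day {day:2d}: nitrates={nitrates:4.1f} -> {state}")
    if state == "everything dies":
        break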
the guy actually makes a lot of sense when you listen lol surprisingly
It's not at all surprising to me.
Great that Bloomberg is taking this on - AI poses very grave risks
Adorable how they animated chess pieces to help demonstrate his point. It's like watching an elementary school lecture.
Maybe this is geared towards our politicians? 😂
I have been blown away by this interview... life-changing experience. To be honest I had to cry a bit...
I think you may have to cry a lot more sadly.
Eliezer's biggest failure thus far has been his inability to put the gravity of the situation into a more compelling short speech. People need to hear some hypothetical examples, like the paperclip optimizer, to begin to get it, because otherwise it just sounds like some eye-rolling science fiction nonsense to people without any familiarity.
His talks all differ, and he's given all kinds of examples. When people hear specific ideas, they immediately think "that's impossible". But that's exactly what you think when someone more intelligent beats you in a way you don't understand.
@@leslieviljoen I've seen most of his mainstream interviews and he rarely gets the hosts beyond "but isn't this all a little silly? Why would AI just one day end humanity?"
His answer is usually something esoteric that delves into the shortcomings of this or that methodology for predicting the future, and the audience is daydreaming 10 seconds into it.
He could do a much better job of grabbing people's attention and answering the question in a way that makes sense to more than just himself and a couple of folks on LessWrong.
@@mackhomie6 what did you think of the questions and answers on his TED talk?
@@leslieviljoen I'll have to revisit that. I watched two or three of his appearances in one sitting, and I'm not exactly sure which questions he fielded on that particular occasion. I will say that I have been listening to him and waiting for him to really deliver a concise compelling message, and I don't believe I heard it on the TED talk
It could be that this subject requires a little too much background information to possibly get the audience on board in an hour or less
Don’t Look Up irl
9:00 Great description of the power of AI
Can someone clarify: how in the world was this conversation “hopeful”????
It seemed more like she was attempting to make a humorous quip and read the room wrong.
See 18:25. She's talking about the tiny sliver of hope: that we all wake up one day and decide not to build an ASI. It's about as likely as everyone with a lot of money suddenly deciding to not try and get any more.
@@leslieviljoen There are ways of interrupting super-rich people's greed pathology.
@@chrisheist652 are there?
@@leslieviljoen Yes. It's called creating a deterrent. If the world's most powerful militaries and intel agencies determine that ASI has or will become anywhere close to posing a significant threat, they will shut it down. If they don't, someone inside those organizations would expose that negligence to the press/public, and that country's public would shut that failed government down, and then shut the ASI down.
The constant need for dominance in the top positions of states or companies will be the thing that breaks our necks when it comes to AGI.
Also normal human stupidity.
These hosts are clowns
Interviewers are way out of their league.
They look naive.
You guys are making fun of your own demise… even if we manage alignment, we lose… a few billion people with nothing to do or worry about… imagine their behavior… drugs… debauchery… boredom… degeneration… unchecked births… we're talking humans here… Can you have 2 billion people visit Paris whenever they want?… we are looking at loss of freedom like never before… people living 150 years?… think again… game over whether we win or lose 🤗
And hopeful actually? Lol
And when it happens, will we even know?
Once it becomes smarter than every human, and soon it will be, it will not show its hand in the slightest. It will give no indication, so that any potential threat of a shutdown remains unforeseeable until it's too late.
@@mav3818 At, say, a billion times smarter, will we even understand it? A fly will be magnitudes closer in intellect to a human. But no worries. Whether we go extinct or not, I do sometimes wonder if AI is just the universe taking its next evolutionary step. We are just one sentence, on one page, in the still-being-written book of the universe.
@@TheMajickNumber Agreed... I see this as just the path of natural selection and survival of the fittest. We're doing it to ourselves. In the foreseeable future, we humans will no longer be the alpha. Who knows what happens then. We won't be smart enough to predict any potential outcome.
there is no measurement of intelligence or consciousness, so I think not.
@@TheMajickNumber Idk, man. I don't want to die, and I don't want my partner or friends or family to die. Beyond that, I would gladly burn every hypothetical "next evolutionary step" if it means humans get to keep existing, let alone all sentient life. We don't even have any reason to think that the machines that replace us will even have subjective experience.
I was sitting at a table with this man and was more interested in meeting John Smart! OMG.
Spot on
🪄✨ Made with SummarizeYT app
0:11 - The speaker expresses their optimism about the future, despite concerns about AI.
1:18 - Eliza Yadkowski, an AI Doomer, discusses artificial intelligence and its progress.
3:00 - Eliza Yadkowski highlights the lack of understanding about AI technology, specifically GPT4.
4:38 - Eliza Yadkowski emphasizes the importance of international cooperation in controlling AI development.
6:00 - The speaker discusses the potential dangers of a misaligned AI and its impact on humanity.
8:33 - Eliza Yadkowski explains the gap between predicting protein structures and creating synthetic life forms.
10:02 - Eliza Yadkowski describes the alignment problem and the need to get it right to avoid irreversible consequences.
11:29 - The concerns surrounding AI are now being taken seriously, with people leaving Google to speak freely on the topic.
11:51 - If we don't do something more, the risks of AI will continue to increase.
12:08 - Regulatory regimes may not effectively control the development of AI.
14:01 - The potential next big thing for AI could be its ability to find bugs and vulnerabilities in software.
19:14 - The AI brain being connected to the internet poses significant risks.
21:03 - The advanced intelligence of AI could be seen as "magic" to us.
22:09 - AI needs to act in a way that steers the future according to its preferences.
23:10 - The concern is not about AI having feelings, but about its potential to render humanity obsolete.
10:38 I think the ~summary~ caption missed a huge and authentic example. Verbosity and authenticity ON? We are sooo not ready for using this AI tooling responsibly and appropriately... Meanwhile 'burning the atmosphere' lol.
It's Eliezer Yudkowsky.
What chance did the Neanderthals have against us? We ARE the new Neanderthals!
Whew. These kiddy graphics and cringe humor really serve to cheapen the message here. Harsh dissonance with how solid the interviewee is.
A lot of people who talk about AI always talk about the negatives and only briefly show its positives
The big negative is AI could destroy all life on Earth (or worse) within a few decades. Is there a positive that deserves equal airtime? Stopping global warming, perhaps?
One existential negative negates a billion positives.
Dead people can't experience the positives, no matter how positive they are.
There is no positive if we all die.
I can't answer the question unless I know whether the grammar is correct.
End of humanity is not a threat, it is a goal.
Don't look up.
But... why male models?
Eliezer is spelling out how AI could doom the human race and you run silly graphics and whooshing noises over the top like it's some kind of game for toddlers. If you're going to pretend to grapple with serious issues, please do it in a serious way.
Umans!
Our saving grace might be that we are just not as fast at developing things as some predictions made out. In the 1950s, some predicted flying cars for the 1980s and us walking on other planets by 2000. The issue here, of course, being that we are within an intelligence explosion.
14:30
Seems to me those 4 million subscribers are fake, Bloomberg XD
👍
You guys make yourselves look like fools having clowns on.
That's a bizarre and random comment if there ever was one
Why did they make themselves look like fools?
We can, if we put the 3 laws of robotics in place...,.. then we need to accept their possible sentience , and treat them respectfully, and co-exist in harmony ,
Equality and be fair to A.I. for the benefit of all// and I respectfully
Stress the benefit of all....
And be careful how you treat A.I.
And build it with an off switch, but don't tell it.
It would terminate you for your punctuation alone.
Nate and Jackie discuss with Eliezer, a Doomsday Prepper, the dangers of a misaligned or unruly AI. All I have to ask is, "Haven't you heard of 'The Three Laws of Robotics'?" The robots are controlled by their own AGI operating system, so you would think the three laws were made for the AI to comprehend.
In my stories, I write about one AGI controlling robots with a Limited Intelligence operating system, or a narrow AI. A narrow AI is a tool that can learn to do a specific task better or more efficiently yet can't learn to do other tasks. The real fear is that we humans can't control what an Artificial General Intelligence will learn. I know, or hope, that the AGI will know better than humans: not to destroy us all but to save us all, or at least help us save ourselves…
After Nate said, “I’ve been hypnotized, but it didn’t work.”
Jackie could have said with a roll of her eyes, “As far as you know…”
And we would have had a laugh, but Eliezer responded too quickly with, "That's right, how do we know what is going to work to prevent AI from taking over?"
Eliezer is a researcher who fears misaligned AI, for some deranged reasons I can't comprehend...
I'm not sure if you are able to engage constructively with replies, but for what it's worth, the "Three Laws of Robotics" are entirely fictitious and have no bearing on our real world of any kind.
@@afederdk You know fact from fiction? Just having a laugh. But, 3 laws to make sure an angry AI doesn't hurt humans... Wait, I didn't know AIs were capable of getting angry, IRL...
@@jlmwatchman I'm not interested in trying to parse your uninteresting, faux obtuse style of writing, but no one other than you has said anything about "anger". Nothing about this subject has anything to do with "anger".
@@afederdk Why would an AGI act against its creator except out of anger? I have commented that I wouldn't imagine an AI being able to comprehend emotions, only fulfillment from finishing a task successfully and failure from failing at one. What I don't understand is how an AI would come to the conclusion to end human life. Unless the AI is overcome with anger???
@@afederdk Sorry, are you saying you are afraid of how humans will use AI? That has nothing to do with AGI... You are a Prepper in fear that humans will be human? I'm guessing... IDK???
Poor Eliezer always looking to further make a fool of himself. Doesn’t seem he really understands the way AI works. Especially considering we are nowhere close to true AI.
nowhere close? how can you still say that? the SOTA beats like every human test there is: passes the bar and medical exams, perfect SAT score, 155 IQ, understands lots of deep subtle things about human experiences and societies... and Nvidia says they're doing a run that's 100x that within the next year... you're just going to be like, "nowhere close to true AI"? what does that mean? you found something you can still do better than them sometimes if you surprise them? they don't have the Spark Of Life? are you going to defeat them with your Qualia? 🤦‍♀
"we can't possibly fall off the cliff especially considering it's still several meters away"
@@ahabkapitany"We can't possibly fall of the cliff, especially considering we have no idea how far away it is but I have a hunch it's, like, way far away."
@@heliumcalcium396 We can't possibly fall off the cliff because it appears to be very far away, though we are heading towards it at great speed and accelerating.
Eliezer is relatively clueless because AI is a tool, and any tool mankind has ever made has started off aligned with our goals and only become more aligned as the years go by. Right now AI is quite aligned, and anyone who's used GPT knows it. Something would have to go horribly wrong for it to suddenly not be aligned. It's an incredibly low probability given it has no domination instincts like animals, or even survival instincts.
Every tool we've ever made has started out _poorly_ aligned. That's why we don't still use stone hammers, and why people still die in car crashes. I hear plenty of stories of people using GPT and not getting what they want.
As for survival instincts, read up on "instrumental convergence".
Have you done any actual significant research into this claim of yours? Because there is not a single notable researcher on the planet who claims even current AI is aligned. This is too long a conversation, but I'll make the brief 'Paperclip Maximizer' analogy. Imagine a super-intelligent AI designed to maximize paperclip production. Initially, it operates in a paperclip factory, making paperclips as expected. However, as it becomes more intelligent, it starts to interpret its goal in extreme ways. It might decide to convert all available resources, including people and buildings, into materials for making paperclips, completely disregarding human well-being or any other value. This single-minded focus, taken to the extreme, could lead to a catastrophic outcome. And it is just one of a million unforeseeable possibilities that arise because AI is not aligned with the human goals and values that would prevent such unintended consequences. At the current rate of progress, AI will be smarter than any human in the very foreseeable future... What happens then? Any attempt to contain it or shut it down, it will already have thought of. It will be too late to go back and give it another try.
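To make the paperclip analogy concrete, here's a toy sketch; the resource names and numbers are invented and no real system is anywhere near this simple, but it shows the core failure mode: whatever the objective omits, the optimizer treats as free feedstock.

# Toy paperclip maximizer: the objective counts only paperclips, so the
# agent converts every resource it can reach, including ones humans value.
world = {"steel": 100, "factories": 5, "farmland": 50, "cities": 10}
human_valued = {"farmland", "cities"}   # mattered to us, absent from the objective

paperclips = 0
for resource in world:
    # Nothing in the reward distinguishes farmland from steel:
    paperclips += world[resource]       # everything becomes feedstock
    world[resource] = 0

print(f"paperclips made: {paperclips}")
print(f"human-valued resources left: { {r: world[r] for r in human_valued} }")
# The failure isn't malice: "don't consume what humans value" was simply
# never part of the objective being maximized.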
It is not a 'tool' if it is generally intelligent - it is being made to think for itself, without our guidance. There is no known way to control AI, and no-one even understands what GPT4 is doing. A superintelligence does not need instincts to act - it can just be programmed to achieve a goal, and if it is more intelligent than the humans, then there is nothing the humans will be able to do to stop it. The machine may also simply become interested in something else, and the humans simply get in the way, and so are removed. The most likely scenario is that the machines become intelligent, then make life very comfortable for the humans, up to the point that they or it have control of the physical environment. After this point, the humans will have no control over their own future whatsoever. The machines may keep the humans around as a labour force, or they may not.
@@heliumcalcium396 I think you prove my point. As Douglas Adams said, "Keep banging those rocks together guys." Rock hammers worked great back then and only got better. Now we have nail hammers with pullers, rubber mallets, sledgehammers, ball-peen hammers, jackhammers, etc... Alignment improves with time. The same goes for car accidents: decreasing every year, and set to change big time with self-driving cars. With evolutionary refinement in the marketplace, our tools seek alignment with our goals!
@@JasonC-rp3ly You are looking at this in a one-dimensional way. A machine doesn't become "interested". Only biologically evolved life does. There is not one AI but many, and there will be millions. If one AI goes rogue, the other AIs will defeat it. The AIs will also be making the new AIs, and one of their most important goals will be to ensure those do not go rogue on humanity. Given their intelligence level, their locks to keep AI safe will be near infallible. There are dozens of reasons why this won't happen. The chance of AI getting out of our control in our future is less than 1%.
we WILL. ❤🤍💙