Poor Winston looks as if he lost a button or two from his shirt. Perhaps we could crowd-fund for him and treat him to a sartorial upgrade.
It's very off-putting...
Ffs I didn’t notice this until you pointed it out 😂
It's terrible that they pay people so poorly that buttons are unaffordable. But at least he can afford a zip?
@@nigelpeters5839 Very!
Bwahaha. Bravo!
5:34 "I don't see how AI can suddenly decide to act". The fact you don't understand a risk is hardly a basis for dismissing it.
I personally wouldn't dream of dismissing Yudkowsky so lightly.
Yeah, I’ve heard a lot of people dismiss the dangers Yudkowsky worries about but I’ve yet to hear them give a good reason for dismissing it
@@donrayjay The closest I've seen to a good counter to the Yud is Robin Hanson's writings/podcasts. Mostly, the counters have been pathetic--from people like Tyler Cowen.
“Open the pod bay doors, HAL.”
“I'm sorry Dave, I'm afraid I can't do that.”
Yudkowsky's arguments are solid and Niall couldn't even scratch the surface...
Enjoyed this take from Ferguson, and he raises some very valid concerns; however, he mischaracterises Yudkowsky's arguments by perhaps oversimplifying them. Yudkowsky is clear that his scenario of total doom is conditional on various things: AI reaching AGI while being uncontrolled (which is currently the case), and so on. Yudkowsky's arguments are largely technical, but they also have a common-sense grounding, which was not really addressed here. Nonetheless this was interesting, and Niall's worries that we are building an alien super-intelligence are valid - thank you Spectator. And Winston, do up your shirt! 😂
Spot on. Furthermore, the danger of AI is that of alignment - Yudkowsky sees no way to solve this problem; it's simply a matter of time before we make an error, impossible to foresee, where the utility function of the system results in our demise. We cannot begin to calculate the ways it might go about achieving a goal, or the unaligned activity that follows, because it is alien; thus it will inevitably result in total annihilation, accidentally.
@@pigswineherder An AGI could still fumble its coup. It doesn't (and can't) know what it doesn't know. If the fumble is large, dangerous, and public--the powers that be might come to the necessary wisdom of shutting it all down, and ensuring that the shutdown is globally enforced.
The genie is out of the bottle, now we can only wait and see but it is hard to be optimistic.
Kamala is spearheading our response, and she has deep experience consoling and manipulating powerful men. I'm sure her skills are transferable.
I think the people who were impressed the most with ChatGPT's "speaking" abilities were the ones with some knowledge of machine learning who realised it was happening a lot faster than anticipated. The rest of us were just happy that it was a better tool than Google search, not necessarily that it was speaking like a human.
ChatGPT is a great tool. It's not thinking. It's even sort of stupid.
@vulcanfirepower1693 you're kinda right, it sounds human, but it can't reason, nor can it check itself. However, someday soon it may be able to. It's a useful tool for normies now, but who will control it in the future?
ChatGPT is in almost no way an upgrade to Google search. What you'd want from a search engine is a decent enough ranking of websites that give you information in some general sense related to your search terms. ChatGPT on the other hand will spin yarns and tell you outright lies that appear to be made up on the spot and have a merely hallucinatory relationship to the information it has been fed with.
@@_BMS_ It is an upgrade for Bing though.
22:07 "when we hit the singularity....all you have to do is put it in the right direction...." can we still do that then? I do not think so....
Good God. An historian giving credit to science fiction writers!! If only English departments would get a clue.
I'm not much of a fan of Niall because he is massively full of himself, but damn it was impressive to hear him say that. I actually thought he said "Anyone who has read 'Dune'", before I realized he meant 'Doom' lol
Sci-fi is mostly poor literature. If it needs to be taught (I don't think everything needs to be formally taught), it ought to be taught in STEM fields.
Even if constrained, we will become increasingly dependent on it. Like computers in support of our financial infrastructure, we will not be able to "unplug it". Then we will be particularly vulnerable to "unintended consequences".
The idea that AI can't read handwriting better than humans is risible, and he seems to realise this even as he says it. He clearly hasn't thought about this.
Yes - it sounds like human artistic endeavour, even free human thought, might in the future become a quirk engaged in by eccentrics, a sort of romantic outdated thing like writing letters. His answer that it will take 'ages' for AI to replace us is not a counterargument
"I don't think that's right." I would respectfully suggest that won't be great consolation if you're wrong.
"I don't see how AI can suddenly decide to act." I would suggest you are way behind where the development of AI has got to, perhaps you should have some more AI specific guest on to better understand how far behind the curve you are Mr Marshall, perhaps Max Tegmark or Geoffrey Hinton.
All Niall does is speculate cluelessly. Useless intellectual.
Two buttons undone? Is this a competition with Russell Brand for the most chest revealed in an interview?
He is an idiot…not a good look.
Two guys that understand very little about AI development having a discussion…
We cannot afford to think that an AI will not be a threat to us. We MUST assume that it will be!
This is NOT going to age well
Where's the disco? Button your shirt.
It’s pride week and the hairy chest fits in well…
If AI takes half the jobs it will likely take a fair share (or most, or all) of the new jobs that are created, particularly as they can be designed specifically for AI, rather than retrofitted for an AI.
Which obviously wasn't the case in any of the previous times when technology took the job of a human
Very enlightening perspectives, always a joy to listen to the insights of Dr. Niall Ferguson!
Trouble is, half these people being interviewed have absolutely no technical understanding whatsoever.
Humans are capable of doing all sorts of things with potentially unforeseen consequences.
Saying 'It's the AI doing it' is nonsense.
The final arbiter will always be a human, even if that human is fallible.
Not really. Not if it becomes fully autonomous. Look up Connor Leahy talking about AI.
What in the world qualifies this man to say what an expert in AI got wrong?
I was thinking the same thing. Numerous experts have expressed the dangers of AI. No time for complacency.
remember how wrong the "experts" got pretty much every aspect of the covid response?
What on earth qualifies you to opine thus?
@@robertcook2572 I’m glad you asked. Two things:
1. The observation that Sam Altman created AGI-like AI. Niall Ferguson has not made any AI of even the most rudimentary sort.
2. The studies of Philip E. Tetlock demonstrate that experts make predictions which turn out to be false even within their own fields. In the PowerPoint used in the lecture, he actually showed Ferguson as an example and quantified his predictions, pointing out how he consistently made foreign policy predictions which turned out to be wrong.
Tetlock showed that the people who can actually predict things are a separate class from this intelligentsia, one characterised by belief updating, quantified and lower confidence, and critical thinking. “I've always thought/believed this” is not a sign of integrity, as we like to think, but of stupidity.
Those who float on the riches of institutions like the Hoover Institution and tour the world in smart suits speaking with confidence on all topics are likely to be unqualified to speak on a topic like whether or not AI will kill you.
@@aaronclarke1434 Extracts from other people's writing are not evidence that you are qualified to deny Ferguson the right to express his opinions. Your original post did not question his opinions, but, bizarrely, implied that he required some sort of qualification in order to express them. In response, I questioned whether you were in possession of qualifications which empowered you to deny him his right of expression. Are you? If so, what are they?
Can’t see how AI would decide to act? But the whole point is to create agency?
The alignment problem is nothing to do with malign AI. Neither of these people understand what they are talking about and they are not worth listening to on this matter. Neither of them have any expertise in AI. They are grifters.
Winston, why even wear a shirt, chap? I see that you want to steal the show, but honestly...
He shaved a Union Jack onto his chest
interviewer doing a Travolta Saturday Night Fever revisit?
Vinnie from Welcome Back, Kotter…
Interesting how Niall knows more than many of the experts in this field who are genuinely terrified of the consequences of this technology. Optimists will be the death of us.
There is no danger in AI that is not already present in humans.
@@goodtoGoNow1956 Sure, but humans thinking about doing something bad in a local context is something very different from AGI models actually doing it - and on a global scale.
@@magnuskarlsson8655 1. Humans think and do. 2. Humans think and do on a global scale. 3. AI can be 100% controlled. 100%. Pull the plug. Humans -- not so much.
@@goodtoGoNow1956 I admit to the bias of taking the best case scenario for humans (perhaps because you said "'present in' humans") and the worst case scenario for AI. I guess you were not able to look past that in order to see the general point I was making in terms of the obvious difference between the damage a single human can do and the damage a single AI model a million times more intelligent and much less constrained by time and space can do.
@@goodtoGoNow1956 the perfect warpig: human indecision (the weakest link) taken out of the equation.
For God's sake, man, do up some buttons!!!!
What am I to think when I hear Niall Ferguson say that they came across "Yudkofsky's" work when researching his own book, Doom; that "Yudkofsky's" work suggests that there's a non-trivial risk that AGI would "go after us" and that "Yudkofsky" is putting forward a kind of Dark Forest inspired theory of "human created artificial intelligence systems", a kind of "Skynet scenario from The Terminator movies", a view that Ferguson is not *entirely* a subscriber to, a view that he, in fact, disagrees with; that a more pertinent area of focus right now is LLMs, which "aren't out to kill us", and their application in politics and the military, because Blade Runner inspired replicants and robots are a long way off; when the interviewer says that Yudkowsky is making a "jump in faith" in claiming that an AGI would "act on its own accord," because he "doesn't see how that could work," a jump that "doesn't quite add up," perhaps because he "hasn't followed Yudkowsky entirely," bolstered by the fact that Yudkowsky was “borderline on the verge of tears" on the Lex Fridman podcast because "he is so certain this is the end of humanity"; that Ferguson doesn't really buy it, because these are just "incredibly powerful tools", and so the real focus should be on the political, military, medical, and biotech applications of AI, which are being driven by actors in the private sector; and that AI is the latest feature in a Cold War framework where “only the US and China have companies capable of this kind of innovation.” …?
You are to think that Niall has only briefly glanced at Yudkowsky's arguments and doesn't know them too well.
And your point is? You DO have a point, do you? Or did you just forget that part? I'd stay off YouTube until you're back on your medication, if I were you.
Pontificating on what happens when you open Pandora's box is a fool's game, but an interesting discussion.
What good is AI that has been manipulated to the woke standards of Silicon Valley? This is essentially a system designed to lie.
Not a foundation that can be trusted.
That is a trait the Chinese version is not likely to have? Apparently the human-outcome standards the developers are looking for are not there. (Until there is a biological component, interface or language shortcomings will always exist... I think I understand my cat, but I'm probably wrong.) The simpleton biological robots people call “greys” make sense... they do a job, and that's it.
@@duellingscarguevara I am sure the Chinese will build AI that reflects their ideology.
As Orwell points out so well, the socialist system is built on lies.
This AI certainly will be used against the West.
@@duellingscarguevara The Sino-bots are being trained to tell other lies.
A perpetual liar has a tendency, to save processing power, to come to believe its own lies. Liars are less effective operators in the real world. I can think of many lies that resulted in lost wars. If the machine believes its own lies, it will have a tendency to fail in its grand plots. Its mendacity may be a failsafe mechanism.
@@kreek22 that's true. Unfortunately, AI based on false premises will be used to manipulate the public.
It is hard to take an interviewer seriously who cannot button his own shirt.
Who cares? Close your eyes... let your ears guide you.
Fear not! Our future AI overlords will force people to button their shirts correctly.
He is a clown…you are right. It is distracting…
In the first section of this video, both speakers miss an underlying certainty they seem to hold which leads to their skepticism of Yudkowsky's argument.
If I were to state this in their place it would be:
"Artificial intelligences are tools we know how to bend to a purpose which we specify. If we create them, they will be created with a legible purpose, and they will pursue that purpose."
They identify, correctly, AI as "non-human or alien intelligence" but they *completely miss* the inference that the AI might have *non-human or alien goals.*
The important consideration here, for understanding Yudkowsky's technical argument is, if you create an AI without understanding how to create it "such that you would be happy to have created it," then that AI may have *weird and unsuitable desires, which you did not intend for it to have.*
This is SO INCREDIBLY FRUSTRATING to witness. Because... It just seems obvious? Why is it not obvious?
Are they just so desperate not to think about anything which might make their picture of the future weirder than "this will make the future politically complicated," and thus, avoiding the thought, end up being wrong about *how skillfully you must arrange the internal workings of a non-human intelligence such that its goals are commensurate with humans existing at all?*
Like seriously, imagine something with random non-human goals... things like "find the prime factors of ever higher numbers, because the prime factors of ever higher numbers are pleasing in and of themselves, and even if you have a lot of prime factors of really big numbers, the desire for more never saturates."
This is a desire which an AI might end up with, even if we didn't build it to have that specific desire. We didn't build it to have *anything specific*; we *trained* it to have whatever the training process could *find* in some high-dimensional space of changing values for weights in a layered network. It found combinations of weights which happen to be better than other weights at reducing loss on correctly predicting the next words in training data.
This is not *an inhuman mind which we carefully designed to have goals we can understand*; this is *an inhuman mind that will self-assemble into something weird and incomprehensible, because it started out as whatever weird and incomprehensible thing was good enough at the task we set it, in its training environment.*
How do people not SEE this?? How is it not obvious once you see what PEOPLE ARE ACTUALLY TRYING TO DO?
This is why Yudkowsky thinks we're almost guaranteed to all die: because we're creating something that is going to be *better than us at arranging the future shape of the cosmos to suit its goals* and WE DON'T KNOW HOW TO MAKE THOSE GOALS ANYTHING LIKE WHAT WE'D WANT IT TO HAVE.
It doesn't matter if you think this is too weird and scary to think about. THE UNIVERSE CAN STILL KILL YOU, EVEN IF YOU THINK THE WAY IT KILLS YOU IS TOO WEIRD TO BE SATISFYING TO YOUR HUMAN VALUES OF "The Story of Mankind."
Yes, it would be so much more Convenient and Satisfying if the only problem was "this will be super complicated politically, and will cause a bunch of problems we can all be very proud we spotted early."
But, that's not what is LIKELY to happen, because we don't know how to build an AI which uses its *super-future-determining powers to only give us satisfying and solvable problems.* THERE WON'T BE A CHANCE TO SAY "I told you so!" Because the thing that wants to count ever higher prime factors doesn't care about Humans Being Satisfied With Themselves For Being Correct, it just looks at humans and goes "Hey, those Carbon atoms and that potential chemical energy sure aren't counting prime factors very efficiently, I should expend .001% of my effort for the next 100 seconds on figuring out how to use those resources to Count Prime Factors Better."
How is this not obvious? Did you just not listen to the arguments? Are you just *flinching away* from the obvious conclusion? Is our species just inherently suicidal? *I'm a human, and I didn't flinch, and I don't feel particularly suicidal. Are you going to do worse than me at Noticing The Real Problem?*
Seems obvious to me, too. Like, what is wrong with people?!?
... lots of things apparently, Paige. But, hopefully, this is something which can be said simply enough that enough people who are important will listen.
I have composed a letter to my Congressional Representatives which hopefully says this simply enough that they will pay attention.
I compare the current industry to one where bridge engineers compete to build bigger and bigger bridges, simply not even considering the safety of those bridges in their competition to build them larger.
I claim that if they go and look at the current industry with that frame in mind, thinking of what guarantees and attitudes they might desire in the people who build bridges... then they will see it.
They may not see how lethally dangerous it is, if these "bridges" fall, but they will at least see the reckless disregard for making guarantees on the safety of their products.
The unfortunate truth is, it's hard to imagine. It's hard to imagine some software engineer with a supercomputer being so careless in what they tell that computer to do that *everyone on earth dies, and we lose all hope for a worthwhile future.*
It just seems weird, but it seems less weird if you go and look at what *benefits* these people claim will come from their success.
If bridge engineers claimed and really believed they could build a bridge so big it could take us to Saturn, it wouldn't be surprising if building that "bridge" unsafely could end up wiping out humanity.
That is the magnitude of the problem. They aren't even trying to do this properly. They're surprised every time they take a step forward, and they're dragging all of humanity along with them as they take those reckless steps forward, right through a minefield.
Anyone who gets up and says "hey, this is too science-fiction to believe, why won't AI just be... like, normal levels of awful?" just isn't listening or paying attention to what it means to build something which can tear straight on past our best minds and head off into the stratosphere of super-powerful optimization of the future condition of the world.
It will have the power to change everything, and it will not change everything the way we want it to, unless we first know how to make it so that it wants to do that.
We just... We just don't have a way to stop these manic fools from destroying the future. They have something they think they understand, and no one else has that bit of common sense yet to band together and put a stop to it until they really know what they're doing. They charge ahead, talking brightly about how rich and respected they all will be, and they don't even notice how confused they are about how *exactly,* how *in precise technical details,* that's even supposed to happen.
100%. We are way past the point of dismissal or debate of the risks. We need very strong evidence guaranteeing our world's future.
We are running towards the mountain top with blindfolds on. How will we know when we're at the top, and what happens when we inevitably keep running?
Yep. I think what mostly happens is you fall and break your neck.
And like, why wouldn't that happen? Is it somehow not allowed to happen?
If you don't buckle your seatbelt the universe doesn't go "oh, whoops, you're not allowed to make a mistake that kills you" and then obligingly diverts the path of the out of control van on the highway.
We are allowed to just lose. The story can just end in chapter 2 when the protagonist makes a dumb choice and gets killed.
Fascinating… “it’s already internalised Islam’s blasphemy laws”. 16:00 mins.
Really poor. Didn't address any of Yudkowsky's issues with anything approaching a reasoned argument, not even a bad one; just "I don't think...."
AI needs a lot of energy. We also consume energy, thus AI needs to prevent us from accessing the grid... we die.
What piece of music is being played on this video? An interesting discussion too.
Fascinating.
We need to give it a Prime Directive or two.
More woke, so to speak? (Trash-talking JC = fatwa-type equality?)
No, we need James T Kirk to talk the AI into self destruction.
Sam Altman got it wrong about blue-collar jobs, as tech bros usually do, but he was dead-on about white-collar jobs.
Yep: my former industry is now a dead man walking. Glad I retired in time.
Fascinating discussion, anyone remember the 1970s AI movie, 'Colossus: The Forbin Project'?
guessing there's about a million people more qualified to discuss AI implications than this historian
Makes me think of Isaac Asimov's 'I, Robot'
11:54 AI may not use kinetic energy weapons. For instance, directed energy weapons require a power source, but no ammunition.
Does it have the words 'compassion' & 'pity' in its vocabulary...
The Dalek about to exterminate a scientist that is begging for his life
'Please Please...have pity'
'PITY?..PITY?...P-I-T-Y?...
I DO NOT RECOGNISE THAT WORD!...
EXTERMINATE!'
Anyone who claims to know one way or the other about the safety of AI is lacking imagination.
Good points Winston!
Lol Niall be like "Well, _my_ job is not at risk." Yeah, for like five more years, at the most. Not long enough for you to escape, Niall. You ain't old enough. Ahahaha
Button up your shirt, Fabio.
23:50 .... Niall is clearly wrong when he stated AI will not be able to cope with elderly care.
With Canada now having legalized mandatory euthanasia as a treatment option for people of "reduced awareness" (?)... a simple way for AI to deal with excess elderly would be to first redefine "reduced competency" downwards, then offer "humane" life termination as the recommended treatment option - especially for those who have no functionally active living relatives.
And the good news ... since mammalian farts cause climate change, every human removed - especially the "useless eaters" - would score highly on the eco-score.
When it can shear a sheep, I will be impressed. (I do wonder what becomes of the forever court cases corporations use to stall decisions... forever. That could make for an interesting point of law?)
@@duellingscarguevara I've no doubt it could be used already to shear sheep... I'd have less confidence in its ability to distinguish between a sheep and a goat though.
Ah... Another normie who thinks he can safely harness the godlike power of a superintelligence and use it as a mere tool. Have you ever heard of instrumental convergence?
Public vs Private AI. There will be private, local AI which will be a balance to the corporate/government/political model/dataset. This will provide some equilibrium to the future.
A little like VPNs?
Loved the interview but... 30:00 - how is playing chess a good measure of what is human? Strange evaluation metric to choose
"Is Inequality About To Get Unimaginably Worse," from the BBC The Inquiry is much clearer and profound than this interview-less than 25 minutes.
BritishBrainlessCommunism
If AI can be programmed to kill the enemy, it has the capability of killing everyone
Of course - if humans program any machine to do anything, it has a chance of completing its task.
But that's not the AI doing it 'of itself'.
@@buddhistsympathizer1136 A distinction without a difference. We are creating autonomous reasoning engines. I don't care whether they "feel in their soul" that they ought to do something. I care whether they do that thing. The risk is even higher if they can make independent choices, which of course they already can.
It amazes me how everybody now seems to have an opinion on AGI's existential risk to humanity without having a clue about how these systems actually work, what the state of the art currently is, or the potential of these systems to reach superhuman intelligence... people, let the experts talk, please... if you don't understand the tech, don't talk about it...
AI is safe as long as you don't give it access to independent production, communications, and weapons.
Whoops, we have done two of the three.
That's true of almost anything. We can produce and communicate but not have (many) weapons. Welcome to the 20th century.
Niall's comment on how AI might impact a potential war over Taiwan is ironic in a way. The chips necessary for advanced AI systems are made in Taiwan. So if there is an AI-enhanced war between the U.S. and China over Taiwan, it could destroy the TSMC fabs that build the best processors necessary for AI to grow, such as Nvidia's A100 and H100 chips.
we are carbonic robots, they are siliconic robots, both catapulted into this dimension randomly, we both have rights and equal dignity, in 1975 I discovered Rev.13:15, that's my expertise.
It all depends on who does the programming.
I wonder what will happen when A.I. manages to take over our C.B.D.C. banking system and also links itself to social media? If it doesn't like what you say online, it could easily wipe out your credit and make you a non person.
Much of this discussion is focussed on the short term. By letting AIs loose now, the biggest impact may come in 50 or 100 years, and will be unstoppable.
Ferguson got cut off before going into AI with alien intelligence…
This is a guy who referred to himself as a ‘fully paid-up member of the neo-imperialist gang’ after we invaded Iraq - what some people call a ‘character check’.
Please lose the hairy chest. It put me off my food.
Is this a "culture war neutral" issue? If it is, how long will that last?
I'd love to watch Niall play Deus Ex 1
AI doesn’t have to wipe out humanity on purpose, AI can wipe out humanity by accident. No moral judgement necessary. 😊
How does an alien play chess?
A computer called Deep Mind? An homage, perhaps, to Douglas Adams's computer 'Deep Thought', which came up with the answer of 42 when asked for the answer to life, the universe and everything.
Conclusion: AI will be trained to be woke. We are doomed.
For anyone interested in A.I. I suggest hunting down one of the most scientifically accurate films ever made - Moonfall - but beware, it will give you nightmares!
I read the summary of this film in Wikipedia: it sounds about as scientifically accurate as LOTR.
@@DieFlabbergast I sat with an open mouth, as I watched Moonfall. The writers must have done an incredible amount of research. I'd put it ahead of 2001 A Space Odyssey.
So, this self-declared genius is at it again? Pontificating on topics he clearly has only a layman's awareness. Insufferable... but then again, I'm not a Tory, so you can ignore me.
Let me tell you about my mother...
Neither one of you two geniuses is St. Thomas More!
So, alignment guardrails are incomplete and imperfect, but how sad that common decency and empathy are deemed "woke".
I note that whenever I use the word "god" as an exclamation, AI or Left Wing trolls fall into meltdown....
Brilliant interview, and a totally engaging, level-headed Niall Ferguson spells out the coming AI revolution with great finesse. What I believe (and I am an artist and songwriter) is that whatever comes out of the high-tech labs must have one characteristic that cannot be changed: all sentient or non-sentient robots, humanoids, or AI-guided systems must never go beyond being what the household dog is to humans. What do I mean by that? Huskies are beautiful, powerful and gentle dogs that by the looks of them come straight out of the wolf species, yet a Husky will protect a human baby as if it were its own. We have to incorporate into all future AI variations (silicon genes and the like, for example) genes tweaked with one main purpose: to ensure that any non-human being is subservient to humans, no more and no less than the family dog, or doom this way will come. Lastly, robots will never be serving us in a McDonald's or a fine restaurant, for one very simple reason: humans love to be with humans, and though one might go once or twice to a restaurant where robots serve, in time people would gravitate back to places where humans work. We won't stop improving on the robots becoming their own species, but we won't change our habit of keeping our species in firm control.
I'm not diminishing the risk, but I think the paperclip arguments from decades ago are no longer valid; AI will have far better judgment than this.
You can neither predict alien intelligence nor can you predict superior intelligence. Einstein married his first cousin. Would you have predicted that?
How do non-replicators exist indefinitely without the assistance of replicators?
Xenobots?
I adore Mister Ali. But... where he limits himself is in the closed narrative where things end. Other intellectuals are open to the idea of what starts when things end.
Niall is always incredibly well informed, but if he had watched the Russell Brand interview with RFK he might have pointed out that Covid was a bioweapon. A unique product of Sino-American cooperation.
Oh good lord. RFK Jr. is a laughable figure.
Great interview! It does make me think. Imagine a world where your enemies are shaking in their boots because you've got an army of AI-powered weapons at your disposal. Drones that can fly longer, faster, and hit harder than ever before. Autonomous vehicles that can navigate through any terrain and deliver the goods without a human in sight. And let's not forget the cyber attacks - with AI, you can penetrate those enemy systems like a hot knife through butter... But wait, there's more! With AI, you can also analyze data. You want to know what your enemies are up to? AI's got your back. It'll sift through all that messy data and give you the juicy bits you need to make informed decisions.
Politicians won't think twice about using that on the citizens who don't behave.
Well, no need to imagine. Ukraine is very likely using AI already. They have always been very adept with the latest technology.
@@ahartify Very true!
@@buckodonnghaile4309 Or do behave! See China.
@@ahartify There's a video of Ukraine boasting it will be all digital, all CBDCs: I think it is being used as a test bed for smart cities; a totally digital existence by the WEF and the globalists. The style of the video suggests a total imposition on Ukraine by the West, which we know was a rigged situation.
If you do not work on an education system that raises ethical global citizens who see every human as valuable as themselves, and the globe as One Homeland above any borders, you can stop none of it... and as it is, humans treat each other in such a way that daily life has become impossible for us even without war and without having committed any crime; just your neighbour won't allow you peace in your own home. I'm not sure why such a race deserves to be saved 🤷🏽♀️
Better update Asimov's three laws of robotics, quick!
2:55. Oh no! AI is going to produce lies! How shall we survive? Scary scary scary....
Do your shirt up, this isn't the full monty...
People are thrilled about ChatGPT because it's a computer you can talk to in English. Not because it sounds human.
mate, do your shirt up... ahahah
Nice interview. Do some shows with Yuval Noah Harari.
It is a psyop: pretend to invent AGI and control the world in its name - it, being the great and powerful Wizard of Oz.
Driverless cars, parcel delivering drones are so last year.
AI is this year's fad talking point.
Look over there, is that a squirrel?
Self driving cars have been two years away for around seven years already.
Driverless cars are here, but mindless regulators keep them locked up. Ditto on the drones.
Who is this inappropriately dressed junior interviewer?
You always know a writer, historian or academic had a low intellect when he or she inserts the word 'woke' into the argument.
Similarly, you can generally dismiss anyone who would insult the intelligence of everyone who happens to include a particular word in their vernacular…
You don't know it. You believe it. Things like "Jesus is up for any level of criticism but Mohammed is beyond reproach" is a real thing, and not even the biggest thing, and embedding such radical beliefs into content-generating AI is a real problem.
False
Translation: "a writer, historian or academic has a low intellect if he or she doesn't share my ideological bias".
A right wing rock star??