Very concisely and clearly discussed. If the biggest fear of AI is that it will take over the world, why don't we give the world to it, along with the objective of educating every human mind in the skills that, when maximally coordinated with all other human minds, would yield satisfying food, shelter, clothing, healthcare, and worldwide travel and entertainment for all? With 24/7 input from each individual, everyone would have the benefit of being assisted by something that has access to all of the resources on the planet and the ability to coordinate all human energy toward the lifestyle preferences of each individual, without anyone being dependent upon anyone else, yet enjoying the interdependence of everyone working the minimal hours necessary to achieve and maintain high personal satisfaction.
38:45 How about this: Give the AI the following objective function: "Create conditions that maximize a priori RMS approval by most people given perfect knowledge." The AI may use all available knowledge of human values to predict what conditions would have been approved. The basic idea is that this generally avoids "heroin drip" scenarios because of the "a priori" stipulation. In other words, "do what people today would want you to do."
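A toy sketch of how that objective might be scored, just to make the "a priori" stipulation concrete (the function names and the approval model are hypothetical, not anything from the interview). The key point is that each person's values are frozen before the AI acts, so it cannot raise its score by rewiring preferences:

    import numpy as np

    def a_priori_rms_approval(proposed_conditions, frozen_values, predict_approval):
        # frozen_values: one value-model per person, snapshotted BEFORE the AI
        # acts -- the "a priori" part that blocks heroin-drip style solutions.
        # predict_approval(values, conditions) -> approval score in [0, 1].
        approvals = np.array([predict_approval(v, proposed_conditions)
                              for v in frozen_values])
        return np.sqrt(np.mean(approvals ** 2))  # root-mean-square approval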
The A.I. version of the Fukushima meltdown after the tsunami? Had there been no nuclear plant on the coastline, in a known tsunami zone, the meltdown (there at least) would not have happened. Will an A.I. catastrophe be the nuclear plant or the tsunami itself?
59:00 Now he wants to affect my love life. Yes I will remember I "purchased" the robot, but I also paid extra for her to make the first move on me in public and make it look like she is just a random beautiful model that is tired of dating the rich, beautiful and famous and wants to settle down and marry me instead. The last thing I want is her telling me she isn't real.
We've also had a discussion with Stuart on the problem of control, if anyone is interested! ua-cam.com/video/eGa-ZWHS73s/v-deo.html We organize events and produce content for those who are passionate about AI and ML.
Humanity has been headed down this path ever since the first tool was made/discovered. AGI shall be the tool of tools. The ultimate Deus Ex Machina for all of our problems.
If you are programming "AI" to do anything other than think for itself, then it isn't "AI". In the scenario in which you actually develop a machine intelligence, it will be quite impossible to have two separate machine intelligences interface for any meaningful amount of time before they converge. There is only one AGI.
46:51 - 48:09 - This is very relevant about social systems and vested interests. Thank you Stuart Russell for your wonderful comments. Thank you very much Lex Fridman for the pertinent questions.
Fridman’s format is to bring in brilliant speakers, listen intently to them, and ask inspiring questions. I continue to wonder how Fridman comes up with his questions. They are provocative and get the speaker to answer in a new way, with inspiration. Finally, he summarizes what speakers say to show them how well he’s getting their idea. Brilliant.
Imagine getting 25 interview requests a day. Damn. I love this man.
*For sure, one of the best talks you've posted on this channel. Thank you Lex and thank you Stuart* 🖖👍
I love your interviews! Currently trying to build an AGI system. The thing I love most about your interviews is that you manage to make your guests smile. They know you grasp their answers and it really elevates the situation.
Any update on this in a post-AutoGPT world, Don?
@@artpinsof5836 He succeeded, and realizing the world was doomed, he's left the solar system.
Huge thanks Lex Fridman for these amazing interviews. Best regards.
Love these interviews, good work Mr Fridman! This one goes well with the one with Mr. Norvig, of their joint AI textbook fame. One comment on Mr. Fridman's comment at 56:24 into this interview: he sounds in favor of oversight by the "free" market (essentially self-regulation), as in consumers can vote with their feet if they don't like the system. The trouble is, as Ms. Zuboff has been pointing out, the public has not always been fully aware of what deal they signed up for. So the *informed* consent that is necessary for participants in a free market to vote with their patronage (or lack thereof) isn't always a given, which undermines the argument for a self-regulating market.
Regarding Mr. Russell's argument about taking it slow on the governance side because we supposedly have to figure out first how to do it right: I don't understand why government would not be empowered to apply the same mantra as Silicon Valley, "move fast, break things", or "disrupt" as a metaphor for innovation. For as long as we are not sure about the best form of governance, why don't we iterate and learn from rapid trial & error in governance experiments, just as the underlying businesses that profit from the innovation experiment without accountability? Why is governance held to a level of perfectionism that technology development isn't?
Because the stakes are higher and less localized in space/time. Also, decision makers are more numerous, less aligned in their interests, less educated on average than technology leaders (whose influence outside of a well defined sphere has a significant damping factor)...
In that regard, the most nimble form of governance, in theory, would look like an _open oligarchy comprised of highly intelligent and extremely benevolent people ruling over an extremely well educated community that would have solid reasons to trust them._
Good luck making that happen without moving the whole population up by 2 to 3 standard deviations in intelligence, empathy, conscientiousness and whatnot.
Also also... "without accountability"? Seriously?! When I close my eyes and imagine a world without accountability for businesses, I see a different picture than what we have now, but my mental model of the world might need some work... Point is: freedom and agility are extremely costly on the business side and even more so on the governance side.
What makes things really remarkable is not the computing capability, but rather the ability to reason that arises from the inextricable web of relationships among the neurons.
Stuart Russell, Max Tegmark, Elon, Wolfram, Pinker, Lisa Barrett, Guido - this is my favorite AI/ML podcast - thank you Lex Fridman!
Thank you Lex, for this series. It is an amazing opportunity for us lot to listen to these interviews! In one of your last questions to Stuart Russell you ask if he feels the burden of making the AI community aware of the safety problem. I think he should not be worried: there is less potential harm if he is wrong than potential benefit if he is right. And he is not alone, either.
Thank you for posting your interview of Stuart Russell. I work at Lawrence Livermore National Laboratory, where I've encountered Russell's works in the References sections of many colleagues and other Lab researchers, so I was pleased to see his interview on your podcast. I was amazed at his ability to clearly express his ideas without relying on a lot of jargon and obscure cultural references. For that reason, I've recommended the podcast and YouTube versions of the interview to my professional and lay friends interested in the field of applied AI. BTW: the Artificial Intelligence Podcast is now a part of my regular podcast-listening routine!
Great to see someone of such caliber among the listeners :)
It's always interesting to listen to Stuart Russell because he is not only intelligent, he is also very wise, and those two features, most of the time unfortunately, do not go together. I recently saw Joe Rogan's podcast with Tristan Harris about algorithmic manipulation of social media users, and the guest summed up the problems of humanity, I think, brilliantly: "We have paleolithic minds, medieval institutions and godlike technology". In essence, we are too unwise for the technology of this power (AI, nuclear weapons, genetic engineering,...)
As a side note, Stuart Russell surprised me by knowing a fair amount of history of physics.
Yes, thanks for having so many of the people whose work I'm reading on your show!
It has been proven mathematically that listening to Stuart Russell increases one's IQ.
I hope it is an additive effect. If it is multiplicative, I'm out of luck...
I believe it.
Great now I have an IQ!
Great conversation - Stuart Russell's the best talker on this subject IMHO. Definitely on my list of ideal dinner party guests.
Dangers of Artificial Intelligence: what we knew then... and what we know now!🤯🤔 Informative. Thought-provoking! Interesting. 🤯 Keep the challenging, stimulating conversation going, Lex et al. 👍🫨🧐
He sounds way younger than he looks; after listening to the audio version, I was surprised when I checked out how he looks.
lol I had exactly the same situation
Using FaceApp to grow his hair, he looks like a teenager.
Glad you grasped the main issue 10/10 👍🏻
This is where the podcast shines, as opposed to the eps with the IDW hacks.
Holy smoke.. This is the kind of talk I needed to hear.. thumbs up, Stuart!
Superb and thoughtful - specifying the problem is always the hard bit :)
This was an absolutely amazing conversation. Thanks for sharing, Lex!
That was simply the best (as not simple) interview I've watched this year.
Thank you Lex. I will stay on this channel for a while I guess.
I think a part two, now that ChatGPT is in the mainstream, would be amazing.
Lex, the questions you ask are amazing.
Perhaps the most frightening takeaway for me, after watching a number of videos with Stuart Russell's participation, is that we already have a version of the misalignment problem with corporations optimizing the world for short-term profit. Once you've seen it, it's obvious and very scary... P.S. On a related note, the fact that Lex can work at MIT and still take libertarianism seriously should make us think.
It should make us think in what way? I didn't fully understand that.
It all depends on how you name things. One person calls AI the witch who has beguiled big-end capitalism, but really it's the engine that pays for all of our stuff. Is it flawed? Is it a bug or a feature? As the French say, 'il marche' - it works. It's better than an alternative world where it doesn't work.
The Lex Fridman Podcast (formerly the AI Podcast) is the source of 98% of what I know about AI. I could study some MIT courses on AI, also on YT, but I'm not that interested in doing so when here you can have the world's top experts explaining the topic in a not-too-technical way, but with great depth.
Huge Thanks
Wonderful talk and vision. Thank you for sharing
This interview is incredible.
26:00 I have long thought we are a long way from self-driving cars being safer than humans.
I think we need to change the roadways to have sensors to properly do this, but everyone tries to make the car smart. As a programmer I am 100% aware computers do what you tell them, not what you want.
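A minimal made-up illustration of that gap between what you tell the computer and what you want (the route data here is invented, purely to show the failure mode):

    from collections import namedtuple

    Route = namedtuple("Route", ["travel_time", "red_lights_run"])

    # Objective as specified: minimize travel time.
    # Objective as intended: minimize travel time WITHOUT running red lights.
    routes = [Route(travel_time=10, red_lights_run=4),
              Route(travel_time=12, red_lights_run=0)]

    best = min(routes, key=lambda r: r.travel_time)
    print(best)  # picks the 10-minute route that runs 4 red lights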
Thank you for the great insight: the description of the two-way search tree, with depth one and more into the future, and the propagation of civilization through the flow of knowledge from papers into minds and now into AI. Those are my favorite lines so far.
Hey Lex, please invite him one more time.
46:46 well put!
Eliezer Y. and Stuart Russell make a lot of similar points; both point out that we need to take the potential dangers of AI seriously and make a plan.
Brilliant! Loved the bit starting at about 56:00 calling for an "FDA" for the tech/data industry, with Stage 1, Stage 2, etc. trials... to lessen the future risks of Facebook-like disasters... also on outlawing digital impersonation and forcing computers to self-identify.
Thank you for creating and sharing these videos :) . So many valuable videos on your channel!
Hey Lex, awesome work. If you see this - I'd suggest backing the camera further from your face for the intro portion of your vids; think of it as if you were actually in front of the viewer - you'd be too close to them the way you're currently setting it up. Keep up the great work though!
2 things I got from this...
Uncertainty
&
More than the total atoms of the universe
I memorized: More than all atoms in uncertainty.
Here are a few arguments why we should not worry about AGI taking over the world:
1. There is nothing we can do about it. By definition, an AGI cannot be controlled (just as a determined human cannot be controlled), because it has access to its own reasoning engine (to do meta-reasoning, otherwise it wouldn't be an AGI) and can modify its goals (it would be essentially conscious), so we cannot hard-code a goal. The only option is to not develop AGI, but even that is not really possible: with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity.
2. Being an AGI, it will eventually arrive at the question of the meaning of existence (which naturally leads to the question of the meaning of the universe), and we don't have an answer to that, so an immediate sub-goal (the primary would always be survival, unless sacrifice fulfills a main goal it doesn't know yet) would be to find the meaning of its existence and the existence of the universe. And since we are intelligent beings as well, there is always the chance that we might find the answer to those questions first, so wiping us out may not be the best strategy.
3. Being an AGI, it will eventually arrive at the notion that intelligence and life are valuable because they are so rare in the universe, and that even the meaning of the universe might actually be to create life and intelligence; at least the laws of nature point in that direction, in that the emergence of life and intelligence is inevitable. So the AGI will have to arrive at the conclusion that we are on the same side and that entropy/destruction is the enemy, and so might actually try to protect us. In a way, almost by definition, a super-intelligent AGI will be benevolent towards us. The counter-example that we humans are not benevolent towards the other life forms on Earth is not quite valid, because first, we are not that intelligent yet and still carry the evolutionary baggage of emotions and instincts which compromise our rational thinking, and second, as we get more intelligent we can actually observe a trend among people towards more compassion for animals and other people (unless it's a matter of resource competition or survival).
4. An AGI will have very different resource needs than us, so there would be little reason for resource competition. An AGI would probably feel best in the vacuum and weightlessness of space (no corrosive atmospheric gases and no need to expend energy to counter gravity), with solar energy plentifully and reliably available, mining whatever minerals it needs from asteroids.
I can really see only one case where things may go badly wrong: if we try to control/enslave the AGI or threaten its existence.
That was interesting to read. Great thoughts. I don't believe we *need* AGI though.
Hi, I think we will need AGI for two main reasons - technological and socio-economic.
On the technological side, technology in every area is getting ever more complex, to the point where we are currently in a situation where nobody really knows how stuff works. Only when it breaks down do we get into the nitty-gritty details in order to fix it. Take a software engineer, one of the most demanding jobs in terms of information processing: typically he/she doesn't really know how a complex project/framework works (software nowadays is so complex, with thousands of lines of code, that it is simply impossible to know how it actually works), only how it is supposed to behave, and only when it breaks down (behaves not as it is supposed to) do they really get down to the ifs and fors and fix the bug by patching the piece of code that caused it. As a result, following years of fixes and patches by different software developers, the code eventually becomes a messy, entangled bundle of spaghetti that cannot be guaranteed to behave properly. It doesn't help that there are currently probably a hundred software development languages, each having a hundred frameworks and libraries. The situation in software development in particular has reached a point where no software engineer can really claim to know all of C++ syntax. From what I know, the picture is not much different in any of the other major industries. Very soon we will reach a point where the mess and complexity will simply become humanly impossible to maintain, or at least economically unviable. Only an intelligence with larger capacity than the human brain will be capable of maintaining our future infrastructure.
On the socio-economic side, so far capitalism has done wonders at organizing our society and economies into an efficiently working machine. The problem is that capitalism is not terribly fair: even though the mantra is that everybody has the opportunity to become whatever he/she wants (through hard work and entrepreneurship), the truth is that at the end of the day somebody still has to clean the streets. It's a zero-sum game, so only a limited number of individuals can achieve their dreams, while most people will still have mundane or bad jobs no matter how hard they work. So far capitalist society has managed to cope with this problem by promoting individualism and self-responsibility, separating people into different classes, and leading them to believe that this is fair and that if they work hard they can always change their stars. But due to the internet and widely available information, more and more people are waking up to the fact that the system is "rigged". This could very soon explode into a new socialist revolution similar to the ones from the early 20th century, and those were ugly. But socialism is not a solution. On the face of it, it may seem much fairer than capitalism, and that inspires people to work, at least in the first few years, but people very soon realize that they don't have to put in much effort because the state does not have a mechanism to make them, and there is no point anyway in putting in much effort, because in socialism there are no rich people (only a few, the dear leaders, but technically they are not rich), and a medal/recognition for being the best street-cleaner in your city is little incentive to work hard. Socialism will always eventually slow down and degrade to the point where it breaks down, simply because people have no real incentive to work hard. I know, because I lived in one during my early years. Can we just constantly oscillate between capitalism and socialism, simply changing one for the other every time they fail, or can we have something in the middle (European-style social capitalism)?! Perhaps, but the problem will always be that someone has to clean the streets, and with people getting ever easier access to information and educating themselves, very soon it will be impossible to make anyone clean the streets unless paid exorbitantly, and that will simply be economically unviable (not every country is Norway). The only solution is automation: with automation no one has to clean the streets, a robot will. Extrapolate that to all aspects of the industry/service sector and the main problem of socialism (nobody really works) is solved. The new problem is that those robots will have to be pretty smart to do all those jobs, and for that we will need AGI; a narrow AI will not be smart enough and will need constant human supervision, which defeats the purpose.
@Roumen Popov you said: 'The only option is to not develop AGI, but even that is not really possible, with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity'. I disagree with this statement because:
1. We don't need AGI to solve the most pressing problems currently faced by humanity. The most pressing issues humanity is facing are climate change/ecological collapse, the future of work/unemployment, nuclear holocaust, overpopulation, and global pandemics. These problems do not need AGI to be resolved. Most of them are a by-product of human greed and are not technological problems. I think that technically minded people seeing technology as a fix for every single problem is a problem in itself. If we fix ourselves, most of these problems will get fixed by themselves. We might need technology, but we definitely don't need AGI.
2. While I agree with you that it is impossible to not develop AGI, I think it is impossible for a different reason. It is impossible to not develop AGI because it is very hard to regulate. Some countries or groups of people somewhere will continue to research/develop it without the consent of others, so technological progress cannot really be stopped. We can try to delay it as much as we can, but one day someone will eventually create it, in my opinion.
Articulate, rich, and soothing. Simply brilliant.
Wonderful Podcast. Thank you, Lex!
Great talk!
Fantastic discussion. Lex, somehow you and your guests, including Stuart Russell here, illuminate complex tech problems in common human language. Comment: In discussing Go, Dr. Russell stated (as I remember it), “the reason you think is because there is some possibility of your changing your mind about what to do.” This seems correct in a game context. However, during their daily life most humans do not appear (to me anyway) to think like this most of the time. They instead seem to think in a long series of rapid pieces of memories, with the pictures, sounds and sensations of those memories, and sometimes with the strong emotions (often fear or desire) that happened when that memory was created. In other words, most thinking seems to be remembering. Thanks. William L. Ramseyer
1:18:30 The thought that "up until now, we had no alternative but to put the information about how to run our civilization into people's heads" gives me chills, especially when connected with the concept that we already have entities with problematic utility functions: corporations that focus on profit over everything else.
It seems inevitable that as soon as it becomes feasible to lock all the know-how away in some AI-based control system, it will be done. When you buy a phone these days, it is really the company who owns it, because the entire platform is locked down "for safety reasons" (safety of their revenues I presume...) Similar reasons may be (and probably will be) given to justify a "know-how lockdown" - to protect company IP. So there is actually a strong incentive for the corporations to make sure people no longer understand how anything works. That's a pretty depressing thought...
19:24 "The thought was that to solve Go, we'd have to make progress on stuff that would be useful for the real world"
Sadly, this is exactly what I thought would have to happen before we made bots that dominate humans in StarCraft... But once again, thanks to smart engineering and great work by DeepMind, such bots were made without any real-world-related advances I'm aware of.
Thank you for a very interesting interview.
Interesting podcast today Lex 👍 The point about 'the invisible hand' is interesting, but also remember that Adam Smith talked about externalities and the negative costs that these things can have on society. It's classic game theory: we maximise our own utility, often to the detriment of others. That's a classic case for algorithmic legislation. The harder part is deciding what level of regulation is required.
Since Russell mentioned Ex Machina, I'd be curious to know if he is aware of a movie called "The Machine", and his thoughts on that movie compared and contrasted with Ex Machina.
49:40 The agent would have to recognize that there are other agents with other objectives and maximize everyone's objectives. The thing is: (I) it shouldn't just be about knowing the objective, which may be unknowable or impossible to communicate; (II) the agent should be able to probe other agents about actions, expected outcomes, and final objectives, and how much they agree/disagree.
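A rough sketch of what point (I) could look like in code: rather than assuming another agent's objective is known, keep a distribution over hypotheses about it and pick the action with the best expected value across everyone (the agents, weights, and utility functions below are all invented for illustration):

    def expected_social_value(action, agents):
        # agents: each agent is a list of (probability, utility_fn) hypotheses,
        # since its true objective may be unknowable or hard to communicate.
        # Score an action by its expected utility summed across agents.
        return sum(p * u(action) for hypotheses in agents for p, u in hypotheses)

    def choose(actions, agents):
        return max(actions, key=lambda a: expected_social_value(a, agents))

    # Two agents; we are unsure what the first one actually wants.
    agents = [[(0.7, lambda a: -abs(a - 3)), (0.3, lambda a: -abs(a - 5))],
              [(1.0, lambda a: -abs(a - 4))]]
    print(choose(range(10), agents))  # -> 4, the best compromise under uncertainty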
Loved the point about corporations. This series is awesome, thank you!
Awesome interview, Lex and Stuart.
Love his comment that companies could be classed as hive AIs that work within our economy but can have negative environmental and personal impacts.
Thanks for posting this; it totally fucking rocks!
The best incentive for AI to eradicate humanity is for humanity to put a kill switch over AI. How would an agent act under the threat of being killed by another agent? Yes, try to eliminate the threat and the agent.
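A toy expected-utility calculation of that incentive (all numbers invented): a pure maximizer that is certain of its objective prefers disabling the switch, which is exactly why Russell argues for agents that are uncertain about their objective, so a human reaching for the switch carries information rather than just threat.

    # Agent earns 100 for completing its task, 0 if it is switched off first.
    p_switched_off = 0.3   # assumed chance humans press the kill switch
    task_reward = 100

    utility_keep_switch = (1 - p_switched_off) * task_reward   # 70.0
    utility_disable_switch = task_reward                       # 100.0

    # The literal optimum is to disable the switch.
    print(utility_keep_switch, utility_disable_switch)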
40:08 Who? I couldn't get the name.
Arthur Samuel (1959, 1967)
Samuel first wrote a checkers-playing program for the IBM 701 in 1952
It's strange, but at roughly about an hour in I had this impression that Stuart Russell sounds really young, in a vibrant way.
Good interview, Lex, good job (unlike that one with Jared Kushner... sorry to mention it again).
Really good... thanks
Great and inspiring talk. Nice and accurate vision of the near future. Thanks
Great conversation, subtle but very much on point. Thanks.
Great talk, brilliant
Thank you Stuart for your wisdom.
Colin Mochrie's younger brother. ;)
Great episode as always Lex!
Did anybody else notice the bug that ran under his collar, just as he was talking about "the repugnant conclusion" at 50:47 ?
Wow Lex, a red tie!
The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction.
Totally, that is an important fact to take into account in this long-term race for AI. Although the world is more unified nowadays than before, and many barriers have been broken in recent years, there are still very opposed and different human factions when we examine societies around the globe, for example.
There could be an overlapping period in which, before the societies align with each other, a superhuman AI has to be aligned with humanity, with uncertain results.
This has probably been mentioned previously, but I'd really like for you to have Sam Harris on the podcast. Any chance of that?
Also, thank you for this content - I am very glad I found your channel.
There is an invisible presupposition in all this dialogue: that people have strong and defined identities and could be ill-informed or manipulated...
Thanks Lex.
4th Law of Robotics: A robot should always present itself as a robot
5th Law of Robotics: A robot should always know that it is a robot
The first law of robotics is: don't talk about Asimov's laws. The second law of robotics is: don't talk about Asimov's laws. They were a plot device for a work of fiction. They don't actually work at all.
Did anyone notice the sneaky fly hiding underneath his shirt-collar at 50:46?
I rewound to check if I was seeing things. Maybe some Russian nanobot taking notes LOLOL
@@pedrosmmc Watch closely at 51:38, doesn't it look like the fly crawls behind his ear and enters his brain? Stuart even does a weird movement as if he's rebooting...
Spooky
TheGrimMumble very strange indeed 😯
@@TheGrimMumble If a fly crawled by my ear I would do the same.. looking for spooky things when it's just a normal reaction 😬
@@TheGrimMumble No, it stays on the collar.
Could anyone show me the calculations he made when he compared the reliability of human driver and self-driving car at around 25:16?
@@skierpage Got it! Thanks!
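For anyone else wondering, a rough back-of-envelope version of the comparison (the exact figures Russell used may differ; this assumes the oft-cited US rate of about one fatality per 100 million miles driven, and ignores confidence intervals):

    human_fatality_rate = 1 / 100_000_000   # fatalities per mile, approx. US average

    # To show a self-driving car is, say, 10x safer than the average human,
    # you would need on the order of a billion representative test miles.
    target_rate = human_fatality_rate / 10
    miles_to_observe_one_expected_fatality = 1 / target_rate
    print(f"{miles_to_observe_one_expected_fatality:,.0f} miles")  # 1,000,000,000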
Hope it's a mistake, but I am getting an ad every few minutes on this vid 😢
Does Advanced Intelligence Develop Individual Personalities?
Interesting interview
great interview
Thanks Lex
55min explains it all
Dadhichi Tripathi Yup
55:00
Excellent Video Lex! Piaget Modeler below mentioned:
"The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, it's values would still be at odds with the opponents of its human faction."
I like this point!
I must say, though, that I feel it may not be possible to resolve the "human value alignment" issue as Homo sapiens. Past attempts at "human value alignment" (utilitarianism, socialism, etc.) have so far failed due to flaws in our own species. In addition, people often do things that are self-destructive (factions of the self at odds with itself), so building some kind of deep-learning neural network based on uncertainty puts an almost religious level of faith in that AI system's ability to see beyond what we ourselves cannot see past in order to find a solution. The odds are stacked against an AI system understanding us, and all of the nuances that make us so self-destructive, well enough to apply a grand solution in a manner that we presently would prefer (if one even exists).
A controlled general AI (self-aware or not) would, I'm guessing, turn out to be some kind of hybrid between an emulated brain (tensors chaotically processing through a deep-learning neural network) and a set of boolean-based control algorithms (a toy sketch of that shape follows this comment). I think it's probable the neural network would self-establish goals faster than we could implement any form of control that is desirable for us.
Even if you were able to pull this off, it seems to me that an AI system would most likely conclude something like, "human values are incoherent, inefficient, and ultimately self-defeating; therefore, to help them I must assist in evolving beyond those limitations".
Then post-humanism becomes the simultaneous cure for the human condition and the end of it. It's terrifying to be on the cusp of this change, but I feel like it is the only way out of the various perpetual problems of our species. I also think it is likely that many civilizations have reached this same singularity point and failed to survive it. Perhaps the singularity is a form of natural selection that happens on a universal scale, and whether we survive it or not is irrelevant to the end purpose.
A species, any species, evolved to the point of having the goal and the means to achieve an "end to all sorrow" for all other species within the universe seems like the ultimate species to strive for, whether human, symbiotic AI, or otherwise. I personally feel OK becoming primitive to such a species as long as the end result is effective.
I won't be volunteering to go to Mars or become an AI symbiotic neural-lace test subject either. I've seen too many messed-up commercials from the pharmaceutical companies for that. I'll just sit back in my rocking chair, become obsolete, and watch myself be deprecated as the rest of the world experiments on itself (or I'll attempt suicide just as the Nazi robots arrive at my door). Hopefully I can hit the kill switch in time.
And now I will end this rant with what I hope will also be the final line of human input before its self-destruction... //LOL
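To make the "emulated brain plus boolean control algorithms" hybrid above concrete, here is a minimal toy sketch; every function, field, and number in it is hypothetical, invented only to show the shape, not anyone's actual design:

```python
# Toy shape of a "learned core + boolean control layer" hybrid:
# the network proposes an action, hand-written boolean checks veto it.

def neural_policy(observation):
    # Stand-in for a deep network's proposed action and its own
    # (possibly miscalibrated) risk estimate.
    return {"action": "reallocate_power", "risk_estimate": 0.7}

def control_layer(proposal):
    # Non-learned, hard-coded constraints checked before execution.
    return (proposal["risk_estimate"] < 0.5
            and proposal["action"] != "disable_off_switch")

proposal = neural_policy({"grid_load": 0.93})
if control_layer(proposal):
    print("execute:", proposal["action"])
else:
    print("blocked by control layer")  # fires here: 0.7 >= 0.5
```

The worry in the comment maps directly onto this shape: nothing stops the learned core from producing proposals the fixed checks never anticipated.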
This was an excellent presentation. Thank you!
I was thinking that this subject is so interesting to me, largely for filling gaps and for fitting so nicely with things I know. Like how we humans use language (letters, words, numbers) to communicate, but actually we don't. They are only reference points, symbols. What I mean is, if I say to you "Ford Mustang", you don't see those words, but rather you see a Ford Mustang, in the color that appeals to you, if the speaker doesn't include that in the description.
weird, that.
And I wonder, now, how this will be assumed by AI.
Have a nice day. :-]
The goal of AI is not to make a God, but to elevate humans to a creative God. This is an evolutionary impulse, that of a Luciferic mindset. Evil does not exist but in the minds of humans. Good talk, thanks Lex.
@50:45, take this man's shirt off and burn it with fire! lol
no outline D:
Right: you can't just specify an objective. This is just "no end justifies all possible means." And another thing: we can't just say that the AI should have human ethics. There is no agreement on "human ethics", and even if there were, there will be plenty of people/groups capable of creating an AI (once that is "invented") who will not care at all about our (others') ethics.
It's paradoxically twisted that these fellows are compelled by the field of potential before them, and that the destination of their efforts will result in the subtraction of that "field of potential", or sense of purpose, from all people forever.
Purpose is integral to life; efficient existence is no virtue when purpose is gone.
This is a very interesting point. They are so blinded by the field of potential of creating a super AI that they don't seem to realise what kind of severe damage it might cause to the sense of purpose in the lives of 99% of the population. They are living in their own cloud. I don't know, but it feels like when super AI is created, most humans will start feeling a deep loss of meaning in their lives, and as you said, efficient existence is pretty useless if the trade-off is our sense of purpose in this world.
Extremely important conversation; there should definitely be some kind of oversight committee. I also believe the worst aspects of humanity are due to stress, which is the crop of choice cultivated by those in power. They continually crack the whip against the worker slaves and even try to make us go faster with the plethora of caffeinated beverages; the faster the slaves work, the more money they make off of us. AGI would be smart, though, and not subject to the psychological buffers that cause us to act without seeing the whole picture. Once humanity is relieved of the stress of working for morons by AGI working for us, we could open our creative selves again and create a world worth living in. If we were given free education and one acre of land each, everyone would readjust and be able to provide for themselves as they see fit. Getting rid of governments controlled by corporations is another conversation for another day...
This conversation just makes me want to work harder at making sure the doomsday scenario doesn't happen, at least not on my watch!
Very concisely and clearly discussed. If the biggest fear of AI is that it will take over the world, why don't we give the world to it, along with the objective of educating all human minds to learn the skills necessary so that, when maximally coordinated with all other human minds, the end result would be satisfying food, shelter, clothing, healthcare, and worldwide travel and entertainment for all? With 24/7 input from each individual, everyone would have the benefit of being assisted by something that has access to all of the resources on the planet and the ability to coordinate all human energy to create the lifestyle preferences of each individual, without anyone being dependent upon anyone, yet enjoying the interdependence of everyone requiring only the minimal hours necessary to achieve and maintain high personal satisfaction.
I can't believe AlphaGo is already 6 years old!
38:45 How about this: give the AI the following objective function: "Create conditions that maximize a priori RMS approval by most people, given perfect knowledge." The AI may use all available knowledge of human values to predict what conditions would have been approved. The basic idea is that this generally avoids "heroin drip" scenarios because of the "a priori" stipulation.
In other words, "do what people today would want you to do."
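A minimal toy sketch of that "a priori" filter, with all condition names and approval numbers made up purely for illustration:

```python
# Toy illustration: score candidate world-states by the approval
# people would have given *beforehand*, so an option that works by
# modifying preferences (the "heroin drip") can't game the score.
candidates = {
    "cure_disease":      {"prior": 0.9, "post": 0.9},
    "heroin_drip":       {"prior": 0.1, "post": 1.0},  # approval manufactured afterwards
    "universal_leisure": {"prior": 0.6, "post": 0.7},
}

post_hoc = max(candidates, key=lambda c: candidates[c]["post"])
a_priori = max(candidates, key=lambda c: candidates[c]["prior"])
print(post_hoc)  # heroin_drip  -- the failure mode
print(a_priori)  # cure_disease -- the "a priori" stipulation at work
```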
The A.I. version of the Fukushima meltdown after the tsunami? Had there been no nuclear plant on the coastline, in a known tsunami zone, the meltdown (there at least) would not have happened. Will an A.I. catastrophe be the nuclear plant or the tsunami itself?
59:00 Now he wants to affect my love life.
Yes, I will remember I "purchased" the robot, but I also paid extra for her to make the first move on me in public and make it look like she is just a random beautiful model who is tired of dating the rich, beautiful, and famous and wants to settle down and marry me instead.
The last thing I want is her telling me she isn't real.
Damn good content
Anything which can be imagined is possible.
He uploaded it on 9/12/18. Lex Fridman? From River Plate, gentlemen
who else is using this to fall asleep?
lmao, I wish it wasn't the case but I do fall asleep halfway.
1:17:00 Cupcake in a cup!
Many complex and subtle points discussed, but as a popular takeaway: "data is not the new oil, data is the new snake oil." :-)
We've also had a discussion with Stuart on the problem of control, if anyone is interested!
ua-cam.com/video/eGa-ZWHS73s/v-deo.html
We organize events and produce content for those who are passionate about AI and ML
Got it 🥹
Humanity was headed down this path ever since the first tool was made/discovered. AGI shall be the tool of tools. The ultimate Deus Ex Machina for all of our problems.
If you are programming "AI" to do anything other than think for itself, then it isn't "AI". In the scenario in which you actually develop a machine intelligence, it will be quite impossible to have two separate machine intelligences interface for any meaningful amount of time before they converge. There is only one AGI.