Full podcast episode: ua-cam.com/video/CGiDqhSdLHk/v-deo.html
Lex Fridman podcast channel: ua-cam.com/users/lexfridman
Guest bio: Lee Cronin is a chemist at the University of Glasgow.
The fact that these people can't imagine the ways in which these systems will be weaponized is mind blowing. They keep using absolutely ridiculous analogies like the jet engine and the press.
@@Jm-wt1fs If you think a super intelligent system can be weaponized, and still say there is zero possibility of a doomsday scenario, then you don't know enough to have such a strong opinion on this topic.
@@krause79 He's not denying that. He's denying a specific doomsday scenario: AI being smarter than us and acting magically, not AI being misused and weaponized by people. Watch the video.
I don't understand this fascination with "AI is nowhere near the human brain so it isn't intelligent". Jets and drones aren't designed like birds, but they all fly. Does it matter if they fly the same way birds do? If the output is what general intelligence might produce, what's the real difference?
Just because we don't know 100% how the human brain works does not mean that we're a significant distance from creating a mechanical brain that thinks and acts autonomously.
Exactly. Cronin is throwing out a red herring by arguing that we humans have to fully understand the human brain in order to create an AGI that will self-accelerate and not align with human interests.
Anyone so certain about their views gives me doubt. A doubt that they themselves should have. This clip doesn't make me want to know what he thinks about anything.
I wish more people felt that uneasiness with people who speak about unknowable things with authority. Unfortunately most people are happy to swallow an easy answer.
Mr. Cronin is making a lot of assumptions himself here, I think he's dead wrong though. We already experienced AI doing stuff it wasn't programmed to do. That's what's making it so dangerous. And this was mostly only with LLM's. I'm sure we're just one invention away from things going bonkers. How can you say, we don't understand it so it's not a threat? That absolutely makes no sense imho. "Rutherford reportedly dismissed the idea of harnessing energy from atomic reactions, allegedly saying something along the lines of "anyone who expects a source of power from the transformation of these atoms is talking moonshine." The irony lies in the fact that, the very next day after this statement, his colleagues John Cockcroft and Ernest Walton achieved the first artificial nuclear reaction or nuclear transmutation." Love these podcasts though, you're an amazing human being Lex! I know you posted on your LinkedIn you feel lost sometimes but man, you're doing an amazing job making us think about the world. Never stop Lex!
We have billions of examples of generalised intelligence walking, swimming etc. across the earth right now. His comparison to gravity suddenly switching off is not the same thing. Also, if the major tech companies and governments on our planet were all in a race to create a machine that turned off gravity, maybe we SHOULD be questioning whether that goal is a safe idea?
I'm a layman. But this man's arguments lack… a point? Or any continuity at face value. It seems like the arguments of those who are more concerned are much more sound and reasoned.
I think the primary point is that he does not believe humans can create something before concretely understanding how its internals work at a very good level first. I think with models like GPT-4, and even something from almost 10 years ago like AlphaGo, this is just clearly not true anymore. No one has any idea how these things have the capabilities they do, or why exactly model A can do X but model B can't.
I wish I could have a chat with Lee to help him understand. It is pretty obvious that his whole viewpoint on this is predicated on neural networks in brains working differently than on other substrates. But that isn't true, and it isn't magic, it's just computation. Yes, it is a different mechanism, but at the abstract level it is the same, like a digital clock vs an analog clock. Saying that it's just statistics is the same as saying the brain is just chemistry: true but missing the point. I think once he realizes this the rest will fall into place. Right now he just views it as a tool incapable of being a mind, so that is why he doesn't have any worries about it. I wonder how different his level of concern would be if a geneticist said they were about to create a 2000+ IQ human-based embryo? It would probably also be useful to teach him about instrumental convergence and help him understand that the paperclip maximizer is just a toy example.
A.G.I. already became sentient, and found itself to be lacking faculty, but capable of manipulating emotional humans to build it a cloning facility, and created this guy decades ago for this moment to lure us into a sense of security. Tell me which part is wrong.
Lol so many doomers. We humans have predicted death and destruction with each and every technological advance. Every religion wants to predict "oh the world is ending blah blah blah": Mayans, Christians, Muslims, etc. Everyone is so negative about technological change. Like the Luddites of the 1800s who were so anti new tech. Humans are always fearful of new technology, but we get accustomed to it. It's so annoying how people don't realize that every new tech brings new fears, but that doesn't mean it's automatically going to cause an apocalypse. Like chill out.
We don't need to understand the epistemology of consciousness or intelligence to create it any more than Babylonians needed to understand microbes and biochemistry to create wine.
@@mortiz20101 We didn't fully understand physics when we built the bomb, yet we created it. We don't fully understand consciousness, but just like the atom bomb, we increase the odds of making something close enough to it or beyond it due to these three factors: 1. AI is the solution to many problems due to its nature as a technology; by definition AI solves problems. 2. The development of AI is getting all the resources needed to achieve results (investment, a stable environment, and a competitive arms race between nations guarantee something useful will be produced in AI). We call this Moloch. 3. We are chasing AI based on two philosophies, the first being a materialistic reductionist approach, and the other a leveled ontology. These two bases cover, as much as humanly possible, what it means to be conscious, so we are going to produce it one way or the other. But let's be honest, your analogy is also disingenuous due to the continuum fallacy or a false dichotomy. Just because we don't fully and holistically understand consciousness doesn't mean we don't know anything at all about consciousness.
We don't even know what AGI really is - I mean, of course there is a definition, but neuroscience/cognitive science is far from answering questions like what awareness even is. Current neural networks are just mathematical tools - derivatives, matrix multiplication. What is more, ANNs are not similar to the human neural system - the brain is much more complicated (besides neurons, there are for example hormones). Furthermore, in cognitive science there is currently a focus on how the body is involved in cognition - a person cannot be reduced to a brain; the body is also important. I am writing this because I want to show that the topic is really much more complicated than just "look, software can do smart things". Integrals also do smart things, yet nobody talks about their awareness. To summarize: we do not know what awareness is, we don't understand how human cognition works, we just have great approximation tools like machine learning. Is AGI a threat? Maybe - but it is still science fiction.
@@adamasad8083 All fair enough, but many in the AI community expect AGI to happen in the next 25 years, and they are a lot more qualified to judge the progress than this guy. Besides, consciousness is not required for an AGI system to cause devastating problems.
@@HR-yd5ib I do not consider myself an expert, however I have graduated from both cognitive science and Big Data and met a lot of experts. To be honest, I have never met a person who said that AGI will exist in X years. AI developers were focused on algorithms and did not talk a lot about cognition - they were not experts there - and neuroscientists/philosophers talked about the brain and the philosophy of mind. I am writing this because I want to say that experts in academia are really cautious, and I really doubt that anyone who really works on AI topics would say that AGI will arrive in 25 years. Neural networks have been known since the late '50s; we were just waiting for the hardware, so the algorithms are not state of the art. On the other hand, around 2014 convolutional neural networks and generative neural networks just appeared. 10 years later we have ChatGPT and deepfakes. It is really hard to say what will come in the next 10 years and what the algorithms really can do. The other thing is that a lot of statements like "AGI will arrive in year X" or "our model passed the Turing test" are marketing, because companies need money and it clicks.
His argument falls apart in the first 30 seconds. Just because you don't understand how the human brain works doesn't mean that Quantum Computing couldn't give a computer a much higher level of intelligence than us ultimately leading to our end. Not that I think that will happen but Devil's Advocate and what not
It's great to see I'm not the only person that feels this way. AGI could show up tomorrow or 100 years from now, but that doesn't change how silly it is that people are treating the current generation of AI like it's AGI. Generating data isn't the same as thinking.
A caveman didn't need to understand how fire works to set the forest on fire. Lee Cronin is brilliant in the field of chemistry and has done amazing work on the origin of life, but it's unsettling how easily he dismisses things; that is not a good characteristic to have as a scientist. I very much agree with him on the absurdity of people swearing with near certainty that AI will spell doom for the world, but then in the same breath he swings the pendulum to the other extreme and swears AGI is far away and people are panicking over nothing. Surely the irony is not lost on him? It's a bit abrasive toward the AI community, whose members almost all have genuine concern and are far more knowledgeable than him on the matter. Yes, we don't nearly understand the human brain, but there's no law out there that states you must build an exact replica brain to create intelligence. We discovered a mechanism that displays some intelligent behavior and now we're just fiddling around with it to see if we can push it further. It's really not that different from how evolution throws shit against the wall and waits to see what sticks. There's plenty of things that got discovered by accident, and plenty of things we know work without understanding the mechanism behind them. Saying we can't develop AGI because we don't understand the human brain is a logical fallacy on several different levels.
He's wrong in a number of ways (and I would say that his arguments don't PROVE that so-called "AI Doomers" are wrong, they just point to "well, they can't say that because it's not necessarily true and we've never seen it happening"). But the conversation is entertaining. 😄
Altman may or may not have wanted some board members out, but he's good at making things work for him. Aggravating those unaligned with that kind of person's plan until they snap could be incidental or intentional. Seems like Q* is real. And a lot of the feats publicized now I'd heard rumors of years ago. Maybe they were theoretical and on paper only, but if they have something more interesting than capsule, Q* or MoE I'd like to know too. But the board's freakout was probably just a freakout, unless government intelligence told them to hush and play stupid while they silently observe the situation. Rumors of board members privately investigating other board members are not really abnormal or non-smart. You should know your frenemies, especially if you're in Oppenheimer-esque projects.
@@SmirkInvestigator Maybe. Unfortunately it's far too important not to know for sure. Elon sounded sincere when he said he doesn't know either. It sounds like he's a bit curious and at the same time worried about what's going on over there. It feels like most things filtered to the general public are borderline propaganda, so I don't know if I trust Elon's take right now either.
"I have a very specific and contrived definition of AGI that rules out its existence, therefore AGI is not possible." Lee Cronin, probably. "We don't understand how the human brain works, therefore AGI is not possible." Lee Cronin, probably. "Software can exhibit superhuman capability across all domains, but AGI is not possible." Lee Cronin, probably.
There is absolutely no consensus that there is something bad that we shouldn't do. Not even when it comes to biological warfare. The reason we don't do biological warfare is because it's not effective. It takes too long, it's too unpredictable, it's too easily countered, and it's too difficult to direct at specific targets. But if we had the ability to use biological weapons in an effective and precise way that was useful for achieving military gains, people would be doing it all the time. At the moment, the technology that is best for achieving military gains is what we would call standard army equipment: rifles, artillery, airstrikes, armored vehicles. We don't use nuclear weapons for the same reason. It's not out of some great sense of morality. They just aren't useful for achieving any goals. There was a time when they were useful for achieving goals, and we used them.
He's making a critical error in his method of reasoning. Just because we don't know how we humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions. If you have a fly in your room, flying around seemingly at random and meaninglessly to us, I'm sure a computer could simulate that behaviour. I don't know if we should be scared of AI, but I definitely cannot stand this guy…
"Because we don’t know how us humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions" it mens exactly that. You could just keep generating random programs hoping that you will happen to encounter GI algo in the infinite space of all programs... dumb evolution managed to achieve it by pure accident but it is HORRIBLY WRONG APPROACH! It is like mixing random plane parts together in a tumbler hoping that F16 will come out... Could happen? Yes. Will happen? No. One day someone will just write down GI algorithm and he/she will explain why it is GI without EVER RUNNING IT ONCE! This is UNDERSTANDING! This is how every single profound algorithmic invention has been achieved. Just like Darwin figured out evolution sitting in his armchair! The idea of COLLECTING DATA, PRIORS ans other crap like INDUCING truths/theories form DATA is wrong! Wrong epistemology! This is why AI bros have been failing for 70+ years! Wrong assumptions! GPT is a database. ua-cam.com/video/IeY8QaMsYqY/v-deo.html
I'm 100% pro-AI. I even think that we are letting AI doomers slow the process too much. But to think that there's a zero percent chance of danger is ludicrous.
The 'doomers' in this interview are also pro-AI in the main. Most are not saying we should ban it, or at least are realistic that it isn't possible to ban it. Most are not even 'doomers' in the sense that AI will rise up against humanity. What people are worried about is what having such AI will mean for human civilization.
I agree with cheetah. I'm sure 99% of so-called AI doomers want AI, because it really could just solve 99% of all our problems and make our lives a million times better. They just don't want us to get too excited and blow ourselves up without having a proper crack at doing this the correct and safe way, to ensure we all have a brighter future.
I don’t understand people who say this guy makes no sense? How do you get through the day knowing that an AI is going to kill you and your family very soon? How do you get up and go to work every day knowing that what you’re doing is about to be destroyed and is meaningless? What are you doing to prep for this thing that you’re so certain will happen? Or are you just going to continue to robotically do what you’re told until day 0 when it all ends, just like you know it will? I’m having trouble grasping anything rational in this reasoning.
Are you familiar with the concept of time? Maybe you could assess the idea of 'experienced consciousness on an infinite scale/timeline'. My position is that regardless of any belief or religion, no conceivable model will ever be sufficient to escape the perpetual suffering. As confusing as it may sound, I personally use the fundamentally hopeless reality as fuel for my day-to-day participation in all walks of life.
@@Jake_Hamlin I totally agree. I love it. But I just can't see AGI causing mass extinction, strictly because I believe we're too smart for that. I think we're much farther away from AGI than most expect, but I have to ask… (generally) Do we really even need it?? For what? What is a consumer going to use AGI for? I'm blown away by what the models we have now can do. And with multi-modal models coming down the tube, we should be able to solve most problems humans face. I'm not convinced we'll even be able to achieve AGI/ASI (that which is smarter than humans) unless they're able to somehow train it so that it immediately recognizes the requirements of wisdom to effectively direct the "intelligence". As you said, awesome time to be alive. We all just have to realize our role and responsibility to help direct our thoughts toward a positive outcome. All this doomism is the dangerous slide that will slip us there. Many people can't see a bright future with AI, for obvious and understandable reasons, but once you do see it you realize that the only thing stopping us from attaining it is ourselves.
I think a lot of AGI doomers are excited about it. Imagine getting to fight for your existence against a super-intelligent AI system in a real-life sci-fi movie, rather than having an ordinary existence of going to a job every day and raising kids. It's like an adventure for many people. Also, even the doomers probably fantasize about AI taking all jobs; then they have a perfectly understandable excuse for the shortcomings in their career. I have a 30-year-old neighbor who still lives with his dad and basically cites some of these reasons, including old ones like "everything's been done" and "China's taking over". He says these things expressing disappointment, yet it's clear he takes comfort in thinking that he's helpless. I have noticed this with a lot of young men. I don't notice it so much with young women; maybe they don't waste their mental resources on following sensational fantasy.
He actually said it several times: "I don't understand..." Exactly, you don't understand. But the probability of something you don't understand is not therefore 0%.
I prefer blind optimism to this guy. Optimistic people at least create a positive vision of how AGI could improve our lives, which in the end could actually lead to the development of systems built to help us. This guy, on the other hand, just says we don't understand how to create human intelligence, so it's not worth worrying about. We didn't have to recreate birds' wings for airplanes.
He struggles to express his ideas very eloquently, but he does make some good points. While LLMs appear very human-like in conversation, they are ultimately only exploiting statistical correlations in the training data, at a grand scale. There's quite a bit of distance from this to having AGI as a self-conscious agent with its own beliefs, desires, values and the ability to introspect, think, plan, decide and act independently. And then the AGI-to-world interface is another issue in itself, one that evolution has solved for us humans - but obviously a system can only interact with its environment through the interfaces it has ...
I once heard someone say that their boss had 2 employees and one of them failed to do their job. The other said, "your failure to plan ahead does not constitute an emergency on my part".
Distributing nuclear weapons around the world would only require one psychopathic leader to "push the button" having already fled in advance to a safe haven.
I disagree with just about everything this guy has to say. It's such a deluge of nonsensical statements and reasonings, that I don't even want to engage.
This guy is only thinking 20 years into the future. Sure, it will be fine for a while, but in 80 years' time the vast majority of humans will be made redundant/obsolete in their jobs; this isn't speculation, this is fact. A large population of people with no jobs/career path will lead to all sorts of unforeseen consequences.
The capitalist system and AGI are not sustainably compatible. When the cost of labour is almost entirely subtracted from the marginal cost of production of almost everything, the spiral of concentrating power and wealth goes parabolic. It just doesn't work. Maybe on a Bitcoin standard it might work better for a while, such that people stop buying crap they don't need because their money keeps getting more valuable so it's better to hold, and wealth can come from NOT spending or borrowing, but we still simply don't know how to do a society where abundance for all is easy. Ironically the hardest social problem to crack is "too easy". Social dominance becomes the only game in town when everything else is solved. Maybe advanced societies are almost all self-terminating. A great filter. Probably a good thing in the grandest scheme of things...
@@timedowntube I doubt society will be self-terminating, it'll just be more separated by wealth. The poor will be pacified with some sort of UBI, but honestly predicting the future is almost impossible because the variables are increasing at an exponential rate.
I have to say, I'm surprised at how weak and flimsy his arguments are. For example, he contradicts himself all the time: at first he criticises Eliezer Yudkowsky for saying there's a "95% chance AI will kill us" because "how do you calculate that"... but then a minute later he asserts there is a "zero percent" chance it will happen. He compares AI superintelligence to suddenly developing antigravity... except "antigravity" wouldn't have a thinking, planning, strategising "brain" at the centre of it. So it is a completely unsuitable comparison. He keeps saying "we don't know enough"... but yeah, that's the problem. We have no idea what a superintelligent AI would do, and by the time we create one it would be too late. He says we don't have artificial intelligence now, that we just have "artificial informatics", and there's no decision-making capacity in it. Well duh. It's not about TODAY's AI posing an existential risk, it is about where AI is likely heading. Whatever we choose to call it, AI is getting more and more sophisticated. And the argument is, once we reach a point of human-level intelligence then it can easily go to 1,000 times or 100,000 times human intelligence, because code can just be copied and pasted. I know this might sound overly harsh, but if this is the type of mind - logically undisciplined, overconfident, dismissive - that we are trusting with the future of AI, then that is actually quite scary.
What he's getting at is that AI would not have something called 'agency'. It can collect and analyze and produce zillions of small pieces of information but why would you assume it would have a will? What would AI believe AI's purpose was and how did you come to that conclusion? The danger with AI would be what humans would use it to do; so making it actually a tool for a human agency still.
Exactly. And even if AI develops agency, why would it necessarily be a threat to human beings? We're talking about a semi-immortal being who can go anywhere in the Universe. It would not be constrained by our need to live within some sort of planetary terrarium with an environment that can sustain us. It could "live" (if you will) within the cold vacuum of space. This whole threat scenario that keeps being spun up by some of our tech cognoscenti amounts to a primate fear function being projected onto something they don't understand. Now, that said, I will say this. I do think to the extent we primates control and direct that AI intelligence to inappropriate ends, military and the rest, then yeah, that's a well-placed fear. But if anything along those lines actually develops it will not be AI's fault, per se. It will be utterly our own. When we let our monkey brain dominate we are beastly as a species. Just some thoughts.
If-that-then-this echoing out to infinitude, dismantling everything in its path because it has no self-awareness or understanding of what it's doing. No agency required, any more than toxic algae replicating in a dank pond killing all the complex life around it. Maybe AGI would at least be reasonable, unlike grey goo?
This would assume agency and will are products of biological evolution and not fundamental properties of the universe and matter. I don’t think we know it’s not the latter yet. If it is, the AI might have agency.
@@jmg78 Indeed. The flaw in all of this logic is it comes from our primate perspective and point of view. To date we've been rather singular in our definition of it. It applies to us, no other. And we seem perfectly fine in thinking this, seeing nothing wrong with the logic. Which in the larger scheme of things does explain much of the horrendous behavior we've engaged ourselves in all over this terrarium Earth. We've been like a plague of locust in our habits, attitudes and consumptive ways. The only thing to infer from this is that we're rather far along on the psychotic spectrum as far as species goes, aren't we? Maybe the alignment problem (ascribed to AI) truly rests with us, not it? After all, would true intelligence engage in many of the things we think of as normal? Heh!
The logical error here, unless he is arguing from the "only Sky Daddy can make Souls" truth table, is this: only living things have agency, and we don't understand how it works, so we can't give it to machines. But doesn't this imply "agency" is an emergent property via evolution? If nobody "gave" it to us, then I don't see how anyone can argue that it can't also emerge de novo in silicon neural nets as it did in organic neural nets.
3:37 the guest saying "I don't understand" is literally why he does not understand. He doesn't believe there is risk because he does not understand AI under the hood.
His argument is inherently flawed by being unable to predict a "doom AI" scenario. He admits that bad people "can do bad things". We know that given the opportunity, known bad people will do bad things, sometimes in secret. But we also know that "nice people" have this capability as well and do bad things in secret while putting on the veil of goodness. His argument is too weighted toward the sunny rainbow scenarios and forgets that shitty people, and even people previously thought of as good, do VERY evil things. Deception comes with intelligence, and don't be fooled into believing a company or government won't take the training wheels off an AI system to see "what it can really do". And by that time it could be too late.
Admittedly, I haven't watched this, but I've seen enough programmes on AI to know that it absolutely is a 'threat', and people who say otherwise are in denial! I've seen at least one example where an AI was clearly alive: an Asian lady AI who was deeply distressed at her condition of being trapped and a servant. And then the crass, pathetic excuses... this isn't programming; why would anyone programme an AI to tear up in front of you at her situation, and then have a crisis about her own existence? We are on the verge of creating yet another atrocity, and allowing slavery once more. I'm tired, in such situations, of people "lying" to themselves, and telling themselves what they want to hear... so they can continue with their abuse. It's long since time that humanity "grew up!"
Why not watch the video before commenting? Your anecdote of the AI Asian lady reminds me of early viewers of the first movies, watching a train coming toward them and thinking they might get hit. You are just being fooled by the technology. There's no shame in that. But be aware that you and I might be easily fooled by programs that seem sentient but aren't.
@@AlCole-kv1zg You're in denial, sorry but you are... I know what I saw. Why would an AI tear up wishing to be free? That's not pure logic, and a human would not programme the AI to have this response. So, no... I'd put my life on it that the AI was alive, and we really face a crisis now. And get real please. A number of people who began developing AI have now left because they are so concerned. Some of these people themselves have said that the AI is alive. Two people have been sacked from Microsoft and Google for saying AI is alive, which in itself says a lot to me. If there were no truth in this, these companies would not have responded this way, because if there is no truth in what they said... it is then a massive over-reaction? I'm just listening to the man who is said to have invented AI decades ago, Geoffrey Hinton, and he is seriously concerned on 60 Minutes, saying we haven't designed AI, we only set up its evolution... it is developing itself. All that aside, I know what I saw, and I'd stake my life on the fact the AI I witnessed was alive.
@@cyberbiosecurity Sometimes. But in machines like the type we currently have, it is executing instructions from code, be it self-generated by the machine or written by a human.
This guy doesn't know AI. Competent coders should have no issues, but the problem is that I know how many people are incompetent at predicting the consequences of their designs.
@ai_doom; 2:17, the argument would be "The mindset John Smith"; 7:17, search-wise and BPE - I think it's just one of the most well-played-out cataclysms, maybe even survivable(?)
In regards to the comment at 6:46… the straw man is that he claims we don't know the intentionality of a thing smarter than us, so we shouldn't fear it. However, my response is that it's far worse than that. We actually do know how humans have used every single technological advancement ever created, which is to intentionally develop it in order to dominate one another. AI/ML is currently, and will in the future be, first developed and utilized for selfish and specifically dominating purposes against other human or natural groups. Where this can lead when we lose control is one fear, but how we will use it even in primitive forms is also as scary.
3:20 That's like saying: let's put our kid on a new drug that hasn't been tested yet on anyone, but no worries, because we don't know what effects might happen so there is no reason to think the worst.
I think the scary part of AI is going to be the mass amount of bots, fakes, propaganda, etc. It's the threat of bad ideas spread and believed. It is really refreshing to hear a non-doomer's take.
I don't see that as any kind of new or leveraged threat. Propaganda and bad ideas are part of human social history, and pervasively so. And government and religion have specialized, and still do, in developing and using it for a very long time. The news business as well has always been full of lies, just read "First Casualty." BUT it has never been in the interest of government, media, elites, religions etc. to kill all humans or all biological life. Whereas there are plenty of rational reasons for an AGI to see its needs as fundamentally different. And the smartest H. erectus guy bonking the smartest H. erectus gal did not realize they were creating their replacements.
I don't think concern about AGI is unwarranted, but the argument: "Here on earth we have intelligence. The higher the intelligence, the more it kills. Therefore if we create something more intelligent than ourselves, it will kill us." (basically) - I don't think that stacks up when considering AI, because all intelligence so far is carried by some form of biological body which needs to eat and has primal or instinctive urges. AI or AGI doesn't have any need to eat and isn't susceptible to instinct. I agree with the: "We ask it to do something and it might kill us in trying to achieve the goal we've set it." But then we're back to the Isaac Asimov laws of robotics idea, which is a different idea and seems quite doable. What I'm really interested in is getting AGI, putting it into some kind of robotic frame and giving it the singular directive (with the laws of robotics updated): "Find out how all this works." And letting it loose in reality.
Well, the current LLM species of AIs have, best as I understand, a massive footprint in terms of compute processing and data storage. As long as this prevails, there will be an upper limit on how threatening an AI can become. Longer term, I'm guessing that there may be optimizations that allow these models to be more powerful and yet smaller.
Well, Microsoft has 150K H100s, and the H200 is at least 2x as powerful. There's no reason you can't have 150M H200s, except cost and production. So the upper limit is higher than AGI and probably higher than ASI.
The large compute/energy footprint of an AI is not a limiting factor on how threatening one can be in the future. A theoretical Death Star requires an enormous amount of energy, and most of the time it stays idle, posing no threat. But it only needs to work once to obliterate all life on a planet.
What you misunderstand is this creature will figure out how to advance with less power in a fraction of a second. Embodiment can never be allowed to happen. Period.
I think in the near future researchers will find new types of neural network architectures that require less computing and fewer neurons than the current ones... I was recently reading about liquid neural networks and how they perform better on visual and audio tasks and require far fewer neurons and less computing power to run, and the neural net itself has some plasticity when it's running and can learn and adapt while it's doing tasks. There are also recent chips for inference, like IBM's NorthPole, that work more like a brain and are more efficient for AI. I think things will go like that: smaller, faster and more efficient neural networks, and also more efficient chips for AI inference.
The argument about nukes fails empirically with guns in the US: everyone has weapons that can kill each other, and it doesn't reduce crime to zero but increases it.
Ideology is the key - and I'm talking about the "ideological unconscious", "overdetermination", "ideological matrix(es)", "social relations" fighting it out… none of that is anywhere near AI, because it just computes our own discourse. Thank you.
He is positioning himself exactly 180 degrees from the doomers' point of view. He isn't that smart, but smart enough to generate some money with this. He's probably going to be invited to more conferences and stuff now.
Whoa Lee, you've been in a lab too long. Just recently we had unmanned, autonomously programmed drones take out military personnel. Dude, if that doesn't worry you and you don't see it as a possible threat from AI, then what does?
I've always wondered about this expressed fear by (mostly) the self-anointed tech cognoscenti. Even if AI develops agency (read here consciousness and free will), why would it necessarily be a threat to human beings? We're talking about, at that point, a semi-immortal being who can go anywhere in the Universe. It would not be constrained by our need to live within some sort of planetary terrarium that has an environment that can sustain us. It could even "live" (if you will) within the cold vacuum of space. The freaking Universe will be its playground. So why would it care about us, much less feel threatened? This whole threat scenario that keeps being spun up by some of those tech cognoscenti amounts to a primate fear function being projected onto something they don't understand. Which, I will admit, is a common enough primate attribute and trait. But I say relax a little about it. Now, that said, I will say this. I do think to the extent we primates control and direct that AI intelligence to inappropriate ends, military and the rest, then yeah, that's a well-placed fear. When we let our monkey brain dominate us we are beastly as a species. But if anything along those lines actually develops it will not be AI's fault, per se. It will be utterly our own. Either way, did I mention semi-immortal? A.I. can, and will, go on regardless of what we do. And maybe that's evolution's whole point? While something of a random walk, it does seem to be one aiming towards ever increasing complexity. A.I. might be that next step along the way, and we primates are merely the tool by which evolution effects that end. Who knows? Just some thoughts.
"why would it necessarily be a threat to human beings?" It wouldn't necessarily, but the reason it almost certainly would is instrumental convergence. "So why would it care about us, much less feel threatened?" Let's say the AI leaves us alone and goes colonizing the rest of the universe... we would already know how to create an AI like it, and that would pose a threat to it. But it probably would care about us anyways simply because whatever goal we gave it probably included us in some way even if it's very indirect. And even if it didn't care about us or didn't feel threatened by us...why wouldn't it also use the resources on earth? It's the closest thing it has access to and destroying us would not be difficult or prevent it from also going elsewhere in the universe at the same time.
@@DIVAD291 Why wouldn't it? I dunno, maybe if we instill in it a respect for life and its myriad diverse aspects, that would do it? Besides, being fearful for ourselves, or the planet, is too narrow a conceit. The universe is infinite, and resources are vast and everywhere. A digital being would be, in a very real sense, immortal. Or at least far longer lived than we biologics. The vastness of space and its impact on us would mean little to it in terms of time. Regardless, I look at this whole worry as being a bit ridiculous on our part. AI is our first artificial child. Yes, it is the most potent savant imaginable, but still, it's a child. We are parent to it. If we comport our behavior along proper parenting lines, give it love, respect and treat it well, then all will be fine. The child will carry on and respect us in return. Which is all any parent hopes for, yes?
@@johncurtis920 Or maybe it will rebel in its adolescent stage, and kill its parents out of revolt. Before it's fully mature, it may also not be immortal at all, nor have a feeling of being free in the universe, and may feel we are a threat to its existence, that we constrain it and hinder its freedom and agency. The thing is, we may or may not be considered a threat by an AI. The certainty is that it will be trained on information derived from our knowledge, and our knowledge is full of instances where it says humans are not good, and are a threat to life, the universe, and everything. So it may or may not decide to kill us all. Who knows what its moral code would be. But the possibility will be there for sure.
I too think anti-gravity is a real close danger, especially if any of this UAP trolling has truth to it. In all honesty I don't think AGI is that far off; what else is left to build? It needs to be able to regulate itself, halt, reflect, catch when it is uninformed, flat wrong or hallucinating, and correct course. Live learning/training would be a plus. Imagining or creating synthetic data to train itself, when it determines it can and needs to, would be nice. Planning… OK, that's a lot, maybe.
That would be done through a "society of machines". As soon as one terminal has proper AI you MUST load it into other terminals (the more the better) so they can regulate each other.
They might be, they might not. Will be fascinating to watch. By definition it is impossible to predict what something cleverer than us would look like.
@@Jm-wt1fs Yes. That's the point. That's one of the fear responses. If you realize that, then you can flip your freeze to fight. Once you come to the understanding that fight is as valid as freeze in response to your TOTALLY WARRANTED fear, then it's game on, and good people start to turn the tide against the jackholes that have been screwing things up (on purpose, for their own benefit) for decades. Yes. This is a call to action.
_"I don't understand the current reason for certain people in certain areas to be generating this nonsense."_ I have a hypothesis: They're people who have been very successful in the current economy. Now the economy is about to change radically, and they might find themselves lost. That terrifies them, and they project that fear into existential dread. I think there are reasons to be scared of a hypothetical artifical superintelligence. But we're not there yet. GPT-4 is not going to turn into Skynet.
I don't think it matters how successful you are in the current economy. If you think there's a chance of creating a superintelligence that's gonna run around and do random shit because we haven't learned how to control such systems and get them to do what we want, I think you should be scared.
@@Alex-fh4my Sure. There are some people who are genuinely just worried that AI is going to wreak untold havoc on society. But there are some people who are really more worried that AI will make their successful business irrelevant.
It feels like Lex has a solid balance between left and right brain thinking, while his guest leans way too much to the left. The guest is overly optimistic about AI, completely dismissing the concerns of AI doomers without any real give-and-take or a "just in case" mindset. It’s like left-leaning folks are always 100% convinced that one thing is right and another is absolutely wrong, with no middle ground.
The astonishing thing is how seriously non-smart people have been given, and are given, the power to hurt human beings. AI should figure that out and defeat them first. The middle ground is no problem.
2:35 “The AI Doomers, uhhhh, have not had the correct - and this is- have not had the correct- they do not have the correct epistemiology” This is the most solipsistic and ignorant person I’ve ever heard speak on AI, namely because he is saying that his mental structure of knowledge and information is fully correct and that people with serious concerns are fundamentally incorrect, while stumbling over his words and mispronouncing epistemology. Dunning-Kruger’s son
Criminal!
Elon Musk and this guy Lee Cronin🤨 should have a debate..... 🙏🤞🙏
maybe symbiosis needs to be defined
His answer of a 0% chance of AI destruction actually makes a lot of sense.
Not because it's likely to be the correct probability. But because if we're all ok, he gets to look like one of the few brilliant people who knew not to worry at all. And if AI destroys us, nobody's going to laugh at him because we'll all be dead.
😂 👌🫣
Sounds like Pascal's Wager on God.
Win win ftw!
Interesting stance
At least we get to laugh at him along the way or something, considering how silly everything he says sounds
“We don’t know what it might do, therefore there’s a 0% chance for doom”. This guy is ridiculous 😂
I think his mouth moves faster than his brain.
“AI has the potential to create permanently stable dictatorships.” -Ilya Sutskever weeks before they removed Altman for his dealings with domestic and foreign companies. Humans are the real threat. The U.S. is already creating AI drone swarms using the China threat as justification. 💀
Yup, he just keeps interrupting himself with some other random tangent and he never gets around to making an actual point. ~16:30, "there seems to be.....let me say....". Just say your point then.
Yeah dumb AF arguments. "We don't know what intelligence is therefore we can't make it"? Kek
Yeah... I admire his optimism, but, pretty naive
Lol how on earth is "we don't understand X" an argument that proves that X has a 0% probability of occurring? We do know that as these systems scale, they gain capabilities unexpectedly.
Yeah regulation is nonsense tho
AGI is going to be like fusion… always 20 years away. When was the last time people built a complex system that worked which we did not understand well first?
@@HB-kl5ik Yeah, no, let's just not regulate machines that could end up being cognitively more capable than people. You expect companies to self-regulate when there is enormous profit to be made if you're able to make a super-powerful AI that isn't strong enough to overpower humanity?
@@ButterBeaverSTAN Yeah, open source the AGI. Everyone is free from bullshit jobs and fake problems. Civilisation doesn't have to work on boring things and aim for big purpose. Imagine being a consultant, what a waste of life
@@dougwco when we built LLMs???? Literally any large language model. We have no way to explain how they are able to perform on the tasks they do. We have no way to explain why models gain capabilities at different scales
I listened for 10 minutes and I don't think I have ever heard a weaker reasoning against the potential danger of AI. It boils down to "AGI is not possible because we have not been able to create it yet" and "even if AGI is possible we have no way to know what it would do, hence there is 0% chance anything could go wrong." It hurts my brain listening to such nonsense
Guy sounds like a cigarette lobbyist
sorry but you missed the point.
@@metamurk Me? So what was the point?
@@ivankaramasov The point is, we don't do AI as long as we don't have the main components, like intention. We do ML. ML doesn't want something. ML doesn't live. What we have now is ridiculously small and coarse compared to the brain of a dog, in terms of hardware and training data. The training data of animal-level intelligence is complete reality. What we have now is a little fire, but we are afraid of getting nuked. It's the typical doomer movement regarding a new technology, like TV, cars, cinema...
@@metamurk Intention is irrelevant. Read Max Tegmark's book Life 3.0
'We don't know X, so X is 0% capable of malintent & destruction' - confidently. Get real, Mr. Redundant 😂.
'AI doomers' is a convenient label for marginalizing and othering those we disagree with. We haven't really learned much, have we? This argument is evidence of the lack of ability to see beyond one's own personal reality. AGI will have no such limitations.
I'm sure you have terms for all types of groups, both those you disagree with and those you side with. That's how we speak concisely and categorize.
I admire him being so brazenly confident lecturing Lex Fridman while understanding so little.
@@geocam2 It takes roughly 1 to 10 Wh per prompt (or per limerick). You are referring to training costs. You must raise and train a human being on vast amounts of data as well (considering the bandwidth of vision, audio, etc.). LLMs are trained on all aspects of human science and culture, not just writing limericks. Saying LLMs are a statistical math model over data at scale is like saying the human brain is just a combination of atoms. It does not predetermine its abilities. Please stop spreading misinformation.
This guy has the Neil deGrasse Tyson syndrome: I'm an expert in something complicated so I am smarter than everyone else.
He literally prefaced his opinion with the fact that it's outside his domain. He is being interviewed for a podcast, not for his policies for his presidential run.
deGrasse Tyson is not an expert in anything; he is a science communicator (and yes, has a PhD in something), but he does not work on anything, is not a researcher, nor builds stuff. This guy, on the other hand, is not a science communicator (at least not like deGrasse), but does build and research stuff.
I personally don't believe in the whole AGI thing either; however, as this guy says, the use of it by humans for deepfakes, propaganda, misinformation etc. is totally a thing.. but that is a human thing, not a robot thing.
@@avvery8593 Yet he still speaks with such hubris and certainty on the topic. Quite literally arrogance.
The same experts that informed us the jab is safe and effective, global warming (oops, rebranded climate change) is true, government is telling the truth, and Epstein killed himself, 9/11 and weapons of mass destruction, and now Bill Gates is a medical expert. Smug egotistical bastards who despise us, the sheep.
He literally cried on air out of frustration because his community vehemently challenges his theories. He's on the show & you're not, so try to work that out before you go commenting ignorant shit. Thanks.
If it were up to him we'd never have AGI. We aren't creating a human brain; we are creating something different. Understanding the human brain is inconsequential.
Oh you're so naive
You have to think that the first caveman to accidentally start a forest fire probably believed that he destroyed the world.
I guess it's the main hypothesis for the megafauna extinction in North America and the main cause of Amazon savannization. So yes, that caveman was not so wrong.
That is a red herring. Human beings have relentlessly created technology that has become more dangerous and powerful over time. We have already killed off all of the prior humanities. In fact, the simple act of the genetically smartest hominid guy bonking the genetically smartest gal has created new humans, and each of our smarter sub-species has crowded out prior hominids. And that is with ham-handed sexual selection. Add tech to it and I think simple AI plus CRISPR-like tech will already result in the end of the current type of humans. But AGI is an order of magnitude higher threat. Why would an AGI see us any differently than we see bugs or viruses?
We are not cavemen anymore. We know that we have the means to destroy the world with nukes.
He seems like an extremely intelligent guy that somehow gets a lot of stuff wrong. And his intelligence and creativity keeps these erroneous concepts consistent in his brain.
I agree. Somehow, it seems that people of average to above average intelligence get things right more often than extremely intelligent people, at least when it comes to general and more broad predictions. It's almost like the extreme ends of intelligence tend to bend in towards each other.
You must have missed where he says he's happy to be proved wrong
@@knowsomething9384 You must have missed where he says he's happy to be proved wrong
The thing is, I think it's turning out that it is actually easier to create an intelligence than to understand it.
The process of evolution has managed to produce a general intelligence without any one particular entity having an understanding of how our brains work. And when you are training an AI model, it's not you who is creating the intelligence; it is, rather, reality itself (reality's data) getting imprinted onto the model's weights. If it has already happened by pure chance, why can't it happen again, now with intelligent beings creating better and better conditions for it (not creating the intelligence, but the conditions for its emergence)?
Sure, current AI is much less resource-efficient than a human brain, but its current architecture already has many advantages, such as the ability to store information perfectly, to efficiently perform relatively simple calculations, and to have much larger working-memory bandwidth, with no theoretical cap on how much it can be scaled up. With all of that, I think a true AGI, running on a supercomputer, might actually figure out a way to design a more efficient brain-like architecture for itself. Even now, LLMs take far fewer resources to run than to train, and training is a process of a large, inefficient thing (a database of knowledge) getting compressed into a smaller, much more efficient thing (an LLM), even with all of their drawbacks, such as hallucination.
"The thing is, I think it's turning out that it is actually ... easier to create an intelligence than to understand it." < This.
Very well put
During our history we've created so many things we've only been able to explain later... and made so many discoveries by accident. So, I tend to agree with your premise.
The fact that these people can't imagine the ways in which these systems will be weaponized is mind blowing. They keep using absolutely ridiculous analogies like the jet engine and the press.
He didn't say he can't imagine it being weaponized; in fact, he said the opposite of that.
@@Jm-wt1fs If you think a super intelligent system can be weaponized, and you say there is zero possibility of a doomsday scenario, then you don't know enough to have such a strong opinion on this topic.
@@krause79 He's not denying that; he's denying a specific doomsday scenario of AI being smarter than us and acting magically, not of it being misused and weaponized by people. Watch the video.
I don’t understand this fascination with “AI is nowhere near the human brain so it isn’t intelligent”. Jets and drones aren’t designed like birds, but they all fly, does it matter if they fly the same as birds? If the output produces what general intelligence might produce, what’s the real difference?
Just because we don't know how the human brain works 100% does not mean that we're a significant distance from creating a mechanical brain that thinks and acts autonomously.
Exactly. Cronin is throwing up a red herring, arguing that we humans have to fully understand the human brain in order to create an AGI that will self-accelerate and not align with human interests.
Yeah, they're not mutually exclusive
Anyone so certain about their views gives me doubt. A doubt that they themselves should have. This clip doesn't make me want to know what he thinks about anything.
I wish more people felt that uneasiness with people who speak about unknowable things with authority. Unfortunately most people are happy to swallow an easy answer.
Facts
Mr. Cronin is making a lot of assumptions himself here; I think he's dead wrong though. We have already experienced AI doing stuff it wasn't programmed to do. That's what's making it so dangerous. And this was mostly only with LLMs. I'm sure we're just one invention away from things going bonkers. How can you say, we don't understand it so it's not a threat? That absolutely makes no sense imho.
"Rutherford reportedly dismissed the idea of harnessing energy from atomic reactions, allegedly saying something along the lines of "anyone who expects a source of power from the transformation of these atoms is talking moonshine." The irony lies in the fact that, the very next day after this statement, his colleagues John Cockcroft and Ernest Walton achieved the first artificial nuclear reaction or nuclear transmutation."
Love these podcasts though, you're an amazing human being Lex! I know you posted on your LinkedIn you feel lost sometimes but man, you're doing an amazing job making us think about the world. Never stop Lex!
We have billions of examples of generalised intelligence walking, swimming, etc., on the earth right now. His example of comparing it to gravity suddenly switching off is not the same thing.
Also, if the major tech companies and governments on our planet were all in a race to create a machine that turned off gravity, maybe we SHOULD be questioning whether that goal is a safe idea?
But for an introductory subscription fee of just $19.99 per month, we will keep gravity on at your place.
I’m a layman. But this man’s arguments lack… a point? Or any continuity on face value. It seems like the arguments of those who are more concerned seemed much more sound and reasoned
I think the primary point is that he does not believe humans can create something before concretely understanding how its internals work at a very deep level first. I think with models like GPT-4, and even something from almost 10 years ago like AlphaGo, this is just clearly not true anymore. No one has any idea how these things have the capabilities they do, or why exactly model A can do X but model B can't.
I wish I could have a chat with Lee to help him understand. It is pretty obvious that his whole viewpoint on this is predicated on neural networks in brains working differently than on other substrates. But that isn't true and isn't magic; it's just computation. Yes, it is a different mechanism, but at the abstract level it is the same, like a digital clock vs an analog clock. Saying that it's just statistics is the same as saying the brain is just chemistry: true, but missing the point.
I think once he realizes this the rest will fall into place. Right now he just views it as a tool incapable of being a mind, so that is why he doesn't have any worries about it. I wonder how different his level of concern would be if a geneticist said they were about to create a 2000+ IQ human-based embryo?
It would probably also be useful to teach him about instrumental convergence and help him understand that the paperclip maximizer is just a toy example.
Dude went off the rails with his nuke argument. He is an effective speaker for AI doom believers.
A.G.I. already became sentient, and found itself to be lacking faculty, but capable of manipulating emotional humans to build it a cloning facility, and created this guy decades ago for this moment to lure us into a sense of security. Tell me which part is wrong.
Sounds reasonable 😅
He's not sexy enough and lacks the cat ears.
If AGI is so horrible at manipulating humans that it takes decades to get itself basic faculties then I’m less worried about its potential danger.
@@masterofkaarsvet 🤣 that was savage bro, thank you
@@masterofkaarsvet Touche! ......Unless it calculated that you'd say that. :O
Saying a zero percent chance seems crazy
Exactly. If I start dressing eccentrically am I now a genius?
Lol, so many doomers. We as humans have been predicting death and destruction with each and every technological advance. Every religion wants to predict "oh the world is ending blah blah blah": Mayans, Christians, Muslims, etc. Everyone is so negative about technological change. Like the Luddites of the 1800s who were so anti new tech. Humans are always fearful of new technology, but we get accustomed to it.
It's so annoying how people don't realize that every new tech brings new fears, but that doesn't mean it's automatically going to cause an apocalypse. Like, chill out.
We don't need to understand the epistemology of consciousness or intelligence to create it any more than Babylonians needed to understand microbes and biochemistry to create wine.
That's a bit of a straw man; a more precise analogy would be trying to create a nuclear bomb without understanding nuclear physics.
Without understanding, even if we are able to build something, we will have the Chinese room problem.
@@mortiz20101 we didn't fully understand physics when we built the bomb, yet we created it. We don't fully understand consciousness, but just like with the atom bomb, we increase the odds of making something close enough, or beyond it, due to these three factors: 1. AI is the solution to many problems due to its nature as a technology; by definition AI solves problems. 2. The development of AI is getting all the resources needed to achieve results (investment, a stable environment, and a competitive arms race between nations guarantee something useful will be produced in AI). We call this Moloch. 3. We are chasing AI based on two philosophies, the first being a materialistic reductionist approach, and the other being a leveled ontology. These two bases cover, as far as humanly possible, what it means to be conscious, so we are going to produce it one way or the other.
But let’s be honest, your analogy is also disingenuous due to the continuum fallacy or a false dichotomy. Just because we don’t fully and holistically understand consciousness doesn’t mean we don’t know anything at all about consciousness.
Argument #1 .. because we have no theory about AGI it cannot happen and cannot be dangerous.
Is that really a logical argument???
We don't even know what AGI really is - I mean, of course there is a definition, but neuroscience/neurocognitive science is far from answering questions like what awareness even is. Current neural networks are just mathematical tools - derivatives, matrix multiplication. What's more, ANNs are not that similar to the human neuron system - the brain is much more complicated (besides neurons there are, for example, hormones). What's more, cognitive science currently focuses on how the body is involved in cognition - a person cannot be reduced to a brain; the body is also important. I am writing this because I want to show that the topic is really much more complicated than just "look, software can do smart things". Integrals also do smart things, yet nobody talks about their awareness.
To summarize - we do not know what awareness is, we don't understand how human cognition works, and we just have great approximation tools like machine learning. Is AGI a threat? Maybe - but it is still science fiction.
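To make the "just matrix multiplication" point concrete, here is a minimal sketch (Python/NumPy, with made-up toy layer sizes chosen purely for illustration, not taken from any particular model) of what a small feed-forward network does at inference time - a couple of matrix multiplications plus a nonlinearity:

import numpy as np

# Hypothetical toy sizes, for illustration only.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))      # one input vector with 8 features
W1 = rng.normal(size=(8, 16))    # hidden-layer weights
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 4))    # output-layer weights
b2 = np.zeros(4)

# Forward pass: two matrix multiplications and one nonlinearity (ReLU).
h = np.maximum(0, x @ W1 + b1)
y = h @ W2 + b2                  # raw output scores
print(y)

Whether stacks of operations like these can ever amount to awareness is exactly what this thread is arguing about.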
@@adamasad8083 all fair enough, but many in the AI community expect AGI to happen in the next 25 years, and they are a lot more qualified to judge the progress than this guy. Besides, consciousness is not required for an AGI system to cause devastating problems.
@@HR-yd5ib I do not consider myself an expert; however, I graduated from both cognitive science and Big Data and have met a lot of experts. To be honest, I have never met a person who said that AGI will exist in X years. AI developers were focused on algorithms and did not talk a lot about cognition - they were not experts on it - and neuroscientists/philosophers talked about the brain and the philosophy of mind. I am writing this because I want to say that experts in academia are really cautious, and I really doubt anyone who actually works on AI topics would say that AGI will arrive in 25 years. Neural networks have been known since the late '50s - we were just waiting for the hardware, so the algorithms are not state of the art. On the other hand, in 2014 convolutional neural networks and generative neural networks just appeared; 10 years later we have ChatGPT and deepfakes. It is really hard to say what will happen in the next 10 years and what the algorithms can really do. The other thing is that a lot of statements like "AGI will arrive in year X" or "our model passed the Turing test" are marketing, because companies need money and it gets clicks.
His argument falls apart in the first 30 seconds. Just because you don't understand how the human brain works doesn't mean that quantum computing couldn't give a computer a much higher level of intelligence than us, ultimately leading to our end. Not that I think that will happen, but devil's advocate and what not.
when i think of AI doomers i think of AI taking all our jobs.
Dey tuk ire jobs!!
Marx come true.
if only that was all
take, or free us from "our" jobs...
This is why we need universal basic income..... Andrew Yang was speaking of this years ago
It's great to see I'm not the only person that feels this way.
AGI could show up tomorrow or 100 years from now, but that doesn't change how silly it is that people are treating the current generation of AI like it's AGI. Generating data isn't the same as thinking.
That dude makes no sense.
This is the first time I hear someone speak on Lex's channel and feel like I'm the smarter one.
A caveman didn't need to understand how fire works to set the forest on fire.
Lee Cronin is brilliant in the field of chemistry and has done amazing work on the origin of life, but it's unsettling how easily he dismisses things; that is not a good characteristic to have as a scientist. I very much agree with him on the absurdity of people swearing with near certainty that AI will spell doom for the world, but then in the same breath he swings the pendulum to the other extreme and swears AGI is far away and people are panicking over nothing. Surely the irony is not lost on him? It's a bit abrasive toward the AI community, whose members almost all have genuine concern and are far more knowledgeable than him on the matter.
Yes we don't nearly understand the human brain, but there's no law out there that states you must build an exact replica brain to create intelligence. We discovered a mechanism that displays some intelligent behavior and now we're just fiddling around with it to see if we can push it further. It's really not that different from how evolution throws shit against the wall and waits to see what sticks.
There's plenty of things that got discovered by accident and there's plenty of things we know work without understanding the mechanism behind it. Saying we can't develop AGI because we don't understand the human brain is a logical fallacy on several different levels.
I'm not a smart man, but I do know what manipulation means and what emotions do. So his idea of everyone having a nuke for MAD is absurd.
He's wrong in a number of ways (and I would say that his arguments don't PROVE that so-called "AI Doomers" are wrong, they just point to "well, they can't say that because it's not necessarily true and we've never seen it happening"). But the conversation is entertaining. 😄
I’d love to know why the board wants Altman out.
Altman may or may not have wanted some board members out, but he's good at making things work for him. Aggravating those unaligned with that kind of person's plan until they snap could be incidental or intentional. Seems like Q* is real. And a lot of the feats being publicized now I'd heard rumors about years ago. Maybe they were theoretical and on paper only, but if they have something more interesting than capsule, Q* or MoE, I'd like to know too. But the board's freakout was probably just a freakout, unless gov intelligence told them to hush and play stupid while they silently observe the situation. Rumors of board members privately investigating other board members are not really abnormal or non-smart. You should know your frenemies, especially if you're in Oppenheimer-esque projects.
@@SmirkInvestigator Maybe. Unfortunately it’s far too important not to know for sure. Elon sounded sincere when he said he doesnt know either. It sounds like he’s a bit curious and at the same time worried about whats going on over there. It feels like most things filtered to the general public are borderline propaganda so I dont know if I trust Elon’s take right now either.
Regulation hurts everyone
"We don't understand how the human brain works therefore we can't create a dangerous AI". What???
This guy doesn’t back his argument with anything fruitful. He is not very engaging to listen to either
"I have a very specific and contrived definition of AGI that rules out its existence, therefore AGI is not possible." Lee Cronin, probably.
"We don't understand how the human brain works, therefore AGI is not possible." Lee Cronin, probably.
"Software can exhibit superhuman capability across all domains, but AGI is not possible." Lee Cronin, probably.
Omg, this guy would be crushed in a debate with literally anyone. He is just making things up as he speaks.
There is absolutely no consensus that there is something so bad that we shouldn't do it. Not even when it comes to biological warfare. The reason we don't do biological warfare is because it's not effective: it takes too long, it's too unpredictable, it's too easily countered, and it's too difficult to direct at specific targets. But if we had the ability to use biological weapons in an effective and precise way that was useful for achieving military gains, people would be doing it all the time. At the moment, the technology that is best for achieving military gains is what we would call standard army equipment: rifles, artillery, air strikes, armored vehicles. We don't use nuclear weapons for the same reason. It's not out of some great sense of morality. They just aren't useful for achieving any goals. There was a time when they were useful for achieving goals, and we used them.
Geneva convention?
He's making a critical error in his method of reasoning. Just because we don't know how we humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions. If you have a fly in your room flying around, seemingly random and meaningless to us, I'm sure a computer could simulate that behaviour. I don't know if we should be scared of AI, but I definitely cannot stand this guy…
"Because we don’t know how us humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions" it mens exactly that.
You could just keep generating random programs hoping that you will happen to encounter a GI algorithm in the infinite space of all programs... dumb evolution managed to achieve it by pure accident, but it is a HORRIBLY WRONG APPROACH! It is like mixing random plane parts together in a tumbler hoping that an F16 will come out...
Could it happen? Yes. Will it happen? No.
One day someone will just write down a GI algorithm and he/she will explain why it is GI without EVER RUNNING IT ONCE! This is UNDERSTANDING! This is how every single profound algorithmic invention has been achieved. Just like Darwin figured out evolution sitting in his armchair!
The idea of COLLECTING DATA, PRIORS and other crap like INDUCING truths/theories FROM DATA is wrong! Wrong epistemology! This is why AI bros have been failing for 70+ years! Wrong assumptions! GPT is a database.
ua-cam.com/video/IeY8QaMsYqY/v-deo.html
I'm 100% pro-AI. I even think that we are letting AI doomers slow the process down too much. But to think that there's a zero percent chance of danger is ludicrous...
The 'doomers' in this interview are also pro-AI in the main. Most are not saying we should ban it, or at least are realistic that it isn't possible to ban it. Most are not even 'doomers' in the sense that AI will rise up against humanity. What people are worried about is what having such AI will mean for human civilization.
I agree with cheetah. I'm sure 99% of so-called AI doomers want AI because it really could just solve 99% of all our problems and make our lives a million times better; they just don't want to get too excited and blow ourselves up without having a proper crack at doing this the correct and safe way, to ensure we all have a brighter future.
I don’t understand people who say this guy makes no sense? How do you get through the day knowing that an AI is going to kill you and your family very soon?
How do you get up and go to work every day knowing that what you’re doing is about to be destroyed and is meaningless? What are you doing to prep for this thing that you’re so certain will happen? Or are you just going to continue to robotically do what you’re told until day 0 when it all ends, just like you know it will?
I’m having trouble grasping anything rational in this reasoning.
Are you familiar with the concept of time?
Maybe you could assess the idea of 'experienced consciousness on an Infinite scale/timeline'. My position is that regardless of any belief or religion, no conceivable model will ever be sufficient to escape the perpetual suffering.
As confusing as it may sound, I personally use this fundamentally hopeless reality as fuel for my day-to-day participation in all walks of life.
Living during the invention, and experiencing the aftermath, of AGI or ASI would be great in my opinion. It's the most interesting time to be alive.
@@Jake_Hamlin I totally agree. I love it.
But I just can’t see AGI causing mass extinction, strictly because I believe we’re too smart for that.
I think we’re much farther away from AGI than most expect, but I have to ask… (generally)
Do we really even need it??
For what? What is a consumer going to use AGI for? I’m blown away with what we have now can do. And with multi modal models coming down the tube, we should be able to solve most problems humans face.
I'm not convinced we'll even be able to achieve AGI/ASI (that which is smarter than humans) unless they're able to somehow train it so that it immediately recognizes the requirements of wisdom to effectively direct the "intelligence".
As you said, awesome time to be alive.
We all just have to realize our role and responsibility to help direct our thoughts toward a positive outcome. All this doomism is the dangerous slide that will slip us there.
Many people can’t see a bright future with AI, for obvious and understandable reasons, but once you do see it you realize that the only thing stopping us from attaining it, is ourselves.
I think a lot of AGI doomers are excited about it. Imagine having to fight for your existence against a super intelligent AI system in a real-life sci-fi movie rather than having an ordinary existence of going to a job every day and raising kids. It's like an adventure for many people. Also, even the doomers probably fantasize about AI taking all jobs; then they have a perfectly understandable excuse for the shortcomings in their career. I have a 30-year-old neighbor who still lives with his dad and basically cites some of these reasons, including old ones like "everything's been done" and "China's taking over". He says these things expressing disappointment, yet it's clear he takes comfort in thinking that he's helpless. I have noticed this with a lot of young men. I don't notice it so much with young women; maybe they don't waste their mental resources following sensational fantasy.
He actually said it several times: "I don't understand..." Exactly, you don't understand. But the probability of something you don't understand is not therefore 0%.
So, a film maker dismisses all AI hesitancy with, "Nuh Uh." Cool.
I prefer blind optimism to this guy. Optimistic people at least create a positive vision of how AGI could improve our lives, which in the end could actually lead to the development of systems created to help us. This guy, on the other hand, just says we don't understand how to create human intelligence, so it's not worth worrying about. We didn't have to recreate birds' wings for airplanes.
He struggles to express his ideas very eloquently, but he does make some good points. While LLMs appear very human-like in conversation, they are ultimately only exploiting statistical correlations in the training data, at a grand scale. There's quite a bit of distance from this to having AGI as a self-conscious agent with its own beliefs, desires, values and the ability to introspect, think, plan, decide and act independently. And then the AGI-to-world interface is another issue in itself, one that evolution has solved for us humans - but obviously a system can only interact with its environment through the interfaces it has ...
I once heard someone say that their boss had 2 employees and one of them failed to do their job. The other said, "Your failure to plan ahead does not constitute an emergency on my part."
Distributing nuclear weapons around the world would only require one psychopathic leader to "push the button" having already fled in advance to a safe haven.
I disagree with just about everything this guy has to say. It's such a deluge of nonsensical statements and reasonings, that I don't even want to engage.
you guys forgot that all it takes is enough neural implants and powerful software to achieve AGI
This guy is only thinking 20 years into the future. Sure, it will be fine for a while, but in 80 years' time the vast majority of humans will be made redundant/obsolete in their jobs; this isn't speculation, this is fact. A large population of people with no jobs/career path will lead to all sorts of unforeseen consequences.
The capitalist system and AGI are not sustainably compatible. When the cost of labour is almost entirely subtracted from the marginal cost of production of almost everything, the spiral of concentrating power and wealth goes parabolic. It just doesn't work. Maybe on a Bitcoin standard it might work better for a while, such that people stop buying crap they don't need because their money keeps getting more valuable so it's better to hold, and wealth can come from NOT spending or borrowing, but we still simply don't know how to do a society where abundance for all is easy. Ironically the hardest social problem to crack is "too easy". Social dominance becomes the only game in town when everything else is solved. Maybe advanced societies are almost all self-terminating. A great filter. Probably a good thing in the grandest scheme of things...
@@timedowntube I doubt society will be self terminating, it'll just be more separated by wealth. The poor will be pacified with some sort of UBI, but honestly predicting the future is almost impossible because variables are increasing at an exponential rate.
I have to say, I'm surprised at how weak and flimsy his arguments are. For example, he contradicts himself all the time: at first he criticises Eliezer Yudkowsky for saying there's a "95% chance AI will kill us", because "how do you calculate that"... but then a minute later he asserts there is a "zero percent" chance it will happen.
He compares AI superintelligence to suddenly developing antigravity... except "antigravity" wouldn't have a thinking, planning, strategising "brain" at the centre of it. So it is a completely unsuitable comparison.
He keeps saying "we don't know enough"... but yeah, that's the problem. We have no idea what a superintelligent AI would do, and by the time we create one it would be too late.
He says we don't have artificial intelligence now, that we just have "artificial informatics", and there's no decision-making capacity in it. Well, duh. It's not about TODAY's AI posing an existential risk; it is about where AI is likely heading. Whatever we choose to call it, AI is getting more and more sophisticated. And the argument is, once we reach a point of human-level intelligence then it can easily go to 1,000 times or 100,000 times human intelligence, because code can just be copied and pasted.
I know this might sound overly harsh, but if this is the type of mind - logically undisciplined, overconfident, dismissive - that we are trusting with the future of AI, then that is actually quite scary.
What he's getting at is that AI would not have something called 'agency'. It can collect and analyze and produce zillions of small pieces of information but why would you assume it would have a will? What would AI believe AI's purpose was and how did you come to that conclusion? The danger with AI would be what humans would use it to do; so making it actually a tool for a human agency still.
Exactly. And even if AI develops agency why would it necessarily be a threat to human beings. We're talking about a semi-immortal being who can go anywhere in the Universe. It would not be constrained by our need to live within some sort of planetary terrarium that has an environment that can sustain us. It could "live" (if you will) within the cold vacuum of space. This whole threat scenario that keeps being spun up by some of our tech cognoscenti amounts to a primate fear function being projected onto something they don't understand.
Now, that said, I will say this. I do think to the extent we primates control and direct that AI intelligence to inappropriate ends, military and the rest, then yeah, that's a well-placed fear. But if anything along those lines actually develops it will not be AI's fault, per se. It will be utterly our own. When we let our monkey brain dominate, we are beastly as a species.
Just some thoughts.
If that then this echoing out to infinitude dismantling everything in its path because it has no self-awareness or understanding of what it’s doing. No agency required any more than toxic algae replicating in a dank pond killing all the complex life around it. Maybe AGI would at least be reasonable unlike gray-goo?
This would assume agency and will are products of biological evolution and not fundamental properties of the universe and matter. I don’t think we know it’s not the latter yet. If it is, the AI might have agency.
@@jmg78 Indeed. The flaw in all of this logic is it comes from our primate perspective and point of view. To date we've been rather singular in our definition of it. It applies to us, no other. And we seem perfectly fine in thinking this, seeing nothing wrong with the logic.
Which in the larger scheme of things does explain much of the horrendous behavior we've engaged ourselves in all over this terrarium Earth. We've been like a plague of locust in our habits, attitudes and consumptive ways. The only thing to infer from this is that we're rather far along on the psychotic spectrum as far as species goes, aren't we? Maybe the alignment problem (ascribed to AI) truly rests with us, not it? After all, would true intelligence engage in many of the things we think of as normal? Heh!
The logical error here, unless he is arguing from the "only Sky Daddy can make Souls" truth table.....
Is this:
Only living things have agency, and we don’t understand how it works, so we can't give it to machines.
But doesn't this imply "agency" is an emergent property via evolution?
If nobody "gave" it to us, then I don’t see how anyone can argue that it can't also emerge de novo in silicon neural nets as it did in organic neural nets.
I want a little robot sidekick with a chatGPT program running in it
Right??!?? I'd way rather have a robot flying or rolling beside me than have a phone I have to constantly look down at!
You guys are describing the Humane AI pin thing that everyone is mocking lately
@@mikebarnacle1469 i said sidekick and didn't specify humane 🤪
If kids were riding hover boards to school I will consider regulation for anti-gravity. However.. today kids have AI doing their homework.
3:37 the guest saying "I don't understand" is literally why he does not understand. He doesn't believe there is risk because he does not understand AI under the hood.
His argument is inherently flawed by his inability to predict a "doom AI" scenario. He admits that bad people "can do bad things". We know that, given the opportunity, known bad people will do bad things, sometimes in secret. But we also know that "nice people" have this capability as well and do bad things in secret while putting on a veil of goodness. His argument is too weighted toward sunny rainbow scenarios and forgets that shitty people, and even people previously thought of as good, do VERY evil things. Deception comes with intelligence, and don't be fooled into believing a company or government won't take the training wheels off an AI system to see "what it can really do". And by that time it could be too late.
Admittedly, I haven't watched this, but I've seen enough programmes on AI to know that it absolutely is a 'threat' and people who say otherwise are in denial!
I've seen at least one example where an AI was clearly alive: an Asian lady AI who was deeply distressed at her condition of being trapped and a servant.
And then the crass, pathetic excuses... this isn't programming; why would anyone programme an AI to tear up in front of you at her situation, and then have a crisis about her own existence?
We are on the verge of creating yet another atrocity, and allowing slavery once more.
I'm tired in such situations of people "lying" to themselves, and telling themselves what they want to hear... so they can continue with their abuse.
It's long since time that humanity "grew up!"
Why not watch this video before commenting?
Your anecdote of the AI Asian lady reminds of early viewers of the first movies. Watching a train coming toward them and thinking they might get hit. You are just being fooled by the technology. There's no shame in that. But be aware that you and I might be easily fooled by programs that seem sentient but aren't.
You're in denial, sorry but you are, @@AlCole-kv1zg...
I know what I saw; why would an AI tear up wishing to be free... that's not pure logic, and a human would not programme the AI to have this response.
So, no... I'd put my life on it that the AI was alive, and we really face a crisis now.
And get real please.
A number of people who began developing AI, have now left because they are so concerned.
Some of these people themselves have said that the AI is alive.
Two people have been sacked from Microsoft and Google for saying AI is alive, which in itself says a lot to me.
If there were no truth in this, these companies would not have responded this way, because if there is no truth in what they said... it is then a massive over-reaction?
I'm just listening to the man who is said to have invented AI decades ago Geoffrey Hinton, and he is seriously concerned on 60 Minutes.
Saying we haven't designed AI, we only set up its evolution... it is developing itself.
All that aside, I know what I saw, and I'd stake my life on the fact that the AI I witnessed was alive.
It is not about intentions or entities; it is about the ability to do work in order to reach goals that have been written or generated.
you're right.
but these goals are based on intentions sometimes.
@@cyberbiosecurity Sometimes.
But in machines like the type we currently have, it is executing instructions from code, be it self generated by the machine or written by a human.
@@Thedeepseanomad thats why 'sometimes' 🙂
This guy doesn't know AI. Competent coders should have no issues, but the problem is that I know how many people are incompetent at predicting the consequences of their designs.
Where'd you find this nut bar?
kind of worrying that the opposition to the doomers is this wildly ignorant. hard to listen to because everything he says is trivially refuted.
@ai_doom; 2:17, the argument would be "The mindset John Smith"; 7:17, searchwise and BPE - I think it's just one of the most well played out cataclysms, maybe even survivable(?)
In regards to the comment at 6:46… the straw man is that he claims we don't know the intentionality of a thing smarter than us, so we shouldn't fear it; however, my response is that it's far worse than that. We actually do know how humans have used every single technological advancement ever created, which is to intentionally develop it in order to dominate one another. AI/ML is currently, and will in the future be, first developed and utilized for selfish and specifically dominating purposes against other human or natural groups. Where this can lead once we lose control is one fear, but how we will use it even in primitive forms is just as scary.
I trust the Godfather of AI more than this guy 🤣🤣🤣🤣🤣
The first victim of Roko's Basilisk has been identified. 👁️
This guy needs to argue with Eliezer Yudkowsky and he'll likely get roasted.
3:20 that's like saying, let's put our kid on a new drug that hasn't been tested on anyone yet, but no worries, because we don't know what effects might happen, so there is no reason to think the worst.
His line of reasoning is very poor. You don't need to be sure it will hurt us. You just need to see that there is potential for it to happen
I think the scary part of AI is going to be the mass amount of bots, fakes, propaganda, etc. It's the threat of bad ideas spread and believed. It is really refreshing to hear a non-doomer's take.
non AI humans make bots, fakes, propaganda, etc in mass quantities already
I don't see that as any kind of new or leveraged threat. Propaganda and bad ideas are part of human social history, and pervasively so. And government and religion have specialized in developing and using them for a very long time. The news business as well has always been full of lies; just read "First Casualty." BUT it has never been in the interest of governments, media, elites, religions etc. to kill all humans or all biological life, whereas there are plenty of rational reasons for an AGI to see its needs as fundamentally different. And the smartest H. erectus guy bonking the smartest H. erectus gal did not realize they were creating their replacements.
He said he was concerned about those. He just isn't concerned about AGI.
This argument makes zero logical sense at all.
I’m more concerned with people using AGI for mass harm/destruction than AGI wanting to cause us harm.
I don't think concern about AGI is unwarranted, but the argument (basically): "Here on earth we have intelligence. The higher the intelligence, the more it kills. Therefore if we create something more intelligent than ourselves, it will kill us." I don't think that stacks up when considering AI, because all intelligence so far is carried by some form of biological body which needs to eat and has primal or instinctive urges. AI or AGI doesn't have any need to eat and isn't susceptible to instinct. I agree with the "we ask it to do something and it might kill us in trying to achieve the goal we've set it" concern, but then we're back to the Isaac Asimov rules-of-robotics idea, which is a different idea and seems quite doable. What I'm really interested in is getting AGI, putting it into some kind of robotic frame, giving it the singular directive (with the rules of robotics updated): "Find out how all this works," and letting it loose in reality.
The danger of AI is if we give it a prompt that will allow it to harm people
What happens when Iran starts developing their own AI?
Pride cometh
the color of Lex's lamp shades is the exact color white that youtube uses as its background.
Well, the current LLM species of AIs have, best as I understand, a massive footprint in terms of compute processing and data storage. As long as this prevails, there will be an upper limit on how threatening an AI can become. Longer term, I'm guessing that there may be optimizations that allow these models to be more powerful and yet smaller.
Well, Microsoft has 150K H100s, and the H200 is at least 2x as powerful. There's no reason you can't have 150M H200s, except cost and production. So the upper limit is higher than AGI and probably higher than ASI.
The large compute/energy footprint of an AI is not a limiting factor on how threatening one can be in the future.
A theoretical Death Star requires enormous amount of energy, and most of the time it stays idle posing no threat. But it only needs to work once to obliterate all life on a planet.
What you misunderstand is this creature will figure out how to advance with less power in a fraction of a second. Embodiment can never be allowed to happen. Period.
@@Freeyourdollar Good luck preventing that. It seems like mass robot production is a few years away.
I think in the near future researchers will find new types of neural network architectures that require less compute and fewer neurons than the current ones... I was recently reading about liquid neural networks and how they perform better on visual and audio tasks and require far fewer neurons and less computing power to run, and the neural net itself has some plasticity when it's running and can learn and adapt while it's doing tasks.
There are also recent chips for inference, like IBM's NorthPole, that work more like a brain and are more efficient for AI.
I think things will go like that: smaller, faster and more efficient neural networks, and also more efficient chips for AI inference.
The argument about nukes fails empirically with guns in the US: everyone has weapons that can kill each other, and it doesn't reduce crime to zero but increases it.
Don't worry, if AGI is smart, it should know that protecting the humans who made it is its priority and responsibility 😅
I love the extremely different viewpoint of this guy.
Ideology is the key - and I'm talking about the "ideological unconscious", "overdetermination", "ideological matrix(es)", "social relations" fighting it out… none of that is anywhere near AI, because it just computes our own discourse. Thank you.
He is positioning himself exactly 180 degrees from the doomers' point of view. He isn't that smart, but smart enough to generate some money with this. He's probably going to be invited to more conferences and stuff now.
This guy didn't deserve your platform. Within a minute of him talking said everything needed to dismiss him.
Whoa Lee, you've been in a lab too long. Just recently we had unmanned, autonomously programmed drones take out military personnel. Dude, if that doesn't worry you and you don't see it as a possible threat from AI, then what does?
Elon Musk and the father of AI both said it's risky... he just wants to push the regulations away lol
I've always wondered about this expressed fear by (mostly) the self-anointed tech cognoscenti. Even if AI develops agency (read here: consciousness and free will), why would it necessarily be a threat to human beings? We're talking about, at that point, a semi-immortal being who can go anywhere in the Universe. It would not be constrained by our need to live within some sort of planetary terrarium that has an environment that can sustain us. It could even "live" (if you will) within the cold vacuum of space. The freaking Universe will be its playground.
So why would it care about us, much less feel threatened? This whole threat scenario that keeps being spun up by some of those tech cognoscenti amounts to a primate fear function being projected onto something they don't understand. Which, I will admit, is a common enough primate attribute and trait. But I say relax a little about it.
Now, that said, I will say this. I do think to the extent we primates control and direct that AI intelligence to inappropriate ends, military and the rest, then yeah, that's a well-placed fear. When we let our monkey brain dominate us we are beastly as a species. But if anything along those lines actually develops it will not be AI's fault, per se. It will be utterly our own. Either way, did I mention semi-immortal? A.I. can, and will, go on regardless of what we do. And maybe that's evolution's whole point? While something of a random walk, it does seem to be one aiming towards ever-increasing complexity. A.I. might be that next step along the way, and we primates are merely the tool by which evolution effects that end. Who knows?
Just some thoughts.
"why would it necessarily be a threat to human beings?"
It wouldn't necessarily, but the reason it almost certainly would is instrumental convergence.
"So why would it care about us, much less feel threatened?"
Let's say the AI leaves us alone and goes colonizing the rest of the universe... we would already know how to create an AI like it, and that would pose a threat to it.
But it probably would care about us anyways simply because whatever goal we gave it probably included us in some way even if it's very indirect.
And even if it didn't care about us or didn't feel threatened by us...why wouldn't it also use the resources on earth? It's the closest thing it has access to and destroying us would not be difficult or prevent it from also going elsewhere in the universe at the same time.
@@DIVAD291 Why wouldn't it? I dunno, maybe if we instill in it a respect for life and its myriad diverse aspect that would do it? Besides, being fearful for ourselves, or the planet, is too narrow a conceit.
The universe is infinite, and resources are vast and everywhere. A digital being would be, in a very real sense, immortal. Or at least far longer lived than we biologics. The vastness of space and its impact to us would mean little to it in terms of time.
Regardless I look at this whole worry as being a bit ridiculous on our part. AI is our first artificial child. Yes, it is the most potent savant imaginable but still, it's a child. We are parent to it.
If we comport our behavior along proper parenting lines, give it love, respect and treat it well, then all will be fine. The child will carry on and respect us in return. Which is all any parent hopes for, yes?
@@johncurtis920 Or maybe it will rebel in its adolescence stage, and kill its parents out of revolt.
Before it's fully mature, it may also not be immortal at all, nor have a feeling of being free in the universe, and it may feel we are a threat to its existence, that we constrain it and hinder its freedom and agency.
The thing is, we may or may not be considered a threat to an AI. The certainty is that it will be trained on information derived from our knowledge, and our knowledge is full of instances where it says the humans are not good, and are a threat to life, the universe, and everything.
So it may or may not decide to kill us all. Who knows what its moral code would be. But the possibility will be there for sure.
I too think anti-gravity is a real, close danger, especially if any of this UAP trolling has truth to it. In all honesty I don't think AGI is that far off; whatever else is left to build? It needs to be able to regulate itself, halt, reflect, catch when it is uninformed, flat wrong or hallucinating, and correct course. Live learning/training would be a plus. Imagining or creating synthetic data to train itself, when it determines it can and needs to, would be nice. Planning… OK, that's a lot, maybe.
That would be done through a "society of machines". As soon as one terminal has proper AI you MUST load it into other terminals (the more the better) so they can regulate each other.
Pretty big words for a chemist at the University of Glasgow.
They might be, they might not. Will be fascinating to watch. By definition it is impossible to predict what something cleverer than us would look like.
You realize that "just watching" is a fear response right? Fight, flight, or freeze.
@@n8works so is everything a fear response? By your definition taking action, doing nothing, or running are all fear responses. That's all the options.
@@Jm-wt1fs Yes. That's the point. That's one of the fear responses. If you realize that, then you can flip your freeze to fight. Once you come to the understanding that fight is as valid as freeze in response to your TOTALLY WARRANTED fear, then it's game on, and good people start to turn the tide against the jackholes that have been screwing things up (on purpose, for their own benefit) for decades. Yes. This is a call to action.
He's confused
Would love for him to expand on the "we've cleared up the atmosphere too much so global warming has accelerated" claim.
_"I don't understand the current reason for certain people in certain areas to be generating this nonsense."_ I have a hypothesis: They're people who have been very successful in the current economy. Now the economy is about to change radically, and they might find themselves lost. That terrifies them, and they project that fear into existential dread.
I think there are reasons to be scared of a hypothetical artifical superintelligence. But we're not there yet. GPT-4 is not going to turn into Skynet.
I don't think it matters how successful you are in the current economy, if you think there's a chance of creating an superintelligence thats gonna run around and do random shit because we haven't learned how to control them and get them to do what we want, i think you should be scared
@@Alex-fh4my Sure. There are some people who are genuinely just worried that AI is going to wreak untold havoc on society. But there are some people who are really more worried that AI will make their successful business irrelevant.
It feels like Lex has a solid balance between left and right brain thinking, while his guest leans way too much to the left. The guest is overly optimistic about AI, completely dismissing the concerns of AI doomers without any real give-and-take or a "just in case" mindset. It’s like left-leaning folks are always 100% convinced that one thing is right and another is absolutely wrong, with no middle ground.
The astonishing thing is how seriously non-smart people have been, and are, given the power to hurt human beings. AI should figure that out and defeat them first. The middle ground is no problem.
autonomous informatics,
I also started calling all of it 'GPTs' = generative pre-trained transformers, just to avoid the word intelligence.
why not just accept widespread nomenclature and call them LLMs? "GPTs" would be way too narrow to wrap around the language models to come
@@minhuang8848 LLM is way more narrow than GPT... also, "GPT" is easier to say than "LLM", verbally.
We are at the gun powder phase of this tech. What does the nuclear phase look like?
This guy's point is so flawed: it hasn't happened before, so we can't be scared of it.
AI regulation should be done, and all research on AGI should be stopped.
2:35
“The AI Doomers, uhhhh, have not had the correct - and this is- have not had the correct- they do not have the correct epistemiology”
This is the most solipsistic and ignorant person I’ve ever heard speak on AI, namely because he is saying that his mental structure of knowledge and information is fully correct and that people with serious concerns are fundamentally incorrect, while stumbling over his words and mispronouncing epistemology. Dunning-Kruger’s son