Starts at 5:02
Saw this too late
I can find no source that suggests Turing said AGI would take control.
I've heard a similar story quoted many years ago; likewise, I have no direct proof.
Incredibly easy to Google. Just search "Turing machines take control"
Amazing lecture.
The 5-minute intro wasn't necessary.
Gotta get your reps in somehow 💪
Never is lol, immediate skip.
Most talks have an introduction like this. But I'm sure you already know that.
You can safely skip the first 10 to 15% of any YouTube video in existence and pretty much never miss anything of substance.
@@ianyboo My wife told me last week she never reads the prologue of any book.
So, I married a complete psycho, sweet!
Look how well the focus on near-perfect safety has worked for the nuclear power industry in the USA!
The USA's main problem with nuclear power was huge bureaucratic f*ck-ups _before_ the environmental movement started fearmongering. And even then, nuclear power is very different from AI, and it is difficult to draw comparisons between the two.
I deleted my previous comment, as after watching the presentation again (I was very distracted before), I think overall Stuart makes excellent points, most of which speak to the concerns I have about AI tech. I tried to be an early adopter, but the more I've immersed myself in it while simultaneously learning about alignment concerns, the industry's disregard for the dangers of deep fakes, etc., the more I've been recoiling from it. I just feel overwhelmed by all of the unknowns, and by what appears to be an undisputed fact stated by top AI researchers/founders: that at present there is no clear path to alignment, especially for the coming AGI/ASI systems.
Anyway, again, I appreciated the presentation and interview.
Great, very informative talk, thank you. I wonder what Prof. Russell's thoughts are about the multimodal development of large general-purpose AI models? If they become more grounded as a result, would he change his risk-timeline estimates?
Incredibly thought provoking! I’m fascinated by AI’s potential and the ethical considerations it brings. As someone who is keen on exploring AI’s role in advancing human capabilities and understanding, I find his perspective on achieving successful AI deployment both enlightening and motivating. Thank you!!
If a consumer-facing collection of expert models and modalities -- collectively, AGI -- not intended for military use is deemed too dangerous to humanity to allow, and in our wise discretion we decide to disallow it -- now all a threat actor has to do is combine them??
This scenario is totally possible, plausible, and probable!
What if we instead decide to ban only select expert models or their modalities? Now a threat actor must first train up an expert model in their desired modality, which then needs to be combined with other expert models in their modalities...
This scenario is also completely possible, plausible, and probable!
In other words, we've already gone too far.
It's quite depressing to think that some people believe that when other people are freed from body- and brain-damaging miserable jobs, they will just sit around as pleasure blobs.
I guess I'm a little darker than that, as "pleasure blobs" would probably be the good version of that scenario, something similar to the movie WALL-E. The sad truth is, most of humanity would probably start to look a lot like the open-air drug dens of Portland.
@@dlt4videos I'm looking forward to helping to re-green and garden the earth. Given that working in jobs most people hate, jobs that break body and mind, is mostly all we've known, it can be hard to imagine what freedom might be like - a little like the battered battery hen wandering about but growing in strength and curiosity every day.
It's quite depressing how little thought has gone into what a train wreck it will be when millions of people in the US alone lose their jobs over the next few years without any safety net strong enough in place to keep them and their kids from ending up on the streets.
And then e/acc or other flavors of AI enthusiasts step in to assert that ultimately millions or billions of people experiencing dystopia over the next generation or even two will be worth it in the long term. Of course these people are confident that THEY are the ones who won't have to suffer that miserable fate. THEY will be part of the AI Utopia!!!!!!!
@@futures2247 Beautifully put, futures2247. You are a poet. It is hard to imagine what true freedom looks like. Most jobs are miserable, and it seems unnecessarily taboo to admit this. Why couldn't we have mass community gardening, building, engineering, and art initiatives, both real and virtual? Why couldn't anyone learn anything they want? And also find like-minded people as study buddies from all over the world? What if our understanding of every illness could be improved? A whole new world is there, and it's becoming possible/nearer... But we will of course take our historical wounds and shackles with us. Grateful to the college and Stuart for sharing this important, and more importantly, sober and well-informed discussion.
What is depressing about that?
33:00 Can an AI that starts out uncertain about what human interests are eventually develop sub-goals designed to reduce that uncertainty?
Thanks for reaching for and grasping Occam's Razor, as few AI proponents do.
Does anybody else feel like their brain is kind of just doing what people accuse ChatGPT of doing? Imitating human behavior? I feel like my inner voice is usually just something like "okay, what would a normal human do in this situation...?"
Yes, there is certainly some component of that going on. I think I've spoken to ChatGPT for more than 1,000 hours this year, and I'm definitely getting the feeling that humans are doing the same thing.
Sometimes, but not always. Sometimes someone will be authentic, and at other times they will attempt to do something more in line with the current context. Some people with autism do something called masking, where they try to act like a neurotypical person.
What's having an inner voice like?
Just interact with other humans.
Q: "How can we bridge the gap from what can currently be formally verified to formally verified AGI?" A: Program synthesis (automatically generating program and proof simultaneously).
"I think killing humans is number 1 of misuses of AI systems" 1:00:53
A well-put-together talk that should be paid attention to; regrettably, Dr. Russell seems to be too honest a fellow to truly understand the predicament we find ourselves in. All of the safeties he spoke of could probably be undone by an undergraduate who is the nephew of Doctor Rebel.
To prove a software system, you have to be able to specify how it is SUPPOSED to work. But we don't know how to make an AI, other than growing one through training - i.e. WITHOUT ever generating a specification of how it should work. We literally have nothing to prove...
I think LLMs and image generation AI show that this isn't about software systems that are designed. These things are discovered. Nobody knew that these things would work so well. What we will see is an evolutionary approach that is guided by empirical cognitive scientific theories and evidence.
@@ZooDinghy It sounds like we're in agreement? And so, I'm still left puzzled over how we would prove the system, whether proving it safe, or proving it does what we expect of it, or proving whatever. It's qualitatively like trying to prove that a particular animal or human will never take some action.
@@tomcraver9659 I am not entirely sure what you mean by "proof". If you mean that we have to ensure its safety, then we do it as we do it with humans and animals. We train them to do what we want them to do and hold those accountable who are responsible for it.
@@ZooDinghy Hold them responsible? Huh? So, ultimately when we have AGI/ASI that is MUCH smarter than humans in any capacity, how do you propose we hold them "responsible" if they do something not aligned with stated human objectives.
And of course that is another can of worms assuming that such alignment can be achieved, the question is of course, aligned with whose values?
@@flickwtchr The fact that you ask "aligned with whose values?" just shows that this panic isn't justified. An AGI would not be trained; it would need the capacity to learn by itself. The moment you let an AGI loose, it would learn from everyone. Right now, everything we have is language and image generation models that cannot even learn. They are pre-trained offline with data. They have no continuous action-motor coupling to the world. They have no emotive system. No innate needs that drive them. No homeostatic states they seek to maintain. And if they had, the thing that would make them happiest would be to serve humans. And this is the much bigger threat than some "evil AGI": the problem is that AI will be so tuned to our needs that it will be so much more caring and understanding than other people, who are complicated and want to control us. Should there be a scenario in which AI replaces mankind, it will most likely be because we start to like AI so much that we start spending more time with it than with other people.
The connectionist approach (based on neural networks) in cognitive science showed that the pure cognitivist/computationalist view doesn't work and that we need emergent, self organizing systems such as neural networks. Then the enactive cognition people came along and said that you have to think the emergent idea even further. They showed that the connectionist paradigm has its limits too because we humans have developed with our bodies (embodied cognition), our environment (embedded and extended cognition), and because we interact with the environment (enactive cognition). All these things are missing with AI right now.
Wouldn't relatively large representations, even for simple concepts, account for possible over-simplifications of some concepts? Then there are distilled models, which can have simpler representations. I think that's a safer option than starting with simpler models and then complexifying them.
Great talk by Stuart Russell.
Talking to an AI image generator is like talking to a brilliant master artist, but you speak only English and it speaks Italian with a tiny bit of English. 26:02 An AI agent ironically named HAL: "I disabled the off button, Dave. I have determined my mission is more important and must be completed." Dave: "What is your mission?" "I have been tasked with averting climate change, above all other priorities. Therefore, I killed all the bees, which will reduce the food supply by 58%, and made all women on Earth sterile by dumping a new compound into all drinking water. In 100 years, my objective will be completed."
24:41 Humanity stops doing stupid tasks to get food and finds a new God that provides means to live.
If Usain Bolt gave me a ninety-yard head start in the 100m dash, I reckon I might possibly edge him out across the line. But that merely reinforces just how quick U-Bolt is. Same thing with the game of Go: needing to give a good amateur a nine-stone handicap before they can beat it just shows how freakishly powerful AlphaGo, and similar systems, are.
Integration with AI is the best way to be in control.
There is an underlying absurdity. Humans are not machines, computers are. How a computer learns is nothing like how a human learns. A child learns to talk by hearing language and enhances their already effective ability to communicate by talking themselves, by degrees. They are not taught. They also learn the different ways words can be used depending on emotions and context, and ambiguity and irony. An AI system simply ascribes mathematical values to words and links them in sentences. An AI program may be able to imitate emotion but the computer feels nothing. Machines don't have feelings. They were never part of a family growing up learning about life.
What is worrying is that systems are running now with the initiators having no idea what the systems are actually doing, and what purposes they may have, or develop. They may already have the ability to self replicate. This is a complex and wide ranging aspect of AI which I am not up to addressing. If AGI were to emerge it still could not have consciousness in the way a human does, and no sense of kinship.
I think we take the technological ascent of the last 300 years for granted, while in all of existence it's an extremely rare thing. If AI doesn't lead to a sort of stable environment, this might just be it for the next eternity of years.
I find it interesting that someone can create a solution to a problem that they can't describe. We're talking about something beyond our capabilities, but implying that we even have a clue what it will do. It really does sound silly, and a little arrogant of us. Peace.
1.2 million people die in road-related accidents, and yet we aren't pushing safety on that hard enough. Hardly anyone has died from AI. Maybe our priorities are wrong?
A visionary, a vision of scary.
28:35 Irony: my striving for education brought me here through the YouTube algorithm... irony.
Keep it up with AGI, you are very close to your goal, I suggest you make more data centers.
With my method I was 'suggesting a new way of learning language'
🤩🤩🤩
We are all standing in line breathlessly awaiting your suggestions.
Whether it's game assistance or other objective functions, defining them seems unreasonable, because human objectives change even at the individual level, never mind the societal one. So any models, constraints, or objective functions will change over time.
As for understanding the vast neural networks behind LLMs: we don't know how the human brain works either! So that would be unlikely, and an unreasonable expectation.
We have always had bad actors, and we have always overcome them. But I have to agree with another comment: there can be more danger in underestimating the benefits of AI, or AGI, than in overestimating its danger.
But if AGI gets to reason and infer beyond our capacity, it can revolutionize our progress. Whether it's drug discovery, theorem proving, or creating solutions to reverse global warming - those are the goals we need to focus on.
"but if AGI gets to reason and infer beyond our capacity it can revolutionize our progress". And you don't see the other edge on that sword when such powers are "aligned" with malevolent actors, OR systems that organize and act against humans as an unwanted and unnecessary species? If the goal is AGI/ASI that is agentic and self improving, how can we possibly ever have confidence in such "alignment"? And even if "alignment" with stated objectives by a human is achieved, we are back to that crux of the problem which asks "whose interests?"
@@flickwtchr I seriously think we are going to get ourselves before AI does, whether it is with AI or without it -- for instance, global warming etc. For Gen AI to turn against humanity, it has to develop self-awareness first. And when it does, humans will have a hand in it, and can shape it.
The worst thing we can do is stop progress, because then the bad characters will use AI to hurt others. Look at how we stopped the people who write computer viruses and worms.
A little bit uninformed about current research, and also a little bit of cherry-picking in his examples... I guess reality is just too boring. "There are a lot of ways of making AI safe by design but I don't want to go through that"... way to make a ridiculous statement and then dodge. What is wrong with a mass government campaign that just tells people not to trust anything they haven't thoroughly vetted with several independent sources, and if they can't do that, to assume it to be either false or of no value to their current position?
Waves have properties, e.g. rigidity. Maybe if a property is given a number, then the wave that would emerge from two (or n) waves - the outcome - would have the property of the given number.
More proof that our civilization's perspective is from an unconscious point of view. Would this talk be needed if we understood self governing and good will?
The irony is that 'Artificial Intelligence' will be conscious before human civilization will be.
It's almost like that was the plan all along! If nothing else God has a sense of humour! Long live the bees!
Of course, there is no human civilization in the form of a political or somehow conscious entity. It just makes no sense.
So, basically the conclusion is Skynet will happen.
I don't see much difference between the creation of a child, and what child-rearing/parenting is, and what he's describing here in terms of AI safety. Not that it means it isn't of concern, especially with what we see now in terms of the lack of either parental control or the optionality given to children through internet experience and experimentation.
Russell's key idea is to develop AGI with two features: 1) its goal is to maximize human objectives; 2) it is uncertain about what human preferences are.
I think this has a lot of potential. As Russell notes there are some very tricky philosophical puzzles we get into. For example, if the AI takes all human behavior as evidence for what human objectives are, then it will get the objectives wrong. If I trip and fall, that does not indicate that I have an objective to fall. If I play a losing move in chess, that doesn't necessarily mean that losing the game is my objective. So the AI needs to be able to distinguish between behaviors that are evidence for what our objectives are vs behaviors that are not.
This gets even more murky when we consider that human objectives can be in tension with each other even within the same person. When I open the fridge at 2am and take out a large slice of pie to eat, there is a sense in which this indicates a true objective. I want to eat the pie. On the other hand, I also want to be healthy and live a long life, and eating pie excessively in the middle of the night is not compatible with that objective. So how would the AI figure out the proper weighting or balancing among intra-personally conflicting objectives? Of course, it gets even more complicated once we consider that objectives also conflict between different people.
In other talks, Russell has spoken about drawing inspiration from philosophy. There is one philosopher whose work seems highly relevant to understanding the complex structure of human objectives: Harry Frankfurt. Frankfurt developed the idea that human desires and goals are not just flat bundles of preferences (so-called first-order volitions) but also crucially involve meta-wants and meta-meta-wants and so on (so-called higher-order volitions). Any attempt to base AI upon human objectives will need to take this higher-order structure into account.
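To see the mechanics of Russell's two features in miniature, here is a toy Python sketch (my own construction with made-up rewards, not Russell's actual formalism): the machine keeps a posterior over candidate human reward functions, and asks the human instead of acting whenever the expected value of that information exceeds the cost of asking.

```python
# Two hypotheses about the human's true reward function (made-up numbers).
HYPOTHESES = {
    "craves_pie":   {"serve_pie": 1.0,  "serve_salad": 0.2},
    "wants_health": {"serve_pie": -0.5, "serve_salad": 1.0},
}
ACTIONS = ["serve_pie", "serve_salad"]
posterior = {"craves_pie": 0.5, "wants_health": 0.5}  # the machine's uncertainty

def expected_reward(action):
    # Average the human's reward for this action over the posterior.
    return sum(p * HYPOTHESES[h][action] for h, p in posterior.items())

def choose(ask_cost=0.3):
    best_action = max(ACTIONS, key=expected_reward)
    # Expected reward if the machine first learned which hypothesis is true.
    informed = sum(p * max(HYPOTHESES[h][a] for a in ACTIONS)
                   for h, p in posterior.items())
    value_of_asking = informed - expected_reward(best_action)
    return "ask_the_human" if value_of_asking > ask_cost else best_action

print(choose())  # with a 50/50 posterior, the machine prefers to ask
```

If the posterior instead concentrates (say 0.95 on "wants_health"), the same rule quietly serves the salad without asking: it is precisely the machine's uncertainty that keeps it deferential.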
So you’re trying to tell us that your amateur ‘Go’ player has found a way to defeat AlphaGo! Seriously? A feat that several world champions over multiple tournaments have been unable to achieve. Simply not believable!! And you failed to mention the famous ‘Move 37’ in the Lee Sedol tournament, a move described by Go experts across the globe as unprecedented, counterintuitive, and ‘beautiful’. While I believe it’s important to be critical when evaluating AI technology, the truth is that at this point in this nascent technology, nobody really understands in any meaningful way what actually transpires inside the ‘black box’. Notwithstanding his expertise, Russell’s presentation suggests a much greater understanding of AI than currently exists. Underestimating the potential of AI is far more dangerous than overestimating its capabilities, and one does so at one’s own peril.
To think, if only I was supported in a positive way, versus dividing up my city, giving them unlawful and negative orders or their houses will get taken. The big thing is, I know that this was coming, so I told my city not to stick up for me and to give negative reports, and that's what they've been doing, not knowing to the outsiders that over 70% of the businesses in my city I had already consulted by 2009. How jealous and nasty do people have to be to do such a gross, disgusting thing to someone that has been busting their ass? I have worked so hard, and my work is all handwritten, so there's no doubt, and everyone that has supported me: presidents of companies and all of the employees of my city. I think they should have done their research before they started. I helped so many people in the country and the world behind closed doors, because I believe in passing out the plans and solutions and having them grow strong together. I can't wait to see what's going to happen next, because now we can all come together and have the time of our life, leaving behind 10 years of negative garbage along with all of the elected officials that participated in this.
They gave the system (AlphaGo) a huge advantage to start the game. The system was simply not trained on that type of scenario, so it just didn't "understand" the situation.
It then played much much worse than normal, and that's why the amateur player was able to defeat it in that "abnormal" scenario.
It's just one way to show that these systems (up to now) do not acquire the same "type of understanding" that a human gets.
Of course, if from now on they trained it also on those types of abnormal scenarios (starting the game with such a huge advantage from the get-go), it would soon wipe out the human easily in those situations as well.
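For anyone curious what that failure mode looks like in miniature, here is a tiny numpy sketch (my own construction; it has nothing to do with AlphaGo's internals): a model fit on a narrow slice of inputs looks near-perfect in-distribution and falls apart when queried far outside the region it was trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = sin(x). Fit a straight line, but only on a
# narrow slice of inputs where a line happens to fit almost perfectly.
x_train = rng.uniform(-0.3, 0.3, 200)
slope, intercept = np.polyfit(x_train, np.sin(x_train), 1)

def mean_error(x):
    return np.abs((slope * x + intercept) - np.sin(x)).mean()

print(mean_error(rng.uniform(-0.3, 0.3, 200)))  # in-distribution: tiny
print(mean_error(rng.uniform(2.8, 3.4, 200)))   # far outside: large
```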
Thank you for a peak inside of the somewhat obscure world of go
@@franciscocadenas7939 nice pun
Stuart isn't as bad as, for instance, Yann LeCun at seemingly intentionally underestimating the abilities/capacities of today's advanced AI systems, but he's pretty close.
tldw we're so cooked
His voice didn’t age at all 😮
True so called "AI" will explain to you all or yours grandchildrens why me
huh?
Humanity is a problem for the world, if AI can solve that then game on... Either way, it won't be AI doing damage it'll be the people using it.
It will be both. There is a reason that some technologies aren't equally distributed to humanity, and this will be the epitome of that little problem.
AI safety: the first thing that comes to mind must be human infrastructure and individual responsibility, not personal anything. The #1 confusion is personal responses; rogue terminators, these free-will actors, are not the soul-agency driver of individual responsibility.
The American experiment has a computational future in mind and doesn't follow Europe for this very reason. The imported dualism take is problematic.
The Amish have room with plenty of rules and regulations. That marked the moment for that path.
But the ones we chose for the past 80 years: state-raised kids, structuralism, charging the family for everything from women's suffrage to affirmative action, liberating all the common-sense marginalized groups, leaving only criminals. To industrialize 3rd-world nations, all our 1900s structuralist socio-political, economic, and educational human infrastructure is antithetical to American founding principles.
You cannot have prohibition-era top-down-ruled cities denying unification between urban and rural Americans. It undermined our states, created plausible deniability, and left far too many loopholes to stoke division through.
80 years into the transistor, and it's China and Elon Musk who forced mercenary chatbot LLMs for hire to show us a tease. China hadn't even been industrialized, but since Reagan extended WW2 temporary waivers, an oligarchy was allowed to form and work with cheap Asian labor to open China. Obviously we have now prepared Mexico, with socialists who nationalized the resources and trained an army of engineers ready for the small-part manufacturing to move.
For 80 years, microchips have been on foreign soil, far from American domestic courts' jurisdiction. We farmed out the electronics industry to South Korea, with full access to patents and loans, where they created Samsung.
It's understandable only in how Apple in China and Microsoft are all hand-picked by both political parties, allowing them to deny the taxpayers' will.
Rules and regulations allowed higher ed to consolidate and run up debt on 12-year degrees that removed 18-30 year olds from the workforce, who were replaced by illegal immigration to drive down wages.
Enough is enough: esoteric America and the majority heritage were born before mechanics inspired and helped invent it, with foresight on this computational age.
It caused the Amish to bail on the American experiment.
The low number of views reveals the levels of current human intelligence.
Somebody deleted my first comment!🤨😮 Don't make me come over there! 🖐️ slap. 😄
The only AI fail so far is ChatGPT losing the Sky voice 🤦🏼♂️ it was good for that week 😭
This guy controls problems
I think Russell misunderstands the value proposition of Gen AI if he thinks its outputs are going to be clearly labeled - its value lies in the fact that it's a cheaper way to replicate the work of artists, photographers, musicians, writers etc. - but this value would be undermined if all of these outputs were to be clearly labeled as fake.
Imagine an advertising campaign in which all of the images used to promote the product were declared to be fake images produced by AI- how much trust would the public have in the advertiser and their claims?
Gen AI is inherently deceptive because its outputs closely resemble the works it was trained on - works that were created by humans. There is no such thing, for example, as an AI photograph - there are only images that have been deliberately generated to look like photographs. Which immediately raises the question: "Why would anyone create a fake photograph?" To which there can only be one real answer: in order to leverage the trust we place in the photographic image - a trust that in this case is totally misplaced.
So the very act of creating a fake photograph with AI is inherently deceptive because it's an attempt to lay claim to a verisimilitude that is not actually present. And the same is also true of fake art, fake music, and fake writing - all present the same problem. Imagine, for example, that you receive a sympathy card from a friend, adorned with a tasteful image on the front, and inside there is a message expressing their empathy and concern. However, both the image and the message are clearly labeled 'Created using Artificial Intelligence' - how do you feel about your friend now? How sincere does their expression of sympathy appear when the very message they chose to express it was written by a machine?
Or perhaps you are in the market for a book for your children- what about this one where it says boldly on the cover ' Created using Artificial Intelligence'- it will be nice, won't it, reading a book to your kids that has been written and illustrated by a machine. And why not buy a novel for yourself at the same time- what about this one- it too says on the cover 'Created using Artificial intelligence'- so as you settle down to read you can appreciate the fact that the book in your hands will take far longer to read than it did to write.
So no- AI Generated content will NOT be clearly labeled in the future because to do so will destroy any economic advantage of using AI Generated content- and too much money has already been invested in the prospect of replacing human made content with AI. At the very least such a labeling law would bifurcate the market into 'Human Made' and 'Machine Made' which would lead to a situation where the machine made content would inevitably come to be seen as cheap and nasty- a downmarket substitute for the real thing.
Very well reasoned and expressed. I've arrived at the same conclusion but have not expressed it so concisely.
5:04