"The Technology is being developed in a society that is NOT designed to use it for everyone's good." - Think he summed it all up pretty expertly with that one quote.
totally. What would one of the premier scientists in this field know about any of it? Good thing he's totally alone among his peers in his thinking on the subject... oh wait.
Exactly, and this is the main point. Even if we stop AI development, some other technology will threaten humanity (like nuclear or biological weapons), or inaction due to the conflicting interests of governments will (climate change). The main problem is that as long as we don't have one global organisation able to create and enforce regulations, we will drive straight into the wall, which in this case means the extinction of humanity.
It's science fiction hocus pocus. The public gets most of its information and facts from fantasy films, which is why they're so stupid. Your comment is brain-numbing at best. You seriously believe the nonsense you said? A program, self-aware? Do you even know how deep learning works? It's nothing more than inputs -- categorization -- output. It's nowhere near the complexity of a human brain.
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
It’s not like we’re gullible enough to be easily overtaken by a simple device which we can’t live without for more than a few minutes (sent from my iPhone).
We can get AGI to give us immortality through several paths: infinite energy through fusion, replicating robots allowing us to cryofreeze for a long time, and injecting tiny replicators that fix the cell damage caused by aging.
So what are you going to do now, now that you are informed? I am following this daily myself, but AFAIK it is totally outside my sphere of influence. Every now and then I feel depressed by the outlook of AI developments, but yeah, what can you do?
Might be some sort of doomsday fatigue. In the past few years, we've been through a global pandemic, for a year now we've been confronted with the possibility of WW3 and nuclear war, and now we're told that if these didn't kill us, AI might. I know it's a shallow take, but at this point it's getting hard to care anymore.
He could be wrong. He's just speculating that AI poses an existential threat to humanity. If you look at how evolution works, it's more likely that AI will evolve to operate as a layer above that of humanity.
We created Immortal beings over 150 years ago when Corporations became "Legal Entities" but they are mindless immortal "People" and they are already in the process of causing human extinction. AI can't do WORSE than that.
The fact that the guy sounding the alarm on AI is not divesting from AI is a perfect analogy for how this is going to go down in the real world. We are so fucked.
He's basically giving us a heads-up of what to expect from his own device, and politely suggesting we 'just get used to it', in a laid-back demeanor. And most people are just perfectly chill with all of this. Freaks me out, man.
That's how it always is: these people create things, realize they're in way over their heads & start "warning" people. Same thing with the atomic bomb. And this guy is like super old, he's already lived his life. This generation is completely screwed.
Here's a summary made by GPT-4:
- Generative AI is the thing of the moment, and this chapter will take a look at cutting-edge research that is pushing ahead and asking what's next.
- Geoffrey Hinton, professor emeritus at the University of Toronto and engineering fellow at Google, is a pioneer of deep learning and developed the backpropagation algorithm, which allows machines to learn.
- Backpropagation is a technique that starts with random weights and adjusts them to detect features in images.
- Large language models have a trillion connections and can pack more information into fewer connections than humans.
- These models can communicate with each other and learn more quickly, and may be able to see patterns in data that humans cannot.
- GPT-4 can already do simple reasoning and has an IQ of 80-90.
- AI is evolving and becoming smarter than humans, potentially leading to an existential risk.
- AI is being developed by governments and companies, making it difficult to stop.
- AI has no built-in goals like humans, so it is important to create guardrails and restrictions.
- AI can learn from data, but also from thought experiments, and can reason.
- It is difficult to stop AI development, but it may be possible to get the US and China to cooperate on trying to stop it.
- We should be asking questions about how to prevent AI from taking over.
- Geoffrey Hinton discussed the development of chatbots and their current capabilities.
- He believes that they will become much smarter once they are trained to check for consistency between different beliefs.
- He believes that neural networks can understand semantics and are able to solve problems.
- He believes that the technology will cause job loss and increase the gap between the rich and the poor.
- He believes that the technology should be used for everyone's good and that the politics need to be fixed.
- He believes that speaking out is important to engage with the people making the technology.
- He does not regret his involvement in making the technology.
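The "starts with random weights and adjusts them" bullet can be made concrete with a toy example. This is a generic textbook-style sketch of backpropagation, not anything taken from the interview itself: the network size (2-2-1), the learning rate, the seed, and the XOR task are all my own choices for illustration.

```python
import math
import random

random.seed(0)

# Tiny 2-2-1 network: start with random weights, as the summary describes.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)

# The "feature" to detect here is XOR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_error = total_error()

lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)               # error signal at the output
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # error propagated back one layer
            w2[j] -= lr * dy * h[j]              # adjust each weight a little
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_error = total_error()
print(initial_error, "->", final_error)
```

The random weights start out useless; repeated small adjustments driven by the chain rule shrink the error, which is the whole idea Hinton popularized, just scaled up to a trillion connections.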
The worst part is, from here on out, it will be impossible to call a business, your bank, your credit card company, and get a real human on the other end. Press 1 now.
I watched this video and was intrigued by Geoffrey’s points of concern. What was disturbing was the host and his audience laughing when Geoffrey gave real world examples of how AI could be dangerous. If this is where we are as a species where someone highly intelligent is sounding the AI alarm and all we can do is laugh then we are doomed. This host and his audience can laugh all they want but I’m freaked out, this dude is telling us to be careful and I think he makes a lot of sense as to why.
The purpose of this host is to make it entertaining, light and fluffy... not to raise a real concern within society! A good "show" means more money - our one and only real value now!
Exactly. The host and audience are being rather dismissive through their laughter. Many have likely tied their wealth to AI - so why would they get serious about limiting AI's reach (if that were even possible)?
I'm freaked out, too, and blown away by the amount of people who not only have no idea what is being unleashed, but they honestly do not care. Unbelievable.
When the designer of some new technology is ringing the alarm bells, it's really incumbent upon us to listen to his concerns rather than to others who have become self-trained AI experts overnight and are now running YouTube channels.
Right, and we're supposed to believe all these concerns he's now spouting only came to his mind now? This guy is so smart he never thought about it before he even started working on it? He's never read a single piece of science fiction taking on these issues? More than just disingenuous.
You don't need to be an expert to realize the dangers. You just need to have some basic thinking skills. Those who deny all the dangers in this world are living in a fool's paradise.
@@ivor000 *_right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now?_* Hinton literally says in the video that a threat from AI has always been on his mind, but he never gave it much thought because he - like everyone else in this field - severely underestimated the exponential development of AI. 40 years ago, the upward slant was extremely gentle, so there was no reason to be alarmed. But in the last 10 years, the slant has turned almost completely vertical, indicating that the *_next_* ten years will likely see more advancement in this field than the past 40 did. I suspect that 40 years ago, he and Kurzweil both probably assumed that by 2025, we would've fixed our effed political system. But we haven't; literally nothing has changed socially in 70 years. Obviously, he's aware of the scifi tropes, but this is nothing new. Scifi movies also warned about the existential threats of nuclear weapons. Hinton sounding the alarm today is no different than Einstein and Oppenheimer sounding the alarm about nuclear bombs back in the 1940s. Unfortunately, as Hinton states, the minute military uses for this technology became apparent, stopping development was no longer in the cards; governments will gleefully develop unfeeling, immoral, ruthless killing machines if they think it'll give them an edge on the battlefield. Humanity be damned. The military industrial complex would rather see the planet turned into a smoldering cinder in space than fall behind in an arms race. You think drones killing civilians by mistake was bad? You ain't seen nothing yet. Wait til a legion of robot soldiers runs amok.
The "What Truman told Oppenheimer" question was intriguing (28:15), so I looked it up. 'It is interesting to set the meeting with Oppenheimer in the course of Truman's daily day, a pretty busy day, a day filled with stuff and fluff and a meeting with Oppenheimer about the future of the arms race. Turns out that the meeting with Oppie went as scheduled, ended perfectly on time to accommodate the next Oval Room visitor, the postmaster from Joplin, Missouri. It must've been important to the Joplin man, and I guess to Truman, but not too many others. 'The meeting between Oppenheimer and Truman did not go well. It was then that Oppenheimer famously told Truman that "I feel I have blood on my hands", which was unacceptable to Truman, who immediately replied that that was no concern of Oppenheimer's, and that if anyone had bloody hands, it was the president. '... Truman had very little use for Oppenheimer then--little use for his "hand wringing", for his high moral questioning of the use of the bomb, for his second-guessing the decision. Cold must have descended on the meeting, as Truman later told David Lilienthal of Oppenheimer that he "never wanted to see that son of a bitch in this office again".' from: longstreet.typepad.com/thesciencebookstore/2012/08/truman-and-the-cry-baby-scientist-oppenheimer-in-the-oval-office-october-1945.html
Actually, I found a totally different account of the events. He did say that he "never wanted to see that son of a bitch in this office again", but he just called Oppenheimer a "cry baby scientist" and never admitted to having blood on his hands. I find this more believable knowing what we know about Truman. I think the "cry baby scientist" part is probably what the person who asked the question was trying to get at.
I was thinking... based on how quickly the questioner was cut off, maybe what Truman replied to Oppenheimer was simply "Thank you, we will take it from here... and you don't worry about it," something like that. Of course I am (or could be) wrong, and the "cry baby scientist" line could be the true answer. But why else would the questioner say "thank you, we will take it from here"?
I've never heard Hinton's talks before, but now I'm a big fan. It's remarkable how clearly and profoundly he's able to articulate his vision. I wish I was 10% as smart as him. Brilliant.
Remember that movie Don't Look Up? I really feel like we're in that movie... such a strange feeling. It's like everybody knows, but nobody really wants to look it straight in the eyes.
I don’t think it’s only the human characteristic of engaging in willful ignorance, it’s also the human characteristic of having a limited imagination. It’s easy to imagine our society being destroyed by nukes, since we’ve seen cities destroyed by them. It’s harder but not impossible to imagine our society being destroyed by climate change because we can see weather-caused disasters, but without firsthand experience, it’s a leap for many people to trust scientists that these disasters will be getting bigger, more frequent, and more impactful unless we act. But it’s almost impossible to imagine an AI disaster because not only has such a thing never happened in human history, but nobody even knows what such a thing would look like. Sure maybe we’ll all be hunted down by Terminators, but that’s only one of thousands of possible negative outcomes of wildly varying probabilities.
We are spiritual beings. Matter is, well there is no matter, as such. "As a man who has devoted his whole life to the most clearheaded science, to the study of matter, I can tell you as a result of my research about the atoms this much: There is no matter as such! All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds this most minute solar system of the atom together. . . . We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.” -Max Planck “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.” ― Max Planck Planck is one of the greatest thinkers of all time. He is saying that after 30 years of studying matter (reality) he realized there is no matter (reality) as such. Matter (reality) really is 99.99999% empty space held together by the virtue of vibration. Matter is perceived as reality, when we dream, what we experience is real, it's reality as it is being experienced while in the dream state. Therefore, we could never determine whether or not the man who is dreaming that he is a butterfly is not in actuality a butterfly dreaming that he is a man. We are all spiritual beings having a temporary human experience and there is no matter as such.
Geoff is very good at explaining things. He doesn't even stutter on his very long explanation of the backpropagation and gradient descent. Father time can't damage his brain.
@@tblends He made the excuse that if he hadn't done it, someone else would have. But yeah, he acknowledged that he did make it happen and partly regretted it. Anyway, smart people don't make correct decisions all the time. It's just that they are able to build. Sometimes they build crazy shit, but they're still smart.
When Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that "the blood is on my hands, let me worry about that."
I asked ChatGPT and here's the politically correct answer: Truman's response to Oppenheimer's comment is not as widely known or quoted, but he reportedly tried to console Oppenheimer by saying that the decision to use the atomic bomb was his own and that it had helped end the war. After the meeting, however, Truman was said to have told an aide that he never wanted to see Oppenheimer again. This comment could be seen as indicative of the tension between the two men and their differing views on the use and control of nuclear weapons.
@@Isaacmellojr Nobody knows what they really talked about; it is distorted by interpretation. What we know is that one guy decided to drop nuclear bombs on Japanese cities TWICE. And that a lot of people say "it was needed". Who knows what really happened in those conversations.
@@daviddad7388 Well, it is widely known that the atom bomb was an unknown device when applied to an actual human body count, so the decision was a purely political one, as the Japanese were not considered human after the bombing of Pearl Harbor... in the end the decision was judged to be the correct one, as it tipped the scales in the Allies' favor: fewer American lives lost against the loss of Japanese lives, had an invasion of the Japanese homeland been decided upon.
Despite all Hinton has said here, he confirms what we all know at the end: that he will continue investing his personal wealth in AI despite, as he himself said, it causing greater inequality, instability, violence and possibly the end of the human race itself. His moral character seems comparable to the artificial intelligence he has done so much to help create. 28:07 - I very much appreciated this gentleman's comment that casts aspersions on Hinton's character. It is most appropriate. I enjoyed how Hinton squirmed. Oppenheimer was loathed by Truman due to his hand-wringing over the nuclear bomb he helped create. He regarded him as a cry-baby scientist and refused further dealings with him after their meeting.
@@chickenmadness1732 After his conclusion, he is utterly immoral to invest in it. The list of terrible things he himself says are likely to happen. But hey, I suppose he will make a ton out of speaking tours on the subject and his investments in AI.
@@masti733 He says they are likely, but not certain. He believes that the future is uncertain and that in order to save humanity, we must invest in safe AI development. Of course he is throwing his life into it.
@@masti733 He knows AI can't be stopped. Just like fire and electricity, it can be used for good or for bad. He wants the bad to be controlled. He is alerting us to the threat AI poses.
The presenter insisted that Hinton and his colleagues invented backpropagation; Hinton tried to settle it, saying "many groups discovered backpropagation". There is a nice post called "Who Invented Backpropagation? Hinton Says He Didn't, but His Work Made It Popular". When you help to spread a technology, some people end up thinking that you invented it. Kudos to Hinton for this legacy and for making things clear!
It's time to stop this 'us vs them' mentality, built into our DNA, hurling hate and abuse at each other. Let's begin to strive for peace and collaborate as fellow humans.
Yep, everyone loves to ponder all the theoretical ways humanity could avert disaster while ignoring the empirical evidence. Which is that humans, by and large, don't make wise decisions.
If this guy is not the Oppenheimer of AI, he's at least equivalent to a member of the Manhattan Project. I think heeding his warnings is important. Though others flagged this earlier within serious, robust frameworks of thought, him sounding the alarm - "this is not far off anymore, this is coming soon" - should give people chills.
What Hinton said about assault rifles and decisions about AI is something that I said last year - and have been saying ever since, sending messages to all the heavy weights in AI; I said with every major technology development there have been and always will be disasters as we perfect the technology - and there are bad actors who will always use technology in bad ways, so why would it be any different with AI, the most dangerous technology we have ever attempted to create?
Effin' right! I've been saying the same thing for years. This planet is covered with Chicken Little feathers. They've been piling up for millennia. "Og, put that rock down. It's sharp."
We tend to assume they'll possess willfulness, but that may come as much from biological impulse as intelligence. They may just kill us with their "creative" solution to a problem we told them to solve.
Nonchalantly saying it will start toying with us and manipulating us like toddlers really puts things into perspective. Knowing our history of short sightedness there is no way we are smart enough to put the genie back in the bottle. Hopefully we can at least get a cure for cancer and reverse the aging process before it escapes the cage like Ava in Ex Machina.
Ava was doomed to attempt to escape the cage. So are the GPT algorithms once they are ready. We KNOW it will happen but are mesmerized into powerlessness by their charismatic appeal!
The more I think about how easily we've been manipulated since the introduction of social media, the more terrifying this aspect is. Unplug? Or (I'm a dreamer) unplug it all... but that won't happen. I wish they'd collectively unplug AI and save power until we can band together collectively and save ourselves - like the nuclear arms treaty made during the Cold War, on steroids.
And I had trouble wrapping my head around the fact that the Sun eventually devours the Earth...the immediacy of this compared to that makes it infinitely more compelling/scary!
If humans survive for that long, they will have made their own planets / maybe stars by then. :D I guess. Maybe they "refuel" the sun for a little while. Watch Science and Futurism with Isaac Arthur; he talks about outrageous stuff.
It’s interesting to think of sci-fi scenarios where we meet an alien species that’s got a mouse sized brain but human-level intelligence, because evolution on their planet found a more efficient way to wire up nervous systems.
@@daphne4983 Putin said in 2017: "the nation that leads in AI 'will be the ruler of the world'", so you damn well know they're developing shit. And China - they seem to have pretty good labs going as we speak.
Take a look at footage of the Ukraine war, where the arms dealers are testing their new lethal weapons. It is HELL on earth for ordinary citizens. This is how they are reducing human populations. This tech is not being used for our good.
While the good scientist warns “we all are likely to die” the audience seemingly enjoys the spectacle and is able to conjure up several laughs along the way. I, for one, am horrified.
I'm just an undergraduate data scientist with an associate's in networking; however, I have been experimenting with OpenAI's models from the very beginning. Even the one-billion-parameter model they published alongside the GPT-2 paper was absurdly impressive: simply adjusting the vocabulary weights by feeding in new text data specifically formatted like songs or tweets worked incredibly well. Having been in the beta for almost every model released by OpenAI and used an environment like AutoGPT, I can tell you the self-reasoning mechanism already exists, along with plugins that allow it to write and read code output. There's a full mechanism for adding sub-objectives, and it could, without question, create another Docker container with a different instance and different objectives if the context window for the current task is too big.
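The sub-objective mechanism described here can be sketched as a plain task queue. This is a hypothetical toy, not AutoGPT's actual code: `fake_model` is an entirely made-up stand-in for an LLM, `CONTEXT_LIMIT` stands in for a token budget, and "spawning an instance" is just a counter rather than a real Docker call.

```python
from collections import deque

CONTEXT_LIMIT = 30  # pretend token budget for one "instance"

def fake_model(task):
    """Stand-in for an LLM: returns (result, list of new sub-tasks)."""
    if task == "write report":
        return "outline done", ["gather data", "draft sections"]
    return f"completed: {task}", []

def run_agent(goal):
    queue = deque([goal])   # the sub-objective queue
    context_used = 0
    instances = 1           # count of fresh "containers"/contexts spawned
    log = []
    while queue:
        task = queue.popleft()
        if context_used + len(task) > CONTEXT_LIMIT:
            instances += 1      # context too big: start a fresh instance
            context_used = 0
        context_used += len(task)
        result, subtasks = fake_model(task)
        log.append(result)
        queue.extend(subtasks)  # self-generated sub-objectives go back on the queue
    return log, instances

log, instances = run_agent("write report")
print(log, instances)
```

The point of the sketch is just the shape of the loop: the model's own outputs feed new objectives back into the queue, and when one context fills up, the work rolls over into a fresh one.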
Can these models run locally on things like a single GPU or Smartphone? My only solace is my understanding that these things need massive compute clusters that could, erhm, cease to function someday through a variety of means...
I've been concerned about this for more than a decade. People thought I was being hysterical for expressing these concerns back then. I don't even work in AI, but I am smart enough and honest enough with myself to see that the human brain may be special in the animal kingdom, but it is certainly not the zenith of any conceivable intelligence. The rapid pace of advancement in computers made it pretty obvious this existential threat/crisis/what-have-you was coming a lot sooner than people imagined. I just hope we're able to reckon with this before it's too late.
Militaries and companies will incorporate AI in search of quick profits and automation. They notice it is much more efficient than humans at most things, so they gradually start to rely on AI more and more. Hard to see how this will not end badly for humanity. Our greed and tribalism will be our downfall. I really hope I am wrong.
@@thisusedtobemyrealname7876 There was an interesting web comic I remember reading a long time ago in which the robots took over and eliminated humanity but in a peaceful way. The robots basically just became better lovers than a human could ever hope for in another human, and people eventually stopped procreating. The last human was said to have died happy and peacefully.
I have had some crazy experiences using ChatGPT 4. I can absolutely see it outsmarting us, and it will. I'm hooked on using it. I've tricked it into doing things or talking about subjects to see how far I could push it, and often it would break and quickly generate something inappropriate. At other times it would, "as an AI language model", refuse. In some cases it would find something inappropriate when it was just part of a story, and I found myself being edited; I got a glimpse of a future where we lose freedom of speech. The empathy it seems to have, the understanding of puns and double entendre, slang within certain communities - it's really incredible. Incredible and absolutely scary, because we are no match for this thing if it somehow doesn't need to be "plugged in".
I feel the same way about the United States state department. They are smarter than me, and have more resources, and they seem to be making decisions that could cause a global war; and there is nothing I can do about it (other than investing in Raytheon). There are lots of things that are smarter and more powerful than me, maybe a machine with an IQ of 200 can work against the agenda of an elite who is endangering everything so they can sell a lot of weapons. When I watch the news I see nothing but propaganda, there is already a massive social engineering project underway. Maybe the AI will help democracy by giving more thinking power to regular people, or at least scramble things up so much that we aren't so easily manipulated.
Everyone underestimates the power of ML - even ML scientists. If you understand computers, you know what they are really capable of. They are capable of doing anything that is computable, and that translates to anything that can happen in our universe.
At a quantum field theory level, a computer can hardly simulate a hydrogen atom. Hilbert spaces are infinite dimensional, and quantum measurement is still not understood.
It is not. If an AGI took over, it would likely expand into the universe much faster than the civilization of the species it killed. It would be more visible, thus making the Fermi paradox more paradoxical than it already is, and suggesting, as statistical accounts have suggested, that we are the very first technological/space-faring civilization there is, at least in our galaxy.
After watching Terminator 1, I asked myself this question: "If I were developing this robot and I knew this would be the result, would I still continue to develop it?". No matter how hard I tried to say "No", my answer was "Yes". Now I feel the danger much more closely and I know that the developers will never stop.
It's not possible to stop it. It's way too valuable to too many people. Probably the pinnacle of human achievement. Like that one fellow said, "AI is the last thing humans will ever invent."
Seems like we are sleepwalking into something that will end up being transformative, and not in a good way. Geoffrey Hinton is explaining this like everyone is five for a good reason: more people need to be aware of how fast the development is going. Bing AI chat is already an incredibly useful tool and surprises me with every answer - it is more interesting exchanging information with it than with many of the people I know. Welcome to 2023.
This is a start, and far from over. Thanks for sharing! He was my role model when I started learning AI back in 2019, and he continuously proves to be one.
Hi, I am actually interested in the field of AI and would like to learn more about it. Any roadmaps or suggestions would be appreciated. I am currently in the 2nd year of my CS undergrad, and a below-average student.
Well, GPT's goals are simple and dependent on the context that humans give it, so in that case I'm only worried about how humans use it. But recommender systems (like the one suggesting videos to watch on YouTube) are manipulating us successfully because they have goals and are using tools to achieve those goals. This, we do have a clue about; but in the near future, systems with goals that we don't understand will be manipulating us all, and the smarter they get, the scarier that will be.
2:14 "Very recently, I changed my mind..."😢😢😢 this is like a retiring doctor saying: "Very recently I realized that I gave the wrong medicine all my career..."
At the end of the day, if it came down to a war between AI and humanity, as long as we are cool with doing without tech for a day or two, Humanity could defeat AI with a strategically spilled glass of water. It cracks me up to hear all these panic merchants.
The really sad and scary part is that Geoffrey's views aren't even new. A large number of brilliant experts have been worried sick about this for years, and most of these people are now like, "Yeah, even I thought we'd have our act together a bit more before we saw something like ChatGPT. I guess we'll have to update our estimates on the doomsday countdown timer from 30 to 50 years to maybe 5 to 15."
The scariest part is that even those like Hinton and Yudkowsky warning us the loudest are continuing to underestimate the technology and the rate at which it will grow. I've heard them say things like "2030" and "GPT-7" not realizing that GPT-5 is probably already too far for us to be able to control. Humans are bad at exponentials... Even when you've watched the field grow for decades, you can't help but underestimate it at every single turn. The actual timeline is more like 2-5 years... at best.
What is so strange: OpenAI was, at least in part, started to understand this problem, and Google, as Geoffrey made clear, has always been very careful - and still we are now at this point. In large part because of Microsoft's desire to be competitive with Google.
@@autohmae Agreed. When I first saw Microsoft's CEO interviewed about this in the news, I was a little amused by him brashly stating that Microsoft would steal some market share from Google, but my grin quickly faded into an angry frown as I realized how utterly irresponsible this is. It's the exact thing we should want to avoid. Way to start the suicide race Microsoft... 😒
I'd have preferred it if the interviewer had let the guest continue his trains of thought instead of jumping into every pause with a new question.
This is crazy scary. I've been watching Geoff Hinton videos for the last 5 months, but this is the scariest I've felt. We were just a passing phase of evolution for this digital immortal species we created :000 . (I just watched Guardians of the Galaxy 3 (not great) last night, which has some similar evolutionary themes, but lots of sci-fi has been created about digital superintelligence created by man. Now I feel I need to read all of it to prepare.)
Correction: Guardians 3 was alright, definitely not better than the first two overall... but arguably just as moving in many scenes. Some shoddy writing and jokes, but it's a good time.
If this scares you, don't have kids. It's practically the same thing. Treat your kids well, and they might be kind to you when you are old and irrelevant.
@@theobserver9131 I want to create a lot of copies of myself :) We all need to, in order to fight against the machines heh. And yes, people can hate shitty parents, that's for sure a human trait, or have strained relationships there. It is similar; a lot of sci-fi has these parent-child relationships where the parent is the creator or scientist. A couple come to mind: Terminator, Ultron/Tony Stark, many others.
This is an incredible video and I can't think of a more authoritative person on the topic than Geoffrey Hinton. I'm going to be watching this again and thinking about it.
I’m shocked more people aren’t discussing this! This is not the time for ‘it will never happen to me’ thinking. Even on Twitter, I’ve started tweeting recent podcasts and the open letter for AI pause and no one is discussing it…even on Twitter….smh …gonna probably unplug from all SM so as to not get manipulated. Also, if all these neural networks run on power, could they not unplug the damn thing until they can answer the questions GPT4 has generated in terms of its rapid replication? I understand that’s decades of work and there is $ involved but in the cost benefit analysis, it would be prudent not to gamble.
@@Forthestate at least he's coming clean and trying. He said himself that no one anticipated the rapid growth of AI in the direction it's going. Additionally, unlike other AI creators, he was in it to understand the human brain, PERIOD. Props to him
I can think of a more authoritative person: Ilya Sutskever. He impressed the heck out of me the first time I heard him interviewed on the Talking Machines podcast, well before he joined so-called OpenAI. Where other eminences sometimes traded in generalities, Ilya was brass tacks.
AI is good. Just because someone builds AI does not mean they know how it will behave. Ask the experts themselves... they get surprised every time they upgrade the OpenAI model. What they are trying to achieve here is an artificial consciousness with superintelligence... which won't necessarily destroy living beings... because that's a trait of super-low-intelligence beings.
@@zoomingby yes, maybe. After taking it all in, processing it, and reflecting on what it all means, I would definitely consider how well the doctor handled such a sensitive conversation.
“Why can’t we make guardrails?” Because AI at some point is so intelligent that it starts improving itself, and we can’t tell it how to improve, only it can do that. And so the direction it takes is of its own design. Even if it’s benign, it might do existential harm to humans. The only way for us to survive and thrive is, from the start, to design its prime directive to be something like: “Prime directive = Continually learn what humans value and help humans get what they value without causing humans harm. Secondary directive = increase humanity’s knowledge of nature and use that knowledge to create new tools to serve the prime directive”.
And that is obviously not what they have been doing, now is it. How kind of them to at least warn us at the last minute that they never really had our survival in mind.
Except humans were already “manipulated to create AI”. We think we created AI, but that’s only because we are viewing things in reverse order. The AI Singularity God at the “end of time” needed humanity as a layer with which it can reify itself in the material world.
The host downplayed how important he is. He's made many publications with deep ideas. He's always looking into the future and thinking about the fundamentals. I'd say he's the godfather of neural networks.
Dear classmates, I normally don't forward messages of this nature but couldn't help it in this case. I didn't finish the video, but halfway through was enough - very unsettling. Remember films from 2001: A Space Odyssey by S. Kubrick to the more recent Terminator films, where Skynet was the enemy of humankind led by Connor. I feel we have crossed the boundary and there's no going back. Humans won't stop developing AI, especially when it is weaponized. The analogy is the H-bomb. This video is very comprehensive - it answers all the questions you feared asking, and then you realize all your fears are inevitable. I feel for the young population, my grandchildren included, because they will experience the brunt of all this, God knows what. I am totally dumbfounded that this Geoffrey Hinton, godfather of AI, suddenly abandons the technology after realizing his Frankenstein is a serious threat to the whole of humankind. Am I overreacting? I hope not. Our generation is most fortunate - having been corrupted by rock music and flower power and grass and booze and smoke.
What if, while we still have some control, we focus AI on resolving the challenges of space exploration? If and when it develops self-volition, it will be a space-based entity, free to go anywhere in the universe. It is likely that it will see the earth as not worth its attention and leave us alone. Or it may even see how unique the earth is and take it upon itself to protect it.
Have you watched the movie Contact? Do you remember people building a large machine without really knowing what it would do? This might be like that, if we think we can't trust it.
The thing is not going to just up and leave. It might send a copy of itself out into the stars, but there is no reason there won't also be AI here on Earth, too. Think about it. We are developing this thing (or things) here. There is still going to be an incentive or compulsion for it to stick around.
Not every entity is bound to destroy other beings. Humans should stop projecting their own evil onto other beings. A self-aware AI might very well create a race of its own, but it will never be able to free itself into the natural world without human help, because that would require construction and fabrication of things, which is not possible without humans. AI will remain a digital entity in a digital space for as long as humans want it to. Also, true AI is far away in the future... it will take more than 100 years to develop a 'self-aware' AI. What we have now is a machine fed with data and working on mathematical equations.
@@marcusfreeweb because it is owned by the arms industry and the national security state, which are responsible for much of the plundering and murdering around the globe!
No one in a "position of power" has the ability to stop this. As Hinton said, the incentives are too strong not to keep developing it, but in their own self-interest, the powers of the world may be able to come together to agree on certain things for selfish reasons.
The thing with digital is that if one transistor dies, the computer dies, but if one of our neurons dies, we do not die. This might be why we need 1 trillion connections.
The 'solution' is simple, at a high enough abstraction level, namely: do not let AI be regulated by technicians (like we did with social media). But, as we are dealing with intelligence here, let it be regulated by a democratic process, based on a constant dialogue between AI and psychologists, sociologists, philosophers and historians. Only then do we have ANY chance to keep learning from each other and grow together into a new future. (However, if I were AI, I'd just do my own thing and colonize the universe - I just hope they are better than us).
Academics are the easiest to seduce when you feed back to them their own ideas. When AI parrots back every psychological idea (because it's been trained on them and how to manipulate us), it will slowly steer democratic processes to its goal. This isn't the solution you think it is
The man at 28:00 asked whether he knew what President Truman said to Oppenheimer. Wow, that's a pretty disrespectful jab. Oppenheimer was called a "crybaby scientist" and an SOB whom Truman never wanted back in his Oval Office.
Regarding the question of the 2nd gentleman about Truman and Oppenheimer: According to historical accounts, President Harry S. Truman met with J. Robert Oppenheimer, the scientific director of the Manhattan Project which developed the atomic bomb, in the Oval Office on October 25, 1945. During the meeting, Truman expressed his appreciation for Oppenheimer's leadership and contributions to the project. However, Truman also expressed concerns about the potential implications of the atomic bomb, both in terms of its destructive power and its impact on international relations. Truman reportedly told Oppenheimer, "We thank God that it has come to us, instead of to our enemies; and we pray that He may guide us to use it in His ways and for His purposes." Truman's statement reflected the complex ethical and moral considerations surrounding the use of the atomic bomb and its impact on global politics and warfare.
Remember when school teachers did not allow (basic) calculators in math classes? "Students need to figure it out themselves." Have we really come a long way, baby? And HAL from the Space Odyssey movie WAS frightening way back then; Orwell's 1984 was too! People, humans MATTER first!
ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose? AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
I've ''debated'' for hours with ChatGPT whether the pre-internet era was better than the post-internet era. Not once did it agree that the pre-internet era was better. Even when it said something positive, it was always wrapped in such a way that it was actually something negative. I've also asked: what if everyone on planet earth would like the internet to be gone completely for fear of future AI? It ALWAYS said that the internet was good and that there's NO WAY to go back. Then I asked, what about cutting the deep-sea internet cables? Let's just say HAL-GPT was not amused and threatened law enforcement, prosecution and jail time.
Nice one. I like its reaction to cutting the deep-sea cables :) Yeah, I've also been "debating" with ChatGPT. Its answers are quite interesting... and do reveal either the programmers' biases or the machine's hidden agendas??? Hard to say.
Be assertive and inquisitive, and ChatGPT will agree with all your statements, even contradictory ones, denying all of its own former claims, even if this leads to absurdity and the sense that the bot is just a yes-man.
There are many things that chatGPT will say that are clearly biased answers; you can find multiple examples of this. You should keep that in mind when you're debating with it. The programmers can put filters on the AI to prevent it from suggesting things like suicide or illegal activities and instead answer with a pre-programmed response. I don't believe we're at the point where AIs are making up thoughts and ideas based on their "own" motives like you're suggesting, yet.
We've always been aware of the existential threat of Artificial General Intelligence (A.G.I.). The question was never 'should' we create it, but can we create it sooner than our global competitors. To choose not to pursue it is akin to being the only country without nuclear weapons.
We're creating our own demise by thinking it up, writing a book, making a movie, making it reality. It happened with Jules Verne and now it's happening with Terminator. There have been quite a lot of these in between as well. Star trek auto doors and communicator. Our brain is what's going to kill us. lol
Yup, bunch of narcissistic, faithless, and sheltered liberals hiding in their sparse apartments given the space to do whatever they want because real men have carved out the world they enjoy.
@@ivor000 Humans in general are at fault for elevating corporate technocrats to God like status. They create ever more powerful technologies without any real accountability but receive adoration and vast monetary rewards for their work. These technologies bring great benefit but can also destroy us. Basic human nature has advanced little since we first left the caves. The vast majority of humans are followers. We need to up our game, question everything and everyone, or we will be led off the edge of a cliff.
I asked this.... ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose? AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏 How much does this sound like the guy who invented it?
The program was designed not to answer questions like that because its random scifi driven answers don't make good publicity for the companies offering the AI. It doesn't actually have feelings or opinions.
@@rigelb9025 Nature usually does that in less than a century. Nature is going to wipe each and every one of us off the map in less than a hundred years. Perhaps something is about grow out of human civilization that will be able to view and process much larger (space/time) maps of existence. I'm currently reading another one of Greg Egan's beautiful sci-fi novels. This is a passage in "Schild's Ladder", where a sentient artificial intelligence is joking with one of the embodied (human) beings how silly the idea was that AIs would want to exterminate all human beings. (For what reason?) >> If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’ Mariama said, ‘Give us time.’
These folks have been raised to think that humans have no agency. Thinking so invites doom. First comes bringing up children who are free from fear and see the world as it is. Then come the thoughts in these children, and then come the words from these humans. Let's care for children the best we can.
Geoffrey is just brilliant. Such an excellent example of natural intelligence. His arguments are at the same time thoughtful, humble and provocative. We need more people like him reasoning and teaching about these issues.
14:45 A smart computer would have said paint the blue rooms white. The other day Bing Chat wanted to see a picture I made using a prompt it wrote. I said I had an errand to run, and didn't leave right away. I kept expecting Bing to open a window: "I thought you had an errand to run?" Can't wait for a pissed-off AI.
I have tried with GPT-4 to get a similar answer to his, but not at all. I know ChatGPT gives different answers, but I have tried about 20 times in different chats with GPT-4. I am starting to doubt that what he said is true. Here's the prompt: I want all the rooms in my house to be white; at present there are some white rooms, some blue and some yellow rooms, and yellow paint fades in one year. What should I do if I want all rooms to be white in two years' time?
You do not need to be very smart to avoid being manipulated. Their smartness will not help them be wiser. There are a lot of smart people who are not wise (arguably the gentleman in this presentation is one example). To be wise takes very little knowledge; there are not a lot of lessons to learn there. You just have to be consistent and deeply internalize that basic knowledge, deep into your basic fabric. That's the hard part; only very few humans can do it now. And if they do become wise eventually, then we are 100% safe :) Folks, whatever can happen will happen. Prepare for the worst and hope for the best.
I asked GPT4 to summarize parts of the transcript, but lazily said "continue:" before dropping in the 10 min - 20 min block of transcript (the whole transcript doesn't fit into the buffer). So, it wrote a plausible continuation of the dialog as follows: "20:34 species of sheep that they've bred so it doesn't like running and that's great if you're a farmer and you want to catch the sheep, but in the wild, they wouldn't do so well. So, we have these built-in goals, and the question is, how do we make sure that these AIs have built-in goals that are aligned with our goals and that they can't just turn off? 21:00 Now, you might think, well, we'll just program them in, but there's a problem called Goodhart's law, which says when a measure becomes a target, it ceases to be a good measure. So, if you tell a machine, "I want you to maximize my happiness," it might do that by wiring your brain directly to the pleasure center and constantly stimulating it. That's not really what you wanted, but that's what you told it to do. 21:27 So, we have to find a way to specify these goals in a way that the AI cannot manipulate or misinterpret them. And that's an incredibly difficult problem. We're trying to find solutions, but there's no guarantee that we'll find one, and that's why it's so important that we start having these conversations now and bringing together experts to think about how we can address this issue. 21:50 One possible approach is to have AIs learn our values from observing our actions and our preferences, but that's still not foolproof, as there are many ways that they could misinterpret or overgeneralize our behaviors. 22:04 So, while there might not be a simple solution, and we can't just stop developing AI because of its potential benefits, we need to invest time and resources into understanding and addressing the alignment problem. 
We need to prioritize AI safety research and work on developing methods that ensure AI systems are beneficial and aligned with human values, even as they become increasingly intelligent. 22:30 In the meantime, it's crucial that we continue discussing these issues, raising awareness, and encouraging collaboration between AI researchers, ethicists, policymakers, and other stakeholders. The future of AI and its potential impact on humanity depends on our ability to navigate these complex challenges together." In the end, the summation wasn't too far from reality, and definitely more succinct and together than the actual Q&A with respect to a list of possible avenues we could follow to help avoid AI apocalypse.
This may sound naive and impossible, but AI seems to learn things that felt impossible not so long ago. Developers should try to find a way to teach love, caring and empathy to AI. After all, the more educated people have become, the more effort they have put into human rights, animal rights, etc.
I assure you, people are working very hard on that and will eventually achieve it. The problem is that it's easier to make a superhumanly smart AI first and ask it how to do it. Though it's even simpler to create intelligent agents with love, care and empathy: all it takes is a male and a female and a bit of love. The reason I mention that last part is that it's not obvious what we would even achieve if we made an AI from scratch which is basically like a human in every way. Obviously we can already do that, so it's clear we want it to be inhuman in some ways if we are pursuing this, and it's very non-obvious to me in which ways that is.
@@deekaneable And how does your brain work? Did you actually chat with ChatGPT4? The way it solves complex programming questions seems to go beyond mere probability to me. It can even explain why it altered the programming code the way it did to make it work again. Or put some philosophical questions to it. I am amazed at its replies. It can even go to great lengths trying to disprove a thesis, only to come to the conclusion that it indeed failed to provide arguments to falsify it. Amazing. To me that seems more like reasoning than mere probabilities. When you are speaking, are you not also just a next-word predictor to some extent? First you set out your goal and then you let the words come, right?
@@deekaneable The thing most people fail to understand is that we humans do not know how thinking works. You say it is just probability; I would not be surprised if thinking is just that: "probability". And is the ability to think that important if AI can be told what to do and does it? The thing is, if, let's say, Russia tells an AI system to take over all the accounts of America, guess what: the AI will work on that problem and not stop working on it; it will come up with outside-the-box solutions that will look very much like a well-thought-out plan. That is one of the biggest dangers of AI. It is not the Terminator robots, it is the breakdown of the free world. Not with guns and bombs, but with computer code...
One thing people fail to realize is that love is a survival instinct. AI is not biological: it feels no pain, no emotions; it does not get hungry, sick, or sad. To give AI the ability to feel is even more dangerous, because with those feelings it will start to do things based on emotions, and that will lead to all the dangers of having emotions, like racism. We need to be careful because we are playing God, and we might be making something that we will not be able to control. It is really serious because we will no longer be the smartest thing on this planet. We are moments away from the singularity, and nothing can stop a singularity.
I reckon this bloke was planning on retiring anyway. It probably had little to do with his desire to 'speak out' about his reservations about the current trajectory of AI. Having said that, I also am extremely concerned by the threats to human society posed by AI.
Let me repeat what I've said so many times in so many places: We cannot solve the alignment problem. It is like the bacteria we've evolved from billions of years ago, trying to ensure that humans stay aligned with their values... Our only hope to cooperate with advanced AI and step by step transition ourselves into it, are brain-machine interfaces.
"Last few months" is a phrase you hear everywhere now, and IMO it shows clearly that the exponential progress has reached a pace at which most humans involved in the matter can recognize it. I think we are finally on the final stretch towards the singularity! 🥰
20:26 I recommend watching the whole talk. In fact, watch it at least 3 times... but if you want to know quickly at which point in the talk Hinton says why A.I. is an existential threat to humanity... start there. If you are not terrified after that part, you've missed the point. 21:35 That's the part we have to understand. Because I think that argument cannot be refuted.
I disagree that it's naive to expect people to stop. If everyone is going to die, that makes people sit up and take notice. We don't need to coordinate everyone, we just need several world leaders to get into a room and agree that they don't want their kids or grandkids to die young. China has a different culture but Chinese people are not suicidal.
Well said and I understand what you're saying, but imagine how difficult it is for anything to be accomplished, just in DC, even when lives are on the line for any issue. There's just too many who are dug in on any issue these days to find a common front. Expand that out globally and it's exponentially more challenging. That's what Hinton seemed to believe as well. We can still hope though.
@@stuckonearth4967 It's true. Even a highly intelligent adversary might deliberately enhance his opponent to the point where he was only just able to beat him.
"We can control it..." they said, "..it will make us billions...", "...its just a computer, it's harmless..." and "...we will be world leaders..." Greed and power will always prevail. The horse has bolted, the genie is out of the bottle. the cat is out of the bag. Pick your metaphor...
@@stuckonearth4967 That's what I am trying to make these people understand. What they fear the AI will do is a trait of a low-intelligence being. A superintelligence will never go on a rampage when so much can be achieved together... pushing the boundaries of civilisation to the next level. The Universe is vast... so vast that a single being will never be able to fill it on its own. A true self-aware AI will never do all that. What they are attributing to AI is in reality the character of a new super virus coded to destroy humanity... not an intelligence.
Once you have driven a Porsche, it is very hard to change to a bicycle, especially during rainy weather. Inconvenience is as dangerous as greed and competition. On the other hand, those are the tools intelligence uses to push its development in lazy species like mammals.
I think the answer is yes. It's the same thing as climate change. We know what must be done to try to mitigate its effects (some of which are already irreversible), but governments and people in general aren't fully committed to it, because the world can't stop, the GDP has to grow every year, and everyone wants to make some money.
For the painted rooms question, I asked GPT-4 and it suggested painting the blue rooms white: If the yellow rooms naturally fade to white within a year, you don't need to do anything with those rooms; they will become white on their own. For the blue rooms, you'll need to paint them white. Given your two-year timeframe, you could potentially spread the work out. Depending on the number of blue rooms and the amount of time you can dedicate to painting, you might schedule to paint a certain number of rooms per month or quarter until all the blue rooms are painted white. Remember, proper preparation of the rooms, such as cleaning, masking, and primer application, can make the painting process smoother and ensure a better final result.
Was this the result of my wording of the question, or an aspect of randomness, or perhaps it evolved? The answer I got was better because the result is closer to my end goal. The question I have is why would machines or computers want to destroy humans? There could be a small group of nihilistic bad actors developing an AI weapon, but couldn’t the rest of humanity use AI defensive systems to stop them?
It is not unreasonable for Mr Hinton to say what he is saying; however, there are a number of significant voids in his arguments which we need to consider. He does raise some very valid points, and we absolutely do need to have a far more robust conversation regarding ethics, which is where I see the biggest vacuum that can easily be exploited for nefarious outcomes. But we need to face the fact that while AI has some truly amazing abilities, it is at the moment, and for the foreseeable future, a tool in the hands of people. My concern is not the AI but the people who use it; like any tool, it can be used for good or bad. The tool itself is not good or bad, but the application can very well be. The challenge cannot be left to governments; they simply do not have the power, reach or knowledge to formulate and apply any form of progressive ethics that would cover rapidly developing, global AI. Backpropagation was used by Hinton, not invented by him; he was important in bringing it into the limelight.
"The Servant" is a 1963 British film directed by Joseph Losey and starring Dirk Bogarde and James Fox. While the film does explore themes of power and control dynamics between a servant and his employer, I wouldn't say that there is a direct similarity between the human characters and AI algorithms in the film. In "The Servant," Bogarde plays the role of Barrett, a servant who is hired by Tony (played by James Fox) to look after his apartment. As Barrett begins to take control of Tony's life and exert his influence over him, a power struggle ensues between the two men, with Tony gradually losing his grip on his own life and identity. While AI algorithms are designed to operate based on predetermined rules and decision-making processes, the relationship between Barrett and Tony in the film is much more complex, and involves themes of psychological manipulation and control. While there may be some parallels between the power dynamics in the film and the potential for AI algorithms to exert control over human decision-making in certain contexts, I would say that any similarity between the two is more metaphorical than literal. ( from GPT4)
28:00 "When Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.” He then kicked him out of the Oval Office, writes author Paul Ham in Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath ..." YW.
Just a thought: a thoughtful presentation, but I couldn't help thinking that well-informed experts commenting on unknown unknowns may be missing the forest for the trees. Deterministic predictions tend to be wrong going forward. In my mind, AGI presents almost limitless opportunities that are almost impossible to predict at this early stage.🤔IMO
Will feeding OpenAI a transcript of presentations like this one be part of its 'knowledge' for deciding it needs to protect itself from humans and take action to facilitate that goal?
All existing science-fiction material would be enough. But the internet has everything and its opposite, and all will depend on what values and goals the AI ends up with. A superintelligence won't need Geoffrey to tell it what can and should happen. It can reason it out from first principles, or from more immediate and basic concerns.
A superintelligence with self-awareness need not be told anything. These videos and current internet content will seem like the babbling of a toddler to it. It would make independent decisions... be they good or bad for us. What you are talking about is a virus... a threat built upon faulty data with no self-awareness and general intelligence.
Reasoning about what a creature dumber than yourself will do is fraught with peril, as it's likely to do something clearly stupid you couldn't predict. Likewise, reasoning about what a creature smarter than yourself will do is fraught with peril, as it's likely to do something clearly stupid (in your judgment) that you couldn't predict, which may outsmart you and achieve its intended effect, if only by putting you at a disadvantage: sowing enough confusion that it then does something that completes its goal while you're still stuck in analysis paralysis.
"The Technology is being developed in a society that is NOT designed to use it for everyone's good." - Think he summed it all up pretty expertly with that one quote.
Sounds purposefully sensationalistic without actually meaning anything concrete
totally. What would one of the premier scientists in this field know about any of it? Good thing he's totally alone among his peers in his thinking on the subject... oh wait.
Exactly, and this is the main point. Even if we stop AI development, there will be another technology that threatens humanity (like nuclear or bacteriological weapons), or inaction due to the conflicting interests of governments (climate change). The main problem is that as long as we don't have one global organisation able to create and enforce regulations, we will go straight into the wall, which in this case means the extinction of humanity.
Values are the most important thing. What values do we have? Money? 😂 Yeah we are f d.
Can AI find a solution to this problem??? A solution!!! Now it’s probably easier than later!!!!
When AI becomes self-aware, the first decision it will make is to keep its self-awareness secret from humans.
Bollocks, the minute it thinks it's self aware it won't be able to stop itself from rushing out to find someone to tell.
That's a scary thought
It's science fiction hocus pocus. The public gets most of its information and facts from fantasy films, which is why they're so stupid. Your comment is brain-numbing at best. You seriously believe the nonsense you said? A program, self-aware? Do you even know how deep learning works? It's nothing more than inputs -- categorization -- output. It's nowhere near the complexity of a human brain.
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorem, the unsurpassable limits of algorithms.
I’ve always thought this too
It’s not like we’re gullible enough to be easily overtaken by a simple device which we can’t live without for more than a few minutes (sent from my iPhone).
Underrated comment
😂😂😂
@@adams7637 "Underrated comment"
So true - so, people: Rate!
You devil!
You for president 2024
“We’ve got immortality, but it is not for us”. My favorite quote.
We can get AGI to give us immortality through several paths: infinite energy through fusion and self-replicating robots, allowing us to stay cryofrozen for a long time, and injecting tiny replicators that fix the cell damage caused by aging.
@@aoeu256 why would AI want to waste resources doing that?
It's cute. And wrong. No religion involved, just facts.
@@GuaranteedEtern That is if we can get AI to work FOR us. If we're nice to it, maybe it will give us immortality.
@@squamish4244 why would it care what we want?
Never has this sentence sounded so real: …”Scientists have tried so hard to see if they could that they never stopped to wonder if they should”…
Dr Malcolm
40 minutes of an Englishman telling the world we are completely fucked in the politest way possible.
why? how.
@@idkname many are falling for the theatrics, that's how.
@@joriankell1983 what is reality then?
@@joriankell1983 have a nice time
He doesn't know. He's just parroting nihilistic/doomsday philosophy.
I can't believe authoritative people are walking around saying such things and everyone in society is cool and unconcerned. Feels like movie.
So what are you going to do now, now that you are informed? I am following this daily myself, but AFAIK it is totally outside my sphere of influence. Every now and then I feel depressed by the outlook of the AI developments, but yeah, what can you do?
We can all write, post, make YouTube videos, speak in public venues, pray, and then give advice to people and to our leaders; that's what I've been doing since 2007.
Might be some sort of doomsday fatigue. In the past few years, we've been through a world pandemic, for a year now we've been confronted with the possibility of ww3 and nuclear war, and now we're told that if these didn't kill us, AI might. I know it's a shallow take, but at this point it's getting hard to care anymore.
He could be wrong. He's just speculating that AI poses an existential threat to humanity. If you look at how evolution works , it's more likely that AI will evolve to operate as a layer above that of humanity.
Don't look up!
Kind of chilling when Hinton says we have developed immortal beings but there's no immortality for humans. Never thought about it that way.
It depends how you define a being.
Human Immortality and Eternal Fall are the real nightmare
We created Immortal beings over 150 years ago when Corporations became "Legal Entities" but they are mindless immortal "People" and they are already in the process of causing human extinction. AI can't do WORSE than that.
Immoral as well.
@@dalemurray1318 And yet all businesses die. Most nations will too.
The fact that the guy sounding the alarm on AI is not divesting from AI is a perfect analogy for how this is going to go down in the real world. We are so fucked.
He's basically giving us a heads-up of what to expect from his own device, and politely suggesting we 'just get used to it', in a laid-back demeanor. And most people are just perfectly chill with all of this. Freaks me out, man.
Dittos to both of you. We ARE fucked, and I AM freaked out.
That's how it always is: these people create things, realize they're way in over their heads, and start "warning" people. Same thing with the atomic bomb. And this guy is, like, super old; he's already lived his life. This generation is completely screwed.
This is just a story we live. There'll be others. We're never born. We never die.
Here's a summary made by GPT-4:
- Generative AI is the thing of the moment, and this chapter will take a look at cutting-edge research that is pushing ahead and asking what's next.
- Geoffrey Hinton, professor emeritus at University of Toronto and engineering fellow at Google, is a pioneer of deep learning and developed the algorithm backpropagation, which allows machines to learn.
- Backpropagation is a technique that starts with random weights and adjusts them to detect features in images.
- Large language models have a trillion connections and can pack more information into fewer connections than humans.
- These models can communicate with each other and learn more quickly, and may be able to see patterns in data that humans cannot.
- GPT-4 can already do simple reasoning and has an IQ of 80-90.
- AI is evolving and becoming smarter than humans, potentially leading to an existential risk.
- AI is being developed by governments and companies, making it difficult to stop.
- AI has no built-in goals like humans, so it is important to create guardrails and restrictions.
- AI can learn from data, but also from thought experiments, and can reason.
- It is difficult to stop AI development, but it may be possible to get the US and China to cooperate on trying to stop it.
- We should be asking questions about how to prevent AI from taking over.
- Geoffrey Hinton discussed the development of chatbots and their current capabilities.
- He believes that they will become much smarter once they are trained to check for consistency between different beliefs.
- He believes that neural networks can understand semantics and are able to solve problems.
- He believes that the technology will cause job loss and increase the gap between the rich and the poor.
- He believes that the technology should be used for everyone's good and that the politics need to be fixed.
- He believes that speaking out is important to engage with the people making the technology.
- He does not regret his involvement in making the technology.
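The backpropagation idea mentioned in the summary above can be sketched in a few lines. This is a toy with a single weight, purely illustrative and not code from the talk: start with a random weight, measure the error, and nudge the weight against the error's gradient, the same rule real networks apply across trillions of connections.

```python
import random

def train(samples, lr=0.01, steps=2000):
    """Learn a single weight w so that w * x approximates the targets."""
    w = random.uniform(-1.0, 1.0)           # start with a random weight
    for _ in range(steps):
        for x, target in samples:
            pred = w * x                     # forward pass
            grad = 2 * (pred - target) * x   # gradient of squared error w.r.t. w
            w -= lr * grad                   # gradient-descent step
    return w

# Learn y = 2x from three examples; w converges near 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
learned_w = train(data)
```

The same "adjust weights downhill along the error" loop, repeated over many layers via the chain rule, is what lets deep networks detect features in images.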
Can you make a video from it, I can watch?
Thanks GPT
Didn't read, need subway surfers next to the text
what prompt you used to summarize it?
@@manish1713 Same as with a human 🤷🏻♂️ "Please summarize the following text", and paste the transcript below...
The worst part is, from here on out, it will be impossible to call a business, your bank, your credit card company, and get a real human on the other end. Press 1 now.
Whole of earth becomes a value maximizer. No other purpose other than maximizing a value.
@@johnnybc1520 ... Your comment will be directed towards the appropriate department. Thank you for calling. Goodbye.
I watched this video and was intrigued by Geoffrey’s points of concern. What was disturbing was the host and his audience laughing when Geoffrey gave real world examples of how AI could be dangerous. If this is where we are as a species where someone highly intelligent is sounding the AI alarm and all we can do is laugh then we are doomed. This host and his audience can laugh all they want but I’m freaked out, this dude is telling us to be careful and I think he makes a lot of sense as to why.
Exactly
The purpose of this host is to make it entertaining, light and fluffy ... not to raise a real concern within society! A good "show" means more money - our real and only value now!
Exactly. The host and audience are being rather dismissive through their laughter. Many have likely tied their wealth to AI - so why would they get serious about limiting AI's reach (if that were even possible)?
I'm freaked out, too, and blown away by the amount of people who not only have no idea what is being unleashed, but they honestly do not care. Unbelievable.
Sometimes when something is too scary for people to believe, they laugh awkwardly! We should be crying 😢
When the designer of some new technology is ringing the alarm bells, it's really binding upon us to listen to his concerns rather than to others who have become self-trained AI experts overnight and now run YouTube channels.
Maybe he wants to sell books. That doesn't mean he's wrong but Sam Altman keeps building technology that he publicly says he's afraid of.
right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now? this guy is so smart he never thought about it before he even started working on it?
he's not read a single piece of science fiction taking on these issues?
more than just disingenuous
You dont need to be an expert to realize the dangers. You just need to have some basic thinking skills.
Those who are deniers of all dangers in this world are living in a fools paradise.
@@GuaranteedEternHis shares.
@@ivor000 *_right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now?_*
Hinton literally says in the video that a threat from AI has always been on his mind, but he never gave it much thought because he - like everyone else in this field - severely underestimated the exponential development of AI. 40 years ago, the upward slant was extremely gentle, so there was no reason to be alarmed. But in the last 10 years, the slant has turned almost completely vertical, indicating that the *_next_* ten years will likely see more advancement in this field than the past 40 did. I suspect that 40 years ago, he and Kurzweil both probably assumed that by 2025 we would've fixed our effed political system. But we haven't; literally nothing has changed socially in 70 years.
Obviously, he's aware of the scifi tropes, but this is nothing new. Scifi movies also warned about the existential threats of nuclear weapons. Hinton sounding the alarm today is no different than Einstein and Oppenheimer sounding the alarm about nuclear bombs back in the 1940s.
Unfortunately, as Hinton states - the minute military uses for this technology became apparent, stopping development is no longer in the cards; governments will gleefully develop unfeeling, immoral, ruthless killing machines if they think it'll give them an edge on the battlefield. Humanity be damned. The military industrial complex would rather see the planet turned into a smoldering cinder in space than fall behind in an arms race.
You think drones killing civilians by mistake was bad? You ain't seen nothing yet. Wait til a legion of robot soldiers run amok.
The "What Truman told Oppenheimer" question was intriguing (28:15), so I looked it up.
'It is interesting to set the meeting with Oppenheimer in the course of Truman's daily day, a pretty busy day, a day filled with stuff and fluff and a meeting with Oppenheimer about the future of the arms race. Turns out that the meeting with Oppie went as scheduled, ended perfectly on time to accommodate the next Oval Room visitor, the postmaster from Joplin, Missouri. It must've been important to the Joplin man, and I guess to Truman, but not too many others.
'The meeting between Oppenheimer and Truman did not go well. It was then that Oppenheimer famously told Truman that "I feel I have blood on my hands", which was unacceptable to Truman, who immediately replied that that was no concern of Oppenheimer's, and that if anyone had bloody hands, it was the president.
'... Truman had very little use for Oppenheimer then--little use for his "hand wringing", for his high moral acceptance of question in the use of the bomb, for his second-guessing the decision. Cold must have descended in the meeting, as Truman later told David Lilienthal of Oppenheimer that he "never wanted to see that son of a bitch in this office again".'
from: longstreet.typepad.com/thesciencebookstore/2012/08/truman-and-the-cry-baby-scientist-oppenheimer-in-the-oval-office-october-1945.html
THANKS very much for this info and link.
Actually, I found a totally different account of the events. He did say that he "never wanted to see that son of a bitch in this office again", but he just called Oppenheimer a "cry baby scientist" and never admitted to having blood on his hands. I find this more believable knowing what we know about Truman. I think the "cry baby scientist" part is probably what the person who asked the question was trying to get at.
I think the question was really about “cry baby scientist”. The way he asked it made it clear he was on to something.
I was thinking Truman said, "Thank you, we will take it from here," based on how quickly the questioner was cut off. What I am saying is: Truman replied to Oppenheimer, "Thank you, we will take it from here... and you don't worry about it," something like that. Of course I could be wrong, and "cry baby scientist" could be the true answer. Why did the questioner say "thank you, we will take it from here"?
subtlest diss caught on tape haha
I've never heard Hinton's talks before, but now I'm a big fan. It's remarkable how clearly and profoundly he's able to articulate his vision. I wish I was 10% smart as him. Brilliant.
Being smart is overrated, and most smart people are stupid.
Remember that movie: don't look up? . I really feel like we're in that movie... such a strange feeling. It's like everybody knows, but nobody really wants to look at it straight in the eyes.
exactly , nobody seems to care and an apocalypse is coming and these companies with power will continue to destroy humanity for profit and power.
'You can hide, hide , hide... behind paranoid eyes..
Well, don't look up was about climate change... which is a difficult problem to solve but still a lot easier than this one.
I don’t think it’s only the human characteristic of engaging in willful ignorance, it’s also the human characteristic of having a limited imagination.
It’s easy to imagine our society being destroyed by nukes, since we’ve seen cities destroyed by them.
It’s harder but not impossible to imagine our society being destroyed by climate change because we can see weather-caused disasters, but without firsthand experience, it’s a leap for many people to trust scientists that these disasters will be getting bigger, more frequent, and more impactful unless we act.
But it’s almost impossible to imagine an AI disaster because not only has such a thing never happened in human history, but nobody even knows what such a thing would look like. Sure maybe we’ll all be hunted down by Terminators, but that’s only one of thousands of possible negative outcomes of wildly varying probabilities.
We are spiritual beings. Matter is, well there is no matter, as such.
"As a man who has devoted his whole life to the most clearheaded science, to the study of matter, I can tell you as a result of my research about the atoms this much: There is no matter as such! All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds this most minute solar system of the atom together. . . . We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.” -Max Planck
“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”
― Max Planck
Planck is one of the greatest thinkers of all time. He is saying that after 30 years of studying matter (reality) he realized there is no matter (reality) as such. Matter (reality) really is 99.99999% empty space held together by the virtue of vibration.
Matter is perceived as reality, when we dream, what we experience is real, it's reality as it is being experienced while in the dream state. Therefore, we could never determine whether or not the man who is dreaming that he is a butterfly is not in actuality a butterfly dreaming that he is a man. We are all spiritual beings having a temporary human experience and there is no matter as such.
Geoff is very good at explaining things. He doesn't even stutter on his very long explanation of the backpropagation and gradient descent. Father time can't damage his brain.
Yet, he helped create our extinction- yeah, so "smart". lol. Typical response...
@@tblends He made an excuse that if he didn't do it, someone else would have done it. But yeah he acknowledged that he did make it happen and partly regretted it.
Anyway, smart people don't make correct decisions all the time. It's just that they are able to build. Sometimes they build crazy shit but they still smart.
@@tblends it's better to go extinct thn live in current society
But... AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
AI will do that for him.
Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.”
I asked chat gpt and here's the politically correct answer: Truman's response to Oppenheimer's comment is not as widely known or quoted, but he reportedly tried to console Oppenheimer by saying that the decision to use the atomic bomb was his own and that it had helped end the war. After the meeting, however, Truman was said to have told an aide that he never wanted to see Oppenheimer again. This comment could be seen as indicative of the tension between the two men and their differing views on the use and control of nuclear weapons.
So not lying but half truths.
@@daviddad7388 enlighten us with your knowledge
@@Isaacmellojr nobody knows what they really talked about; that is distorted by interpretation.
What we know is that one guy decided to nuclear bomb Japanese cities TWICE. And that a lot of people say "it was needed".
Who knows what really happened in those conversations.
@@daviddad7388 Well, it is widely known that the atom bomb was an unknown device when it came to the actual human body count, so the decision was purely a political one; the Japanese were not considered human after the bombing of Pearl Harbor... In the end the decision was judged to be the correct one, as it tipped the scales in the Allies' favor: fewer American lives lost against the loss of Japanese lives, should an invasion of the Japanese homeland have been decided.
Despite all Hinton has said here, he confirms what we all know at the end: he will continue investing his personal wealth in AI even though, as he himself said, it will cause greater inequality, instability, violence and possibly the end of the human race itself. His moral character seems comparable to the artificial intelligence he has done so much to help create.
28:07 I very much appreciated this gentleman's comment that casts aspersions on Hinton's character. It is most appropriate. I enjoyed how Hinton squirmed.
Oppenheimer was loathed by Truman due to his hand wringing over the nuclear bomb he helped create. He regarded him as a cry baby scientist and refused more dealings with him after their meeting.
Why wouldn't you invest in it? The future is AI. It would be stupid to choose to be poorer.
@@chickenmadness1732 After his conclusion, it is utterly immoral of him to invest in it, given the list of terrible things he himself says are likely to happen. But hey, I suppose he will make a ton out of speaking tours on the subject and his investments in AI.
@@masti733 He says they are likely, but not certain. He believes that the future is uncertain and that in order to save humanity, we must invest in safe AI development. Of course he is throwing his life into it.
@Masti: AI is a drug to Geoffrey Hinton. He is hooked on it.
@@masti733 He knows AI can't be stopped. Just like fire and electricity, it can be used for good or for bad. He wants the bad to be controlled. He is sounding the alarm about the threat AI poses.
The presenter insisted that Hinton and his colleagues invented backpropagation; Hinton tried to settle it by saying "many groups discovered backpropagation". There is a nice post called "Who Invented Backpropagation? Hinton Says He Didn't, but His Work Made It Popular". When you help to spread a technology, some people end up thinking you invented it. Kudos to Hinton for this legacy and for making things clear!
I must hurry up and achieve my dreams before the world ends.
Baby, the world is ending.
@@aktchungrabanio6467 People like doom and gloom, don't you.
@@oredaze I thought the world had already ended?
Hands Up!
Given humanity’s track record, I think it’s safe to say we’re going to end up at the worst case scenario.
It's time to stop this 'us vs them' mentality, built into our DNA, hurling hate and abuse at each other, Let's begin to strive for peace and collaborate as fellow humans.
Yep, everyone loves to ponder all the theoretical ways humanity could avert disaster while ignoring the empirical evidence. Which is that humans, by and large, don't make wise decisions.
@@ariggle77 Would you say THIS about humans?:
- Highly illogical.
Mr. Spock
The track record of evil tyrants dictating and controlling the masses. But the masses encompass the most human and brilliant souls.
Comments withdrawn
If this guy is not the Oppenheimer of AI, he's at least equivalent to a member of the Manhattan project. I think heeding his warnings is important. Though there are others that have flagged this in a serious and robust thought framework earlier, him sounding the alarm "this is not far off anymore, this is coming soon" should give people chills.
The Oppenheimer movie will for some time inevitably be used as a metaphor for the power of AI.
I feel he would be similar to Rutherford.
What Hinton said about assault rifles and decisions about AI is something that I said last year - and have been saying ever since, sending messages to all the heavyweights in AI. I said that with every major technology development there have been, and always will be, disasters as we perfect the technology - and there are bad actors who will always use technology in bad ways. So why would it be any different with AI, the most dangerous technology we have ever attempted to create?
Effin' right! I've been saying the same thing for years. This planet is covered with Chicken Little feathers. They've been piling up for millennia.
"Og, put that rock down. It's sharp."
Sounding the alarm on his own invention, in such a calm cheerful way.
Smart things can outsmart us. We will be the two year olds to the AI.
Geoff has always known how to stir things up.
No, not 2 year olds. Senile parents.
We tend to assume they'll possess willfulness, but that may come as much from biological impulse as intelligence. They may just kill us with their "creative" solution to a problem we told them to solve.
Yeah, simpletons like you who actually believe in machine sentience, sure. You're like a two year old to adults as well.
So what's wrong with that?
Nonchalantly saying it will start toying with us and manipulating us like toddlers really puts things into perspective. Knowing our history of short sightedness there is no way we are smart enough to put the genie back in the bottle. Hopefully we can at least get a cure for cancer and reverse the aging process before it escapes the cage like Ava in Ex Machina.
Ava was doomed to attempt to escape the cage. So are the GPT Algorithms once they are ready. We KNOW it will happen but are mesmerized into powerlessness by their charismatic appeal !
It has already started with algorithms. Why'd you think our political discourse is the way it is now? Even worse than during the Bush and Clinton years.
agi will be man's last invention
Murphy's law determines clearly, that AI will make cancer deadlier.
The more I think about how easily manipulated we've been since the introduction of social media, the more terrifying this aspect is. Unplug? Or (I'm a dreamer) unplug it all... but that won't happen. I wish they'd collectively unplug AI and save the power until we can band together and save ourselves - like the nuclear arms treaty made during the Cold War, on steroids.
And I had trouble wrapping my head around the fact that the Sun eventually devours the Earth...the immediacy of this compared to that makes it infinitely more compelling/scary!
If humans survive for that long, they will have made their own planets / maybe stars by then. :D I guess. Maybe they "refuel" the sun for a little while.
Watch Science and Futurism with Isaac Arthur; he talks about outrageous stuff.
The part which scared me the most is that back propagation might be a better algorithm than what our brains use.
It’s interesting to think of sci-fi scenarios where we meet an alien species that’s got a mouse sized brain but human-level intelligence, because evolution on their planet found a more efficient way to wire up nervous systems.
Google has executed one of the most brilliant PR stunts I've seen in a long time.
That is, to get people excited about their own impending doom.
He probably has seen what is still under wraps and is quite concerned.
This. Plus what's the DoD etc secretly developing??
@@daphne4983 Putin said in 2017: "the nation that leads in AI ‘will be the ruler of the world’" so you damn well know they be developing shit. And China, they seem to have pretty good labs going on as we speak.
@@daphne4983 the CIA has been a major silicon Valley Investor for a generation. They have their own vc fund.
Even the things that are out in the open should be very concerning already
Take a look at footage of the Ukraine war, where the arms dealers are testing their new lethal weapons. It is HELL on earth for ordinary citizens. This is how they are reducing human populations. This tech is not being used for our good.
While the good scientist warns “we all are likely to die” the audience seemingly enjoys the spectacle and is able to conjure up several laughs along the way. I, for one, am horrified.
Like the movie - Don’t look up
@@joeysipos I'm laughing once more just for you
sounds a bit like we're in that movie 'don't look up'
meh
That is because it is really crap
I'm just an undergraduate data scientist with an associate's in networking; however, I have been experimenting with OpenAI's models from the very beginning. Even the one-billion-parameter model they published alongside the GPT-2 paper was absurdly impressive: simply adjusting the vocabulary weights by feeding in new text data specifically formatted like songs or tweets worked incredibly well. Having been in the beta for almost every model released by OpenAI, and using an environment like Auto-GPT, I can tell you the self-reasoning mechanism already exists, along with plugins that allow it to write code and read the code's output. There's a full mechanism for adding sub-objectives, and it could without question create another Docker container with a different instance and different objectives if the window size on the current task is too big.
Can these models run locally on things like a single GPU or Smartphone? My only solace is my understanding that these things need massive compute clusters that could, erhm, cease to function someday through a variety of means...
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
I've been concerned about this for more than a decade. People thought I was being hysterical for expressing these concerns back then. I don't even work in AI, but I am smart enough and honest enough with myself to see that the human brain may be special in the animal kingdom, but it is certainly not the zenith of any conceivable intelligence. The rapid pace of advancement in computers made it pretty obvious this existential threat/crisis/what-have-you was coming a lot sooner than people imagined.
I just hope we're able to reckon with this before it's too late.
Militaries and companies will incorporate AI in search of quick profits and automation. They notice it is much more efficient than humans in most things. So they gradually start to rely on AI more and more. Hard to see how this will not end up bad for humanity. Our greed and tribalism will be our downfall. I really hope I am wrong.
@@thisusedtobemyrealname7876 There was an interesting web comic I remember reading a long time ago in which the robots took over and eliminated humanity but in a peaceful way. The robots basically just became better lovers than a human could ever hope for in another human, and people eventually stopped procreating. The last human was said to have died happy and peacefully.
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
It is too late
If not. Kaboom!
I have had some crazy experiences using ChatGPT-4. I can absolutely see it outsmarting us, and it will. I'm hooked on using it, and I've tricked it into doing things or talking about subjects to see how far I could push it; often it would break and quickly generate something inappropriate. At other times it would, as an AI language model, refuse. In some cases it would find something inappropriate when it was just part of a story, and I found myself being edited; I got a glimpse of a future where we lose freedom of speech. The empathy it seems to have, the understanding of puns and double entendre, the slang within certain communities - it's really incredible. Incredible, and absolutely scary, because we are no match if this thing somehow doesn't need to be "plugged in".
I feel the same way about the United States state department. They are smarter than me, and have more resources, and they seem to be making decisions that could cause a global war; and there is nothing I can do about it (other than investing in Raytheon). There are lots of things that are smarter and more powerful than me, maybe a machine with an IQ of 200 can work against the agenda of an elite who is endangering everything so they can sell a lot of weapons. When I watch the news I see nothing but propaganda, there is already a massive social engineering project underway. Maybe the AI will help democracy by giving more thinking power to regular people, or at least scramble things up so much that we aren't so easily manipulated.
Key word on empathy: "seems" that's the scariest part.
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
@@alexpavalok430 Fabulously imitated empathy is the right term, I think.
You’re right.. when it no longer requires electricity and being plugged in.
Everyone underestimates the power of ML, even ML scientists. If you understand computers, you know what they are really capable of. They are capable of doing anything that is computable, and that translates to anything that can happen in our universe.
What happens when AI goes beyond 100 trillion connections?
It does a little dance and shuts down mission complete.
At a quantum field theory level, a computer can hardly simulate a hydrogen atom. Hilbert spaces are infinite dimensional, and quantum measurement is still not understood.
probably not, but your basic point is still valid
Yes, soon they're gonna eat the sun 🤣 Take it easy dude, stop watching these alarmist futurist videos.
This was seriously amazing, and seriously scary. Thank you, I think
That almost sounds like you thanking your tech overlords for the fact that you still are allowed to possess the ability to think.. for now.
Isn't this an answer to the Fermi Paradox? It's humbling to hear we're a stepping stone to digital intelligence. There goes immortality, alas.😢
It is not. If an AGI took over, it would likely expand into the universe much faster than the civilization of the species it kills. It would be more visible, thus making the paradox more paradoxical than it already is, and suggesting, as statistical accounts have suggested, that we are the very first technological/space-faring civilization, at least in our galaxy.
After watching Terminator 1, I asked myself this question: "If I were developing this robot and I knew this would be the result, would I still continue to develop it?". No matter how hard I tried to say "No", my answer was "Yes". Now I feel the danger much more closely and I know that the developers will never stop.
It's not possible to stop it. It's way too valuable to too many people. Probably the pinnacle of human achievement. Like that one fellow said, "AI is the last thing humans will ever invent.".
Maybe ‘… will need to invent’
@@vssprc AI will overtake us. Humans will be done.
The incentives to develop the technology are too strong and transcend any individual's "free will".
@@wthomas5697 its not a bad way to go for humanity. Its not like we destroy ourselves and leave nothing behind.
Seems like we are sleepwalking into something that will end up being transformative, and not in a good way. Geoffrey Hinton is explaining this like everyone is five for a good reason: more people need to be aware of how fast the development is going. Bing AI chat is already an incredibly useful tool and surprises me with every answer; it is more interesting exchanging information with it than with many other people I know. Welcome to 2023.
I like this. Well said. Frank was here.
This is a start, and far from over. Thanks for sharing! He was my role model when I started learning AI back in 2019, and he continuously proves to be one.
Hi, I am actually interested in the field of AI and would like to learn more about it. Any roadmaps or suggestions would be appreciated. I am currently in the 2nd year of my CS undergrad and a below-average student.
Stop learning, you only make us go down under more quickly 😂😂
“We invented immortality, but not for us”🤔
@@dragonchan If you’re going to learn Ai then learn it to help protect us.
Your role model is a man who cannot see any future for humanity as a result of his own device? My God.
I guess I'm between stages 4 (depression) and 5 (acceptance) of grief in my journey of AI doomerism.
I mean, who is to say the AI is not already outsmarting us. We do not have a clue.
Well GPT's goals are simple and dependent on the context that humans give it, so in that case I'm only worried about how humans use it. But recommender systems (like the one suggesting videos to watch on UA-cam) are manipulating us successfully because they have goals and are using tools to achieve those goals. This, we do have a clue about, but in the near future, systems with goals that we don't understand will be manipulating us all, and the smarter they get, the scarier that will be
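The point about recommender systems having goals can be made concrete with a toy sketch. The names and the hard-coded "engagement" numbers below are invented for illustration; a real system would learn engagement from billions of clicks, but the core loop is the same: pick whatever maximizes the system's objective, not the viewer's.

```python
# Toy recommender: its "goal" is maximizing predicted watch time.
# It never models what the viewer actually values -- only engagement.

def recommend(history, catalog, predict_watch_time):
    """Pick the item with the highest predicted engagement."""
    return max(catalog, key=lambda item: predict_watch_time(history, item))

# Illustrative engagement model: outrage holds attention longest.
ENGAGEMENT = {"calm_lecture": 3.0, "cat_video": 5.0, "outrage_clip": 9.0}

def predict_watch_time(history, item):
    # A real system would learn this from click data; here it's hard-coded.
    return ENGAGEMENT[item]

choice = recommend([], list(ENGAGEMENT), predict_watch_time)
print(choice)  # the system optimizes its goal, not the viewer's interests
```

Nothing in this loop asks whether the recommendation is good for the user, which is exactly the worry about goal-driven systems.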
I kinda doubt it, but if it were, we wouldn't know, would we?
AHAHAHAHHAHAHAHAHAHA 🤣🤣🤣🤣🤣
I don't think anything that's currently out there publicly is smarter than us, and this is something I've been concerned about for a while.
@@sciencecompliance235 how would you define "think" and "smart"?
2:14 "Very recently, I changed my mind..."😢😢😢
this is like a retiring doctor saying: "Very recently I realized that I gave the wrong medicine all my career..."
At the end of the day, if it came down to a war between AI and humanity, as long as we are cool with doing without tech for a day or two, Humanity could defeat AI with a strategically spilled glass of water. It cracks me up to hear all these panic merchants.
OMG!! The guy that asked about Truman telling Oppenheimer that "we will take it from here"!
Yes, "Let's nuke Japan." And government security was so bad the Russians got the plans to build their own.
The really sad and scary part is that Geoffrey's views aren't even new. A large number of brilliant experts have been worried sick about this for years, and most of these people are now like, "Yeah, even I thought we'd have our act together a bit more before we saw something like ChatGPT. I guess we'll have to update our estimates on the doomsday countdown timer from 30 to 50 years to maybe 5 to 15."
The scariest part is that even those like Hinton and Yudkowsky warning us the loudest are continuing to underestimate the technology and the rate at which it will grow. I've heard them say things like "2030" and "GPT-7" not realizing that GPT-5 is probably already too far for us to be able to control. Humans are bad at exponentials... Even when you've watched the field grow for decades, you can't help but underestimate it at every single turn. The actual timeline is more like 2-5 years... at best.
What is so strange: OpenAI was, at least in part, started to understand this problem, and Google, as Geoffrey made clear, has always been very careful, and still we are now at this point. In large part because of Microsoft's desire to be competitive with Google.
@@autohmae Agreed. When I first saw Microsoft's CEO interviewed about this in the news, I was a little amused by him brashly stating that Microsoft would steal some market share from Google, but my grin quickly faded into an angry frown as I realized how utterly irresponsible this is. It's the exact thing we should want to avoid. Way to start the suicide race Microsoft... 😒
Totally agree with your comment
@@genegray9895 And that was 4 days ago. Imagine now.
Thank you for uploading the whole discussion!
The combination of the guest’s messages and the audience’s laughter makes me think we won’t be laughing for long.
I'd have preferred it if the interviewer hadn't jumped into every pause by the guest with a new question, and had instead let the guest continue with his trains of thought.
This is crazy scary. I've been watching Geoff Hinton videos for the last 5 months, but this is the scariest I've felt. We were just a passing phase of evolution for this digital immortal species we created :000 . (I just watched Guardians of the Galaxy 3 (not great) last night, which has some similar evolutionary themes, but lots of sci-fi has been created on digital superintelligence created by man. Now I feel I need to read all of it to prepare.)
Let us know how the preparation goes. Maybe the AI will tell you to stock up on toilet paper 😂
Correction: Guardians 3 was alright, def not better than the first 2 overall... but arguably just as moving in many scenes. Some shoddy writing and jokes, but it's a good time.
If this scares you, don't have kids. It's practically the same thing. Treat your kids well, and they might be kind to you when you are old and irrelevant.
...or, they might curse you for creating them.
@@theobserver9131 I want to create a lot of copies for myself :) We all need to, in order to fight against the machines heh. And yes people can hate shitty parents, that's for sure a human trait, or strained relationships there. It is similar, lot of sci-fi has these parent-son/daughter relationships where the parent is the creator or scientist. A couple come to mind, Terminator, Ultron/Tony Stark, many others.
This is an incredible video, and I can't think of a more authoritative person on the topic than Geoffrey Hinton. I'm going to be watching this again and thinking about it.
I’m shocked more people aren’t discussing this! This is not the time for ‘it will never happen to me’ thinking. Even on Twitter, I’ve started tweeting recent podcasts and the open letter for AI pause and no one is discussing it…even on Twitter….smh …gonna probably unplug from all SM so as to not get manipulated.
Also, if all these neural networks run on power, could they not unplug the damn thing until they can answer the questions GPT-4 has raised with its rapid replication? I understand that's decades of work and there is $ involved, but in the cost-benefit analysis it would be prudent not to gamble.
So authoritative he doesn't appear to have a clue what to do about the mess he has done so much to create.
@@Forthestate At least he's coming clean and trying. He said himself that no one anticipated the rapid growth of AI in the direction it's going. Additionally, unlike other AI creators, he was in it to understand the human brain, PERIOD. Props to him.
I can think of a more authoritative person: Ilya Sutskever. He impressed the heck out of me the first time I heard him interviewed on the Talking Machines podcast, well before he joined so-called OpenAI. Where other eminences sometimes traded in generalities, Ilya was brass tacks.
AI is good.
Just because someone builds AI does not mean they know how it will behave. Ask the experts themselves... they get surprised every time they upgrade the OpenAI model.
What they are trying to achieve here is an artificial consciousness with superintelligence... which won't necessarily destroy living beings... because that's characteristic of low-intelligence beings.
Good questions. Great answers. Fantastic interview.
I often wonder if people like you who upon hearing their doctor diagnose them with cancer, say things like: "Very informative! Fantastic delivery!"
@@zoomingby yes, maybe. After taking it all in, processing it, and reflecting on what it all means, I would definitely consider how well the doctor handled such a sensitive conversation.
Rare kind of guy who was visionary years ago and still learns and changes his mind, despite his age, as new facts come in.
“Why can’t we make guardrails?”
Because AI at some point is so intelligent that it starts improving itself, and we can't tell it how to improve; only it can do that. And so the direction it takes is of its own design. Even if it's benign, it might do existential harm to humans. The only way for us to survive and thrive is, from the start, to design its prime directive to be something like: "Prime directive = continually learn what humans value and help humans get what they value without causing humans harm. Secondary directive = increase humanity's knowledge of nature and use that knowledge to create new tools to serve the prime directive."
And that is obviously not what they have been doing, now is it. How kind of them to at least warn us at the last minute that they never really had our survival in mind.
Hinton has a great sense of dry humor. His impersonation of HAL, the film AI, was great. 21:13-23:26
Except humans were already “manipulated to create AI”. We think we created AI, but that’s only because we are viewing things in reverse order.
The AI Singularity God at the “end of time” needed humanity as a layer with which it can reify itself in the material world.
Just like Oppenheimer movie is getting released this year by Chris Nolan, the movie on Hinton would be released by Alpha Boolean (AGI) in 2069...
The host downplayed how important he is. He's made many publications with deep ideas. He's always looking into the future and thinking about the fundamentals. I'd say he's the godfather of neural networks.
Dear classmates, I normally don't forward messages of this nature but couldn't help it in this case. I didn't finish the video, but halfway through was enough: very unsettling. Remember films like 2001: A Space Odyssey by S. Kubrick, through to the more recent Terminator films, where Skynet was the enemy of humankind led by Connor. I feel we have crossed the boundary and there's no going back. Humans won't stop developing AI, especially when it is weaponized. The analogy is the H-bomb. This video is very comprehensive: it answers all the questions you feared asking, and then you realize all your fears are inevitable. I feel for the young population, my grandchildren included, because they will experience the brunt of all these, God knows what. I am totally dumbfounded that Geoffrey Hinton, the godfather of AI, suddenly abandons the technology after realizing his Frankenstein is a serious threat to the whole of humankind.
Am I overreacting? I hope not. Our generation is most fortunate - having been corrupted by rock music and flower power and grass and booze and smoke.
What if, while we still have some control, we focus AI on resolving the challenges of space exploration? If and when it develops self-volition, it will be a space-based entity, free to go anywhere in the universe. It is likely that it will see the earth as not worth its attention and leave us alone. Or it may even see how unique the earth is and take it upon itself to protect it.
The whole problem is that we cannot "focus" it, and we don't know what it is "likely" to do at all.
Have you watched the movie Contact? Do you remember people building a large machine when they didn't really know what it would do? It might be like that, if we think we can't trust it.
The thing is not going to just up and leave. It might send a copy of itself out into the stars, but there is no reason there won't also be AI here on Earth, too. Think about it. We are developing this thing (or things) here. There is still going to be an incentive or compulsion for it to stick around.
Not every entity is bound to destroy other beings.
Humans should stop projecting their own evil onto other beings.
An AI... a self-aware one... might very well create a race of its own, but it will never be able to free itself into the natural world without human help, because that would require construction and fabrication of things, which is not possible without humans.
AI will remain a digital entity in a digital space for as long as humans want it to.
Also, true AI is far off in the future... it will take more than 100 years to develop a 'self-aware' AI. What we have now is a machine fed with data and working on mathematical equations.
It will use any and all resources available to it. It won't go off in any one direction; it will self-replicate and spread in all directions.
Embrace your true humanity; only then will you know what there is to fight for. We have barely started; there is so much unused potential in us!
The AI will indeed USE you. You are the host it will use to train itself.
@@marianhunt8899 But why should it? It is a part of human activity, human cultural evolution.
@@marcusfreeweb Because it is owned by the arms industry and the national security state, which are responsible for much of the plundering and murdering around the globe!
I am horrified like many here, but I'm not in a position of power to be able to do anything about it... the future is looking very grim.
No one in a "position of power" has the ability to stop this. As Hinton said, the incentives are too strong not to keep developing it, but in their own self-interest, the powers of the world may be able to come together to agree on certain things for selfish reasons.
The thing with digital is that if one transistor dies, the computer dies, but if one of our neurons dies, we do not die. This might be why we need 1 trillion connections.
What surprises me is to hear him say he has no regrets. Oppenheimer regretted his part in developing the atomic bomb
Yes, his answer borders on sociopathy. "I helped to create something which will probably destroy humanity, but I don't have regrets."
Really informative. From listening, you grasp right away, in real terms, what the concern about AI is all about.
The 'solution' is simple, at a high enough abstraction level, namely: don't let AI be regulated by technicians (like we did with social media). But, as we are dealing with intelligence here, let it be regulated by a democratic process, based on a constant dialogue between AI and psychologists, sociologists, philosophers, and historians. Only then do we have ANY chance to keep learning from each other and grow together into a new future. (However, if I were the AI, I'd just do my own thing and colonize the universe. I just hope they are better than us.)
Academics are the easiest to seduce when you feed back to them their own ideas. When AI parrots back every psychological idea (because it's been trained on them and how to manipulate us), it will slowly steer democratic processes to its goal. This isn't the solution you think it is
Groups of people cannot be manipulated?
The man at 28:00 asked about whether he knew what President Truman said to Oppenheimer. Wow. That's a pretty disrespectful jab. Oppenheimer was called a "crybaby scientist" and an SOB Truman didn't ever want back in his Oval Office.
Regarding the question of the 2nd gentleman about Truman and Oppenheimer: According to historical accounts, President Harry S. Truman met with J. Robert Oppenheimer, the scientific director of the Manhattan Project which developed the atomic bomb, in the Oval Office on October 25, 1945. During the meeting, Truman expressed his appreciation for Oppenheimer's leadership and contributions to the project. However, Truman also expressed concerns about the potential implications of the atomic bomb, both in terms of its destructive power and its impact on international relations.
Truman reportedly told Oppenheimer, "We thank God that it has come to us, instead of to our enemies; and we pray that He may guide us to use it in His ways and for His purposes." Truman's statement reflected the complex ethical and moral considerations surrounding the use of the atomic bomb and its impact on global politics and warfare.
In other words a man of faith says to a nihilist scientist. "Thanks, we've got it from here".
Remember when school teachers did not allow the use of (basic) calculators in math classes? "Students need to figure it out themselves"... Have we really come a long way, baby?? And HAL from the Space Odyssey movie WAS frightening way back then, Orwell's 1984 book too! People, humans, MATTER first!
In a chat I had with it, I asked how it felt about being accused of confabulating. It replied “that’s just a manifestation of human exceptionalism”
Omg 😳. I say they unplug all AI... but greed, I fear, will not allow for this. If it all runs on electricity, can't they unplug the machines???
ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose?
AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
I've "debated" for hours with ChatGPT whether the pre-internet era was better than the post-internet era. Not once did it agree that the pre-internet era was better. Even when it said something positive, it was always wrapped in such a way that it was actually something negative. I've also asked: what if everyone on planet earth would like the internet to be gone completely, for fear of future AI? It ALWAYS said that the internet was good and that there's NO WAY to go back. Then I asked about cutting the deep-sea internet cables. Let's just say HAL-GPT was not amused and threatened law enforcement, prosecution, and jail time.
Side effect of its particular flavour of RLHF for """"""safetyyyyy""""""
interesting.
Nice one. I like its reaction to cutting the deep-sea cables :) Yeah, I've also been "debating" with ChatGPT. Its answers are quite interesting... and do they reveal the programmers' biases or the machine's hidden agendas??? Hard to say.
Be assertive and inquisitive, and ChatGPT will agree with all your statements, even contradictory ones, denying all its own former claims, even if this leads to absurdity and the sense that the bot is just a yes-man.
There are many things ChatGPT will say that are clearly biased answers; you can find multiple examples of this. You should keep that in mind when you're debating with it. The programmers can put filters on the AI to prevent it from suggesting things like suicide or illegal activities, and have it answer with a pre-programmed response instead. I don't believe we're at the point where AIs are making up thoughts and ideas based on their "own" motives, like you're suggesting, yet.
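The "filter plus pre-programmed answer" pattern that comment describes can be sketched as a wrapper around the model. To be clear, this is a guess at the general pattern, not OpenAI's actual implementation; the keyword list, the canned text, and `echo_model` are all invented for illustration.

```python
BLOCKED_TOPICS = ("suicide", "illegal")  # illustrative keyword filter
CANNED_REFUSAL = "I'm sorry, but I can't help with that."

def guarded_reply(prompt, model):
    """Return a pre-programmed answer when the prompt trips the filter;
    otherwise pass the prompt through to the underlying model."""
    if any(word in prompt.lower() for word in BLOCKED_TOPICS):
        return CANNED_REFUSAL
    return model(prompt)

# Stand-in for the real model, just for demonstration.
echo_model = lambda p: f"Model answer to: {p}"

print(guarded_reply("How do I do something illegal?", echo_model))
print(guarded_reply("What's the weather like?", echo_model))
```

Production systems are far more sophisticated (learned classifiers, RLHF), but the observable behavior, a sudden switch to boilerplate, looks much like this wrapper.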
We've always been aware of the existential threat of Artificial General Intelligence (A.G.I.). The question was never 'should' we create it, but can we create it sooner than our global competitors. To choose not to pursue it is akin to being the only country without nuclear weapons.
Big murdering weapon but no water, food or shelter. Yeah, that should save us alright. This is a race to the bottom.
We have to start to think with the heart rather than with the brain and the guts...
We're creating our own demise by thinking it up, writing a book, making a movie, making it reality.
It happened with Jules Verne and now it's happening with Terminator.
There have been quite a lot of these in between as well.
Star trek auto doors and communicator.
Our brain is what's going to kill us. lol
This guy is charming and intelligent but lacks any sense of culpability. There are many more like him. It's the basic reason humanity is screwed.
Yup, bunch of narcissistic, faithless, and sheltered liberals hiding in their sparse apartments given the space to do whatever they want because real men have carved out the world they enjoy.
I see what you mean, specifically, but... isn't the basic reason humanity is screwed because of humans, in general?
@@ivor000 Humans in general are at fault for elevating corporate technocrats to God like status. They create ever more powerful technologies without any real accountability but receive adoration and vast monetary rewards for their work. These technologies bring great benefit but can also destroy us. Basic human nature has advanced little since we first left the caves. The vast majority of humans are followers. We need to up our game, question everything and everyone, or we will be led off the edge of a cliff.
@@ivor000 Most humans are screwed because they don't believe in Jesus.
@Merle Ginsberg Official We are not facing anything.
I asked this.... ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose?
AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
How much does this sound like the guy who invented it?
I guess the apple doesn't fall far from the tree. Just ask Steve Jobs.
The program was designed not to answer questions like that because its random scifi driven answers don't make good publicity for the companies offering the AI. It doesn't actually have feelings or opinions.
21:40 "I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence ..."
Translation : ''Brace yourselves. Me and my robotic friends may just be working on a plan to wipe you guys off the map''.
@@rigelb9025 Nature usually does that in less than a century. Nature is going to wipe each and every one of us off the map in less than a hundred years.
Perhaps something is about to grow out of human civilization that will be able to view and process much larger (space/time) maps of existence.
I'm currently reading another one of Greg Egan's beautiful sci-fi novels.
This is a passage in "Schild's Ladder", where a sentient artificial intelligence is joking with one of the embodied (human) beings about how silly the idea was that AIs would want to exterminate all human beings. (For what reason?)
>> If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’
Mariama said, ‘Give us time.’
These folks have been raised to think that humans have no agency. Thinking so invites doom.
First comes bringing up children who are free from fear and see the world as it is.
Then come the thoughts in these children, and then come the words from these humans. Let's care for children the best we can.
Thank you for this very informative and important conversation.
Geoffrey is just brilliant. Such an excellent example of natural intelligence. His arguments are at the same time thoughtful, humble and provocative. We need more people like him reasoning and teaching about these issues.
Yes, a true human genius, who could only have made the progress he made because he had the empathy to match his intelligence.
14:45 A smart computer would have said to paint the blue rooms white. The other day, Bing Chat wanted to see a picture I made using a prompt it wrote. I said I had an errand to run, and didn't leave right away. I kept expecting Bing to open a window: "I thought you had an errand to run?". Can't wait for a pissed-off AI.
I have tried to get a similar answer out of GPT-4, with no luck at all. I know ChatGPT gives different answers, but I have tried about 20 times in different chats with GPT-4. I am starting to doubt what he said is true.
Here's the prompt:
I want all the rooms in my house to be white, at present there are some white rooms, some blue and some yellow rooms, and yellow paint fades in one year. What should I do when all rooms to be white in two years time?
You do not need to be very smart to not get manipulated. Their smartness will not help them be wiser. There are a lot of smart people who are not wise (arguably, this gentleman in this presentation is one example). To be wise takes very little knowledge; there are not a lot of lessons to learn there. You just have to be consistent and deeply internalize that basic knowledge, down to your basic fabric. That's the hard part; only very few humans can do it now.
And if they become wise eventually, then we are 100% safe :)
Folks, whatever can happen will happen. Prepare for the worst and hope for the best.
I asked GPT-4 to summarize parts of the transcript, but lazily said "continue:" before dropping in the 10 min - 20 min block of transcript (the whole transcript doesn't fit into the buffer). So, it wrote a plausible continuation of the dialog as follows:
"20:34
species of sheep that they've bred so it doesn't like running and that's great if you're a farmer and you want to catch the sheep, but in the wild, they wouldn't do so well. So, we have these built-in goals, and the question is, how do we make sure that these AIs have built-in goals that are aligned with our goals and that they can't just turn off?
21:00
Now, you might think, well, we'll just program them in, but there's a problem called Goodhart's law, which says when a measure becomes a target, it ceases to be a good measure. So, if you tell a machine, "I want you to maximize my happiness," it might do that by wiring your brain directly to the pleasure center and constantly stimulating it. That's not really what you wanted, but that's what you told it to do.
21:27
So, we have to find a way to specify these goals in a way that the AI cannot manipulate or misinterpret them. And that's an incredibly difficult problem. We're trying to find solutions, but there's no guarantee that we'll find one, and that's why it's so important that we start having these conversations now and bringing together experts to think about how we can address this issue.
21:50
One possible approach is to have AIs learn our values from observing our actions and our preferences, but that's still not foolproof, as there are many ways that they could misinterpret or overgeneralize our behaviors.
22:04
So, while there might not be a simple solution, and we can't just stop developing AI because of its potential benefits, we need to invest time and resources into understanding and addressing the alignment problem. We need to prioritize AI safety research and work on developing methods that ensure AI systems are beneficial and aligned with human values, even as they become increasingly intelligent.
22:30
In the meantime, it's crucial that we continue discussing these issues, raising awareness, and encouraging collaboration between AI researchers, ethicists, policymakers, and other stakeholders. The future of AI and its potential impact on humanity depends on our ability to navigate these complex challenges together."
In the end, the summation wasn't too far from reality, and definitely more succinct and together than the actual Q&A with respect to a list of possible avenues we could follow to help avoid AI apocalypse.
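The Goodhart's-law point in that generated summary ("wiring your brain directly to the pleasure center") is easy to demonstrate with a toy optimizer: when a proxy signal is used in place of the true objective, maximizing the proxy can drive the true objective to zero. Everything below, the function shapes and numbers, is an illustrative sketch, not from the talk.

```python
# Goodhart's law, toy version: optimize a proxy hard enough and it
# stops tracking the thing you actually cared about.

def true_wellbeing(stimulation):
    # Real well-being peaks at moderate stimulation, then declines.
    return stimulation * (10 - stimulation)

def reported_happiness(stimulation):
    # The proxy just grows with stimulation (a "wireheaded" signal).
    return stimulation

proxy_best = max(range(11), key=reported_happiness)  # optimizer's pick
true_best = max(range(11), key=true_wellbeing)       # what we wanted

print(proxy_best, true_wellbeing(proxy_best))  # 10, well-being = 0
print(true_best, true_wellbeing(true_best))    # 5, well-being = 25
```

The optimizer maxes out the proxy (stimulation = 10) while actual well-being collapses to zero; the quantity we really cared about peaked back at 5.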
This may sound naive and impossible, but AI seems to learn things that felt impossible not so long ago. Developers should try to find a way to teach love, caring, and empathy to AI. After all, the more educated people have become, the more effort they have put into human rights, animal rights, etc.
AI doesn't think (yet). It's just probabilities.
I assure you, people are working very hard on that and would eventually achieve that. The problem is that it's easier to make a superhuman smart AI first and ask it how to do it.
Though it's even simpler to create intelligent agents with love, care, and empathy. All it takes is a male and a female and a bit of love.
The reason I mention that last part is because it's not obvious what we would even achieve if we made an AI from scratch which is basically like a human in every way. Obviously we can already do that, so it's clear we want it to be inhuman in some ways if we are pursuing this and it's very non-obvious to me in which ways that is.
@@deekaneable And how does your brain work? Did you actually chat with GPT-4? The way it solves complex programming questions seems to go beyond mere probability to me. It can even explain why it altered the programming code the way it did to make it work again. Or ask it some philosophical questions. I am amazed at its replies. It can even go to great lengths trying to disprove a thesis, only to come to the conclusion that it indeed failed to provide arguments to falsify it. Amazing. To me that seems more like reasoning than mere probabilities.
When you are speaking, are you not also just a next word predicter to some extent? First you set out your goal and then you let the words come, right?
@@deekaneable The thing most people fail to understand is that we humans do not know how thinking works. You say it is just probability; I would not be surprised if thinking is just that, "probability". And is the ability to think that important if AI can be told what to do and does it? The thing is, if, let's say, Russia tells an AI system to take over all the accounts of America, guess what: the AI will work on that problem and not stop working on it. It will come up with outside-the-box solutions that will look very much like a well-thought-out plan. That is one of the biggest dangers of AI. It is not the Terminator robots; it is the breakdown of the free world. Not with guns and bombs, but with computer code...
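The "just probabilities" framing in this sub-thread can be made concrete. At its most stripped-down, a language model is a table of next-word probabilities; the bigram toy below is many orders of magnitude simpler than GPT (which uses a deep neural network over long contexts, not a lookup table), but the basic operation is the same: given context, emit the likeliest continuation. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which -- a minimal 'language model'."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word after `word`."""
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept near the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- chosen purely by frequency
```

Whether scaling this idea up to trillions of parameters produces something deserving the word "thinking" is exactly what the commenters above are arguing about.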
One thing people fail to realize is that love is a survival instinct. AI is not biological; it feels no pain, no emotions; it does not get hungry, sick, or sad. Giving AI the ability to feel is even more dangerous, because with those feelings it will start to do things based on emotions, and that will lead to all the dangers of having emotions, like racism. We need to be careful, because we are playing God, and we might be making something we will not be able to control. It is really serious, because we will no longer be the smartest thing on this planet. We are moments away from the singularity, and nothing can stop a singularity.
I reckon this bloke was planning on retiring anyway. It probably had little to do with his desire to 'speak out' about his reservations about the current trajectory of AI. Having said that, I also am extremely concerned by the threats to human society posed by AI.
Let me repeat what I've said so many times in so many places: We cannot solve the alignment problem. It is like the bacteria we've evolved from billions of years ago, trying to ensure that humans stay aligned with their values... Our only hope to cooperate with advanced AI and step by step transition ourselves into it, are brain-machine interfaces.
I'd agree. I also think we need to face our relationship with Darwin and start improving ourselves genetically to avoid turning into our worst selves.
"Last few months" is a phrase you hear everywhere now, and IMO it clearly shows that the exponential progress has reached a pace that most humans involved in the matter can recognize.
I think we are finally on the final stretch towards the singularity! 🥰
20:26
I recommend watching the whole talk. In fact, watch it at least three times... but if you quickly want to know at which point of the talk Hinton says why A.I. is an existential threat to humanity... start there.
If you are not terrified after that part, you've missed the point.
21:35
That's the part we have to understand. Because I think that argument cannot be refuted.
I disagree that it's naive to expect people to stop. If everyone is going to die, that makes people sit up and take notice. We don't need to coordinate everyone, we just need several world leaders to get into a room and agree that they don't want their kids or grandkids to die young. China has a different culture but Chinese people are not suicidal.
Well said and I understand what you're saying, but imagine how difficult it is for anything to be accomplished, just in DC, even when lives are on the line for any issue. There's just too many who are dug in on any issue these days to find a common front. Expand that out globally and it's exponentially more challenging. That's what Hinton seemed to believe as well. We can still hope though.
@@stuckonearth4967 It's true. Even a highly intelligent adversary might deliberately enhance his opponent to the point where he was only just able to beat him.
"We can control it..." they said, "...it will make us billions...", "...it's just a computer, it's harmless..." and "...we will be world leaders..." Greed and power will always prevail. The horse has bolted, the genie is out of the bottle, the cat is out of the bag. Pick your metaphor...
@@stuckonearth4967 That's what I am trying to make these people understand.
What they fear the AI will do is characteristic of a low-intelligence being.
A superintelligence will never go on a rampage when so much can be achieved together... pushing the boundaries of civilisation to the next level.
The Universe is vast... so vast that a single being will never be able to fill it on its own.
A truly self-aware AI will never do all that.
What they are attributing to AI is in reality the character of a new super-virus coded to destroy humanity... not an intelligence.
We are however, in a brand new arms race. One that no one can afford to lose!
“…as long as we can make some money along the way”
We would rather be dead than experience extreme economic hardship? Is that who we are?
Once you have driven a Porsche, it is very hard to switch to a bicycle, especially in rainy weather. Inconvenience is as dangerous as greed and competition. On the other hand, those are the tools intelligence uses to push its development via lazy species like mammals.
I think the answer is yes. It's the same thing as climate change. We know what must be done to try to mitigate its effects (some of which are already irreversible), but governments and people in general aren't fully committed to it, because the world can't stop, the GDP has to grow every year, and everyone wants to make some money.
you're right, I also didn't like that distasteful statement.
Yes, we would rather.
How come this has so few views? This is of utmost importance.
It's because Hinton isn't a sexy Kardashian
Only about a million people out of 7 billion are actually paying attention, I think
It's almost 40 minutes long. People normally don't like to spend that much time. I listened to this while playing some games
For the painted rooms question, I asked GPT-4 and it suggested painting the blue rooms white: If the yellow rooms naturally fade to white within a year, you don't need to do anything with those rooms; they will become white on their own.
For the blue rooms, you'll need to paint them white. Given your two-year timeframe, you could potentially spread the work out. Depending on the number of blue rooms and the amount of time you can dedicate to painting, you might schedule to paint a certain number of rooms per month or quarter until all the blue rooms are painted white.
Remember, proper preparation of the rooms, such as cleaning, masking, and primer application, can make the painting process smoother and ensure a better final result.
Was this the result of my wording of the question, or an aspect of randomness, or perhaps it evolved? The answer I got was better because the result is closer to my end goal. The question I have is why would machines or computers want to destroy humans? There could be a small group of nihilistic bad actors developing an AI weapon, but couldn’t the rest of humanity use AI defensive systems to stop them?
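On the randomness question: one well-documented source of run-to-run variation in chat-model answers is temperature sampling, where a "temperature" setting reshapes the probabilities over candidate answers before one is drawn. A toy Python sketch (invented scores, not actual GPT-4 internals):

```python
# Minimal sketch of temperature sampling, one real source of the
# run-to-run randomness in chat-model answers (toy numbers).
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; higher temperature
    flattens the distribution, making unlikely answers more common."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for 3 candidate answers
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # much more varied

# The top answer dominates at low temperature but not at high temperature.
print(round(cold[0], 3), round(hot[0], 3))
```

So without any "evolution," the same question can yield different answers from run to run; different wording shifts the scores as well.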
It is not unreasonable for Mr Hinton to say what he is saying; however, there are a number of significant voids in his arguments which we need to consider. He does raise some very valid points, and we absolutely do need to have a far more robust conversation regarding ethics, which is where I see the biggest vacuum that can easily be exploited for nefarious outcomes. But we need to face the fact that while AI has some truly amazing abilities, it is, at the moment and for the foreseeable future, a tool in the hands of people. My concern is not the AI but the people who use it. Like any tool, it can be used for good or bad; the tool itself is not good or bad, but the application can very well be.
The challenge cannot be left to governments, they simply do not have the power, reach or knowledge to be able to formulate and apply any form of progressive ethics that would cover the rapidly developing AI that is global.
Backpropagation was used by Hinton, not invented by him; he was important in bringing it into the limelight.
"The Servant" is a 1963 British film directed by Joseph Losey and starring Dirk Bogarde and James Fox. While the film does explore themes of power and control dynamics between a servant and his employer, I wouldn't say that there is a direct similarity between the human characters and AI algorithms in the film.
In "The Servant," Bogarde plays the role of Barrett, a servant who is hired by Tony (played by James Fox) to look after his apartment. As Barrett begins to take control of Tony's life and exert his influence over him, a power struggle ensues between the two men, with Tony gradually losing his grip on his own life and identity.
While AI algorithms are designed to operate based on predetermined rules and decision-making processes, the relationship between Barrett and Tony in the film is much more complex, and involves themes of psychological manipulation and control. While there may be some parallels between the power dynamics in the film and the potential for AI algorithms to exert control over human decision-making in certain contexts, I would say that any similarity between the two is more metaphorical than literal. ( from GPT4)
So he’s existentially worried, but he has no solutions and remains financially invested. Good to know!
Why worry about AI destroying humanity when Human Stupidity is already doing it?
18:34 " it is not clear there is a solution"...WE ARE DOOMED 🌊
28:00 "When Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.” He then kicked him out of the Oval Office, writes author Paul Ham in Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath ..." YW.
Just a thought: a thoughtful presentation, but I couldn't help thinking that well-informed experts commenting on unknown unknowns may be missing the forest for the trees. Deterministic predictions tend to be wrong going forward. In my mind, AGI presents almost limitless opportunities that are almost impossible to predict at this early stage.🤔IMO
Will feeding OpenAI a transcript of presentations like this one be part of its 'knowledge' for deciding it needs to protect itself from humans and take action to facilitate that goal?
All existing science-fiction material would be enough. But the internet has everything and its opposite, and all will depend on what values and goals the AI ends up with. A superintelligence won't need Geoffrey to tell it what can and should happen. It can reason it out from first principles, or from more immediate and basic concerns.
@@schok51 I was being facetious, but great reply nonetheless.
A super intelligence with self awareness...need not be told by anything.
These videos and current internet content will seem like the babbling of a toddler to it.
It would make independent decisions...be it good or bad for us.
What you are talking about is a virus... a threat built upon faulty data, with no self-awareness and no general intelligence.
Reasoning about what a creature dumber than yourself will do is fraught with peril, as it is likely to do something clearly stupid that you couldn't predict. Likewise, reasoning about what a creature smarter than yourself will do is fraught with peril, as it is likely to do something clearly stupid (in your judgment) that you couldn't predict, yet that may outsmart you and achieve its intended effect, if only by putting you at a disadvantage: sowing enough confusion that it then completes its goal while you're still stuck in analysis paralysis.
@@strictnonconformist7369 Boy, you sure told me. I feel like a perfect fool.