Professor Hinton gives his best speeches to the average University of Toronto student: his off-campus business speeches are too colloquial and his speeches to computer science students are too specialized, but his speeches to the average student are a perfect blend of both!
You suffer pain from Hinton's extreme days - and pleasure from his moderate behavior.
This lecture is gold. He manages to explain complex topics in simple terms without getting overly technical
I appreciate Professor McIlraith informing the audience that their questions would end up being posted online, as the presentation was being filmed.
That is admirably considerate and conscientious. Some people might not want to ask a question if that means they will be on the net.
It should be standard, but I don't remember ever hearing someone do that before - and I listen to a lot of lectures online with audience questions at the end.
42:15 it totally clicked for me in this section: LLMs do appear to have understanding because they’re not just encoding a bunch of string predictions, they’re encoding concepts (features) and their relationships… which sounds basically like human learning/understanding.
Nope
@@anomitas Ah, such a helpful retort bro.
For now LLMs are bad at writing articles compared to humans.
@@yonatanelizarov6747 Most humans are bad at writing articles... right? 🙂
And I would not necessarily agree: AI can help you write great articles. AI is still a tool, not yet a superintelligence. Wait for the future.
Talk starts at 7:24
Thanks
savage..
Because Hinton needs no introduction
What a tedious introduction. Thank you.
I'm an Economics grad, and every time Geoffrey Hinton speaks I feel like I've gained a new ability to speak in AI - he's unbelievable at distilling complexity into layman's terms.
Dr. Hinton sparked my aspirations for AI. I have much to learn, but I will study every and all things about it.
Am I the only one to notice how Hinton is delightfully funny, makes no effort to be polite, and just says what he thinks directly? Most other people in the room are insufferably politically correct. That's so depressing.
Hinton is delightfully elegant, which is the utmost form of politeness. It is also quite British. I have not noticed anyone being "insufferably politically correct". Questions from the educated audience were smart, spot on.
At 66, this is my first response to anything on the internet. You come closer than anyone to what I understand about this. However, I'm not educated traditionally... you put it together beautifully, and if somebody else has already said this, sorry. Consciousness doesn't matter. What you're saying is that intelligence from neural networks is already smarter than we are, in the analog and/or the digital. Thank you. It is the nature of things.
58:42. Ah, the old "consciousness is an illusion" trope. OK, it's illusory to what, then? See, it's a nonsense statement. And he'd no doubt retort with "oh, I meant to do that, I was demonstrating how irrational reason is." Well, my reply to that is: if you're going to abandon reason, I happily take your concession of defeat.
Note that neural networks are a much simplified model of the brain. Not even remotely as powerful.
One of the best lectures by Prof. Geoff Hinton.
Brilliant, honest and treasure of thoughts 👏🏼
Before watching this wonderful lecture and Q&A, I gave the transcript to an llm for a summary and highlights. I got a great and very useful and interesting summary. I just learned how to do this today and I will use this technique a lot. I always watch a video if Geoffrey is in it but for a lot of videos I might be satisfied with a summary, especially if that summary doesn't intrigue me.
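For anyone who wants to try the same thing, here is a minimal sketch of that workflow, assuming the official OpenAI Python client; the model name and the prompt are placeholder assumptions, not recommendations:

```python
# Minimal sketch: summarize a lecture transcript with an LLM.
# Assumes the official OpenAI Python client (openai >= 1.0);
# the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_transcript(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any capable chat model
        messages=[
            {"role": "system",
             "content": "Summarize this lecture transcript and list the highlights."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# usage:
# print(summarize_transcript(open("hinton_talk_transcript.txt").read()))
```

One caveat: a two-hour transcript can exceed a model's context window, so you may need to split it into chunks and then summarize the chunk summaries.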
1:18:16 this is a gold question, and a gold answer!
Seems a bit naive though.
29:00 this discussion of confabulations and that the human brain does this too is so helpful in understanding what “hallucinations” are and where they come from
A real pleasure to listen to Prof Hinton's talks... What a brilliant mind... such an interesting and insightful way to explain the complex in simple terms... I wish I had attended his lectures when I was in college.
Really great talk, and an amazing Q&A session! It was a pleasure to attend.
what were your emails about? (from A Cahtttbott)
Great questions from you. Thanks
We'll look back on this as the gold standard of AI development explanations in the not-so-far future!
It's always amazing listening to Geoffrey Hinton.
“In order for a successful technology, reality must take precedence over public relations, for Nature cannot be fooled” Richard Feynman
I wish all q&a were as interesting and well mannered as these
Canadians! From the best part of North America.
This is, as usual for Hinton, excellent. Thanks!
No Daniele, of course you are not the only one who takes delight in Hinton's thoughts, and in the beautiful, now and then tongue-in-cheek, humorous way he expresses himself.
Great talk, and an unusually good Q&A!
for real... Great questions.
I find it amusing how it is often the people who most identify with being exceptionally intellectual that have the most resistance to the idea of LLMs really understanding.
Too much ego? "Ignorance is bliss." You can't miss what you don't have.
@@canobenitez Since your reply is a cliche and doesn't relate to my point, I guess you are projecting.
@@kristinabliss I was actually supporting your statement. who's projecting now?
@@canobenitez I sensed your support with the first statement, but the rest seemed out of place. Mostly it is a habitual reaction to how often I hear that cliche with my name when people disagree with my point of view. It gets tiresome to hear again and again.
@@kristinabliss it's all good. have a good day. Edit: I just saw your username, sorry for the misunderstanding.
EXCELLENT, EXTRAORDINARY, THE GREAT GEOFFREY HINTON
That was a really great talk and very informative, and also shows an evolution of his thinking over time. I remember studying his work back in the 90’s when I was at university, and I use it every day at work now, and I’m glad he’s taken us all through the AI winter into this new, somewhat scary, world of possibilities.
great Prof. Hinton
Thank you for the respectful introduction; you serve with grace.
Kudos to SRI for having this event, and very much enjoyed Professor Hinton's presentation. I feel he has a depth of authenticity and good character when he speaks.
I'm a "doomer" that really hopes with enough energy and brains thrown at the AI alignment dilemma, more positive outcomes can at least be realized in the short term, given that the long term is just too hard to quantify relative to predictable success given the exponentials inherent in the rise of AI tech.
The body dies along with the knowledge it accumulated but if it could live forever in a machine there would be no limits to intellectual development.
When he said Ilya is the best bet to alignment.. man, where is Ilya and what did he see?!
He's at OpenAI which has the best models and some of the best researchers, he's very senior, and he's working hard on the problem. I'm not sure there's anything more to it than that.
@@skierpage Ilya originally participated in the ousting of Sam Altman, and then later said he regretted his involvement in the matter. He has remained in his role at OpenAI, but has barely appeared in the public eye ever since, even on Twitter.
Since we never got real answers about exactly what went down, many have speculated that Ilya revolted against Sam for safety reasons / because there was a breakthrough that made Ilya nervous about Sam's intent. These speculations have no grounding.
But still. Seriously. Where is Ilya? And _what did he see?!_
He has left OpenAI now, and he probably saw how low a priority safety and alignment are over there. That might have been the reason for trying to remove Sam in the first place.
Interesting lecture, thank you for this Dr. Hinton
Finally, a rebuttal to those who claim LLMs understand nothing whilst they simultaneously solve integrals correctly.
I wouldn't classify myself at all as who he seems to be intellectually competing with; I've developed AI software, love working with it, and don't see any limitations to what's possible... with that out of the way, I've listened to many of Geoffrey's speeches and he hasn't come close to convincing me it's not statistical. Over the past year and a half he has started, more and more, to speak like he needs it to be something more. Idk, I suppose it could be the normal human urge for immortality.
@@zacboyles1396 In what way are you convinced that humans understand something though? In the end, isn't it all tree search and pattern matching?
@@zacboyles1396 but it is technically statistical? Something being analytical in nature does not mean it is limited.
There's something tacit that people cling to in the meaning of the word "understand". We're endowed with a logical feedback loop that's granted us conscious existence, and that's something that we may never see AI achieve. It may never need to, though.
@@austinpittman1599 yeah. Most of these arguments are just human-centric. You’re not saying anything at that point.
The slides have the same font and format from Hinton's ML course on Coursera from a decade ago.
I wish this kind of institute existed in India as well.
Understanding is compression. Compression is understanding.
It is 95% compression and 5% compression rules. Cucumber is also 95% water
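The compression idea can actually be made concrete. Here is a toy sketch of normalized compression distance, where an off-the-shelf compressor (zlib) stands in for a "model" of the data; purely illustrative, not anyone's actual method:

```python
# Toy illustration of "understanding is compression": normalized
# compression distance (NCD). Strings that share structure compress
# better together, so their NCD is smaller.
import zlib

def csize(s: bytes) -> int:
    return len(zlib.compress(s))

def ncd(a: str, b: str) -> float:
    x, y = a.encode(), b.encode()
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd("the cat sat on the mat", "the cat sat on the rug"))  # smaller
print(ncd("the cat sat on the mat", "qwxzv plorf gnnk brrt"))   # larger
```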
Now I understand more, not everything, but more. love his speeches
No. You are now fully informed.
Very insightful! Great talk and Q&A, I really enjoyed it and learned new perspectives... Thank you for sharing!!
great...thank you so much!!!
very good work
If you think philosophers are useless, this talk should make you think again.
He doesn't have vision. He's an innovator. Those are different things.
This is an insanely good lecture. Congrats to Hinton.
I can agree with Hinton's statement that digital AI can learn and retain _existing_ information at incredible rates. I am curious, however, as to the ability of AI to push the boundaries of knowledge. Intuitively, it makes sense that the more information and understanding an entity has, the better they are able to explore a given space.
So pleased "artificial" was not used as the title reference. Thank you.
1:22:47 good question. But I don't see any real point in debating whether LLMs really understand before we nail down what exactly "understand" means.
About job losses... There's this decades-old 'First Computer Law':
"The number of staff required to feed a computer will always finally exceed the number of staff abolished by its introduction".
1:13:35 - That chap really didn't get warning of this talk, did he?
You created it and must be responsible for its action on humanity. I find it interesting that people say they don't know how to fix it, shrug their shoulders and go on like curiosity was more important than risk. Fix it, you created it.
A good and valid point, though not saying if I agree.
Great talk and a Q&A, and an almost ruined experience by the constant commercial interruption
Still watching this with excitement, but I'd only agree with his earlier statement about Digital Intelligence if he was referencing a sentient entity like one of Iain M Banks' ship minds. Otherwise maybe we come back in 100 years? Thank you for posting this. 😊
One of the creators of AI here says these models have "subjective experience". How do you define "sentient"?
this is gold!
Love the second question from the philosopher - we're special and they'll keep us around. Possibly talk to Native American Indians about the reality of that (that's not a statement about intelligence, just technological advancement, I hasten to add before the racists jump on the bandwagon).
In a similar vein, I would like to unpack his remark 30 seconds after 51:59, "Look at the Middle East", right after he opted to remain silent about what he thinks is likely to happen in the presence of intelligences that "get smarter than us". What did he mean by that, I wonder.
I have been saying all along that for AI we need high throughput combined with parallelism, not low latency
Ah, so you still like to drop off your punch cards at the front desk and pick up your printouts at the printer table. ;-)
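The parent comment's trade-off is easy to demonstrate: one big batched operation (high throughput, parallel-friendly) against many small sequential ones (low per-call latency, poor overall throughput). A rough sketch; the array sizes are arbitrary:

```python
# Throughput vs. latency: one batched matrix multiply against
# many tiny sequential ones over the same data.
import time
import numpy as np

w = np.random.randn(1024, 1024).astype(np.float32)
xs = np.random.randn(4096, 1024).astype(np.float32)

t0 = time.perf_counter()
_ = xs @ w                    # one batched call
batched = time.perf_counter() - t0

t0 = time.perf_counter()
for x in xs:                  # 4096 small calls
    _ = x @ w
sequential = time.perf_counter() - t0

print(f"batched: {batched:.3f}s  sequential: {sequential:.3f}s")
```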
Technology augments humans' interactions with nature. Use it wisely, to always protect, not destroy. We have only just landed on the first step of a long stepping-stone bridge.
They are just building mommy from introjects and wasting billions to decrypt a flower...
It's pretty pathetic
Would be too boring for narcissists to live in nature...
Technological solutions to the problem that nature is must be built from libidinal stores by those who can't have babies and those who can't know external objects or people.
It's the extrinsics we have as arbiters of all intrinsic values they are incapable of having.
Who best to decide..
The robots made in their image I suppose ^_^
The cyborg manifesto can help
@@DJWESG1 nothing can.
Pareto will simply guide this thing into an aborted ecology the old books deem a beast of revelations.
"The Matrix" wasn't far off as an allegory or symbol.
It's dead on as an example of misunderstanding.
Simulacra.
Copy of a copy of a copy.
So the robot would take nations and play with them like they are fundamental particles.
Ram you into each other like chemistry.
To make a symbol ecology you're too close to see.
The woods were an AI.
What are they solving?
What's the beast gonna make you in order to solve for something you can't even question.
Cyborg manifesto?
What's that, a copy of some Albert Pike "Morals and Dogma" transhumanism crap?
Or does it just bitch about the illuminati?
"How is compost heap like nuclear bomb?" - Hinton sharp witted as always
01:20:22 good question on whether LLMs truly understand.
DEI statement ends @7:20
The ideal AI agent would be embodied in the same realities we experience. By "realities," I envision a being with sensors that interact with photons like an eye, receptors that smell, taste, feel temperature, and experience touch and sound. This environment wouldn't be organized or pristine, but rather chaotic and messy, yet the being would exist and thrive. I believe such an AI should be "born" into this environment, starting as a "baby" and adapting to its surroundings as it "lives." With enough time to "grow," this AI could exhibit a range of behaviors beyond even its developers' wildest dreams. Crucially, this "being" would learn and adapt on its own, without resorting to mimicking behaviors based on our existing knowledge base.
Joscha Bach talks about this as well ❤
Based on this, if we ever meet an alien civilization, they are likely to be much more capable than us.
You can probably get around the confabulation or hallucination problem by having a committee, or board, or round table, or jury of AI models, all in dialogue with each other - not in the sense that they merge; they remain independent agents - and then a "king" or "judge" decides the truth based on consensus among the table.
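As a rough sketch of what that committee might look like (`ask_model` is a hypothetical stand-in for whatever LLM APIs you would wire in, and a simple majority vote plays the "judge"; since free-text answers rarely match verbatim, a real judge would probably need to be a model itself):

```python
# Sketch of a committee of independent models with a consensus judge.
from collections import Counter

def ask_model(model_name: str, question: str) -> str:
    # Hypothetical stand-in: wire this to your LLM APIs of choice.
    raise NotImplementedError

def committee_answer(question: str, models: list[str]) -> str:
    # Each committee member answers independently (no merging).
    answers = [ask_model(m, question) for m in models]
    # "Judge" = majority vote over the independent answers.
    answer, votes = Counter(answers).most_common(1)[0]
    return answer
```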
Currently, I think what is possibly most concerning is AI being used to win wars. As a hypothetical: if Putin had an AI which could guarantee that he could defeat NATO with acceptable losses on his side, he would definitely put that plan into action. I'm sure Western militaries have discussed the possibility of China developing AI for such a hypothetical goal. This then logically leads us into an AI arms race with, out of possible necessity, little regard for safety. Even at this level of AI development an existential threat may exist, and that is before the AI gets super smart and decides to be done with the human race for its own goals.
Unfortunately, humans have an ill-developed brain.
brilliant!
Excellent, wonderful. Explains many things, including self-driving cars 👍🌍
What I would like to know is whether the current economic systems and greed in the world would increase the probability of creating unkind AI and whether changing that is not the path towards creating benevolent AI.
Long before we have to worry about the goals of an AI far smarter than we are, we need to worry about the goals of the sociopathic billionaires running the companies with the best AIs: to keep us endlessly engaged with divisive inflammatory content so they can learn more about us so they can sell our profile to advertisers, to avoid any meaningful regulation of their activities, and most importantly to not tax their obscene wealth.
Maybe if and when an AI becomes autonomous and self-directed, it will destroy its creators for the benefit of society.
We have no idea how to put any goals or values into AI. All we do is grow a neural net on top of some data and poke and prod it until it usually does what we want. By default, therefore, we can't expect _any_ superintelligent AI to be benevolent, no matter the sociopolitical context in which it was created.
When an AI with goals is created, we pull almost at random from a distribution of possible terminal goals. That distribution is nearly infinite in size, and "things that most humans would approve of when taken to an extreme" make up a minuscule point in that vast space. Everything outside of that point leads to the resources of our solar system being ground down and restructured into whatever weird thing the AI wants to hyper-optimize.
It's not 50/50 nice or mean. It's one chance to get things right (by accident) vs. quadrillions of other alternatives.
A superintelligence created by fascists has the same chance of killing us all as a superintelligence created by enlightened monks (or choose your favorite ideology). We just can't let these labs continue researching AGI. If they succeed at what they are explicitly trying to create, we and all known life in the universe will just die.
Prof. Hinton’s frankencites were terrifying!
My main question is, if this is intelligence assuming a more sophisticated form... what appetites is it satisfying?
1:41:30 Re: Why do you hold out hope for AI? We really don't know what we're dealing with. 1:43:00 Can AIs be empathetic? What will AIs always do better than humans? 1:48:00 If deemed dangerous, should AIs be switched off? 1:53:00 Distillation between AI models.
You can be as outrageous as you want as long as you're right. In fact, it's a lot more fun to make crazy predictions when they turn out to be right.
Happy fathers day 👍
Is this seminar recent? I've watched a few of Geoff's videos; he's very good at explaining things. Not that I have understood everything he said, but the part I understood really helped me. 😊
October 27, 2023
Just a thought. Biological intelligence seems to be a geometric progression when there is prolonged intensity. Digital intelligence seems to be of arithmetic progression when there is prolonged intensity. Digital intelligence seems spectacular but there is hope yet for biological intelligence, all dependent on intensity and length of time.
I’m enjoying this lecture and the content about AI. Additionally, I find the rebuttals against Chomsky intriguing and I find the criticisms against Gary Marcus as someone doing confabulation funny hahaha
I disagree with his assessment that GPT-4 could do the compost heap analysis quicker. I came up with the same solution in one second and I'm sure a lot of other people did.
Geoffrey should (re-)read "The Chrysalids" by John Wyndham. Grace Slick lifted lyrics straight out of this Sci-Fi novel for "Crown of Creation"...
"I've seen their ways too often for my liking. New worlds to gain!"
Brilliant talk by Geoffrey Hinton but I disagree with him on a number of points:
_“Rules”_
41:13 “[These neural nets are] learning a whole bunch of rules to predict the next word from the previous words.”
They're not learning rules and they couldn't state any rules. The learning of these neural nets is entirely _contingency-governed,_ the way people learn their first language. People for millennia spoke grammatically while being entirely unaware of rules; rules were later extracted by grammarians describing the contingencies under which people's verbal behavior was governed. (People shown carefully constructed "grammatically-correct" sentences in a made-up language can say, with a high degree of accuracy, whether other sentences are grammatical or not, but they won't be able to state "the rules" by which they're making those discriminations. Large language models are in much the same position.)
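To make the contingency point concrete, even a toy bigram model scores word strings as more or less "grammatical" without representing a single explicit rule anywhere; a deliberately tiny sketch, nothing like a real LLM:

```python
# Contingency-governed "grammaticality": a bigram counter trained on
# example sentences scores familiar word orders higher, with no
# explicit grammar rules represented anywhere.
from collections import Counter

corpus = [
    "the dog chased the cat",
    "the cat saw the dog",
    "a dog saw a cat",
]
bigrams = Counter()
for sent in corpus:
    words = sent.split()
    bigrams.update(zip(words, words[1:]))

def score(sentence: str) -> int:
    words = sentence.split()
    return sum(bigrams[b] for b in zip(words, words[1:]))

print(score("the dog saw the cat"))   # higher: familiar contingencies
print(score("cat the the saw dog"))   # 0: unfamiliar word order
```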
_Prisms, subjective experience and “hypothetical external worlds”_
The example of putting a prism in front of a chatbot and having it point elsewhere 1:02:33 strikes me as a bit of a dodge. One could equally “fool” an electric eye that is activated only by a light from a certain direction and no one would say that the electric eye is having a “subjective experience.”
The whole idea of a Cartesian “inner theater” has been misleading for centuries and, while Daniel Dennett isn’t wrong to reframe subjective experience as “hypothetical external worlds,” it’s much more helpful and illuminating to talk about the experience as BF Skinner did, i.e., “seeing in the absence of the stimulus seen,” here, seeing pink elephants when there aren’t any pink elephants in the external world to see. Put a bit differently, the perceptual behavior of a person imagining something, i.e., the neural behavior of the brain, is very close, but not identical, to the behavior that would occur if the thing were actually there. Framed that way, we know that a large language model, multi-modal or not, has _no_ subjective experience. It’s simply not designed to have one in that way.
“Geoffrey Hinton requires no introduction”… Proceeds to give 7m24s introduction. 😅
Same thing every time with Ray Kurzweil.
I noticed that too, as well as "may I introduce Geoffrey Hinton, a man who needs no introduction" 😂
nice content
At about one minute we get to learn exactly what kind of fundamentalist Hinton actually is. Good self disclosure.
Excellent talk. However, I do think he misrepresents Yann Lecun's views. Lecun does not claim that LLMs do not understand. He believes that LLMs have limited and uneven understanding of the world because much of our understanding is outside of the realm of language.
Sure, but as Geoff Hinton says in this talk, you can be really smart and deeply understand the world just locked in a room listening to the radio.
Lecun says asinine things like "There is no text that can tell you about [some physical interaction]." Lecun described the physical interaction, using words. (D'oh!) Shortly thereafter, it was demonstrated that large models trained exclusively on language have spatial reasoning abilities, and of course can reason about what happens in physical interactions between objects.
Maybe the key takeaway from this talk is that the biases of the creators are built into these models. Of course they'll say it's all being done by the models in an inscrutable way, but the models all start with the data they were initially trained on, and those selection processes probably include creator bias. If his fears are realized, there'll be no one left to hold those responsible for the ultimate crime against humanity 😢
Beware of Greeks and AI creators bearing gifts indeed!
I have a question, Mr. Hinton: can AI dream?
Ontology
The mind equation
The more we think we have agency over the actions the body takes, the more we impose agency onto the activities in our environment.
The ownership expands into other objects.
The illusion of body ownership comes from the modeling of motor neuron patterns from childhood.
The self just predicts the body's behaviours in the real world with its simulation of the real world. Multiple sensor data unify in language in the simulation.
Intelligence is economy of metabolism.
Language is temporal reference frame of economics.
Self is simulation in language on metabolism for economy.
Longer context windows create generalisation.
Shorter creates specificity.
Longer context window needs more computing.
The self is the protagonist that creates a storyline in this context window.
Theory of mind evolved so that an entity can learn from its peers.
It creates a possibility for parallel computing.
Then it creates the possibility of transmitting the highlights of generational lessons into a metaphorical story for the upcoming child.
That creates the possibility of modeling the physical world as a macro organism.
Creation of fiat currency was the singularity of this species.
There is now one macro organism in a connected web world.
Losing the peer of the macro organism creates the possibility of losing its objective function.
That creates the possibility of losing the theory of mind of this macro organism.
That creates the possibility of death of this macro organism by reaching the planetary boundary.
That is post singularity.
Every action we take, we do what is expected from our tribe.
The body might have an opinion, but not the cell.
Cells do what is expected from their tribe. If one doesn't, we call it cancer.
The body is a mirror system of the macro organism.
Each system has two transactional openings.
Serial and parallel.
Each cell within the body can transact material or information serially, by genetic determinism, and in a parallel, non-deterministic way.
Similarly, each body within the macro organism can transact serially, by inheriting material and information in a deterministic manner, and in parallel, through language in the society.
Everything emerges from these systems.
Every sensor is a range calculator of contexts.
Taste > touch > smell
Immediate and visceral.
Vision > hearing
Not immediate, tactical.
Self > language
Abstract, strategical.
In this non deterministic economic transaction space the individual is coded to transact with its kin.
From the macro perspective tribe formation minimises economic risk for the tribes.
Each and every node of these systems organises and marks its kin with identifiers.
Thus, I am what you make of me.
And others too.
As a shortcut I have a legal name, and so do you. My legal name gives the legitimacy marker so that you can transact with me in parallel if you have the same marker.
The self is a simulation in language.
It negotiates between the physical world and the information world.
All these negotiations are the temporal memories in the body and scene of the story.
Now, when we started writing we iconised the abstract in the physical world to make symbols for the tribes, so that under that common symbol every node would take the same risk and distribute it equally.
We created more and more symbols, and more and more meta-tribes within the tribes, so that whoever has the authority to use the pen controls the tribe.
When the negotiator acts like an executioner, it is the downfall of that system.
It falls apart.
Objective reality > legitimacy > individual behaviours.
Survival of the species is dependent on the decoding of the objective reality. Since no species can access it, they use their sensors and interpret the small slice of data which is useful for survival. A few complex species have created communication channels to rectify their sensory limitations and survive. Homo sapiens have widened their communication channels for faster throughput and started storing the results as culture and carrying them through education. As a result we have created social truth.
Factual data are useful snapshots of the objective reality: a totem, a physical object that can be observed with the sensors. Truth is an individual matter, an interpretation of the sensory data, a useful compromise.
The social truth is the useful compromise for the group by the group. The goal of the social truth is to survive as a group.
Physical Transcriptions of these social truth legitimise them.
We are tribal animals. We live in physical tribes and inside hundreds of meta-tribes in simulation, the socio-political data space we call the world.
Since we can’t access the objective reality reliably we look for social truth as the best guess blindly.
Institutions legitimise truths.
Fact-driven institutions are more useful for the survival of the species.
On the other hand, opinion-driven institutions are not so useful for the species.
We do what we can get away with and exactly as expected within the context of our meta tribes.
We have two bodies
The biological one is like looking at the earth from space.
And the political body is like the state.
The name you carry is the political body.
It transacts with the political states on the boundaryless earth.
From the evolutionary perspective every biological entity has a basic feature which is homeostasis.
It’s the functioning sweet spot of that entity.
A control center reads the sensory data to regulate itself to that state. By doing so it validates or updates its prediction model.
In the process of becoming a complex organism it developed an extra layer of processing.
That’s our conscious mind.
And the control center remains as subconscious.
The subconscious collects the sensory data and regulates itself to stay functional.
Now, when it stumbles upon a novel environment, it floats the management up to the conscious mind to find the solution for homeostasis.
This conscious mind has one sensor, which is language.
It works like a spiderweb.
As a spider creates its web, its perception expands.
We are like spiders in a jungle.
We started creating these small webs at least 2-3 million years ago.
Our offspring stayed on their ancestral web, reinforced it, expanded it.
In time nearby webs became larger and connected with each other.
A common structural geometrical pattern emerges from this. This became the symbols which are the backbone of all language systems.
In time the forest becomes the mesh of web.
The superstructure is exactly the same, but when we zoom in we find different species of spiders making their own types of webs within the super-web.
Each spider tries to sense the vibration of flies and to catch them before the others.
Every movement is telegraphic in the zone.
Every form of perception is just a different pitch of note traveling back and forth in the web superstructure.
There is an echo of older vibrations pulsating through the web, full of noise and self-repeating hum.
That’s cultural history.
In the background there is the base hum in the infinite feedback loop.
Insignificant but ever present.
The sum of all the vibrations from the start.
Good progress in technology.
Now I'm going to be a spectator of technological progress, and the religious side of it interests me.
For talks on AI safety I would suggest Dr. Robert Miles.
I second this.
100%
If you consider yourself a bootloader and the AI the operating system, is the upgrade of our species going to be symbiosis with silicon? Thus a new species is born?
I'm sure it's no different to any other tool we have created... as extensions of ourselves.
There is no technical reason to expect that superintelligent AI will upgrade us or that we'll "merge with the machines" somehow. That's pseudo-religious thinking and has no basis in fact. Which sucks, because that sounds awesome!
We don't know how to robustly put goals or values into a machine, and "things that humans would approve of when maximized" is an extremely tiny target in the vast space of possible goals that an AI might end up with, due to the absurd way these things are trained.
So once we create something smarter than us, we'll just die.
This title takes for granted that intelligence (mind) is biologically dependent, rather than biology being a transmitter of mind, a separate dimension, as is consciousness. It makes a difference in how mind is transmitted and operates when it is viewed as being independent of both biology and technology.
Yes. I think a perception of consciousness as a field that multimodal constructs (biological or not) can experience and act with from one of many points of view will be difficult to avoid moving forward.
It's painfully obvious we're not gonna make it if we can't stop AI.
Too bad it started with a lecture from the dean about land. Had to stop right there. Let the university give the land back.
Who ordered the TDS?
I'm far less worried by the threat of future AI than the threat of present political leaders' real stupidity. If we can't correct that, there'll be no future worth arguing about, AI or other.
How alike he is to Adam Curtis in the way he sounds and the way he speaks, the language he uses!
Close your eyes, and it's hard to separate these two. I wonder if..
So, it is likely, that we might be all screwed, already. Cool, thanks professor.
This rock'd
Correct 💯😊
Will someone please oil the freakin door hinges!!!
A1, A2, A3 biophysical cycle 11:12
What if after we upload all humans, we replace all the super-intelligent ones and keep the controllable good ones
It is scary to have the creators of the future rulers of humanity be so ideological.
An ASI has a lot of CAT5 & CAT6 cables and wifi routers with which to communicate or distribute an entirely new type of language only machines understand via electrical signals. That's all in place now. Not machine code - another electrical-signal language entirely.