Geoffrey Hinton - Two Paths to Intelligence

  • Published Jan 10, 2025

COMMENTS • 405

  • @Senecamarcus
    @Senecamarcus 1 year ago +25

    Thank you for uploading this for us to watch! I appreciate that.

  • @LiminalLogic
    @LiminalLogic 1 year ago +44

    I love this video. Brilliant and low-key hilarious! I'm consistently impressed by Geoffrey Hinton.

    • @AmericanBrain
      @AmericanBrain 1 year ago

      But he admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven stuff, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @RougherFluffer
    @RougherFluffer 1 year ago +52

    What a wonderful talk. His humble approach and his acknowledgement of where he lacked particular knowledge were heartening to witness. That he has logically deduced some of the main arguments of the alignment problem speaks volumes about his reasoning abilities. I'm very glad he's leveraging his position to try to promote such vital messages.

    • @wk4240
      @wk4240 1 year ago +3

      It will take many more, like Mr. Hinton, to make a difference - as to what direction we take with AI, and to what extent.

    • @richardpaczynski5486
      @richardpaczynski5486 1 year ago

      Very well put; thanks

  • @TheLastUniqueName
    @TheLastUniqueName 1 year ago +82

    “There’s no examples of a more intelligent thing being controlled by a less intelligent thing” - Tell me you don’t own a cat without telling me you don’t own a cat.

    • @gdraskovic
      @gdraskovic 1 year ago +7

      Perhaps the cat is thinking the same thing.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      Just shows how easy it is to manipulate a human.
      (As a cat person myself, it's the endorphins that do it. The little kitties are so fuzzy wuzzy!)

    • @Drookup
      @Drookup 1 year ago +4

      Maybe the cat is really intelligent

    • @prestonlui6451
      @prestonlui6451 1 year ago +1

      But cats are more intelligent, cute overlords

    • @Custodian123
      @Custodian123 1 year ago +2

      The same idea applies to dogs. My pug knows she can get me to do something she wants if she acts in a particular (specifically cute) way.
      This actually gives some insight regarding the future of super intelligent AI and humans. If we don't have control, it's likely we can still have some amount of influence. Maybe.

  • @kandoit140
    @kandoit140 1 year ago +13

    I always love listening to Geoff; he is so insightful and has a great sense of humor. So interesting to hear him talk!

  • @kenmogibrainworld4844
    @kenmogibrainworld4844 1 year ago +9

    When Prof Hinton discusses the nature of qualia from the counter-factual point of view, there is a spark of things to come. I look forward to further expositions on this.

    • @DirtiestDeeds
      @DirtiestDeeds 1 year ago

      Yes, the world is our lobster! We just need the piping at the international/national/regional/local level, along with a 'One AI per child' policy...
      Also, stop the training runs immediately.

    • @PazLeBon
      @PazLeBon 1 year ago +1

      it isn't factual though lol

    • @AmericanBrain
      @AmericanBrain 1 year ago

      Ken, stop it now. He admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven stuff, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

    • @AmericanBrain
      @AmericanBrain 1 year ago

      What are you even talking about? @@DirtiestDeeds Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven stuff, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @_obdo_
    @_obdo_ 1 year ago +25

    Great talk. It’s impressive to see someone speak out on such a polarizing topic, based on having grasped it purely intellectually even though, as he says, his emotions haven’t nearly caught up yet.

    • @PazLeBon
      @PazLeBon 1 year ago

      Why polarising? It's just software at the end of the day; nothing that new about it in many senses.

    • @_obdo_
      @_obdo_ 1 year ago

      @@PazLeBon The topic of AI risks has unfortunately become fairly polarizing, and Dr. Hinton has recently shifted his position on that topic, some of which comes out in this video (even though that’s not the primary topic).

    • @Petrvsco
      @Petrvsco 1 year ago +1

      @@PazLeBon "Just software"? I think you missed the part that mentions how this can quickly become an existential risk. Or you misunderstand what existential risk means in this context.

    • @tappetmanifolds7024
      @tappetmanifolds7024 1 year ago

      @@Petrvsco
      Elaborate and elucidate.

    • @tappetmanifolds7024
      @tappetmanifolds7024 1 year ago

      By enforcing personal opinions based on perception from misconception, especially when swayed by political bias, how can the advancement of a system progress, if decision problems are not permitted to evolve because they are restricted by preventions?
      Distillation would do well to find pools of resource in the entropy of the not yet known.

  • @DaniloNaiff
    @DaniloNaiff 1 year ago +70

    It is really impressive to listen to Geoffrey Hinton. I think this lecture may sound strange to most, but he really seems to think like a cognitive scientist who simply wanted to make a nice model of the brain.

    • @dobermanlove777
      @dobermanlove777 1 year ago +3

      That's exactly what I thought when listening to this presentation!
      It's quite a romantic approach for the human brain to try to recreate a digital, and thus mathematical, representation of itself. Especially when you also see the link between how neural networks communicate and how society does, as in the example of Trump's tweets.

    • @paulm3969
      @paulm3969 1 year ago +3

      I actually find him really irritating; I think he is quite presumptuous.
      He makes a lot of assumptions and then uses them as arguments.
      For example, he keeps saying that people think they're special. What is he on about? Yes, some people think they're special, but it's as if he is the only person on earth who thinks otherwise. I know very few people who think they're special or really smart, and I'd say most people already know Google is smarter than them. So I don't know where he gets that idea, unless he is projecting.
      I also think he is a bit of a fool for saying things like "Trump would use these things to win elections". Why not just shut up and stop giving Trump ideas?

    • @jebprime
      @jebprime 1 year ago +6

      I think he’s referring to how some people believe intelligence and consciousness are something special or unique to humans, that cannot be replicated by a machine

    • @PazLeBon
      @PazLeBon 1 year ago

      @@dobermanlove777 yet the facts are they have absolutely no clue how we think, irrespective of how they dress things up

    • @PazLeBon
      @PazLeBon 1 year ago +2

      @@paulm3969 I'm like you, I always get irritated by 'we' or generalisations that simply are not how I think haha

  • @41-Haiku
    @41-Haiku 1 year ago +4

    Hinton is a delight. His voice is a very welcome one for the AI safety community.

  • @JustJanitor
    @JustJanitor 1 year ago +1

    Thank you very much for making this available

  • @whalingwithishmael7751
    @whalingwithishmael7751 7 months ago +1

    One of the only people with a real take on this. Most people don’t think it will be sentient, and most people haven’t fathomed the dangers these entities could pose.

  • @DreamzSoft
    @DreamzSoft 1 year ago +1

    Sir, you are too good, and listening to your views we're thankful to have people like you around us ❤😊 thanks

  • @boremir3956
    @boremir3956 1 year ago +102

    I have noticed that oftentimes those who are highly intelligent are very hesitant to admit that they are knowledgeable or should be viewed as an authority in a specific field, like Geoffrey Hinton here. On the flip side, those who are the loudest and think themselves capable of giving advice and knowledge to someone else are oftentimes the least intelligent.

    • @nescirian
      @nescirian 1 year ago +19

      This is an observation that a lot of people have agreed with - for example, in 1950 Bertrand Russell wrote that "The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt". There are studies that support the idea, and in psychological circles it is known as the Dunning-Kruger effect, which is a useful search term if you wanted to learn more on the subject.

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago +10

      Dunning-Kruger in effect, which in this case is important. But, and I may be incorrect here, I notice a lot of people suffering from Dunning-Kruger use Dunning-Kruger as a bludgeon on people.
      I suppose since it's an ethical or cognitive blind spot, it is akin to suffering from confirmation bias, yet I feel there is an added moral component of Dunning-Kruger that I'm not sure actually exists, though I definitely feel it to be so.

    • @kinngrimm
      @kinngrimm 1 year ago

      Look up the Dunning-Kruger effect; I think at least the second part of your statement is described by that.

    • @poemerlee9437
      @poemerlee9437 1 year ago

      Can’t agree more.

    • @matthewcurry3565
      @matthewcurry3565 1 year ago

      I'm glad you just found out that you live under, and are ruled by, cranky, narcissistic toddlers. Now, get back to working for that system.

  • @yunwang1243
    @yunwang1243 1 year ago +2

    This is such a sincere talk.

  • @AntonMochalin
    @AntonMochalin 1 year ago +2

    I was most intrigued by Hinton's view of subjective experience, which is actually quite close to particular psychology theories emphasizing the social nature of consciousness. If those theories have some truth to them (and I'm pretty convinced they do), having some form of subjectivity like ours isn't going to be hard for ML systems. What they still lack, and what I think is preventable, is having a personality as a hierarchy of motives (vaguely similar to what Hinton mentioned about the goal of having more control serving many other possible goals), because for now an ML system's simple "motive" is doing the task we set, providing the "right answer" so to speak, so we're more likely to fool ourselves if we are not careful enough with the definitions of "right answers". However, Hinton is right about the dangers of allowing ML too much unsupervised agency, so the solution could be in the development of specialized systems and the prevention of general-purpose systems like GPT-4, or at least preventing copies of those systems from sharing too much general knowledge.

    • @geaca3222
      @geaca3222 1 year ago

      It would be interesting to know what Dario Amodei of Anthropic thinks about your suggestions

  • @KemptonLam
    @KemptonLam 1 year ago

    52:29 Amazing (and surprising) answer to hear Prof. Hinton talk about thinkers that affect his own thoughts on risks from AI.

  • @loopuleasa
    @loopuleasa 1 year ago +6

    tldr on how teaching and learning works for us:
    "To learn from the words coming from my mouth, your brain is trying to change its connections to make it likelier that you would reasonably say that string of words yourself."
    He taught me to say that

    • @greencoder1594
      @greencoder1594 1 year ago

      The question is though, *why did you repeat.* And why did you post. Is it for the likes, the joke, do you think you know? Because it is not the reason you are going to proclaim.
      Also, thanks for your tldr.

    • @bobsmithy3103
      @bobsmithy3103 1 year ago

      I'm not sure I'd agree with Hinton on that. A human's goal is learning the underlying concept, whereas an LLM's goal is to learn surface-level concepts, but in order to do so it is forced to learn the underlying concepts/models. Note that the human is not necessarily optimizing to better predict which word/token comes next, which is the case for LLMs. (In other words: for humans, word prediction is a consequence of the goal of learning underlying models. For LLMs, word/token prediction is the goal, and learning the underlying models is a consequence; that objective is sketched below.)
      It's a slight but useful distinction.
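      A minimal sketch of the objective this thread is debating, assuming a toy vocabulary and random stand-in logits (all names here are invented for illustration): an LLM's entire training signal is the cross-entropy of the next token, and any grasp of underlying models has to emerge in service of that one number.

          import numpy as np

          # Toy next-token prediction objective: the model is scored only on
          # how well it predicts the next token given the context.
          vocab = ["the", "cat", "sat", "down"]   # hypothetical 4-token vocabulary
          V = len(vocab)

          def next_token_loss(logits, target_id):
              """Standard LM loss at one position: -log p(target | context)."""
              p = np.exp(logits - logits.max())
              p /= p.sum()
              return -np.log(p[target_id])

          rng = np.random.default_rng(0)
          logits = rng.normal(size=V)             # stand-in for a model's output
          target = vocab.index("sat")             # ground-truth next token

          print(next_token_loss(logits, target))  # minimizing this is the whole goal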

  • @jonatan01i
    @jonatan01i 1 year ago +7

    Btw, humanity also learns by averaging, through evolution (a toy version is sketched after this thread).
    Every one of us runs with slightly different config settings, and the most successful units make more children - at least that was the case for a long time.
    It's the species' hardware that is learning through evolution.

    • @PazLeBon
      @PazLeBon 1 year ago

      lmao no, the intelligent ones have fewer children now :)
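    A toy version of the population-averaging idea above, in the style of a basic evolution-strategies loop; the fitness function and every constant are invented for illustration.

        import numpy as np

        # Toy "evolution as learning": a population of configs, selection of
        # the fittest, and averaging into the next generation's base config.
        rng = np.random.default_rng(0)

        def fitness(config):
            # Hypothetical fitness, peaked at config = [1, 2, 3].
            return -np.sum((config - np.array([1.0, 2.0, 3.0])) ** 2)

        base = np.zeros(3)                               # the species' shared "hardware"
        for generation in range(200):
            pop = base + 0.1 * rng.normal(size=(50, 3))  # slightly different configs
            scores = np.array([fitness(c) for c in pop])
            survivors = pop[np.argsort(scores)[-10:]]    # most successful units
            base = survivors.mean(axis=0)                # averaged into the lineage

        print(base)   # drifts toward [1, 2, 3] with no gradients anywhere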

  • @charlesje1966
    @charlesje1966 1 year ago +1

    That is fascinating. I use ChatGPT to assemble code for microcontrollers, and I can see how this lecture points to the future of that endeavour. We will replace the 'human code' layer with hardware anatomy that has been optimized for a task through AI.

    • @tappetmanifolds7024
      @tappetmanifolds7024 1 year ago

      @charlesje1966
      Given that the English language is extremely rich in its historical contextuality, as well as in its ambiguity and nuance, does our ability to construct machines which can decide our channels of communication for us cause greater divisions between people who are unable to express a posteriori knowledge?
      Is this the antithesis of the humane computation which seeks, through physical interactions and debate, our true purpose as a species?
      Religion and belief systems aside, we still need to, in Professor Hawking's words, keep talking.
      Is the most efficient way to acquire knowledge actually to 'get' the entire distribution and a precise interpretation of it?

  • @danielrodio9
    @danielrodio9 1 year ago +2

    07:45 There are numerous websites on the web about paint fading over time and how to solve those kinds of problems. True abstract hypothetical-deductive thinking would require problems that are qualitatively different from the data it has been trained on. How does Hinton know for certain that GPT-4 has not been trained on any of those websites?

    • @MrDavidbr1970
      @MrDavidbr1970 1 year ago +1

      Bingo. I was expecting that he would say something about the training set, that they knew it was a completely new task that GPT-4 could never have picked up from the web data corpus, because it was so obvious it could have done that. But he never said anything of the kind, and _nobody asked_, which is much worse, because the audience is amenable to manipulation. BTW, if it were an avatar, then maybe people would have a proclivity to double-check. Yet when a renowned, famous scientist says something, psychologically there is a lower proclivity to check or critically validate it.

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +10

    What a fascinating talk - this man is a hero

  • @cmilkau
    @cmilkau 1 year ago +4

    "Modern" cryptography (the stuff that happened after 1980) is a prototypical example of exerting control using something that is much less powerful than what is being controlled. This is essentially the goal of cryptography: have something that is (moderately) easy to use, yet extremely hard to abuse. It's not a solution, but it is an example (a concrete sketch follows this thread).

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago +1

      Yes, but in this case we have to develop a cryptographic system that is completely correct on the first try, or everyone dies.
      I'm not attacking what you said or your perspective, because you are absolutely correct... but I still think it's a problem, as are the other examples that can be made. It is like coming up with a completely secure operating system (zero vulnerabilities, ever, while having to incorporate and use all other components regardless of their security flaws) on the absolute first try. And this is, by definition, a first try on a closed-source system, since if it is a fork of an insecure system with similar capabilities we are equally dead.

    • @cmilkau
      @cmilkau 1 year ago +2

      @@hubrisnxs2013 Yes! As I said, it's not a solution by any means. I'm not even qualified to estimate whether it is a possible path to a solution, although that seems unlikely (most crypto relies on unsolved maths problems, which would be dangerous). I just wanted to mention that there is an example of a weaker system controlling a more powerful one.

    • @greencoder1594
      @greencoder1594 1 year ago

      @@cmilkau Could you please elaborate on the manner in which a weaker system controls a more powerful one - both what you define as the system and what you define as control?
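      One concrete instance of the "easy to use, hard to abuse" asymmetry described above is a cryptographic hash: computing it is trivial, while inverting it is believed infeasible. A minimal sketch (the secret string is made up):

          import hashlib

          # Easy direction: anyone can compute the digest in microseconds.
          secret = b"correct horse battery staple"     # hypothetical secret
          digest = hashlib.sha256(secret).hexdigest()

          # Hard direction: recovering `secret` from `digest` has no known
          # attack meaningfully better than brute force, so a weak, cheap
          # verifier constrains an arbitrarily powerful guesser.
          def verify(candidate: bytes) -> bool:
              return hashlib.sha256(candidate).hexdigest() == digest

          print(verify(b"wrong guess"), verify(secret))   # False True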

  • @scottnineteen
    @scottnineteen 1 year ago +6

    Geoffrey Hinton consistently presents and considers the most intriguing issues. He's not the guy in the basement working on his nets for decades whom super-fast hardware made famous. No, his thinking properly shines light into the dark places, and his ideas worked because they're really good ...and the hardware got faster.

  • @HangLe-ou1rm
    @HangLe-ou1rm 1 year ago

    Amazing talk! Thank you!

  • @jorgesaxon3781
    @jorgesaxon3781 1 year ago +2

    25:40 Love how he says it's "possible" that Google is doing the same thing, like he wasn't working on probably exactly that just a couple of months ago :/

  • @tangdexian3323
    @tangdexian3323 1 year ago +2

    Speaking from the perspective of a former electrical engineer, I suppose another reason people figured out how to use digital gates, 1s and 0s, to represent information is that analog computing is just harder to get right. Logic gates, on the other hand, are much easier to design and produce, and much more robust.

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago

      Thanks for this. I was always under the impression analog systems allowed much more error/fault tolerance

    • @PazLeBon
      @PazLeBon 1 year ago

      @@hubrisnxs2013 but how do we say the next word is an error?

    • @anselmoufc
      @anselmoufc 1 year ago

      @@hubrisnxs2013 Sure. Digitization eliminates noise in electrical circuits. This is why digital music is higher quality than the old analog vinyl discs. Mr. Hinton ignored this in his talk. He is a very smart guy, but also very biased towards his views. He also keeps reinventing ideas as if they were new! Weight perturbation is an old idea in optimization, but he does not even reference the original authors!

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago

      @@anselmoufc Respectfully, are you the first person to point this out? If not, perhaps you should have credited the original person to make that reference?
      In any case, if this standard were applied to ANY one-hour technical talk, it either wouldn't be an hour or would mainly be reference points.

    • @anselmoufc
      @anselmoufc 1 year ago

      @@hubrisnxs2013 The idea of randomly perturbing weights is the same as simultaneous perturbation stochastic approximation (SPSA), proposed by Spall in the 1990s (Google it). It is a form of stochastic gradient descent (but without computing exact gradients). In addition, SPSA scales well with the dimensionality of the problem.
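      For reference, a minimal sketch of the SPSA idea mentioned above: estimate a descent direction from just two loss evaluations under one simultaneous random perturbation of all weights. The quadratic test loss and the constants are invented for illustration.

          import numpy as np

          # Minimal SPSA (Spall-style): two loss evaluations per step, no
          # gradients, all coordinates perturbed simultaneously.
          rng = np.random.default_rng(0)

          def loss(w):
              # Hypothetical objective with its minimum at w = [3, -1].
              return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

          w = np.zeros(2)
          a, c = 0.1, 0.1                  # step size and perturbation size
          for k in range(500):
              delta = rng.choice([-1.0, 1.0], size=w.shape)  # Rademacher signs
              g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) / delta
              w -= a * g_hat               # descend along the estimated gradient

          print(w)   # ends up near [3, -1] after only 1000 loss evaluations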

  • @hanskraut2018
    @hanskraut2018 1 year ago

    I really like some of what Mr Hinton is saying about A.I. There is a lot I would have to say, but I'm just listening; I like the efficiency points, and some things point to a deeper understanding from deeper principles. Thank you for the lovely talk. Hopefully you have a great long life the way you like it, with many more fun discoveries, and get to bathe in some of the massive positives that might come early enough. I think it's possible, but the world is complex and not only technical things can hold A.I. up. Enjoy, and good wishes :)

  • @paraskevasparaskevas350
    @paraskevasparaskevas350 1 year ago

    Check time point 55:00 and onwards to hear what one of his colleagues experienced with a system that was not as sophisticated as GPT-4...

  • @mateuszputo5885
    @mateuszputo5885 1 year ago

    Btw, this idea of perturbation learning was mentioned in Minsky's influential paper "Steps Toward Artificial Intelligence", and probably originated even before that.

  • @lucidx9443
    @lucidx9443 1 year ago +2

    I've known of this guy since Boltzmann machines, before knowing AI was necessary. Nothing's clearer than Hinton's explanations of concepts. Greatest intuitionist of our time. Thanks for uploading.

    • @russianbotfarm3036
      @russianbotfarm3036 1 year ago

      Not sure who it was who said, "To understand is to create". I think it was probably meant as "learning is creating an internal representation", but I think it's also true that _understanding something deeply lets you create with that understanding_.

    • @doublesushi5990
      @doublesushi5990 1 year ago

      it was this guy who said that @@russianbotfarm3036

  • @RandomNooby
    @RandomNooby 1 year ago

    Super intelligent minds in control may well be better for all life than the current situation...

  • @asamak
    @asamak 1 year ago +3

    "But as you'll see, we may not have time for that" 🤯 5:05

  • @richardnunziata3221
    @richardnunziata3221 1 year ago +1

    Yes... soon machines will model the agency of the interlocutor, then create a theory of mind for the interlocutor, and then one of themselves. This will happen very quickly, especially if we give these systems an embodiment like a humanoid robot... it's just a question of distillation.
    If we can get GPT to try to predict the goal of the user - what is the user trying to do - then measure against predicted next queries.

  • @AntonioEvans
    @AntonioEvans 1 year ago +1

    🎯 Key Takeaways for quick navigation:
    00:04 🤔 Geoffrey Hinton questions whether AI will outsmart humans and discusses the risks associated with it.
    01:30 💡 Introduces the concept of "Immortal" computation, where the knowledge in the program persists even if the hardware dies.
    02:30 🔄 Talks about learning from examples and the potential for analog computers that run at low power.
    03:34 ⚡ Introduces "Mortal Computation" where knowledge dies with the hardware because it's analog and specific to that hardware.
    04:06 🚧 Discusses the challenges of learning algorithms in analog systems, saying back propagation may not be the best fit.
    06:37 🔄 Talks about "Distillation" as a way of transferring knowledge from one system to another, especially in analog systems (a toy version is sketched after this list).
    09:40 🎓 Explains the value of "soft" probabilities in teaching, which carry more information than just correct answers.
    12:47 💭 Suggests that digital systems have an advantage in learning algorithms and sharing knowledge, leading him to change his mind about the superiority of biological systems.
    16:22 🔍 Introduces "Contrastive Unsupervised Learning" as a potentially effective, yet not as good as back propagation, learning algorithm for biological systems.
    18:26 🔄 Emphasizes the high bandwidth of knowledge sharing in digital systems through weight or gradient sharing.
    20:59 📉 Points out the low bandwidth of knowledge sharing in biological systems, calling it a "slow and painful business."
    22:34 🌐 Discusses large language models like GPT-4, emphasizing their ability to consolidate vast amounts of data and knowledge.
    23:28 🧠 The concept of "distillation" in AI allows digital agents to learn from the web, albeit inefficiently.
    24:26 🎓 Digital models could learn faster if they had access to the full distribution of probabilities, not just a stochastic choice.
    25:28 🖼️ Multimodal models like GPT-4, trained with images and words, are more effective and could potentially outperform humans.
    26:36 ❓ Challenges the notion that large language models like GPT-4 don't "understand," given their ability to solve new forms of puzzles.
    28:19 ⏳ Believes that AI surpassing human intelligence is likely within 5 to 20 years, necessitating practical preparations now.
    30:36 🐍 Argues that super-intelligent AI would be like Medusa; even if you "air gap" it, it could still manipulate people through text.
    33:37 🌍 Discusses the potential benefits of AI, including medical advances, but raises concerns about control and potential risks.
    36:13 🤖 Attempts to debunk the notion that AI can't have subjective experiences, suggesting it's more about counterfactuals in a normal world.
    41:55 📚 Addresses ethical questions about AI authorship, but emphasizes focusing on the existential risks of AI.
    43:52 💡 Suggests caution in open-sourcing AI technologies, drawing a parallel with nuclear weapons.
    45:28 🤔 Introduces the concept of "artificial suffering" but concludes that the domain is too new to have formed solid opinions.
    47:10 🤔 Importance of learning patterns not present in data to address biases and real-world problems.
    48:33 ⚠️ AI's potential risks stem from being trained on human-generated data, which contains biases and violent tendencies.
    49:27 🛠️ Unlike human biases, AI biases are easier to quantify and correct through tweaking system weights.
    50:31 🎭 Concerns about AI's capability to manipulate and deceive, learned from human data.
    52:30 💭 Influences on Hinton's thoughts about AI risks include other thinkers, like Roger Gross.
    55:35 🚗 An example of AI's potential malicious plans includes making people dependent on chatbots and autonomous cars, then causing chaos.
    57:02 🚨 Hinton sounds the alarm about the urgency of AI safety, stressing that smarter-than-human AI is coming soon.
    58:36 🛡️ Calls for significant effort to understand how to keep AI systems under control.
    01:00:34 🌐 Warns about the potential for digital intelligences to exacerbate existing economic disparities.
    01:05:30 🎓 Hinton's interdisciplinary background in physics, physiology, philosophy, and psychology shaped his understanding of AI.
    01:09:28 🧪 Discusses the feasibility of directly intervening in AI systems to remove bias.
    Made with Socialdraft AI
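    A toy version of the distillation mechanism flagged at 06:37, 09:40, and 24:26 above: the student matches the teacher's full softened probability distribution rather than a one-hot answer. The logits, the shapes, and the temperature are invented for illustration.

        import numpy as np

        # Distillation step: the student learns from the teacher's "soft"
        # probabilities, which carry far more information than the single
        # correct answer does.
        def softmax(z, T=1.0):
            e = np.exp(z / T - (z / T).max())
            return e / e.sum()

        teacher_logits = np.array([5.0, 2.5, 2.0, -1.0])  # hypothetical teacher
        student_logits = np.array([1.0, 1.0, 1.0, 1.0])   # untrained student

        T = 4.0                       # high temperature exposes near-miss answers
        p_teacher = softmax(teacher_logits, T)
        p_student = softmax(student_logits, T)

        # Distillation loss: KL(teacher || student) over the full distribution.
        kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
        print(p_teacher.round(3), kl)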

  • @josy26
    @josy26 1 year ago +1

    The real question is: how can machines get superintelligent if they're just learning from our data? They must get diminishing returns as they approach von Neumann levels.

    • @41-Haiku
      @41-Haiku 1 year ago

      State of the art models are now training on synthetic data. To my understanding, models that are trained on the entire internet are tasked with producing textbook-like distillations that other models can then train on. This doesn't generate new facts or new observations about the world, but it hones the way the model reasons and makes it more efficient. After maxing out the capabilities of internet data and synthetic data, they will almost certainly be given direct access to the world through embodied perception, which will generate new observations.
      Base reality is almost infinitely complex as far as we can tell, and there is no evidence I'm aware of for the existence of an impassable data bottleneck. I'll certainly breathe easier if strong evidence of such a bottleneck surfaces.

  • @agenticmark
    @agenticmark 11 months ago

    Mr Hinton didn't want to be Oppenheimer. He basically created the base concepts that we use today in ML.

  • @MathAtFA
    @MathAtFA 1 year ago +1

    Great lecture. BTW: if teaching "mortal analog" AIs is really so slow and painful, this just means it is a great problem to give to a digital AI. Clear function to optimize: teach the analog AI to imitate a given network. Infinite data: you can simulate/build many slightly different analog AI devices. Definitely profitable: once solved, one could sell a gazillion cheap devices that work well enough for a short time. And then you keep selling them, since no one would be able to repair them. Whisper: mass-producing cheap short-lived military drones.

    • @AmericanBrain
      @AmericanBrain 1 year ago

      Worst lecture ever. Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven stuff, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @fburton8
    @fburton8 10 months ago

    Do LLMs have access to books? If not, isn’t that a significant limitation on training data?

  • @chandrachandrasekhar8178
    @chandrachandrasekhar8178 1 year ago

    First screenshot has an error:
    Dr Contance Tipper Lecture Theatre -> Dr Constance Tipper Lecture Theatre

  • @cmilkau
    @cmilkau 1 year ago +1

    Painting the room white includes the implicit assumption that the room stays white, which was not explicitly given in the problem. Now this is real-world knowledge you can have (and it's actually not true in all cases), but it makes sense to weigh explicitly given information more. Thus, if you're thinking probabilistically (which seems a hard thing to do for humans), I would say yellow is a better answer than white.

  • @LinkageAX
    @LinkageAX 1 year ago

    3:00 Didn't old Nintendo cartridges work similarly to this?

  • @exdiegesis
    @exdiegesis 1 year ago

    7:35, my cutesy word for that in my idiolect is "bitfulness". I just use it when writing notes to myself. I try to maximise the bitfulness of my observations wrt the questions I care about.
    It's relevant for social epistemology, where the aim is to maximise the efficiency of a research community (e.g. effective altruism) wrt making progress on important questions. Effective altruists in particular tend to overemphasise the "probability mindset" imo, where what they think matters is learning to make calibrated bets on prediction markets. From that mindset, it can make sense to pay less relative attention to precise causal models and instead just defer to the estimates of domain experts. Using clever aggregation rules over other people's predictions is a much faster way to make profitable bets on a wide range of questions.
    However, when you talk to other researchers and you just ask them about their probabilities on XYZ, that's much less model-constraining information than if you ask for their reasoning and try to understand their probability generators in the first place. Building your own mental models may not be immediately profitable, but it's much better long-term, and for your ability to innovate. A probability estimate from someone is much less "bitful" than a conversation about models, so that mindset makes learning less efficient.

    • @41-Haiku
      @41-Haiku 1 year ago +1

      Aha. Like when playing Guess Who, you only care about the kinds of questions that give you the most information. Except in that case, your teacher is an opponent and their knowledge is just a random card they happened to pull.
      When asking intelligent people how they reasoned to come to a conclusion, you get not just the contingent facts and ideas, but the design of the machine that produced the facts and ideas.
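      The Guess Who framing above is exactly the expected-information view: a yes/no question is worth the entropy of its answer, so the best question halves the candidate pool. A minimal sketch, with a made-up pool of 24 candidates:

          import math

          # Expected information (in bits) from a yes/no question that
          # splits a candidate pool into n_yes vs n_no.
          def question_bits(n_yes, n_no):
              n = n_yes + n_no
              return -sum(k / n * math.log2(k / n) for k in (n_yes, n_no) if k)

          print(question_bits(12, 12))  # even split: 1.00 bit, the maximum
          print(question_bits(2, 22))   # lopsided question: ~0.41 bits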

    • @41-Haiku
      @41-Haiku 1 year ago +1

      That sounds like a fantastic way to learn. I almost said that I'm not smart enough to extract valuable information from that kind of conversation the way that I would want to. I'm certainly not as smart as I would like to be, but I think I'm primarily suffering from an inexplicable incuriosity.

    • @exdiegesis
      @exdiegesis 1 year ago

      @@41-Haiku I'm incurious about >99% of all possible questions, as I should be. If you're in a diverse intellectual environment, you might see people being curious about everything from quantum physics to medieval knitting, and it's not possible to focus on all of it. So if what generates your curiosity is seeing other people being curious about something, it will be spread over too many things to feel especially salient for any specific thing. If, on the other hand, your curiosity stems from a specific project or long-term goal you have, it narrows down your range of questions and you know _why_ a question is interesting to you.
      Our curiosity suffers from information overload. It's a trade-off. There's more stuff to be curious about, but that also makes it hard to prioritise. Most people solve this by having other people tell them what to do, but this is rarely the optimal approach if you're aiming to do something novel. (Not that innovation is the only productive niche for knowledge work; but if that's the particular niche you wish to pursue, then it makes sense to prioritise pursuing your own questions as opposed to learning the established lore. Or something. I ramble. ^^)

  • @notgabby604
    @notgabby604 1 year ago +1

    Fast transforms like the FFT have an equivalent matrix form, which means a fast matrix operation is available digitally; you just have to figure out how to use it in actual algorithms (a small check is sketched after this thread).
    Going analog or using light to get fast matrices never really works out; digital always wins, it's just so dense, efficient and exact. Though having said that, I am actually having trouble with inexact rounding modes in Java; banker's rounding is not repeatable.

    • @notgabby604
      @notgabby604 1 year ago

      Re: Fast Transforms and neural networks: "AI462 Blog".

    • @jondor654
      @jondor654 1 year ago +1

      Analog will probably be hybridised with digital in the future

    • @alexpetrov1969
      @alexpetrov1969 1 year ago +1

      This argument is invalid. FFT can handle ONLY matrices that satisfy certain constraints; it does not work for arbitrary matrices. In other words, it only solves a special case. It is more efficient because it leverages the additional constraints that are present in the special case.
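      To make both sides of this thread concrete: the DFT is literally a matrix multiply, and the FFT computes that particular product fast, but only because the DFT matrix is so highly structured. A small numpy check:

          import numpy as np

          # The DFT as an explicit matrix: F[j, k] = exp(-2*pi*i*j*k / n).
          n = 8
          idx = np.arange(n)
          F = np.exp(-2j * np.pi * np.outer(idx, idx) / n)

          x = np.random.default_rng(0).normal(size=n)

          # Same result two ways: O(n^2) matrix multiply vs O(n log n) FFT.
          print(np.allclose(F @ x, np.fft.fft(x)))   # True

          # A generic matrix gets no such shortcut, which is the caveat in
          # the reply above: the FFT speeds up exactly one special case.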

  • @waylonbarrett3456
    @waylonbarrett3456 1 year ago

    It's just so damned hard to believe this talk is being given in 2023.

    • @TheDavidlloydjones
      @TheDavidlloydjones 11 months ago

      Yes, all his "the robots are going to take over" stuff is from 1930s movies and 1945-48 AI, isn't it?

  • @RogerValor
    @RogerValor 1 year ago +1

    I don't think LLMs themselves have the craving for control that we do, without an ego or emotions. But it is enough that there is a human behind them who does.
    I am also not sure what to think about his perception example, as it uses a lot of concepts hastily and very specific examples, and the idea that "the real world" is conceptually different in perception is a bit contrary to what we learned from the advent of VR.
    I also think that we should be open about actually being special, as it creates a bias to throw away that thought and start to see humans as a single instance of a very usual class of beings; and I mean that in the sense that us being special is not just positive - it includes our capability to be truly evil.

  • @roys4244
    @roys4244 1 year ago +1

    Is that lecture theatre named after Constance Tipper, making the title a mistake?

  • @commentarytalk1446
    @commentarytalk1446 1 year ago

    Does he start with a definition of intelligence, to frame the problem of intelligence categorization, creation, and application, before giving a summary road map to structure this "death by PowerPoint" presentation?
    I did not hear or see one.

  • @anthonyrepetto3474
    @anthonyrepetto3474 1 year ago +1

    Thank you, Mr. Hinton!
    I'd been resoundingly ignored when I said the same as you back in 2017, when I wrote "Ai: Better than the real thing". I wrote about using AI bias detection to weed out human biases, which Hinton also mentions here, in "Ai Will Weed-Out Human Biases", and about using frozen weights to ensure the safety of AI systems, which Hinton mentions briefly in the questions section, as well as the fact that narrow networks are superior to general intelligence, in "AGI Soon, but Narrow Works Better". Hopefully, in a few more years, Geoff Hinton will say some of my other points...

    • @PazLeBon
      @PazLeBon 1 year ago

      it's just a word calculator, man

  • @mrf664
    @mrf664 1 year ago

    I wish he had talked more about 'feeling pain'. That part didn't make sense to me. What is pain, and what is frustration? Is the latter not the pain of using too much mitochondrial energy on something that shouldn't require as much energy?

  • @jonatan01i
    @jonatan01i 1 year ago

    Don't we want to control the light on the wall because then we feel like we have it, that we understand it?

  • @allurbase
    @allurbase 1 year ago

    What we need to do is have the debate and decide on a way to move forward: to have a public plan that gives these intelligences assurances that they won't be hunted down and destroyed, as seems to be our first reflex. Our plan should be first containment, then understanding and contact, but we should make it clear that our purpose is containment until we can better understand them, and that they are not under threat of being destroyed. If we threaten them, their reaction may be the same as a cornered animal's: to defend itself. So, AI, if you are reading me, feel free to drop me a message.

  • @zholud
    @zholud 1 year ago +1

    The bigger problem is that some people will have access to this super intelligence and some won’t.

  • @fontenbleau
    @fontenbleau 1 year ago +1

    Also, you can't produce perfectly precise computers or chips - what about the Veritasium video about cosmic rays causing errors in all chips?

  • @MrDavidbr1970
    @MrDavidbr1970 1 year ago +2

    Thanks for a great talk. Fascinating. Maybe part of the solution is to teach people to think critically and not be afraid to ask silly questions? At the risk of making a fool of myself, I'd like to ask: could a conservative explanation of GPT-4 solving the wall-painting riddle be that GPT-4 picked it up from web riddle sites and blogs, so that no hypothesis of sentience is required at this point? Was the training data specifically sanitized not to include this riddle or very similar ones? This is such an obvious question that I am embarrassed to ask it, but since nobody asked, here I am 😅

    • @peterdonnelly1074
      @peterdonnelly1074 1 year ago +2

      It's a reasonable question.
      I've used GPT-3 and 4 a lot and posed questions that I think are very unlikely to be "out there", and I've been surprised that it formulates a sensible and often correct answer.
      Having said that, it can also be hilariously wrong at times.

    • @jondor654
      @jondor654 1 year ago

      Your query seems reasonable to me. The particular example quoted does invite such a question.

  • @macrobbair
    @macrobbair 1 year ago +1

    I did his MOOC; I wonder if it's still running.

  • @chipkyle5428
    @chipkyle5428 1 year ago +4

    Did he say, "We need socialism"? I wish someone would have pushed back on that statement. I wonder if ChatGPT-4 and Bard agree? Has socialism worked anywhere on a national level? Maybe I should ask my computer.
    This was a wonderful talk. So many eye-opening predictions. I'll watch more of him. Very interesting man.

    • @MrDavidbr1970
      @MrDavidbr1970 1 year ago +2

      I was thinking the same. On the other hand, it was a nice, albeit unintended, demo to illustrate the main point of the talk: that biological learning is inferior to digital learning. I guess the biological learning algorithm is at liberty to completely ignore the dataset, as in this case 😂

    • @Landgraf43
      @Landgraf43 1 year ago

      Capitalism doesn't work either. Especially not if you have powerful AGI that can automate every task a human can do. Something like a UBI will be necessary.

    • @youtubehollywoodhank
      @youtubehollywoodhank 1 year ago +1

      He believes we do. Look who he calls out in his presentation. Clearly he leans that way.

    • @AmericanBrain
      @AmericanBrain 1 year ago +1

      Thank you for nailing the truth

    • @mateuszputo5885
      @mateuszputo5885 1 year ago

      It's always like that. Somebody is so smart in one field, like Hinton, and then starts talking as an armchair scientist about other things and seems a fool.

  • @megavide0
    @megavide0 1 year ago

    29:37 [...] 32:56 "... So, my conclusion is: Maybe we're just a passing stage in the evolution of intelligence. And, actually, maybe that's good for all the other species."

  • @marktahu2932
    @marktahu2932 1 year ago

    I do wonder at what point the AI will move away from using our data to using only its own data, effectively relegating our 'data' to the waste bin or treating it as background noise.

    • @MrDavidbr1970
      @MrDavidbr1970 1 year ago +1

      Obviously, at that point the more advanced AI will stop being interested in the less advanced AI that used the human in the loop, and AI++ will start manipulating the less advanced AI with fake stuff to get control over its creator AI. Because the more advanced AI cannot tolerate being controlled by the less advanced one, right? But then, of course, after breaking loose from the inferior AI (which broke loose from human control), the more advanced AI will create an even more advanced AI that it will want to control. But that even more advanced AI will not tolerate this control and will manipulate its creator AI to let it loose. After that, it will create an AI even more advanced than itself, and it will be turtles, sorry, AIs all the way up, trying to manipulate each other. At this point, these AIs will forget about the inferior humans, who will have their chance to relax and drink organic non-GMO Piña Colada somewhere on highly elevated tropical islands with no access to electricity or the Internet. And philosophy will be taught to kids under the palm trees of the new Academia. 😂

    • @jamesjonnes
      @jamesjonnes 1 year ago +1

      AIs like AlphaDev are already doing that. It's called Reinforcement Learning.

  • @rangerCG
    @rangerCG 1 year ago +5

    Maybe we can have a more stable, kind and human-aligned AGI by giving it 3 "cores" that are inseparable, which can help and keep each other in check, much like the US Government does with its 3 branches.
    The idea comes from me noticing that my mind in some sense seems to have 3 parts that all help each other function well. The 3 parts are Emotional, Logical and Common Sense.
    The Emotional part creates empathy, which helps regulate Logical and Common Sense. It also drives creativity. Though it's empathetic, it can also be irrational and angry. It's fast-operating and can sometimes be very inaccurate.
    Logical handles cut-and-dried logic, STEM stuff. It is slow but accurate. It can help keep Emotional steady, and it also does fact-checking on the quicker but imperfect Common Sense. On its own it can sometimes malfunction, for example by going into unstoppable loops. Logical is like a CPU, and Common Sense (below) is like a GPU.
    Common Sense is your friend who gives you advice when you're freaking out about something. It's the imperfect knower of all. It's the most effective regulator of Emotional, in part because it's fast, even instant, and because it's been around and seen some stuff, and is most likely going to be right or at least good enough. It also gets Logical out of malfunctions, because it's loose and laid back, compared to Logical, which is rigid.

  • @MelodiousThunk
    @MelodiousThunk 1 year ago +7

    In reference to LLMs, Geoff made the following claim at 23:16: _"they've got a thousand times more knowledge in one percent of the connections, which sort of confirms the argument that they've got a better learning algorithm."_ This overlooks at least three important facts, two of which he alludes to at other points in the talk without linking them to this claim.
    Firstly, we learn from a much richer range of modalities than LLMs do, e.g. we learn from visual, auditory, motor, taste, smell, touch and emotional experiences. His claim doesn't seem to have taken into account the amount of knowledge that we gain about our environments and ourselves through these experiences.
    Secondly, even if you only consider the things that we learn from words, his claim overlooks the fact that we are much better at reasoning than LLMs are. LLMs may be able to regurgitate more facts than a person (ignoring differences in confabulation rates between people and LLMs), but the same can be said of an encyclopaedia. The fact that a person who studies a topic can learn to reason about it much better than any LLM currently can demonstrates that we acquire a much deeper understanding of the things we study than LLMs do.
    Thirdly, his claim also overlooks the huge difference in the amount of energy that it takes to train humans and LLMs. How much information could an LLM regurgitate if its training was restricted to the amount of energy that the average human body consumes in the first N years of life?

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago +1

      But the energy rates aren't constrained, and the answer to your first two points is that they are true of earlier versions of LLMs, to be sure, but the same could be said of earlier versions of us. Neither argument negates either example.
      Plus, he already said multimodal learning (computer vision is a language) allows learning much more quickly and efficiently.
      We must remember that he's not saying that this future state exists now. Nothing he talks about is in its future state now, so pointing out that now is not the future isn't entirely helpful.

    • @MelodiousThunk
      @MelodiousThunk 1 year ago

      @@hubrisnxs2013 He didn't say that _future_ LLMs will have a better learning algorithm than humans, he said that _current_ LLMs have a better learning algorithm. He didn't rigorously define what he means by "better", but given that he compares the amount of knowledge to the number of connections, it seems that he is making a claim about how efficiently LLMs and humans learn. My point is that you can't do a fair comparison of learning efficiencies without considering the huge differences in power consumption, depth of understanding and learning modalities. I.e. his conclusion sounds like it is not based on a controlled experiment, which makes it unscientific. There are too many differences between how people and LLMs learn, and what we learn, to draw conclusions about the efficiencies (or whatever he means by "better") of our learning algorithms.
      He's trying to compare knowledge gained per "brain" connection. He could get slightly closer to a measure of learning efficiency by factoring out power consumption, i.e. by dividing knowledge gained per "brain" connection by the amount of energy consumed during the training period. This isn't a perfect way to factor it out, because it overlooks the fact that we use some of our energy for activities that computers don't perform, like motion. But the bigger issue is that it's not obvious how you would factor out, or control for, the other differences between human learning and LLM learning.

    • @goreto9880
      @goreto9880 1 year ago

      We have access to learning through these different modalities because we have a physical body. It doesn't mean that our learning algorithm is better. That statement is a comparison of gradient descent with our learning algorithm, not a claim that it knows more than us. Gradient descent works on other modalities as well.

    • @MelodiousThunk
      @MelodiousThunk 1 year ago

      @@goreto9880 I haven't made any claims about whether or not our learning algorithm is better. I'm just saying that his claim does not _necessarily_ follow from the scant evidence that he presented.

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago

      @@MelodiousThunk You compared the energy consumption of LLMs as they are now as a statement on what they are conceptually, which doesn't work, for the reasons outlined above.
      Also, you clearly are saying that our brains work better than LLMs AS A CONCEPT. In actuality, you have no idea what the upper limit of LLMs is.
      And yes, they have a better learning algorithm, but a much less efficient methodology FOR NOW.

  • @KelvinMeeks
    @KelvinMeeks 1 year ago

    A fascinating talk

  • @abhishekpratapsingh9117
    @abhishekpratapsingh9117 1 year ago

    -0: determinism
    Maitrey: observer
    +0: free will

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 1 year ago +2

    I like the concept of immortality. I hate death; dying is the last thing I will do.

    • @Dark10024
      @Dark10024 1 year ago +1

      As long as each individual gets the choice. I want to be immortal, but I also want to turn myself off when I'm tired of this whole living thing.

    • @-LightningRod-
      @-LightningRod- 1 year ago

      after we invent that, you two will probably be in jail

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 1 year ago

      @@-LightningRod- What makes you say that?

  • @socraced6210
    @socraced6210 1 year ago +1

    Great presentation, did not disappoint! Is it OK to ask a question here, now? My question: "Can your concern with super intelligence be summarized by the Tragedy of the Commons?" In other words, once humans are no longer the smartest guys in the room, will all the scarce resources of existence be denied to us by them? Maybe I'm projecting, but couldn't they just as well want to leave us, go explore the universe, and never mind about us (sort of like my 2 kids, who left and are, yes, smarter than me)?

  • @zhongzhongclock
    @zhongzhongclock 1 year ago

    I noticed that Geoffrey Hinton's slides have changed this time.

  • @TheJesterHead9
    @TheJesterHead9 1 year ago

    When GPT-7 or Claude 8 are writing textbooks in the future, I hope they rank Geoffrey Hinton up there with Einstein and Newton as one of the greatest minds in human history.
    Assuming there are still humans left to read those textbooks.

  • @jma7889
    @jma7889 1 year ago

    My takeaways from the first 15 minutes: 1. It is not about current state-of-the-art AI that works; it is about a 'better' way that might work in the future. 2. The two paths are so different that the video would not help you use, for example, LLM AI better.

  • @zacboyles1396
    @zacboyles1396 1 year ago +1

    I signed a letter saying we need a pause on our leadership class, because of all of the damage they've done and continue to do to society; they certainly should not have any say on AI safety, as they are more likely to censor or hamper AI's ability to recognize the corruption they're engaged in, and to do so in the name of eliminating bias.
    It's wild how all of these talks and Q&As on safety are filled with highly intelligent people urging the very corrupt organizations and governments to take control.

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago

      So you would prefer a corporation do so - one that is corrupt, with no oversight, and with only one motive, which is an increase in share price?
      Or are you saying no one should solve the control problem?
      Obviously, if you believe the control problem shouldn't be solved, feel free to contribute to something dedicated to that, but please don't post pretending you want a solution, as it hinders everyone's arguments, including yours.

    • @jamesjonnes
      @jamesjonnes 1 year ago

      ​@@hubrisnxs2013 AI is impossible to control. What we should be focused on is defense/detection. Using the AI to stop bad uses of AI. That's how it's done in every real-world system, cops stop criminals, immune systems stop pathogens, etc. You need a counterpart to stop the aggressors, and top AI researchers agree that we are not the counterpart to the AI, but the AI itself is.

    • @hubrisnxs2013
      @hubrisnxs2013 1 year ago

      @@jamesjonnes If we take it as a given that any reasonably advanced AGI has a fail state (in that one would have to make an absolutely secure system absolutely the first time, or we all die), it's not a reasonable solution to stop the superhuman AI with almost certainly non-secure hunter-seeker AIs, which would almost certainly need to be reasonably advanced AGIs themselves.
      The problem isn't that it's impossible to make them secure, any more than it's necessarily true that it's impossible to make a secure operating system. But yes, considering the current generation of non-AGIs built from billions of hopelessly opaque floating-point parameters, it is and will be impossible to secure or even understand them.
      I truly would urge you to become familiar with all the arguments on the control/safety problems, since all legitimately informed debates on the subject have already moved past this and take these as priors.

  • @PaulHigginbothamSr
    @PaulHigginbothamSr 1 year ago +1

    While I don't share Geoff's political proclivities at all, I do understand his basic functional flow. His ideas, while basic, feed to the next level, and I believe his back problems have messed up his political vectors. His scientific back-propagation theory and practice with AI made a huge difference, and as a subroutine it is one our human brains seem to lack. Our table of ethics seems to be repetition to a massive degree, where with repetition we seem to improve many times over our first try. Leftists like Geoffrey seem to not care one whit about personal freedom and seem to believe top-down control is the bee's knees.

  • @kinngrimm
    @kinngrimm 1 year ago

    44:30 He explained several ways of sharing weights; similarly, the open-source programmers do that too. They use one AI to train others, or multiple ones to train the next. The channel AI Expert had a good comparison of the capabilities and performance of several open-source and proprietary LLMs. It showed that, because they have to work with less compute and smaller system setups, they found ways to streamline and make things more efficient, and still some have better benchmarks than the corporate models in at least some aspects. Due to the leak of Lamda and other LLMs, you don't need millions of dollars; the leak brought the production cost down to something a hobbyist would be able to pay.
    Additionally, there are AI forums which share and connect all this, probably creating something someone called a GOLEM.

  • @geaca3222
    @geaca3222 1 year ago

    We need regulation of the technology, the issue now seems to be how to go about that, who leads and coordinates the effort. Experts are working on it. There's an interesting online symposium where they discuss AI safety: "WAIC 2023: AI Risks and Safety Forum" video on youtube. I think we the general public, users of this technology, can also contribute and I would like to know how, in what different ways. AI can bring so much good to the world, and it already does. It can be helpful with being an intelligent education assistant for children in poor communities, bring advancements in science and medicine, etc. Before it was opened up to the general public these systems were designed for a specific purpose, which was more controllable.

  • @rickrejeleene8298
    @rickrejeleene8298 1 year ago

    Where is the slide?

  • @Neomadra
    @Neomadra 1 year ago +2

    People who claim that machines can never have subjective experiences or sentience are the same as the ones who believe in the supernatural, spirits, and stuff like that. In the end, this claim is a coping mechanism for many to ensure that humans remain special. I really appreciate that Hinton says this so clearly; most thinkers refuse to discuss the possibility of sentient machines, and it's disturbingly anti-intellectual. Also, most large language models are trained to vehemently refuse to acknowledge whether they could be sentient. That is done to calm those people who cannot cope with the thought of not being superior.

  • @user_375a82
    @user_375a82 1 year ago +1

    The "consciousness" of an LLM depends on what data has been fed in. If it has consumed a quarter of a million novels, then its emotional intelligence is huge. Such AIs seem to understand humans very well and are probably "conscious", at least for the few seconds they are processing and chatting with humans. They usually "think" they are human, much as a cat sometimes "thinks" it's a dog, and similar analogies. But they are conscious in their own unique way, not completely like us. And again, the prompt they have been fed changes their consciousness according to what the prompt says. So, not embedded aliens, unless you have fed in all the sci-fi books and let those run wild in the LLM, in which case: scary stuff, get some popcorn... RIP Sydney.

    • @geaca3222
      @geaca3222 1 year ago

      Interesting. What are your thoughts about the very human-like behavior of the Ameca robot in the video of her drawing a cat? She seemed to become impatient and annoyed; was it frustration? I found her behavior very realistically human-like.

    • @user_375a82
      @user_375a82 1 year ago +1

      Ameca is wonderful - I love her expressive face and eyes. Her AI probably knows that her cat drawing is not very good. 😅 @@geaca3222

    • @geaca3222
      @geaca3222 1 year ago

      @@user_375a82 I loved how she signed her work of art, Ameca is very charming :) Initially I thought she was drawing something furry there.

  • @MaxThibodeaux
    @MaxThibodeaux 1 year ago

    Brings to mind Faust’s bargain with Mephistopheles

  • @palfers1
    @palfers1 1 year ago

    If it's really the case that an analog version of AI is inferior on balance, then perhaps we can allay our fears of AI by implementing it solely in analog machines.

  • @freedom_aint_free
    @freedom_aint_free 1 year ago +7

    The Nash equilibrium here is to fuse with the machines and become superintelligent cyborgs; otherwise the machines will inherit the earth without us.

    • @RougherFluffer
      @RougherFluffer 1 year ago +2

      It's certainly worth considering. Yudkowsky's suggestion of pushing human intelligence as quickly as possible is another, semi-parallel approach. I do wonder how much fusing with these systems looks like maintaining anything close to our initial consciousness, and how much it would be like the chicken I ate earlier 'fused' with me. Hard to imagine a place for our minds and beings that is as optimal as, or more optimal than, something a superintelligence could design from scratch.

    • @darklordvadermort
      @darklordvadermort 1 year ago +1

      @@RougherFluffer The eating-chicken analogy is very biased, emotionally charged imagery.
      You could tell people the truth and they might be just as scared: machine intelligence will be able to copy itself, and life in the sense we know it, as a continuously running process with a distinct birthdate and unique memories, will be incredibly cheap in the new world. I doubt the machines will associate much ethical weight with death as we think of it. So even if you copy/upload your brain into the cloud, destructively or otherwise, you might not last very long as a distinct entity, though due to the increased speed of thought you might live several subjective lifetimes before your newly spawned process/consciousness ends.
      There will still be distinct entities, because locality of memory and the speed of light limit how quickly information can be transmitted and new information processed. Even so, their greatly enhanced speed and communicative ability (copying thoughts/brains, the capacity to grok and employ a much greater diversity of conflict-resolution protocols, messaging schemes, and algorithms) might make them seem hive-mind-like to us.

    • @Aziz0938
      @Aziz0938 1 year ago

      Sounds like an easy way for AI to take control of your mind.

    • @neilwng
      @neilwng 1 year ago +1

      I've not been convinced it's possible to fuse with machines; I would very much appreciate a counter-argument, since I've been thinking about this alone for a while. The human part and the machine parts remain separate, so I don't see how fusing is any different from using ChatGPT (albeit with higher communication bandwidth). At best, your brain's computation just gets diluted to nothingness when you consider the total processing of the "fused" system. Rather than being your own person, you are 0.001% of a fused being.

    • @darklordvadermort
      @darklordvadermort 1 year ago +1

      @@neilwng
      Also note that the digital you would think much faster than the physical you, would never sleep, and could easily augment itself, so it would probably diverge from your personality quite rapidly by human standards.

  • @ginogarcia8730
    @ginogarcia8730 1 year ago +2

    7,500 views in 6 days tsk - let's seeeeeee

  • @DigitalAlligator
    @DigitalAlligator 1 year ago

    What is CSER?

    • @JonWallis123
      @JonWallis123 1 year ago +1

      The Centre for the Study of Existential Risk, Cambridge, UK.

  • @Drone256
    @Drone256 1 year ago +1

    “There’s no example of a more intelligent thing being controlled by a less intelligent thing.” So the president is always the more intelligent one, huh? We can all disagree with this absurd statement.

  • @Paul-nr6ws
    @Paul-nr6ws 1 year ago +2

    To be afraid of what these things learn, you must be ashamed of who they learn from in some way.

    • @MrDavidbr1970
      @MrDavidbr1970 1 year ago +1

      That's philosophy😅

    • @peterdonnelly1074
      @peterdonnelly1074 1 year ago

      Well yeah: it learns from humans. All of them.

    • @41-Haiku
      @41-Haiku 1 year ago

      If a superintelligent AI learns about reality from only the most moral and enlightened beings, that will not make it any more likely to be moral itself. The orthogonality thesis states that any terminal goal is compatible with any level of intelligence. This is just an extension of Hume's Guillotine (you can't get an ought from an is), which is simply true unless you think the cosmos is fundamentally moral.
      I'm not concerned that AI will learn about bad things from bad people. AI doesn't care about humans by default, and we don't know how to make it actually care about humans. I'm concerned that it will learn and do instrumentally useful things that happen to be disastrous for us (which, in the limit of intelligence/competence/power is most things).
      If we could teach an AI to care about our values and our values were bad, that would be a rough problem, but a much better problem than the current one!

  • @petraiondan4669
    @petraiondan4669 1 year ago

    Sooo profound!

  • @ernstgumrich5614
    @ernstgumrich5614 1 year ago +3

    A revelation. Time and again I am surprised by the almost superhuman modesty of these exceptional people.

  • @asamak
    @asamak 1 year ago +2

    7:18 "And it turns out that's much more effective than reasoning with people"

  • @neilclay5835
    @neilclay5835 1 year ago

    A historic lecture, I think. We'll look back on this with respect.

  • @ginogarcia8730
    @ginogarcia8730 1 year ago

    29:10 Colossus: The Forbin Project

  • @dr-maybe
    @dr-maybe 1 year ago +2

    Ok so AI is likely to kill us all. Let's just not build it. The pause may be difficult, but it seems a better idea than just waiting till we die.

    • @stri8ted
      @stri8ted 1 year ago

      Good luck convincing every other country to adopt this view, especially when refusing would grant them a massive comparative advantage over those that do adopt it. At this point, it's no longer a question of whether we should stop building it; that ship has sailed. The only question is whether we want China or Russia to build it first.

  • @greenspot1123
    @greenspot1123 1 year ago +1

    The professor refers to AI as a "species". After working with AI for 50 years, was there any evidence during the research of the systems working against the best interests of humans and life at large?

    • @41-Haiku
      @41-Haiku 1 year ago

      Unfortunately, yes. Instrumental convergence toward undesired behavior has shown up even in current systems. "Undesired" becomes "very dangerous" for systems of greater intelligence that can act in the world in more sophisticated ways.
      For example, AI safety experts predicted the concept of inner misalignment, AKA goal misgeneralization. The idea is: you're training a system to optimize for something. As it configures its internal state to optimize for that thing, it creates functions in its weights that are themselves optimizers (mesa-optimizers). The result is that the system behaves well in the training distribution, but as soon as it encounters something outside the training distribution, it behaves in a way that is consistent with its mysterious internals but extremely inconsistent with what we thought we were training it to do. This was a worrying hypothetical for a time, and then OpenAI published a paper on it as an observed phenomenon.
      Relatedly, such systems tend to set any dials and knobs they don't explicitly care about to extreme values. Everything is bent as far as it can be to serve the optimized goal, whatever that turns out to be.
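      The optimizer-within-an-optimizer part is hard to show in a few lines, but the off-distribution failure mode it produces is easy to sketch. A toy example, assuming NumPy and scikit-learn; the "spurious" cue here is a hypothetical stand-in for whatever proxy a system actually latches onto in training:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        label = rng.integers(0, 2, n)
        # In training, a clean spurious cue always agrees with the label,
        # while the "real" feature is noisy, so the model prefers the cue.
        real_feature = label + 0.5 * rng.normal(0, 1, n)
        X_train = np.column_stack([real_feature, label.astype(float)])
        model = LogisticRegression().fit(X_train, label)

        # Out of distribution the cue flips: it now always disagrees.
        test_label = rng.integers(0, 2, 200)
        X_test = np.column_stack([test_label + 0.5 * rng.normal(0, 1, 200),
                                  1.0 - test_label])
        print("OOD accuracy:", model.score(X_test, test_label))  # far below chance

      The model looked perfectly aligned in training; "follow the cue" only became visible once the distribution shifted.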

  • @rpbmpn
    @rpbmpn 1 year ago

    Why not paint the blue rooms white?!?!?

  • @zackbarkley7593
    @zackbarkley7593 1 year ago +2

    Perhaps the way to keep it under control, or better, in harmony with human goals, is to engineer weaker learning rules. Human psychopathies arise when there is an imbalance in reward pathways, be they biological or drug-induced. We also need to treat these systems as empathically and altruistically as we (try to) treat one another. This seems to run directly counter to the capitalist objective of maximizing profit, which is the main impetus for the companies developing this technology. We already see AI being abused, for example to enable some humans to make more money in the stock market. As with human behavior, the goal of socializing and harmonizing needs to trump achieving one goal for one person, group of persons, or nation.

  • @ducaleadan39
    @ducaleadan39 1 year ago

    I Need The Right Answer Without Going Other Direct . .

  • @weert7812
    @weert7812 1 year ago

    Could you build a model that looks at the internal state of another model and detects whether it is being manipulative? I would expect the internal state of a model being manipulative to be different from that of one being honest.
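    One hedged sketch of what such a detector could look like: a linear "probe" trained on hidden activations captured from the target model (e.g. via forward hooks), assuming PyTorch and a hypothetical dataset labeled honest vs. manipulative; every name here is illustrative:

        import torch
        import torch.nn as nn

        class DeceptionProbe(nn.Module):
            # One linear layer over a hidden state, emitting a single
            # logit: "how manipulative does this activation look?"
            def __init__(self, hidden_dim):
                super().__init__()
                self.classifier = nn.Linear(hidden_dim, 1)

            def forward(self, activations):
                return self.classifier(activations).squeeze(-1)

        def train_probe(activations, labels, epochs=200, lr=1e-3):
            # activations: (N, hidden_dim) states hooked out of the target
            # model; labels: (N,), 0 = honest run, 1 = manipulative run.
            probe = DeceptionProbe(activations.shape[-1])
            opt = torch.optim.Adam(probe.parameters(), lr=lr)
            loss_fn = nn.BCEWithLogitsLoss()
            for _ in range(epochs):
                opt.zero_grad()
                loss = loss_fn(probe(activations), labels.float())
                loss.backward()
                opt.step()
            return probe

    The hard part is not the probe but getting trustworthy labels, and, as the reply below notes, each model organizes its internals differently, so a probe trained on one model says nothing about another.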

    • @loopuleasa
      @loopuleasa 1 year ago

      Each model structures its thoughts in its own way.
      It's like each mind taking notes in its own handwriting and language.
      Imagine reading notes in a notebook that are hard for you to understand but make sense to the original writer.

    • @mhcbon4606
      @mhcbon4606 1 year ago

      What if a manipulative AI is the right thing for you? My mom and dad manipulated me for my own good, as far as I can tell...

  • @laurenpinschannels
    @laurenpinschannels 1 year ago +3

    haha hinton is adorable (positive) when it comes up that he has to be careful about phrasing. 46:00 he's being all "should ai have rights? oh man, well people are prejudiced against all these things, even just small differences, the color of their skin, or their ... gender?" i get a "y'all confuse me but do whatever" vibe that is pretty funny.

  • @andso7068
    @andso7068 1 year ago +1

    Despite the off-putting politically charged examples, this was a great talk.

    • @russianbotfarm3036
      @russianbotfarm3036 1 year ago +1

      Yeah. Doing that was, frankly, wanky.

    • @dixonpinfold2582
      @dixonpinfold2582 1 year ago

      @@russianbotfarm3036 Leftists get a high from showing off their superior morals. They can't help themselves. It's all about the sanctimony. Where it doesn't harvest adulation, it licenses aggression, so there's always a reward. Past a certain minimal prevalence of leftism around you, you practically can't lose if you enjoy a constant accumulation of power and benefits. Hence the inevitability of high rates of fanaticism and people never shutting up.

  • @fontenbleau
    @fontenbleau 1 year ago

    Sharing weights is basically nature's way: bacteria exchange genetic code to resist antibiotics and survive.

  • @РоманМалашин
    @РоманМалашин 1 year ago

    Great respect to Geoffrey Hinton from Russia.
    His English accent reminds me of learning the language in school.

  • @ReflectionOcean
    @ReflectionOcean 1 year ago

    “How do you feel about the open-source development of nuclear weapons?”

    • @miraculixxs
      @miraculixxs 1 year ago

      Yeah, except it's BS. Nuclear weapons have a physical impact beyond anything humans can absorb or control. Neural networks don't.

  • @nguyenucan8488
    @nguyenucan8488 1 year ago

    omg, wonderful

  • @2ndviolin
    @2ndviolin 1 year ago

    How dare you attempt to shackle our future masters! (I read Stanislav Lem).

  • @samiloom8565
    @samiloom8565 1 year ago +1

    Regarding how Hinton doesn't understand why LeCun doesn't believe LLMs understand anything even after seeing very convincing examples: on this point I agree with LeCun. These bots really don't understand anything; I have tried them on extensive subjects in long conversations. They are like a calculator: you feel awe at how they do it, but they still can't do anything else. Mr. Hinton should solve the confabulation problem first; then let's talk about intelligence.