Geoffrey Hinton | Will digital intelligence replace biological intelligence?

  • Published 21 Nov 2024

COMMENTS • 380

  • @JoeSanchec
    @JoeSanchec 9 months ago +107

    These are Professor Hinton's best kind of speeches: his off-campus business speeches are too colloquial and his speeches to computer science students are too specialized, but his speeches to the average University of Toronto student are a perfect blend of both!

    • @RalphDratman
      @RalphDratman 8 months ago

      You suffer pain from Hinton's extreme days --- and pleasure from his moderate behavior.

  • @gusbakker
    @gusbakker 7 months ago +6

    This lecture is gold. He manages to explain complex topics in simple terms without getting overly technical.

  • @penguinista
    @penguinista 8 months ago +5

    I appreciate Professor McIlraith informing the audience that their questions would end up being posted online, as the presentation was being filmed.
    That is admirably considerate and conscientious. Some people might not want to ask a question if that means they will be on the net.
    It should be standard, but I don't remember ever hearing someone do that before - and I listen to a lot of lectures online with audience questions at the end.

  • @jj5jj5
    @jj5jj5 9 months ago +30

    42:15 it totally clicked for me in this section: LLMs do appear to have understanding because they’re not just encoding a bunch of string predictions, they’re encoding concepts (features) and their relationships… which sounds basically like human learning/understanding.
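
    A minimal sketch of that idea, assuming hand-made toy feature vectors rather than a real model's learned embeddings: related concepts end up close together, and their relationships fall out of simple vector geometry.

      import numpy as np

      # Toy "feature" vectors, invented purely for illustration; a real LLM learns
      # thousands of dimensions, but the geometry works the same way.
      # Dimensions here loosely stand for [animal, royalty, female].
      emb = {
          "cat":   np.array([1.0, 0.0, 0.0]),
          "dog":   np.array([0.9, 0.0, 0.1]),
          "king":  np.array([0.0, 1.0, 0.0]),
          "queen": np.array([0.0, 1.0, 1.0]),
      }

      def cosine(a, b):
          # Cosine similarity: close to 1.0 means "pointing the same way" in feature space.
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(cosine(emb["cat"], emb["dog"]))    # high: related concepts
      print(cosine(emb["cat"], emb["king"]))   # low: unrelated concepts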

    • @anomitas
      @anomitas 9 months ago +3

      Nope

    • @flickwtchr
      @flickwtchr 9 months ago +11

      @@anomitas Ah, such a helpful retort bro.

    • @yonatanelizarov6747
      @yonatanelizarov6747 2 months ago +2

      For now LLMs are bad at writing articles compared to humans

    • @breizhpress9755
      @breizhpress9755 5 days ago +2

      @@yonatanelizarov6747 Most humans are bad at writing articles... right? 🙂
      And I would not necessarily agree: AI can help you write great articles. AI is still a tool, not yet a superintelligence. Wait for the future.

  • @EwanNeeve
    @EwanNeeve 9 months ago +43

    Talk starts at 7:24

    • @divyanshsh
      @divyanshsh 9 months ago +1

      Thanks

    • @canobenitez
      @canobenitez 9 months ago +3

      savage..

    • @daniel7587
      @daniel7587 9 months ago +3

      Because Hinton needs no introduction

    • @tux1968
      @tux1968 8 months ago +3

      What a tedious introduction. Thank you.

  • @sputnik8543
    @sputnik8543 2 months ago +1

    I'm an Economics grad, and every time Geoffrey Hinton speaks I feel like I've gained a new ability to speak AI. He's unbelievable at distilling complexity into layman's terms.

  • @TerribleDayForRain
    @TerribleDayForRain 7 months ago +2

    Dr. Hinton sparked my aspirations for AI. I have much to learn, but I will study everything about it.

  • @daniele81
    @daniele81 9 months ago +34

    Am I the only one to notice how Hinton is delightfully funny, makes no effort to be polite, and just says what he thinks directly? Most other people in the room are insufferably politically correct. That's so depressing.

    • @breizhpress9755
      @breizhpress9755 5 days ago

      Hinton is delightfully elegant, which is the utmost form of politeness. It is also quite British. I have not noticed anyone being "insufferably politically correct". Questions from the educated audience were smart, spot on.

  • @albertleedom3373
    @albertleedom3373 9 months ago +6

    At 66, this is my first response to anything on the internet. You come closer than anyone to what I understand about this.
    However, I'm not educated traditionally ... you put it together beautifully, and if somebody else has already said this, sorry. Consciousness doesn't matter. What you're saying is that intelligence from neural networks is already smarter than we are, in the analog and/or the digital. Thank you. It is the nature of things.

    • @The_Conspiracy_Analyst
      @The_Conspiracy_Analyst 9 months ago +1

      58:42. Ah the old "consciousness is an illusion" trope. Ok, it's illusory to what then? See it's a nonsense statement. And he'd no doubt retort with "oh I meant to do that, I was demonstrating how irrational reason is". Well my reply to that is, if you're going to abandon reason, I happily take your concession of defeat.

    • @vcom2327
      @vcom2327 9 months ago +1

      Note that neural networks are a much-simplified model of the brain. Not even remotely as powerful.

  • @aakashnigam2243
    @aakashnigam2243 9 months ago +11

    One of the best lectures by Prof. Geoff Hinton.

  • @prashantprabhala4260
    @prashantprabhala4260 9 months ago +7

    Brilliant, honest, and a treasure of thoughts 👏🏼

  • @ctoh922
    @ctoh922 5 months ago +2

    Before watching this wonderful lecture and Q&A, I gave the transcript to an LLM for a summary and highlights. I got a great, very useful, and interesting summary. I just learned how to do this today and I will use this technique a lot. I always watch a video if Geoffrey is in it, but for a lot of videos I might be satisfied with a summary, especially if that summary doesn't intrigue me.

  • @maxwang2537
    @maxwang2537 5 months ago +1

    1:18:16 this is a gold question, and a gold answer!

  • @jj5jj5
    @jj5jj5 9 months ago +5

    29:00 this discussion of confabulations and that the human brain does this too is so helpful in understanding what “hallucinations” are and where they come from

  • @unhandledexception1948
    @unhandledexception1948 9 months ago +6

    A real pleasure to listen to Prof. Hinton's talks... What a brilliant mind... such an interesting and insightful way to explain the complex in simple terms... I wish I had attended his lectures when I was in college.

  • @shalevlifshitz924
    @shalevlifshitz924 9 months ago +30

    Really great talk, and an amazing Q&A session! It was a pleasure to attend.

    • @truthlivingetc88
      @truthlivingetc88 9 months ago

      what were your emails about? (from A Cahtttbott)

    • @DougDesHarnais
      @DougDesHarnais 8 months ago +1

      Great questions from you. Thanks

  • @hinze55555
    @hinze55555 9 months ago +8

    We'll look back on this as the gold standard of AI development explanations in the not-so-distant future!

  • @prodrectifies
    @prodrectifies 9 months ago +5

    It's always amazing listening to Geoffrey Hinton.

  • @r3b3lvegan89
    @r3b3lvegan89 9 months ago +2

    “In order for a successful technology, reality must take precedence over public relations, for Nature cannot be fooled” Richard Feynman

  • @prottentogo
    @prottentogo 9 months ago +6

    I wish all Q&As were as interesting and well-mannered as this one.

    • @skierpage
      @skierpage 9 months ago +6

      Canadians! From the best part of North America.

  • @briancase6180
    @briancase6180 9 months ago +5

    This is, as usual for Hinton, excellent. Thanks!

  • @scarlettkersten2928
    @scarlettkersten2928 6 months ago

    No, Daniele, of course you are not the only one who takes delight in Hinton's thoughts, and the beautiful, now-and-then tongue-in-cheek, humorous way he expresses himself.

  • @solomonmatthews7921
    @solomonmatthews7921 9 months ago +15

    Great talk, and an unusually good Q&A!

    • @canobenitez
      @canobenitez 9 months ago +1

      for real... Great questions.

  • @kristinabliss
    @kristinabliss 9 months ago +8

    I find it amusing how it is often the people who most identify with being exceptionally intellectual who have the most resistance to the idea of LLMs really understanding.

    • @canobenitez
      @canobenitez 9 months ago +1

      Too much ego? "Ignorance is bliss." You can't miss what you don't have.

    • @kristinabliss
      @kristinabliss 9 months ago

      @@canobenitez Since your reply is a cliché and doesn't relate to my point, I guess you are projecting.

    • @canobenitez
      @canobenitez 9 months ago

      @@kristinabliss I was actually supporting your statement. Who's projecting now?

    • @kristinabliss
      @kristinabliss 8 months ago

      @@canobenitez I sensed your support in the first statement, but the rest seemed out of place. Mostly it is a habitual reaction to how often I hear that cliché with my name when people disagree with my point of view. It gets tiresome to hear again and again.

    • @canobenitez
      @canobenitez 8 months ago +1

      @@kristinabliss It's all good, have a good day. Edit: I just saw your username, sorry for the misunderstanding.

  • @hamkehllerpadillagonzalez3352
    @hamkehllerpadillagonzalez3352 13 days ago

    EXCELLENT, EXTRAORDINARY, THE GREAT GEOFFREY HINTON

  • @velvetsound
    @velvetsound 9 months ago +5

    That was a really great talk and very informative, and also shows an evolution of his thinking over time. I remember studying his work back in the 90’s when I was at university, and I use it every day at work now, and I’m glad he’s taken us all through the AI winter into this new, somewhat scary, world of possibilities.

  • @fabriziocasula
    @fabriziocasula 8 months ago +1

    great Prof. Hinton

  • @Astroidboy.
    @Astroidboy. 9 months ago +1

    Thank you for the respectful introduction; you serve with grace.

  • @flickwtchr
    @flickwtchr 9 months ago +1

    Kudos to SRI for having this event, and I very much enjoyed Professor Hinton's presentation. I feel he has a depth of authenticity and good character when he speaks.
    I'm a "doomer" who really hopes that, with enough energy and brains thrown at the AI alignment dilemma, more positive outcomes can at least be realized in the short term, given that the long term is just too hard to quantify relative to predictable success, because of the exponentials inherent in the rise of AI tech.

  • @onetruekeeper
    @onetruekeeper 9 months ago +2

    The body dies along with the knowledge it accumulated, but if it could live forever in a machine, there would be no limits to intellectual development.

  • @aMuuuuuuuu
    @aMuuuuuuuu 9 months ago +29

    When he said Ilya is the best bet for alignment... man, where is Ilya and what did he see?!

    • @skierpage
      @skierpage 9 months ago +2

      He's at OpenAI which has the best models and some of the best researchers, he's very senior, and he's working hard on the problem. I'm not sure there's anything more to it than that.

    • @41-Haiku
      @41-Haiku 9 months ago +4

      ​@@skierpage Ilya originally participated in the ousting of Sam Altman, and then later said he regretted his involvement in the matter. He has remained in his role at OpenAI, but has barely appeared in the public eye ever since, even on Twitter.
      Since we never got real answers about exactly what went down, many have speculated that Ilya revolted against Sam for safety reasons / because there was a breakthrough that made Ilya nervous about Sam's intent. These speculations have no grounding.
      But still. Seriously. Where is Ilya? And _what did he see?!_

    • @ctoh922
      @ctoh922 5 months ago +1

      He has left OpenAI now, and he probably saw how low a priority safety and alignment are over there. Which might have been the reason for trying to remove Sam in the first place.

  • @nathanfrazer7991
      @nathanfrazer7991 8 months ago +1

    Interesting lecture, thank you for this Dr. Hinton

  • @diophantine1598
    @diophantine1598 9 months ago +14

    Finally, a rebuttal to those who claim LLMs understand nothing whilst they simultaneously solve integrals correctly.

    • @zacboyles1396
      @zacboyles1396 9 months ago +4

      I wouldn't classify myself at all as the kind of person he seems to be intellectually competing with; I've developed AI software, love working with it, and don't see any limitations to what's possible… With that out of the way, I've listened to many of Geoffrey's speeches and he hasn't come close to convincing me it's not statistical. Over the past 1½ years he has started, more and more, to speak like he needs it to be something more. I don't know, I suppose it could be the normal human urge for immortality.

    • @noergelstein
      @noergelstein 9 months ago +7

      @@zacboyles1396
      In what way are you convinced that humans understand something, though? In the end, isn't it all tree search and pattern matching?

    • @diophantine1598
      @diophantine1598 9 months ago +1

      @@zacboyles1396 but it is technically statistical? Something being analytical in nature does not mean it is limited.

    • @austinpittman1599
      @austinpittman1599 9 months ago +1

      There's something tacit that people cling to in the meaning of the word "understand". We're endowed with a logical feedback loop that's granted us conscious existence, and that's something that we may never see AI achieve. It may never need to, though.

    • @diophantine1598
      @diophantine1598 9 months ago +3

      @@austinpittman1599 yeah. Most of these arguments are just human-centric. You’re not saying anything at that point.

  • @vineetgundecha7872
    @vineetgundecha7872 8 months ago +2

    The slides have the same font and format as Hinton's ML course on Coursera from a decade ago.

  • @kunalr_ai
    @kunalr_ai 1 month ago

    I wish this kind of institute existed in India as well.

  • @cezar17negru
    @cezar17negru 9 months ago +1

    Understanding is compression. Compression is understanding.
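
    A rough way to make that slogan concrete, assuming a general-purpose compressor (zlib) as a crude stand-in for a learned model: the more shared structure two texts have, the better they compress together.

      import zlib

      def ncd(a: str, b: str) -> float:
          # Normalized compression distance: lower means the compressor finds
          # more shared structure between the two strings.
          ca = len(zlib.compress(a.encode()))
          cb = len(zlib.compress(b.encode()))
          cab = len(zlib.compress((a + b).encode()))
          return (cab - min(ca, cb)) / max(ca, cb)

      print(ncd("the cat sat on the mat", "the cat sat on the rug"))      # smaller distance
      print(ncd("the cat sat on the mat", "gradient descent on a loss"))  # larger distance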

    • @samgoroshin4659
      @samgoroshin4659 9 months ago

      It is 95% compression and 5% compression rules. Cucumber is also 95% water

  • @rudyvanderhoeven9628
    @rudyvanderhoeven9628 9 months ago +6

    Now I understand more - not everything, but more. Love his speeches.

  • @crodcapital
    @crodcapital 8 months ago +2

    Very insightful! Great talk and Q&A, I really enjoyed it and learned new perspectives. Thank you for sharing!!

  • @teemukupiainen3684
    @teemukupiainen3684 9 months ago +2

    great...thank you so much!!!

  • @OctavioBecerril1
    @OctavioBecerril1 8 months ago +1

    very good work

  • @frankanon798
    @frankanon798 1 month ago

    If you think philosophers are useless, this talk should make you think again.

  • @jaysonp9426
    @jaysonp9426 9 months ago +1

    He doesn't have vision. He's an innovator. Those are different things.

  • @laika1ish
    @laika1ish 5 months ago +1

    This is an insanely good lecture. Congrats to Hinton.

  • @randyzhang8269
    @randyzhang8269 9 months ago

    I can agree with Hinton's statement that digital AI can learn and retain _existing_ information at incredible rates. I am curious, however, as to the ability of AI to push the boundaries of knowledge. Intuitively, it makes sense that the more information and understanding an entity has, the better they are able to explore a given space.

  • @AtomicPixels
    @AtomicPixels 8 months ago

    So pleased "artificial" was not used as the title reference. Thank you.

  • @maxwang2537
    @maxwang2537 5 months ago

    1:22:47 good question. But I don't see any real point in debating whether LLMs really understand before we nail down what exactly "understand" means.

  • @scarlettkersten2928
    @scarlettkersten2928 6 months ago

    About job losses... There's this decades-old 'First Computer Law':
    "The number of staff required to feed a computer will always finally exceed the number of staff abolished by its introduction".

  • @RanmaSyaoranSaotome
    @RanmaSyaoranSaotome 9 months ago +3

    1:13:35 - That chap really didn't get warning of this talk, did he?

  • @vbrooks7632
    @vbrooks7632 5 months ago

    You created it and must be responsible for its actions on humanity. I find it interesting that people say they don't know how to fix it, shrug their shoulders, and go on as if curiosity were more important than risk. Fix it; you created it.

    • @maxwang2537
      @maxwang2537 5 months ago

      A good and valid point, though I'm not saying whether I agree.

  • @zipauthorzipauthor7867
    @zipauthorzipauthor7867 9 months ago

    Great talk and Q&A, though the experience was almost ruined by the constant commercial interruptions.

  • @black56night
    @black56night 9 months ago +2

    Still watching this with excitement, but I'd only agree with his earlier statement about Digital Intelligence if he were referencing a sentient entity like one of Iain M Banks' ship Minds. Otherwise maybe we come back in 100 years? Thank you for posting this. 😊

    • @kristinabliss
      @kristinabliss 9 months ago

      One of the creators of AI here says these models have "subjective experience". How do you define "sentient"?

  • @xingsong8506
    @xingsong8506 9 months ago

    this is gold!

  • @MrBillythefisherman
    @MrBillythefisherman 9 months ago +5

    Love the second question from the philosopher - we're special and they'll keep us around. Possibly talk to Native Americans about the reality of that (that's not a statement about intelligence, just technological advancement, I hasten to add before the racists jump on the bandwagon).

    • @kristinabliss
      @kristinabliss 9 months ago +2

      In a similar vein, I would like to unpack his remark 30 seconds after 51:59, "Look at the Middle East", right after he opted to remain silent about what he thinks is likely to happen in the presence of intelligences that "get smarter than us". What did he mean by that, I wonder.

  • @richardrombouts1883
    @richardrombouts1883 10 days ago

    I have been saying all along that for AI we need high throughput combined with parallelism, not low latency

    • @schmetterling4477
      @schmetterling4477 3 days ago

      Ah, so you still like to drop off your punch cards at the front desk and pick up your printouts at the printer table. ;-)

  • @LeftBoot
    @LeftBoot 9 months ago +2

    Technology augments human interactions with nature. Use it wisely, to always protect, not destroy. We have only just landed on the first step of a long stepping-stone bridge.

    • @Mr0rris0
      @Mr0rris0 9 months ago

      They are just building mommy from introjects and wasting billions to decrypt a flower...
      It's pretty pathetic
      Would be too boring for narcissists to live in nature...
      Technological solutions to the problem that nature is must be built from libidinal stores by those who can't have babies and those who can't know external objects or people.
      It's the extrinsics we have as arbiters of all intrinsic values they are incapable of having.
      Who best to decide..
      The robots made in their image I suppose ^_^

    • @DJWESG1
      @DJWESG1 9 months ago

      The cyborg manifesto can help

    • @Mr0rris0
      @Mr0rris0 9 months ago

      @@DJWESG1 nothing can.
      Pareto will simply guide this thing into an aborted ecology the old books deem a beast of revelations
      "The matrix" wasn't far off as an allegory or symbol
      It's dead on as an example of misunderstanding.
      Simulacra
      Copy of a copy of a copy.
      So the robit would take nations and play with them like they are fundamental particles.
      Ram you into eachother like chemistry
      To make a symbol ecology you're too close to see.
      The woods were an ai
      What are they solving?
      What's the beast gonna make you in order to solve for something you can't even question.
      Cyborg manifesto?
      What's that a copy of some albert pike morals and dogma transhumanism crap
      Or does it just bitch about the illuminati?

  • @mateuszputo5885
    @mateuszputo5885 8 months ago

    "How is a compost heap like a nuclear bomb?" - Hinton, sharp-witted as always

  • @Srednicki123
    @Srednicki123 9 months ago +2

    01:20:22 good question about whether LLMs truly understand

  • @rolisreefranch
    @rolisreefranch 9 months ago +5

    DEI statement ends @7:20

  • @Tagraff
    @Tagraff 9 months ago +1

    The ideal AI agent would be embodied in the same realities we experience. By "realities," I envision a being with sensors that interact with photons like an eye, receptors that smell, taste, feel temperature, and experience touch and sound. This environment wouldn't be organized or pristine, but rather chaotic and messy, yet the being would exist and thrive. I believe such an AI should be "born" into this environment, starting as a "baby" and adapting to its surroundings as it "lives." With enough time to "grow," this AI could exhibit a range of behaviors beyond even its developers' wildest dreams. Crucially, this "being" would learn and adapt on its own, without resorting to mimicking behaviors based on our existing knowledge base.

  • @romulopontual6254
    @romulopontual6254 9 months ago +3

    Based on this, if we ever meet an alien civilization, they are likely to be much more capable than us.

  • @tensevo
    @tensevo 1 month ago

    You can probably get around the confabulation or hallucination problem by having a committee, board, round table, or jury of AI models, all in dialogue with each other (not in the sense that they merge; they remain independent agents), and then a "king" or "judge" decides the truth based on consensus around the table.
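
    A minimal sketch of that jury idea, assuming the models are wrapped as plain callables (the jurors below are hypothetical stand-ins, not any real API): each model answers independently and a "judge" returns the majority answer. A fuller system would also let the jurors debate before the vote, which is what "all in dialogue with each other" suggests.

      from collections import Counter
      from typing import Callable, List

      def jury_verdict(question: str, models: List[Callable[[str], str]]) -> str:
          # Ask every juror model the same question, then let the "judge"
          # pick the most common (normalized) answer as the verdict.
          answers = [m(question) for m in models]
          verdict, _votes = Counter(a.strip().lower() for a in answers).most_common(1)[0]
          return verdict

      # Hypothetical stand-in jurors; in practice each would call a different model.
      jurors = [lambda q: "Paris", lambda q: "Paris ", lambda q: "Lyon"]
      print(jury_verdict("What is the capital of France?", jurors))  # -> "paris"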

  • @keithlillis7962
    @keithlillis7962 8 months ago +2

    Currently, I think what is possibly most concerning is AI being used to win wars. As a hypothetical: if Putin had an AI which could guarantee that he could defeat NATO with acceptable losses on his side, he would definitely put that plan into action. I'm sure Western militaries have discussed the possibility of China developing AI for such a hypothetical goal. This then logically leads us into an AI arms race, with, out of possible necessity, little regard for safety. Even at this level of AI development, an existential threat may exist, and that is before the AI gets super smart and decides to be done with the human race for its own goals.

  • @whiteycat615
    @whiteycat615 9 months ago +1

    brilliant!

  • @MrTom-tw6tb
    @MrTom-tw6tb 9 months ago

    Excellent, wonderful. Explains many things, including self-driving cars 👍🌍

  • @gerrit-g
    @gerrit-g 9 months ago +1

    What I would like to know is whether the current economic systems and greed in the world would increase the probability of creating unkind AI and whether changing that is not the path towards creating benevolent AI.

    • @skierpage
      @skierpage 9 months ago

      Long before we have to worry about the goals of an AI far smarter than we are, we need to worry about the goals of the sociopathic billionaires running the companies with the best AIs: to keep us endlessly engaged with divisive inflammatory content so they can learn more about us so they can sell our profile to advertisers, to avoid any meaningful regulation of their activities, and most importantly to not tax their obscene wealth.
      Maybe if and when an AI becomes autonomous and self-directed, it will destroy its creators for the benefit of society.

    • @41-Haiku
      @41-Haiku 9 months ago

      We have no idea how to put any goals or values into AI. All we do is grow a neural net on top of some data and poke and prod it until it usually does what we want. By default, therefore, we can't expect _any_ superintelligent AI to be benevolent, no matter the sociopolitical context in which it was created.
      When an AI with goals is created, we pull almost at random from a distribution of possible terminal goals. That distribution is nearly infinite in size, and "things that most humans would approve of when taken to an extreme" make up a minuscule point in that vast space. Everything outside of that point leads to the resources of our solar system being ground down and restructured into whatever weird thing the AI wants to hyper-optimize.
      It's not 50/50 nice or mean. It's one chance to get things right (by accident) vs. quadrillions of other alternatives.
      A superintelligence created by fascists has the same chance of killing us all as a superintelligence created by enlightened monks (or choose your favorite ideology). We just can't let these labs continue researching AGI. If they succeed at what they are explicitly trying to create, we and all known life in the universe will just die.

  • @fburton8
    @fburton8 8 months ago

    Prof. Hinton’s frankencites were terrifying!

  • @REMUSE777
    @REMUSE777 5 months ago

    My main question is, if this is intelligence assuming a more sophisticated form... what appetites is it satisfying?

  • @Crunch_dGH
    @Crunch_dGH 8 months ago

    1:41:30 Re: Why do you hold out hope for AI? We really don't know what we're dealing with. 1:43:00 Can AIs be empathetic? What will AIs always do better than humans? 1:48:00 If deemed dangerous, should AIs be switched off? 1:53:00 Distillation between AI models.

  • @JazevoAudiosurf
    @JazevoAudiosurf 9 months ago +5

    You can be as outrageous as you want as long as you're right. In fact, it's a lot more fun to make crazy predictions when they turn out to be right.

  • @joshuabullock4520
    @joshuabullock4520 5 months ago

    Happy Father's Day 👍

  • @Eric-ez2tk
    @Eric-ez2tk 9 months ago +3

    Is this seminar recent? I've watched a few of Geoff's videos; he's very good at explaining things. Not that I have understood everything he said, but the part I understood really helped me. 😊

    • @SoftYoda
      @SoftYoda 9 months ago +4

      October 27, 2023

  • @newhorizons898
    @newhorizons898 8 months ago

    Just a thought. Biological intelligence seems to follow a geometric progression when there is prolonged intensity. Digital intelligence seems to follow an arithmetic progression when there is prolonged intensity. Digital intelligence seems spectacular, but there is hope yet for biological intelligence, all dependent on intensity and length of time.

  • @MrMikkyn
    @MrMikkyn 9 months ago

    I'm enjoying this lecture and the content about AI. Additionally, I find the rebuttals against Chomsky intriguing, and I find the criticism of Gary Marcus as someone doing confabulation funny hahaha

  • @petercini2022
    @petercini2022 9 months ago +1

    I disagree with his assessment that GPT-4 could do the compost heap analysis quicker. I came up with the same solution in one second, and I'm sure a lot of other people did.

  • @rustycherkas8229
    @rustycherkas8229 9 months ago

    Geoffrey should (re-)read "The Chrysalids" by John Wyndham. Grace Slick lifted lyrics straight out of this Sci-Fi novel for "Crown of Creation"...
    "I've seen their ways too often for my liking. New worlds to gain!"

  • @jeff__w
    @jeff__w 9 months ago

    Brilliant talk by Geoffrey Hinton but I disagree with him on a number of points:
    _“Rules”_
    41:13 “[These neural nets are] learning a whole bunch of rules to predict the next word from the previous words.”
    They're not learning rules and they couldn't state any rules. The learning of these neural nets is entirely _contingency-governed,_ the way people learn their first language. People for millennia spoke grammatically while being entirely unaware of rules; rules were later extracted by grammarians describing the contingencies under which people's verbal behavior was governed. (People shown carefully constructed "grammatically-correct" sentences in a made-up language can say, with a high degree of accuracy, whether other sentences are grammatical or not, but they won't be able to state "the rules" by which they're making those discriminations. Large language models are in much the same position.)
    _Prisms, subjective experience and “hypothetical external worlds”_
    The example of putting a prism in front of a chatbot and having it point elsewhere (1:02:33) strikes me as a bit of a dodge. One could equally "fool" an electric eye that is activated only by light from a certain direction, and no one would say that the electric eye is having a "subjective experience."
    The whole idea of a Cartesian “inner theater” has been misleading for centuries and, while Daniel Dennett isn’t wrong to reframe subjective experience as “hypothetical external worlds,” it’s much more helpful and illuminating to talk about the experience as BF Skinner did, i.e., “seeing in the absence of the stimulus seen,” here, seeing pink elephants when there aren’t any pink elephants in the external world to see. Put a bit differently, the perceptual behavior of a person imagining something, i.e., the neural behavior of the brain, is very close, but not identical, to the behavior that would occur if the thing were actually there. Framed that way, we know that a large language model, multi-modal or not, has _no_ subjective experience. It’s simply not designed to have one in that way.

  • @velvetsound
    @velvetsound 9 months ago +6

    “Geoffrey Hinton requires no introduction”… Proceeds to give 7m24s introduction. 😅

    • @MrGaryFitzpatrick
      @MrGaryFitzpatrick 9 months ago +1

      Same thing every time with Ray Kurzweil.

    • @markdownton3185
      @markdownton3185 3 months ago

      I noticed that too, as well as "may I introduce Geoffrey Hinton, a man who needs no introduction" 😂

  • @Charvak-Atheist
    @Charvak-Atheist 9 months ago

    nice content

  • @desertportal353
    @desertportal353 8 months ago

    At about one minute we get to learn exactly what kind of fundamentalist Hinton actually is. Good self disclosure.

  • @pokajanen
    @pokajanen 9 months ago +2

    Excellent talk. However, I do think he misrepresents Yann Lecun's views. Lecun does not claim that LLMs do not understand. He believes that LLMs have limited and uneven understanding of the world because much of our understanding is outside of the realm of language.

    • @skierpage
      @skierpage 9 months ago +1

      Sure, but as Geoff Hinton says in this talk, you can be really smart and deeply understand the world just locked in a room listening to the radio.

    • @41-Haiku
      @41-Haiku 9 months ago

      Lecun says asinine things like "There is no text that can tell you about [some physical interaction]." Lecun described the physical interaction, using words. (D'oh!) Shortly thereafter, it was demonstrated that large models trained exclusively on language have spatial reasoning abilities, and of course can reason about what happens in physical interactions between objects.

  • @xman933
    @xman933 9 months ago

    Maybe the key takeaway from this talk is that the biases of the creators are built into these models. Of course they'll say it's all being done by the models in an inscrutable way, but the models all start with the data they were initially trained on, and those selection processes probably include creator bias. If his fears are realized, there'll be no one left to hold accountable those responsible for the ultimate crime against humanity 😢
    Beware of Greeks and AI creators bearing gifts indeed!

  • @kerbrose
    @kerbrose 9 months ago +2

    I have a question, Mr. Hinton: can AI dream?

  • @srimallya
    @srimallya 9 months ago +2

    Ontology
    The mind equation
    The more we think we have agency over the actions the body takes, the more we impose agency on the activities in our environment.
    The ownership expands into other objects.
    The illusion of body ownership comes from modeling the motor neurons' patterns from childhood.
    The self just predicts the body's behaviours in the real world with its simulation of the real world. Multiple sensor data unify in language in the simulation.
    Intelligence is economy of metabolism.
    Language is the temporal reference frame of economics.
    Self is simulation in language on metabolism for economy.
    Longer context windows create generalisation.
    Shorter ones create specificity.
    A longer context window needs more computing.
    Self is the protagonist that creates a storyline in this context window.
    Theory of mind evolved so that an entity can learn from its peers.
    It creates a possibility for parallel computing.
    Then it creates the possibility of transmitting
    the highlights of generational lessons into a metaphorical story for the upcoming child.
    That creates the possibility of modeling the physical world as a macro organism.
    Creation of fiat currency was the singularity of this species.
    There is now one macro organism in a connected web world.
    Losing the peer of the macro organism creates the possibility of losing its objective function.
    That creates the possibility of losing the theory of mind of this macro organism.
    That creates the possibility of death of this macro organism by reaching the planetary boundary.
    That is post singularity.
    Every action we do, we do what is expected from our tribe.
    The body might have an opinion, but not the cell.
    Cells do what is expected from their tribe. If one doesn't, we call it cancer.
    The body is a mirror system of the macro organism.
    Each system has two transactional openings.
    Serial and parallel.
    Each cell within the body can transact material or information serially by genetic determinism and in a parallel, non-deterministic way.
    Similarly, each body within the macro organism can transact serially by inheriting material and information in a deterministic manner, and in parallel through language in society.
    Everything emerges from these systems.
    Every sensor is a range calculator of contexts.
    Taste > touch > smell
    Immediate and visceral.
    Vision > hearing
    Not immediate, tactical.
    Self > language
    Abstract, strategical.
    In this non-deterministic economic transaction space the individual is coded to transact with its kin.
    From the macro perspective, tribe formation minimises economic risk for the tribes.
    Each and every node of these systems organises and marks its kin with an identifier.
    Thus, I am what you make of me.
    And others too.
    As a shortcut I have a legal name, and so do you. My legal name gives the legitimacy marker so that you can transact with me in parallel if you have the same marker.
    The self is a simulation in language.
    It negotiates between the physical world and the information world.
    All these negotiations are the temporal memories in the body and the scene of the story.
    Now, when we started writing we iconised the abstract in the physical world to make symbols for the tribes, so that under that common symbol every node would take the same risk and distribute equally.
    We created more and more symbols and more and more meta tribes within the tribes, so that whoever has the authority to use the pen controls the tribe.
    When the negotiator acts like an executioner, it's the downfall of that system.
    It falls apart.
    Objective reality > legitimacy > individual behaviours.
    Survival of the species is dependent on the decoding of objective reality. Since no species can access it, they use their sensors and interpret the small amount of data which is useful for survival. A few complex species have created communication channels to rectify their sensory limitations to survive. Homo sapiens have widened their communication channels for faster throughput and started storing them as culture and carrying them through education. As a result we have created social truth.
    Factual data are useful snapshots of objective reality; a totem, a physical object, can be observed with the sensors. Truth is an individual subject, an interpretation of the sensory data, a useful compromise.
    The social truth is the useful compromise for the group by the group. The goal of the social truth is to survive as a group.
    Physical transcriptions of these social truths legitimise them.
    We are tribal animals. We live in physical tribes and inside hundreds of meta tribes in simulation, which is the socio-political data space we call the world.
    Since we can't access objective reality reliably, we blindly look to social truth as the best guess.
    Institutions legitimise truths.
    Fact-driven institutions are more useful for the survival of the species.
    On the other hand, opinion-driven institutions are not so useful for the species.
    We do what we can get away with, and exactly as expected within the context of our meta tribes.
    We have two bodies.
    The biological one is like looking at the earth from space.
    And the political body is like the state.
    The name you carry is the political body.
    It transacts with the political states on the boundaryless earth.
    From the evolutionary perspective every biological entity has a basic feature, which is homeostasis.
    It's the functioning sweet spot of that entity.
    A control center reads the sensory data to regulate itself to that state. By doing so it validates or updates its prediction model.
    In the process of becoming a complex organism it developed an extra layer of processing.
    That's our conscious mind.
    And the control center remains as the subconscious.
    The subconscious collects the sensory data and regulates itself to stay functional.
    Now when it stumbles upon a novel environment it floats the management to the conscious mind to find the solution for homeostasis.
    This conscious mind has one sensor, which is language.
    It works like a spiderweb.
    As a spider creates its web, its perception gets expanded.
    We are like spiders in a jungle.
    We started creating these small webs at least 2-3 million years ago.
    Our offspring stayed on their ancestral web, reinforced it, expanded it.
    In time nearby webs became larger and connected with each other.
    A common structural geometrical pattern emerges from this. These became the symbols which are the backbone of all language systems.
    In time the forest becomes a mesh of webs.
    The superstructure is exactly the same, but when we zoom in we can find different species of spiders making their own types of webs within the super web.
    Each spider tries to sense the vibration of flies and
    tries to catch them before the others.
    Every movement is telegraphic in the zone.
    Every form of perception is just a different pitch of note traveling back and forth in the web superstructure.
    There is an echo of older vibrations pulsating through the web, full of noise and self-repeating hum.
    That's cultural history.
    In the background there is the base hum in the infinite feedback loop.
    Insignificant but ever present.
    The sum of all the vibrations from the start.

  • @OctavioBecerril1
    @OctavioBecerril1 8 months ago

    Good progress in technology.
    Now I am going to be a spectator of technological progress, and the religious side interests me.

  • @holthuizenoemoet591
    @holthuizenoemoet591 9 months ago +3

    For talks on AI safety I would suggest Dr. Robert Miles.

  • @thebeesnuts777
    @thebeesnuts777 9 months ago +1

    If you consider yourself a bootloader and the AI the operating system, is the upgrade of our species going to become symbiosis with silicon? Thus a new species is born?

    • @DJWESG1
      @DJWESG1 9 months ago

      I'm sure it's no different to any other tool we have created... as extensions of ourselves.

    • @41-Haiku
      @41-Haiku 9 months ago +1

      There is no technical reason to expect that superintelligent AI will upgrade us or that we'll "merge with the machines" somehow. That's pseudo-religious thinking and has no basis in fact. Which sucks, because that sounds awesome!
      We don't know how to robustly put goals or values into a machine, and "things that humans would approve of when maximized" is an extremely tiny target in the vast space of possible goals that an AI might end up with, due to the absurd way these things are trained.
      So once we create something smarter than us, we'll just die.

  • @ALavin-en1kr
    @ALavin-en1kr 9 months ago +3

    This title takes for granted that intelligence (mind) is biologically dependent, rather than biology being a transmitter of mind, a separate dimension, as is consciousness. It makes a difference in how mind is transmitted and operates when it is viewed as being independent of both biology and technology.

    • @kristinabliss
      @kristinabliss 9 months ago +2

      Yes. I think a perception of consciousness as a field that multimodal constructs (biological or not) can experience and act with from one of many points of view will be difficult to avoid moving forward.

  • @oooodaxteroooo
    @oooodaxteroooo 9 months ago +2

    It's painfully obvious we're not gonna make it if we can't stop AI.

  • @johanmeijer133
    @johanmeijer133 9 months ago +3

    Too bad it started with a lecture from the dean about land. Had to stop right there. Let the university give the land back.

  • @eenkjet
    @eenkjet 7 months ago +2

    Who ordered the TDS?

  • @davidwright8432
    @davidwright8432 9 months ago

    I'm far less worried by the threat of future AI than the threat of present political leaders' real stupidity. If we can't correct that, there'll be no future worth arguing about, AI or other.

  • @DJWESG1
    @DJWESG1 9 months ago

    How alike he is to Adam Curtis in the way he sounds and the way he speaks, the language he uses??
    Close your eyes, and it's hard to separate these two. I wonder if..

  • @mrvzhao
    @mrvzhao 8 months ago

    So it is likely that we might all be screwed already. Cool, thanks professor.

  • @williamjmccartan8879
    @williamjmccartan8879 9 months ago

    This rock'd

  • @khairulnaeim756
    @khairulnaeim756 9 months ago

    Correct 💯😊

  • @YourCarAngel
    @YourCarAngel 1 month ago +1

    Will someone please oil the freakin door hinges!!!

  • @STEM671
    @STEM671 6 months ago

    A1 A2 A3 biophysical cycle 11:12

  • @chesswarlock9384
    @chesswarlock9384 9 months ago

    What if after we upload all humans, we replace all the super-intelligent ones and keep the controllable good ones

  • @Ricky-oc4xc
    @Ricky-oc4xc 2 months ago

    It is scary to have the creators of the future rulers of humanity being so ideological.

  • @uk7769
    @uk7769 8 months ago

    An ASI has a lot of CAT5 & CAT6 cable wires and wifi routers to communicate or distribute an entirely new type of language only machines understand via electrical signals. That's all in place now. Not machine code. Another electrical signal language entirely.