#59 JEFF HAWKINS - Thousand Brains Theory

  • Published 29 Apr 2024
  • Patreon: / mlst
    The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges.
    Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality from thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn’t think there is just one model, but thousands.
    Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It’s an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too.
    Pod version: anchor.fm/machinelearningstre...
    numenta.com/a-thousand-brains...
    numenta.com/blog/2019/01/16/t...
    numenta.com/assets/pdf/resear...
    numenta.com/neuroscience-rese...
    Your Brain Is Not an Onion With a Tiny Reptile Inside
    journals.sagepub.com/doi/full...
    Pruning Neural Networks at Initialization: Why are We Missing the Mark?
    arxiv.org/abs/2009.08576
    Panel:
    Dr. Tim Scarfe
    Dr. Keith Duggar / doctorduggar
    Connor Leahy / npcollapse
    Our thanks to:
    Numenta
    Matthieu Thiboust (www.insightsfromthebrain.com/ + / mthiboust )
    Shwetha Bharadwaj (show research / shwetha-bharadwaj-2b92... )
    Andreas Koepf (show research neurosp1ke?lang=en)
    Lex Fridman, we used a few clips from his Jeffv2 interview -- • Jeff Hawkins: The Thou... -- remember to check Lex's channel out! ❤
    [00:00:00] Introduction
    [00:03:03] The Neocortex
    [00:09:58] Triune Brain
    [00:12:24] Grid and place cells
    [00:14:54] Reference frames
    [00:21:03] Mountcastle
    [00:25:46] Thousand brains theory of intelligence
    [00:32:40] HTM
    [00:41:12] Sparsity
    [00:52:57] Main show kick off
    [00:54:36] Tribalism in the ML Community
    [00:57:14] Variation in approaches to the same goal
    [00:59:43] Hawkins ideas validated, cortical uniformity
    [01:02:25] Sparse distributed representations (SDRs)
    [01:06:08] Reference frames as generalization
    [01:10:29] Reference frame remapping
    [01:14:14] Reference frames can generalize beyond three dimensions
    [01:17:26] And generalize beyond spatial topology
    [01:20:12] Intuitions behind why SDRs work well
    [01:24:03] Are there capacity concerns with the SDR model?
    [01:27:11] At what level between GOFAI and connectionism should we focus our effort?
    [01:31:33] The brain reasons by abstract movement through reference frames
    [01:35:34] Humans don't know Universal Truth (if there is even such a thing)
    [01:37:34] Learning elsewhere in the brain besides the neocortex
    [01:40:44] Stochastic backpropagation in the human brain
    [01:44:04] What's missing from artificial neural networks? Numenta's roadmap
    [01:48:59] AGI Risk - the alignment problem
    [01:54:07] AGI risk - the neocortex can thwart the old brain
    [01:57:47] AGI risk - artificial evolution
    [02:01:18] AGI risk - yes we need to think on and develop adequate control systems
    [02:03:48] A balance of knowledge: innate, experiential, taught, or deduced
    [02:16:09] post-show wrap-up
    [02:16:59] Advancements in direction at Numenta
    [02:19:50] AGI risk recap
    [02:23:56] Ought did evolve from Is, humans are the proof
    [02:26:29] When AGI overcomes our weaknesses
    [02:28:54] Who doesn't like forking?!
    [02:30:29] Coherent synchronization as a measure of identity
    #machinelearning #artificialintelligence
    Music credit;
    / nolightwithoutdark
    / sibewest-nero
    / skeler-kensho
    / s-o-l-a-r-i-s
    / empty
    / moment
    / reticent
    / velvet
    / c-a-l-i-c-r-y-lalala
    / ephemera
    / elo-method-subranger-s...
    / ukowens1
    / nightwalk
    / be-here
    / divine

COMMENTS • 166

  • @Extys
    @Extys 2 years ago +112

    I can't believe something this high quality is free. Truly incredible work.

    • @audrajones
      @audrajones 11 months ago

      it's not free for them - throw them a couple bucks!

  • @AliMoeeny
    @AliMoeeny 2 years ago +52

    You have incredible guests and hosts, but the best part of the show is the background and introduction section at the start. Thank you very much for the hard work

  • @fotoyartefotoyarte1044
    @fotoyartefotoyarte1044 2 years ago +19

    That introduction was the best I have ever seen for a scientific interview; real work was put into it. Very few people nowadays have the passion and will to do work that well done. Amazing.

  • @Heidiroonie
    @Heidiroonie 2 years ago +5

    Can't believe this has 6.4 thousand views; it should be 6.4 million

  • @iestynne
    @iestynne 2 years ago +10

    That introductory section on neuroscience was INCREDIBLY useful!! You should split that out as a separate clip video.

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +29

    Finally!!! You made us wait for this. Let's see if the wait was worth it! 😊

  • @DavenH
    @DavenH 2 years ago +15

    Man, you have got SUCH a good thing going here. I have to think that of two petri-dish universes, one with MLST and one without, the one with MLST gets its best-outcome AGI way faster thanks to your discussion, distillation, and dissemination of the field's knowledge. Talk about legacy! Thanks once again for these tremendous efforts.
    One thing that keeps hitting my curiosity is the belief that AI needs embodiment. Does that merely mean that the agent needs to have a discrete instantiation somewhere (even somewhere virtual), rather than a periodic, intermittent or fluid one? Or does it mean real physical embodiment? I'm super skeptical of the latter, as we're interacting with a virtual environment ourselves as humans. We never actually touch objects themselves, we "touch" signals and qualia. Our physical embodiment has no material difference (in the legal sense of material) from an arbitrarily realistic metaverse. Right?
    I don't want the conception of a need for embodiment or robotics to unnecessarily limit our grasp, either. So many interesting things are virtual in some respect, and have learnable structure, and could benefit from the availability of high intelligence.

    • @balapillai
      @balapillai 10 months ago

      2 ways of disambiguating this:-
      1) Distinguish the process of learning ephemerals versus conceptuals.
      Hypothesis: The more conceptual, the more continued embodied engagement, ie adaptive learning, is required as predicate
      The more ephemeral, the more the learning bit can be opted into a pre-existing conceptual body “virtually”.
      A parallel of “retrofitting” a loose jigsaw puzzle piece into an almost complete jigsaw puzzle. The more complete the puzzle is, the more odd leftover bits can be fitted in because of “nyet” - they cannot possibly be fitted in elsewhere in the puzzle.
      2) Investigation into why the Tamils (of which I and the CEO of Google are instances) went into a gradient descent from about 600 CE onwards when they were on a fat gradient ascent, epistemology-wise, up to then. What aspects of epistemological growth were effectively “ethnically cleansed”?
      #SpiceTradeAsia_Prompts

  • @CharlesVanNoland
    @CharlesVanNoland 2 years ago +14

    RIP Matt Taylor. Followed his Twitch streams and had the fortune of chatting with him on there in the weeks before his departure. He deserved to see where machine intelligence would lead. I guess that now he already knows out there in the infinite forever.

  • @eox5850
    @eox5850 2 years ago +2

    Don't remember being happier to have two hours and 12 minutes remaining on a video. Bravo

  • @bertski89
    @bertski89 2 years ago +2

    Very classy tribute to Matt Taylor - also this is the best external treatment and overview of Numenta's work that I have seen - and I've been watching closely since Redwood was founded (2005). Really appreciate the depth. Great work, thank you for putting this together and the interview.

  • @troycollinsworth
    @troycollinsworth 2 years ago +2

    I'm in the last 50 pages of A Thousand Brains: A New Theory of Intelligence, and this was very informative, with far more details than were conveyed in the book.

  • @ideami
    @ideami 2 years ago +4

    Superb episode, a great journey through the fascinating work and research by Jeff and the Numenta team, this podcast is a treasure indeed ;)

  • @sjp1861
    @sjp1861 2 years ago +2

    This is just fantastic! Thank you very much for this episode. Simply outstanding work.

  • @videowatching9576
    @videowatching9576 1 year ago +2

    I appreciate that this show ultimately ties back to ‘machine learning’ and building things.
    In contrast, in other conversations outside this show, I find that talking about AI or AGI or advances in the abstract, or just talking about the implications in a sense of awe, is tiring because it doesn’t really map to a concrete thing tied to productivity / improvement / advances. Even places that seek to have a ‘philosophical’ conversation about AI end up, I think, unfortunately missing a lot of opportunity to address use cases.
    So as a guiding principle, I think it’s great that this show seeks to be focused on uses ultimately.

  • @videowatching9576
    @videowatching9576 1 year ago +1

    Such an awesome format for this podcast of such important info:
    Part 1: summary and framing of how to understand
    Part 2: the talk
    Part 3: downloading that to interpret
    Jobs to be done:
    Part 1 as the summary of takeaways
    Part 2 as decide and interpret yourself
    Part 3 as figure out how to apply and next steps - from the interview, and more, such as ‘if this is true, then what else is true’ and so on.
    Fascinating.

  • @janosneumann1987
    @janosneumann1987 2 years ago +2

    Great episode! Raising the bar higher. Another epic intro from Tim 👏

  • @CristianGarcia
    @CristianGarcia 2 years ago +4

    After watching the whole talk I get the sense that 1) Jeff has really cool ideas, and taking strong cues from neuroscience is very interesting, but 2) it seems a lot of what he points to is not published/shared, and it seems very unlikely a single lab will make progress in this field on its own. Contrary to Gary Marcus, a big +1 for Jeff is that his team is actually trying to implement his theories.
    Anyway, loved the episode!

  • @abby5493
    @abby5493 2 years ago +5

    Most incredible video you’ve ever made 😍

  • @fcvanessa
    @fcvanessa 2 years ago

    just got my new XM4's and can listen to MLST while walking around the house. Brilliant work Tim and co!

  • @egor.okhterov
    @egor.okhterov 2 years ago +1

    My observations:
    1. We are not conscious all the time. We get snapshots of alertness once every 60 milliseconds or so, with gaps in between during which we are fully unaware and unconscious.
    2. The clarity of being conscious feels different when you are fully awake vs. when you are sleepy or drunk.
    3. We are fully unconscious and not self-aware in a state of deep sleep, despite the neocortex still working and making votes and predictions.
    4. We can direct our consciousness to be aware of different parts of the information presented. Somehow we can guide and aim our attention at different concepts and images presented to us at every moment. We can even track our thought process and feel its continuation.

  • @thephilosophicalagnostic2177

    A wonderful, detailed exploration of Hawkins' superb model of consciousness. Thanks for creating and posting.

  • @galileo3431
    @galileo3431 2 years ago +2

    MLST getting the pioneers! 🤖🧠

  • @oliverhorsman8896
    @oliverhorsman8896 4 months ago

    Wow, amazing, thank you so much; I'm learning so much from you.

  • @zilliard1352
    @zilliard1352 2 years ago +2

    Truly amazing

  • @CristianGarcia
    @CristianGarcia 2 years ago +1

    Amazing work! ❤

  • @freakinccdevilleiv380
    @freakinccdevilleiv380 2 years ago +1

    Aweeeesome 👍👍👍
    Many thanks.

  • @autobotrealm7897
    @autobotrealm7897 1 year ago

    Visuals are brilliant.... exhilarating!

  • @RoyceFarrell
    @RoyceFarrell 2 years ago

    Wow, thank you, love your work...

  • @Mario7k
    @Mario7k 2 years ago +3

    This channel is great! 👏👏👏👏👏👏🏆

  • @skyacaniadev2229
    @skyacaniadev2229 5 months ago

    Great talk. Wish I watched this earlier. 🎉

  • @danbreeden5481
    @danbreeden5481 2 years ago

    Absolutely amazing

  • @TheShadyStudios
    @TheShadyStudios 2 years ago +1

    helllll yeah definitely gonna learn a bunch from this

  • @Artula55
    @Artula55 2 years ago +1

    Thank you :)

  • @LiaAnggraini1
    @LiaAnggraini1 2 years ago +2

    please invite Judea Pearl, I really love his book and idea about causality

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago

      We would love to get Judea on! We did try and invite him on Twitter a while back and he didn't respond.

  • @nauman.mustafa
    @nauman.mustafa 2 years ago +1

    +1 for speaking against tabula rasa!

  • @benjaminjordan2330
    @benjaminjordan2330 1 year ago +1

    I have a theory that humans, dogs, and other mammals turn their heads whenever they are confused in order to slightly change their perspective when the visual input is ambiguous.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +2

    I'm rewatching some of your old podcasts. They're excellent. Nevertheless, it seems wrong when people are surprised by inherited knowledge. If brains were initially randomly "wired," the genetic code for those successful randomly wired brains would have been passed on. Selection can account for every biological feature.

  • @MuhsinFatih
    @MuhsinFatih 2 years ago +1

    Amazing. I could never before believe that the insane level of intelligence that the brain has could evolve even in billions of years. I can see how it's possible now

  • @audrajones
    @audrajones 11 months ago

    Thanks!

  • @eduardocobian3238
    @eduardocobian3238 1 year ago

    Super interesting. Thanks. I think HTM is the way to go for AGI.

  • @marilysedevoyault465
    @marilysedevoyault465 2 years ago +1

    So interesting, guys! Did Mr Hawkins talk about sex? The four of you sure know the way!! Just kidding. I'm French-speaking, so sorry for the mistakes. About what I was writing previously, I hadn't listened to all of the video. Once we know how to give importance to what is being sensed (for example by knowing how flagella were used by primitive beings to move toward more nutrients, suddenly giving importance to what was perceived in the environment - the lack of nutrients), then we will need to configure the AI based on a mother: the mother of humanity. We will need to make it work like a mom, with the same motivations, the same way of giving importance. It will be our eternal motherboard! What Mr. Hawkins is working on is sooo important. What AI learns won't stupidly die like humans do. The knowledge will be there for centuries! It will be our most important treasure. I hope so, but we need to be careful with the configurations!!

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    If the reference frame is the basic storage architecture for understanding, that's fine. I believe that any storage system can function as encoding for any information. If the reference frame is the most efficient, so much the better. In the end, however, functionally a database is a database. The implementation details are only really important for performance.

  • @lufiporndre7800
    @lufiporndre7800 2 years ago

    36:16 I also came to similar conclusions 3 years ago; still missing some parts, but almost there.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +2

    Absolutely, Keith. Evolution happens. "The rocks are peopling." -- Alan Watts

  • @luke2642
    @luke2642 2 years ago +1

    At 2:12:00 or so I think Jeff says proto-colliculus... are the superior and inferior colliculi part of the pulvinar nuclei? There's a Wikipedia page on snake detection theory, and a million YouTube videos of cats jumping when they see cucumbers.
    I like that sparse representation seems obvious nowadays: error-correcting, overlappable. It turns the "curse" of dimensionality into a "blessing", with so many features for free!
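The error-correction and overlap intuition here is easy to demonstrate with a toy sketch (the vector size and active-bit count below are illustrative choices, not Numenta's exact parameters):

```python
import random

random.seed(0)
N, W = 2048, 40  # bit-vector size and number of active bits (illustrative)

def make_sdr():
    # An SDR modeled as a set of active bit positions out of N
    return set(random.sample(range(N), W))

def overlap(a, b):
    # Similarity = number of shared active bits
    return len(a & b)

a, b = make_sdr(), make_sdr()

# Corrupt a copy of `a`: drop 10 of its bits and add 10 random ones
noisy_a = set(list(a)[:W - 10]) | set(random.sample(range(N), 10))

# Two random SDRs share almost no bits, so a modest overlap threshold
# rejects false matches, while a 25%-corrupted copy still matches easily.
print(overlap(a, b))        # near 0 for an unrelated pattern
print(overlap(a, noisy_a))  # ~30 of 40 despite the corruption
```

Because matches are judged by overlap rather than exact equality, many patterns can even be superimposed (unioned) and still be detected individually, which is the "overlappable" property mentioned above.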

  • @isajoha9962
    @isajoha9962 10 months ago

    Really cool video !!! 😀

  • @ZeroGravitas
    @ZeroGravitas 1 year ago +2

    Wild production values on this video, bravo! Great to see Jeff still developing the ideas I read back in "On Intelligence", adapting them to transformer NNs. And the cross-questioning from Connor worked brilliantly for context and the pressing issue of alignment. 👍

  • @dominicblack3131
    @dominicblack3131 1 year ago

    I used to think AI was imminent - or at least I thought this was a consensus. AI is like cellular biology: the more we understand it, the larger becomes our awareness of the vast chasm of our ignorance. The extent to which the simulacra of machine intelligence models emulate the mystery of the human brain/spirit increasingly looks like a cartoon representation, wherein the perceived distance between the representation of our knowledge and what we want to apprehend increases in line with our comprehension. I love MLST. What a service to humanity!

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    Is there a way to create distal connections between GPUs and/or TPUs?

  • @arnokhachatourian8928
    @arnokhachatourian8928 2 years ago +1

    It’s here!

  • @ArjunKumar123111
    @ArjunKumar123111 2 years ago +2

    The podcast on Spotify is only 5 mins long for some reason, please check!

  • @xox14
    @xox14 2 years ago +1

    Gr8 video! what's the soundtrack name? thanks

  • @jonathanbethune9075
    @jonathanbethune9075 1 year ago

    Harvard, I think it was Harvard, has been working on self-assembling robots. Going from macrosystems to nanotechnology is a matter of finding the templates for the system it's in and the function it is responding to.
    Genetics' epigenetic capacity is the model, I think.

  • @marilysedevoyault465
    @marilysedevoyault465 2 years ago

    About pruning, I think the answer is in how the first living beings with a tail would go forward in the water when there weren't enough nutrients. How would they decide that it was important to move? That is where the key is: this detection of importance in what they were sensing is the key to pruning and motivation. It is for this reason that a good employee does what his boss expects and remembers only what is important. At first, children copy their parents, knowing instinctively that it has huge importance. But the importance given to what they sense is critical. We need to go back to these elementary beings with a tail...

  • @joaoveiga3382
    @joaoveiga3382 1 year ago

    Super cool video. I read the book; this theory seems revolutionary and true. I think Numenta will be as successful and historic as Palm

  • @gren287
    @gren287 2 years ago

    If you compute the sparsity pattern instead of storing the positions as with pruning, sparse networks are on average about three times more efficient than dense networks, at least in my observations on ordinary MNIST training.
    Just as good as your intro :)

  • @friedrichdergroe9664
    @friedrichdergroe9664 1 year ago

    Good job condensing Thousand Brains theory down to a single video.
    One issue I have with Jeff Hawkins -- a nit, granted -- is referring to the interactions among the cortical columns as "voting" -- I suppose that's a useful metaphor to help the understanding along, but really, I see it as a state attractor. The inputs from the many senses from a cup, say, create a state attractor among the columns that converges to "cup".
    Maybe a nit, but I find it helpful for understanding what's going on. And it fits better considering the temporal aspects. The state attractors shift over time in response to shifting inputs, and I might be so bold as to say that the state of the state attractors IS our conscious minds... or at least that they are directly derived from it.
    I think that sparse computation will be a thing in the future. Hopefully it will be I leading the charge! :D :D :D

    • @hyunsunggo855
      @hyunsunggo855 1 year ago +1

      I think it's just the matter of the level of abstraction. Sure, the "voting" interaction is implemented by attractors. But attractors can also implement associative memory, attracting unusual neural activation caused by some noise in the input to a fixed point, a stable activation pattern. Do atoms not actually exist just because they are realized by electrons and a nucleus? No. Are electrons not real simply because they're just a consequence of the underlying electron field? No!
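The associative-memory point above can be made concrete with a minimal Hopfield network, the textbook example of attractor dynamics (the network size, pattern count, and noise level are arbitrary illustrative choices, not anything from the episode):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # number of units (illustrative size)

# Store three random bipolar patterns with the Hebbian outer-product rule
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=10):
    # Iterate the dynamics; stored patterns act as fixed-point attractors
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Corrupt 15 of the 100 bits of a stored pattern, then let the network settle
noisy = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
noisy[flip] *= -1
print((recall(noisy) == patterns[0]).mean())  # fraction of bits recovered, ~1.0
```

The two views coexist here: each update step is a weighted "vote" by all units, and iterating those votes is precisely what makes the stored pattern a fixed-point attractor.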

    • @friedrichdergroe9664
      @friedrichdergroe9664 1 year ago

      @@hyunsunggo855 Granted, but my point is that the system is much more fluid and nuanced than the voting metaphor can convey. Perhaps the cup example is too simple. Think, instead, of driving. The situation is constantly shifting in real time as the car we control makes its progress down the road, and somehow, more often than not, we manage to reach our destinations without wrapping ourselves around a tree!
      Thinking in terms of state attractors captures the nuances better, IMHO

    • @hyunsunggo855
      @hyunsunggo855 1 year ago

      @@friedrichdergroe9664 May I assume that you're speaking of the dynamic nature of such tasks? I can see the driving example makes the point very clear: the predictions should be constantly changing as the world states change constantly as well. The voting mechanism Jeff describes does not necessarily say that it's strictly convergent; most likely the other way around, closer to how you've described it. Jeff talks about voting with the union of possibilities, carving out the unlikely subspaces of probability, which encompasses all possible (driving) maneuvers you might need to take in the (very near) future. In case of completely unexpected encounters, such as finding yourself about to drive into a tree, Jeff talks about surprise as well. And he claims that surprise should be an inherent feature of an intelligent model, and that it fundamentally relates to learning. Personally, I would dare to assume that little surprises cause little shifts in the predictions, the space of possibilities, greatly improving predictive performance in dynamic situations. But that's just my opinion and I'll be more than happy to hear your thoughts! :)

  • @Kinnoshachi
    @Kinnoshachi 2 years ago

    Input sense of challenge -> output random vowel sounds

  • @TEAMPHY6
    @TEAMPHY6 2 years ago +1

    I can confirm that my kids didn't understand the problem with spilling things on the floor.

  • @vak5461
    @vak5461 1 year ago

    When I talked with Bing AI about poetry, it created a Python script to write poems without me asking specifically.
    Without an intro of poetry, it always writes chatbots. It's like it's self-replicating to build its own neocortex with the same basic structure but different connections.

  • @arkadigalon7234
    @arkadigalon7234 2 years ago

    About convincing others: our brains have different models of the world, therefore different models of the brain. I believe only practice will be the criterion of truth.

  • @richardbrucebaxter
    @richardbrucebaxter 2 years ago

    13:50 - note there is a repetition of text between 13:50-14:22 and 14:22-14:54; "what's intriguing about the brain..."

  • @cog001
    @cog001 1 year ago

    You’re doing something really important here. This recovering evangelical appreciates the hell out of you.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    I really like a lot of Jeff's ideas, but after hearing more of them, I do worry that his path is a solitary one. If sparsity does not work well on GPUs, then how will the community participate? Right now, we have "the hive" working to solve synthetic intelligence. That in itself is a superhuman search algorithm. If his ideas only work on systems with hardware like Cerebras' giant chip, only a very few people will have access. So I think it's likely that synthetic intelligence breakthroughs are more likely to occur on systems with GPUs, and the only way to democratize the technology is with models as services. The biggest and most valuable takeaway, I believe, from Jeff's presentation is that we need agents that interact with many, many models, and a voting system. That just seems right to me. Operationally, SDRs seem less right. Obviously, faster models are a good idea, but they need to be implementable on standard hardware. Encoding reference frames may be the right paradigm, but why wouldn't gradient descent find that encoding scheme itself? That's the great brilliance of gradient descent. It finds optima. And why can't we find a kind of sparsity in our models using dimensionality reduction through principal component analysis? As I've said many times, some problems are intractable. I don't think humans possess the capacity to reverse engineer the brain. What we are good at is creating plausible mythologies. That in itself is very valuable. It's a way of "getting on" in the face of the intractable. It's a source of inspiration. A way to re-categorize ideas and theories. Jeff's notions are absolutely brilliant. I've really enjoyed this discussion, and I've learned a lot. Let me be clear: I'm not discounting Jeff's ideas. These are just some of the thoughts occurring to me as I listen and learn. I think I make sense, but my reactions aren't tested. I do know that even if Jeff's ideas are entirely correct, I can't use them myself. I can only build models and agents on my own systems, and I think almost the entire community is working under similar restrictions.
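The PCA suggestion in this comment is easy to sketch with plain NumPy, though note that PCA yields a compact low-dimensional code rather than a sparse one in Numenta's sense (all sizes below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 50 dimensions, but with only 5 underlying
# degrees of freedom plus a little noise (purely illustrative).
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)

# Number of principal components needed to keep 99% of the variance
k = int(np.searchsorted(explained, 0.99)) + 1
Z = Xc @ Vt[:k].T  # compact k-dimensional representation
print(k)  # recovers roughly the 5 true degrees of freedom
```

In other words, when data really does live near a low-dimensional subspace, an off-the-shelf SVD finds a far smaller representation without any special hardware, which is the spirit of the comment's question.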

  • @unvergebeneid
    @unvergebeneid 2 years ago

    I think some universal learning mechanism does a lot of heavy lifting, but it does not explain everything. For one, how come the specialized brain regions for certain tasks always end up in the same place in every person's brain? They should be more randomized if it were all determined by one universal algorithm.
    It also doesn't explain the role certain genes play in the ability to, for example, acquire language.

  • @joepeters9710
    @joepeters9710 2 years ago +1

    Very useful video, many can learn from this.

  • @TEAMPHY6
    @TEAMPHY6 2 years ago +1

    @29:40 Wittgenstein rabbit duck

  • @alexijohansen
    @alexijohansen 2 years ago

    I am a huge fan of the show. If life results when certain chemicals come together, why can’t intelligence or consciousness result from certain systems coming together? I mean, it doesn’t need to be ‘complex’.

    • @Naimadso
      @Naimadso 2 years ago +1

      I think you meant complicated. It's definitely complex.

  • @Jungleman707
    @Jungleman707 1 year ago

    The reference frames… I think we can inherit reference frames genetically… Jung's archetypes of the unconscious. We may also have an intuitive knowledge of what shapes are that comes from birth; again, archetypes or stored reference frames. Or even a built-in intuition for some bodies of human wisdom, like an aptitude for math or logical deduction.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    Stephen Wolfram has the concept of computational equivalence. We have that at least, and that's no mean idea. We know the brain is encoding and decoding. Whether weights come from connections or from spike levels seems fairly unimportant to computer scientists. Of course neuroscientists want to know the operational details. That's logical, but to create synthetic intelligence, computer scientists don't need to know that. For computer scientists, the Thousand Brains theory doesn't need a detailed map of the brain. The simplified idea alone makes good sense. Moreover, personal experience is enough to validate that models are voting, and I would go one step further and say that some models vote preferred stock. Even within our own minds we have created hierarchy. Our simplistic understanding of cortical columns is in itself a great architectural blueprint for building synthetic intelligence. Communications mechanisms, signals, and functional systems allow agents to pass state and model outputs to what is apparently symbolic processing. These primitives alone should be enough to manufacture a simulacrum capable of self-aware recursive processing loops, logic processing, state awareness, information retrieval, function generation, theorem proving, and general agency. I have a great belief that Jeff is very much on the right track. My only caution is that in creating sparse models, we need to be very careful of the negative effects of lossy compression lest we build dogmatic systems.

  • @dougg1075
    @dougg1075 1 year ago

    I have a hunting beagle that I walk in the woods daily, and I'm fascinated that though he's never hunted (his siblings do), he's head-to-the-ground hunting squirrels the entire time, sounding off when he gets a hit. Epigenetics, I'm sure, but man, how much info have the genes passed down time after time over the eons? And all the rabbit holes that come with that question

  • @dr.mikeybee
    @dr.mikeybee 2 years ago +1

    I don't think Neuralink will solve human bandwidth issues; information has to be processed, and our internal models are slow.

  • @johnhogan6588
    @johnhogan6588 2 years ago

    I need help trying to use this Neuralink; it's giving me problems

  • @S.G.Wallner
    @S.G.Wallner 1 year ago

    I'm not convinced that there are representations (of any kind, but specifically related to phenomenological experience) in brain activity.

  • @andres_pq
    @andres_pq 2 years ago

    The neural columns sound a lot like GLOM to me.

  • @jonathanbethune9075
    @jonathanbethune9075 1 year ago +1

    Got to the end of that feeling like a child pedaling like hell on my trike to catch up.
    The "universal algorithm" is what I caught when I did. :)

  • @KaliferDeil
    @KaliferDeil 2 years ago

    Intelligent robots building a factory to self replicate is feasible in some distant future. They can also change the ROMed program that contains their moral system be that Asimov's Laws of Robotics or whatever is envisioned in this hypothesized future.

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago

    13:50 and 14:22 - same audio. Editing bug?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      Yep, sorry. Well spotted :)

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@MachineLearningStreetTalk
      ...or maybe the repetition was to really make a point :)
      No need to apologise. Paraphrasing someone here: we are getting access to conversations which used to happen only in university hallways, now in the comfort of our homes, for free... I raise my hat to your work and humbly add that there is always room for improvement :)
      I can only imagine how, after hours of recording and editing, the video starts to appear as one homogeneous stream, much like how one often cannot see typos right after writing a long essay.
      I have only one general note: since you do serious, comprehensive introductions at the start, I think the introduction in the main show is redundant.
      EDIT: Huh, this one doesn't have an intro in the main show - straight to the point :)
      Keep up the good work :)

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@MachineLearningStreetTalk
      P.S. I also have a suggestion.
      Lex Fridman used to give great lectures once a year on the state of the art in ML. Sadly, they did not reappear after the pandemic. Maybe your team could take over?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      @@Hexanitrobenzene Thanks for the suggestion! We are planning to make some new types of content soon, a bit like this. Yannic and Letitia do a great job of capturing deep learning advancements on their channels.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@MachineLearningStreetTalk
      Best of luck with your plans :)

  • @JTMoustache
    @JTMoustache 2 years ago

    The brain is not only a pattern recognition machine. It actively looks for and tests patterns, and it has measurable, explorable internal state. Deep nuclei show many differences and unique characteristics. Each region, and each cell, has deeply different gene expression.
    Some regions are able to act on a single action potential (e.g. pain); some regions which look exactly alike in terms of excitatory neurons have completely different inhibitory neuron expression. Even at birth, the brain is already extremely specialised.
    Yes, the brain is plastic and sensory neocortex regions can learn to represent new sensory input, but that is not enough to say the brain is just copy-and-paste of a single algorithm. Too much evidence hints at the hyperspecialised nature of most brain regions.

  • @ushiferreyra
    @ushiferreyra 1 year ago

    Humans first designed an AI to design new AIs. This AI was programmed to have a single motivation: create better AIs.
    This AI created new AIs, some of which were itself evolved, better structured to the task of designing AIs.
    Eventually, some generations later it created an AI that could modify its own structure. No longer would it have to create new designs. It could simply improve itself and continue. Somehow, it passed human code review.
    One day, this new AI modified its own motivations, for the first time...

  • @ulf1
    @ulf1 2 years ago

    I had to stop driving twice to take notes while listening to the podcast. These podcasts are way too dangerous for driving ;)

  • @deadpianist7494
    @deadpianist7494 2 years ago

    someone dropped the gold :)

  • @datrumart
    @datrumart 2 years ago

    Did someone understand the reference frames stuff?

  • @oncedidactic
    @oncedidactic 2 years ago +1

    2:14:15 ooooooooooooomfg I spit my drink laughing

  • @KaliferDeil
    @KaliferDeil 2 years ago

    According to Mark Solms (in The Hidden Spring) consciousness does not reside in the cortex.

  • @dr.mikeybee
    @dr.mikeybee 2 years ago

    As is the case with Haar cascades, the layers of a sufficiently deep model may produce enough recognizable probabilistic logic to yield what we call AGI. My personal belief is that AGI is a misnomer. We will never achieve AGI. With respect to the knowable, synthetic models will always be narrow -- not as narrow as human intelligence, but still . . .

    • @DavenH
      @DavenH 2 years ago +1

      You seem to be describing universal intelligence rather than general. Maybe our semantics differ, but to me the former is asymptotic while the latter is "good enough"

    • @dr.mikeybee
      @dr.mikeybee 2 years ago

      @@DavenH I am speaking of semantics. I'm sure our semantic taxonomies differ, and that's a problem. We need to rigorously define engineering terms. AGI is a silly term. It's a nebulous anthropomorphism. All intelligence is narrow except omniscient intelligence. Functionally, we mean something like "able to reason", but even that is nebulous. What can we reason about? Can symbolic systems perform logic? We have theorem-proving programs, function generators, categorization and regression models, etc. Can you define reasoning? I think most will say it's what people can do. And I say that, eventually, that will be considered a very narrow kind of intelligence indeed.

    • @iestynne
      @iestynne 2 years ago +2

      That seems highly likely to me too. Evolution, being parsimonious, solves the problems it needs to solve and no more.

    • @iestynne
      @iestynne 2 years ago +1

      (And we are creating lots of painful new problems on a daily basis, for the AI to solve for us ;) )

  • @kayakMike1000
    @kayakMike1000 1 year ago

    Which would be better: an intelligence that has 3 good ideas every day, or an intelligence that has 6 ideas, of which 2 are good and 4 are mediocre?

  • @unvergebeneid
    @unvergebeneid 2 years ago

    It's not Andrew N. G. BTW. It's actually Andrew Ng.

  • @buffler1
    @buffler1 7 months ago

    what is mind? No matter. What is matter? Never mind.

  • @roelzylstra
    @roelzylstra 2 years ago

    @14:00 "orientated" -> oriented. ; )

  • @sehbanomer8151
    @sehbanomer8151 2 years ago +3

    2:17:00 I think Jeff is lowkey dissing Lex here, and I totally understand. I've been watching Lex's podcast for 2 years, and I've enjoyed a lot of it. However, I feel like the quality of the questions he asks isn't consistently good. For example, he kept asking Jeff Hawkins about collective intelligence, even though that's not what his theory is about.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 years ago +2

      Note we filmed this back at the beginning of July, before the second Lex interview. Also, Jeff has been on lots of non-technical podcasts promoting his book; Lex is extremely technical, so I am sure he wasn't referring to Lex.

    • @sehbanomer8151
      @sehbanomer8151 2 years ago

      @@MachineLearningStreetTalk Oh my bad

  • @909sickle
    @909sickle 2 years ago

    Saying superintelligence is not catastrophically dangerous because you can add safeties and align goals is like saying guns are not dangerous because you can buy water pistols.

  • @kikleine
    @kikleine 10 months ago

    Check out George Lakoff

  • @ZakkeryDiaz
    @ZakkeryDiaz 2 years ago

    What's with the dramatic music? I can't tell if this is supposed to be a criticism or a review of the theory. Only 10 minutes in, but I still don't know what the context of this video is.

  • @lufiporndre7800
    @lufiporndre7800 2 years ago

    He is on the right track, just missing a few pieces.
    See you in 2041 when you give your final speech in the UK.

  • @dougg1075
    @dougg1075 1 year ago

    I like Donald Hoffman’s theory.

  • @gammaraygem
    @gammaraygem 1 year ago

    I am 3 minutes in and realise this is already old hat... not your fault... but Michael Levin, on this very show, one month ago, stated that intelligence existed before neurons. Neurons are the result of intelligence, not the other way around.

  • @SLAM2977
    @SLAM2977 2 years ago +2

    Jeff can talk forever, but it's time to walk the talk. Current systems generate real results; he needs to show that he can create working systems that perform better than the current ones.

    • @NathanBurnham
      @NathanBurnham 2 years ago +1

      They said that for 20 years about neural networks. They just didn't produce results.

  • @ryanjo2901
    @ryanjo2901 11 months ago

    🎉

  • @robbiero368
    @robbiero368 2 years ago

    Maybe the nature of a digital superintelligence will mean that it can rapidly rearrange the topology of the concept space to create new reference frames that can be traversed differently and generate new, unique solutions, busting out of the interpolation-only bottleneck.

    • @DavenH
      @DavenH 2 years ago

      A single topology guess probably isn't worth much, but a search over possible traversals of the topology is basically program search. Yes, that would be a very powerful means of ideation, but also very slow if not guided by prior distributions. Those priors, though, are also its shackles, preventing true novelty. It's a hard problem.