The Thousand Brains Theory

  • Published 19 May 2024
  • The Thousand Brains Theory: A Framework for Understanding the Neocortex and Building Intelligent Machines
    Recent advances in reverse engineering the neocortex reveal that it is a highly distributed sensory-motor modeling system. Each cortical column learns complete models of observed objects through movement and sensation. The columns use long-range connections to vote on which objects are currently being observed. In this talk we introduce the key elements of this theory and describe how they can be incorporated into current machine learning techniques to improve their capabilities, robustness, and power requirements.
    See more at www.microsoft.com/en-us/resea...
  • Science & Technology

COMMENTS • 100

  • @cafeliu5401
    @cafeliu5401 4 years ago +98

    Followed him here after the Lex interview :P

  • @eduardocobian3238
    @eduardocobian3238 1 year ago +1

    AI and robotics are not separable. Genius Jeff.

  • @simonhughes1284
    @simonhughes1284 4 years ago +8

    This is a very interesting idea. I feel there are a lot of follow-up questions:
    1. How is information integrated across the different columns? Is voting sufficient for this? What is the purpose of the voting: is it to decide which motor outputs to take? It blows my mind to think that we basically consist of a lot of small computers, all coordinating a set of motor outputs, but it does certainly seem both feasible and a somewhat novel idea.
    2. How does memory work? The hippocampus is critical for the formation of long-term memory, and it is not part of the neocortex. Memory is critical for our sense of self, and thus you could argue for consciousness (although I think people place too much importance on the idea of consciousness in cognition, but I digress).

    • @NumentaTheory
      @NumentaTheory 4 years ago +6

      See our papers at numenta.com/papers for answers to these questions

  • @Marcos10PT
    @Marcos10PT 3 years ago +13

    I've learned a pattern in Microsoft research videos: they don't like to show the slides on screen for some reason 😁

    • @ramblinevilmushroom
      @ramblinevilmushroom 2 years ago

      They want you to go to their website; that's where the slides are.

    • @saturdaysequalsyouth
      @saturdaysequalsyouth 2 years ago

      @@ramblinevilmushroom I thought he was referring to the fact that Hawkins is using a Mac.

  • @TomAtkinson
    @TomAtkinson 4 years ago +1

    1:00:41 my understanding is that the centrally terminating synapses are excitatory, and the dendritic synapses inhibitory. Really interesting to hear the way you described this part! Your words were that they prime the cell to fire, which makes a lot of sense. Calcium bridges at the synapse are then formed in the rare case that a dendritic action potential leads to a main axonal firing.

  • @matthieuthiboust8869
    @matthieuthiboust8869 5 years ago +10

    Fascinating and very promising biologically-oriented approach to overcoming current deep learning limits. Our neocortex is an invaluable source of inspiration for building AI models with strong noise robustness and continuous learning by design.
    I'm eager to see the next steps involving "voting" between cortical columns and cortical-subcortical interactions (thalamus, basal ganglia, hippocampus, ...).
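
A minimal sketch of the "voting" idea raised above (illustrative Python only, not Numenta's implementation; the object names are made up): each column keeps a set of candidate objects consistent with its own sensory evidence, and long-range "votes" amount to converging on the intersection of those sets.

    # Toy illustration of cross-column voting (not Numenta's code).
    # Each column, from its own sensor patch, keeps the set of objects
    # still consistent with what it has sensed so far.
    column_candidates = [
        {"coffee_cup", "soda_can", "pen"},   # column 1's hypotheses
        {"coffee_cup", "soda_can"},          # column 2's hypotheses
        {"coffee_cup", "stapler"},           # column 3's hypotheses
    ]

    # "Voting" via long-range connections: keep only the objects that
    # every column still considers possible.
    consensus = set.intersection(*column_candidates)
    print(consensus)  # {'coffee_cup'}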

  • @jeff_holmes
    @jeff_holmes 2 years ago +1

    Am currently reading (listening to) Hawkins' book, "A Thousand Brains". Fascinating stuff. Am surprised that this video doesn't have more views.

  • @MegaNexus77
    @MegaNexus77 3 years ago +1

    Very, very cool theory! I tested the HTM technology using the Etaler framework, which uses OpenCL to speed up the simulation.

  • @w1ndache
    @w1ndache 4 years ago +19

    Crazy how the YouTube algorithm recommended this after I watched the Lex interview

  • @armandooliveira3712
    @armandooliveira3712 4 years ago +3

    This is the most important AI research to date...

  • @AngelOne11
    @AngelOne11 3 years ago

    I think this is definitely helpful for OO programming and AI in the long term

  • @rickharold69
    @rickharold69 5 years ago +3

    Super awesome!!

  • @poprockssuck87
    @poprockssuck87 4 years ago

    I posted this earlier on the Lex Fridman podcast with Jeff Hawkins:
    How much of the brain is redundancy? I ask this because Hawkins talks about robustness, and there is the apparent benefit that ANNs may not really need it. Even if ANNs can never be as dense or efficient as brains, they may be able to compensate because of this.
    It appears that the "neurons" in current ANNs are just some vague representation of neuronal connectivity and/or action potentials. Furthermore, current ANNs seem to be just complicated versions of a SINGLE neuron, or several neurons in series, not any real representation of a brain or even a neuronal cluster.
    Firstly, ANNs need to be able to construct themselves relative to the complexity of the problem at hand. They should be able to create layers and nodes as new data is introduced and new patterns are discovered (a sketch of this follows below).
    Also, layers need to be more dynamic, as does connectivity among nodes. In recent years, too much effort has been put into making ANNs deeper, which is like stringing neurons in series. Yes, this allows for something approaching "memory", but it neglects a more natural form of pattern recognition in which weights exist across connections rather than in the nodes themselves. As Hawkins mentions, sparse connectivity is the goal if we are going to try to mimic the brain, and this can only be done if layers aren't treated as fixed blocks.
    Currently, there are only heuristics for how many layers and nodes should be involved, and this can't be right. Being able to construct an ANN relative to the problem at hand is another potential advantage: you could have ANNs with a size and complexity accurately proportional to the question, so scaling or uniting these into something approaching AGI would take fewer resources.
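
A minimal sketch of the "grow the network as new data arrives" idea from the comment above (illustrative Python with arbitrary sizes; real constructive algorithms such as cascade-correlation differ in detail): new hidden units are appended with zero-weight outputs, so existing behavior is preserved while capacity is added.

    import numpy as np

    rng = np.random.default_rng(0)

    class GrowingNet:
        """Toy one-hidden-layer net that can add hidden units on demand."""

        def __init__(self, n_in, n_hidden, n_out):
            self.W1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
            self.W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_out))

        def forward(self, x):
            h = np.maximum(0.0, x @ self.W1)  # ReLU hidden layer
            return h @ self.W2

        def grow(self, extra):
            # Append `extra` hidden units: random incoming weights,
            # zero outgoing weights, so outputs are unchanged until
            # training adjusts the new connections.
            n_in, n_out = self.W1.shape[0], self.W2.shape[1]
            self.W1 = np.hstack([self.W1, rng.normal(0.0, 0.1, size=(n_in, extra))])
            self.W2 = np.vstack([self.W2, np.zeros((extra, n_out))])

    net = GrowingNet(n_in=4, n_hidden=8, n_out=2)
    x = rng.normal(size=(1, 4))
    before = net.forward(x)
    net.grow(extra=4)           # e.g. triggered when training loss plateaus
    after = net.forward(x)      # identical output: new units start silent
    print(np.allclose(before, after), net.W1.shape)  # True (4, 12)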

  • @nothinhappened
    @nothinhappened 3 years ago +4

    How many found their way here after seeing his interview on Lex Fridman's Artificial Intelligence podcast?

  • @dakrontu
    @dakrontu 4 years ago +1

    It occurs to me that towards the end of puberty there is a massive culling of neuronal connections that leads from the adolescent brain to the adult brain.
    That culling makes things more sparse, and given the emphasis on the importance of sparseness in this video, I wonder if it plays a similar role in improving the brain overall.
    Also, does any of this shed light on sleep? Is sleep just random free cycling within the brain, or does it serve some purpose that can now be defined more precisely than the usual assumption of reinforcing wanted connections from the preceding wake period and culling the unwanted ones?

    • @egor.okhterov
      @egor.okhterov 2 years ago

      As far as I remember, sleep is needed to transfer new information from short-term memory to long-term memory. This information is also processed and repackaged along the way.

  • @DeepLearningTV
    @DeepLearningTV 4 years ago

    "Grid cells in the neocortex" - a core hypothesis behind this theory - should be relatively easy to verify, right? I am assuming that the different types of neurons in the different parts of the brain are known. If that's the case, is this a new discovery, and if so, why was it missed until now? I realize I sound skeptical, but I am really trying to understand.
    Thanks Jeff and Subutai for your work :-)

    • @vast634
      @vast634 3 years ago +1

      The neurons and their morphology are certainly known. But how they process information, and for what purpose within the context of the whole brain, still seems only vaguely understood.

  • @Pencilbrush1
    @Pencilbrush1 4 years ago +4

    Finally, the next stage... It's happening... I've been following the development of Jeff and Numenta since On Intelligence. 15 years, can't believe it!

    • @diy-bunny
      @diy-bunny 4 years ago

      Yeah, 15 years... Time flies, and Jeff is old now.

  • @swanknightscapt113
    @swanknightscapt113 4 years ago +5

    Please show the presentation while the speaker is explaining its content. I don't need to see the speaker's face while he is looking and pointing at the slides, in case you don't already know.

    • @OttoFazzl
      @OttoFazzl 4 years ago +1

      Don't thank me: numenta.com/resources/videos/thousand-brains-theory-of-intelligence-microsoft/

  • @badhumanus
    @badhumanus 4 years ago +1

    Jeff Hawkins is one of the most brilliant minds in neuroscience and AGI research. He's light years ahead of the deep learning crowd. He's mistaken about one thing, though, which is the notion that the brain creates a complex model of the world. There are neither enough neurons nor enough energy in the brain to maintain such a model. If we had a model of the world in memory, we would have no trouble navigating familiar places with our eyes closed. The fact is, we need to keep our eyes open. There's no need for a complex model: the world is its own model, and we can perceive it directly. We do retain a relatively small number of high-level bits of things we have experienced, but only if they are recalled repeatedly. Low-level details are either forgotten within seconds or overwritten by new experiences.
    Unlike deep neural nets, the brain can instantly perceive any complex object or pattern even if it has never seen it before, i.e., without a prior representation in memory. Perception without representations is the most pressing problem in neuroscience, in my opinion. Good luck in your research.

    • @roseburgpowell
      @roseburgpowell 4 years ago

      In your non-representational model: 1) Do you agree that each column performs the same function for one or more sensorimotor units (habituated/habituating networks) of possible movement/behavior, and 2) what function do the grid cells at the bottom of each column perform?

  • @BradCaldwellAuburn
    @BradCaldwellAuburn 3 years ago

    As everybody has already said, please stay on the pertinent slide while he's still talking about it. Also, Jeff is so easy to understand, and I love that he talks fast, because it makes it fun and you know exactly what he is trying to convey. Subutai, however, I wish would slow down enough to make the connection between what Jeff was talking about and whatever this sparse vector stuff he is talking about is. I know what sparse means, but not in this new context, so please define that term. Also, I know what a vector generally is (force plus direction), but he must mean something else, so please define your new meaning of vector. Also, I have a gist of what "binary" means (something to do with 2), but please clarify what your meaning of binary is in this context. For example, at 1:02:35, Subutai says, "Each learning event is like creating a sub-sample consisting of a very sparse vector." What the what?? Not knowing Subutai's specialized meanings, my brain gets an image of lots of forces and directions, and all of a sudden, when learning happens, the brain groups together a set (implies multiple objects) of one very rare force-and-direction vector (that's one thing, not a set). If he would just define the fancy words first, I could understand.

    • @egor.okhterov
      @egor.okhterov 2 years ago

      A vector in mathematics is not force plus direction.
      A vector is an ordered list of numbers.
      For example, (1, 2) is a vector.
      (2, 1) is also a vector.
      Vector (1, 2) is different from vector (2, 1).
      You can represent many things with vectors. For example, the vector (1, 2, 7) could represent the location of my head.
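
To tie this to the talk's terminology (an illustrative sketch, not Numenta's code; the sizes are made up): a binary vector contains only 0s and 1s, and it is sparse when very few entries are 1. The overlap between two such vectors is simply the count of shared 1-positions, which is what makes recognition with them noise-tolerant.

    import numpy as np

    rng = np.random.default_rng(42)
    n, active = 2048, 40            # 40 of 2048 bits on: ~2% sparsity

    def random_sparse_binary():
        # A sparse binary vector: mostly zeros, a few ones.
        v = np.zeros(n, dtype=np.uint8)
        v[rng.choice(n, size=active, replace=False)] = 1
        return v

    a, b = random_sparse_binary(), random_sparse_binary()
    print(int(a.sum()))             # 40 active bits
    print(int(a @ b))               # overlap: random vectors share almost none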

  • @dakrontu
    @dakrontu 4 years ago +1

    If a cortical column has most of its connections internal, with far fewer coming from outside, that has advantageous implications for building electronic equivalents.

    • @dakrontu
      @dakrontu 4 years ago

      Andrea Cvečić Eh???

  • @akompsupport
    @akompsupport 4 years ago +1

    Does the second speaker have a GitHub repo? Thanks in advance.

    • @Kaikuns
      @Kaikuns 4 years ago +2

      You could take a look at Numenta's repo, which includes all their research work and the source code for the papers he mentions: github.com/numenta/

  • @afterthesmash
    @afterthesmash 4 years ago +4

    26:00 His quick answer to this question should be part of his standard presentation order. A bit mind-blowing that Jeff thinks this particular detail can be left for another time.

  • @MatthiasKlees
    @MatthiasKlees 4 years ago +1

    Brain jogging from one of the greatest geniuses of our times

  • @fredxu9826
    @fredxu9826 2 years ago

    Great talk. Also, the name "Subutai" is so savage

  • @jasonsebring3983
    @jasonsebring3983 4 years ago +3

    Maybe consensus is part of the secret sauce behind being aware or "conscious". This is something that wouldn't just be invented by people without some inspiration from biology, so understanding more about our brain cannot hurt.

  • @MenGrowingTOWin
    @MenGrowingTOWin 4 years ago +15

    It may be that AI systems will be nothing like the human brain in the same way that jet planes are nothing like birds.

    • @erwinmoreno23
      @erwinmoreno23 4 years ago +10

      But they are bound by the same rules to a certain extent. So it's good to understand the principles by which they operate

    • @MenGrowingTOWin
      @MenGrowingTOWin 4 years ago +4

      @@erwinmoreno23 Of course it's helpful, you are right.

    • @michelechaussabel732
      @michelechaussabel732 4 years ago +2

      To me, comparing the brain to a computer is like comparing life on Earth to possible life on another planet. No reason to think it's anywhere near the same.

    • @TheReferrer72
      @TheReferrer72 4 years ago +1

      @@michelechaussabel732 Alien life will still be an entropy-increasing mechanism; the human brain and computers are computational substrates.

    • @michelechaussabel732
      @michelechaussabel732 4 years ago +1

      I’m afraid you lost me with the substrates, but thank you for the reply

  • @kp2718
    @kp2718 4 years ago +2

    4:20 Then is it 75% of the human brain by volume (as on the slide) or by area? Maybe 75% of the brain's outside area, if you ask me.

    • @Jamie-my7lb
      @Jamie-my7lb 4 years ago +1

      Konrad Yeah, he meant volume.

  • @alexiscao8749
    @alexiscao8749 1 year ago

    Why can't the camera follow the content of the presentation?

  • @Theodorus5
    @Theodorus5 2 years ago

    Love Hawkins :)

  • @johnappleseed7601
    @johnappleseed7601 4 years ago +2

    Amazing content, but the production is an embarrassment. For such a spatial topic, why isn't the camera focused on the presentation? Microsoft Research, you flunked this video; redo it with sight and sound included, thank you.

    • @dakrontu
      @dakrontu 4 years ago

      Yes, and the audio level could have been made more consistent too.

  • @richardnunziata3221
    @richardnunziata3221 5 years ago +2

    How about some working code that people can play with, expand, and learn from, say for a game engine, a simulator, or classification/segmentation...

    • @AleksandarKamburov
      @AleksandarKamburov 5 years ago +1

      For code and other information, go to numenta.org. There is a list of shortcuts to Community Resources on the right.

    • @sebastiangonzalezaseretto7885
      @sebastiangonzalezaseretto7885 5 years ago +2

      You can try NuPIC
      github.com/numenta/nupic
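
For orientation, a rough sketch of what driving NuPIC's spatial pooler looks like. This is hedged: it is based on the NuPIC 1.x (Python 2 era) API from memory, and parameter names or import paths may differ by version, so check the repo's docs and examples rather than treating this as authoritative.

    # Rough sketch, assuming the NuPIC 1.x API (Python 2); verify against the repo.
    import numpy
    from nupic.algorithms.spatial_pooler import SpatialPooler

    sp = SpatialPooler(
        inputDimensions=(100,),          # size of the binary input vector
        columnDimensions=(256,),         # number of pooler columns
        numActiveColumnsPerInhArea=10,   # enforce a sparse output (~4%)
        globalInhibition=True,
    )

    input_vector = numpy.random.randint(0, 2, 100).astype(numpy.uint32)
    active = numpy.zeros(256, dtype=numpy.uint32)
    sp.compute(input_vector, True, active)   # learn=True updates synapses
    print(active.nonzero()[0])               # indices of the winning columns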

  • @manfredadams3252
    @manfredadams3252 4 years ago +20

    Would have been nice to see more of the presentation and less of the presenter. Micro$oft must have had a middle-school intern as their media guy that day.

  • @smyk1975
    @smyk1975 5 years ago +1

    The first part of the presentation is like Hinton's Capsule Networks, except for the lack of a mathematical model and rigorous thinking.

    • @arunavaghatak8614
      @arunavaghatak8614 5 years ago +1

      That is because both this model and Capsule Networks were inspired by the human brain.
      It lacks the mathematical modelling because our brains don't do the math that Capsule Networks use either.

    • @hegerwalter
      @hegerwalter 5 years ago +1

      Hinton's Capsules talk: ua-cam.com/video/6S1_WqE55UQ/v-deo.html

  • @scottunique
    @scottunique 4 years ago

    Someone get this man access to supercomputing, please.

    • @OrthodoxDAO
      @OrthodoxDAO 4 years ago +1

      You know he is not exactly poor, antisocial, or marginalized, don't you?

    • @scottunique
      @scottunique 4 years ago

      I was actually slightly sarcastically highlighting the fact that he came there to beg for supercomputing. At least that's what I saw.

    • @OrthodoxDAO
      @OrthodoxDAO 4 years ago +1

      @@scottunique I have not watched it, but my comment was straight-faced. He can probably get the "academic packages" from the big cloud providers anytime he wants, if he is not already a recipient. And his circles are not exactly broke; even the clowns at Invacio managed to buy some xyzFLOPS of compute. The latter was an ICO, of course, but if he really gets desperate...

    • @vast634
      @vast634 3 years ago +2

      Large computing resources only make sense once the algorithms are nailed down. They are still in the process of researching the proper architecture. It's a misconception that just throwing more resources at the computation will magically give the AI true intelligence. Currently, that works only on specific problem sets in deep learning (classifying things based on huge datasets). But they argue that a change in the architecture of neural nets is necessary.

  • @ericmanuel3201
    @ericmanuel3201 4 years ago

    Thinking people can know what will happen in the future is foolish! Time and time again, everything has its own time! And this is all vanity!!! All things must pass! So let us not be foolish!! Thank you

    • @dakrontu
      @dakrontu 4 years ago +1

      What they were talking about was the brain predicting the future on a statistical basis, not foretelling it.
      Let's say you are driving a car. As your experience grows, you learn to anticipate dangers such as people stepping in front of the vehicle or other vehicles going through red lights.
      This anticipatory capability has great survival value for any species, for example while being chased by predators. Humans are not chased much by predators these days, but they do plenty of other things requiring anticipation of likely events.

  • @nickeaton1255
    @nickeaton1255 4 years ago +2

    Thank god for 0.75 playback speed for the first guy lol

    • @monkyyy0
      @monkyyy0 4 years ago +1

      Thank god for 2x playback for everything

    • @OttoFazzl
      @OttoFazzl 4 years ago +1

      Usually, I watch lectures on 1.5x - 2x, but not for Jeff Hawkins.

    • @vast634
      @vast634 3 years ago

      @@monkyyy0 For an information-dense presentation such as this, 2x speed would be way too fast, unless you already know the topic or don't need to catch all the details.

  • @koningsbruggen
    @koningsbruggen 4 years ago +1

    Is it just me, or is this the same theory as Kurzweil's Pattern Recognition Theory of Mind?

  • @naptastic
    @naptastic 3 years ago

    Ok, yes, they're both very handsome men, but can you please show the slides for more than five seconds each? It would be much easier to understand the graph they're talking about if it were visible while they were talking about it.

  • @i-never-look-at-replies-lol
    @i-never-look-at-replies-lol 3 months ago

    Shit. I only have 999 brains. No wonder why I'm so dumb.

  • @aaronturner6760
    @aaronturner6760 3 years ago

    Sales pitch to Microsoft

  • @Turbo_Tastic
    @Turbo_Tastic 5 years ago +2

    Great video, but he's missing something huge... there are radio-frequency connections too... nothing physical, just parts of the brain's neurons communicating remotely with each other wirelessly. A team of researchers studying the brain has discovered a brand new and previously unidentified form of "wireless" neural communication that self-propagates across brain tissue and is capable of leaping from neurons in one part of the brain to another, even if the connection between them has been severed.
    The discovery by biomedical engineering researchers at Case Western Reserve University in Cleveland, Ohio could prove key to understanding the activity surrounding neural communication, as well as specific processes and disorders in the central nervous system.

    • @AkashSwamyBazinga
      @AkashSwamyBazinga 4 years ago +7

      Can you please post some papers or articles about this discovery?

    • @Addoagrucu
      @Addoagrucu 3 years ago +1

      This is misinformation. The "wireless" communication is very local and only happens when the minute electrical current carried by the dendrites creates a minute magnetic field around the direction of the current, very minimally affecting the action potentials of nearby neurons. This process is not implicated in any major functions of the brain, and consequently, all the claims you've stated are bogus.

  • @somnathsikdar6657
    @somnathsikdar6657 3 years ago

    Poor camera work.

  • @peterrandall9381
    @peterrandall9381 4 years ago

    The vibe in this room is killer. Maybe because the speakers are a little nervous, but it makes me uncomfortable.

  • @rgibbs421
    @rgibbs421 3 years ago

    and um

  • @deric18roshan18
    @deric18roshan18 4 years ago

    Somebody tell Jeff to slow down while talking.

    • @afterthesmash
      @afterthesmash 4 years ago

      Eventually I had to slow my playback from 1.5x to 1.25x, because he really talks fast.

  • @kennethgarcia25
    @kennethgarcia25 3 years ago

    Am I the only one who finds his rapid, high-pitched voice really annoying, as he claims to have "discovered" work that many others were primarily responsible for?