Jeff Hawkins NAISys: How the Brain Uses Reference Frames, Why AI Needs to do the Same (re-recording)

  • Published 13 Jun 2024
  • Jeff Hawkins presents a talk on "How the Brain Uses Reference Frames to Model the World, Why AI Needs to do the Same." In this talk, he gives an overview of The Thousand Brains Theory and discusses how machine intelligence can benefit from working on the same principles as the neocortex.
    This talk was first presented at the NAISys conference on November 10, 2020. This video is a re-recording of that presentation since the recordings from NAISys are not released publicly.
    Jeff's presentation slides: www.slideshare.net/numenta/je...
    Subutai's poster "Sparsity in the Neocortex and its Implications for Machine Learning" from NAISys 2020: numenta.com/neuroscience-resea...
    Jeff's upcoming book (Mar 2021) A Thousand Brains: www.amazon.com/Thousand-Brain...
    - - - - -
    Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.
    Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
    tinyurl.com/NumentaNewsDigest
    Subscribe to our Newsletter for the latest Numenta updates:
    tinyurl.com/NumentaNewsletter
    Our Social Media:
    / numenta
    / officialnumenta
    / numenta
    Our Open Source Resources:
    github.com/numenta
    discourse.numenta.org/
    Our Website:
    numenta.com/
  • Science & Technology

COMMENTS • 18

  • @bm5543
    @bm5543 3 years ago +10

    Sir, I really believe you are onto something. What attracted me to HTM was its unification of batch and streaming training in one model, much like our brain. I am not educated enough to understand everything you say, but I will keep my ears open to what you have to say. Thank you for all your hard work. Love from Korea.

  • @sau002
    @sau002 1 year ago

    Very insightful. As others have said, you are on to something.

  • @rb8049
    @rb8049 3 years ago +1

    Always excellent. I hope you can add a chapter to your book on how you go about your model development. If more researchers could do the same, it would speed up progress.

  • @sgrimm7346
    @sgrimm7346 1 year ago

    The term "reference frames" still eludes me. I've yet to see a good example of what he's talking about... I've seen all the videos, even bought the book... but it seems "reference frames" is more of a placeholder until Jeff comes up with a better explanation of what it actually is. With that said, I still believe Jeff is way ahead of any other brain research theories. His book 'On Intelligence' changed my view of computational intelligence forever. I always look forward to more information.

  • @j3i2i2yl7
    @j3i2i2yl7 1 year ago

    The model of memory proposed in the Thousand Brains theory seems consistent with what good teachers and good communicators know: to describe something, they start with a common thing that everyone is familiar with and expand on it by describing the similarities and differences. Analogy is a powerful communication tool.

  • @RyanJamesMcCall
    @RyanJamesMcCall 3 years ago +1

    Thanks for the upload -- I found the audio quality muddy and a bit distracting.

  • @aamir122a
    @aamir122a 3 years ago +2

    Please try to improve audio quality for the next presentation.

  • @martin777xyz
    @martin777xyz 3 years ago

    Mind-blowing and profound 👍👍👍

  • @randomselectionofwords
    @randomselectionofwords 3 years ago +3

    Great presentation. Book pre-ordered. :)

    • @Stan_144
      @Stan_144 3 years ago

      Any review of the book?

    • @randomselectionofwords
      @randomselectionofwords 3 years ago

      @@Stan_144 It's great.

    • @Stan_144
      @Stan_144 3 years ago

      @@randomselectionofwords I read it yesterday.

  • @winkletter
    @winkletter 3 years ago

    The point at 5:30 where he says "the tree, the key, the trick" seems like an example of how you can have prediction errors caused by a union of sparse, distributed memory patterns. The sequence "the ____ to understanding" can be completed by both "trick" and "key" which ends up generating the portmanteau "tree" in his motor outputs. He recognizes the anomaly and substitutes "key," but it still doesn't seem to match the sequence he meant and so he finally corrects with "trick."
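    The ambiguity this commenter describes can be sketched in a few lines. The following is a minimal illustration (my own toy example, not Numenta's code) of why a union of sparse distributed representations (SDRs) cannot distinguish between the patterns it contains: SDRs are modeled here simply as sets of active bit indices, and the sizes are assumed typical HTM values.

    ```python
    import random

    random.seed(42)
    N = 2048  # total bits in the representation
    W = 40    # active bits per SDR (~2% sparsity)

    def make_sdr():
        """Return a random sparse pattern as a set of active bit indices."""
        return set(random.sample(range(N), W))

    key, trick = make_sdr(), make_sdr()

    # When both "key" and "trick" are simultaneously predicted,
    # the system effectively holds the union of their patterns.
    union = key | trick

    # Each contained word matches the union perfectly, so the union
    # alone cannot say which completion was intended.
    print(len(key & union) / W)    # 1.0
    print(len(trick & union) / W)  # 1.0

    # An unrelated SDR overlaps the union only by chance (a tiny fraction),
    # which is why sparse unions still reject non-members reliably.
    other = make_sdr()
    print(len(other & union) / W)  # near zero
    ```

    The same property explains why the anomaly is detectable: the spoken blend matches neither stored pattern well, triggering a correction.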

  • @Stan_144
    @Stan_144 3 years ago

    The brain builds a model of the world, so it must be able to remember various objects. Most of them are objects we observe in our environment, but there are also many abstract objects. The objects need to be connected, and they are also arranged in hierarchies.

  • @rb8049
    @rb8049 3 years ago

    Can Numenta generate functional schematic models of the neuron which capture the measured responses? That is, one could put the diagram into a simulator (circuit simulator, labview, simulink, etc) and reproduce the functional measured effects? A real model is one which replicates what is measured in the real world. Not just lots of words describing what something does. Engineers are great at turning diagrams into functional systems.

    • @NumentaTheory
      @NumentaTheory 3 years ago +4

      Hello! In many of our papers, we create simulations to test the theories. These simulations have helped us understand where our theories fall short and have given us insight into capacities, limits, and other attributes. The source code for our papers is available here: github.com/numenta/htmpapers If you're interested in learning more about our neuron model, you can find it in the paper "Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex"
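      The core idea of the neuron model referenced above can be sketched very compactly. The following is a heavily simplified toy (my own sketch under assumed parameters, not the code from the paper or the htmpapers repository): a cell enters a predictive state when any one of its dendritic segments has at least a threshold number of synapses connected to currently active cells.

      ```python
      THRESHOLD = 3  # active synapses needed to trigger a segment (toy value)

      def is_predictive(segments, active_cells):
          """segments: list of sets of presynaptic cell ids;
          active_cells: set of cell ids active at this timestep.
          A single sufficiently activated segment depolarizes the cell."""
          return any(len(seg & active_cells) >= THRESHOLD for seg in segments)

      # Two dendritic segments, each storing one learned context.
      segments = [{1, 2, 3, 4}, {10, 11, 12}]

      print(is_predictive(segments, {2, 3, 4, 99}))   # True: first segment reaches threshold
      print(is_predictive(segments, {10, 99, 100}))   # False: no segment reaches threshold
      ```

      In the full model each neuron has many such segments, which is what lets a single cell participate in many distinct sequence contexts.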

  • @dsm5d723
    @dsm5d723 3 years ago

    What's up, Jeff? How does the world look different from when I first emailed you? You WERE right about the step function, but it is a Mass Gap in consciousness upgrade, not "intelligence." This is finished already, you guys just need to keep your function. And you are mathematically incorrect about Consciousness in a machine. Dimensionally so. With God's compiler kernel described, a machine with a body would learn physics as I have: NO logical fallacies on notation to wrestle with. And spoken language would be the iD sine wave input. Make a machine functional, and our intelligence would be the controller; no systems resources devoted to moving a body or NLP. These would be recursively reified in the CCA.