Very enjoyable; especially at 40 mins when the penny drops and you see the connection he's made. The layman only needs to grasp the basics, and Hawkins conveys that in a comprehensible way whilst talking to his peers.
28:20 Jeff is technically incorrect here, or at least murky. The brain could be predicting _every_ tactile sensation associated with the cup simultaneously (including hot/cold, wet/dry). That is already extremely predictive compared with the set of all possible tactile sensations (rubber, noodles, cork, sushi, paper, salt, sand, wood and spinach are all excluded, leaving a very short list).
My kitchen kettle has a thick steel base with the manufacturer's information embossed into the bottom. This is text with an orientation relative to the rest of the kettle. But I have _no_ idea about the orientation of that text relative to the spout. Replace my kettle with another one where the text is rotated 90° and I wouldn't have the least clue.
My paper coffee filters, which I handle every day, have a short seam on the bottom and also along one diagonal side (conical design). The seams are sealed by a large number of short, parallel grooves embossed into the paper. The net result is that the filter is chiral. But again, if you substituted my filters with the opposite chirality, I highly doubt my sense of touch would detect an anomaly.
On Jeff's mug, surely the fingers don't expect to find the handle when touching the rough undersurface. We certainly have a mental model of the viable proximities of different textural sensations. But this is weaker than jumping straight to an allocentric coordinate system. I don't reject his allocentric hypothesis; it's just that I feel his narrative here is woefully insufficient.
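To make the distinction concrete, here is a toy sketch in Python (my own illustration, nothing from the talk or from Numenta's code; all objects, features and locations are invented). A plain "bag of tactile features" model already rules out almost everything, which is the commenter's point, but only a model that indexes features by object-centric location can separate two objects built from the same features in different arrangements:

```python
# Toy illustration only: everything here (objects, features, locations) is invented.

objects_as_sets = {
    "mug":    {"smooth_glaze", "rough_unglazed_ring", "curved_handle"},
    "kettle": {"smooth_steel", "embossed_text", "plastic_handle"},
}

# Same idea, but each feature is indexed by a coarse object-centric location.
objects_as_maps = {
    "mug":        {"rim": "smooth_glaze", "side": "smooth_glaze", "bottom": "rough_unglazed_ring"},
    "mirror_mug": {"rim": "rough_unglazed_ring", "side": "smooth_glaze", "bottom": "smooth_glaze"},
}

def candidates_from_set(felt_features):
    """Bag-of-features model: any object whose feature set contains everything felt so far."""
    return [name for name, feats in objects_as_sets.items() if felt_features <= feats]

def candidates_from_map(felt):
    """Location model: each felt feature must also sit at the right place on the object."""
    return [name for name, layout in objects_as_maps.items()
            if all(layout.get(loc) == feat for loc, feat in felt.items())]

# The set model already excludes most of the world (no noodles, cork, sushi...),
# which is the commenter's point:
print(candidates_from_set({"smooth_glaze"}))                   # ['mug']
# ...but only the location-indexed model separates two objects built from the
# same features in different arrangements:
print(candidates_from_map({"bottom": "rough_unglazed_ring"}))  # ['mug'], not 'mirror_mug'
```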
> But again, if you substituted my filters with the opposite chirality, I highly doubt my sense of touch would detect an anomaly.
Amazing talk, excited to see the next papers released!
I had to do a lot of prediction because of all the audio skips
Every time I watch this guy, I find myself furiously coding 😂😅 If you ever see this message, Jeff: I found clear, simple rules for the neocortex that would blow your mind.
Been following your work for a couple of years now. Love the incorporation of head-direction and grid cells into the theory. Am profoundly intrigued by the assertion that this prediction model might explain abstract cognition.
You have some important heavy lifting to do there to get us from sequence memory to abstract cognition, so get to work. I anxiously await that paper.
On a less pie-in-the-sky note: is there a copy of this presentation somewhere that does not have the recurrent skips/dropouts? They are rather off-putting.
3:40 for when he finally comes up
I like this guy, his energy is 🦾⚡
31:20 Can we apply it to brains instead of columns? We perceive the world from different locations, but we model the same object (which we are part of).
Very very interesting. Sadly the audio quality is not up to the task.
thank you very much for sharing this.
Interesting how sensations over time work into that. As we stroke an object (a very common movement related to grasping), we have a sequence of features that change over time in a way that we can predict. But where is the sequence residing? Where is the buffer that makes that temporal sequence the equivalent of the parallel inputs from multiple fingertips? Also, have you noticed that a front-to-back stroke is richer in meaning than a side-to-side one, as if that, being a more typical movement, selects for certain feature-sequence sensitivities over others?
Sequential input is essential for learning; once an object is learned, you can recognize it with less information or in novel arrangements.
For example, you can recognize an object in a picture even if it's just flashed in front of you with no time to move your eyes over it. You learned to recognize it as a discrete object by viewing it over time with movement, etc., and that learning allows you to recognize it even without the benefit of viewing it in a sequence.
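One toy way to picture an answer to the "where is the buffer" question under the talk's framing (again, my own sketch, not the actual model): if each sensed feature is tagged with an object-centric location as it arrives, then a temporal sequence of touches and a set of simultaneous touches collapse into the same kind of evidence, and recognition can work from a partial, unordered subset:

```python
# Toy sketch only: the object, locations and features are my own invention.

learned_objects = {}

def learn(name, touch_sequence):
    """Learn by moving over the object: accumulate (location, feature) pairs."""
    learned_objects[name] = set(touch_sequence)

def recognize(evidence):
    """Evidence may arrive all at once (a glance) or one touch at a time, in any order."""
    return [name for name, model in learned_objects.items() if set(evidence) <= model]

# Learning needs the sequence, because we have to move to visit the locations...
learn("cup", [("rim", "smooth"), ("side", "smooth"), ("handle", "curved")])

# ...but recognition afterwards works from a single, partial, unordered "flash".
print(recognize([("handle", "curved"), ("rim", "smooth")]))  # ['cup']
```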
This would be an interesting discussion to have on HTM Forum at discourse.numenta.org/
39:58
How does the brain integrate paths from acceleration and duration of movement?
One object would be a field of possible accelerations, and a (feature at) location in that field would be the acceleration perceived. Do we really do Newtonian mechanics this way?
The question is: do we integrate twice (acceleration to velocity, velocity to location in a room) in one step? That would be a cool machine!
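For what it's worth, "integrating twice in one step" is trivial numerically: a single update loop can carry acceleration into velocity and velocity into position in the same pass. A back-of-envelope sketch with arbitrary numbers (this says nothing about how neurons actually do it):

```python
# Arbitrary numbers; this says nothing about how neurons actually do it.
dt = 0.01                        # seconds per step
a = 0.5                          # constant acceleration, m/s^2
v, x = 0.0, 0.0                  # start at rest at the origin

for _ in range(int(2.0 / dt)):   # simulate 2 seconds
    v += a * dt                  # first integration:  acceleration -> velocity
    x += v * dt                  # second integration: velocity -> position

print(x)                         # ~= 0.5 * a * t**2 = 1.0 m (plus a small Euler error)
```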
I would think that the tracking predictions for an accelerating object in our field of vision would get inputs from neurons at ½at², or whatever that works out to be on the retina, naturally and perceived experientially... and probably inbuilt genetically, since this is something all animals with eyes have to understand - and integrated with head movement, body movement, displacement, etc., naturally and very quickly.
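A rough worked version of the ½at² remark, with made-up numbers (a ball dropped 5 m away, viewed side-on): the retinal, i.e. angular, position grows roughly like arctan(½gt²/d).

```python
import math

g, distance = 9.8, 5.0           # m/s^2 and metres; both numbers are arbitrary
for t in (0.1, 0.2, 0.3, 0.4):
    drop = 0.5 * g * t**2                            # metres fallen so far
    angle = math.degrees(math.atan2(drop, distance))
    print(f"t={t:.1f}s  drop={drop:.2f}m  retinal angle ~ {angle:.1f} deg")
```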
50:14 ResNets, DenseNets, Dual path networks. Right?
40:03
What happens if we go deep into self-perception and lose this focus on objects (which we have learned deeply before)? The uniqueness of the coffee cup's location is still in the background (it is learned and automatic), but this surprising feeling of novelty with each touch is there :]
The "novelty" you're referring to has always been there - it is the slower frequency (more general) touch perception, only containing information on location, spatial orientiation, and pressure - and it is the same "kind" of perception for all objects. However, the same signals from the touch receptors also contain higher frequency modes which are perceived as the object's texture. These higher frequency are complex enough to be passed through the motor cortex without being represented or categorized, but the slower wave touch data always has the same "tone" so the motor cortex easily picks it out and so it becomes part of the feedback loop governing motor output. When the activity of input and output information become loop locked like this in a cortical column, the appropriate incoming sensory information is sequestered by that layer and so never has a chance to enter one's conscious awareness (which consists of all the leftover sensory information which was not recognizable by any cortical structure.) Accessing this subconscious sensory data requires the use of Citta, or pure consciousness, to subtly alter the incoming sensory data within the motor cortex (by way of the apical dentrites in layer 1) which interferes with the locked feedback cycle, permitting the signal to pass through the column unimpeded where it can then become consciously perceived.
Ok for cups, but can we get beyond this to abstract thoughts? We've been waiting a long time for something on this.
Thank you so much for this link. I listened to the interview. Hawkins still carries on about the cup, but he does get into the emergence of higher thoughts. Also, I identified with his preoccupation with neuroscience. I also spend a lot of time following my synapses. One thing he neglects, however, is the role of emotion in generating thoughts. Please let me know if you come across other talks like this.
1. Abstract thoughts are represented exactly the same way as concrete ones. You represent the number "1" the same way you represent a pen.
2. Emotion is a different domain from the object representation he has talked about, and it is not a function of the neocortex.
Relevant: openreview.net/pdf?id=B17JTOe0- (Showing you can get grid cells / place cells with RNNs)
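For anyone curious, the setup in that paper is roughly this (as I understand it; details simplified and sizes made up): feed an RNN velocity inputs and train it to report position, e.g. as place-cell-like targets, and grid-like tuning can emerge in the hidden units. A minimal PyTorch sketch of that kind of model, untrained:

```python
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    def __init__(self, hidden=128, n_place_cells=256):   # placeholder sizes
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_place_cells)

    def forward(self, velocities):        # (batch, time, 2) = vx, vy at each step
        states, _ = self.rnn(velocities)  # hidden states are where grid-like tuning emerges
        return self.readout(states)       # predicted place-cell-like activity

model = PathIntegratorRNN()
velocities = torch.randn(8, 100, 2) * 0.1  # fake trajectories, just to show the shapes
print(model(velocities).shape)             # torch.Size([8, 100, 256])
```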
Jeff says here that the neocortex is 70% of the volume of the brain. But if it is like a napkin, I would expect it to be less. Could someone clarify?
The neocortex is densely folded - imagine a wet, heavy napkin crumpled into a ball around a large wad of gum.
The neocortex includes the "white matter" connecting to it, which is just axons. The actual neuronal somas are within the 4 mm sheet at the surface, but they're all fed by a solid mass of axons coming from the basal ganglia and corpus callosum, like a thick bundle of fibers.
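A back-of-envelope check on the napkin-vs-70% puzzle, using commonly quoted but rough figures (treat them as ballpark, not authoritative): the unfolded gray-matter sheet alone gets you to roughly half the brain's volume, and adding the white matter underneath it gets you into the range Jeff is talking about.

```python
# All figures are rough, commonly quoted ballparks - not authoritative numbers.
sheet_area_cm2 = 2000.0    # unfolded cortical sheet, often quoted ~1800-2500 cm^2
thickness_cm = 0.3         # ~2.5-3 mm of gray matter
gray_cm3 = sheet_area_cm2 * thickness_cm        # ~600 cm^3

white_cm3 = 450.0          # cortical white matter underneath, rough figure
brain_cm3 = 1300.0         # whole brain, rough figure

print(gray_cm3 / brain_cm3)                     # ~0.46: the "napkin" alone
print((gray_cm3 + white_cm3) / brain_cm3)       # ~0.81: sheet plus its wiring
```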
Dr. Curt Connors from the Amazing SpiderMan!
I don't think reverse-engineering the neocortex is doable, at least with what is known now and the state of other technologies that might be necessary to view what is going on. The neocortex is talked about like it is a thing, but is it? Isn't it really a process? Think about how hard it would be to figure out what a computer was doing if you could see the chips and wires but were not able to look inside the chips or measure voltages... and track them over time. Then consider that the brain is maybe millions of times more complicated than a computer, and even if it does have some vague general patterns, every single connection has a personal context inside the person that relates to all the other stuff in their brain... how does that get mapped or understood?
I think you should stick to memes and politics videos if engineering and science frighten you =)
@egor.okhterov
I was reading neuroscience books before you were born, ahole. They do not even know all the different types of brain cells yet. If you had a serious thing to say, you would not need to make the ridiculous assumption that this engineer is afraid of science... but I am sick of drive-by idiots claiming they know so much by insulting their betters.
Our physical body is just a transducer our spirit uses to take input from our physical senses and send output to our body for motion and non-mental body regulation.
Cortical columns are analogous to feathers - you can't understand flight by examining feathers; you must understand aerodynamics.
like #296
o-o
Egyptian af.
But we can do this type of modelling when we are out of our body? So this must really be happening as an operation of the spirit. I think of the physical body as our temporary vehicle to experience this physical reality, so I see the body as a set of transducers to input/output physical reality with spirit. The point is that spirit knows about the 3D world inherently, without the body, plus more. And our memories and personality are definitely attributes of spirit. My best guess is that basic spatial awareness is also an attribute and function of spirit, and that the basic wiring of the body gives orientation information for spirit to process our physical body's position and orientation values, which spirit then navigates with ONLY in regards to the body. Recognising objects and people and places is an attribute of spirit, not the brain at all, because we perform that function and remember it in spirit. When you suddenly "die" and shed this physical body you no longer have a brain and nervous system, but we can still navigate and recognise everything around us while just spirit. We move by just thinking to be elsewhere. We still have the same memories and feelings. So this means that the brain has nothing to do with the learning of objects at all.
To summarise: the brain gives information about our physical sensors for our spirit to process and for our spirit to control the physical body. Our spirit also communicates directly with the spiritual world and accesses the entire library of learnt experiences that way. All the processing of object recognition, ultimate orientation and movement decisions, and all personality and higher-level consciousness is performed by spirit.