GPT-3 and the model presented here have some interesting similarities in the kind of flow state of how things are acted upon (predictions). Also, there seems to be no mention of microtubule qubit states and the physics underpinnings that might be apparent in this work. Perhaps a computational lens could be incorporated into this analysis to get a better perspective. Thanks for the great work!
Very informative, thank you! I am going to start a PhD in neuroscience soon and would love to get in contact with you guys regarding the experimental predictions your theory makes. I could do one of those as my PhD project!
Earlier activation leads to suppression of other activations in a first-winner model, so that makes sense. I love these models because they are essentially digital, and the hardware to implement them could be so much simpler than current ML systems. AND gates and one-shots are simple circuits. Even summers with such characteristics are simple circuits.
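To illustrate the "earlier activation suppresses the rest" point, here is a minimal toy sketch (my own invention, not Numenta code): each unit integrates its input, and the first one to cross threshold wins and inhibits all the others, so an earlier activation shuts down later ones by construction.

```python
# Toy first-winner-take-all model: the first accumulator to cross
# threshold "fires" and inhibits the rest (we just stop and return it).

def first_winner(inputs, threshold=1.0, dt=0.1, max_steps=100):
    """Integrate each input rate; return the index of the first unit
    whose accumulator crosses threshold, or None if none does."""
    acc = [0.0] * len(inputs)
    for _ in range(max_steps):
        for i, rate in enumerate(inputs):
            acc[i] += rate * dt
            if acc[i] >= threshold:
                return i  # winner inhibits all others: only this unit fires
    return None

print(first_winner([0.2, 0.5, 0.3]))  # fastest-charging unit wins -> 1
```

The "AND gate / one-shot" flavor of the comment comes through here: the comparison against threshold is the digital part, and the integrate step is the simple summer.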
Jeff gave a presentation on the coffee cup and AI. I have a suggestion for this presentation: instead of modelling the finger as a pressure point, model the finger as a pressure array. The pressure array has 2.5 dimensions of data, where the .5 dimension allows curvature and flexure of the surface to be determined. The array data allows more robust grasping movements than a single-point sensor. Perhaps the pressure data would be applicable to the sparseness techniques mentioned in the school videos. An interesting addition would be to study how grasping is done by personnel on the space station, as there is no gravity or friction. The personnel would have to go through a rapid learning cycle to function in zero gravity.
It would be nice to see if the system can be modeled with digital inputs and outputs and with one-shots, AND gates, OR gates, current mirrors and summers, integrators, etc. If a small block could be designed, then it could be tiled. Then the question is how to connect all the neurons: a common memory with data transferred along serial buses? Obviously 2D does not support the high connectivity required, but a combination of local and global memories could do it. This block could also be designed purely digitally with counters and digital comparators, but maybe the slightly “analog” approach might be more area- and power-efficient?
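The "purely digital" variant described above (counters plus digital comparators, tiled into a grid) can be sketched in a few lines. All names here are my own; this is a behavioral model of the idea, not any real hardware design.

```python
# Behavioral sketch of a tiled digital neuron block: each block is a
# counter (pulse summer) plus a digital comparator against a threshold.

class NeuronBlock:
    def __init__(self, threshold):
        self.threshold = threshold
        self.counter = 0  # counts incoming pulses, like a digital summer

    def pulse(self, n=1):
        self.counter += n

    def fires(self):
        # digital comparator: output goes high once the count crosses threshold
        return self.counter >= self.threshold

# Tile the block into a small 2x2 grid; connectivity (local buses,
# shared memory) would be layered on top of an array like this.
grid = [[NeuronBlock(threshold=3) for _ in range(2)] for _ in range(2)]
grid[0][0].pulse(3)
print([[b.fires() for b in row] for row in grid])  # [[True, False], [False, False]]
```

The counter-plus-comparator pair is exactly the kind of block that synthesizes cheaply, which is why the area/power question between this and the "slightly analog" version is interesting.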
It seems to me that from an ML perspective this neural model leads to higher energy efficiency, and also to more discrete categorization. It's not clear that backpropagation works so well for learning with such a model; a new learning model is needed to go with this neural model.
I'd really love to see a Numenta researcher's breakdown of Tesla's approach to AI as presented on their AI Day. It would be super interesting to see where on the path to artificial general intelligence you think they are.
I'm guessing they're not very impressed. The human brain is wildly more effective than Tesla's Dojo at probably less than a millionth of the energy consumption. I'm finally getting why Numenta is so obsessed with this research: unlocking even a fraction of the intelligence efficiency of the neocortex could have a much larger impact on the human species than classical computers.
Jeff gave a talk about the coffee cup and AI; I was waiting for the update. I have a suggestion for the approach. The finger is not a pressure-sensor point, but a pressure area with 2.5 dimensions. The .5 dimension is a limited vertical depth where the finger can detect curvature of the surface as well as any flexure in the surface. This allows much simpler control sequences than if a single pressure point is used. The pressure array at the fingers might be applicable to the sparse sensor array approach. An interesting variation would be to observe how people on the space station adapt to grasping objects, as gravity and friction are both absent.
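As a toy illustration of the 2.5D pressure-array idea: with a grid of pressure readings, local differences between neighboring taxels already reveal curvature/flexure that a single-point sensor cannot see. A discrete Laplacian is one simple curvature proxy; this sketch is my own, not anything from the talk.

```python
# Curvature proxy over a 2D pressure grid: the discrete Laplacian is
# zero on a flat contact patch and nonzero where the surface bulges.

def curvature_map(p):
    """Discrete Laplacian of a 2D pressure grid (interior points only)."""
    rows, cols = len(p), len(p[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = (p[i-1][j] + p[i+1][j] + p[i][j-1] + p[i][j+1]
                         - 4 * p[i][j])
    return out

flat  = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # uniform pressure: no curvature
bumpy = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]   # pressure peak: curved contact
print(curvature_map(flat)[1][1], curvature_map(bumpy)[1][1])
```

A grasp controller could threshold this map to detect slipping or a badly curved grip, which is the "much simpler control sequences" point above.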
@saturdaysequalsyouth I guess you're right, Dojo is still using dense NNs. Tesla made some cool improvements (like prediction in vector spaces) and some say their approach is already revolutionary, but continuously learning sparse NNs with all the HTM and 1000 Brains capacity will probably be the real revolution for AI.
This is a very comprehensive and informative update of your work done to date. I wish I could share this with more people.
Thank you so much for this information!
50:12 lol, at every 5th trial the mouse was like "is this an experiment? Are they showing me a different thing this time? Did I see THAT over there?"