There is a martial art taught to Russia's military elite called 'Systema' (aka 'the system'). It's like the perfected version of everything you describe about cognitive architectures. It doesn't teach any techniques; it teaches a system of thinking that focuses on the rate at which someone can create, reset, delete, and adapt their cellular memory mappings to move perfectly regardless of situation or scenario. Quite a few studies were done on master practitioners of Systema. Giuseppe Filotto (himself a master in it) wrote a book about it called "Systema The Russian Martial System: Created by the Soviet Military for Their Special Forces Elite" that I suspect you'd find fascinating in its similarities to some of the cognitive architectures of AI. Overall, imo, it's a cognitive architecture (a system of thinking and acting) that's far superior to every cognitive architecture created or postulated for AI. It has likely remained hidden from the academic world because it's a unique martial art taught to elite military units (and was kept secret for years by the old Soviet government).
@SpellsOfTruth Wow... that sounds very interesting... I assume that some other disciplines have come up with similar cognitive structures, and that some people might have attempted to apply that research (it kind of reminds me of how samurai adapted Zen to the practice of swordsmanship).
Extremely well defined hierarchical relations. Regarding AI safety at 08:58: autonomous robots will need some sort of value system working as feedback loops. If this system is "read-only" and programmed by humans, then intellectual robots will be safe. The ethical values could be stored at the highest level of the hierarchy so that they can block any other layer, I think. Some highly ethical people have this kind of control, which neither deprives them of freedom nor makes them less intelligent (though there is quite a massive sub-culture of humans who believe that doing good is very "stupid").
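The layered-veto idea in that comment can be sketched in code. This is only an illustrative toy, not a real safety design: the rule names, the action format, and the layer functions are all made-up assumptions; the one concrete point it demonstrates is making the value table read-only (here via Python's `MappingProxyType`) and letting the top layer block anything a lower layer proposes.

```python
# Toy sketch of a hierarchical controller with a frozen, top-level
# "ethics" layer that can veto actions from lower layers.
# All names and rules below are illustrative assumptions.
from types import MappingProxyType

# Read-only value table: mutating it raises TypeError at runtime.
ETHICS = MappingProxyType({
    "may_harm_humans": False,
    "must_obey_stop": True,
})

def ethics_veto(action: dict) -> bool:
    """Highest layer: return True to block an action that violates a rule."""
    if action.get("causes_harm") and not ETHICS["may_harm_humans"]:
        return True
    if action.get("ignores_stop") and ETHICS["must_obey_stop"]:
        return True
    return False

def planning_layer() -> dict:
    # Lower layer proposes an action (hard-coded here for the demo).
    return {"name": "push_obstacle_aside", "causes_harm": True}

def act() -> str:
    """Run one control cycle: lower layer proposes, top layer may veto."""
    action = planning_layer()
    if ethics_veto(action):
        return "blocked"
    return action["name"]
```

Of course, this only relocates the problem raised in the reply below: the veto rules and the read-only wrapper are still written by fallible humans.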
"If this system is "read-only" and programmed by humans then intellectual robots will be safe" - unfortunately, (1) humans are prone to errors, so "programmed by humans" is definitely *not* a definition of safety :-) Furthermore, ensuring that the system stays "read-only" is also a human job, so see point (1).
Interesting! Any plans for updates to this series of videos? The research looks really promising!
There is a lot going on nowadays in the areas of artificial general intelligence, robotics, autonomy, explainable AI, etc. I won't have time to record more videos on these topics due to many other commitments, but fortunately there are many other resources available, and these days everyone talks about such topics :)