To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/.
The first 200 of you will get 20% off Brilliant’s annual premium subscription.
I cheated the system by pretending to be the administrator of a school and got Brilliant completely free, thanks for the offer tho
@@verlax8956 You're a cheater
Hyperthymesia syndrome: how does it happen?
I just want to ask: can neuroscientists now erase traumatic and fear memories? When are they going to start clinical trials on humans? If you have any idea, please answer me 🙏 thank you
vorinostat for fear reduction!
As a Technology and Neuroscience undergraduate I can say your videos are not only scientific work but also one hell of an art piece! Thanks man, greetings from Brazil
Thank you!
agree
Which university are you studying neuroscience at?
Look who I found here
@@BrunoSantos-bg8xz AAAAAAAAAAAAAH NO WAY HAHAHAHA I think every neuro student at UFABC watches Artem
This is very fascinating. I mean, now I know how my brain literally, physically learns things, and it makes sense of some questions I had about common learning advice: "why should you learn using most of your senses", "why do you need to focus and pay attention", "why repetition", "why you should use your prior experience to help you learn", "why do you sometimes forget and then remember other times, or why can't you retrieve a memory whenever you want"
The brain is sensitive, especially to chemical changes... diet and health have the most influence on the physical makeup of the body and brain.
Don't forget to try teaching someone after you've learned something new
I'm studying neuroscience in the context of phase transitions. I sometimes intellectually veer towards AI and general computer science, but the brilliance of your videos rekindles the fire for neuroscience. If only more people with your communication and multimedia skills were involved in neuroscience, we'd be marching on towards something marvelous. Public exposure and interest control the funding both in academia and industry; this kind of content has the power to ignite mass movements of brilliant minds.
Where are you studying? I'm a physicist that wants to move to neuroscience
Do we really want that?
I think not!
Keep these words for your life.
Ciao
As someone studying quantum physics, specifically phase transitions as well, it's interesting to learn what "phase transition" means in other fields
exactly
Do you really believe there is a need and void to fill for this particular type of content? Genuinely curious to know if you really think this and why.
I had no idea that neuron excitability varied with a period of hours! Such an important piece of the puzzle, thanks for this video.
I recently discovered your videos, and being a Neuroscience PhD student myself, I want to thank you. Your work has re-sparked my motivation to read about topics outside my PhD subject, something I had been wanting to do for a long time but never found the energy for in the day-to-day of work. The presentation of the topics is excellent, as is the editing of the videos. Thank you very much for these incredible contributions.
May I ask what you do for work as a Neuroscience PhD? Is it medicine? I've always wondered.
@@john.8805 Hello! Sorry, I didn't see your comment. I work on brain-computer interfaces, which are applications that decode brain signals and use them to send commands to a computer or to estimate cognitive processes and inform other applications about the user's mental state
@@Anatanomerodi I am in AI and am very much on my way to incorporating brain-computer interfaces to create bio-feedback loops. It'd be cool to bounce ideas; do you have an email or something, feel like chatting?
@@eismccc That would be cool! I don't know how to DM here on YouTube and I'd rather not post my email in the comments section tho
This is one of the best video essays I've ever watched on YouTube
Man, this channel is a treasure for someone interested in biology and neuroscience. Thanks a lot for your efforts! ❤❤❤
It actually makes perfect sense that a memory is not stored in just one part of the brain, because memory recall is a recreation of an entire six-sense experience (though in a somewhat faded and less vivid form in most cases). An experience is not limited to any one region of the brain; it activates many regions at the same time.
Second-year uni student here (neuroscience major); I feel like I am watching a spoiler and can't stop myself. This is so interesting, learning about all the progress we have on the neuronal basis of learning and memory. Much much much more interesting than the various theoretical memory models I have to memorize in psychology classes!
I would love to see you take a deep dive into cognitive/behavioral relationships to engram learning. A lot of people struggling with trauma-related memory issues (inc. PTSD) would likely benefit from understanding how their brains physically learned (and could un-learn). In fact, it seems to me many therapists could also do with knowing more about learning and plasticity.
Check out Johannes Gräff's research. He talks about the critical window during which a memory can be updated to decrease aversion or fear, improving therapy for PTSD. And yes, therapists do know about those concepts, but research into how they can be implemented safely needs more data. For example: if the updating of the memory is not done carefully, it might lead to an increase in fear rather than a decrease (because you are recalling the fearful memory without updating it to a positive one)
This is actually the basis of Scientology.
@@bermagot9238 I think that's something you have projected into scientology, rather than it being inherently in the fabric of that framework.
I'm in undergrad, exploring intersections of neuroscience + engineering + psychology, and your channel was/is my first exposure to computational neuroscience. Very cool stuff. Thank you for your videos; they're so well made!
I am always surprised by how beginner friendly your videos are.
wow... so many questions about this one...
1. Is memory encoded in the structure of neuron interconnections, or in the pattern of action potentials buzzing through the web of neurons? Given a network pattern of dendrites, axons, and synapses, is the memory "still there" even when no signals are being passed?
2. How can repetition strengthen memory, when talking about the physical connections between neurons?
3. On gene activation when a memory forms: what is the timescale of this process? Remembering can be pretty fast... can genes be expressed (and make lasting changes) just as fast?
4. How far can we "isolate parts of a memory"? With mouse fear conditioning, how can we be sure that the pain of the shock is linked to the sound stimulus only, instead of the sound stimulus + a given position in the lab + objects, shapes, and colors around the mouse at that time + ambient smell + ... other things that might also be encoded in the engram?
5. If two different mice went through fear conditioning with the exact same setup, would we see a difference in the engrams of each mouse?
6. Let's say we subject a mouse to fear conditioning and observe the engram. We then wait until the mouse forgets that experience (weeks? months?). If we do fear conditioning again on the same mouse, would the same engram be formed?
7. Can the idea of engrams be used to estimate the memory capacity of a brain? We know it can't be infinite, because the brain is a physical substrate.
8. Can we induce the growth of new linking neurons between two engrams chemically/biologically? So instead of the mouse retrieving two memories simultaneously and getting those memories linked, we "link" two memories artificially, even when the two memories had nothing to do with each other before.
9. We know that the brain is not the only component of the central nervous system. Are memories (related to reflexes) encoded in the spinal cord in the same way as they are in the brain?
7 is not guaranteed if mind-body dualism is true. Then the combination of neuron activations acts as an indexing/lookup function. The number of possible combinations of a million neurons is fundamentally on the order of 10^1,000,000, and we have billions if not trillions of neurons. Even 10^80 would be a memory per atom in the universe.
The temporary excitability reminds me of dropout, a technique to improve deep learning by turning off neurons randomly. That improves the robustness of the network. (A minimal sketch of dropout follows this thread.)
Current deep learning is a pale and weak version of biological neurons. We will look back and be amused that we thought this could actually be the right architecture when we have brains all around us and we took almost no inspiration or principle from them.
@@ShpanMan The power of current deep learning certainly does not lie in its architecture but in its scaling ability and ease of use. I doubt more architecturally accurate versions would currently be really useful as they would probably require orders of magnitude more computational resources using currently available technology/hardware.
@@Smonjirez What are you talking about? Your brain runs on a McDonald's Happy Meal. You think current neural networks are more efficient? 🤣
@@ShpanMan Ehm no? I think their current design is more efficient to run on computers.
@@ShpanMan Yes, in specialised tasks artificial neurons are way more efficient.
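To make the dropout analogy at the top of this thread concrete, here is a minimal NumPy sketch of (inverted) dropout. The drop probability and array shapes are illustrative choices, not anything from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: randomly silence a fraction of units during
    training, rescaling survivors so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

h = rng.normal(size=(4, 8))         # a batch of hidden activations
print(dropout(h, p_drop=0.5))       # roughly half the units silenced
print(dropout(h, training=False))   # at test time, nothing is dropped
```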
As a scientist and entrepreneur in the education field, I can only say thank you for this amazing video. Now I have more papers to dive into.
Subscribe
Thanks for the new engram.
Massive respect for the brain guys who do the brain work
Why? To lock you in hell in here? Look what they did. Most people I knew are now empty vessels. One frikking shot and the soul is gone.
watch?v=Z4-VyHOQT-k
Cry your heart out, once you understand what they did.
Do you understand? People are masturbating to be robots, and most already have.
@@v2ike6udik The soul can't be gone, the soul is eternal
@@Andrea-fd2bw A soul disconnected from spirit basically becomes a demon. The soul is "gone".
Hi, really interesting to learn about the waxing and waning of neuron excitability. It makes sense why some things are just easier to process depending on the time of day.
There's one more thing you can add to the reason why only some neurons are selected for an engram: when one neuron fires, it changes the electrical potential of the extracellular space around it, which locally raises the threshold nearby neurons need to reach in order to fire. If there are two neurons equal in excitability and one of them happens to fire first, the second one may not fire because of that heightened threshold. Love watching your videos, very inspiring and well communicated!
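Since this comment and the video both describe engram allocation as a competition decided by momentary excitability, here is a toy winner-take-all sketch of that idea. The oscillation period, noise level, and 10% engram size are illustrative assumptions, not values from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 1000
phase = rng.uniform(0, 2 * np.pi, n_neurons)  # neurons drift out of sync

def excitability(t_hours):
    """Slow oscillatory drift plus fast noise (both assumed shapes)."""
    return np.sin(2 * np.pi * t_hours / 24 + phase) + 0.3 * rng.normal(size=n_neurons)

def allocate_engram(t_hours, fraction=0.10):
    """Winner-take-all: the top fraction of most-excitable neurons win."""
    k = int(fraction * n_neurons)
    return set(np.argsort(excitability(t_hours))[-k:])

e1 = allocate_engram(0.0)
e2 = allocate_engram(2.0)    # soon after: largely the same pool is excitable
e3 = allocate_engram(12.0)   # half a day later: a different pool wins
print("overlap at 2h:", len(e1 & e2), " overlap at 12h:", len(e1 & e3))
```

Run, this shows co-allocation falling off with the time gap between two memories, which is the behavior the video describes.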
Nice job man... one of my top YouTube sources for up-to-date neuroscience without dumbing it down
New Artem Kirsanov vid just dropped, shit's gonna be a banger
It's absolutely mind blowing to realize that our brain is basically a highly evolved computer and storage system, and that ultimately computers are starting to evolve like a biological brain
😂
It's almost like computers operate like our thinking tendencies...
As an NN engineer, I could sense the similarities and realized just how much we copy the functionality of the brain without even knowing it 😂😂 These are some of the tricks we use to train our models to catch patterns in seemingly unrelated piles of data.
This is so interesting. Cheers to you brilliant researchers that figured this stuff out. Thanks for sharing.
I would like to see you talk about one topic: biological neurons are capable of performing XOR operations. Not only is a single neuron capable, but even the dendrites are, while a standard artificial neuron is not. (A toy demonstration follows this thread.) Take a look at the paper:
"Dendritic action potentials and computation in human layer 2/3 cortical neurons"
Hi! I actually already have a video on this very topic :)
ua-cam.com/video/hmtQPrH-gC4/v-deo.html
@@ArtemKirsanov So when can neuroscientists erase our fear and painful memories?
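Picking up the XOR point from the top of this thread: here is a tiny self-contained demonstration that a single linear-threshold unit cannot fit XOR (it is not linearly separable), while adding one dendrite-like intermediate nonlinearity solves it. The weights and the coarse grid search are illustrative.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

def unit(x, w, b):
    """A single linear-threshold 'artificial neuron'."""
    return (x @ w + b > 0).astype(int)

# Coarse grid search over weights: no single linear unit reproduces XOR,
# because no line separates {(0,1),(1,0)} from {(0,0),(1,1)}.
grid = np.linspace(-2, 2, 9)
found = any(np.array_equal(unit(X, np.array([w1, w2]), b), y)
            for w1 in grid for w2 in grid for b in grid)
print("single linear unit fits XOR:", found)  # False

# One intermediate nonlinear stage (loosely dendrite-like) makes it easy:
h = np.stack([unit(X, np.array([1, 1]), -0.5),    # x1 OR x2
              unit(X, np.array([1, 1]), -1.5)],   # x1 AND x2
             axis=1)
print("OR-but-not-AND output:", unit(h, np.array([1, -2]), -0.5))  # [0 1 1 0]
```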
What an achievement this video is, thanks for taking the time to create this.
You're one of my favorite educational/scientific youtubers!! Your work inspires me to make better videos in my own language, as well as to understand my field more comprehensively as a PhD student here in Brazil! Do you create your own animations, or do you have a team that does it?
Wow, thank you so much! I do everything myself :)
What kind of editing program do you use?
Literally this channel is a treasure and this video is just a masterpiece ❤
Never end this series please!
I absolutely adore this. I have asked myself this very question. And the way this is answered is done beautifully. Thank you so much sir!
I was really missing your videos. Thanks for uploading
❤
Thank you! Yeah, sorry about that. I was quite busy with finishing my degree and moving countries
If the brain codes parts of a memory in different areas, this might explain why some sounds and smells can bring you back to something like a childhood memory. If different areas are responsible for different portions of a memory, then a small triggering of one of those stimuli might cause a cascade of associated brain regions to activate in response.
Such an amazing video on such an interesting field, thank you for this! I've recently studied a module on engrams and one paper I found really interesting - claiming to have satisfied the engram mimicry criterion - was Vetere et al. (2019) - "Memory formation in the absence of experience". I found this to be the most groundbreaking stuff so far, and the only evidence so far to suggest that mimicry may be possible. I'd love to know your thoughts!
I'd also love to see a video on the clinically translatable parts of engrams - and the utilisation of the tag and manipulate/erase tools as treatments for OCD and addiction. I also thought this area had some really cool research, and seeing it in video format with your animations and explanations would be really useful!
Thank you! I’m happy to know you enjoyed it :)
Hmm, I haven’t encountered this particular paper. Thanks for pointing it out! I’ll take a look
Thanks. Not sure what AI designers might do with this information. I think adding the dimension of time and power-law activation patterns might boost the capabilities of neural nets.
Artem, your videos are the biggest help to me in my quest to create a digital consciousness.
I'm curious how you plan to implement it. Are you trying to engineer some kind of neural network which is structured and functionally organized similarly to the brain?
lol good luck
@@bitterlemonboy indeed lol
Idk lol I just think if we can create something really really close to how our brain works within a computer, we can understand how we work on a deeper level.
Thankfully I've got until I die to figure it out.
You will not succeed with that in digital computers.
This whole video brings to mind the nature of trauma, how it is ingrained, and ultimately how it can be untangled.
As a biotech professional, at work I have to design experiments with this kind of train of thought, and I see it as part of the routine. This video totally reawakens the passion and awe that led me to follow this career. Thank you for posting!!
This is a ton of help for me. I am trying to figure out what we know about how the brain works and come up with as many principles as possible that can be converted into artificial neural networks. It's incredible how this graph of nodes and edges can do so much.
Finally a video on this channel that I could follow the entire time
Probably the best neuroscience youtuber
Do we currently know how brains "check for overlapping" in separate engrams? Also, is it possible for completely unrelated memory clusters to randomly have similar engrams/engram positions, causing them to be intrinsically linked, and, if so, how often/how likely is this to occur?
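On the "random overlap" half of this question, a back-of-envelope sketch: if two unrelated engrams were independent random subsets of the same region, their expected overlap is easy to compute. The region and engram sizes below are made-up round numbers, purely for illustration.

```python
n_region = 100_000   # neurons in the region (assumed round number)
k_engram = 10_000    # neurons per engram, ~10% sparsity (assumed)

# For two independent random subsets of size k each, drawn from n,
# the expected number of shared neurons is k * k / n (hypergeometric mean).
expected = k_engram * k_engram / n_region
print(f"expected chance overlap: {expected:.0f} neurons "
      f"= {expected / k_engram:.0%} of each engram")

# Chance that one particular neuron lands in both engrams:
print(f"P(given neuron in both) = {(k_engram / n_region) ** 2:.4f}")
```

So at these assumed densities, some chance overlap is expected even for unrelated memories; the question of whether that overlap is enough to link them behaviorally is the empirical part.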
I am baffled by how simple you're making this sound. I've always been curious how brains work, and binging your videos has totally made it make sense.
So good! Honestly my favourite channel on YouTube and the only one I check regularly to see if I've missed any videos. Just keeps getting better!
Optogenetics really is a field living up to the hype. Incredible tech.
It would also be interesting to see whether manually setting the engram comes with some cost.
Artem, great job. Your presentation is off the charts. I've been doing modeling research on engrams for a couple of years now, but your video was still super informative for me. Thanks!
Absolutely fabulous video, as always. Maximally interesting content with maximally intuitive animations. Unmatched!
This video is gold. Clean animations and calm voice. It deserves many more views
I can’t wrap my head around memory. Wild stuff.
We know so much yet so little about the brain. This is a very exciting topic to follow, thanks for the video!
Thank you for this video. I don't usually write comments, but I have to say that you really did an incredible job of pedagogy here. Usually I need to watch your videos several times to understand everything, and this one was so clear that once was enough.
Thank you!
Great video! Thanks for making it, Artem!
This (the linking memory part) is the best explanation I've heard about the brain's principle of contiguity
Okay, folks. Here's the first comment. I've done it!
(Edit):
Most of the information in the video is familiar to me. But the visualization works great, updating and complementing my knowledge. It's a real piece of art in the popularization genre. Or even like a Disney film for scientists ;)
Nice work champ
@@cheapshotfishing9239 Always alert 🫡
Amygdala: emotion
Hippocampus: measurement
Cortex: sensation
But I first need to reminisce to appreciate each one, so I have the thalamus and hypothalamus left over. Which one do I choose?
Hypothalamus is your hormone control center that governs your endocrine system
Thank you, it was a piece of art
The Brain is such an amazingly interesting organ 🧠❤
And you do a great job at explaining concepts regarding the brain, thank you! 🔥👍
Yea, it's the most amazing. But then again, look who is telling you that. Might be some bias 😂
@@nateshrager512 Well, it might be 😅
But it's always cool to listen to someone who is passionate about his topic 👍
What I took from this: the brain stores info in multiple sparsely populated graph-like structures, which on co-allocation or co-retrieval are connected by adding some nodes.
Also, the neurons of an experience are spread well apart in the brain, maybe so that in an eventual co-retrieval some neurons can be left over to facilitate connections. And since sparse graphs and planar graphs are easier to traverse, maybe some processes also handle a form of garbage collection aimed at those neurons.
I wonder what causes issues like difficulty forming or recalling memories, or why some things are more easily learned. If you find something interesting, it seems to make you more likely to remember it?
Great video! There are also fast-degrading GFP variants to improve the temporal correspondence between GFP signal and gene expression.
@ArtemKirsanov Your videos are amazing. Congratulations! How do you make your animations?
I am speechless, amazed by the content, the presentation, and the insight... Somewhere in my brain, a school of engrams has been recruited for this awesome YouTube channel ❤
Thank you for making these really great videos about a field I would otherwise never be able to learn about. (I have a very strong aversion to anything gory, or needles, or pictures (or thoughts of pictures) of organs and similar.)
Brilliant video, very informative, inspiring, and entertaining! Greetings from a neuroscientist who loves your channel!
Thank you! I’m glad you enjoyed this :)
Holy fuck, what an amazingly high quality video and explanation. And entirely without useless stock footage but instead graphics that actually enhance what's said. This deserves a lot more followers!
You're awesome man great video, I'm in AI and this is right in my wheelhouse. Looking forward to more great content like this!
19:44 Does this mean that trying to learn some big topic at the *same time* every day is more effective than at *random times* every day?
Brilliant video, very comprehensible and straight to the point, and minimalistic enough to keep my attention. Definitely worth a sub!
Truly an amazing video. From the content, explanation, and visuals. Keep it up!
Thanks a lot, this is so useful for understanding!!
The way I'm interpreting this information is that doing things like listening to music or an off-topic audiobook while studying is not optimal.
Your brain is trying to overlap memories that don't share a sinusoidal property. So rather than the earlier examples, it's better to study two related topics, with a build-up and cool-down of intensity.
After some time, it's actually optimal to take a break and let that sine wave return to baseline encoding intensity.
Then, after the break, build back slowly into learning rather than just diving in. For example, work through a simple math problem or think of a good way to put a logical circuit together.
I will try this route.
Last night I was literally googling what memories are physically and like how neurons work, I really would love to learn more about this stuff
This is a top quality production and the information in the field of neuroscience is well explained. Liked. Subbed.
Hi ^^ I wanted to say that I really like the amount of information per slide ^^ it's clean, neat and visible, easy to follow, and therefore perfect for learning!
Keep it up :)
Fascinating. I love learning about how the brain works.
At 7:45, could we just induce a coma so it can form new memories? In any case, the tag approach is awesome.
Impressive content! Thanks!
This has got to be the coolest video on memories! Thank you.
I wonder if gathering the results only through fear responses is a practical way of describing something as multifaceted as memory.
I have difficulty remaining focused on each specific new thought you present and the direction you chose to adequately cover your message. Too frequently I needed to pause the video and reflect, and then I seem to be taken in another direction when I get back to the video. Your visuals and text open too many avenues for my limited thought processes to remain on track. It reminds me of trying to follow a city map while visiting a foreign country: getting from point A to point B eliminates exploring all the interesting sights that the side streets may have. Your visuals are superb, your text is inspiring, but your voice inflection is somewhat unfamiliar. Thank you, Artem, for all the considerable work that this video displays.
As a Cognitive Psychology student, your channel has been super helpful to expand my understanding, props to you ❤
23:32 I have a question regarding the size of engrams. Isn't the set of neurons in an engram for a specific memory fixed in size? But it seems that the co-retrieval of 2 distinct engrams increases the size of both engrams. Or is it that the new linking neurons only contain the linking information, so they don't count toward the original size of the 2 engrams?
Amazing point, thank you! I also had this very question while I was creating the video, but I'm afraid I don't have a great answer.
The source paper for this finding ( pubmed.ncbi.nlm.nih.gov/28126819/ ) just reports an "increased overlap" but doesn't compare overall sizes (or I just missed it).
My intuition is that the "reorganization" would mean that some non-overlapping neurons become excluded from the engram to keep the density constant, while increasing the overlap. But your interpretation with "linking information" is equally plausible 🤔
If you find the answer, please let me know!
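To make the "reorganization" intuition in the reply above concrete, here is a toy sketch: two fixed-size engrams whose overlap grows because one recruits neurons from the other while dropping some of its private members, keeping density constant. All sizes are illustrative, and the swap rule is just one way the reorganization could work.

```python
import random

random.seed(2)
neurons = list(range(1000))
A = set(random.sample(neurons, 100))
B = set(random.sample(neurons, 100))
print("overlap before:", len(A & B))

def increase_overlap(A, B, n_swaps):
    """Grow |A & B| while holding |A| and |B| fixed (constant density)."""
    A, B = set(A), set(B)
    for _ in range(n_swaps):
        only_a, only_b = list(A - B), list(B - A)
        if not only_a or not only_b:
            break
        recruit = random.choice(only_a)  # a neuron from A joins B...
        dropped = random.choice(only_b)  # ...displacing a private B neuron
        B.add(recruit)
        B.discard(dropped)
    return A, B

A2, B2 = increase_overlap(A, B, n_swaps=30)
print("overlap after:", len(A2 & B2))
print("sizes unchanged:", len(A2) == 100 and len(B2) == 100)
```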
There is so much that is amazing in this video, but what struck me most was the mouse trembling in anticipation of the shock
(I'll draw one like that somewhere - we'll see if it helps with memory retrieval)
Artem, great work behind this video. Thanks for breaking down complex information and making it more accessible. I'm looking forward to bumping into you at some Neuro meeting in the US!
Amazing video! I have never seen such a comprehensive explanation of memory mechanisms. Any suggestions on how to do a PhD in this specific area? Which authors/institutions should I look for?
Easily the hardest thing I've forced myself to comprehend, even as simple as you made it.
Thanks for your effort to share neuroscience knowledge. Greetings from South Korea!
Best channel on YouTube ❤
We miss you, Artem ❤️
Wishing you inspiration, and good luck in your search, in your work, and in life!
The brain makes machine learning look like a child's toy
And yet. There are so many parallels that pop up in modern ML to concepts in neuroscience. In most cases it's "convergent evolution" -- something that "just worked" for the ML groups -- rather than something copied from nature. Different things are hard / easy for biological / artificial neural networks, but the essence seems to be in the process of being captured.
@@Gorulabro
Distant parallels. For the most part, modern neural network architectures are not really based on how the brain works, and the few that are (such as spiking neural networks) are still relatively distant approximations of how our brains produce the effects we see in reality. The truth is that we are simply far from even coming close to simulating something like this.
@@diadetediotedio6918 My point is exactly that. We don't have to mimic nature to develop similar functionality. Latent representations, sparsified encoding, sequence positional encoding in transformer architectures: all of those are high-level concepts discussed on this channel that have counterparts in modern ML. Not one-to-one, because that would be as wasteful as trying to build planes with flapping wings instead of propellers.
@@Gorulabro
I don't disagree that we don't need to copy nature 1-to-1 to get similar "functionality". But you need to be careful with your definition of "functionality". If we take functionality to mean the set of qualitative experiences that imply a certain general behavior in the system, then there is no brain-like functionality in artificial neural networks: ANNs lack the qualitative representation of the world that several capacities would require, and that kind of functionality cannot simply be simulated by a computer. On the other hand, we can make excellent mimics of "functionality" in the external sense, something that merely reproduces a desired external behavior, as ChatGPT does when producing texts that appear "intelligent" and aware.

There are reasons beyond that why we don't make planes with flapping wings, and some birds actually glide most of the time, using their wings mainly to lift themselves; but nobody says planes are simulations of birds, nor that we function like birds. The general similarity of a bird and an airplane is the same as that of a bird and a firearm projectile or a ballistic missile: all are "flying" in some sense. But it doesn't follow that having this "functionality" lets us translate the knowledge back into what goes on in birds, as many people try to do when they say that AIs somehow have inner workings close to human consciousness. It takes a lot of care to do these analyses; in those terms, I don't disagree that these are efficient means of approaching the intelligent external behavior that we seek to automate.
Machine learning is about interpolation on a dataset; it can only learn statistically.
Statistical learning is the lowest form of intelligence and is very different from interaction and survival in a real-world environment.
The best state-of-the-art ML model is much stupider than the simplest of bacteria.
If we were to implement all these behaviors as agents that behave and act through time, with each neuron being a deep neural net of its own, I think it would be possible to replicate a digital artificial human brain. It seems like we have a few of the puzzle pieces here and there already, and as the research goes on and more findings get implemented in code, it is definitely doable. The disappointing part of NNs right now is that they don't get trained the way the brain is: everything is fed into the model and activates a large share of the nodes, unlike what we see here, where only a few highly active neurons fire. We can use dropout and similar tricks, so maybe a dropout-like signal could be transmitted to neighboring agents, making the neighboring DNN-agents less active; and maybe agents could be spawned or despawned to make the whole thing dynamic, so that if two mappings fire at once, new agents spawn in and link them together, etc. There are more questions ahead, like how you would train them and how everything would work out mathematically, but it would surely be interesting research to do.
It's a tragedy that backpropagation works as well as it does. Most of ML is stuck on this obvious local maximum instead of taking more inspiration from the brain and fixing efficiency, lifelong learning, and scalability.
As Roger Penrose said, consciousness is computationally impossible and neurons are extraordinarily difficult to model. Since we depend on data and energy to keep a simulated consciousness alive, we might hit the limit of a handcrafted automaton (maybe when quantum computers and negative energy are domesticated in the near future, we might achieve something as rare as a pure human).
you might be interested in the recently conceived "forward-forward" learning mechanism, which is much more neuromorphic and has local parameter updates (a stripped-down sketch follows this thread). doi: 10.48550/arXiv.2212.13345
@@snk-js Penrose is not a neuroscientist
@@snk-js As @user-sq7zm3qt5e said, Penrose is not a specialist in this field. There are good arguments both for and against his claims.
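For anyone curious about the forward-forward mechanism linked above, here is a stripped-down sketch of its core idea: each layer gets a purely local update that raises its "goodness" (sum of squared activations) on positive data and lowers it on negative data, with no backward pass between layers. Layer sizes, learning rate, threshold, and the stand-in data are all illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize(h):
    """Length-normalize so later layers can't just read off the goodness."""
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.05, theta=2.0):
        self.W = rng.normal(0, 0.1, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU

    def local_update(self, x, positive):
        """Local logistic loss on goodness vs. threshold theta; no backprop."""
        h = self.forward(x)
        g = (h ** 2).sum(axis=1, keepdims=True)      # per-example goodness
        p = 1.0 / (1.0 + np.exp(-(g - self.theta)))  # P(example is 'real')
        err = (1.0 - p) if positive else -p          # push g up or down
        self.W += self.lr * x.T @ (err * 2 * h) / len(x)
        return h

layers = [FFLayer(16, 32), FFLayer(32, 32)]
pos = rng.normal(0.5, 1.0, (64, 16))    # stand-in 'positive' (real) data
neg = rng.normal(-0.5, 1.0, (64, 16))   # stand-in 'negative' data

for _ in range(100):
    xp, xn = pos, neg
    for layer in layers:
        xp = normalize(layer.local_update(xp, positive=True))
        xn = normalize(layer.local_update(xn, positive=False))

def total_goodness(x):
    g = 0.0
    for layer in layers:
        h = layer.forward(x)
        g += float((h ** 2).sum(axis=1).mean())
        x = normalize(h)
    return g

print("goodness(pos):", round(total_goodness(pos), 2))  # driven up
print("goodness(neg):", round(total_goodness(neg), 2))  # driven down
```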
Amazing content, thank you!
It would be interesting to know how much data an engram needs in terms of bytes. And how much memory is available in theory to an average person?
No bytes at all.
A byte is a series of eight bits (ones and zeros).
Neurons don't function with ones and zeros. Neurons are not digital, they're analogue. Synapses are analogue. This gives them a much greater capacity than a transistor in a processor, which is only used as a switch and as such only processes two values (ones and zeros).
Today's neural networks merely simulate neurons and synapses, digitally. They're a far cry from the real thing.
Neuromorphic processors that are analogue are being developed by several companies. These emulate rather than simulate neurons and synapses. Very promising technology, which offers such advantages as more computing power for less energy and an inherent ability to continuously learn rather than requiring a resource-hungry training process. Unfortunately, though, the size and density of the components are currently nowhere near a match for the latest GPUs, such as those produced by Nvidia. This is probably why Nvidia is showing no interest in developing its own yet - it's doing a great job with GPUs.
@@antonystringfellow5152ANALOG IS SWINGING BACK BABY!!!!!!!
@@antonystringfellow5152 Bits are a unit of measurement for information/uncertainty, not just some detail of how computers work. You can quantify the amount of information needed to describe any physical system as being some number of bits.
@TheRyulord Bits are by definition binary; you cannot encode analog data unless you are fine with losing raw information and then creating an interpreter to guess what the actual raw information was. Take any analog wave and transform it into a digital wave.
@@AkiraKurai You don't lose any information. Look up "Bekenstein bound". All physical systems, including analog electronics, can be losslessly described by a string of bits.
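A tiny sketch of the point being made in this thread: a bit is a unit of information, and even an "analog" quantity carries only finitely many bits once you account for noise, since noise limits how many levels are actually distinguishable. The voltage range and noise floor below are assumptions for illustration, not measured biological values.

```python
import math

v_range = 100e-3   # an analog signal spanning ~100 mV (assumed)
v_noise = 0.5e-3   # ~0.5 mV noise floor (assumed)

levels = v_range / v_noise   # distinguishable amplitude levels
bits = math.log2(levels)     # information per independent reading
print(f"{levels:.0f} levels ≈ {bits:.1f} bits per measurement")
```

So "analog" does not mean infinite capacity: on these assumptions, one noisy analog reading carries on the order of 8 bits.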
I'm not sure if you will see this comment. I am a Chinese computational neuroscience student, and I'm really inspired by your series of videos explaining neuroscience. I was wondering if you'd be cool with me translating your videos and sharing them on a Chinese video platform with Chinese subtitles?
Really good: gradual pedagogy, emphasis on clarity. I'll check out the rest of your channel.
Smarter-faster.
This is the REAL NEWS I subscribed for!
The video is a brilliant work; the structure of the material is perfectly designed for understanding it to the fullest. Thank you! It inspires me even more to get a master's in CS!
Thank you for this video!
thank you so much for this video! it offers so much invaluable information that is easily broken down with analogies and detailed visuals. keep up the great work, i always learn something so interesting with each one of your videos!
Hi there, I had a question. Why aren't memories overwritten / replaced during co-allocation?
It seems to me that when stimulus 2 occurs (within 6 hrs of stimulus 1), the memory associated with stimulus 1 should be replaced by the memory associated with stimulus 2 (since the same neurons, whose excitability is highest, outcompete the neighboring neurons for storing that memory in the engram). Or is it possible for a single engram to host multiple memories?
From my understanding of artificial neural networks, neural nets are really good at reusing connections and neurons to store different information; they are basically awesome at compressing all sorts of data, because each neuron is tuned so it can be used in different pathways and play a different role in each one. So I think, in organisms, new similar memories are just new surrounding neurons participating while all the previously involved neurons adjust to accommodate the most important memories + the new (possibly similar) memories. I think this is why we start to forget old stuff when we learn new stuff, but can revive that memory easily with a little relearning. It's just neurons trying to optimally compress information. Just a theory tho, I might be wrong.
Btw, look at 20:42, you can see 2 new neurons involved due to the second stimulus
Also, there is this thing I heard in machine learning circles: when learning new material, neural networks forget old material to a high degree, unlike humans. So the idea was to train on new + old examples together, so everything stays in the dataset. This was inspired by the idea that maybe during sleep, brains rehearse old and new material together to avoid forgetting the old (a toy sketch of this rehearsal idea follows this thread).
@@YouTuberboi596 Thank you 🙏
To add:
Neurons are simply a lot more complex than an ANN node, and can do a lot of things with their inputs besides just adding them (like XOR and integration over time).
This allows them to be reused even more efficiently
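To illustrate the rehearsal idea from this thread, here is a toy experiment: a linear model trained on task A, then on task B alone (it overwrites A, i.e. catastrophic forgetting), versus trained on B with A's data replayed (it keeps a compromise that still serves A). The tasks, model, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_task(w_true, n=200):
    X = rng.normal(size=(n, 2))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

def sgd(w, X, y, lr=0.05, epochs=300):
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(X)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

Xa, ya = make_task(np.array([1.0, 1.0]))    # task A
Xb, yb = make_task(np.array([1.0, -1.0]))   # task B (partly conflicting)

w_a = sgd(np.zeros(2), Xa, ya)                    # learn A first
w_seq = sgd(w_a, Xb, yb)                          # then B alone: A overwritten
w_rep = sgd(w_a, np.vstack([Xa, Xb]),             # B with A's data replayed
            np.concatenate([ya, yb]))

print("error on A after B alone:  ", round(mse(w_seq, Xa, ya), 2))  # high
print("error on A with A replayed:", round(mse(w_rep, Xa, ya), 2))  # much lower
```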
This is such a fantastic video! 😄 Thank you so much for your effort in presenting these topics so beautifully.❤
Another top tier video.
I'm curious how/if these mechanisms are influenced by, or vary in, brains with PTSD, addiction, or other maladaptive tendencies (e.g., the relationship between PTSD and engram formation and linking). Would we see larger engrams with more overlapping neurons? Less optimized neuronal selection and encoding?
Thank you for the amazing content, as always. You’ve left me with much to think about and research!
Great video as always 🎉